Merge branch 'main' into upload_dev
medihack committed Dec 20, 2024
2 parents f119d70 + 5b17121 commit f77f744
Showing 41 changed files with 1,807 additions and 1,492 deletions.
2 changes: 1 addition & 1 deletion .devcontainer/Dockerfile
@@ -1,4 +1,4 @@
FROM mcr.microsoft.com/devcontainers/python:3.12
FROM mcr.microsoft.com/devcontainers/python:3.13

# Install dependency to make bash completion work with invoke
RUN apt-get update && \
17 changes: 11 additions & 6 deletions .github/renovate.json
@@ -1,7 +1,12 @@
{
"$schema": "https://docs.renovatebot.com/renovate-schema.json",
"extends": ["config:recommended", "group:allNonMajor"],
"lockFileMaintenance": { "enabled": true },
"extends": ["config:recommended", "group:allNonMajor", "schedule:weekly"],
"lockFileMaintenance": {
"enabled": true,
"automerge": true,
"schedule": ["before 5am on Monday"]
},
"automerge": true,
"packageRules": [
{
"managers": ["docker-compose"],
@@ -15,13 +20,13 @@
},
{
"matchManagers": ["poetry"],
"matchPackageNames": ["dicognito"],
"allowedVersions": "<=0.17"
"matchPackageNames": ["pydicom"],
"allowedVersions": "<=2"
},
{
"matchManagers": ["poetry"],
"matchPackageNames": ["pydicom"],
"allowedVersions": "<=2"
"matchPackageNames": ["dicognito"],
"allowedVersions": "<=0.17"
}
]
}
10 changes: 7 additions & 3 deletions .gitignore
@@ -145,13 +145,17 @@ dump.rdb
# SQLite databases
*.db

#PyCharm
# PyCharm
.idea/

#virtualenv
# virtualenv
bin/
share/
pyvenv.cfg

# Certificate files
*.pem

# ADIT specific
.dicoms
/backups/*
!/backups/.gitkeep
8 changes: 5 additions & 3 deletions Dockerfile
@@ -31,11 +31,13 @@ ENV PYTHONUNBUFFERED=1 \
# prepend poetry and venv to path
ENV PATH="$POETRY_HOME/bin:$VENV_PATH/bin:$PATH"

# deps for db management commands and transferring to an archive
# make sure to match the postgres version to the service in the compose file
RUN apt-get update \
&& apt-get install --no-install-recommends -y postgresql-common \
&& /usr/share/postgresql-common/pgdg/apt.postgresql.org.sh -y \
&& apt-get install --no-install-recommends -y \
# deps for db management commands
postgresql-client \
# deps for transferring to an archive
postgresql-client-17 \
p7zip-full


56 changes: 38 additions & 18 deletions README.md
@@ -4,6 +4,32 @@

ADIT (Automated DICOM Transfer) is a Swiss army knife for exchanging DICOM data between various systems via a convenient web frontend.

**Developed at**

<table>
<tr>
<td align="center"><a href="https://ccibonn.ai/"><img src="https://github.com/user-attachments/assets/adb95263-bc24-424b-b201-c68492965ebe" width="220" alt="CCI Bonn"/><br />CCIBonn.ai</a></td>
</tr>
</table>

**in Partnership with**

<table>
<tr>
<td align="center"><a href="https://www.ukbonn.de/"><img src="https://github.com/user-attachments/assets/97a47dc2-5e9d-4903-ad4c-e79206dfb073" height="120" width="auto" alt="UK Bonn"/><br />Universitätsklinikum Bonn</a></td>
<td align="center"><a href="https://www.thoraxklinik-heidelberg.de/"><img src="https://github.com/user-attachments/assets/1485b4c8-0749-4a5e-9574-759a3d819d1e" height="120" width="auto" alt="Thoraxklinik HD"/><br />Thoraxklinik Heidelberg</a></td>
</tr>
<tr>
<td align="center"><a href="https://www.klinikum.uni-heidelberg.de/kliniken-institute/kliniken/diagnostische-und-interventionelle-radiologie/klinik-fuer-diagnostische-und-interventionelle-radiologie/"><img src="https://github.com/user-attachments/assets/6d7c402c-aeed-45db-a9dd-aad232128ef6" height="120" width="auto" alt="UK HD"/><br />Universitätsklinikum Heidelberg</a></td>
</tr>
</table>

> [!IMPORTANT]
> ADIT is currently in early beta stage. While we are actively building and refining its features, users should anticipate ongoing updates and potential breaking changes as the platform evolves. We appreciate your understanding and welcome feedback to help us shape the future of ADIT.

## Features

- Transfer DICOM data between DICOM-compatible servers
@@ -26,18 +52,6 @@ ADIT (Automated DICOM Transfer) is a Swiss army knife to exchange DICOM data bet

[ADIT Client](https://github.com/openradx/adit-client) is a Python library to query, retrieve, and upload DICOM images programmatically from a Python script. It can thereby interact with DICOM servers (e.g. PACS) connected to an ADIT server.
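Since ADIT also speaks DICOMweb, the kind of study query the client performs can be sketched as a plain QIDO-RS request. This is an illustrative sketch only: the base URL, endpoint path, and token header below are assumptions, not the documented adit-client API.

```python
def build_qido_studies_query(base_url: str, token: str, patient_id: str):
    """Build the URL, headers, and params for a QIDO-RS study search.

    The `/studies` path and the `Authorization: Token ...` scheme are
    assumptions for illustration; consult the adit-client docs for the
    real API surface.
    """
    url = f"{base_url.rstrip('/')}/studies"
    headers = {
        "Authorization": f"Token {token}",   # assumed auth scheme
        "Accept": "application/dicom+json",  # standard QIDO-RS media type
    }
    params = {"PatientID": patient_id}
    return url, headers, params

url, headers, params = build_qido_studies_query(
    "https://adit.example.org/dicomweb", "secret-token", "12345"
)
print(url)  # https://adit.example.org/dicomweb/studies
```

The resulting tuple could then be passed to any HTTP client, e.g. `requests.get(url, headers=headers, params=params)`.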

## Screenshots

![Screenshot1](https://github.com/openradx/adit/assets/120626/f03d6af0-510f-4324-95f4-10bf8522fce2)

![Screenshot2](https://github.com/openradx/adit/assets/120626/2b322dd9-0ce3-4e8f-9ca3-a10e00842c62)

![Screenshot3](https://user-images.githubusercontent.com/120626/155511254-95adbed7-ef2e-44bd-aa3b-6e055be527a5.png)

![Screenshot4](https://user-images.githubusercontent.com/120626/155511300-4dafe29f-748f-4d69-81af-89afe63197a0.png)

![Screenshot5](https://user-images.githubusercontent.com/120626/155511342-e64cd37d-4e92-4a9a-bbb0-4e88ea136d3c.png)

## Architectural overview

The backend of ADIT is built with the Django web framework, and data is stored in a PostgreSQL database. DICOM transfers use [pynetdicom](https://pydicom.github.io/pynetdicom/stable/) from the [pydicom](https://pydicom.github.io/) project.
@@ -48,16 +62,22 @@ When the DICOM data to transfer needs to be modified (e.g. pseudonymized) it is

Downloading data from a DICOM server can be done using a DIMSE operation or DICOMweb REST calls. When using DIMSE operations, C-GET is prioritized over C-MOVE, as a worker can fetch the DICOM data directly from the server. When downloading data with a C-MOVE operation, ADIT commands the source DICOM server to send the data to a C-STORE SCP server of ADIT running in a separate container (`Receiver`), which receives the DICOM data and sends it back to the worker over a TCP socket connection (`FileTransmitter`).
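The C-GET-over-C-MOVE preference described above boils down to a simple selection rule; here is a minimal sketch of that decision (illustrative only, not ADIT's actual selection code):

```python
def choose_retrieve_method(supported: set[str]) -> str:
    """Pick the DIMSE retrieve operation, preferring C-GET over C-MOVE.

    With C-GET the worker receives the images over the same association,
    while C-MOVE requires routing through the separate Receiver container
    (a C-STORE SCP) and the FileTransmitter socket connection.
    """
    for method in ("C-GET", "C-MOVE"):
        if method in supported:
            return method
    raise ValueError("Server supports neither C-GET nor C-MOVE")

print(choose_retrieve_method({"C-FIND", "C-MOVE"}))  # C-MOVE
```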

## Contributors
## Screenshots

![Screenshot1](https://github.com/openradx/adit/assets/120626/f03d6af0-510f-4324-95f4-10bf8522fce2)

![Screenshot2](https://github.com/openradx/adit/assets/120626/2b322dd9-0ce3-4e8f-9ca3-a10e00842c62)

[![medihack](https://github.com/medihack.png?size=50)](https://github.com/medihack)
[![mdebic](https://github.com/mdebic.png?size=50)](https://github.com/mdebic)
[![hummerichsander](https://github.com/hummerichsander.png?size=50)](https://github.com/hummerichsander)
![Screenshot3](https://user-images.githubusercontent.com/120626/155511254-95adbed7-ef2e-44bd-aa3b-6e055be527a5.png)

![Screenshot4](https://user-images.githubusercontent.com/120626/155511300-4dafe29f-748f-4d69-81af-89afe63197a0.png)

![Screenshot5](https://user-images.githubusercontent.com/120626/155511342-e64cd37d-4e92-4a9a-bbb0-4e88ea136d3c.png)

## Disclaimer

ADIT is not a certified medical product. So use at your own risk.
ADIT is intended for research purposes only and is not a certified medical device. It should not be used for clinical diagnostics, treatment, or any medical applications. Use this software at your own risk. The developers and contributors are not liable for any outcomes resulting from its use.

## License

- AGPL 3.0 or later
AGPL 3.0 or later
1 change: 0 additions & 1 deletion TODO.md
@@ -3,7 +3,6 @@
## Top

- Delete VSCode stuff inside containers (I think it's only when using the container itself as a devcontainer)
- When populate_data then also reset_orthancs
- Study Date is changed when using Selective Transfer
- Fix that ADIT DICOMweb supports STOW of multiple image files at once
-- Currently it only allows uploading one file at a time
151 changes: 86 additions & 65 deletions adit/batch_query/processors.py
@@ -4,7 +4,7 @@
from adit.core.errors import DicomError
from adit.core.models import DicomNode, DicomTask
from adit.core.processors import DicomTaskProcessor
from adit.core.types import ProcessingResult
from adit.core.types import DicomLogEntry, ProcessingResult
from adit.core.utils.dicom_dataset import QueryDataset, ResultDataset
from adit.core.utils.dicom_operator import DicomOperator

@@ -24,27 +24,27 @@ def __init__(self, dicom_task: DicomTask) -> None:
assert source.node_type == DicomNode.NodeType.SERVER
self.operator = DicomOperator(source.dicomserver)

def process(self) -> ProcessingResult:
# TODO: Allow to get multiple patients (but provide a warning if there are multiple
# patients). We also have to use a for loop then. Make sure that users that provide
# only a patient name must provide a birth date as well.
# Also make sure that patient name doesn't contain wildcards.
def _get_logs(self) -> list[DicomLogEntry]:
logs: list[DicomLogEntry] = []
logs.extend(self.operator.get_logs())
logs.extend(self.logs)
return logs

patient = self._fetch_patient()
def process(self) -> ProcessingResult:
patients = self._fetch_patients()

is_series_query = self.query_task.series_description or self.query_task.series_numbers

if is_series_query:
results = self._query_series(patient.PatientID)
results = self._query_series([patient.PatientID for patient in patients])
msg = f"{len(results)} series found"
else:
results = self._query_studies(patient.PatientID)
results = self._query_studies([patient.PatientID for patient in patients])
msg = f"{len(results)} stud{pluralize(len(results), 'y,ies')} found"

BatchQueryResult.objects.bulk_create(results)

status: BatchQueryTask.Status = BatchQueryTask.Status.SUCCESS
logs = self.operator.get_logs()
logs = self._get_logs()
for log in logs:
if log["level"] == "Warning":
status = BatchQueryTask.Status.WARNING
@@ -55,7 +55,7 @@ def process(self) -> ProcessingResult:
"log": "\n".join([log["message"] for log in logs]),
}

def _fetch_patient(self) -> ResultDataset:
def _fetch_patients(self) -> list[ResultDataset]:
patient_id = self.query_task.patient_id
patient_name = self.query_task.patient_name
birth_date = self.query_task.patient_birth_date
@@ -65,32 +65,32 @@ def _fetch_patient(self) -> ResultDataset:
# if those were provided beside the PatientID
if patient_id:
patients = list(self.operator.find_patients(QueryDataset.create(PatientID=patient_id)))
if len(patients) > 1:
raise DicomError("Multiple patients found for this PatientID.")
if len(patients) == 0:
raise DicomError("No patient found with this PatientID.")

# Some checks if they provide a PatientID and also a PatientName or PatientBirthDate.
assert len(patients) == 1
patient = patients[0]
# We can test for equality cause wildcards are not allowed during batch query
# (in contrast to selective transfer).
if patient_name and patient.PatientName != patient_name:
raise DicomError("PatientName doesn't match found patient by PatientID.")
if birth_date and patient.PatientBirthDate != birth_date:
raise DicomError("PatientBirthDate doesn't match found patient by PatientID.")
elif patient_name and birth_date:
patients = list(
self.operator.find_patients(
QueryDataset.create(PatientName=patient_name, PatientBirthDate=birth_date)
)
)
if len(patients) == 0:
raise DicomError("No patient found with this PatientName and PatientBirthDate.")
else:
raise DicomError("PatientName and PatientBirthDate are required.")

if len(patients) == 0:
raise DicomError("Patient not found.")

if len(patients) > 1:
raise DicomError("Multiple patients found.")

patient = patients[0]
raise DicomError("PatientID or PatientName and PatientBirthDate are required.")

# We can test for equality because wildcards are not allowed during
# batch query (only in selective transfer)
if patient_id and patient_name and patient.PatientName != patient_name:
raise DicomError("PatientName doesn't match found patient by PatientID.")

if patient_id and birth_date and patient.PatientBirthDate != birth_date:
raise DicomError("PatientBirthDate doesn't match found patient by PatientID.")

return patient
return patients

def _fetch_studies(self, patient_id: str) -> list[ResultDataset]:
start_date = self.query_task.study_date_start
@@ -170,39 +170,21 @@ def _fetch_series(self, patient_id: str, study_uid: str) -> list[ResultDataset]:

return sorted(series_results, key=lambda series: int(series.get("SeriesNumber", 0)))

def _query_studies(self, patient_id: str) -> list[BatchQueryResult]:
studies = self._fetch_studies(patient_id)
def _query_studies(self, patient_ids: list[str]) -> list[BatchQueryResult]:
results: list[BatchQueryResult] = []
for study in studies:
batch_query_result = BatchQueryResult(
job=self.query_task.job,
query=self.query_task,
patient_id=study.PatientID,
patient_name=study.PatientName,
patient_birth_date=study.PatientBirthDate,
study_uid=study.StudyInstanceUID,
accession_number=study.AccessionNumber,
study_date=study.StudyDate,
study_time=study.StudyTime,
study_description=study.StudyDescription,
modalities=study.ModalitiesInStudy,
image_count=study.NumberOfStudyRelatedInstances,
pseudonym=self.query_task.pseudonym,
series_uid="",
series_description="",
series_number="",
)
results.append(batch_query_result)

return results

def _query_series(self, patient_id: str) -> list[BatchQueryResult]:
studies = self._fetch_studies(patient_id)
for patient_id in patient_ids:
studies = self._fetch_studies(patient_id)

if results and studies:
self.logs.append(
{
"level": "Warning",
"title": "Indistinct patients",
"message": "Studies of multiple patients were found for this query.",
}
)

results: list[BatchQueryResult] = []
for study in studies:
series_list = self._fetch_series(patient_id, study.StudyInstanceUID)
for series in series_list:
for study in studies:
batch_query_result = BatchQueryResult(
job=self.query_task.job,
query=self.query_task,
@@ -214,13 +196,52 @@ def _query_series(self, patient_id: str) -> list[BatchQueryResult]:
study_date=study.StudyDate,
study_time=study.StudyTime,
study_description=study.StudyDescription,
modalities=[series.Modality],
modalities=study.ModalitiesInStudy,
image_count=study.NumberOfStudyRelatedInstances,
pseudonym=self.query_task.pseudonym,
series_uid=series.SeriesInstanceUID,
series_description=series.SeriesDescription,
series_number=str(series.SeriesNumber),
series_uid="",
series_description="",
series_number="",
)
results.append(batch_query_result)

return results

def _query_series(self, patient_ids: list[str]) -> list[BatchQueryResult]:
results: list[BatchQueryResult] = []
for patient_id in patient_ids:
studies = self._fetch_studies(patient_id)

if results and studies:
self.logs.append(
{
"level": "Warning",
"title": "Indistinct patients",
"message": "Studies of multiple patients were found for this query.",
}
)

for study in studies:
series_list = self._fetch_series(patient_id, study.StudyInstanceUID)
for series in series_list:
batch_query_result = BatchQueryResult(
job=self.query_task.job,
query=self.query_task,
patient_id=study.PatientID,
patient_name=study.PatientName,
patient_birth_date=study.PatientBirthDate,
study_uid=study.StudyInstanceUID,
accession_number=study.AccessionNumber,
study_date=study.StudyDate,
study_time=study.StudyTime,
study_description=study.StudyDescription,
modalities=[series.Modality],
image_count=study.NumberOfStudyRelatedInstances,
pseudonym=self.query_task.pseudonym,
series_uid=series.SeriesInstanceUID,
series_description=series.SeriesDescription,
series_number=str(series.SeriesNumber),
)
results.append(batch_query_result)

return results
