This is a generalized implementation of MRI preprocessing for various ML/AI tasks within the Parra Lab.
It is recommended to run this on a Linux system. Windows should work, but has not been verified.
- Provides an interactive webserver for easy control of the preprocessing pipeline (pending update)
- Takes in a raw DICOM directory, analyzes its contents, and produces model inputs with little to no manual intervention
- DICOM headers are scanned and parsed during processing; a full list of required DICOM attributes will be added to this README at a future date.
Clone the repository:
git clone https://github.com/TheParraLab/MRI_preprocessing
cd MRI_preprocessing
Install dependencies:
python3 install.py
Note: The installation script installs Docker and configures it with GPU access for ML applications. GPU access is not required for preprocessing alone, and Docker can also be installed manually if preferred.
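If you install Docker manually (or simply want to confirm the setup), you can check the installation and, optionally, GPU visibility from inside a container. The CUDA image tag below is only an illustration; any CUDA base image works:
# Confirm Docker is installed
docker --version
# Optional: confirm the GPU is visible inside a container (requires the NVIDIA Container Toolkit)
docker run --rm --gpus all nvidia/cuda:12.2.0-base-ubuntu22.04 nvidia-smi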
Start the application:
bash start_control.sh
When started with start_control.sh, you will be prompted for:
- Whether you want to start the webserver component (y/n, default: y)
- The path to the raw data on your local system
The supplied directory will be placed at /FL_system/data/raw/ within the Docker container.
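Once the container is up, you can quickly confirm that your data is visible at that path (this assumes the default container name, control, used below):
# List the raw data as seen from inside the container
docker exec control ls /FL_system/data/raw/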
If you choose to start with the webserver, the system will be accessible on port 5000 for easy control of the preprocessing pipeline. (Note: the webserver is still under development and not finalized; it should not be exposed outside your local network.)
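If you need to reach the webserver from another machine without exposing port 5000, an SSH tunnel is one option (user@server below is a placeholder for your own login on the machine running the container):
# Forward local port 5000 to the host running the container, then browse to http://localhost:5000
ssh -L 5000:localhost:5000 user@server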
If you choose to run without the webserver, the container will start with just the preprocessing capabilities. You can then run preprocessing commands in one of two ways:
Option 1: Using the convenience script
bash access_preprocessing.sh
Option 2: Direct Docker access
docker exec -it control bash
cd /FL_system/code/preprocessing/
# Run individual preprocessing scripts or the full pipeline
The code to perform the preprocessing is provided in /code/preprocessing. The 00_preprocess.sh script runs all preprocessing steps in series, placing fully processed data into /data/inputs.
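As a convenience, the full pipeline can also be launched in one line from the host; this is a minimal sketch, assuming the container name and paths shown above:
# Run every preprocessing step in series inside the running container
docker exec control bash -c "cd /FL_system/code/preprocessing && bash 00_preprocess.sh"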
If needed, individual preprocessing scripts can be run by accessing the container's CLI and running python3 0X_script.py from within the /FL_system/code/preprocessing/ directory, as shown below. The Python scripts will be modified to accept parameters as command-line arguments in the near future.
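For example, a single step can be run from inside the container like this (0X_script.py stands in for the actual step name; the scripts do not yet take command-line arguments):
# From inside the control container
cd /FL_system/code/preprocessing/
python3 0X_script.py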
In the future, I plan for the control webserver to be able to perform model inference and training. To this end, the webserver will report GPU availability (preprocessing alone does not require a GPU).
TODO: Populate Acknowledgements
Feel free to reach out if you have any questions or suggestions! nleotta000@citymail.cuny.edu