Replace pydicom-seg with highdicom and upgrade pydicom to 3.0.1 #1879
Conversation
Walkthrough

Added a NIfTI↔DICOM‑SEG conversion dispatcher with HighDicom and ITK/dcmqi backends (label extraction, remapping, custom tags), DICOM‑SEG→ITK round‑trip support, defensive package metadata filtering, clearer 404 messages for datastore endpoints, dependency/Python bumps, expanded conversion tests, and minor typing/cleanup.
Sequence Diagram(s)

```mermaid
sequenceDiagram
participant Client
participant Dispatcher as nifti_to_dicom_seg()
participant FS as Filesystem
participant HD as highdicom
participant CLI as dcmqi_CLI
Client->>Dispatcher: nifti_to_dicom_seg(series_dir, label, label_info, use_itk, custom_tags)
Dispatcher->>FS: read label (NIfTI) & DICOM series files
Dispatcher->>Dispatcher: _extract_label_info()
alt highdicom available AND not use_itk
Dispatcher->>Dispatcher: remap labels & build SegmentDescriptions
Dispatcher->>HD: construct Segmentation object
HD-->>Dispatcher: Segmentation + pixel data
Dispatcher->>FS: write DICOM SEG file (+ custom tags)
else use_itk OR highdicom unavailable
Dispatcher->>FS: prepare template series / temp files
Dispatcher->>CLI: run itkimage2segimage(...)
CLI-->>FS: produce DICOM SEG file
end
Dispatcher-->>Client: return output_path or raise error
```

```mermaid
sequenceDiagram
participant Client
participant Reader as dicom_seg_to_itk_image()
participant HD as highdicom
participant ITK as SimpleITK
participant FS as Filesystem
Client->>Reader: dicom_seg_to_itk_image(dicom_seg_path, output_ext)
Reader->>HD: read DICOM SEG -> frames + segment metadata
HD-->>Reader: segmentation frames
Reader->>Reader: assemble 3D volume, set spacing/origin/direction
Reader->>ITK: create SimpleITK image from array + metadata
ITK->>FS: write output (.seg.nrrd or requested ext)
Reader-->>Client: return output_path
```

Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
Pre-merge checks and finishing touches

✅ Passed checks (3 passed)
Warning: Tools execution failed with the following error: Failed to run tools: 13 INTERNAL: Received RST_STREAM with code 2 (Internal server error)
Actionable comments posted: 12
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
monailabel/datastore/utils/convert.py (2)
309-410: ITK path: don't catch blind `Exception`; use `logger.exception` + preserve traceback

This is exactly what Ruff is flagging (BLE001/TRY400). Also, the same `label_info[i]` vs `unique_labels` mismatch applies here.

```diff
-    try:
-        return _dcmqi_nifti_to_dicom_seg(label, series_dir, template)
-    except Exception as e:
-        logger.error(f"ITK DICOM SEG conversion failed: {e}")
-        return ""
+    try:
+        return _dcmqi_nifti_to_dicom_seg(label, series_dir, template)
+    except (RuntimeError, OSError, ValueError) as e:
+        logger.exception("ITK DICOM SEG conversion failed")
+        return ""
```
454-499: dcmqi wrapper: ensure temp metadata cleanup on failure; avoid temp-name TOCTOU

- `meta_data` is unlinked only on success; on failure it leaks the temp json.
- Using `NamedTemporaryFile(...).name` without holding the handle is a small TOCTOU footgun; prefer `delete=False` and explicit cleanup.

```diff
-    output_file = tempfile.NamedTemporaryFile(suffix=".dcm").name
-    meta_data = tempfile.NamedTemporaryFile(suffix=".json").name
-    with open(meta_data, "w") as fp:
-        json.dump(template, fp)
+    output_file = tempfile.NamedTemporaryFile(suffix=".dcm", delete=False).name
+    meta_data = tempfile.NamedTemporaryFile(suffix=".json", delete=False).name
+    try:
+        with open(meta_data, "w") as fp:
+            json.dump(template, fp)
@@
-    run_command(command, args)
-    os.unlink(meta_data)
-    return output_file
+        run_command(command, args)
+        return output_file
+    finally:
+        try:
+            os.unlink(meta_data)
+        except OSError:
+            pass
```
🧹 Nitpick comments (6)
monailabel/config.py (1)
20-21: Guard against `None` metadata is a solid hardening change.
Optional: consider normalizing both sides (e.g., `.casefold()` / canonical name) to reduce false negatives across packaging name variants.

tests/unit/datastore/test_convert.py (3)
37-70: `test_dicom_to_nifti`: sanity checks are good; HU bounds may be modality-dependent.
If this test might run on non-CT series in the future, HU assertions (-2048..4095) could become brittle; consider gating them on modality/RescaleType or loosening to "not all NaN/inf + reasonable dynamic range".
542-596: `test_dicom_seg_to_itk_image`: good assertions, but ensure temp files don't leak on failure.
Same pattern: cleanup happens after asserts; consider `try/finally` around `dicom_seg_file` and `result`.
597-621: `test_custom_tags`: good coverage; consider asserting tag VR/type expectations if you've had regressions.
Current checks validate presence/values; if prior issues involved wrong VR/type, consider also asserting they serialize as expected.

monailabel/datastore/utils/convert.py (2)
12-35: Private helper functions should have explicit guards or fail-fast error messages

The dispatchers `nifti_to_dicom_seg()` and `dicom_seg_to_itk_image()` both guard against missing highdicom with explicit `ImportError` checks (lines 439–440, 513–514), which is good. However, `_highdicom_nifti_to_dicom_seg()` references `hd` directly (lines 187, 192, 239–246, 281) without any guard. While this function is genuinely private and only called from the guarded dispatcher, adding a guard at its entry point would:
- Catch accidental direct imports/calls more clearly
- Make the dependency explicit in the code
- Improve maintainability if the function's visibility changes
Consider adding an assert or explicit check at the start of `_highdicom_nifti_to_dicom_seg()`:

```python
if not HIGHDICOM_AVAILABLE:
    raise ImportError("highdicom is not available")
```

Regarding pixel_array format: highdicom's `Segmentation` constructor accepts labelmap-style arrays (3D shape with integer segment numbers), and your remapping logic (lines 221–223) correctly remaps to sequential 1, 2, 3... as required.
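To make that remapping step concrete, here is a small illustrative sketch (not the PR's exact code; array and variable names are assumptions) of mapping arbitrary label values to the sequential segment numbers that the segment descriptions are built for:

```python
import numpy as np

# Hypothetical label volume containing non-sequential values (e.g. 1, 5, 10).
label_array = np.array([[0, 5], [10, 1]], dtype=np.uint16)

unique_labels = np.unique(label_array)
unique_labels = unique_labels[unique_labels != 0]  # -> [1, 5, 10]

# Remap each original value to a sequential segment number 1, 2, 3...
seg_array = np.zeros_like(label_array)
for segment_number, original_value in enumerate(unique_labels, start=1):
    seg_array[label_array == original_value] = segment_number

print(seg_array)  # [[0 2]
                  #  [3 1]]
```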
413-452: Improve error message for missing highdicom with installation hint

The current error message `ImportError("highdicom is not available")` should include a helpful pip installation hint. Since MONAILabel does not use optional dependency extras (no `[dicom]` or similar), suggest the error message include `"highdicom is not available. Install it with: pip install highdicom"` to provide users with a clear path forward when they hit this runtime error.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
Cache: Disabled due to data retention organization setting
Knowledge base: Disabled due to data retention organization setting
📒 Files selected for processing (6)
- monailabel/config.py (1 hunks)
- monailabel/datastore/utils/convert.py (5 hunks)
- monailabel/endpoints/datastore.py (2 hunks)
- requirements.txt (2 hunks)
- setup.cfg (1 hunks)
- tests/unit/datastore/test_convert.py (2 hunks)
🧰 Additional context used
🧬 Code graph analysis (1)
monailabel/endpoints/datastore.py (1)
monailabel/datastore/local.py (1)
label (70-72)
🪛 Ruff (0.14.8)
tests/unit/datastore/test_convert.py
79-79: Local variable ref_array_sitk is assigned to but never used
Remove assignment to unused variable ref_array_sitk
(F841)
179-179: f-string without any placeholders
Remove extraneous f prefix
(F541)
372-372: f-string without any placeholders
Remove extraneous f prefix
(F541)
382-382: Local variable original_origin is assigned to but never used
Remove assignment to unused variable original_origin
(F841)
397-397: Local variable result_origin is assigned to but never used
Remove assignment to unused variable result_origin
(F841)
502-502: f-string without any placeholders
Remove extraneous f prefix
(F541)
552-552: f-string without any placeholders
Remove extraneous f prefix
(F541)
607-607: f-string without any placeholders
Remove extraneous f prefix
(F541)
833-833: Do not catch blind exception: Exception
(BLE001)
851-851: f-string without any placeholders
Remove extraneous f prefix
(F541)
monailabel/datastore/utils/convert.py
45-45: Local variable t_load is assigned to but never used
Remove assignment to unused variable t_load
(F841)
45-45: Local variable t_cpu is assigned to but never used
Remove assignment to unused variable t_cpu
(F841)
45-45: Local variable t_write is assigned to but never used
Remove assignment to unused variable t_write
(F841)
408-408: Do not catch blind exception: Exception
(BLE001)
409-409: Use logging.exception instead of logging.error
Replace with exception
(TRY400)
440-440: Avoid specifying long messages outside the exception class
(TRY003)
514-514: Avoid specifying long messages outside the exception class
(TRY003)
🔇 Additional comments (8)
monailabel/endpoints/datastore.py (1)
133-140: Improved missing-file handling (directory-safe) looks correct.
Switching to `os.path.isfile(...)` prevents "download a directory" edge cases and the 404 details are clearer; logging the resolved path is useful for ops (assuming logs aren't exposed to end users).

Also applies to: 152-158
tests/unit/datastore/test_convert.py (5)
12-29: New test dependencies/imports are aligned with the new conversion surface.
No functional issues in this import block.
129-168: `test_nifti_to_dicom_seg_highdicom`: good end-to-end validation.
Segment count expectation derived from the input label is a nice touch; checking `PixelData` and `NumberOfFrames` covers the "empty SEG" regression class.
457-541: Empty-label + missing-label-info coverage is solid.
Returning `""` for empty labels and verifying default segment metadata are both valuable guardrails.
724-815: Large-label-value remap test is strong and robust (full-slice regions).
Using full-slice fills is a good way to avoid “segment disappears” artifacts and makes the remap assertions trustworthy.
841-892: Metadata completeness test is a good “minimum viable SEG” contract.
The required-attribute list is sensible and should catch many partial-write regressions.

requirements.txt (1)
27-30: The highdicom==0.26.1 pin is safe; it contains only a bug-fix (tiled segmentation volume origin, #368) with no API changes.
Since tests now exercise both SEG backends, CI should validate the exact pinned versions (pydicom==3.0.1, highdicom==0.26.1, pynetdicom==2.0.2, pynrrd==1.1.3) to ensure consistency with the bugfix.
monailabel/datastore/utils/convert.py (1)
501-559: [rewritten review comment]
[classification tag]
```python
def dicom_to_nifti(series_dir, is_seg=False):
    start = time.time()
    t_load = t_cpu = t_write = None
```
Remove unused locals (t_load, t_cpu, t_write)
Ruff is right: these are assigned but never used.
```diff
-    t_load = t_cpu = t_write = None
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
🧰 Tools
🪛 Ruff (0.14.8)
45-45: Local variable t_load is assigned to but never used
Remove assignment to unused variable t_load
(F841)
45-45: Local variable t_cpu is assigned to but never used
Remove assignment to unused variable t_cpu
(F841)
45-45: Local variable t_write is assigned to but never used
Remove assignment to unused variable t_write
(F841)
🤖 Prompt for AI Agents
In monailabel/datastore/utils/convert.py around line 45, the local variables
t_load, t_cpu and t_write are assigned but never used; remove the assignment
line "t_load = t_cpu = t_write = None" (or any unused variable declarations) to
satisfy Ruff and avoid dead code, and run tests/linter to confirm no other
references remain.
```
pydicom>=3.0.1
highdicom>=0.26.1
pynetdicom>=2.0.2
pynrrd>=1.0.0
pynrrd>=1.1.3
```
🧩 Analysis chain
🌐 Web query:
highdicom 0.26.1 pydicom 3.0 compatibility
💡 Result:
Short answer: not guaranteed. pydicom 3.0 introduced several breaking changes; highdicom 0.26.x documentation does not state explicit support for pydicom 3.0, so you should either pin pydicom <3 (if you must keep highdicom 0.26.1) or upgrade highdicom to a release that explicitly targets newer pydicom. [1][2][3]
Details / sources:
- pydicom 3.0 release notes (major breaking changes). [1]
- highdicom installation/docs (describes dependency on pydicom but does not claim explicit pydicom-3.0 compatibility for 0.26.1). [2][3]
Recommendation:
- If you need stability now: install highdicom 0.26.1 with pydicom 2.x (pin pydicom<3).
- If you want pydicom 3.x features, upgrade highdicom to a version that documents pydicom‑3 support (or test your code thoroughly).
References:
[1] pydicom 3.0 release notes.
[2] highdicom installation / requirements (docs).
[3] highdicom — notes showing relation to pydicom in docs.
Resolve pydicom and highdicom version incompatibility.
The constraint pydicom>=3.0.1 is incompatible with highdicom>=0.26.1. pydicom 3.0 introduced breaking changes that highdicom 0.26.1 does not support. Either pin pydicom<3 to maintain highdicom 0.26.1, or upgrade highdicom to a version with explicit pydicom 3.x compatibility.
Separately, confirm whether highdicom should remain a hard dependency or be moved to an optional extra (like dicom-seg).
🤖 Prompt for AI Agents
In setup.cfg around lines 53 to 56, the constraint pydicom>=3.0.1 conflicts with
highdicom>=0.26.1 because highdicom 0.26.1 is not compatible with pydicom 3.x;
fix by either pinning pydicom to a 2.x range (e.g., pydicom<3) to keep highdicom
0.26.1, or upgrade highdicom to a version that declares pydicom 3.x
compatibility (replace highdicom>=0.26.1 with the minimally required compatible
version); additionally decide if highdicom should be a hard dependency or moved
to an optional extra (e.g., [extras] dicom-seg) and update setup.cfg
accordingly.
```python
    def _test_nifti_to_dicom_seg_with_label_info_impl(self, use_itk):
        """Helper: Test NIfTI to DICOM SEG conversion with custom label info."""
        image = os.path.join(self.dicom_dataset, "1.2.826.0.1.3680043.8.274.1.1.8323329.686549.1629744177.996087")
        label = os.path.join(
            self.dicom_dataset,
            "labels",
            "final",
            "1.2.826.0.1.3680043.8.274.1.1.8323329.686549.1629744177.996087.nii.gz",
        )
        if not os.path.exists(image) or not os.path.exists(label):
            self.skipTest(f"Test data not found")

        label_info = [
            {
                "name": "Spleen",
                "description": "Spleen organ",
                "color": [255, 0, 0],
                "model_name": "TestModel",
            }
        ]

        result = nifti_to_dicom_seg(image, label, label_info, use_itk=use_itk)

        assert os.path.exists(result)
        dcm = pydicom.dcmread(result)

        # Verify series description
        assert dcm.SeriesDescription == "TestModel"

        # Verify segment metadata is properly set
        assert len(dcm.SegmentSequence) >= 1
        seg = dcm.SegmentSequence[0]

        # Verify segment label from label_info
        assert seg.SegmentLabel == "Spleen", f"Expected 'Spleen', got '{seg.SegmentLabel}'"

        # Note: SegmentDescription is optional in DICOM, may not be present
        # The description is typically in SegmentedPropertyTypeCodeSequence.CodeMeaning

        # Verify algorithm information
        assert seg.SegmentAlgorithmType == "AUTOMATIC"
        assert seg.SegmentAlgorithmName == "MONAILABEL"

        # Verify segment has required code sequences
        assert hasattr(seg, "SegmentedPropertyCategoryCodeSequence")
        assert len(seg.SegmentedPropertyCategoryCodeSequence) > 0
        assert hasattr(seg, "SegmentedPropertyTypeCodeSequence")
        assert len(seg.SegmentedPropertyTypeCodeSequence) > 0

        # Verify pixel data exists
        assert hasattr(dcm, "PixelData")
        assert len(dcm.PixelData) > 0

        # Verify frame count
        assert hasattr(dcm, "NumberOfFrames")
        assert dcm.NumberOfFrames > 0

        os.unlink(result)

    def test_nifti_to_dicom_seg_with_label_info_highdicom(self):
        """Test label info conversion using highdicom implementation."""
        self._test_nifti_to_dicom_seg_with_label_info_impl(use_itk=False)

    def test_nifti_to_dicom_seg_with_label_info_itk(self):
        """Test label info conversion using ITK implementation."""
        self._test_nifti_to_dicom_seg_with_label_info_impl(use_itk=True)
```
Remove placeholder f-strings (Ruff F541) and keep helper messages useful.
These skipTest(f"...") strings have no placeholders.
```diff
@@
-        if not os.path.exists(image) or not os.path.exists(label):
-            self.skipTest(f"Test data not found")
+        if not os.path.exists(image) or not os.path.exists(label):
+            self.skipTest("Test data not found")
```

🧰 Tools
🪛 Ruff (0.14.8)
179-179: f-string without any placeholders
Remove extraneous f prefix
(F541)
🤖 Prompt for AI Agents
In tests/unit/datastore/test_convert.py around lines 169 to 235, the test uses
skipTest(f"Test data not found") with an unnecessary f-string (Ruff F541);
replace the f-string with a normal string literal (e.g., "Test data not found")
to remove the placeholder formatting while keeping the helper message unchanged.
2e51a42 to f5c2428 (Compare)
Actionable comments posted: 1
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
monailabel/datastore/utils/convert.py (1)
480-497: Temp file for output may be deleted before use.
`tempfile.NamedTemporaryFile(suffix=".dcm")` without `delete=False` will delete the file when closed, but the file is needed after `run_command` completes.

```diff
-    output_file = tempfile.NamedTemporaryFile(suffix=".dcm").name
+    output_file = tempfile.NamedTemporaryFile(suffix=".dcm", delete=False).name
```
♻️ Duplicate comments (5)
tests/unit/datastore/test_convert.py (2)
757-768: Broad `except Exception` remains from previous review. This was flagged in a past review. The function returns empty string for invalid directories, so consider asserting directly on the return value or catching specific exceptions.

174-176: Remove unused variable `ref_array_sitk`. This variable is assigned but never used, as flagged by static analysis.

```diff
 # Load reference using both methods to get expected values
 ref_img = sitk.ReadImage(reference_image)
-ref_array_sitk = sitk.GetArrayFromImage(ref_img)
```

monailabel/datastore/utils/convert.py (3)
202-210: DICOM file discovery using `glob("*")` may match non-DICOM files.

The `glob(file_ext)` with default `"*"` will match all files including non-DICOM files (e.g., `.txt`, `.json`). `dcmread()` will raise `InvalidDicomError` for these, potentially aborting the conversion. This was flagged in a previous review.

Consider filtering with `pydicom.misc.is_dicom()` or wrapping `dcmread` in exception handling:

```diff
     image_files = list(series_dir.glob(file_ext))
-    image_datasets = [dcmread(str(f), stop_before_pixels=True) for f in sorted(image_files)]
+    image_datasets = []
+    for f in sorted(image_files):
+        try:
+            ds = dcmread(str(f), stop_before_pixels=True)
+            image_datasets.append(ds)
+        except Exception:
+            logger.debug(f"Skipping non-DICOM file: {f}")
```
237-252: Critical: `segmentation_type=BINARY` should be `LABELMAP` for integer label array.

The code passes a remapped 3D integer array (values 1, 2, 3...) but declares `segmentation_type=BINARY`. Per highdicom 0.26.1 documentation:
- BINARY: requires 4D one-hot encoding (F, H, W, S) where each segment is a separate binary mask
- LABELMAP: uses 3D integer arrays where pixel values represent segment numbers
The current data format matches LABELMAP semantics.
```diff
 seg = hd.seg.Segmentation(
     source_images=image_datasets,
     pixel_array=seg_array,
-    segmentation_type=hd.seg.SegmentationTypeValues.BINARY,
+    segmentation_type=hd.seg.SegmentationTypeValues.LABELMAP,
     segment_descriptions=segment_descriptions,
```
286-298: Custom tags handling has issues with keyword lookup.

As noted in a previous review, `k in seg` with keyword strings doesn't work correctly in pydicom. The `Dataset.__contains__` is tag-oriented.
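A hedged sketch of the tag-oriented lookup being suggested (illustrative only): resolve the keyword to a tag before testing membership, which sidesteps any ambiguity about how `Dataset.__contains__` treats raw keyword strings.

```python
from pydicom.datadict import tag_for_keyword
from pydicom.dataset import Dataset

ds = Dataset()
ds.SeriesDescription = "TestModel"

keyword = "SeriesDescription"
tag = tag_for_keyword(keyword)  # returns None for unknown keywords
if tag is not None and tag in ds:
    print(ds[tag].value)  # -> TestModel
```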
🧹 Nitpick comments (2)
tests/unit/datastore/test_convert.py (1)
107-136: Synthetic label creation helper is useful but has potential OOB risk.

The `_create_multi_segment_array` method uses fixed slice ranges (e.g., `50:150`, `x_offset:x_offset+80`) that assume minimum dimensions. While this was flagged in previous reviews, the risk remains for smaller test datasets. Consider deriving bounds from `dims` or adding assertions.

```diff
 def _create_multi_segment_array(self, dims, segment_values, region_type="full_slice"):
     ...
     label_array = np.zeros((dims[2], dims[1], dims[0]), dtype=np.uint8)
+    # Ensure dims are large enough for region types that use fixed offsets
+    if region_type in ("regions", "large_regions"):
+        min_dim = 160 if region_type == "large_regions" else 100 + 80 * len(segment_values)
+        assert dims[0] >= min_dim and dims[1] >= min_dim, \
+            f"Dims {dims} too small for region_type={region_type}"
     z_segment = max(1, dims[2] // len(segment_values))
```

monailabel/datastore/utils/convert.py (1)
405-409: Use `logging.exception` for better error diagnostics.

The broad `except Exception` is acceptable here since it wraps an external CLI tool, but use `logging.exception` to include the stack trace.

```diff
     try:
         return _dcmqi_nifti_to_dicom_seg(label, series_dir, template)
     except Exception as e:
-        logger.error(f"ITK DICOM SEG conversion failed: {e}")
+        logger.exception(f"ITK DICOM SEG conversion failed: {e}")
         return ""
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
Cache: Disabled due to data retention organization setting
Knowledge base: Disabled due to data retention organization setting
📒 Files selected for processing (6)
- monailabel/config.py (1 hunks)
- monailabel/datastore/utils/convert.py (5 hunks)
- monailabel/endpoints/datastore.py (2 hunks)
- requirements.txt (2 hunks)
- setup.cfg (1 hunks)
- tests/unit/datastore/test_convert.py (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (3)
- setup.cfg
- monailabel/config.py
- monailabel/endpoints/datastore.py
🧰 Additional context used
🧬 Code graph analysis (1)
tests/unit/datastore/test_convert.py (1)
monailabel/datastore/utils/convert.py (4)
binary_to_image (73-90), dicom_seg_to_itk_image (500-558), dicom_to_nifti (43-70), nifti_to_dicom_seg (412-450)
🪛 Ruff (0.14.8)
tests/unit/datastore/test_convert.py
176-176: Local variable ref_array_sitk is assigned to but never used
Remove assignment to unused variable ref_array_sitk
(F841)
385-385: Local variable original_origin is assigned to but never used
Remove assignment to unused variable original_origin
(F841)
400-400: Local variable result_origin is assigned to but never used
Remove assignment to unused variable result_origin
(F841)
762-762: Do not catch blind exception: Exception
(BLE001)
monailabel/datastore/utils/convert.py
407-407: Do not catch blind exception: Exception
(BLE001)
408-408: Use logging.exception instead of logging.error
Replace with exception
(TRY400)
439-439: Avoid specifying long messages outside the exception class
(TRY003)
513-513: Avoid specifying long messages outside the exception class
(TRY003)
🔇 Additional comments (9)
requirements.txt (2)
55-58: Good documentation for optional dcmqi dependency.
Clear instructions for installing the optional dcmqi package when ITK-based conversion is needed.

27-30: Dependency upgrades are compatible with the codebase.
The upgrade from pydicom 2.4.4 to 3.0.1, addition of highdicom 0.26.1, and pynrrd 1.1.3 align with PR objectives. The codebase already uses non-deprecated APIs (dcmread/dcmwrite, Dataset, Sequence imports) that are fully supported in pydicom 3.x. No usage of deprecated read_file/write_file, JPEGLossless UID comparisons, or direct pixel_array access from pydicom was found. However, note that pydicom 3.x requires Python 3.10+; ensure your deployment environment meets this requirement.
tests/unit/datastore/test_convert.py (3)
12-29: Imports and module setup look correct.
The imports include all necessary dependencies for the test suite, and the new utility functions from `convert.py` are properly imported.

139-169: Comprehensive DICOM to NIfTI test with good validation coverage.
The test validates 3D structure, non-zero content, HU range bounds, and spacing reasonableness. Cleanup is properly handled.

770-813: Metadata completeness test is thorough.
Good validation of required DICOM SEG attributes including SOP Class, Modality, and per-segment attributes.
monailabel/datastore/utils/convert.py (4)
28-33: Good conditional import pattern for highdicom.
The try/except with `HIGHDICOM_AVAILABLE` flag allows graceful degradation when highdicom is not installed.

93-118: Helper function correctly extracts unique labels.
The function properly loads the label file and extracts non-zero unique values. The warning for empty labels is helpful.

412-450: Clean dispatcher pattern with appropriate fallback handling.
The function correctly selects between implementations based on the `use_itk` flag and settings, with proper error handling when highdicom is unavailable.
500-558: DICOM SEG to ITK conversion with proper coordinate handling.
Good implementation using highdicom's `get_volume()` API. The coordinate system transformations for spacing, origin, and direction are well-documented and appear correct.
Actionable comments posted: 2
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
monailabel/datastore/utils/convert.py (1)
522-567: Critical: temp files may be deleted immediately in `_dcmqi_nifti_to_dicom_seg()`.

`tempfile.NamedTemporaryFile(...).name` without `delete=False` (and without keeping the handle alive) can yield paths that disappear before `itkimage2segimage` uses/writes them.

```diff
-    output_file = tempfile.NamedTemporaryFile(suffix=".dcm").name
-    meta_data = tempfile.NamedTemporaryFile(suffix=".json").name
+    output_file = tempfile.NamedTemporaryFile(suffix=".dcm", delete=False).name
+    meta_data = tempfile.NamedTemporaryFile(suffix=".json", delete=False).name
```
♻️ Duplicate comments (2)
tests/unit/datastore/test_convert.py (2)
200-254: Remove unused `ref_array_sitk` and make `label_bin` cleanup failure-safe.

This is still an unused local, and `os.unlink(label_bin)` won't run if `binary_to_image()`/asserts raise.

```diff
-    ref_array_sitk = sitk.GetArrayFromImage(ref_img)
+    # Geometry validated via spacing checks below; array not needed.
@@
-    result = binary_to_image(reference_image, label_bin)
-    os.unlink(label_bin)
+    try:
+        result = binary_to_image(reference_image, label_bin)
+    finally:
+        os.unlink(label_bin)
```
794-818: Don't catch `Exception` here; assert the expected behavior directly.

`nifti_to_dicom_seg()` returns `""` for a missing `series_dir` in both backends (it doesn't need an exception path), so the broad catch mostly masks failures.

```diff
-    try:
-        # Should handle gracefully (return empty string or raise clear error)
-        result = nifti_to_dicom_seg(invalid_dir, label, None, use_itk=False)
-        # If it returns, should be empty or error logged
-        assert result == "" or not os.path.exists(result), "Should return empty for invalid directory"
-    except Exception as e:
-        # If it raises an error, verify it's informative
-        assert (
-            "DICOM" in str(e) or "not found" in str(e).lower() or "No" in str(e)
-        ), f"Error message should be informative: {e}"
-    finally:
-        os.unlink(label)
+    try:
+        result = nifti_to_dicom_seg(invalid_dir, label, None, use_itk=False)
+        assert result == ""
+    finally:
+        os.unlink(label)
```
🧹 Nitpick comments (3)
monailabel/datastore/utils/convert.py (2)
94-143: Avoid reading the NIfTI twice (and make ID extraction robust to falsy values).

`_extract_label_info()` reads the label, and `_highdicom_nifti_to_dicom_seg()`/`_itk_nifti_to_dicom_seg()` read it again. Consider returning `label_array` (or `mask`) to reuse and cut IO. Also `entry.get("idx") or ...` will skip `0` if it ever appears; safer to check keys explicitly.
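A minimal sketch of that explicit key check (the helper name is hypothetical), so a legitimate label id of 0 is not dropped by falsy `or` chaining:

```python
def find_label_id(entry: dict):
    # Check keys explicitly instead of `entry.get("idx") or ...`,
    # which would skip an id of 0.
    for key in ("idx", "labelID", "label"):
        if key in entry:
            return entry[key]
    return None


print(find_label_id({"idx": 0, "name": "background"}))  # -> 0, not None
```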
449-478: Don't log full dcmqi template at INFO + tighten exception handling.

`logger.info(json.dumps(template, ...))` is very noisy and can leak user-provided metadata; prefer DEBUG and/or redact. Also `except Exception` + `logger.error(...)` drops the traceback.

```diff
-    logger.info(json.dumps(template, indent=2))
+    logger.debug("dcmqi template: %s", json.dumps(template, indent=2))
@@
-    except Exception as e:
-        logger.error(f"ITK DICOM SEG conversion failed: {e}")
+    except (RuntimeError, OSError, ValueError) as e:
+        logger.exception("ITK DICOM SEG conversion failed: %s", e)
         return ""
```

tests/unit/datastore/test_convert.py (1)
70-83: Ensure temp-file cleanup even if asserts fail.

`_extract_pixels_via_roundtrip()` unlinks only on success. Wrap read/extract in `try/finally` so failures don't leak temp files.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
Cache: Disabled due to data retention organization setting
Knowledge base: Disabled due to data retention organization setting
📒 Files selected for processing (2)
- monailabel/datastore/utils/convert.py (5 hunks)
- tests/unit/datastore/test_convert.py (1 hunks)
🧰 Additional context used
🧬 Code graph analysis (1)
tests/unit/datastore/test_convert.py (1)
monailabel/datastore/utils/convert.py (4)
binary_to_image (74-91), dicom_seg_to_itk_image (569-627), dicom_to_nifti (44-71), nifti_to_dicom_seg (481-519)
🪛 Ruff (0.14.8)
tests/unit/datastore/test_convert.py
205-205: Local variable ref_array_sitk is assigned to but never used
Remove assignment to unused variable ref_array_sitk
(F841)
811-811: Do not catch blind exception: Exception
(BLE001)
monailabel/datastore/utils/convert.py
263-263: Consider moving this statement to an else block
(TRY300)
365-365: Do not catch blind exception: Exception
(BLE001)
476-476: Do not catch blind exception: Exception
(BLE001)
477-477: Use logging.exception instead of logging.error
Replace with exception
(TRY400)
508-508: Avoid specifying long messages outside the exception class
(TRY003)
582-582: Avoid specifying long messages outside the exception class
(TRY003)
Actionable comments posted: 5
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
monailabel/datastore/utils/convert.py (1)
549-567: Make `_dcmqi_nifti_to_dicom_seg` cleanup failure-safe (leaks `meta_data` on errors).

If `run_command(...)` fails, the `meta_data` temporary file is never cleaned up. Wrap the file operations in a try-finally block to ensure cleanup:

```diff
-    with open(meta_data, "w") as fp:
-        json.dump(template, fp)
+    try:
+        with open(meta_data, "w") as fp:
+            json.dump(template, fp)
@@
-    run_command(command, args)
-    os.unlink(meta_data)
-    return output_file
+        run_command(command, args)
+        return output_file
+    finally:
+        if os.path.exists(meta_data):
+            os.unlink(meta_data)
```
♻️ Duplicate comments (4)
tests/unit/datastore/test_convert.py (3)
109-147: Guard against zero-sized regions for small images (`box_w`/`box_h` can be 0).

`dims[0] // (len(segment_values)+1)` or `dims[1] // 2` can yield `0`, producing empty labels and brittle tests. Suggest clamping `box_w`/`box_h` (and derived indices) to at least 1. (Matches prior feedback.)
200-250: Remove unused `ref_array_sitk` and make temp-file cleanup failure-safe.

`ref_array_sitk` is unused (Ruff F841). `os.unlink(result)` won't run if an assertion fails mid-test; wrap post-`result` usage in `try/finally`.

```diff
-    ref_array_sitk = sitk.GetArrayFromImage(ref_img)
+    # Geometry is validated via spacing checks below; no need to materialize the array here.
@@
-    result = binary_to_image(reference_image, label_bin)
-    os.unlink(label_bin)
+    result = None
+    try:
+        result = binary_to_image(reference_image, label_bin)
+    finally:
+        os.unlink(label_bin)
@@
-    os.unlink(result)
+    if result and os.path.exists(result):
+        os.unlink(result)
```
791-815: Don't catch blind `Exception` here; assert on the expected contract directly.

This test currently masks behavior and triggers Ruff BLE001. If invalid series dirs should return `""` (as the conversion code already does in some paths), assert that directly.

```diff
-    try:
-        # Should handle gracefully (return empty string or raise clear error)
-        result = nifti_to_dicom_seg(invalid_dir, label, None, use_itk=False)
-        # If it returns, should be empty or error logged
-        assert result == "" or not os.path.exists(result), "Should return empty for invalid directory"
-    except Exception as e:
-        # If it raises an error, verify it's informative
-        assert (
-            "DICOM" in str(e) or "not found" in str(e).lower() or "No" in str(e)
-        ), f"Error message should be informative: {e}"
-    finally:
-        os.unlink(label)
+    result = nifti_to_dicom_seg(invalid_dir, label, None, use_itk=False)
+    assert result == "", "Expected empty-string result for invalid DICOM directory"
+    os.unlink(label)
```

monailabel/datastore/utils/convert.py (1)
569-586: Fix non-deterministic selection of SEG file from directory (`os.listdir()[0]`).

This can pick an arbitrary file (including non-DICOM). Sort and/or validate candidates before `pydicom.dcmread(...)`. (Same issue as previously reported.)
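A sketch of one deterministic selection, assuming a hypothetical helper like `read_first_dicom` (not the PR's API): sort directory entries and skip anything that fails DICOM parsing.

```python
from pathlib import Path

import pydicom
from pydicom.errors import InvalidDicomError


def read_first_dicom(directory: str):
    # Sorted iteration makes the choice reproducible across filesystems.
    for candidate in sorted(p for p in Path(directory).iterdir() if p.is_file()):
        try:
            return pydicom.dcmread(str(candidate))
        except (InvalidDicomError, OSError):
            continue  # skip non-DICOM or unreadable files
    raise FileNotFoundError(f"No readable DICOM file found in {directory}")
```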
🧹 Nitpick comments (3)
tests/unit/datastore/test_convert.py (2)
17-26: Consider skipping tests when optional deps (SimpleITK/highdicom) are missing, instead of import-time failure.

Right now `import SimpleITK as sitk` will hard-fail collection in environments where it's absent; if SimpleITK is truly optional, gate these tests via `unittest.skipUnless`/`pytest.importorskip`-style patterns.
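One possible gating pattern, sketched here under the assumption that the tests use plain `unittest` (class and test names are illustrative):

```python
import importlib.util
import unittest

HAS_SITK = importlib.util.find_spec("SimpleITK") is not None


@unittest.skipUnless(HAS_SITK, "SimpleITK is not installed")
class TestConvertWithSimpleITK(unittest.TestCase):
    def test_simpleitk_available(self):
        import SimpleITK as sitk  # only executed when SimpleITK is present

        self.assertTrue(hasattr(sitk, "ReadImage"))
```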
165-195: HU-range assertion may be modality-specific and brittle.

The `[-2048, 4095]` range strongly assumes CT; if this test dataset ever changes modality, it'll fail for non-functional reasons. Consider loosening to "not all zeros + finite" (or read Modality and branch).

monailabel/datastore/utils/convert.py (1)
227-273: Spatial sorting is directionally correct; consider tie-breakers + consistency checks for mixed-orientation series.

Sorting by `dot(IPP, normal)` is the right core approach, but if multiple frames share the same projected location (or orientation varies), ordering can still be unstable. Consider: (1) verifying consistent `ImageOrientationPatient`/`FrameOfReferenceUID`, and (2) using a tuple key `(slice_location, InstanceNumber)` as a tie-breaker.
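An illustrative sketch of that ordering (variable and function names are assumptions, not the PR's code): project `ImagePositionPatient` onto the slice normal and fall back to `InstanceNumber` when projected locations tie.

```python
import numpy as np


def sort_datasets_spatially(datasets):
    def sort_key(ds):
        orientation = np.asarray(ds.ImageOrientationPatient, dtype=float)
        normal = np.cross(orientation[:3], orientation[3:])  # slice normal
        position = np.asarray(ds.ImagePositionPatient, dtype=float)
        slice_location = float(np.dot(position, normal))
        # Tie-breaker keeps ordering stable when projected locations coincide.
        return (slice_location, int(getattr(ds, "InstanceNumber", 0)))

    return sorted(datasets, key=sort_key)
```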
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
Cache: Disabled due to data retention organization setting
Knowledge base: Disabled due to data retention organization setting
📒 Files selected for processing (2)
- monailabel/datastore/utils/convert.py (5 hunks)
- tests/unit/datastore/test_convert.py (1 hunks)
🧰 Additional context used
🧬 Code graph analysis (1)
tests/unit/datastore/test_convert.py (1)
monailabel/datastore/utils/convert.py (4)
binary_to_image (74-91), dicom_seg_to_itk_image (569-627), dicom_to_nifti (44-71), nifti_to_dicom_seg (481-519)
🪛 Ruff (0.14.8)
tests/unit/datastore/test_convert.py
202-202: Local variable ref_array_sitk is assigned to but never used
Remove assignment to unused variable ref_array_sitk
(F841)
808-808: Do not catch blind exception: Exception
(BLE001)
monailabel/datastore/utils/convert.py
263-263: Consider moving this statement to an else block
(TRY300)
365-365: Do not catch blind exception: Exception
(BLE001)
476-476: Do not catch blind exception: Exception
(BLE001)
477-477: Use logging.exception instead of logging.error
Replace with exception
(TRY400)
508-508: Avoid specifying long messages outside the exception class
(TRY003)
582-582: Avoid specifying long messages outside the exception class
(TRY003)
🔇 Additional comments (1)
monailabel/datastore/utils/convert.py (1)
12-35: Highdicom availability gating looks good, but ensure callers/tests handle ImportError predictably.
Given highdicom is optional, please confirm the project's supported install matrix expects `ImportError` vs returning `""` for missing highdicom (currently the dispatcher raises for the non-ITK path).
```
def _extract_label_info(label, label_info):
    """Extract unique labels and model info from label file.
    # Only use config if no explicit override
    if use_itk is None:
        use_itk = settings.MONAI_LABEL_USE_ITK_FOR_DICOM_SEG
    Args:
        label: Path to NIfTI label file
        label_info: List of dictionaries containing segment information, each with optional
            "idx"/"labelID"/"label" field to map to actual label values
    start = time.time()
    Returns:
        tuple: (unique_labels, info_by_label_id, model_name) or (None, None, None) if empty
    """
    # Load label file using SimpleITK (consistent with conversion pipeline)
    mask = SimpleITK.ReadImage(label)
    label_array = SimpleITK.GetArrayFromImage(mask)

    label_np, meta_dict = LoadImage(image_only=False)(label)
    unique_labels = np.unique(label_np.flatten()).astype(np.int_)
    # Extract unique non-zero labels
    unique_labels = np.unique(label_array).astype(np.int_)
    unique_labels = unique_labels[unique_labels != 0]

    info = label_info[0] if label_info and 0 < len(label_info) else {}
    model_name = info.get("model_name", "AIName")
    if not unique_labels.size:
        logger.warning("No non-zero labels found in segmentation")
        return None, None, None

    # Build mapping from label ID to metadata
    # Look for explicit ID fields: "idx", "labelID", or "label"
    info_by_label_id = {}
    has_explicit_ids = False

    if label_info:
        for entry in label_info:
            # Find the label ID from various possible field names
            label_id = entry.get("idx") or entry.get("labelID") or entry.get("label")
            if label_id is not None:
                info_by_label_id[int(label_id)] = entry
                has_explicit_ids = True

    # If no explicit IDs found, fall back to positional mapping
    # Assume label_info is ordered to match unique_labels
    if not has_explicit_ids:
        for i, label_id in enumerate(unique_labels):
            if i < len(label_info):
                info_by_label_id[int(label_id)] = label_info[i]

    # Extract model_name (can be in any entry, prefer first entry)
    model_name = "MONAILabel"  # Default
    if label_info and len(label_info) > 0:
        model_name = label_info[0].get("model_name", "MONAILabel")

    return unique_labels, info_by_label_id, model_name
```
Harden _extract_label_info: validate label_info entry types and ID coercion.
entry.get(...) assumes each entry is a dict, and int(label_id) can throw. A single malformed entry will crash conversion.
```diff
 if label_info:
     for entry in label_info:
+        if not isinstance(entry, dict):
+            logger.warning(f"Skipping non-dict label_info entry: {type(entry)}")
+            continue
         # Find the label ID from various possible field names
         label_id = entry.get("idx") or entry.get("labelID") or entry.get("label")
         if label_id is not None:
-            info_by_label_id[int(label_id)] = entry
+            try:
+                info_by_label_id[int(label_id)] = entry
+            except (TypeError, ValueError):
+                logger.warning(f"Skipping label_info with non-int label id: {label_id!r}")
+                continue
             has_explicit_ids = True
```
Actionable comments posted: 2
♻️ Duplicate comments (5)
monailabel/datastore/utils/convert.py (3)
320-324: TimezoneOffsetFromUTC formatting via ISO-string slicing is brittle.

Line 323 assumes `dt_now.astimezone().isoformat()` always ends with `±HH:MM`. Safer to format from `dt_now.utcoffset()` into `±HHMM`.
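A small sketch of offset-based formatting (illustrative; `%z` on an aware datetime already yields the `±HHMM` form expected by DICOM):

```python
from datetime import datetime

dt_now = datetime.now().astimezone()  # timezone-aware local time
offset_hhmm = dt_now.strftime("%z")   # e.g. "+0200" or "-0500"
print(offset_hhmm)
```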
94-142: Harden `_extract_label_info()` against malformed `label_info` entries (non-dict / non-int ids).

Line 123-128 assumes `entry` is a dict and `int(label_id)` always succeeds; one bad entry can crash conversions.

```diff
@@
 if label_info:
     for entry in label_info:
+        if not isinstance(entry, dict):
+            logger.warning("Skipping non-dict label_info entry: %s", type(entry))
+            continue
         # Find the label ID from various possible field names
         label_id = entry.get("idx") or entry.get("labelID") or entry.get("label")
         if label_id is not None:
-            info_by_label_id[int(label_id)] = entry
-            has_explicit_ids = True
+            try:
+                info_by_label_id[int(label_id)] = entry
+                has_explicit_ids = True
+            except (TypeError, ValueError):
+                logger.warning("Skipping label_info with non-int label id: %r", label_id)
+                continue
```
352-367: Catch specific exceptions and log traceback for better diagnostics; consider `InvalidDicomError`.

Line 365-366 should catch expected exceptions from `setattr()` rather than all `Exception`s. Based on pydicom 3.0.1 behavior, likely exceptions include `ValueError` (invalid keyword/value), `TypeError` (type mismatch), and `InvalidDicomError`. Use `logger.exception()` to include traceback:

```diff
-    except Exception as ex:
-        logger.warning(f"Custom tag {keyword} was not written, due to {ex}")
+    except (ValueError, TypeError, AttributeError) as ex:
+        logger.exception("Custom tag %s was not written", keyword)
```

Consider also catching `pydicom.errors.InvalidDicomError` if you want to handle DICOM-specific violations separately.

tests/unit/datastore/test_convert.py (2)
393-407: Derive voxel-count expectations from `dims`/`box_size` instead of hard-coding 5000/15000 (test flakiness).

Line 397-406 hard-codes minima that will fail when the source series has smaller in-plane dimensions (since `box_size` is derived from `dims`). Compute expected voxel counts from the actual synthetic geometry.

```diff
@@
-    # Each segment should have at least 5000 voxels (100x100 cross-section across slices)
-    min_expected_voxels = 5000
+    # Each segment should have a minimum voxel count derived from the synthetic geometry
+    box_size = max(1, min(100, dims[0] // 2, dims[1] // 2))
+    z_segment = max(1, dims[2] // 3)
+    # Conservative lower bound: allow some loss during SEG->image reconstruction
+    min_expected_voxels = int(box_size * box_size * z_segment * 0.8)
     for seg_num, count in count_per_segment.items():
         assert (
             count >= min_expected_voxels
         ), f"Segment {seg_num} has too few voxels: {count} < {min_expected_voxels}"
@@
-    assert total_nonzero >= 15000, f"Total segmentation voxels too low: {total_nonzero} (expected >= 15000)"
+    min_total = int(3 * box_size * box_size * z_segment * 0.8)
+    assert total_nonzero >= min_total, f"Total segmentation voxels too low: {total_nonzero} (expected >= {min_total})"
```
451-490: ITK round-trip voxel-count ratio needs Z-range alignment (ITK omits empty slices).

Line 451-460 correctly allows `result_size[2] <= original_size[2]` for ITK, but Line 479-490 still compares voxel counts across arrays with different Z extents, which can make the 3% tolerance meaningless (or fail spuriously).

Suggestion: when `use_itk=True`, restrict `original_array` to the "non-empty Z slices" (or otherwise the overlapping Z range) before doing the per-label count ratios, e.g. filter `original_array` by `np.any(original_array != 0, axis=(1, 2))` and compare against `result_array` (which should correspond to those slices).
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
Cache: Disabled due to data retention organization setting
Knowledge base: Disabled due to data retention organization setting
📒 Files selected for processing (2)
- monailabel/datastore/utils/convert.py (5 hunks)
- tests/unit/datastore/test_convert.py (1 hunks)
🧰 Additional context used
🪛 Ruff (0.14.8)
monailabel/datastore/utils/convert.py
263-263: Consider moving this statement to an else block
(TRY300)
365-365: Do not catch blind exception: Exception
(BLE001)
477-477: Redundant exception object included in logging.exception call
(TRY401)
508-508: Avoid specifying long messages outside the exception class
(TRY003)
595-597: try-except-continue detected, consider logging the exception
(S112)
595-595: Do not catch blind exception: Exception
(BLE001)
600-603: Avoid specifying long messages outside the exception class
(TRY003)
608-608: Avoid specifying long messages outside the exception class
(TRY003)
6d38a78 to 9ddbe53 (Compare)
Actionable comments posted: 0
♻️ Duplicate comments (2)
setup.cfg (1)
53-54: Verify pydicom 3.0.1 and highdicom 0.26.1 compatibility.

A previous review flagged that highdicom 0.26.1 may not support pydicom 3.0 due to breaking changes in pydicom 3.0. This is a critical dependency issue that could cause runtime failures.
Please verify compatibility by running the following tests:
```bash
#!/bin/bash
# Test that highdicom 0.26.1 works with pydicom 3.0.1
python3 << 'EOF'
import sys
try:
    import pydicom
    import highdicom as hd
    print(f"pydicom version: {pydicom.__version__}")
    print(f"highdicom version: {hd.__version__}")
    # Try creating a basic segmentation to verify API compatibility
    from pydicom.dataset import Dataset
    ds = Dataset()
    ds.Rows = 512
    ds.Columns = 512
    print("Basic import and attribute access successful")
    sys.exit(0)
except Exception as e:
    print(f"Compatibility issue detected: {e}")
    sys.exit(1)
EOF
```

Additionally, search for the latest highdicom documentation to confirm pydicom 3.x support:

highdicom 0.26.1 pydicom 3.0 compatibility requirements

monailabel/datastore/utils/convert.py (1)
611-613: Directory scan exception handling could be more specific.

The code catches all exceptions when validating DICOM files. Consider catching only expected exceptions and logging for debugging.

This issue was already flagged in a previous review. The past review comment recommended catching specific exceptions like `pydicom.errors.InvalidDicomError` and `OSError`, and adding logging.
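For illustration only (not the PR's function), a directory scan along those lines that keeps valid datasets and logs anything it skips:

```python
import logging
from pathlib import Path

import pydicom
from pydicom.errors import InvalidDicomError

logger = logging.getLogger(__name__)


def load_series(series_dir: str):
    datasets = []
    for path in sorted(Path(series_dir).glob("*")):
        try:
            datasets.append(pydicom.dcmread(str(path), stop_before_pixels=True))
        except (InvalidDicomError, OSError) as ex:
            logger.debug("Skipping %s: %s", path, ex)
    return datasets
```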
🧹 Nitpick comments (1)
monailabel/datastore/utils/convert.py (1)
381-381: Optional: Remove redundant exception reference from logger.exception() call.
`logger.exception()` automatically includes the exception traceback, so explicitly including `{ex}` in the message is redundant.

```diff
-    logger.exception(f"Custom tag {keyword} was not written, due to {ex}")
+    logger.exception(f"Custom tag {keyword} was not written")
```

Also applies to line 493.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
Cache: Disabled due to data retention organization setting
Knowledge base: Disabled due to data retention organization setting
📒 Files selected for processing (6)
- monailabel/config.py (1 hunks)
- monailabel/datastore/utils/convert.py (5 hunks)
- monailabel/endpoints/datastore.py (2 hunks)
- requirements.txt (2 hunks)
- setup.cfg (1 hunks)
- tests/unit/datastore/test_convert.py (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (3)
- requirements.txt
- monailabel/config.py
- monailabel/endpoints/datastore.py
🧰 Additional context used
🧬 Code graph analysis (1)
tests/unit/datastore/test_convert.py (1)
monailabel/datastore/utils/convert.py (4)
binary_to_image (74-91), dicom_seg_to_itk_image (585-669), dicom_to_nifti (44-71), nifti_to_dicom_seg (497-535)
🪛 Ruff (0.14.8)
monailabel/datastore/utils/convert.py
263-263: Consider moving this statement to an else block
(TRY300)
381-381: Redundant exception object included in logging.exception call
(TRY401)
493-493: Redundant exception object included in logging.exception call
(TRY401)
524-524: Avoid specifying long messages outside the exception class
(TRY003)
611-613: try-except-continue detected, consider logging the exception
(S112)
611-611: Do not catch blind exception: Exception
(BLE001)
616-619: Avoid specifying long messages outside the exception class
(TRY003)
624-624: Avoid specifying long messages outside the exception class
(TRY003)
🔇 Additional comments (4)
monailabel/datastore/utils/convert.py (1)
29-35: LGTM: Clean optional dependency handling.
The try-except pattern with availability flag is appropriate for handling the optional highdicom dependency.
tests/unit/datastore/test_convert.py (3)
39-66: LGTM: Well-structured test utilities.
The utility methods `_get_test_paths()`, `_load_dicom_series()`, and `_create_label_file()` provide clean abstractions that improve test readability and maintainability.
67-89: LGTM: Proper resource cleanup in test utilities.
The `_extract_pixels_via_roundtrip()` method correctly uses try-finally to ensure temporary files are cleaned up, addressing concerns from previous reviews about test file leakage.
111-177: LGTM: Robust synthetic label generation with proper boundary checks.
The `_create_multi_segment_array()` method includes defensive `max(1, ...)` guards (lines 134, 135, 139, 147, 154, 160, 166, 172) to prevent zero-sized regions, addressing concerns from previous reviews about handling small test images.
4877a00 to 43e1e39 (Compare)
Actionable comments posted: 0
♻️ Duplicate comments (1)
monailabel/datastore/utils/convert.py (1)
94-142: Harden_extract_label_infoagainst malformed label_info entries.Past review feedback suggested validating entry types and handling ID coercion errors. The current implementation assumes each entry is a dict and that
label_idcan be safely converted toint. Malformed entries will cause crashes.Apply defensive checks:
if label_info: for entry in label_info: + if not isinstance(entry, dict): + logger.warning(f"Skipping non-dict label_info entry: {type(entry)}") + continue # Find the label ID from various possible field names label_id = entry.get("idx") or entry.get("labelID") or entry.get("label") if label_id is not None: - info_by_label_id[int(label_id)] = entry + try: + info_by_label_id[int(label_id)] = entry + except (TypeError, ValueError): + logger.warning(f"Skipping label_info with non-int label id: {label_id!r}") + continue has_explicit_ids = TrueBased on past review comment that remains unaddressed.
🧹 Nitpick comments (2)
monailabel/datastore/utils/convert.py (2)
367-382: Optional: Remove redundant exception object from `logger.exception()` call.

`logger.exception()` automatically includes exception info, so the `ex` parameter is redundant:

```diff
-    logger.exception(f"Custom tag {keyword} was not written, due to {ex}")
+    logger.exception(f"Custom tag {keyword} was not written")
```
489-494: Optional: Remove redundant exception object from `logger.exception()` call.

Similar to the earlier comment, `logger.exception()` includes exception info automatically:

```diff
-    logger.exception("ITK DICOM SEG conversion failed: %s", e)
+    logger.exception("ITK DICOM SEG conversion failed")
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
Cache: Disabled due to data retention organization setting
Knowledge base: Disabled due to data retention organization setting
📒 Files selected for processing (6)
- monailabel/config.py (1 hunks)
- monailabel/datastore/utils/convert.py (5 hunks)
- monailabel/endpoints/datastore.py (2 hunks)
- requirements.txt (2 hunks)
- setup.cfg (1 hunks)
- tests/unit/datastore/test_convert.py (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
- setup.cfg
🧰 Additional context used
🧬 Code graph analysis (2)
tests/unit/datastore/test_convert.py (1)
monailabel/datastore/utils/convert.py (4)
binary_to_image (74-91), dicom_seg_to_itk_image (585-670), dicom_to_nifti (44-71), nifti_to_dicom_seg (497-535)
monailabel/endpoints/datastore.py (1)
monailabel/datastore/local.py (1)
label(70-72)
🪛 Ruff (0.14.8)
monailabel/datastore/utils/convert.py
263-263: Consider moving this statement to an else block
(TRY300)
381-381: Redundant exception object included in logging.exception call
(TRY401)
493-493: Redundant exception object included in logging.exception call
(TRY401)
524-524: Avoid specifying long messages outside the exception class
(TRY003)
617-620: Avoid specifying long messages outside the exception class
(TRY003)
625-625: Avoid specifying long messages outside the exception class
(TRY003)
🔇 Additional comments (9)
requirements.txt (1)
27-30: LGTM! Dependency upgrades align with PR objectives.
The upgrade to pydicom 3.0.1 (HTJ2K support), addition of highdicom 0.26.1, and pynrrd 1.1.3 align with PR objectives. The documentation block for the optional dcmqi dependency is helpful for users who need the legacy ITK-based conversion path.
Also applies to: 55-58
monailabel/config.py (1)
21-21: LGTM! Defensive guard against None metadata.
Filtering out distributions with `None` metadata prevents potential `AttributeError` exceptions when calling `.get("Name")` on None. This is a sensible robustness improvement.

monailabel/endpoints/datastore.py (1)

136-139: LGTM! Improved error messages provide clearer diagnostics.
The updated error messages distinguish between missing files and directory paths, making debugging easier for API consumers.
Also applies to: 156-157
monailabel/datastore/utils/convert.py (3)
497-535: LGTM! Dispatcher correctly routes to highdicom or ITK implementations.
The function properly checks highdicom availability and dispatches based on the `use_itk` flag. The logic is clear and well-structured.
538-582: LGTM! Excellent error messaging for missing dcmqi tool.
The detailed error message with installation instructions and alternative suggestions provides excellent user experience. The trade-off of a longer error message (TRY003 lint) is worthwhile here for clarity.
585-670: LGTM! Robust DICOM SEG to ITK conversion with clear coordinate transformations.
The implementation correctly:
- Handles both file and directory inputs deterministically (sorted files)
- Uses specific exception types for DICOM validation
- Documents coordinate system transformations between highdicom (slice, row, column) and SimpleITK (x, y, z) ordering
- Provides helpful error messages for missing DICOM files
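A minimal sketch of the axis convention mentioned in the coordinate-transformation bullet above (placeholder values; not the PR's code): a `(slices, rows, columns)` NumPy volume maps onto SimpleITK's z-first array layout, while spacing and origin are specified in `(x, y, z)` order.

```python
import numpy as np
import SimpleITK as sitk

volume = np.zeros((10, 64, 64), dtype=np.uint8)  # (slices, rows, columns)

image = sitk.GetImageFromArray(volume)  # axis 0 is treated as z
image.SetSpacing((0.5, 0.5, 2.0))       # (x, y, z) spacing in mm
image.SetOrigin((0.0, 0.0, 0.0))

print(image.GetSize())  # (64, 64, 10), reported as (x, y, z)
```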
tests/unit/datastore/test_convert.py (3)
29-177: LGTM! Well-structured test utilities with robust bounds checking.
The helper methods provide excellent test infrastructure:

- `_create_multi_segment_array()` safely handles small images with `max(1, ...)` bounds enforcement
- `_extract_pixels()` uses cleanup in finally blocks
- `_validate_dicom_seg()` provides comprehensive metadata validation
- Clear separation of concerns makes tests readable and maintainable
178-437: LGTM! Comprehensive tests with proper cleanup and dynamic validation.
The tests correctly:

- Validate both highdicom and ITK implementations via helpers
- Compute expected voxel counts from actual test geometry (lines 392-422)
- Use `addCleanup()` and `try/finally` for reliable resource cleanup
- Test edge cases including non-sequential label remapping (1, 5, 10 → 1, 2, 3)
439-880: LGTM! Comprehensive test coverage validates all conversion scenarios.
The test suite thoroughly validates:
- Round-trip conversion accuracy (NIfTI → DICOM SEG → NIfTI) with 3% tolerance
- Edge cases: empty labels, missing metadata, invalid directories
- Large label value remapping (100, 200, 255 → 1, 2, 3)
- Custom DICOM tag propagation
- Required DICOM SEG metadata completeness
- Both highdicom and ITK implementations for symmetry
The tests are well-structured, use proper cleanup, and provide clear failure messages.
Major changes:
- Replace unmaintained pydicom-seg library with well-established highdicom
- Upgrade pydicom from 2.4.4 to 3.0.1, enabling HTJ2K support

Refactoring in monailabel/datastore/utils/convert.py:
- Split nifti_to_dicom_seg into dispatcher + two implementations:
  * _highdicom_nifti_to_dicom_seg: Highdicom-based conversion
  * _itk_nifti_to_dicom_seg: Dcmqi/itkimage2segimage conversion
- Rename itk_image_to_dicom_seg → _dcmqi_nifti_to_dicom_seg for consistency

Tests in tests/unit/datastore/test_convert.py:
- Test both highdicom and ITK implementations via helper methods
- Validate metadata preservation, geometry, pixel data, and segment remapping
- Tests for non-sequential label remapping (1,5,10 → 1,2,3)
- Round-trip conversion tests with 3% tolerance for both implementations
- Handle ITK-specific behavior (stores only non-empty slices)

Minor improvements:
- Update pynrrd from 1.0.0 to 1.1.3 (former caused problem with numpy 2.x)
- Add documentation for optional dcmqi dependency

Signed-off-by: Joaquin Anton Guirao <janton@nvidia.com>

[pre-commit.ci] auto fixes from pre-commit.com hooks (for more information, see https://pre-commit.ci)

Code review fixes
Signed-off-by: Joaquin Anton Guirao <janton@nvidia.com>

[pre-commit.ci] auto fixes from pre-commit.com hooks (for more information, see https://pre-commit.ci)

Lint fixes
Signed-off-by: Joaquin Anton Guirao <janton@nvidia.com>

More code review fixes
Signed-off-by: Joaquin Anton Guirao <janton@nvidia.com>

More fixes
Signed-off-by: Joaquin Anton Guirao <janton@nvidia.com>

[pre-commit.ci] auto fixes from pre-commit.com hooks (for more information, see https://pre-commit.ci)

Fixes
Signed-off-by: Joaquin Anton Guirao <janton@nvidia.com>

[pre-commit.ci] auto fixes from pre-commit.com hooks (for more information, see https://pre-commit.ci)

Signed-off-by: Joaquin Anton Guirao <janton@nvidia.com>
Force-pushed from 43e1e39 to 4e5f4a1
@SachidanandAlle Could you please review this one?

There are some checks failing; please fix those. Otherwise the code changes look good to merge.

And if the fix needs to discontinue support for Python 3.9, we can do so; in that case keep 3.10 as the minimum version needed (main branch and all new versions from here).
Actionable comments posted: 0
♻️ Duplicate comments (1)
monailabel/datastore/utils/convert.py (1)
122-128: Add validation for label_info entry types and ID coercion. Per the past review comment, the function assumes each entry is a dict and that label_id can be coerced to int. A malformed entry will crash the conversion. Add type checks and exception handling:
🔎 Recommended validation
```diff
 if label_info:
     for entry in label_info:
+        if not isinstance(entry, dict):
+            logger.warning(f"Skipping non-dict label_info entry: {type(entry)}")
+            continue
         # Find the label ID from various possible field names
         label_id = entry.get("idx") or entry.get("labelID") or entry.get("label")
         if label_id is not None:
-            info_by_label_id[int(label_id)] = entry
+            try:
+                info_by_label_id[int(label_id)] = entry
+            except (TypeError, ValueError):
+                logger.warning(f"Skipping label_info with non-int label id: {label_id!r}")
+                continue
             has_explicit_ids = True
```
Based on learnings, this addresses a previously identified gap in input validation.
🧹 Nitpick comments (3)
monailabel/datastore/utils/convert.py (2)
367-382: Remove unused exception variable. Line 380: The exception variable ex is caught but never used. Per Ruff F841 and the refined exception handling, remove the unused binding:
```diff
- except (AttributeError, KeyError, TypeError, pydicom.errors.InvalidDicomError) as ex:
+ except (AttributeError, KeyError, TypeError, pydicom.errors.InvalidDicomError):
      logger.exception(f"Custom tag {keyword} was not written")
      continue
```
490-494: Remove unused exception variable. Line 492: The exception variable e is caught but never used. The logger.exception() call automatically includes the exception info, so the variable binding is unnecessary:
```diff
- except (RuntimeError, OSError, ValueError) as e:
+ except (RuntimeError, OSError, ValueError):
      logger.exception("ITK DICOM SEG conversion failed")
      return ""
```
tests/unit/datastore/test_convert.py (1)
392-422: Consider documenting the 80% tolerance rationale. The voxel count validation correctly derives expected values from test geometry and applies an 80% tolerance. Consider adding a brief comment explaining this tolerance accounts for DICOM compression, format conversion, and ITK/dcmqi behavior (storing only non-empty slices).
📝 Suggested documentation addition
```diff
 # Compute expected voxels per segment (box_size * box_size * z_slices)
 expected_voxels_per_segment = [box_size * box_size * z for z in z_slices_per_segment]
-# Use 80% tolerance to account for compression and round-trip losses
+# Use 80% tolerance to account for:
+# - DICOM compression artifacts
+# - Format conversion losses (NIfTI → DICOM SEG → NIfTI)
+# - ITK/dcmqi behavior (stores only non-empty slices)
 min_expected_per_segment = [int(expected * 0.8) for expected in expected_voxels_per_segment]
```
📜 Review details
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Cache: Disabled due to data retention organization setting
Knowledge base: Disabled due to data retention organization setting
📒 Files selected for processing (10)
- .github/workflows/docker-plugin.yml (1 hunks)
- .github/workflows/release.yml (1 hunks)
- docs/source/installation.rst (1 hunks)
- monailabel/config.py (1 hunks)
- monailabel/datastore/utils/convert.py (5 hunks)
- monailabel/endpoints/datastore.py (2 hunks)
- plugins/dsa/Dockerfile (1 hunks)
- requirements.txt (2 hunks)
- setup.cfg (2 hunks)
- tests/unit/datastore/test_convert.py (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
- setup.cfg
🧰 Additional context used
🧬 Code graph analysis (1)
monailabel/endpoints/datastore.py (1)
monailabel/datastore/local.py (1)
label (70-72)
🪛 Ruff (0.14.8)
monailabel/datastore/utils/convert.py
263-263: Consider moving this statement to an else block
(TRY300)
380-380: Local variable ex is assigned to but never used
Remove assignment to unused variable ex
(F841)
492-492: Local variable e is assigned to but never used
Remove assignment to unused variable e
(F841)
524-524: Avoid specifying long messages outside the exception class
(TRY003)
617-620: Avoid specifying long messages outside the exception class
(TRY003)
625-625: Avoid specifying long messages outside the exception class
(TRY003)
🔇 Additional comments (23)
plugins/dsa/Dockerfile (1)
12-12: LGTM: Python 3.10 base image update. The base image update from Python 3.9 to 3.10 aligns with the PR-wide Python version requirement changes (setup.cfg, workflows, documentation).
requirements.txt (2)
55-58: LGTM: Clear documentation for optional dcmqi dependency. The comment block clearly documents the optional dcmqi dependency, installation instructions, and reference link. This helps users understand the legacy DICOM SEG conversion option.
27-30: The codebase is already compatible with pydicom 3.0.1. All pydicom usage patterns found use modern APIs: dcmread() from pydicom.filereader and the .save_as() method for writing, both of which are supported in 3.0.1. No deprecated APIs like read_file() or write_file() are present. Standard regression testing with the new dependency is recommended as a general best practice for any major version upgrade.
.github/workflows/docker-plugin.yml (1)
29-32: LGTM: Workflow Python version update. The workflow correctly updates both the step name and python-version input to 3.10, consistent with the PR-wide Python version requirement.
monailabel/config.py (1)
20-21: LGTM: Defensive guard against None metadata. Adding the x.metadata is not None filter prevents a potential AttributeError when iterating distributions. This is a solid defensive programming practice.
.github/workflows/release.yml (1)
38-41: LGTM: Release workflow Python version update. The release workflow correctly updates to Python 3.10, maintaining consistency across CI/CD pipelines.
docs/source/installation.rst (1)
55-55: LGTM: Documentation reflects Python 3.10+ requirement. The installation documentation correctly updates the prerequisite Python version to match the new minimum requirement.
monailabel/endpoints/datastore.py (2)
136-139: LGTM: Clearer error message for missing/invalid images. The enhanced error message explicitly mentions both missing files and directories, providing better diagnostic information to users.
155-157: LGTM: Clearer error message for missing/invalid labels. Consistent with the image error message enhancement, this provides better diagnostic information for label retrieval failures.
monailabel/datastore/utils/convert.py (7)
29-34: LGTM: Clean optional dependency pattern for highdicom. The try/except pattern with the HIGHDICOM_AVAILABLE flag properly handles the optional highdicom dependency, allowing graceful degradation when not installed.
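The guard being praised typically takes roughly this shape; this is only a sketch of the pattern, not the module's exact import block:

```python
import logging

logger = logging.getLogger(__name__)

try:
    import highdicom as hd  # optional dependency

    HIGHDICOM_AVAILABLE = True
except ImportError:
    hd = None
    HIGHDICOM_AVAILABLE = False
    logger.debug("highdicom is not installed; the ITK/dcmqi path will be used instead")
```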
324-338: LGTM: Robust timezone offset formatting. The format_timezone_offset helper correctly computes the timezone offset in ±HHMM format, addressing the fragility of string slicing noted in past reviews.
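A helper with that contract might look like the following sketch; the signature and the handling of naive datetimes are assumptions, not the module's actual code:

```python
from datetime import datetime, timezone


def format_timezone_offset(dt: datetime) -> str:
    """Return the UTC offset of ``dt`` as a DICOM-style ±HHMM string."""
    offset = dt.utcoffset()
    if offset is None:  # naive datetime: fall back to a zero offset
        return "+0000"
    total_minutes = int(offset.total_seconds() // 60)
    sign = "+" if total_minutes >= 0 else "-"
    hours, minutes = divmod(abs(total_minutes), 60)
    return f"{sign}{hours:02d}{minutes:02d}"


print(format_timezone_offset(datetime.now(timezone.utc)))  # +0000
```

Computing hours and minutes with divmod sidesteps the string-slicing fragility mentioned above.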
517-535: LGTM: Clean dispatcher implementation. The dispatcher correctly routes between highdicom and ITK implementations based on configuration and availability, with appropriate error checking and logging.
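In outline, that routing reduces to something like the sketch below; the helper names and placeholder return values are illustrative, not the real implementation:

```python
import logging

logger = logging.getLogger(__name__)
HIGHDICOM_AVAILABLE = True  # assumed to be set by the optional-import guard


def _highdicom_impl(series_dir, label, label_info, **kwargs):
    return "seg_highdicom.dcm"  # placeholder for the highdicom-based conversion


def _itk_impl(series_dir, label, label_info, **kwargs):
    return "seg_itk.dcm"  # placeholder for the dcmqi/ITK-based conversion


def nifti_to_dicom_seg_dispatch(series_dir, label, label_info, use_itk=False, **kwargs):
    """Route to the highdicom backend unless ITK is requested or highdicom is missing."""
    if not use_itk and HIGHDICOM_AVAILABLE:
        logger.info("Using highdicom for DICOM SEG conversion")
        return _highdicom_impl(series_dir, label, label_info, **kwargs)
    if not HIGHDICOM_AVAILABLE:
        logger.warning("highdicom not available; falling back to the ITK/dcmqi path")
    return _itk_impl(series_dir, label, label_info, **kwargs)


print(nifti_to_dicom_seg_dispatch("series/", "label.nii.gz", [], use_itk=True))  # seg_itk.dcm
```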
549-563: LGTM: Excellent user-facing error message for missing dcmqi. The detailed error message provides clear installation instructions and alternatives. While Ruff TRY003 flags long messages, this comprehensive guidance significantly improves user experience when the optional dependency is missing.
596-620: LGTM: Robust directory handling with deterministic file selection. The function correctly handles both file and directory inputs, sorts files for determinism, filters for valid DICOM files with targeted exception handling, and provides clear error messages when no valid files are found.
Based on learnings, this addresses previously identified issues with non-deterministic file selection and broad exception handling.
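A minimal sketch of that kind of deterministic, validated selection (assumed helper name; the real function's signature and error handling may differ):

```python
import os

import pydicom


def first_valid_dicom(path: str) -> pydicom.dataset.FileDataset:
    """Return the first readable DICOM dataset from a file or directory.

    Directory entries are sorted so the choice is deterministic across runs.
    """
    candidates = [path]
    if os.path.isdir(path):
        candidates = sorted(os.path.join(path, name) for name in os.listdir(path))
    for candidate in candidates:
        try:
            # stop_before_pixels keeps the validation cheap
            return pydicom.dcmread(candidate, stop_before_pixels=True)
        except (pydicom.errors.InvalidDicomError, IsADirectoryError, PermissionError):
            continue  # skip non-DICOM or unreadable entries
    raise ValueError(f"No valid DICOM files found under {path}")
```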
637-661: LGTM: Correct coordinate system transformations. The spacing and direction transformations correctly map between highdicom's (slice, row, column) ordering and SimpleITK's (x, y, z) = (column, row, slice) ordering. The column reordering logic is well-documented and correct.
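As a toy demonstration of the ordering convention only (not the converter's code): SimpleITK consumes arrays in (z, y, x) order while reporting size and spacing as (x, y, z), so a (slices, rows, columns) frame stack maps onto it like this:

```python
import numpy as np
import SimpleITK as sitk

# Toy frame stack in highdicom/pixel-array order: (slices, rows, columns)
frames = np.zeros((4, 32, 32), dtype=np.uint8)
slice_spacing, row_spacing, col_spacing = 2.0, 0.5, 0.5

# GetImageFromArray interprets the array as (z, y, x); SetSpacing expects
# (x, y, z) = (column, row, slice), hence the reordering below.
image = sitk.GetImageFromArray(frames)
image.SetSpacing((col_spacing, row_spacing, slice_spacing))
print(image.GetSize(), image.GetSpacing())  # (32, 32, 4) (0.5, 0.5, 2.0)
```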
245-272: LGTM: Robust spatial sorting with proper DICOM geometry. The spatial sorting correctly uses ImageOrientationPatient and ImagePositionPatient to compute slice ordering via plane normal projection, addressing the limitations of filename-based sorting noted in past reviews. The multi-level fallback strategy ensures graceful degradation.
Note: Ruff TRY300 suggests moving the final return 0.0 to an else block, but the current structure is clear and the optimization is negligible.
tests/unit/datastore/test_convert.py (7)
17-26: LGTM! Clean imports aligned with PR objectives. The import structure properly reflects the PR's dependency changes (pydicom 3.0.1 replacing pydicom-seg) and imports all necessary test functions from the refactored conversion module.
39-177: Well-designed test utilities with robust bounds checking. The helper methods properly handle edge cases and resource cleanup:
- _extract_pixels uses try/finally for cleanup (lines 72-82)
- _create_multi_segment_array prevents zero-sized regions with max(1, ...) guards and proper bounds clamping
- All helpers have clear documentation and appropriate error handling
227-232: Excellent resource cleanup patterns throughout. The test suite consistently uses self.addCleanup() and try/finally blocks to ensure temporary files are removed even when assertions fail. This prevents test artifacts from accumulating and ensures clean CI environments.
Also applies to: 283-283, 604-610, 645-645
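A stripped-down version of that cleanup pattern, purely for illustration:

```python
import os
import tempfile
import unittest


class CleanupExample(unittest.TestCase):
    def test_temp_file_is_removed(self):
        fd, path = tempfile.mkstemp(suffix=".nii.gz")
        os.close(fd)
        # Registered cleanups run even if the assertion below fails,
        # so no temporary file is left behind on a red test.
        self.addCleanup(lambda: os.path.exists(path) and os.remove(path))
        self.assertTrue(os.path.exists(path))


if __name__ == "__main__":
    unittest.main()
```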
467-479: Good handling of implementation-specific behavior. The test correctly accommodates the difference between ITK/dcmqi (which stores only non-empty slices) and highdicom (which preserves full dimensions). The conditional assertions based on the use_itk flag properly validate each implementation's guarantees.
269-330: Comprehensive test coverage with consistent dual-implementation testing. The test suite excellently validates both highdicom and ITK/dcmqi implementations through the _test_*_impl(use_itk) pattern (a minimal sketch of the pattern follows this comment). Coverage includes:
- Core conversions and round-trips
- Metadata preservation and defaults
- Edge cases (empty labels, invalid inputs, large/non-sequential values)
- Custom DICOM tags and multi-segment scenarios
This design ensures both conversion paths are tested identically and consistently.
Also applies to: 332-437, 439-521, 523-549, 551-593, 653-736, 738-814
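The dual-backend pattern referred to above boils down to a shared `_test_*_impl(use_itk)` helper invoked from two thin test methods; the sketch below uses a stand-in `convert()` function rather than the real conversion API:

```python
import unittest


def convert(use_itk: bool) -> str:
    """Stand-in for nifti_to_dicom_seg(..., use_itk=use_itk)."""
    return "itk.dcm" if use_itk else "highdicom.dcm"


class TestBothBackends(unittest.TestCase):
    def _test_roundtrip_impl(self, use_itk: bool) -> None:
        # Shared assertions exercised identically against both backends.
        output = convert(use_itk=use_itk)
        self.assertTrue(output.endswith(".dcm"))

    def test_roundtrip_highdicom(self):
        self._test_roundtrip_impl(use_itk=False)

    def test_roundtrip_itk(self):
        self._test_roundtrip_impl(use_itk=True)


if __name__ == "__main__":
    unittest.main()
```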
816-832: Proper error case handling without blind exception catching. The test correctly validates that nifti_to_dicom_seg returns an empty string for invalid inputs rather than raising exceptions. The try/finally ensures cleanup even if the assertion fails. This addresses the previous review concern about blind exception catching.
834-880: Thorough DICOM SEG metadata validation. The metadata completeness test comprehensively validates required DICOM attributes at both document and segment levels, ensuring compliance with the DICOM Segmentation Storage standard (including the Labelmap variant). The test properly checks SOP Class UIDs, modality, and all required segment attributes.
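Checking such attributes with pydicom looks roughly like the sketch below; the specific assertions (and accepting only the classic Segmentation Storage SOP Class) are simplifications, not the test suite's exact checks:

```python
import pydicom


def validate_seg_metadata(path: str) -> None:
    """Assert a few attributes required of a DICOM Segmentation object."""
    ds = pydicom.dcmread(path)
    assert ds.Modality == "SEG"
    # 1.2.840.10008.5.1.4.1.1.66.4 is the classic Segmentation Storage SOP Class;
    # the newer Labelmap variant uses its own SOP Class UID.
    assert ds.SOPClassUID == "1.2.840.10008.5.1.4.1.1.66.4"
    for segment in ds.SegmentSequence:
        assert segment.SegmentNumber >= 1
        assert segment.SegmentLabel
        assert segment.SegmentedPropertyCategoryCodeSequence
```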
Actionable comments posted: 1
📜 Review details
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Cache: Disabled due to data retention organization setting
Knowledge base: Disabled due to data retention organization setting
📒 Files selected for processing (4)
- monailabel/interfaces/app.py
- monailabel/interfaces/tasks/infer_v2.py
- monailabel/interfaces/utils/app.py
- monailabel/tasks/infer/bundle.py
💤 Files with no reviewable changes (1)
- monailabel/interfaces/utils/app.py
✅ Files skipped from review due to trivial changes (1)
- monailabel/tasks/infer/bundle.py
🔇 Additional comments (1)
monailabel/interfaces/tasks/infer_v2.py (1)
35-42: Remove type annotations from InferType enum members per Python typing standards. Members defined within an enum class should not include explicit type annotations. Removing the : str annotations from all eight enum members (lines 35-42) correctly applies this best practice. Since InferType already inherits from str, the annotations are redundant, and type annotations on enum members are incorrect and misleading to readers of the code. This change has no functional impact: enum values and behavior remain identical.
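For reference, a str-backed Enum declared without per-member annotations looks like this; the class and member names here are generic examples, not the project's InferType definition:

```python
from enum import Enum


class TaskType(str, Enum):
    # No per-member annotations: the str base class already fixes the value type.
    SEGMENTATION = "segmentation"
    CLASSIFICATION = "classification"
    DEEPGROW = "deepgrow"


print(TaskType.SEGMENTATION == "segmentation")  # True, thanks to the str mixin
```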
Force-pushed from a139a48 to b2a17c6
Actionable comments posted: 1
📜 Review details
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Cache: Disabled due to data retention organization setting
Knowledge base: Disabled due to data retention organization setting
📒 Files selected for processing (9)
- .github/workflows/docker-plugin.yml
- .github/workflows/release.yml
- docs/source/installation.rst
- monailabel/interfaces/app.py
- monailabel/interfaces/tasks/infer_v2.py
- monailabel/interfaces/utils/app.py
- monailabel/tasks/infer/bundle.py
- plugins/dsa/Dockerfile
- setup.cfg
💤 Files with no reviewable changes (1)
- monailabel/interfaces/utils/app.py
🚧 Files skipped from review as they are similar to previous changes (5)
- docs/source/installation.rst
- .github/workflows/release.yml
- plugins/dsa/Dockerfile
- monailabel/tasks/infer/bundle.py
- .github/workflows/docker-plugin.yml
🧰 Additional context used
🧬 Code graph analysis (1)
monailabel/interfaces/app.py (1)
monailabel/interfaces/exception.py (2)
MONAILabelException (46-55), MONAILabelError (15-43)
🪛 Ruff (0.14.8)
monailabel/interfaces/app.py
105-105: Expected a statement
(invalid-syntax)
105-105: Expected a statement
(invalid-syntax)
105-105: Expected a statement
(invalid-syntax)
105-105: Expected a statement
(invalid-syntax)
105-105: Simple statements must be separated by newlines or semicolons
(invalid-syntax)
106-106: Unexpected indentation
(invalid-syntax)
108-108: Expected an expression
(invalid-syntax)
108-108: Expected an expression
(invalid-syntax)
108-108: Expected an expression
(invalid-syntax)
113-113: Expected an expression
(invalid-syntax)
113-113: Expected an expression
(invalid-syntax)
113-113: Expected an expression
(invalid-syntax)
113-113: Expected ), found name
(invalid-syntax)
114-114: Expected a statement
(invalid-syntax)
331-331: Compound statements are not allowed on the same line as simple statements
(invalid-syntax)
333-333: Simple statements must be separated by newlines or semicolons
(invalid-syntax)
🔇 Additional comments (5)
monailabel/interfaces/tasks/infer_v2.py (1)
35-42: LGTM! Follows enum best practices. Removing the redundant type annotations is correct. Since InferType inherits from str, the : str annotations on each member were unnecessary; the type is already established by the class declaration.
setup.cfg (3)
30-30: Python 3.10+ requirement is appropriately aligned with dependency upgrades. The change from python_requires >= 3.9 to >= 3.10 aligns with the updated highdicom requirements. This should be applied consistently across CI/CD configurations per the PR objectives.
56-56: LGTM – pynrrd upgrade for numpy 2.x compatibility. The upgrade to pynrrd>=1.1.3 correctly addresses numpy 2.x compatibility, as documented in the PR objectives.
53-54: Upgrade highdicom to version 0.27.0 for pydicom 3.x compatibility. Highdicom 0.27.0 depends upon pydicom > 3.0.1, while the current constraint highdicom>=0.26.1 may not guarantee compatibility with pydicom>=3.0.1. Update to highdicom>=0.27.0 to ensure compatibility:
```diff
- highdicom>=0.26.1
+ highdicom>=0.27.0
```
monailabel/interfaces/app.py (1)
331-336: LGTM - Clean exception handling for disabled batch inference. The helper method correctly raises MONAILabelException with an appropriate error type and a clear, actionable message. This is part of the solution that ensures _batch_infer can always be safely invoked as a callable (when using the correct merge conflict resolution).
Force-pushed from b2a17c6 to 0e19c6a
Signed-off-by: Joaquin Anton Guirao <janton@nvidia.com>

linter fixes
Signed-off-by: Joaquin Anton Guirao <janton@nvidia.com>

[pre-commit.ci] auto fixes from pre-commit.com hooks; for more information, see https://pre-commit.ci
Signed-off-by: Joaquin Anton Guirao <janton@nvidia.com>
Force-pushed from 0e19c6a to c62d40a
Actionable comments posted: 0
🧹 Nitpick comments (1)
monailabel/interfaces/app.py (1)
333-356: LGTM! Safe delegation pattern. The batch_infer method now safely delegates to self._batch_infer, which is guaranteed to be a callable. When batch inference is disabled, users receive a clear exception message.
Optional: Consider documenting the disabled state
The docstring could mention that batch inference may be disabled via configuration. However, the exception message is already clear when this occurs, so this is a minor documentation enhancement.
```diff
 def batch_infer(self, request, datastore=None):
     """
     Run batch inference for an existing pre-trained model.
+
+    Note: Batch inference must be enabled via MONAI_LABEL_TASKS_BATCH_INFER setting.

     Args:
         request: JSON object which contains `model`, `params` and `device`
```
📜 Review details
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Cache: Disabled due to data retention organization setting
Knowledge base: Disabled due to data retention organization setting
📒 Files selected for processing (9)
- .github/workflows/docker-plugin.yml
- .github/workflows/release.yml
- docs/source/installation.rst
- monailabel/interfaces/app.py
- monailabel/interfaces/tasks/infer_v2.py
- monailabel/interfaces/utils/app.py
- monailabel/tasks/infer/bundle.py
- plugins/dsa/Dockerfile
- setup.cfg
💤 Files with no reviewable changes (1)
- monailabel/interfaces/utils/app.py
✅ Files skipped from review due to trivial changes (1)
- .github/workflows/docker-plugin.yml
🚧 Files skipped from review as they are similar to previous changes (2)
- docs/source/installation.rst
- setup.cfg
🧰 Additional context used
🧬 Code graph analysis (1)
monailabel/interfaces/app.py (1)
monailabel/interfaces/exception.py (2)
MONAILabelException (46-55), MONAILabelError (15-43)
🔇 Additional comments (6)
monailabel/interfaces/tasks/infer_v2.py (1)
35-42: LGTM! Clean simplification of enum declarations. Removing the redundant type annotations is appropriate since InferType already inherits from str (line 20), making the explicit annotations unnecessary. This plain assignment style is more idiomatic for typed Enums in Python.
.github/workflows/release.yml (1)
38-41: Verify Python version consistency across all workflows before approval. The Python 3.10 update in release.yml (lines 38-41) correctly updates both the step name and the python-version parameter. However, approval should be conditional on verifying that all GitHub Actions workflows have been consistently updated to Python 3.10 and that no Python 3.9 references remain.
monailabel/interfaces/app.py (2)
105-109: LGTM! Type-safety issue resolved. The initialization now ensures _batch_infer is always a Callable, addressing the previous critical review concern. When batch inference is disabled, the lambda provides a clear error message rather than causing a TypeError.
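The always-callable initialization described here follows this general shape; the class, flag, and method names are illustrative assumptions (and a method stub stands in for the lambda), not the application's actual code:

```python
from typing import Any, Callable


class ExampleApp:
    def __init__(self, batch_infer_enabled: bool):
        # _batch_infer is always a Callable: either the real runner or a stub
        # that raises a clear error, so callers never hit a bare TypeError.
        self._batch_infer: Callable[..., Any] = (
            self._run_batch_infer if batch_infer_enabled else self._batch_infer_disabled
        )

    def _run_batch_infer(self, request):
        return {"status": "ok", "request": request}

    def _batch_infer_disabled(self, *args, **kwargs):
        raise RuntimeError("Batch inference is disabled; enable it in the app settings")


app = ExampleApp(batch_infer_enabled=False)
try:
    app._batch_infer({})
except RuntimeError as err:
    print(err)
```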
326-331: LGTM! Clear error handling. The dedicated error-raising method provides a descriptive message and uses the appropriate exception type (INVALID_INPUT). This follows good defensive programming practices.
210-210: LGTM! The type annotation for the temporary accumulator variable res is accurate and improves code readability.
260-260: Type annotation improves clarity (same pattern as line 197). The type hint for post is consistent with the pre annotation and improves code quality. The same verification concern applies here: ensure line 264's reassignment is type-safe when the parsed content is not a Compose instance.
Major changes:
Refactoring in monailabel/datastore/utils/convert.py:
Tests in tests/unit/datastore/test_convert.py:
Minor improvements:
Summary by CodeRabbit
Bug Fixes
New Features
Behavior
Tests
Chores / Docs