Test files repository and dicom_parser's GitHub address

DICOM testing dataset resources, already in Git and Datalad:


+1 on storing test data in a separate repo and pulling it in via a git submodule. I did this on my last project and really like how much smaller the code repo becomes, especially when it is a submodule of a larger project that is just building it (and not running tests).

Can I suggest you consider validation datasets that do not require a lot of disk space, provide validated conversions, and are designed to illustrate corner cases that could confuse a converter? Meeting these specifications allows automated regression testing that can be applied to each commit.

For example, dcm2niix includes several dcm_qa repositories as submodules and uses Travis to automatically test each commit to detect any changes in conversion results.

I may certainly be biased, but I would think it would be great if the community could use and extend the repositories currently used with dcm2niix. These simple repositories can help assure conformance across tools. The current datasets have been specifically designed to test corner cases:

  • dcm_qa Siemens V*: spatial orientation, total readout time, phase encoding polarity, mosaics, common compressed transfer syntaxes (JPEGLossless:Non-hierarchical-1stOrderPrediction and JPEG2000LosslessOnly).
  • dcm_qa_nih Siemens V* and GE data.
  • dcm_qa_uih UIH data.
  • dcm_qa_philips Philips data, diffusion bvec/bval.
  • dcm_qa_enh enhanced DICOM from Canon, Philips, and Siemens X*.
  • dcm_qa_fmap GE, Philips, and Siemens field maps. NB: the Siemens field maps duplicate instance numbers.
  • dcm_qa_stc EPI slice timing for GE and Siemens V*, X*.
  • dcm_qa_asl Siemens V* arterial spin labeling.
  • dcm_qa_agfa DICOM mangling of data from any vendor touched by an AGFA or dcm4che PACS.
  • dcm_qa_canon data from Canon.
  • dcm_qa_toshiba data from Toshiba (now Canon).
  • dcm_qa_ge GE slice timing across software versions and with hyperband (aka multi-band).
  • dcm_qa_ct computed tomography from GE and Philips, showcasing modality-specific features like gantry tilt.
  • dcm_qa_me multi-echo data from Siemens V*.

You may also want to look at exemplars of diffusion, archival and unusual sequences. These are generally much larger datasets than the dcm_qa test modules, but do exhibit unique properties.

You may also want to look at the old rosetta bit project that provides data from several vendors.

Hi Chris - thanks for posting - I’m sorry I hadn’t yet got round to doing the work to understand what you have done. That’s my fault, because I knew that you had already done careful and well-organized work on this.

Here are some notes to myself, but also to check my understanding. Looking quickly, each of these repositories has:

  • An In folder containing the input DICOM files
  • A Ref folder containing the files as converted by dcm2nii, in BIDS format.
  • A batch.sh file that runs the dcm2nii conversion, outputs to an Out directory, and then compares the results to the Ref directory.
  • Maybe some relevant documentation for the data.

Is that right? What is the best way for us to do things which will also be useful for you?

@matthew.brett your summary is correct. You could easily create your own .sh files to test your conversion. I would be happy to move these from my personal GitHub account to some more public account. You can also create new dcm_qa_* repositories to demonstrate new features.

I do think a crucial component of the dcm_qa_* repositories is the README file that describes the corner case exhibited by the dataset. For example, dcm_qa_ge provides not only example datasets but also a minimal C program demonstrating how to calculate the slice times independently of dcm2niix (which is particularly unintuitive for some of the multi-band patterns).

The C program is a very good feature. We were discussing this in our recent Zoom call, that we want to find some cross-language way of expressing the custom rules for DICOM processing - so that they can be used and maintained by as wide a community as possible. Have you got any ideas how best to do that?

For the repositories - I don’t think it matters where they are really, as long as there’s some master index that knows where they are.

The problem I see is that the format of the repository Ref directories instantiates the dcm2nii conversion (very reasonably), but that may be less useful for other software making different choices in the conversion. I mean, all other software will then be implicitly comparing itself against dcm2nii. So there’s a sense in which the repository format is dcm2nii-specific.

@matthew.brett to clarify, with the dcm_qa Ref all tools are implicitly comparing themselves against dicm2nii, not dcm2niix. Specifically, for 3D acquisitions there are 48 lossless combinations for storing the image data. There are 16 combinations for a 2D EPI acquisition, as most processing tools require that the acquired slices match the first two dimensions (i, j) on disk, with the slices stacked along the 3rd (k) dimension.

Once upon a time, dcm2niix would convert DICOM data with minimal manipulation for fastest speed, therefore most 2D axial DICOMs would be saved as LPS, and a 3D sequence saved as sagittal DICOMs might be saved as PSR. In contrast, dicm2nii would flip the row order when converting DICOM to NIfTI (as DICOM space has the first row at the top of the screen, while the NIfTI default has it at the bottom), so it would save the 2D axial EPI as LAS. In addition, dicm2nii will losslessly rotate 3D acquisitions to match the NIfTI identity matrix (RAS). Since these choices are arbitrary, the default behavior of dcm2niix was adapted to match dicm2nii, allowing simple scripts to identify conversion errors between these tools. You can elicit the original (faster) behavior of dcm2niix with a couple switches: dcm2niix -x i -y n /path/to/DICOMs. The point is that the choice is arbitrary, but as a community our life is simpler if we come to a consensus regarding how to handle these.
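To make the row-flip idea concrete, here is a minimal NumPy sketch (my own illustration, not code from either converter; the function name is hypothetical) of a lossless flip: the voxel data are mirrored along the j axis and the affine is updated so that every voxel keeps the same world-space coordinate. dicm2nii’s lossless rotation to RAS is the same idea, applied to axis permutations as well as flips.

```python
import numpy as np

def flip_rows(data: np.ndarray, affine: np.ndarray) -> tuple[np.ndarray, np.ndarray]:
    """Losslessly flip the row (j) axis of a volume, updating the affine
    so that every voxel keeps the same world-space coordinate."""
    flipped = data[:, ::-1, :].copy()
    n_rows = data.shape[1]
    new_affine = affine.copy()
    # new_j = (n_rows - 1) - old_j: shift the origin along the old j column...
    new_affine[:3, 3] += new_affine[:3, 1] * (n_rows - 1)
    # ...then negate the j column of the rotation/scaling part
    new_affine[:3, 1] = -new_affine[:3, 1]
    return flipped, new_affine
```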

With respect to the JSON files, some of the fields are defined by BIDS while other fields are unique to dcm2niix (metadata useful for some pipelines). These are defined in a table which includes a Python script for extracting the fields defined by BIDS.

It is worth noting that most floating point calculations in dcm2niix are single-precision, while Matlab (dicm2nii) and Python use double precision by default. Likewise, different tools may use their own precision and algorithms for converting binary floating point data to ASCII values. As a result, small rounding differences may appear in some values, so one might want to allow a tolerance when comparing results from one tool to another.
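A sketch of such a tolerant comparison (the function name and thresholds are my own suggestion, not taken from any existing test script):

```python
import numpy as np

def outputs_match(a: np.ndarray, b: np.ndarray,
                  rtol: float = 1e-5, atol: float = 1e-6) -> bool:
    """Compare two converters' outputs with a tolerance wide enough to
    absorb single- versus double-precision rounding differences."""
    return bool(np.allclose(a, b, rtol=rtol, atol=atol))

# The same values routed through float32 differ slightly from float64,
# so exact equality fails while the tolerant comparison succeeds.
x64 = np.array([0.1, 0.2, 0.3], dtype=np.float64)
x32 = x64.astype(np.float32).astype(np.float64)
```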

My own preference for documenting features would be to recommend Python with type hints and NumPy for matrix calculations. The problem with pseudocode is that it cannot be tested; providing operational code allows people to replicate results. Python is popular in our field and, unlike Matlab, does not require proprietary tools. I do think type hints really help people who are unfamiliar with Python, and making the types explicit clarifies the values involved: for example, on Siemens the in-plane acceleration factor must be an integer, while Philips allows real numbers. Using int or float in code lets the developer convey the expectations of the algorithm. Python also has nice explicit rules, like arrays indexed from 0, row-major order, etc. While these conventions differ between languages, choosing one language makes sense.
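A small illustration of the point about type hints (the function names are hypothetical; the vendor behavior is as described above, where Siemens factors are whole numbers and Philips permits real values):

```python
def siemens_acceleration(factor: int) -> int:
    """Siemens in-plane (iPAT) acceleration factors are whole numbers,
    so the int annotation documents that expectation for the reader."""
    if factor < 1:
        raise ValueError("acceleration factor must be >= 1")
    return factor

def philips_acceleration(factor: float) -> float:
    """Philips (SENSE) permits non-integer factors such as 2.5."""
    if factor < 1.0:
        raise ValueError("acceleration factor must be >= 1")
    return factor
```

A type checker such as mypy would flag `siemens_acceleration(2.5)`, making the vendor-specific expectation explicit without any prose.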

The only reason I wrote the dcm_qa_ge example in C is that it was the basis for the dcm2niix code. Anyone who wishes to can make a pull request with a Python translation. For full disclosure, Brice Fernandez (an MRI engineer at GE) wrote the original GE slice timing code in Matlab. Since he works for the manufacturer, he could confirm that the change in behavior for slice timing patterns was first launched with GE Software 27.3. However, he was unsure if he was allowed to release his code as open source. So I wrote the C code using his description and validated against his Matlab code.

@neurolabusc thank you so much for these fantastic resources!

I opened a new GitHub organization (Open DICOM), I think I’ll move dicom_parser there in a few days after neurohackademy is over and I finish some WIP.

It certainly seems like the dcm_qa_* repositories could and should provide the basis for any generalized testing resource. However, it is likely that we will need to add many files as a reference for Python-specific tests. I think the main alternatives for handling this might be:

  1. Carefully PRing the existing repositories, hopefully without cluttering things too much or breaking anything.
  2. Forking the existing repositories under Open DICOM, adding whatever we need and probably also modifying the general structure.
  3. Creating a tests repository that provides an index of the submodules with the actual test files and encapsulates any logic required to access these files easily. This repository could then be added as a submodule with the --recurse-submodules flag (and perhaps expose fixtures containing the file paths?).
  4. A combination of 2 and 3.

Does this make sense? I don’t have any experience managing large files with Git :sweat:


@baratzz the repositories are open source, so you are free to clone them or modify them as you wish. You might want to consider if a fork is required, or if we could simply move them to your Open DICOM organization and have them continue to be submodules for dcm2niix.

The scriptable nature of Python will help many users. The fact that Python is free is a strong benefit relative to Matlab. With modern computers, much of conversion performance is limited by memory bandwidth and disk I/O, so there is strong rationale for using a higher level language. Creating a tool now that most major vendors have released enhanced-DICOM variations should also aid the maintainability of your tool, versus older tools that have kludges to support these new formats.

My own preference is to avoid forking unless necessary, so one does not have to coordinate updates between different versions. As Matthew noted, there are numerous correct solutions for conversion, but I do think it would be great to keep different tools harmonized with each other. I do think a terrific outcome would be if the Python code the team develops could act as a drop-in replacement for the numerous tools that currently use dcm2niix. In situations where there are multiple arbitrary ways to store data, I would encourage you to mimic the strategies chosen by dcm2niix and dicm2nii. This will help developers of other tools and users by providing consistency. Another benefit to you is that we have tried to carefully document and validate this meta data, which makes it easy to clone. Many of the attributes that are not yet defined in the BIDS standard were explicitly requested by tool developers, and should encourage reproducible science.

As others can attest, I was a very vocal opponent of some of the features of both the NIfTI and BIDS formats. However, after they were formalized, I have worked to support them in my tools. Any format is a trade-off, and what is really important for our community is to have well-defined standards that allow well-curated datasets.

Just to agree strongly: we should have a very good reason indeed not to choose defaults that match the defaults people are used to, and in particular, we’d have to have a good reason to choose defaults that differ from dcm2niix, given how widely it is used. Of course, we might well choose to output, for example, more metadata, but for things like the orientation of the output data there’s no strong a priori reason to choose the original over the RAS / dcm2niix orientation, and we should surely follow dcm2niix in those kinds of defaults.

Hi, I’ve been following this discussion and also spoke to @baratzz about this matter. The dcm_qa repositories look very well curated, so big thank you for the links and detailed explanations.

I have a question which is slightly tangential. I was wondering if there is a relation between the dcm_qa’s and the DataLad repository linked previously? There doesn’t seem to be, judging by looking at some file names.

If the one at datasets.datalad.org is out of date, do you think it would be worthwhile to create a new DataLad dataset with all the dcm_qa’s attached (I am not an expert, so not quite sure how to set this up, but this seems possible)?

And consequently, would there be any use for that in setting up a testing environment (e.g. using DataLad to fetch only the desired sub-datasets)? Though probably adding the chosen ones as submodules would do just as well.


That is certainly my intention. Generally speaking, I would like for dicom_parser to conform with dcm2niix’s strategies and provide equivalent results. There is no problem enabling more customized configurations if users express interest, but I doubt that will be an immediate concern.

I also strongly agree with everything you said about Python and about harmonizing tools to work together using well-defined standards. I understand your preference regarding forking, and I would be more than happy for Open DICOM to host these repositories (I only offered because I didn’t know how you might feel about all the extra files, and potentially a little work, it might add).

It really would be fantastic if dicom_parser could offer all the features provided by dcm2niix. We definitely have some work ahead of us, but I am hopeful. I will take some time next week to properly review “The First Step for Neuroimaging Data Analysis: DICOM to NIfTI conversion” and create the required issues in the repository.

@msz some DICOMs in the DataLad repository come from the dcm2niix NITRC pages I linked previously. Most of those require too much disk space to be a standard GitHub repository. By design, the dcm_qa scans are low-resolution, simple scans that demonstrate issues: they require little disk space and are fast for Travis-based regression testing. With regard to data being out of date, I think it is good to have exemplars of old classic DICOM and modern enhanced DICOM, and also to have exemplars of some of the peculiarities. My dcm_qa_enh shows the latest modern enhanced DICOMs from Philips, Siemens and Canon. It is worth saying that very few research scientists are using Siemens enhanced data yet, as it is only available on the 70cm scanners (e.g. the XA-series Vida and Sola), while most scientists prefer the performance benefits of a narrower 60cm bore (e.g. the V*-series Trio and Prisma).

@baratzz I have no problem with extra files. I think if we can use the same test battery to test multiple converters, we can work cooperatively to improve each tool. In addition, in future we may decide to add additional useful meta-data to the JSON files, and knowledge gained by one team can be shared with the others.

I think the important thing about the “First Step” paper is that it was co-authored by developers of different conversion tools (dicm2nii, dcm2niix, MRIConvert, SPM). The DICOM conversion community is a cooperative space: we are working together, not competing. When one of us sees a new and unusual DICOM image, we tend to share our knowledge with each other.

It’s all clear now. Just to clarify: by “out of date” I meant “possibly not covering the new data” rather than “containing data which is old”. Thanks!

I have created a few extra dcm_qa_* repositories to illustrate edge cases:

  • dcm_qa_philips_asl Philips classic DICOM ASL data: note that Philips can assign instance numbers randomly, and ASL volumes can be distinguished by six dimensions: 3 spatial plus repeat, phase, and control/label.
  • dcm_qa_philips_asl_enh is the same data as the previous repository, but saved as enhanced DICOM, where tag 0020,9157 can guide volume indexing.
  • dcm_qa_mosaic demonstrates the influence of Siemens mosaic image numbering.
  • dcm_qa_ts illustrates popular DICOM transfer syntaxes.
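To sketch how tag (0020,9157) can guide volume indexing in enhanced DICOM, here is a minimal illustration with synthetic values (not real DICOM data): each frame carries a Dimension Index Values tuple, and a lexicographic sort on that tuple recovers the ordering the file itself declares. With pydicom, the per-frame tuple is typically reachable via `ds.PerFrameFunctionalGroupsSequence[i].FrameContentSequence[0].DimensionIndexValues`.

```python
# Hypothetical frames with Dimension Index Values (0020,9157) tuples,
# here standing in for (volume index, slice index).
frames = [
    {"frame": 2, "div": (2, 1)},  # volume 2, slice 1
    {"frame": 0, "div": (1, 1)},  # volume 1, slice 1
    {"frame": 3, "div": (2, 2)},  # volume 2, slice 2
    {"frame": 1, "div": (1, 2)},  # volume 1, slice 2
]

# Sorting by the index tuple yields the explicitly declared volume/slice order,
# independent of the (possibly random) instance numbers.
ordered = sorted(frames, key=lambda f: f["div"])
```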

I also cleaned up the links to other sources for diverse DICOM datasets.


Wonderful! Thank you so much.
Once I’m convinced I have at least the basic conversion functionality properly covered, I’ll start integrating these as submodules and creating the appropriate test cases.

@baratzz you may want to extend the existing dicom2nifti - it handles most of these edge cases well. It would benefit from the addition of features like the ability to generate a BIDS sidecar, improved handling of enhanced DICOM, and support for RGB images. Even if you decide to create an entirely new project, I would suggest you look carefully at dicom2nifti, as it solves many of the problems faced by any attempt to handle the complexities of DICOM.

Thank you for sharing - I also came across it recently, and it certainly seems like a great resource, if not a possible solution. It does seem to require NiBabel, which might be a problem. My plan was to use it as a reference to implement similar functionality in dicom_parser, so that NiBabel could easily generate the NIfTI-formatted array and metadata and then take care of the IO. I’m open to any suggestions, of course. @matthew.brett @effigies @moloney

Hello all,
As @matthew.brett noted

It might be good to get your input as a community moving forward. Do any of you have thoughts regarding dcm2niix issue 533, demonstrated in the DICOM data at dcm_qa_philips_asl and dcm_qa_philips_asl_enh?

Basically, the choice is:

a.) A DICOM to NIfTI converter should respect the volume order explicitly described by the Dimension Index Values (0020,9157).
b.) A DICOM to NIfTI converter should store 3D volumes on disk in the acquired temporal order, even if this differs from the explicit instructions of 0020,9157 and requires some inference regarding the order (which might make the tool more fragile).