-
Hi @yoPitarch, thanks for the detailed post. With regard to this:
I don't think you need to do this before running Auto3DSeg. Just let the system detect the best transforms to apply. I'd suggest you start by running only 2 folds (create a JSON file with only 2 folds) and use a single backbone for faster training (e.g. segresnet). Once you see all of this working, you can scale up to multiple backbone networks and more folds. I'd suggest running Auto3DSeg like this:
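For example, something along these lines (the paths, datalist file, and epoch count below are illustrative placeholders, not the exact values from this thread):

```python
from monai.apps.auto3dseg import AutoRunner

# Restrict the run to one backbone and few folds/epochs so the whole pipeline
# can be validated quickly before scaling up. All paths are placeholders.
runner = AutoRunner(
    input={
        "modality": "MRI",
        "datalist": "./datalist_2folds.json",  # hypothetical datalist with only 2 folds
        "dataroot": "./dataset",               # hypothetical data root
    },
    algos="segresnet",  # single backbone for faster training
)
runner.set_num_fold(num_fold=2)
runner.set_training_params({"num_epochs": 2})  # keep epochs low for a smoke test
runner.run()
```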
Please let us know.
-
Hi @diazandr3s, thank you for your answer. I did what you suggested (a single backbone, only 2 folds and 2 epochs) and still got the same error. Actually, I think I found the root cause of the error: image intensities are sometimes outside the boundaries specified in the ScaleIntensityRanged transform. Indeed, I wrote my own data loader and applied the same transform. I can iterate over the data loader when (some of) the image intensities are within the range specified by a_min and a_max in the ScaleIntensityRanged transform. For some unclear reason, some of my images have very different intensities (e.g. ranging from -5 to 7). I can visualize them in ParaView and I know that they are MRIs (I personally collected these images from different centers). Maybe these variations can be explained by the images having been captured with different scanners. When replacing the
Thanks in advance!
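For reference, a quick way to check the per-image ranges, plus two possible ways to make the intensity transform more tolerant, could look like this (the file names and a_min/a_max values below are illustrative placeholders):

```python
from monai.transforms import LoadImage, NormalizeIntensityd, ScaleIntensityRanged

# Inspect the raw intensity range of each volume (file names are placeholders).
loader = LoadImage(image_only=True)
for path in ["subject_01.nii.gz", "subject_02.nii.gz"]:
    img = loader(path)
    print(path, float(img.min()), float(img.max()))

# Option 1: clip values that fall outside [a_min, a_max] instead of letting them
# escape the target range (the bounds here are illustrative).
scale = ScaleIntensityRanged(keys="image", a_min=-5.0, a_max=7.0, b_min=0.0, b_max=1.0, clip=True)

# Option 2: per-image z-score normalization, which does not assume a fixed range
# and tends to suit MRI acquired on different scanners better.
normalize = NormalizeIntensityd(keys="image", nonzero=True, channel_wise=True)
```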
-
Hello,
I am very impressed by Auto3DSeg's potential and would love to test it on a custom MRI dataset. I have successfully managed to execute the different tutorials, but I cannot manage to train models on my custom dataset:
the data analysis step is OK, but training the models fails due to an error while applying the training transforms.
Here is what I've done so far and the resulting errors/questions. Sorry for the long post; I'm trying to describe precisely what I did.
I have a set of MRI images with different sizes and spacings (resolutions). I'm not sure if this is required, but I first transformed my dataset to make sure that all my images have the same size (in terms of voxels). To do so, I applied the following transforms:
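Something along these lines (the target spacing and spatial size below are illustrative placeholders, not necessarily the exact values used):

```python
from monai.transforms import Compose, EnsureChannelFirstd, LoadImaged, Resized, Spacingd

# Illustrative preprocessing that resamples every volume to a common spacing and
# then resizes it to a fixed voxel grid. Values and keys are placeholders.
preprocess = Compose([
    LoadImaged(keys=["image", "label"]),
    EnsureChannelFirstd(keys=["image", "label"]),
    Spacingd(keys=["image", "label"], pixdim=(1.0, 1.0, 1.0), mode=("bilinear", "nearest")),
    Resized(keys=["image", "label"], spatial_size=(224, 224, 144), mode=("trilinear", "nearest")),
])
```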
I know that cropping is really the more appropriate transform operation, but my intention was to get the AutoRunner working first. I would also add that I got the same errors even on my original dataset, so my understanding is that this preprocessing is not the root cause of my problem.
Then I created the datalist and launched the following piece of code (very similar to the one in the Hello World tutorial):
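Roughly the following (the datalist file and data root below are hypothetical placeholders):

```python
from monai.apps.auto3dseg import AutoRunner

# Minimal task configuration in the style of the Hello World tutorial.
# Paths are placeholders for the custom MRI dataset.
input_cfg = {
    "modality": "MRI",
    "datalist": "./custom_mri_datalist.json",
    "dataroot": "./custom_mri_dataset",
}

runner = AutoRunner(input=input_cfg)
runner.run()
```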
As mentioned earlier, the data analysis is OK. When moving to the training part, the following exception is thrown:
For information, here is the generated datastats.yaml.
My question is very simple: what's wrong with what I've done? I should add that I did the same with a custom CT dataset and got the same problems.
I don't know whether this is an Auto3DSeg-related question or rather a "pure" MONAI-transforms question, but any help would be highly appreciated.
Thanks!