Can you provide a pre-trained model for testing? #8
Need a pre-trained model.
Thanks for your interest. The model was released on MHub.ai. I have updated the README.
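For reference, a minimal sketch of fetching the released model container is below. The image name `mhubai/bamf_nnunet_ct_kidney` is taken from the docker commands later in this thread; the MHub.ai model page remains the authoritative source for the exact name and usage.

```bash
# Sketch: pull the MHub container that ships the pre-trained model
# (image name taken from the docker run command further down this thread).
docker pull mhubai/bamf_nnunet_ct_kidney

# Confirm the image is available locally.
docker images mhubai/bamf_nnunet_ct_kidney
```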
Any idea why it gets stuck here?
It always gets stuck at the third part and does not show any progress. I waited for 2 hours.
Hmmm, I'm running into the same thing. @jithenece can you take a look? Here's my code:

```python
from idc_index import index
from pathlib import Path

Path("example_data").mkdir(exist_ok=True)
client = index.IDCClient()
client.download_from_selection(
    seriesInstanceUID=["1.3.6.1.4.1.14519.5.2.1.6450.4004.318185778053926832345567953536"],
    downloadDir="example_data",
)
```

Then I ran the docker container with the following commands:

```bash
mkdir example_output
export in=$(pwd)/example_data
export out=$(pwd)/example_output
docker run --rm -t --gpus all -v $in:/app/data/input_data -v $out:/app/data/output_data mhubai/bamf_nnunet_ct_kidney
```
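Since the command above mounts the GPU with `--gpus all` and reads from `example_data`, two quick sanity checks can help narrow down a hang like this. These are generic checks, not steps from the thread, and the `nvidia/cuda` tag below is just one arbitrary choice of CUDA base image:

```bash
# Check 1: confirm the DICOM series actually landed in the input directory
# (a hang could simply mean the container is waiting on empty input).
ls -R example_data | head

# Check 2: confirm Docker can expose the GPU that --gpus all requests.
# nvidia-smi comes from the host driver, so any CUDA base image works here.
docker run --rm --gpus all nvidia/cuda:12.2.0-base-ubuntu22.04 nvidia-smi
```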
For the previous example, that would finish in about 10 minutes but not produce output, because of a different issue (referenced). The example below runs successfully; if you are having issues, try running mhub with it.

Download the example series:

```python
from idc_index import index
from pathlib import Path

Path("example_data").mkdir(exist_ok=True)
client = index.IDCClient()
client.download_from_selection(
    seriesInstanceUID=["1.3.6.1.4.1.14519.5.2.1.6450.4004.318185778053926832345567953536"],
    downloadDir="example_data",
)
```

Run the container:

```bash
mkdir example_output
export in=$(pwd)/example_data
export out=$(pwd)/example_output
docker run --rm -t --gpus all -v $in:/app/data/input_data -v $out:/app/data/output_data mhubai/bamf_nnunet_ct_kidney
```
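If the run completes, the result should appear under the mounted output directory. A quick way to confirm this (an added check, not something stated in the thread) is to list what the container wrote:

```bash
# List the files the container wrote to the mounted output directory;
# an empty listing means the model produced no output.
ls -R example_output
```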