ENH: FreeSurfer LTA file support #17
Conversation
Hello @mgxd! Thanks for updating this PR. There are currently no PEP 8 issues detected in this Pull Request. Cheers! 🍻 Comment last updated at 2019-10-21 22:06:05 UTC
Codecov Report
```diff
@@            Coverage Diff            @@
##           master      #17     +/-   ##
=========================================
- Coverage   64.49%    62.8%   -1.69%
=========================================
  Files           8       10       +2
  Lines         521      699     +178
  Branches       68       87      +19
=========================================
+ Hits          336      439     +103
- Misses        152      221      +69
- Partials       33       39       +6
```
Continue to review full report at Codecov.
Looking good. I left a couple of minimal comments.
```python
    ('subject', 'U1024'),
    ('fscale', 'f4')])
dtype = template_dtype
_xforms = None
```
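The snippet above shows only the tail of a NumPy structured dtype. As a point of reference, here is a minimal, self-contained sketch of how such a header dtype behaves; the field names other than `subject` and `fscale` are assumptions for illustration, not the PR's actual `template_dtype`:

```python
import numpy as np

# Hypothetical reconstruction of a header dtype like the one in the diff.
# Only 'subject' and 'fscale' appear in the snippet; the other fields are
# invented here purely so the example is self-contained.
template_dtype = np.dtype([
    ('type', 'i4'),       # assumed field
    ('nxforms', 'i4'),    # assumed field
    ('subject', 'U1024'),
    ('fscale', 'f4'),
])

# A zero-dimensional record acting as an in-memory header.
hdr = np.zeros((), dtype=template_dtype)
hdr['subject'] = 'sub-01'
hdr['fscale'] = 0.15
```

Structured dtypes like this let the reader map named header fields onto a fixed binary layout without a custom class per field.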
I would not allow this to contain VOX2VOX. Meaning, if `transform_code` is 0, then the transforms are decomposed and the RAS2RAS extracted. If that is not possible because the moving and/or reference VOX2RAS matrices are missing, then raise an error. That said, I'd be fine with a `NotImplementedError` when the transform code is 0.
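The decomposition suggested here can be sketched with plain affine algebra: a VOX2VOX matrix maps source voxel indices to destination voxel indices, so composing it with each volume's voxel-to-RAS affine yields RAS2RAS. This is a hedged sketch, not the PR's implementation; the function and argument names are assumptions:

```python
import numpy as np

def vox2vox_to_ras2ras(m_vox, src_vox2ras, dst_vox2ras):
    """Convert a VOX2VOX affine (source voxel -> destination voxel)
    into RAS2RAS, given each volume's voxel-to-RAS matrix.

    Hypothetical helper for illustration; raises if an affine is singular.
    """
    # Chain: src RAS -> src voxel -> dst voxel -> dst RAS
    return dst_vox2ras @ m_vox @ np.linalg.inv(src_vox2ras)

# Toy example: identity voxel mapping between two 2 mm isotropic grids.
src = np.diag([2.0, 2.0, 2.0, 1.0])
dst = np.diag([2.0, 2.0, 2.0, 1.0])
ras = vox2vox_to_ras2ras(np.eye(4), src, dst)
```

When either `src_vox2ras` or `dst_vox2ras` is unavailable, the inverse cannot be formed, which matches the comment's point that the conversion should fail loudly rather than silently passing VOX2VOX through.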
Since this builds off @effigies' implementation, we should rope him in here.

I assumed the scope of his LTA implementation is greater than just nitransforms' use case, so we may still want to support VOX2VOX as a valid matrix. However, I totally agree that we should catch that case within the transforms module and coerce it into RAS2RAS.
We should definitely permit reading/writing non-RAS2RAS, even if we only ever store RAS2RAS. I vaguely recall I might have intended to store the incoming transform, so that a load-save round trip would not change the contents, and only convert as needed, but don't feel bound by this.
Would the `mean`/`sigma` fields change if we convert the matrix between transform types?
Please note I'm not saying we should only support LINEAR_RAS_TO_RAS; I'm saying we should not write (just write) LINEAR_VOX_TO_VOX.

VOX2VOX is a legacy method that only makes sense in the context of the early development of image registration. Why (and for whom) would anyone want to write VOX2VOX? There's literally nothing VOX2VOX can do that cannot be done with RAS2RAS.
Judging by https://github.com/freesurfer/freesurfer/blob/d5ff65ce78fee3ef296cc0b4027360ba6f9721f1/utils/transform.cpp#L823, I don't think `sigma` or `mean` should change with RAS2RAS.
I would be more compelled by some demonstration of better numerical stability or precision for VOX2VOX, but I would actually guess it works the opposite way.
After checking further, it seems mean and sigma are not necessary for RAS2RAS at all (https://github.com/freesurfer/freesurfer/blob/b156d5aee6df2c7027ea45d3824813f8dcc536ef/lta_convert/lta_convert.cpp#L337).
Co-Authored-By: Chris Markiewicz <[email protected]>
nitransforms/linear.py
Outdated
```python
elif fmt.lower() in ('fs', 'lta'):
    with open(filename) as ltafile:
        lta = LinearTransformArray.from_fileobj(ltafile)
    assert lta['nxforms'] == 1  # ever have multiple transforms?
```
Yes, we should be able to have multiple transforms (i.e., `nxforms` does not need to be 1).
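Following this comment, instead of asserting `nxforms == 1`, the reader could collect every matrix in the array. A hedged sketch, using a plain dict as a stand-in for the parsed LTA structure (the real object supports the same `lta['nxforms']` / `lta['xforms']` access shown in the diff):

```python
import numpy as np

# Stand-in for a parsed LTA with two transforms; the real
# LinearTransformArray would be populated by from_fileobj().
lta = {
    'nxforms': 2,
    'xforms': [
        {'m_L': np.eye(4)},
        {'m_L': np.diag([1.0, 1.0, 1.0, 1.0])},
    ],
}

# Collect every transform rather than assuming a single one.
matrices = np.stack([xfm['m_L'] for xfm in lta['xforms']])
```

This yields a `(nxforms, 4, 4)` array, which generalizes cleanly to the multi-transform case the reviewer describes.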
```python
    assert lta['nxforms'] == 1  # ever have multiple transforms?
    if lta['type'] != 1:
        lta.as_type(1)
    matrix = lta['xforms'][0]['m_L']
```
`matrix` is of size N × (D + 1) × (D + 1), where N is the number of transforms and D is the dimension (i.e., D ∈ {2, 3}).
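To illustrate that N × (D + 1) × (D + 1) layout, here is a toy example (variable names are illustrative only) that builds a batch of 3-D homogeneous affines and applies them all to one point in a single broadcasted product:

```python
import numpy as np

N, D = 3, 3  # three transforms in 3-D

# Stack of homogeneous affines, shape (N, D + 1, D + 1).
matrix = np.tile(np.eye(D + 1), (N, 1, 1))
matrix[1, :D, D] = [1.0, 2.0, 3.0]  # give the second one a translation

# One point in homogeneous coordinates, mapped by every transform at once.
point = np.array([0.0, 0.0, 0.0, 1.0])
mapped = matrix @ point  # shape (N, D + 1)
```

The origin stays fixed under the identity transforms and picks up the (1, 2, 3) translation under the second one, matching the batched layout described above.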
This PR builds off nipy/nibabel#565 to support reading/writing transforms to the LTA format.