
ENH: FreeSurfer LTA file support #17

Merged
merged 15 commits into nipy:master from enh/lta
Oct 22, 2019
Conversation

@mgxd (Member) commented Oct 18, 2019

This PR builds off nipy/nibabel#565 to support reading/writing transforms to the LTA format.
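For readers skimming the format: a linear LTA ultimately encodes 4×4 affine matrices that act on homogeneous coordinates. A minimal, self-contained sketch of that operation (the rotation and translation values below are made up for illustration and have nothing to do with any real image):

```python
import numpy as np

# A hypothetical 4x4 affine in homogeneous coordinates:
# a 10-degree rotation about z plus a translation of (5, -2, 3) mm.
theta = np.radians(10)
affine = np.array([
    [np.cos(theta), -np.sin(theta), 0.0,  5.0],
    [np.sin(theta),  np.cos(theta), 0.0, -2.0],
    [0.0,            0.0,           1.0,  3.0],
    [0.0,            0.0,           0.0,  1.0],
])

# Map an RAS coordinate by appending a homogeneous 1.
point = np.array([10.0, 20.0, 30.0, 1.0])
mapped = affine @ point
```

The last homogeneous component stays 1, and the first three components are the rotated-plus-shifted RAS coordinate.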

@pep8speaks commented Oct 18, 2019

Hello @mgxd! Thanks for updating this PR.

There are currently no PEP 8 issues detected in this Pull Request. Cheers! 🍻

Comment last updated at 2019-10-21 22:06:05 UTC

@codecov-io commented Oct 18, 2019

Codecov Report

Merging #17 into master will decrease coverage by 1.68%.
The diff coverage is 56.21%.

@@            Coverage Diff            @@
##           master     #17      +/-   ##
=========================================
- Coverage   64.49%   62.8%   -1.69%     
=========================================
  Files           8      10       +2     
  Lines         521     699     +178     
  Branches       68      87      +19     
=========================================
+ Hits          336     439     +103     
- Misses        152     221      +69     
- Partials       33      39       +6
Flag         Coverage Δ
#unittests   62.8% <56.21%> (-1.69%) ⬇️

Impacted Files                  Coverage Δ
nitransforms/linear.py          51.61% <10.71%> (-5.36%) ⬇️
nitransforms/tests/test_io.py   100% <100%> (ø)
nitransforms/io.py              60.83% <60.83%> (ø)

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Last update 9dfcfb9...77fdbc9

@oesteban (Collaborator) left a comment

Looking good. Left a couple of minimal comments

('subject', 'U1024'),
('fscale', 'f4')])
dtype = template_dtype
_xforms = None
Collaborator

I would not allow this to contain VOX2VOX. Meaning, if transform_code is 0, then the transform should be decomposed and the RAS2RAS extracted. If that is not possible because the moving and/or reference VOX2RAS matrices are missing, then raise an error.

That said, I'd be fine with a NotImplementedError when the transform code is 0.
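The decomposition suggested above is a single product in homogeneous coordinates: send an RAS coordinate into source voxel indices, apply the VOX2VOX matrix, and return to RAS through the destination's vox2ras. A sketch under the assumption that both vox2ras affines are known (all matrices here are illustrative, and `vox2vox_to_ras2ras` is a hypothetical helper, not part of this PR):

```python
import numpy as np

def vox2vox_to_ras2ras(m_vox, src_vox2ras, dst_vox2ras):
    """Lift a VOX2VOX matrix to RAS2RAS.

    An RAS coordinate is first sent to source voxel indices,
    mapped by the VOX2VOX matrix, then returned to RAS through
    the destination image's vox2ras affine. If either vox2ras is
    unknown, this decomposition is impossible -- which is why an
    error (or NotImplementedError) is warranted in that case.
    """
    return dst_vox2ras @ m_vox @ np.linalg.inv(src_vox2ras)

# Illustrative affines: 1 mm isotropic source, 2 mm isotropic destination.
src_vox2ras = np.eye(4)
dst_vox2ras = np.diag([2.0, 2.0, 2.0, 1.0])
m_vox = np.eye(4)
m_vox[:3, 3] = [1, 2, 3]          # shift of (1, 2, 3) voxels

m_ras = vox2vox_to_ras2ras(m_vox, src_vox2ras, dst_vox2ras)
```

Note that with differently sized voxel grids, even an identity VOX2VOX implies a nontrivial RAS2RAS scaling, which is exactly why the two codes are not interchangeable on disk.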

Member Author

Since this builds off @effigies' implementation, we should rope him in here.

I assumed the scope of his LTA implementation is greater than just nitransforms' use case, so we may still want to support vox2vox as a valid matrix. However, I totally agree we should catch that case within the transforms module and coerce it into ras2ras.

Member

We should definitely permit reading/writing non-RAS2RAS, even if we only ever store RAS2RAS. I vaguely recall I might have intended to store the incoming transform, so that a load-save round trip would not change the contents, and only convert as needed, but don't feel bound by this.

Member Author

Would the mean/sigma change if we convert the matrix between transform types?

Collaborator

Please note I'm not saying we should only support LINEAR_RAS_TO_RAS; I'm saying we should not write (just write) LINEAR_VOX_TO_VOX.

VOX2VOX is a legacy method that only makes sense in the context of the early development of image registration. Why (and for whom) would anyone want to write VOX2VOX? There's literally nothing VOX2VOX can do that cannot be done with RAS2RAS.

Judging by https://github.com/freesurfer/freesurfer/blob/d5ff65ce78fee3ef296cc0b4027360ba6f9721f1/utils/transform.cpp#L823, I don't think sigma or mean should change with RAS2RAS.

Collaborator

I would be more compelled by some demonstration of better numerical stability or precision for VOX2VOX, but I would actually guess it works the opposite way.
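One way to probe the stability question is a round trip: convert a RAS2RAS matrix to VOX2VOX and back, and measure the error. A quick numerical sketch with random but well-conditioned affines (purely illustrative, not a claim about any specific FreeSurfer data):

```python
import numpy as np

rng = np.random.default_rng(0)

def random_affine(rng):
    """A well-conditioned random 4x4 affine in homogeneous coordinates."""
    aff = np.eye(4)
    aff[:3, :3] = np.diag([1.0, 1.2, 1.5]) + 0.05 * rng.standard_normal((3, 3))
    aff[:3, 3] = rng.uniform(-100, 100, size=3)
    return aff

src, dst = random_affine(rng), random_affine(rng)  # vox2ras of both images
ras2ras = random_affine(rng)

# RAS2RAS -> VOX2VOX -> RAS2RAS round trip.
vox2vox = np.linalg.inv(dst) @ ras2ras @ src
roundtrip = dst @ vox2vox @ np.linalg.inv(src)

err = np.abs(roundtrip - ras2ras).max()   # stays near machine precision
```

With sane image geometries the round-trip error sits many orders of magnitude below voxel size, so storing RAS2RAS loses nothing in practice.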

Collaborator

elif fmt.lower() in ('fs', 'lta'):
with open(filename) as ltafile:
lta = LinearTransformArray.from_fileobj(ltafile)
assert lta['nxforms'] == 1 # ever have multiple transforms?
Collaborator

Yes, we should be able to have multiple transforms (i.e., nxforms does not need to be 1)
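To make the multi-transform case concrete: an LTA with nxforms > 1 is just a stack of matrices, so the loader should iterate rather than assert a single transform. A plain-NumPy sketch (not the actual nitransforms API; the matrices are illustrative):

```python
import numpy as np

# A stack of N affines, as would come from an LTA with nxforms = N.
xforms = np.stack([np.eye(4), np.eye(4)])
xforms[1, :3, 3] = [0, 0, 10]     # second transform: 10 mm shift in z

nxforms = xforms.shape[0]

# Instead of asserting nxforms == 1, apply each transform in turn.
point = np.array([1.0, 2.0, 3.0, 1.0])
mapped = np.array([m @ point for m in xforms])
```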

assert lta['nxforms'] == 1 # ever have multiple transforms?
if lta['type'] != 1:
lta.as_type(1)
matrix = lta['xforms'][0]['m_L']
@oesteban (Collaborator) commented Oct 19, 2019

matrix is of size N x (D + 1) x (D + 1), where N is the number of transforms and D the dimension (i.e., D belongs to {2, 3})
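In NumPy terms, that layout is an array of shape (N, D + 1, D + 1), and all N transforms can be applied in one batched product. A small sketch (identity matrices only, for illustration):

```python
import numpy as np

n = 5
stack_2d = np.tile(np.eye(3), (n, 1, 1))   # D = 2: shape (5, 3, 3)
stack_3d = np.tile(np.eye(4), (n, 1, 1))   # D = 3: shape (5, 4, 4)

# Batched application: one homogeneous point per transform.
points = np.ones((n, 4))
mapped = np.einsum('nij,nj->ni', stack_3d, points)
```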

@mgxd mgxd marked this pull request as ready for review October 21, 2019 20:57
@oesteban oesteban merged commit 2ab68b3 into nipy:master Oct 22, 2019
@mgxd mgxd deleted the enh/lta branch October 31, 2019 19:54