Collecting use cases

Please comment here with a specific use case that you have.


Got a question about what the target of the surface API is, and it probably makes sense to start this thread with my response:


The problem is that you have a number of things you want to do with data sampled to a surface:

  1. Arithmetic/modeling (largely geometry independent)
  2. Smoothing (topology + distances are required)
  3. Plot (mesh topology required, alternative coordinates may be useful)
  4. Resample (varies depending on what you’re resampling to)

Additionally, I may want to do things to the mesh itself, such as decimating it to a desired number of vertices.

With a volumetric image, it is sufficient to have a data array and an affine matrix to perform all of these. As you perform operations, it’s easy to associate the affine with the new in-memory images, update the affine, and write out a final image that may have the same affine as the original or a different one.
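
For example, the whole volumetric round trip is a few lines (a minimal sketch using nibabel’s NIfTI API; filenames are illustrative):

```python
import numpy as np
import nibabel as nib

img = nib.load("bold.nii.gz")  # illustrative filename
data = img.get_fdata()

# Arithmetic is geometry-independent; the affine just rides along.
demeaned = data - data.mean(axis=-1, keepdims=True)

# The affine is all the geometry needed to write a valid image back out.
nib.save(nib.Nifti1Image(demeaned, img.affine), "bold_demeaned.nii.gz")
```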

To do the same with a surface, you may need to load several different files, and it may not be clear when you’re going to need each. For example, suppose I do some basic arithmetic and end up with a new data array that does not correspond to a file on disk. If I want to plot it, I have to go back to the original data and find the appropriate geometry files. So I need a way of associating the geometric metadata with the surface-sampled data, ideally without wasting a huge amount of memory. And this API should not be so closely aligned to one on-disk data model (such as the FreeSurfer subject directory) that it becomes difficult to adapt it to another one.
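
To make that concrete, here is a purely illustrative sketch — hypothetical names, not a proposed nibabel API — of a container that keeps surface-sampled data and a lazy reference to its geometry together:

```python
from dataclasses import dataclass, field
import numpy as np
import nibabel as nib

@dataclass
class SurfaceData:
    """Hypothetical container pairing surface-sampled data with its geometry.

    Geometry is loaded lazily, so arithmetic on many maps doesn't duplicate it.
    """
    data: np.ndarray        # (n_vertices, ...) samples
    geometry_path: str      # e.g. a GIFTI .surf.gii file
    _geometry: tuple = field(default=None, repr=False)

    @property
    def geometry(self):
        if self._geometry is None:
            gii = nib.load(self.geometry_path)
            # Convention: first darray is the pointset, second the triangles.
            self._geometry = (gii.darrays[0].data, gii.darrays[1].data)
        return self._geometry

    def __add__(self, other):
        # Arithmetic needs no geometry; the reference just rides along.
        return SurfaceData(self.data + np.asarray(other), self.geometry_path)
```

The point is just that, like an affine riding along with a volumetric array, the geometry reference survives arithmetic and isn’t loaded until plotting or smoothing actually needs it.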

Some perspective from HCP land, if it is useful:

The view/plot case has different recommended viewing surfaces, depending on context:

  1. For comparisons across subjects, use a group average surface
  2. To look for scanner artifacts, use an anatomical surface in rigid-only spatial alignment; if you want to see most of the data (the usual case), use an inflated surface (group average surfaces already smooth out folding somewhat, so they don’t need as much inflation)
  3. There are often also surfaces in another volume space/registration, in particular MNINonLinear, though I don’t know that those are ever a better choice for general display purposes (and if the surface coordinates are inflated, or even group averaged, they don’t really match any volume space anymore).

Distances aren’t too difficult to derive from the coordinates and topology, but we generally think any computation that depends on coordinates should try to get close to the original geometry (when we use group average surfaces, we use the average original vertex areas to compensate the distances, etc. for the loss of folding detail, particularly in functionally aligned data). There are of course exceptions, for instance if you need to match processing across substantially different brain sizes.

Resampling to another mesh uses spherical surfaces (one of which is typically distorted by a registration), which are not really used for anything else under most circumstances. The same spheres allow resampling anatomical or other surfaces between those same meshes, but in many datasets the resampled surfaces will already exist (and repeated resampling would slowly lose folding definition, so it would be good to avoid that).
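
For illustration, here is a crude nearest-vertex version of sphere-based resampling (a sketch only; real pipelines such as wb_command -metric-resample use barycentric weights on the registered sphere, which this skips):

```python
import numpy as np
from scipy.spatial import cKDTree

def nearest_vertex_resample(data, source_sphere, target_sphere):
    """Move per-vertex data between meshes via their registered spheres.

    data:          (n_src, ...) values on the source mesh
    source_sphere: (n_src, 3) source sphere coordinates
    target_sphere: (n_tgt, 3) target sphere coordinates (same registration)
    """
    # Normalize so differing sphere radii don't matter.
    src = source_sphere / np.linalg.norm(source_sphere, axis=1, keepdims=True)
    tgt = target_sphere / np.linalg.norm(target_sphere, axis=1, keepdims=True)
    _, nearest = cKDTree(src).query(tgt)  # nearest source vertex per target vertex
    return data[nearest]
```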

Our method of mapping volume data to the surface (we don’t call this resampling, it is intentionally discarding a lot of data that doesn’t contain cortex) requires at least the pial and white subject-specific surfaces, and they must match the volume registration the volume data is using.
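
A bare-bones sketch of that mapping, assuming a 3D volume and white/pial vertex coordinates already in the volume’s world (mm) space — real tools like wb_command -volume-to-surface-mapping do smarter, partial-volume-aware averaging within the ribbon:

```python
import numpy as np
import nibabel as nib
from scipy.ndimage import map_coordinates

def sample_ribbon(vol_img, white_xyz, pial_xyz, n_steps=5):
    """Average a 3D volume along the white->pial line at each vertex.

    white_xyz, pial_xyz: (n_vertices, 3) mm coordinates that must match
    the volume's registration (world space of vol_img.affine).
    """
    data = vol_img.get_fdata()
    inv_affine = np.linalg.inv(vol_img.affine)  # world (mm) -> voxel indices
    samples = []
    for frac in np.linspace(0.0, 1.0, n_steps):
        xyz = white_xyz + frac * (pial_xyz - white_xyz)      # points in mm
        ijk = nib.affines.apply_affine(inv_affine, xyz).T    # (3, n_vertices)
        samples.append(map_coordinates(data, ijk, order=1))  # trilinear
    return np.mean(samples, axis=0)  # one value per vertex
```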

What I have encountered was mostly I/O related. I don’t really work with FreeSurfer files directly, so here are some notes about CIFTI and GIFTI.

Here are some things that are currently doable but a bit tedious in nibabel:

  1. Copy the label of a CIFTI file and populate it with one’s own data
    The analogy in the volumetric world is copying a NIfTI header and manipulating the data given that header. The CIFTI axes are difficult to access at the moment, and I never really remember which elements are involved.

Here’s a snippet I wrote a while ago:
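
A minimal sketch of that pattern (not the original snippet), assuming nibabel’s cifti2 axes API, a dscalar input, and hypothetical filenames:

```python
import numpy as np
import nibabel as nib

# Reuse the axes of an existing CIFTI file as a "header template".
template = nib.load("template.dscalar.nii")   # hypothetical path
scalar_axis = template.header.get_axis(0)     # map names
brain_models = template.header.get_axis(1)    # vertex/voxel layout

# Pair your own data with the copied axes and write it back out.
my_data = np.random.rand(len(scalar_axis), len(brain_models))
new_img = nib.cifti2.Cifti2Image(my_data, header=(scalar_axis, brain_models))
new_img.to_filename("my_map.dscalar.nii")
```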

  2. Ensure the correct metadata are populated for Connectome Workbench
    This one is loosely related to plotting.
    I don’t think we should treat Workbench as the canonical reference on how the metadata are used in GIFTI; the official GIFTI API didn’t really impose restrictions, from what I remember. However, chances are a lot of people use Workbench to view CIFTI and GIFTI files. Here’s an existing issue that’s likely due to bad metadata.
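
For instance (a sketch assuming a recent nibabel where GiftiMetaData is dict-like; the exact keys Workbench cares about vary by file type):

```python
import numpy as np
import nibabel as nib

# Per-vertex values destined for Workbench (fs_LR 32k size, illustrative).
values = np.random.rand(32492).astype(np.float32)

darray = nib.gifti.GiftiDataArray(values, intent="NIFTI_INTENT_NONE")
gii = nib.gifti.GiftiImage(darrays=[darray])

# Workbench uses this key to decide which hemisphere the file belongs to;
# without it, loading the metric onto a surface can fail or misbehave.
gii.meta["AnatomicalStructurePrimary"] = "CortexLeft"
gii.to_filename("my_metric.func.gii")
```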

Things that are not doable in the current scope of nilearn:

  1. Access information related to the geometric template
    For CIFTI (and .func.gii files in some cases), there’s no clear way to link to the exact mesh used, other than some Workbench scene files to join things up. Strictly speaking there’s no such thing in volumetric data either, but losing that information has only a limited impact on one’s ability to plot a vaguely identifiable image.

  2. Resampling.
    All I can remember is that it was extremely tedious to get your own data onto the HCP template, or to go between a FreeSurfer mesh and an HCP one. As far as I’m aware, there’s no trivial way to downsample a mesh freely to an arbitrary resolution. I know it’s not encouraged, but it’s extremely useful for development and generating test data, and people always ask me how to do it.

The bottleneck in most of the cases mentioned above is, as Chris already mentioned, that the processes require multiple files.

I think the use cases we have in MNE-Python (and PySurfer) are all captured by the above comment. But to detail what we have/do specifically in MNE-Python, we assume that subjects have a FreeSurfer reconstruction, and leverage FreeSurfer’s spherical alignment to do all sorts of procedures:

  1. Downsample the high-resolution meshes (e.g., ~200k vertices to ~4k, evenly spaced in the spherical space) using a recursively subdivided icosahedron or octahedron
  2. Create morph maps from one subject to another (interpolate from a single vertex to the three vertices of the triangle on the other subject’s coregistered surface)
  3. Smooth (or upsample) data by smudging/pushing it to adjacent vertices
  4. Resample from one subject to another by using the morph map plus smoothing
  5. Calculate distances between vertices (this is not on the sphere and is really just scipy.sparse.csgraph.dijkstra; see the sketch after this list)
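
A sketch of that distance computation, assuming plain (coords, faces) arrays for the mesh — edge lengths go into a sparse graph and scipy does the rest:

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import dijkstra

def vertex_distances(coords, faces, sources):
    """Graph distances along mesh edges (Dijkstra on an edge-length graph).

    coords:  (n_vertices, 3) coordinates of the folded surface (not the sphere)
    faces:   (n_faces, 3) triangle vertex indices
    sources: sequence of vertex indices to compute distances from
    """
    # Collect each undirected edge once, weighted by Euclidean length.
    edges = np.vstack([faces[:, [0, 1]], faces[:, [1, 2]], faces[:, [2, 0]]])
    edges = np.unique(np.sort(edges, axis=1), axis=0)
    lengths = np.linalg.norm(coords[edges[:, 0]] - coords[edges[:, 1]], axis=1)
    n = len(coords)
    graph = csr_matrix((lengths, (edges[:, 0], edges[:, 1])), shape=(n, n))
    # directed=False lets scipy walk each edge in both directions.
    return dijkstra(graph, directed=False, indices=sources)
```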

More generally, our use cases involve time-varying source data (n_vertices, n_times) where n_vertices is almost always on a decimated version of the high-resolution meshes. The most common example is fsaverage’s ico-5 surface, which is just vertex numbers 0-10241 in each hemisphere (they designed fsaverage to have this property), so we have a lot of data that is (20484, n_times) in shape – or even more generally, could be (20484, n_frequencies, n_times), etc. We currently have our own containers for this sort of thing, but they aren’t great, and it would be nice to have a standard to follow instead.

We have code for all of the above, plus plotting routines that evolved from PySurfer. I’m happy to work as much of that as applicable into nibabel once an API evolves!