Template-based prediction.

In this tutorial, we show how to improve inter-subject similarity using a template computed across multiple source subjects. We build this template with Procrustes alignment (hyperalignment) and align the target subject to it using shared information. We then compare the voxelwise similarity between the target subject and the template to the similarity between the target subject and the anatomical Euclidean average of the source subjects.

We mostly rely on common Python packages and on nilearn to handle functional data in a clean fashion.

To run this example, you must launch IPython via ipython --matplotlib in a terminal, or use jupyter-notebook.

Retrieve the data

In this example we use the IBC dataset, which includes a large number of different contrast maps for 12 subjects. We download the images for subjects sub-01, sub-02, sub-04, sub-05, sub-06 and sub-07 (or retrieve them if they were already downloaded). imgs is the list of paths to available statistical images for each subject. df is a dataframe with metadata about each of them. mask is a binary image used to extract grey matter regions.

from fmralign.fetch_example_data import fetch_ibc_subjects_contrasts

imgs, df, mask_img = fetch_ibc_subjects_contrasts(
    ["sub-01", "sub-02", "sub-04", "sub-05", "sub-06", "sub-07"]
)
[_add_readme_to_default_data_locations] Added README.md to
/home/runner/nilearn_data
[get_dataset_dir] Dataset created in /home/runner/nilearn_data/ibc
[fetch_single_file] Downloading data from https://osf.io/pcvje/download ...
[fetch_single_file]  ...done. (2 seconds, 0 min)

[uncompress_file] Extracting data from
/home/runner/nilearn_data/ibc/8e275a34345802c5c273312d85957d6c/download...
[uncompress_file] .. done.


Define a masker

We define a nilearn masker that will be used to handle relevant data.

For more information, visit: http://nilearn.github.io/manipulating_images/masker_objects.html

from nilearn.maskers import NiftiMasker

masker = NiftiMasker(mask_img=mask_img).fit()
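
For intuition, masking amounts to boolean indexing: a rough NumPy sketch of what transform and inverse_transform do conceptually (this is a toy illustration, not nilearn's actual implementation, which also handles affines, resampling and preprocessing):

```python
import numpy as np

# Toy 3D volume and a boolean "grey matter" mask
mask = np.zeros((4, 4, 3), dtype=bool)
mask[1:3, 1:3, :] = True                 # 12 voxels inside the mask

volume = np.arange(48, dtype=float).reshape(4, 4, 3)

# transform(): keep only in-mask voxels as a flat vector (one row per volume)
signals = volume[mask]
print(signals.shape)                     # (12,)

# inverse_transform(): scatter the vector back into the 3D grid, zeros elsewhere
restored = np.zeros_like(volume)
restored[mask] = signals
print(np.allclose(restored[mask], volume[mask]))  # True
```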

Prepare the data

For each subject, we will use two series of contrasts acquired during two independent sessions with different phase encodings: antero-posterior (AP) or postero-anterior (PA).

# To infer a template for subjects sub-01 to sub-06 for both AP and PA data,
# we make a list of 4D niimgs from our list of lists of files containing 3D images

from nilearn.image import concat_imgs

template_train = [concat_imgs(imgs[i]) for i in range(5)]

# sub-07 (index 5 in our list) will be our left-out subject.
# We make a single 4D Niimg from its list of 3D filenames.

left_out_subject = concat_imgs(imgs[5])

Compute a baseline (average of subjects)

We create an image with as many contrasts as any single subject, in which each contrast map is the average of the corresponding maps across all training subjects.

import numpy as np

masked_imgs = [masker.transform(img) for img in template_train]
average_img = np.mean(masked_imgs, axis=0)
average_subject = masker.inverse_transform(average_img)

Create a template from the training subjects.

We define an estimator using the class TemplateAlignment:
  • We align the whole brain through multiple local alignments.

  • These alignments are computed on a parcellation of the brain into 50 pieces; this parcellation groups together functionally similar voxels.

  • The template is created iteratively: all subjects' data are aligned into a common space, the template is inferred from it, and subjects are aligned again to this new template space.
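
The iterative scheme above can be sketched in plain NumPy for a single parcel. This is a simplified conceptual sketch, not fmralign's actual implementation (which works per parcel on masked data); scaled_orthogonal_fit and make_template are hypothetical helper names:

```python
import numpy as np

def scaled_orthogonal_fit(X, Y):
    """Procrustes: find scale s and orthogonal R minimizing ||s * X @ R - Y||_F."""
    U, sigma, Vt = np.linalg.svd(X.T @ Y)
    R = U @ Vt
    s = sigma.sum() / np.linalg.norm(X) ** 2
    return s, R

def make_template(subjects, n_iter=2):
    """Start from the Euclidean mean, then alternate align-to-template / re-average."""
    template = np.mean(subjects, axis=0)
    for _ in range(n_iter):
        aligned = []
        for X in subjects:
            s, R = scaled_orthogonal_fit(X, template)
            aligned.append(s * X @ R)
        template = np.mean(aligned, axis=0)
    return template

rng = np.random.default_rng(0)
base = rng.normal(size=(20, 6))            # 20 contrasts, 6 voxels in one parcel
# Each "subject" sees the same signal through a different orthogonal transform
subjects = [base @ np.linalg.qr(rng.normal(size=(6, 6)))[0] for _ in range(3)]
template = make_template(subjects)
# After alignment, every subject maps almost perfectly onto the template
for X in subjects:
    s, R = scaled_orthogonal_fit(X, template)
    print(np.allclose(s * X @ R, template, atol=1e-6))  # True for each subject
```

Because each toy subject is an exact orthogonal transform of the same signal, the template converges to a point that all subjects can reach exactly; with real data the residual error stays nonzero.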

from fmralign.template_alignment import TemplateAlignment

# We use the Procrustes (scaled orthogonal) alignment method
template_estim = TemplateAlignment(
    n_pieces=50,
    alignment_method="scaled_orthogonal",
    mask=masker,
)
template_estim.fit(template_train)
procrustes_template = template_estim.template

Predict new data for left-out subject

We predict the contrasts of the left-out subject using the template we just created. We use the transform method of the estimator. This method takes the left-out subject as input, computes a pairwise alignment with the template and returns the aligned data.

predictions_from_template = template_estim.transform(left_out_subject)

Score the baseline and the prediction

We use a utility scoring function to measure the voxelwise correlation between the images: for each voxel, we correlate its activation profile without and with alignment, to see whether template-based alignment improved inter-subject similarity.
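
Voxelwise correlation can be written in a few lines of NumPy. This is a conceptual sketch of the metric, not fmralign's exact score_voxelwise implementation (which also handles masking); voxelwise_corr is a hypothetical helper name:

```python
import numpy as np

def voxelwise_corr(A, B):
    """Pearson correlation of each voxel's activation profile across contrasts.

    A, B: arrays of shape (n_contrasts, n_voxels).
    """
    A_c = A - A.mean(axis=0)
    B_c = B - B.mean(axis=0)
    num = (A_c * B_c).sum(axis=0)
    den = np.sqrt((A_c ** 2).sum(axis=0) * (B_c ** 2).sum(axis=0))
    return num / den

rng = np.random.default_rng(0)
A = rng.normal(size=(10, 4))                       # 10 contrasts, 4 voxels
print(np.allclose(voxelwise_corr(A, 2 * A + 1), 1.0))  # True: affine-invariant
print(np.allclose(voxelwise_corr(A, -A), -1.0))        # True: anti-correlated
```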

from fmralign.metrics import score_voxelwise

average_score = masker.inverse_transform(
    score_voxelwise(left_out_subject, average_subject, masker, loss="corr")
)
template_score = masker.inverse_transform(
    score_voxelwise(
        predictions_from_template, procrustes_template, masker, loss="corr"
    )
)

Plotting the measures

Finally, we plot both scores.

from nilearn import plotting

baseline_display = plotting.plot_stat_map(
    average_score, display_mode="z", vmax=1, cut_coords=[-15, -5]
)
baseline_display.title("Left-out subject correlation with group average")
display = plotting.plot_stat_map(
    template_score, display_mode="z", cut_coords=[-15, -5], vmax=1
)
display.title("Aligned subject correlation with Procrustes template")

We observe that creating a template and aligning a new subject to it yields better inter-subject similarity than plain Euclidean averaging.

Total running time of the script: (4 minutes 27.343 seconds)

Gallery generated by Sphinx-Gallery