Template-based prediction.

In this tutorial, we show how to improve inter-subject similarity using a template computed across multiple source subjects. To this end, we create a template using Procrustes alignment (hyperalignment) and align the target subject to it using shared information. We then compare the voxelwise similarity between the target subject and the template to the similarity between the target subject and the anatomical Euclidean average of the source subjects.

We mostly rely on common Python packages and on nilearn to handle functional data in a clean fashion.

To run this example, you must launch IPython via ipython --matplotlib in a terminal, or use jupyter-notebook.

Retrieve the data

In this example we use the IBC dataset, which includes a large number of different contrast maps for 12 subjects. We download the images for subjects sub-01, sub-02, sub-04, sub-05, sub-06 and sub-07 (or retrieve them if they were already downloaded). imgs is the list of paths to the available statistical images for each subject, df is a dataframe with metadata about each of them, and mask_img is a binary image used to extract grey matter regions.

from fmralign.fetch_example_data import fetch_ibc_subjects_contrasts

imgs, df, mask_img = fetch_ibc_subjects_contrasts(
    ["sub-01", "sub-02", "sub-04", "sub-05", "sub-06", "sub-07"]
)
[_add_readme_to_default_data_locations] Added README.md to
/home/runner/nilearn_data
[get_dataset_dir] Dataset created in /home/runner/nilearn_data/ibc
[fetch_single_file] Downloading data from https://osf.io/pcvje/download ...
[fetch_single_file]  ...done. (4 seconds, 0 min)
[uncompress_file] Extracting data from
/home/runner/nilearn_data/ibc/8e275a34345802c5c273312d85957d6c/download...
[uncompress_file] .. done.
(similar download, progress and extraction messages are printed for the remaining archives)
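To get a sense of the metadata describing each statistical map, you can inspect df. This is an optional sketch, not part of the original example; the exact column names (e.g. subject or contrast identifiers) are whatever the fetcher returns, so check df.columns if unsure.

# Optional: peek at the metadata dataframe returned by the fetcher.
print(df.columns)
print(df.head())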

Define a masker

We define a nilearn masker that will be used to handle relevant data.

For more information, visit: http://nilearn.github.io/manipulating_images/masker_objects.html

from nilearn.maskers import NiftiMasker

masker = NiftiMasker(mask_img=mask_img).fit()
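As a quick, optional sanity check (a sketch, not part of the original example), the fitted masker converts a 3D contrast image into a 2D array of masked voxel values and can map it back to image space.

# Transform the first contrast map of sub-01 into a (1, n_voxels) array,
# then map it back to a Niimg in the original space.
masked_example = masker.transform(imgs[0][0])
print(masked_example.shape)
example_img = masker.inverse_transform(masked_example)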

Prepare the data

For each subject, we will use two series of contrasts acquired during two independent sessions with a different phase encoding: antero-posterior (AP) or postero-anterior (PA).

# To infer a template for subjects sub-01 to sub-06 for both AP and PA data,
# we make a list of 4D niimgs from our list of lists of files containing 3D images

from nilearn.image import concat_imgs

template_train = []
for i in range(5):
    template_train.append(concat_imgs(imgs[i]))

# sub-07 (the 6th subject, at index 5 in the list) will be our left-out subject.
# We make a single 4D Niimg from its list of 3D filenames.

left_out_subject = concat_imgs(imgs[5])
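As an optional check (assuming every subject comes with the same number of contrast maps, as in the IBC data used here), each training entry is a 4D Niimg whose last dimension is the number of contrasts.

# 5 training subjects, each a 4D image; the left-out subject has the
# same number of contrast maps in its last dimension.
print(len(template_train))
print(template_train[0].shape)
print(left_out_subject.shape)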

Compute a baseline (average of subjects)

We create an image with as many contrasts as any subject, in which each contrast map is the voxelwise average of the corresponding maps across all training subjects.

import numpy as np

masked_imgs = [masker.transform(img) for img in template_train]
average_img = np.mean(masked_imgs, axis=0)
average_subject = masker.inverse_transform(average_img)

Create a template from the training subjects.

We define an estimator using the class TemplateAlignment:
  • We align the whole brain through ‘multiple’ local alignments.

  • These alignments are calculated on a parcellation of the brain into 50 pieces; this parcellation creates groups of functionally similar voxels.

  • The template is created iteratively: all subjects' data are aligned to a common space, the template is inferred from them, and the subjects are aligned again to this new template space.

from fmralign.template_alignment import TemplateAlignment

# We use Procrustes/scaled orthogonal alignment method
template_estim = TemplateAlignment(
    n_pieces=50,
    alignment_method="scaled_orthogonal",
    mask=masker,
)
template_estim.fit(template_train)
procrustes_template = template_estim.template
[TemplateAlignment.fit] Resampling mask
(during fitting, UserWarnings are printed about masker parameters overriding estimator defaults and about parcels wider than 1000 voxels, which can slow down alignment)
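The inferred template can be handled like any subject's data. Assuming, as its later use with score_voxelwise and the masker suggests, that the template attribute is a 4D Niimg with one volume per contrast, we can inspect its shape.

# The template is a 4D image with one volume per contrast map.
print(procrustes_template.shape)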

Predict new data for left-out subject

We predict the contrasts of the left-out subject using the template we just created, through the transform method of the estimator: it takes the left-out subject's data as input, computes a pairwise alignment with the template, and returns the aligned data.

predictions_from_template = template_estim.transform(left_out_subject)
[TemplateAlignment.wrapped] Resampling mask
(a UserWarning about int64 data being converted to int32 for Nifti compatibility and the masker-parameter warnings seen above are printed again)

Score the baseline and the prediction

We use a utility scoring function to measure the voxelwise correlation between images. That is, for each voxel, we compute the correlation between its activation profile in the left-out subject's data (raw, or aligned to the template) and in the corresponding reference (the group average, or the Procrustes template), to see whether template-based alignment improved inter-subject similarity.

from fmralign.metrics import score_voxelwise

average_score = masker.inverse_transform(
    score_voxelwise(left_out_subject, average_subject, masker, loss="corr")
)
template_score = masker.inverse_transform(
    score_voxelwise(
        predictions_from_template, procrustes_template, masker, loss="corr"
    )
)
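Conceptually, the "corr" loss boils down to a per-voxel Pearson correlation across contrasts. Below is a minimal NumPy sketch of that idea, operating on masked 2D arrays; it is an illustration, not the library's implementation.

import numpy as np

def voxelwise_correlation(data_a, data_b):
    # data_a, data_b: arrays of shape (n_contrasts, n_voxels)
    a = data_a - data_a.mean(axis=0)
    b = data_b - data_b.mean(axis=0)
    numerator = (a * b).sum(axis=0)
    denominator = np.sqrt((a**2).sum(axis=0) * (b**2).sum(axis=0))
    return numerator / denominator  # one correlation value per voxel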

Plotting the measures

Finally, we plot both scores.

from nilearn import plotting

baseline_display = plotting.plot_stat_map(
    average_score, display_mode="z", vmax=1, cut_coords=[-15, -5]
)
baseline_display.title("Left-out subject correlation with group average")
display = plotting.plot_stat_map(
    template_score, display_mode="z", cut_coords=[-15, -5], vmax=1
)
display.title("Aligned subject correlation with Procrustes template")
(Two axial stat maps are displayed: the left-out subject's correlation with the group average, and the aligned subject's correlation with the Procrustes template.)

We observe that creating a template and aligning a new subject to it yields better inter-subject similarity than regular Euclidean averaging.

Total running time of the script: (4 minutes 48.694 seconds)
