Note

This page is reference documentation: it only explains the class signature, not how to use it. Please refer to the user guide for the big picture.

3.2.1. fmralign.template_alignment.TemplateAlignment

class fmralign.template_alignment.TemplateAlignment(alignment_method='identity', n_pieces=1, clustering='kmeans', scale_template=False, n_iter=2, save_template=None, n_bags=1, mask=None, smoothing_fwhm=None, standardize=False, detrend=None, target_affine=None, target_shape=None, low_pass=None, high_pass=None, t_r=None, memory=Memory(location=None), memory_level=0, n_jobs=1, verbose=0)[source]

Decompose the source images into regions and summarize subjects' information in a template, then use pairwise alignment to predict new contrasts for a target subject.

__init__(alignment_method='identity', n_pieces=1, clustering='kmeans', scale_template=False, n_iter=2, save_template=None, n_bags=1, mask=None, smoothing_fwhm=None, standardize=False, detrend=None, target_affine=None, target_shape=None, low_pass=None, high_pass=None, t_r=None, memory=Memory(location=None), memory_level=0, n_jobs=1, verbose=0)[source]
Parameters:
alignment_method: string

Algorithm used to perform alignment between X_i and Y_i:

  • either ‘identity’, ‘scaled_orthogonal’, ‘optimal_transport’, ‘ridge_cv’, ‘permutation’, ‘diagonal’,

  • or an instance of one of the alignment classes (imported from functional_alignment.alignment_methods).

n_pieces: int, optional (default = 1)

Number of regions in which the data is parcellated for alignment. If 1, the alignment is done on full-scale data. If > 1, the voxels are clustered and alignment is performed separately on each cluster, applied to X and Y.

clustering: string or 3D Niimg, optional (default = ‘kmeans’)

If a string, one of ‘kmeans’, ‘ward’, ‘rena’ or ‘hierarchical_kmeans’: the method used to cluster voxels based on functional signal, passed to nilearn.regions.Parcellations. If a 3D Niimg, the image is used as a predefined clustering, and n_bags and n_pieces are then ignored.

scale_template: boolean, optional (default = False)

Rescale the template after each inference so that it keeps the same norm as the average of the training images.

n_iter: int, optional (default = 2)

Number of iterations in the alternate minimization. Each img is aligned n_iter times to the evolving template. If n_iter = 0, the template is simply the mean of the input images.

save_template: None or string, optional (default = None)

If not None, path to which the template will be saved.

n_bags: int, optional (default = 1)

If 1, a single estimator is fitted. If > 1, this is the number of bagged parcellations and estimators used.

mask: Niimg-like object, instance of NiftiMasker or MultiNiftiMasker, optional (default = None)

Mask to be used on data. If an instance of masker is passed, then its mask will be used. If no mask is given, it will be computed automatically by a MultiNiftiMasker with default parameters.

smoothing_fwhm: float, optional (default = None)

If smoothing_fwhm is not None, it gives the size in millimeters of the spatial smoothing to apply to the signal.

standardize: boolean, optional (default = False)

If standardize is True, the time series are centered and normed: their variance is set to 1 in the time dimension.

detrend: boolean, optional (default = None)

This parameter is passed to nilearn.signal.clean. Please see the related documentation for details

target_affine: 3x3 or 4x4 matrix, optional (default = None)

This parameter is passed to nilearn.image.resample_img. Please see the related documentation for details.

target_shape: 3-tuple of integers, optional (default = None)

This parameter is passed to nilearn.image.resample_img. Please see the related documentation for details.

low_pass: None or float, optional (default = None)

This parameter is passed to nilearn.signal.clean. Please see the related documentation for details.

high_pass: None or float, optional (default = None)

This parameter is passed to nilearn.signal.clean. Please see the related documentation for details.

t_r: float, optional (default = None)

This parameter is passed to nilearn.signal.clean. Please see the related documentation for details.

memory: instance of joblib.Memory or string (default = None)

Used to cache the masking process and results of algorithms. By default, no caching is done. If a string is given, it is the path to the caching directory.

memory_level: integer, optional (default = 0)

Rough estimator of the amount of memory used by caching. Higher value means more memory for caching.

n_jobs: integer, optional (default = 1)

The number of CPUs to use to do the computation. -1 means ‘all CPUs’, -2 ‘all CPUs but one’, and so on.

verbose: integer, optional (default = 0)

Indicate the level of verbosity. By default, nothing is printed.
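
The following is a minimal sketch of how the constructor documented above might be called; the argument values are illustrative only and are not taken from this reference.

>>> from fmralign.template_alignment import TemplateAlignment
>>> model = TemplateAlignment(
...     alignment_method="ridge_cv",  # one of the strings listed above, or an alignment class instance
...     n_pieces=150,                 # parcellate the data into 150 regions before aligning
...     clustering="ward",            # clustering method passed to nilearn parcellation
...     n_iter=2,                     # alternate-minimization iterations for the template
...     n_jobs=-1,                    # use all CPUs
... )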

fit(imgs)[source]

Learn a template from source images, using alignment.

Parameters:
imgs: List of 4D Niimg-like or List of lists of 3D Niimg-like

Source subjects' data. Each element of the parent list is one subject's data, and all must have the same length (n_samples).

Attributes:
self.template: 4D Niimg object

Length: n_samples

Returns:
self
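
A hypothetical call to fit, assuming model is the instance sketched above; the file names below are placeholders, and each is assumed to point to a 4D image with the same number of samples.

>>> source_imgs = [                # placeholder file names, not from this reference
...     "sub-01_contrasts.nii.gz",
...     "sub-02_contrasts.nii.gz",
...     "sub-03_contrasts.nii.gz",
... ]
>>> model.fit(source_imgs)         # learns the template by alternate minimization
>>> template = model.template      # 4D Niimg with n_samples volumes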
fit_transform()[source]

Parent method not applicable here. Will raise AttributeError if called.

set_fit_request(*, imgs: bool | None | str = '$UNCHANGED$') → TemplateAlignment

Request metadata passed to the fit method.

Note that this method is only relevant if enable_metadata_routing=True (see sklearn.set_config()). Please see the User Guide on how the routing mechanism works.

The options for each parameter are:

  • True: metadata is requested, and passed to fit if provided. The request is ignored if metadata is not provided.

  • False: metadata is not requested and the meta-estimator will not pass it to fit.

  • None: metadata is not requested, and the meta-estimator will raise an error if the user provides it.

  • str: metadata should be passed to the meta-estimator with this given alias instead of the original name.

The default (sklearn.utils.metadata_routing.UNCHANGED) retains the existing request. This allows you to change the request for some parameters and not others.

Added in version 1.3.

Note

This method is only relevant if this estimator is used as a sub-estimator of a meta-estimator, e.g. used inside a Pipeline. Otherwise it has no effect.

Parameters:
imgs: str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED

Metadata routing for imgs parameter in fit.

Returns:
self: object

The updated object.
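
For instance, when routing is enabled and this estimator is nested in a meta-estimator, the request could be set as in the sketch below; this follows the generic sklearn metadata-routing API described above and is not specific to fmralign.

>>> import sklearn
>>> sklearn.set_config(enable_metadata_routing=True)
>>> model = model.set_fit_request(imgs=True)   # ask meta-estimators to route `imgs` to fit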

set_transform_request(*, imgs: bool | None | str = '$UNCHANGED$', test_index: bool | None | str = '$UNCHANGED$', train_index: bool | None | str = '$UNCHANGED$') → TemplateAlignment

Request metadata passed to the transform method.

Note that this method is only relevant if enable_metadata_routing=True (see sklearn.set_config()). Please see the User Guide on how the routing mechanism works.

The options for each parameter are:

  • True: metadata is requested, and passed to transform if provided. The request is ignored if metadata is not provided.

  • False: metadata is not requested and the meta-estimator will not pass it to transform.

  • None: metadata is not requested, and the meta-estimator will raise an error if the user provides it.

  • str: metadata should be passed to the meta-estimator with this given alias instead of the original name.

The default (sklearn.utils.metadata_routing.UNCHANGED) retains the existing request. This allows you to change the request for some parameters and not others.

Added in version 1.3.

Note

This method is only relevant if this estimator is used as a sub-estimator of a meta-estimator, e.g. used inside a Pipeline. Otherwise it has no effect.

Parameters:
imgs: str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED

Metadata routing for imgs parameter in transform.

test_index: str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED

Metadata routing for test_index parameter in transform.

train_index: str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED

Metadata routing for train_index parameter in transform.

Returns:
self: object

The updated object.

transform(imgs, train_index, test_index)[source]

Learn the alignment between a new subject and the template calculated during fit, then predict other conditions for this new subject. The alignment is learnt between imgs and the conditions in the template indexed by train_index. Predictions correspond to the conditions in the template indexed by test_index.

Parameters:
imgs: List of 3D Niimg-like objects

Target subjects' known data. Every img must have the same length (number of samples) as train_index.

train_index: list of ints

Indexes of the 3D samples used to map each img to the template. Every index should be smaller than the number of images in the template.

test_index: list of ints

Indexes of the 3D samples to predict from the template and the mapping. Every index should be smaller than the number of images in the template.

Returns:
predicted_imgs: List of 3D Niimg-like objects

Target subjects' predicted data. Each Niimg has the same length as the list test_index.
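
As a hypothetical end-to-end sketch of transform, assuming model has been fitted as above; the file names are placeholders and the indices are arbitrary.

>>> target_imgs = [                     # new subject's known data, ordered to match train_index
...     "sub-04_contrast-a.nii.gz",
...     "sub-04_contrast-b.nii.gz",
... ]
>>> train_index = [0, 1]                # template samples the new subject's data is aligned to
>>> test_index = [2, 3]                 # template samples to predict for this subject
>>> predicted_imgs = model.transform(target_imgs, train_index, test_index)
>>> # predicted_imgs holds the predicted data for the samples in test_index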