Three examples of PALM: Permutation Analysis of Linear Models
Sooner or later we want to draw conclusions from our data using statistical inference, be it to derive parameter estimates or to find the (often condemned) p-value of our effect. For both behavioural and neuroimaging data, however, we might find that the data don’t meet the assumptions of parametric tests such as the t-test or ANOVA. There is a range of non-parametric alternatives (Mann-Whitney U test, Friedman test, etc.), and one widely used approach is permutation testing.
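To make the idea concrete, here is a minimal sketch of a label-shuffling permutation test for a difference in group means. The numbers are made up for illustration, and PALM implements this kind of test with far more care and many more features:
# minimal permutation test sketch (made-up data)
import numpy as np
rng = np.random.default_rng(0)
# hypothetical measurements for 6 controls and 6 patients
controls = np.array([5.1, 4.8, 5.6, 5.0, 4.9, 5.3])
patients = np.array([4.2, 4.5, 4.0, 4.6, 4.3, 4.1])
data = np.concatenate((controls, patients))
labels = np.array([0] * 6 + [1] * 6)
# observed difference in group means
observed = data[labels == 1].mean() - data[labels == 0].mean()
# null distribution: shuffle the group labels and recompute the statistic
n_perm = 5000
null_diffs = np.empty(n_perm)
for i in range(n_perm):
    shuffled = rng.permutation(labels)
    null_diffs[i] = data[shuffled == 1].mean() - data[shuffled == 0].mean()
# two-tailed p-value, counting the observed statistic as one permutation
p_value = (1 + np.sum(np.abs(null_diffs) >= np.abs(observed))) / (1 + n_perm)
print(f"observed difference = {observed:.3f}, p = {p_value:.4f}")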
A great tool to perform permutation tests is PALM (Permutation Analysis of Linear Models [1]). One of the advantages of PALM is its flexibility regarding input data: it works with both volumetric and surface neuroimaging data. Another key feature is that you have strong control over the shuffling strategy, even for complex designs. Furthermore, PALM can correct for multiple testing over multiple contrasts or modalities. In general, the statistics are handled very carefully in PALM, and you can specify a great range of options, which are detailed in the User Guide.
Below, I will talk you through three examples of how PALM can be used with different study designs and different input data.
Example 1) Simple t-test (with input data: 1D vector)
Let’s consider the situation where you compare a single measurement in 6 control subjects and 6 patients. Our input data is a vector with 12 elements, combining the data of both groups. We store the vector as a column in the file ‘my_input.csv’:
7.05
4.44
5.32
etc...
For the design, we construct a linear model consisting of a single explanatory variable (EV) for the factor ‘group’, i.e. -1 for control and 1 for patient. In addition, we include a constant EV to model the grand mean. The design is stored in the file ‘my_design.csv’. I won’t go into the background of how to build a general linear model (GLM) here, but the FSL wiki contains a very helpful page on this topic.
# build the design (python code):
import numpy as np
n_subs = 6
n_groups = 2
ev_group = np.array([-1,1]).repeat(n_subs).reshape(-1, 1)
grand_mean = np.ones(n_subs * n_groups).reshape(-1, 1)
my_design = np.hstack((ev_group, grand_mean))
np.savetxt('my_design.csv', my_design, fmt='%1.0f', delimiter=",")
# the design file will look like this:
-1, 1
-1, 1
-1, 1
-1, 1
-1, 1
-1, 1
1, 1
1, 1
1, 1
1, 1
1, 1
1, 1
Because we are running a two-tailed t-test, we need only one row for the contrast of interest (otherwise we would add a second row with the sign flipped to test the opposite direction). The contrast is stored in ‘my_contrast.csv’:
1, 0
PALM can be called from the command line using the following settings:
$path_to_palm/palm -i my_input.csv -d my_design.csv -t my_contrast.csv -o my_output -n 5000 -twotail
Our output files are the following:
my_output_dat_tstat.csv
my_output_dat_tstat_fwep.csv
my_output_dat_tstat_uncp.csv
my_output_elapsed.csv
my_output_palmconfig.txt
The t-statistic is stored in my_output_dat_tstat.csv and the associated p-value in my_output_dat_tstat_uncp.csv. Family-wise error correction does not have an effect in this case, so the file my_output_dat_tstat_fwep.csv will be identical.
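To inspect the result, the output files can be read straight back into Python. A minimal sketch, using the output prefix from the call above; for this one-column input each file holds a single value:
# read the PALM outputs of example 1
import numpy as np
tstat = float(np.loadtxt('my_output_dat_tstat.csv', delimiter=","))
p_unc = float(np.loadtxt('my_output_dat_tstat_uncp.csv', delimiter=","))
print(f"t = {tstat:.3f}, uncorrected p = {p_unc:.4f}")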
Example 2) 2 x 3 Factorial design (with input data: volumetric imaging files)
In the following example, we consider the situation where we have an additional factor in the design, for example an experimental condition where the subjects received a drug: two of the six subjects in each group received a placebo, two received drug1 and two received drug2 (this is just an illustrative example; a sample size of 2 would be ridiculous for a drug study!). In those 12 subjects we acquired an anatomical brain scan, and we are interested in whether and where the brains of patients differ, and whether the drugs have an effect on the brain and on the difference in patients. Statistically speaking, we are looking at the voxel-wise main effect of the factor ‘group’ (with two levels: control and patient), the main effect of the factor ‘condition’ (with three levels: placebo, drug1, drug2) and the interaction effect of ‘group’ and ‘condition’.
Let’s assume that the brain scans have been registered to MNI space and they are stored according to BIDS format in $my_study_dataset/derivatives/sub-control01/MNINonLinear/sub-control01_T1w.nii.gz and $my_study_dataset/derivatives/sub-patient01/MNINonLinear/sub-patient01_T1w.nii.gz, and so on for the other subjects. To make the scans suitable as input for PALM, we have to merge them into a single 4-dimensional file. Here, I use FSL [2] tools to manipulate the files:
# initialize the file of merged volumes with one dummy volume, which will be removed later
imcp $my_study_dataset/derivatives/sub-control01/MNINonLinear/sub-control01_T1w.nii.gz my_input.nii.gz
# loop over control subjects and patients, appending each scan to the 4D file
for group in control patient; do
  for sub in 01 02 03 04 05 06; do
    fslmerge -t my_input.nii.gz my_input.nii.gz $my_study_dataset/derivatives/sub-${group}${sub}/MNINonLinear/sub-${group}${sub}_T1w.nii.gz
  done
done
# remove the first dummy volume
pdim=$(fslval my_input.nii.gz "dim4")
fslroi my_input.nii.gz my_input.nii.gz 1 $(( $pdim - 1 ))
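It's worth checking that the merged file really contains 12 volumes, in the same order as the rows of the design matrix. A quick sketch using nibabel (not otherwise used in this post):
# verify the merged 4D file
import nibabel as nib
img = nib.load('my_input.nii.gz')
print(img.shape)  # expect (x, y, z, 12): one volume per subject, controls first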
The design file ‘my_design.csv’ will start with one EV for the factor ‘group’ (column 1), like above for the t-test. The next two EVs model the main effect of the factor ‘condition’. The interaction effect is modelled by two further EVs (columns 4 and 5), and the last column models the grand mean:
# build the design in python:
import numpy as np
n_subs = 6
n_groups = 2
n_subs_in_con = 2
ev_group = np.array([-1,1]).repeat(n_subs).reshape(-1, 1)
ev_condition_1 = np.tile(np.array([-1, 0, 1]).repeat(n_subs_in_con),n_groups).reshape(-1, 1)
ev_condition_2 = np.tile(np.array([-1, 1, 0]).repeat(n_subs_in_con),n_groups).reshape(-1, 1)
ev_interaction_1 = ev_group * ev_condition_1
ev_interaction_2 = ev_group * ev_condition_2
grand_mean = np.ones(n_subs * n_groups).reshape(-1, 1)
my_design = np.hstack((ev_group, ev_condition_1, ev_condition_2, ev_interaction_1, ev_interaction_2, grand_mean))
np.savetxt('my_design.csv', my_design, fmt='%1.0f', delimiter=",")
# the resulting design file will look like this:
-1, -1, -1, 1, 1, 1
-1, -1, -1, 1, 1, 1
-1, 0, 1, 0, -1, 1
-1, 0, 1, 0, -1, 1
-1, 1, 0, -1, 0, 1
-1, 1, 0, -1, 0, 1
1, -1, -1, -1, -1, 1
1, -1, -1, -1, -1, 1
1, 0, 1, 0, 1, 1
1, 0, 1, 0, 1, 1
1, 1, 0, 1, 0, 1
1, 1, 0, 1, 0, 1
The contrast file ‘my_contrast.csv’ contains one row for each of the EVs we defined for the two main effects and the interaction effect:
1, 0, 0, 0, 0, 0
0, 1, 0, 0, 0, 0
0, 0, 1, 0, 0, 0
0, 0, 0, 1, 0, 0
0, 0, 0, 0, 1, 0
As the factor ‘condition’ has multiple levels, we want to derive the F-statistic associated with the two contrasts for this factor, and likewise for the two contrasts of the interaction effect. We specify this in the file ‘my_f_tests.csv’, which contains one row selecting the two condition contrasts and one row selecting the two interaction contrasts. Note that PALM doesn’t perform rank-1 F-tests, such as the one that would correspond to the main effect of ‘group’: that factor only has two levels, so the t-contrast already tests it.
This is what the F-contrast file will look like:
0, 1, 1, 0, 0
0, 0, 0, 1, 1
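Analogous to the design code above, the two contrast files can also be written from Python. This is just a sketch reproducing the listings above with numpy:
# build the contrast and F-test files in python:
import numpy as np
# one t-contrast per EV of interest: group, 2 x condition, 2 x interaction
my_contrast = np.eye(5, 6)
np.savetxt('my_contrast.csv', my_contrast, fmt='%1.0f', delimiter=",")
# each row of the F-test file selects the t-contrasts that are tested jointly
my_f_tests = np.array([[0, 1, 1, 0, 0],   # main effect of 'condition'
                       [0, 0, 0, 1, 1]])  # interaction 'group' x 'condition'
np.savetxt('my_f_tests.csv', my_f_tests, fmt='%1.0f', delimiter=",")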
The PALM call looks similar to the one above, but it includes the -f option to perform the F-tests. We also include the -noniiclass option, because we are using a gzipped nifti image as input:
$path_to_palm/palm -i my_input.nii.gz -d my_design.csv -t my_contrast.csv -f my_f_tests.csv -o my_output -n 5000 -twotail -noniiclass
As output we get the following files:
my_output_vox_fstat_c6.nii.gz
my_output_vox_fstat_c7.nii.gz
my_output_vox_fstat_fwep_c6.nii.gz
my_output_vox_fstat_fwep_c7.nii.gz
my_output_vox_fstat_uncp_c6.nii.gz
my_output_vox_fstat_uncp_c7.nii.gz
my_output_vox_tstat_c1.nii.gz
my_output_vox_tstat_c2.nii.gz
my_output_vox_tstat_c3.nii.gz
my_output_vox_tstat_c4.nii.gz
my_output_vox_tstat_c5.nii.gz
my_output_vox_tstat_fwep_c1.nii.gz
my_output_vox_tstat_fwep_c2.nii.gz
my_output_vox_tstat_fwep_c3.nii.gz
my_output_vox_tstat_fwep_c4.nii.gz
my_output_vox_tstat_fwep_c5.nii.gz
my_output_vox_tstat_uncp_c1.nii.gz
my_output_vox_tstat_uncp_c2.nii.gz
my_output_vox_tstat_uncp_c3.nii.gz
my_output_vox_tstat_uncp_c4.nii.gz
my_output_vox_tstat_uncp_c5.nii.gz
As you can see, we get nifti images that store voxel-wise t-statistics and p-values for each t-test specified in the contrast file (c1 - c5). The corrected p-values associated with the main effect of ‘group’ are stored in my_output_vox_tstat_fwep_c1.nii.gz. For the F-tests (c6, c7), the F-statistic and the associated p-values are reported. The corrected p-values for the main effect of ‘condition’ are stored in my_output_vox_fstat_fwep_c6.nii.gz and for the interaction effect in my_output_vox_fstat_fwep_c7.nii.gz.
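If you want a quick look at the corrected maps without opening a viewer, they can be loaded with nibabel. This is only a sketch and assumes that the _fwep files contain plain p-values, as with the call above (options such as -logp would change the interpretation):
# inspect a voxel-wise FWE-corrected p-map
import nibabel as nib
import numpy as np
# FWE-corrected p-values for the main effect of 'group' (t-contrast c1)
pmap = nib.load('my_output_vox_tstat_fwep_c1.nii.gz').get_fdata()
# count voxels surviving correction at p < 0.05;
# zero-valued voxels are excluded, as they presumably lie outside the analysis mask
n_sig = np.sum((pmap > 0) & (pmap < 0.05))
print(f"{n_sig} voxels with FWE-corrected p < 0.05")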
Example 3) Hemispheric differences (with input data: metric surface files)
For the last example, let’s assume we want to test if and where a specific resting-state network differs between the two hemispheres. We acquired data in 5 subjects and derived one metric file per hemisphere. As the data from both hemispheres come from the same subjects, this is a repeated-measures design. That’s why we model the random effect associated with each subject as an additional column per subject, and we no longer need to model the grand mean. Because this is a within-subject design, we can’t assume that the hemisphere labels are freely exchangeable across all observations (i.e. rows). PALM offers the possibility to include information about exchangeability blocks and variance groups. In the example below we will write out a csv file with a single column that defines, for each observation, the block within which the labels may be exchanged.
# build the design in python:
import numpy as np
n_subs = 5
ev_hemisphere = np.array([-1,1]).repeat(n_subs).reshape(-1,1)
random_effects = np.identity(n_subs)
my_design = np.hstack((ev_hemisphere, np.vstack((random_effects, random_effects))))
np.savetxt('my_design.csv', my_design, fmt='%1.0f', delimiter=",")
# the design file will look like this:
-1, 1, 0, 0, 0, 0
-1, 0, 1, 0, 0, 0
-1, 0, 0, 1, 0, 0
-1, 0, 0, 0, 1, 0
-1, 0, 0, 0, 0, 1
1, 1, 0, 0, 0, 0
1, 0, 1, 0, 0, 0
1, 0, 0, 1, 0, 0
1, 0, 0, 0, 1, 0
1, 0, 0, 0, 0, 1
For the contrast file, we are only interested in the first EV.
1, 0, 0, 0, 0, 0
In order to make the metric surface data suitable as input for PALM, we have to concatenate the files, similar to the volumetric example above. Here, I’m using Connectome Workbench [3] commands to manipulate the files. I’m assuming that the metric surface data have been resampled to the standard 32k_fs_LR surface mesh and that an example filename would be sub-01.L.restingstate.32k_fs_LR.func.gii.
# compose a string to pass as input to the workbench merging function
command_str=""
# initialize the file that stores information about exchangeability blocks
> ex_blocks.csv
# loop over hemispheres and subjects
for hemi in L R ; do
  # reset the exchangeability index at the start of each hemisphere
  eb_ind=1
  for sub in 01 02 03 04 05; do
    subject_file=$my_study_dataset/derivatives/sub-$sub/MNINonLinear/fsaverage_LR32k/sub-${sub}.${hemi}.restingstate.32k_fs_LR.func.gii
    # label right-hemisphere data as left-hemisphere so the files can be merged
    if [[ $hemi == 'R' ]] ; then
      wb_command -set-structure $subject_file CORTEX_LEFT
    fi
    # append this subject's file to the command string
    command_str="$command_str -metric $subject_file"
    # add this subject's exchangeability index to the file
    echo $eb_ind >> ex_blocks.csv
    # increase the exchangeability index for the next subject
    eb_ind=$(( $eb_ind + 1 ))
  done
done
# pass command string to the workbench function
wb_command -metric-merge my_input.func.gii $command_str
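For reference, the resulting ex_blocks.csv contains one exchangeability index per observation, in the same order as the rows of the design matrix (subjects 1-5 for the left hemisphere, then the same indices again for the right hemisphere):
1
2
3
4
5
1
2
3
4
5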
Having created these files, we can call PALM using the following command:
$path_to_palm/palm -i my_input.func.gii -d my_design.csv -t my_contrast.csv -o my_output -n 5000 -twotail -eb ex_blocks.csv -within
The PALM call will produce the following output files:
my_output_dpv_tstat.func.gii
my_output_dpv_tstat_fwep.gii
my_output_dpv_tstat_uncp.gii
For the p-values associated with the effect of hemisphere, the file of interest is my_output_dpv_tstat_uncp.gii. In order to display the result on a surface mesh, the file extension needs to be changed (the file renamed) to my_output_dpv_tstat_uncp.func.gii.
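Alternatively, if you just want to inspect the numbers, the metric file can be copied to the new name and read with nibabel. A sketch, again assuming plain p-values:
# read per-vertex p-values from the GIFTI metric file
import shutil
import numpy as np
import nibabel as nib
# copy the file under the .func.gii name (or simply rename it)
shutil.copy('my_output_dpv_tstat_uncp.gii', 'my_output_dpv_tstat_uncp.func.gii')
gii = nib.load('my_output_dpv_tstat_uncp.func.gii')
pvals = gii.darrays[0].data
print(f"{np.sum(pvals < 0.05)} of {pvals.size} vertices with uncorrected p < 0.05")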
That’s it!
In this post I described three simple examples of how PALM can be used, but there are many (!) more options available, so check out the PALM User Guide to see which settings you might need. Also, I didn’t go into depth about the GLM design, but there are many other excellent resources online.
Thanks for reading this post :-)
Nicole
References
[1] Winkler, A. M., Ridgway, G. R., Webster, M. A., Smith, S. M. & Nichols, T. E. Permutation inference for the general linear model. NeuroImage 92, 381–397 (2014).
[2] Jenkinson, M., Beckmann, C. F., Behrens, T. E. J., Woolrich, M. W. & Smith, S. M. FSL. NeuroImage 62, 782–790 (2012).
[3] https://www.humanconnectome.org/software/connectome-workbench