Welcome to Navigating fMRI analysis techniques: a practical guide!
How would you like to begin?
Contact: scottibrain@gmail.com
PLEASE NOTE THAT THIS WEBSITE IS NOT FINISHED. PLEASE CONTACT AUTHORS FOR INFORMATION.
NavigateFMRI (or, Navigating fMRI analysis techniques: a practical guide) was developed by Paul S. Scotti, Jiageng Chen, Xiaoli Zhang, and Julie D. Golomb to help researchers better understand the currently available fMRI analysis techniques for exploring fMRI brain data and testing hypotheses about the brain.
More specifically, the goal of this tool is to review the most widely used post-processing fMRI methods, including both basic and advanced computational techniques, in an intuitive and hands-on manner. We provide a flowchart that guides the reader to the most appropriate analysis methods, and then we supply simulated data and example Python code to implement each method. Unlike other fMRI resources (we highly recommend Brainiak and fmri4newbies), this web tool focuses on the big picture--what are the different methods, when would you use each method, how do the methods relate to each other and the underlying research question, and what are the basic steps involved in each method (i.e., breadth over depth).
There are many stages to running a complete fMRI experiment, including experimental design, data collection, data pre-processing (e.g., motion correction), data processing (e.g., general linear model estimation), and finally post-processing analysis (e.g., univariate analysis) to reach a conclusion about the brain. Because so many steps precede the post-processing analyses, teaching these methods often becomes confusing even though the steps themselves are relatively straightforward. For instance, functional connectivity is just the correlation of time series, and representational similarity analysis is just the correlation of conditions! This intuitive understanding is often lost because, by the time a researcher reaches the post-processing stage, they may not have easy access to their data (e.g., interpreting the contents of an SPM.mat file is a cruel fate) and can only reach it through package- and method-specific toolboxes that further obscure the basic steps performed under the hood.
This tool can help readers to better understand the main ideas and core implementation of a variety of basic and advanced fMRI methods without worrying about the other fMRI stages such as pre-processing. We accomplish this through easy-to-follow flowcharts, high-level summaries of each method, and practical Python examples using simulated data (we only use standard Python packages in our examples--no custom datasets, packages, or toolboxes need to be installed).
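That "it's just correlations" intuition can be sketched in a few lines of NumPy. This is a minimal illustration using random numbers standing in for real time series and voxel patterns; the variable names and coupling strengths are made up for this sketch and do not come from any dataset on this site:

```python
import numpy as np

rng = np.random.default_rng(0)

# Functional connectivity: correlate two regions' time series.
roi_a = rng.standard_normal(100)                 # 100 TRs from "ROI A"
roi_b = 0.5 * roi_a + rng.standard_normal(100)   # "ROI B", partially coupled to A
fc = np.corrcoef(roi_a, roi_b)[0, 1]             # one connectivity value

# Representational similarity: correlate voxel patterns across conditions.
pattern_comedy = rng.standard_normal(50)                    # 50-voxel pattern
pattern_horror = pattern_comedy + rng.standard_normal(50)   # related pattern
rsa = np.corrcoef(pattern_comedy, pattern_horror)[0, 1]     # one similarity value

print(fc, rsa)
```

Everything else in these methods (cross-validation, searchlights, model matrices) is scaffolding around correlations like these.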
What kind of research question are you trying to answer?
Work in progress
# general packages
import numpy as np
import matplotlib.pyplot as plt
import scipy as sp
import scipy.stats  # makes sp.stats available
# pull data from GitHub
import requests, io
for array in ['activations','sub_id','cond_id','vox_id']:
    globals()[array] = np.load(io.BytesIO(requests.get(
        'http://paulscotti.github.io/NavigateFMRI/methods/univariate/{}.npy'.format(array)).content))
# overview of the data
num_subjects = len(np.unique(sub_id)) #20
num_conditions = len(np.unique(cond_id)) #2
num_voxels = len(np.unique(vox_id)) #50
plt.imshow(np.reshape(activations[cond_id==0],(num_subjects,num_voxels)),cmap='PuBu')
plt.colorbar()
plt.title("Activations for comedy movies")
plt.yticks([]);plt.ylabel('Subject ID')
plt.xticks([]);plt.xlabel('Voxel ID')
plt.show()
plt.imshow(np.reshape(activations[cond_id==1],(num_subjects,num_voxels)),cmap='OrRd')
plt.colorbar()
plt.title("Activations for horror movies")
plt.yticks([]);plt.ylabel('Subject ID')
plt.xticks([]);plt.xlabel('Voxel ID')
plt.show()
plt.imshow(np.array([np.mean(np.reshape(activations[cond_id==0],(num_subjects,num_voxels)),axis=1)]).T,cmap='PuBu',clim=[0,1])
plt.title("Avg")
plt.yticks([]);plt.xticks([]);plt.colorbar()
plt.show()
plt.imshow(np.array([np.mean(np.reshape(activations[cond_id==1],(num_subjects,num_voxels)),axis=1)]).T,cmap='OrRd',clim=[0,1])
plt.title("Avg")
plt.yticks([]);plt.xticks([]);plt.colorbar()
plt.show()
diffs=np.array([np.mean(np.reshape(activations[cond_id==1],(num_subjects,num_voxels)),axis=1)]).T - np.array([np.mean(np.reshape(activations[cond_id==0],(num_subjects,num_voxels)),axis=1)]).T
plt.imshow(diffs,cmap='bwr',clim=[-1,1])
plt.title("Diff.")
plt.yticks([]);plt.xticks([])
plt.colorbar()
plt.show()
stat = sp.stats.ttest_1samp(diffs.flatten(), 0) # equivalent to a paired t-test on the condition means
print("Horror vs. Comedy (paired t-test):\nt={:.4f}, p={:.4f}".format(stat.statistic,stat.pvalue))
# general packages
import numpy as np
import matplotlib.pyplot as plt
import scipy as sp
import scipy.spatial  # makes sp.spatial available
# pull data from GitHub
import requests, io
for array in ['activations1','activations2','coord']:
    globals()[array] = np.load(io.BytesIO(requests.get(
        'http://paulscotti.github.io/NavigateFMRI/methods/searchlight/{}.npy'.format(array)).content))
# activations1 are activations for comedy movies
# activations2 are activations for horror movies
from mpl_toolkits.mplot3d import Axes3D
fig = plt.figure()
ax = fig.add_subplot(projection='3d')
p = ax.scatter(coord[:,0],
coord[:,1],
coord[:,2],
c=activations1,
marker='s',vmin=0,vmax=2,
cmap='PuBu')
fig.colorbar(p)
plt.title("Activations for comedy movies")
plt.show()
fig = plt.figure()
ax = fig.add_subplot(projection='3d')
p = ax.scatter(coord[:,0],
coord[:,1],
coord[:,2],
c=activations2,
marker='s',vmin=0,vmax=2,
cmap='OrRd')
fig.colorbar(p)
plt.title("Activations for horror movies")
plt.show()
x0 = 12
y0 = 12
z0 = 12
center=np.array([x0,y0,z0])
cutoff=5
distance=sp.spatial.distance.cdist(coord,center.reshape(1,-1)).ravel()
points_in_sphere=coord[distance<cutoff]
fig = plt.figure()
ax = fig.add_subplot(projection='3d')
p = ax.scatter(points_in_sphere[:,0],
points_in_sphere[:,1],
points_in_sphere[:,2],
c=activations1[distance<cutoff],
marker='s',cmap='PuBu',vmin=0,vmax=2)
fig.colorbar(p)
ax.set_xlim3d(0,20)
ax.set_ylim3d(0,20)
ax.set_zlim3d(0,20)
plt.title("Comedy")
plt.show()
fig = plt.figure()
ax = fig.add_subplot(projection='3d')
p = ax.scatter(points_in_sphere[:,0],
points_in_sphere[:,1],
points_in_sphere[:,2],
c=activations2[distance<cutoff],
marker='s',cmap='OrRd',vmin=0,vmax=2)
fig.colorbar(p)
ax.set_xlim3d(0,20)
ax.set_ylim3d(0,20)
ax.set_zlim3d(0,20)
plt.title("Horror")
plt.show()
fig = plt.figure()
ax = fig.add_subplot(projection='3d')
p = ax.scatter(points_in_sphere[:,0],
points_in_sphere[:,1],
points_in_sphere[:,2],
c=activations2[distance<cutoff]-activations1[distance<cutoff],
marker='s',cmap='bwr',vmin=-3,vmax=3)
fig.colorbar(p)
ax.set_xlim3d(0,20)
ax.set_ylim3d(0,20)
ax.set_zlim3d(0,20)
plt.title("Difference")
plt.show()
num_centers = 20*20*20 # one searchlight center per grid point
diffs=np.full(num_centers,np.nan)
coord_tracker=np.full((num_centers,3),np.nan)
cnt=0
for x0 in range(20):
    for y0 in range(20):
        for z0 in range(20):
            center=np.array([x0,y0,z0])
            distance=sp.spatial.distance.cdist(coord,center.reshape(1,-1)).ravel()
            diffs[cnt] = np.abs(np.mean(activations2[distance<cutoff]-activations1[distance<cutoff]))
            coord_tracker[cnt,:]=[x0,y0,z0]
            cnt+=1
print("Largest absolute difference found at x={} y={} z={}".format(coord_tracker[np.argmax(diffs),0],
coord_tracker[np.argmax(diffs),1],
coord_tracker[np.argmax(diffs),2]))
Largest absolute difference found at x=6.0 y=6.0 z=6.0
And now let's take a look at the sphere with the largest difference between conditions! In this way we have explored where the brain was most responsive to task-specific information.
x0 = 6
y0 = 6
z0 = 6
center=np.array([x0,y0,z0])
distance=sp.spatial.distance.cdist(coord,center.reshape(1,-1)).ravel()
points_in_sphere=coord[distance<cutoff]
fig = plt.figure()
ax = fig.add_subplot(projection='3d')
p = ax.scatter(points_in_sphere[:,0],
points_in_sphere[:,1],
points_in_sphere[:,2],
c=activations1[distance<cutoff],
marker='s',cmap='PuBu',vmin=0,vmax=2)
fig.colorbar(p)
ax.set_xlim3d(0,20)
ax.set_ylim3d(0,20)
ax.set_zlim3d(0,20)
plt.title("Comedy")
plt.show()
fig = plt.figure()
ax = fig.add_subplot(projection='3d')
p = ax.scatter(points_in_sphere[:,0],
points_in_sphere[:,1],
points_in_sphere[:,2],
c=activations2[distance<cutoff],
marker='s',cmap='OrRd',vmin=0,vmax=2)
fig.colorbar(p)
ax.set_xlim3d(0,20)
ax.set_ylim3d(0,20)
ax.set_zlim3d(0,20)
plt.title("Horror")
plt.show()
fig = plt.figure()
ax = fig.add_subplot(projection='3d')
p = ax.scatter(points_in_sphere[:,0],
points_in_sphere[:,1],
points_in_sphere[:,2],
c=activations2[distance<cutoff]-activations1[distance<cutoff],
marker='s',cmap='bwr',vmin=-3,vmax=3)
fig.colorbar(p)
ax.set_xlim3d(0,20)
ax.set_ylim3d(0,20)
ax.set_zlim3d(0,20)
plt.title("Difference")
plt.show()
# general packages
import numpy as np
import matplotlib.pyplot as plt
import scipy as sp
import scipy.stats  # makes sp.stats available
from sklearn import svm
# pull data from GitHub
import requests, io
for array in ['activations','sub_id','cond_id','vox_id','block_id','trial_id']:
    globals()[array] = np.load(io.BytesIO(requests.get(
        'http://paulscotti.github.io/NavigateFMRI/methods/mvpa/mvpc/{}.npy'.format(array)).content))
# overview of the data
num_subjects = len(np.unique(sub_id)) #20
num_conditions = len(np.unique(cond_id)) #2
num_voxels = len(np.unique(vox_id)) #50
num_blocks = len(np.unique(block_id)) #10
plt.hist(activations[cond_id==0],color='blue',bins=30,label='Comedy')
plt.hist(activations[cond_id==1],color='red',bins=30,alpha=.5,label='Horror')
plt.yticks([]);plt.ylabel('Frequency')
plt.xlabel("Activation strength")
plt.legend(bbox_to_anchor=(1, 1))
plt.show()
scores=[]
for s in range(num_subjects):
    model = svm.SVC(kernel='linear')
    samples_train = activations[(sub_id==s) & (block_id%2==0)].reshape(-1, 1)
    classes_train = cond_id[(sub_id==s) & (block_id%2==0)].ravel()
    samples_test = activations[(sub_id==s) & (block_id%2==1)].reshape(-1, 1)
    classes_test = cond_id[(sub_id==s) & (block_id%2==1)].ravel()
    model.fit(samples_train,classes_train)
    scores = np.concatenate([scores,[model.score(samples_test,classes_test)]])
plt.scatter(np.arange(len(scores)),scores)
plt.ylim([.0,1.])
plt.ylabel("Proportion correct")
plt.xlabel("Subject ID")
plt.axhline(.5,ls='--',c='k',label='chance')
plt.legend()
plt.show()
print("Avg correct predictions across subs:",np.mean(scores))
stat = sp.stats.ttest_1samp(scores, .5) # test accuracy against chance (0.5); equivalent to the paired test vs. a constant
print("Above-chance prediction accuracy? (t-test):\nt={:.4f}, p={:.4f}".format(stat.statistic,stat.pvalue))
Avg correct predictions across subs: 0.8266
Above-chance prediction accuracy? (t-test): t=9.0744, p=0.0000
s=0
model = svm.SVC(kernel='linear')
samples_train = activations[(sub_id==s) & (block_id%2==0)].reshape(-1, 1)
classes_train = cond_id[(sub_id==s) & (block_id%2==0)].ravel()
samples_test = activations[(sub_id==s) & (block_id%2==1)].reshape(-1, 1)
classes_test = cond_id[(sub_id==s) & (block_id%2==1)].ravel()
model.fit(samples_train,classes_train)
plt.hist(activations[(sub_id==s) & (block_id%2==1) & (cond_id==0)],color='blue',bins=30,label='Comedy')
plt.hist(activations[(sub_id==s) & (block_id%2==1) & (cond_id==1)],color='red',bins=30,alpha=.5,label='Horror')
plt.yticks([]);plt.ylabel('Frequency')
plt.xlabel("Activation strength")
plt.legend(bbox_to_anchor=(1, 1))
xx = np.linspace(-5, 5, 100)
yy = np.linspace(0, 40, 100)
YY, XX = np.meshgrid(yy, xx)
xy = np.vstack([XX.ravel()]).T
Z = model.decision_function(xy).reshape(XX.shape)
plt.title("Subject 1")
plt.contour(XX, YY, Z, colors='k', levels=[0],
linestyles=['--', '-', '--'])
plt.show()
# general packages
import numpy as np
import matplotlib.pyplot as plt
import scipy as sp
import scipy.stats  # makes sp.stats available
# pull data from GitHub
import requests, io
for array in ['activations','sub_id','cond_id','block_id','vox_id']:
    globals()[array] = np.load(io.BytesIO(requests.get(
        'http://paulscotti.github.io/NavigateFMRI/methods/mvpa/mvpcorrelation/{}.npy'.format(array)).content))
# overview of the data
num_subjects = len(np.unique(sub_id)) #20
num_conditions = len(np.unique(cond_id)) #2
num_voxels = len(np.unique(vox_id)) #50
num_blocks = len(np.unique(block_id)) #40
plt.imshow(activations[(sub_id==0)&(cond_id==0)].reshape(num_blocks//2,num_voxels),cmap='PuBu')
plt.title("Comedy (Subject 1)")
plt.ylabel("Block ID")
plt.xlabel("Voxel ID")
plt.colorbar()
plt.show()
plt.imshow(activations[(sub_id==0)&(cond_id==1)].reshape(num_blocks//2,num_voxels),cmap='OrRd')
plt.title("Horror (Subject 1)")
plt.ylabel("Block ID")
plt.xlabel("Voxel ID")
plt.colorbar()
plt.show()
plt.imshow(activations[(sub_id==0)&(cond_id==0)&(block_id%2==1)].reshape(num_blocks//4,num_voxels),cmap='gray')
plt.title("Comedy (Sub 1, odd blocks)")
plt.ylabel("Block")
plt.xlabel("Voxel ID")
plt.colorbar()
plt.show()
plt.imshow(activations[(sub_id==0)&(cond_id==0)&(block_id%2==0)].reshape(num_blocks//4,num_voxels),cmap='gray')
plt.title("Comedy (Sub 1, even blocks)")
plt.ylabel("Block")
plt.xlabel("Voxel ID")
plt.colorbar()
plt.show()
plt.imshow(activations[(sub_id==0)&(cond_id==1)&(block_id%2==1)].reshape(num_blocks//4,num_voxels),cmap='gray')
plt.title("Horror (Sub 1, odd blocks)")
plt.ylabel("Block")
plt.xlabel("Voxel ID")
plt.colorbar()
plt.show()
plt.imshow(activations[(sub_id==0)&(cond_id==1)&(block_id%2==0)].reshape(num_blocks//4,num_voxels),cmap='gray')
plt.title("Horror (Sub 1, even blocks)")
plt.ylabel("Block")
plt.xlabel("Voxel ID")
plt.colorbar()
plt.show()
# for each subject, correlate odd/even runs for each condition
corr = np.full((num_subjects,num_conditions*2),np.nan)
for s in range(num_subjects):
    for c in range(num_conditions*2):
        if c == 0: # Comedy (odd) -> Comedy (even)
            corr[s,c], _ = sp.stats.pearsonr(activations[(sub_id==s)&(cond_id==0)&(block_id%2==1)],
                                             activations[(sub_id==s)&(cond_id==0)&(block_id%2==0)])
        elif c == 1: # Horror (odd) -> Horror (even)
            corr[s,c], _ = sp.stats.pearsonr(activations[(sub_id==s)&(cond_id==1)&(block_id%2==1)],
                                             activations[(sub_id==s)&(cond_id==1)&(block_id%2==0)])
        elif c == 2: # Comedy (odd) -> Horror (even)
            corr[s,c], _ = sp.stats.pearsonr(activations[(sub_id==s)&(cond_id==0)&(block_id%2==1)],
                                             activations[(sub_id==s)&(cond_id==1)&(block_id%2==0)])
        elif c == 3: # Horror (odd) -> Comedy (even)
            corr[s,c], _ = sp.stats.pearsonr(activations[(sub_id==s)&(cond_id==1)&(block_id%2==1)],
                                             activations[(sub_id==s)&(cond_id==0)&(block_id%2==0)])
with plt.rc_context(rc={'font.size': 20, 'figure.figsize': (5,2)}):
    plt.imshow(corr, cmap='gray', aspect='auto')
    plt.ylabel('Subject ID')
    plt.yticks([])
    # column labels match the correlation order computed above
    plt.xticks(np.arange(4),["Comedy\n(Odd)\nComedy\n(Even)","Horror\n(Odd)\nHorror\n(Even)",
                             "Comedy\n(Odd)\nHorror\n(Even)","Horror\n(Odd)\nComedy\n(Even)"],
               fontsize=14, rotation=0, ha="center")
    plt.colorbar()
    plt.title("Correlation coefficients")
    plt.show()
print('From left to right:')
print('° Comedy (odd) & Comedy (even) Avg. across subjects: r={:.4f}'.format(np.mean(corr[:,0])))
print('° Horror (odd) & Horror (even) Avg. across subjects: r={:.4f}'.format(np.mean(corr[:,1])))
print('° Comedy (odd) & Horror (even) Avg. across subjects: r={:.4f}'.format(np.mean(corr[:,2])))
print('° Horror (odd) & Comedy (even) Avg. across subjects: r={:.4f}'.format(np.mean(corr[:,3])))
stat = sp.stats.ttest_rel(corr[:,:2].flatten(),corr[:,2:].flatten())
print("\nSignificantly different correlations betw first two columns and last two columns? (paired t-test):\nt={:.4f}, p={:.4f}".format(stat.statistic,stat.pvalue))
sub=0 # for first subject
# Calculate correlations between even- and odd-block conditions
comedy_even = activations[(sub_id==sub)&(cond_id==0)&(block_id%2==0)]
comedy_odd = activations[(sub_id==sub)&(cond_id==0)&(block_id%2==1)]
horror_even = activations[(sub_id==sub)&(cond_id==1)&(block_id%2==0)]
horror_odd = activations[(sub_id==sub)&(cond_id==1)&(block_id%2==1)]
# Plot correlations between even- and odd-block conditions
correlation_table = []
# first row of correlation matrix:
correlation_table.append([sp.stats.pearsonr(comedy_even,comedy_odd)[0],
                          sp.stats.pearsonr(horror_even,comedy_odd)[0]])
# second row of correlation matrix:
correlation_table.append([sp.stats.pearsonr(comedy_even,horror_odd)[0],
                          sp.stats.pearsonr(horror_even,horror_odd)[0]])
# plot correlation matrix
plt.rcParams.update({'font.size': 20, 'figure.figsize': (5,5)}) # change default plotting
plt.imshow(correlation_table,cmap='hot')
plt.title("Representational Similarity Matrix\n(Subject {})".format(sub+1),fontsize=18)
plt.yticks(np.arange(2),["Comedy","Horror"],fontsize=16)
plt.ylabel("Odd")
plt.xticks(np.arange(2),["Comedy","Horror"],fontsize=16)
plt.xlabel("Even")
cbar = plt.colorbar()
cbar.set_label('Pearson R', rotation=270)
plt.show()
# general packages
import numpy as np
import matplotlib.pyplot as plt
import scipy as sp
import scipy.stats  # makes sp.stats available
# pull data from GitHub
import requests, io
for array in ['activations','sub_id','cond','block_id','actor']:
    globals()[array] = np.load(io.BytesIO(requests.get(
        'http://paulscotti.github.io/NavigateFMRI/methods/mvpa/rsa/{}.npy'.format(array)).content))
sub=0 # for first subject
# Calculate correlations between even- and odd-block conditions
comedy_aniston_even = activations[(sub_id==sub)&(cond=='comedy')&(block_id%2==0)&(actor=="Aniston")]
comedy_aniston_odd = activations[(sub_id==sub)&(cond=='comedy')&(block_id%2==1)&(actor=="Aniston")]
comedy_depp_even = activations[(sub_id==sub)&(cond=='comedy')&(block_id%2==0)&(actor=="Depp")]
comedy_depp_odd = activations[(sub_id==sub)&(cond=='comedy')&(block_id%2==1)&(actor=="Depp")]
horror_aniston_even = activations[(sub_id==sub)&(cond=='horror')&(block_id%2==0)&(actor=="Aniston")]
horror_aniston_odd = activations[(sub_id==sub)&(cond=='horror')&(block_id%2==1)&(actor=="Aniston")]
horror_depp_even = activations[(sub_id==sub)&(cond=='horror')&(block_id%2==0)&(actor=="Depp")]
horror_depp_odd = activations[(sub_id==sub)&(cond=='horror')&(block_id%2==1)&(actor=="Depp")]
# Plot correlations between even- and odd-block conditions
correlation_table = []
# first row of correlation matrix:
correlation_table.append([sp.stats.pearsonr(comedy_aniston_even,comedy_aniston_odd)[0],
                          sp.stats.pearsonr(comedy_depp_even,comedy_aniston_odd)[0],
                          sp.stats.pearsonr(horror_aniston_even,comedy_aniston_odd)[0],
                          sp.stats.pearsonr(horror_depp_even,comedy_aniston_odd)[0]])
# second row of correlation matrix:
correlation_table.append([sp.stats.pearsonr(comedy_aniston_even,comedy_depp_odd)[0],
                          sp.stats.pearsonr(comedy_depp_even,comedy_depp_odd)[0],
                          sp.stats.pearsonr(horror_aniston_even,comedy_depp_odd)[0],
                          sp.stats.pearsonr(horror_depp_even,comedy_depp_odd)[0]])
# third row of correlation matrix:
correlation_table.append([sp.stats.pearsonr(comedy_aniston_even,horror_aniston_odd)[0],
                          sp.stats.pearsonr(comedy_depp_even,horror_aniston_odd)[0],
                          sp.stats.pearsonr(horror_aniston_even,horror_aniston_odd)[0],
                          sp.stats.pearsonr(horror_depp_even,horror_aniston_odd)[0]])
# fourth row of correlation matrix:
correlation_table.append([sp.stats.pearsonr(comedy_aniston_even,horror_depp_odd)[0],
                          sp.stats.pearsonr(comedy_depp_even,horror_depp_odd)[0],
                          sp.stats.pearsonr(horror_aniston_even,horror_depp_odd)[0],
                          sp.stats.pearsonr(horror_depp_even,horror_depp_odd)[0]])
# plot correlation matrix
plt.rcParams.update({'font.size': 20, 'figure.figsize': (5,5)}) # change default plotting
plt.imshow(correlation_table,cmap='hot')
plt.title("Representational Similarity Matrix\n(Subject {})".format(sub+1))
plt.yticks(np.arange(4),["Aniston","Depp","Aniston","Depp"],fontsize=16)
plt.ylabel("Horror-Odd Comedy-Odd")
plt.xticks(np.arange(4),["Aniston","Depp","Aniston","Depp"],fontsize=16)
plt.xlabel("Comedy-Even Horror-Even")
cbar = plt.colorbar()
cbar.set_label('Pearson R', rotation=270)
plt.show()
ComedyVsHorror= []
ComedyVsHorror.append([1,1,0,0])
ComedyVsHorror.append([1,1,0,0])
ComedyVsHorror.append([0,0,1,1])
ComedyVsHorror.append([0,0,1,1])
# plot correlation matrix
plt.rcParams.update({'font.size': 20, 'figure.figsize': (5,5)}) # change default plotting
plt.imshow(ComedyVsHorror,cmap='RdYlBu')
plt.title("Genre RSM")
plt.yticks(np.arange(4),["Aniston","Depp","Aniston","Depp"],fontsize=16)
plt.ylabel("Horror Comedy")
plt.xticks(np.arange(4),["Aniston","Depp","Aniston","Depp"],fontsize=16)
plt.xlabel("Comedy Horror")
plt.show()
AnistonVsDepp = []
AnistonVsDepp.append([1,0,1,0])
AnistonVsDepp.append([0,1,0,1])
AnistonVsDepp.append([1,0,1,0])
AnistonVsDepp.append([0,1,0,1])
# plot correlation matrix
plt.rcParams.update({'font.size': 20, 'figure.figsize': (5,5)}) # change default plotting
plt.imshow(AnistonVsDepp,cmap='RdYlBu')
plt.title("Actor RSM")
plt.yticks(np.arange(4),["Aniston","Depp","Aniston","Depp"],fontsize=16)
plt.ylabel("Horror Comedy")
plt.xticks(np.arange(4),["Aniston","Depp","Aniston","Depp"],fontsize=16)
plt.xlabel("Comedy Horror")
plt.show()
# Calculate correlations between even- and odd-block conditions
num_subjects = len(np.unique(sub_id))
comedy_allsub = np.full((num_subjects),np.nan)  # avg same-genre correlation per subject
horror_allsub = np.full((num_subjects),np.nan)  # avg different-genre correlation per subject
aniston_allsub = np.full((num_subjects),np.nan) # avg same-actor correlation per subject
depp_allsub = np.full((num_subjects),np.nan)    # avg different-actor correlation per subject
for sub in range(num_subjects):
    correlation_table=[]
    comedy_aniston_even = activations[(sub_id==sub)&(cond=='comedy')&(block_id%2==0)&(actor=="Aniston")]
    comedy_aniston_odd = activations[(sub_id==sub)&(cond=='comedy')&(block_id%2==1)&(actor=="Aniston")]
    comedy_depp_even = activations[(sub_id==sub)&(cond=='comedy')&(block_id%2==0)&(actor=="Depp")]
    comedy_depp_odd = activations[(sub_id==sub)&(cond=='comedy')&(block_id%2==1)&(actor=="Depp")]
    horror_aniston_even = activations[(sub_id==sub)&(cond=='horror')&(block_id%2==0)&(actor=="Aniston")]
    horror_aniston_odd = activations[(sub_id==sub)&(cond=='horror')&(block_id%2==1)&(actor=="Aniston")]
    horror_depp_even = activations[(sub_id==sub)&(cond=='horror')&(block_id%2==0)&(actor=="Depp")]
    horror_depp_odd = activations[(sub_id==sub)&(cond=='horror')&(block_id%2==1)&(actor=="Depp")]
    # first row of correlation matrix:
    correlation_table.append([sp.stats.pearsonr(comedy_aniston_even,comedy_aniston_odd)[0],
                              sp.stats.pearsonr(comedy_depp_even,comedy_aniston_odd)[0],
                              sp.stats.pearsonr(horror_aniston_even,comedy_aniston_odd)[0],
                              sp.stats.pearsonr(horror_depp_even,comedy_aniston_odd)[0]])
    # second row of correlation matrix:
    correlation_table.append([sp.stats.pearsonr(comedy_aniston_even,comedy_depp_odd)[0],
                              sp.stats.pearsonr(comedy_depp_even,comedy_depp_odd)[0],
                              sp.stats.pearsonr(horror_aniston_even,comedy_depp_odd)[0],
                              sp.stats.pearsonr(horror_depp_even,comedy_depp_odd)[0]])
    # third row of correlation matrix:
    correlation_table.append([sp.stats.pearsonr(comedy_aniston_even,horror_aniston_odd)[0],
                              sp.stats.pearsonr(comedy_depp_even,horror_aniston_odd)[0],
                              sp.stats.pearsonr(horror_aniston_even,horror_aniston_odd)[0],
                              sp.stats.pearsonr(horror_depp_even,horror_aniston_odd)[0]])
    # fourth row of correlation matrix:
    correlation_table.append([sp.stats.pearsonr(comedy_aniston_even,horror_depp_odd)[0],
                              sp.stats.pearsonr(comedy_depp_even,horror_depp_odd)[0],
                              sp.stats.pearsonr(horror_aniston_even,horror_depp_odd)[0],
                              sp.stats.pearsonr(horror_depp_even,horror_depp_odd)[0]])
    # average the cells predicted to be similar (1) vs. dissimilar (0) by each model RSM
    comedy_allsub[sub] = np.mean(np.array(correlation_table)[np.array(ComedyVsHorror)==1]) # same genre
    horror_allsub[sub] = np.mean(np.array(correlation_table)[np.array(ComedyVsHorror)==0]) # different genre
    aniston_allsub[sub] = np.mean(np.array(correlation_table)[np.array(AnistonVsDepp)==1]) # same actor
    depp_allsub[sub] = np.mean(np.array(correlation_table)[np.array(AnistonVsDepp)==0])    # different actor
plt.bar(0,np.mean(comedy_allsub),color='blue')
plt.bar(1,np.mean(horror_allsub),color='darkred')
plt.scatter(np.zeros(num_subjects),comedy_allsub,color='k',zorder=2)
plt.scatter(np.ones(num_subjects),horror_allsub,color='k',zorder=2)
plt.axhline(0,c='k')
plt.title("Same-genre Vs Different-genre")
plt.xticks(np.arange(2),["Same","Diff."])
plt.ylabel("Avg. correlation")
plt.ylim(0,.1)
plt.show()
stat = sp.stats.ttest_rel(comedy_allsub,horror_allsub)
print("\nPaired t-test: t={:.4f}, p={:.4f}".format(stat.statistic,stat.pvalue))
plt.bar(0,np.mean(aniston_allsub),color='blue')
plt.bar(1,np.mean(depp_allsub),color='darkred')
plt.scatter(np.zeros(num_subjects),aniston_allsub,color='k',zorder=2)
plt.scatter(np.ones(num_subjects),depp_allsub,color='k',zorder=2)
plt.axhline(0,c='k')
plt.title("Same-actor Vs Different-actor")
plt.xticks(np.arange(2),["Same","Diff."])
plt.ylabel("Avg. correlation")
plt.ylim(0,.1)
plt.show()
stat = sp.stats.ttest_rel(aniston_allsub,depp_allsub)
print("\nPaired t-test: t={:.4f}, p={:.4f}".format(stat.statistic,stat.pvalue))
Paired t-test: t=5.5199, p=0.0000
Paired t-test: t=-0.0462, p=0.9636
# general packages
import numpy as np
import matplotlib.pyplot as plt
import scipy as sp
import scipy.stats  # makes sp.stats available
# pull data from GitHub
import requests, io
for array in ['enjoyment','time_series1','time_series2']:
    globals()[array] = np.load(io.BytesIO(requests.get(
        'http://paulscotti.github.io/NavigateFMRI/methods/univariate/{}.npy'.format(array)).content))
# overview of the data
num_subjects = time_series1.shape[0]
num_TRs = time_series1.shape[1]
num_conditions = 2
plt.plot(time_series1[0,:],c='green')
plt.title("Subject 1, ROI A")
plt.ylabel("BOLD signal")
plt.xlabel("TR")
plt.show()
plt.plot(time_series2[0,:],c='purple')
plt.title("Subject 1, ROI B")
plt.ylabel("BOLD signal")
plt.xlabel("TR")
plt.show()
print("Correlation coefficient between the two time series = {}".format(
sp.stats.pearsonr(time_series1[0,:],time_series2[0,:])[0]))
print("Movie enjoyment rating = {} (1-5 scale)".format(enjoyment[0]))
corrs=[]
for sub in range(num_subjects):
    coefficient=sp.stats.pearsonr(time_series1[sub,:],time_series2[sub,:])[0]
    plt.scatter(sub,coefficient,color='brown')
    corrs=np.append(corrs,coefficient)
plt.title("Time-series correlation coefficient")
plt.xlabel("Subject")
plt.show()
for sub in range(num_subjects):
    plt.scatter(sub,enjoyment[sub],color='black')
plt.title("Movie enjoyment rating")
plt.yticks(np.arange(5)+1,fontsize=14)
plt.xlabel("Subject")
plt.show()
# test whether connectivity strength is associated with enjoyment across subjects
r, p = sp.stats.pearsonr(corrs,enjoyment)
print("Correlation coefficients sig. associated with enjoyment ratings? (Pearson correlation):\nr={:.4f}, p={:.4f}".format(r,p))
Regarding general linear models, "general" refers to the different types of analyses that can be performed (e.g., correlation, one-sample t-test, ANOVA), and "linear" refers to the model's assumption of a linear relationship between parameters (often preferred because nonlinear models are prone to overfitting, where a model appears to fit the data well but does not generalize to new data). We consider the general linear model to be a processing step (rather than a post-processing fMRI method, which is what this website focuses on) and thus will not discuss it in further detail. In essence, it is one way to obtain an estimate of brain activation (in arbitrary units) for every voxel and condition after accounting for factors like the hemodynamic response function (the time course of blood flow) and body motion in the scanner.
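The core of that estimation step can be sketched as ordinary least squares on a design matrix. This is a deliberately simplified illustration with made-up boxcar regressors and noise levels; a real pipeline would also convolve the regressors with an HRF and include nuisance regressors (motion, drift):

```python
import numpy as np

rng = np.random.default_rng(0)
n_trs = 200
t = np.arange(n_trs)

# Design matrix: intercept + two boxcar condition regressors (hypothetical design)
X = np.column_stack([np.ones(n_trs),
                     (t % 40 < 10).astype(float),         # condition A "on" blocks
                     ((t - 20) % 40 < 10).astype(float)]) # condition B "on" blocks

true_betas = np.array([1.0, 0.8, 0.2])   # made-up ground-truth activations
y = X @ true_betas + 0.1 * rng.standard_normal(n_trs)  # one voxel's noisy time series

# Least-squares fit: the estimated betas are the per-condition activations
betas, *_ = np.linalg.lstsq(X, y, rcond=None)
print(betas)
```

The recovered betas are what later post-processing methods (univariate contrasts, MVPA, RSA) treat as "activations".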
In practice, you should use cross-validation so that all data can be predicted without succumbing to circularity (i.e., "double-dipping").
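For instance, a k-fold scheme ensures every trial is scored by a model that never saw it during training. The sketch below uses simulated data (the 0.5 signal shift and the 5-fold split are arbitrary choices for illustration) with scikit-learn's cross_val_score:

```python
import numpy as np
from sklearn import svm
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_voxels = 100, 50
X = rng.standard_normal((n_trials, n_voxels))  # simulated voxel patterns
y = rng.integers(0, 2, n_trials)               # two simulated conditions
X[y == 1] += 0.5                               # inject a condition signal

# Each of the 5 folds is held out once and predicted by a model trained
# only on the other 4 folds -- no circularity.
scores = cross_val_score(svm.SVC(kernel='linear'), X, y, cv=5)
print(scores.mean())  # mean out-of-sample accuracy
```

Leaving out blocks or runs (as in the odd/even splits above) is the same idea, chosen so that temporally adjacent, correlated samples never straddle the train/test boundary.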
Unlike NavigateFMRI, Brainiak involves installing a custom Python package and is built to be a helpful tool for use on real data. NavigateFMRI is meant purely for teaching purposes with no intention for the code contained on the website to be directly implemented in real fMRI projects. Brainiak's Python tutorial examples use real fMRI datasets and go into more depth than NavigateFMRI. NavigateFMRI covers more methods than Brainiak and uses simpler, simulated datasets. Our example walkthroughs are much shorter and do not dwell on technical details that may be important to consider when actually implementing techniques on real data.
NavigateFMRI only covers post-processing fMRI methods while fmri4newbies covers all stages of the fMRI pipeline including pre-processing, processing, and post-processing (in addition to information on MRI physics and anatomical & functional brain regions). Unlike NavigateFMRI, fmri4newbies supplies the reader with Powerpoint lecture slides (no example datasets or code).
"Activations" can be raw BOLD signal or beta weights obtained from general linear model estimation.
Please just pretend that Jennifer Aniston has starred in horror movies; we are not movie buffs :). Note: Jennifer Aniston actually has a reputation for being used as an example in neuroimaging experiments because of a study investigating "grandmother cells", the idea that a single neuron may be so selective that it specifically encodes your grandmother. That study involved a participant who seemed to have a neuron that fired specifically for Jennifer Aniston.