I'm working on the motor imagery data from BBCI2A.

We have 9 subjects and four kinds of imagined movement. To get a better classification with deep learning, I decided to do a source reconstruction and check what is really going on inside the brain during this kind of imagery.

Here we go:

Time-domain analysis: compute the class averages
%% demean
data = data_train_730_filtered{i};            % filtered trials of one subject

cfg                = [];
cfg.demean         = 'yes';
cfg.baselinewindow = [0 1];                   % baseline: 0-1 s, before the cue
subject1           = ft_preprocessing(cfg, data);

cfg        = [];
cfg.trials = find(subject1.trialinfo==769);   % 769 = left-hand cue
dataleft   = ft_selectdata(cfg, subject1);

cfg.trials = find(subject1.trialinfo==770);   % 770 = right-hand cue
dataright  = ft_selectdata(cfg, subject1);

cfg.trials = find(subject1.trialinfo==771);   % 771 = foot cue
datafoot   = ft_selectdata(cfg, subject1);

cfg.trials = find(subject1.trialinfo==772);   % 772 = tongue cue
datatougue = ft_selectdata(cfg, subject1);


cfg                  = [];
cfg.covariance       = 'yes';
cfg.covariancewindow = [0 1];                 % covariance estimated on the 0-1 s window,
                                              % i.e. the baseline period before the cue
tlcleft   = ft_timelockanalysis(cfg, dataleft);
tlcright  = ft_timelockanalysis(cfg, dataright);
tlctougue = ft_timelockanalysis(cfg, datatougue);
tlcfoot   = ft_timelockanalysis(cfg, datafoot);
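Before moving to source space, a quick sanity check of the four class averages can be done with ft_multiplotER (an optional sketch, using the same layout as later in this post):

% Overlay the four class-average ERPs for a quick visual check.
cfg        = [];
cfg.layout = 'elec1005.lay';
ft_multiplotER(cfg, tlcleft, tlcright, tlcfoot, tlctougue);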

 

 

Load the volume conduction model and source space from the template

load vol sourcespace                          % precomputed template head model (vol)
sourcespace = ft_read_headshape('C:\Users\jiahxu\Downloads\fieldtrip-master\fieldtrip-master/template/sourcemodel/cortex_5124.surf.gii');

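The variables vol and elec22 come from earlier in my pipeline. For reference, here is roughly how they could be set up from the FieldTrip templates (a sketch only: it assumes the standard BEM head model and the standard 10-20 electrode file, and the 22-channel list is my guess at the BBCI2A montage):

% Sketch: template head model and a 22-channel electrode set (assumptions, see above).
ftpath = 'C:\Users\jiahxu\Downloads\fieldtrip-master\fieldtrip-master';
tmp    = load(fullfile(ftpath, 'template', 'headmodel', 'standard_bem.mat'));
vol    = tmp.vol;                                           % volume conduction model

elec   = ft_read_sens(fullfile(ftpath, 'template', 'electrode', 'standard_1020.elc'));
labels22 = {'Fz','FC3','FC1','FCz','FC2','FC4','C5','C3','C1','Cz','C2','C4', ...
            'C6','CP3','CP1','CPz','CP2','CP4','P1','Pz','P2','POz'};   % assumed montage
sel              = ismember(elec.label, labels22);
elec22           = elec;
elec22.label     = elec.label(sel);
elec22.chanpos   = elec.chanpos(sel,:);
elec22.elecpos   = elec.elecpos(sel,:);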

Calculate the leadfield

sourcespace.pnt = sourcespace.pos;            % the code below still uses the old .pnt field
cfg             = [];
cfg.channel     = 'all';
cfg.elec        = elec22;
cfg.vol         = vol;
cfg.grid.pos    = sourcespace.pnt;            % source points on the cortical sheet
cfg.grid.inside = 1:size(sourcespace.pnt,1);  % all source points are inside the brain
lf              = ft_prepare_leadfield(cfg);
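It is also worth checking visually that the electrode positions and the template cortex actually line up before trusting the leadfield (an optional sketch):

% Optional sanity check: overlay the electrodes on the cortical mesh.
figure
ft_plot_mesh(sourcespace, 'facecolor', [0.8 0.8 0.8], 'edgecolor', 'none');
hold on
ft_plot_sens(elec22);
camlight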

Source reconstruction with MNE (minimum norm estimation)

cfg                    = [];
cfg.elec               = elec22;
cfg.method             = 'mne';
cfg.grid               = lf;
cfg.headmodel          = vol;
cfg.mne.prewhiten      = 'yes';
cfg.mne.lambda         = 3;                   % regularisation parameter
cfg.mne.scalesourcecov = 'yes';
sourceleft = ft_sourceanalysis(cfg, tlcleft);



cfg        = [];
cfg.elec=elec22;
cfg.method = 'mne';
cfg.grid   = lf;
cfg.headmodel     = vol;
cfg.mne.prewhiten = 'yes';
cfg.mne.lambda    = 3;
cfg.mne.scalesourcecov = 'yes';
sourceright = ft_sourceanalysis(cfg,tlcright);

cfg        = [];
cfg.elec=elec22;
cfg.method = 'mne';
cfg.grid   = lf;
cfg.headmodel     = vol;
cfg.mne.prewhiten = 'yes';
cfg.mne.lambda    = 3;
cfg.mne.scalesourcecov = 'yes';
sourcetougue = ft_sourceanalysis(cfg,tlctougue);


cfg        = [];
cfg.elec=elec22;
cfg.method = 'mne';
cfg.grid   = lf;
cfg.headmodel     = vol;
cfg.mne.prewhiten = 'yes';
cfg.mne.lambda    = 3;
cfg.mne.scalesourcecov = 'yes';
sourcefoot = ft_sourceanalysis(cfg,tlcfoot);
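The four calls above differ only in their input, so they can also be written as a loop (a sketch; cfg.grid is kept as in the original, although newer FieldTrip versions name this option cfg.sourcemodel):

% Run the same MNE reconstruction for all four imagery classes.
tlcs    = {tlcleft, tlcright, tlctougue, tlcfoot};
sources = cell(1, numel(tlcs));
for k = 1:numel(tlcs)
    cfg                    = [];
    cfg.elec               = elec22;
    cfg.method             = 'mne';
    cfg.grid               = lf;
    cfg.headmodel          = vol;
    cfg.mne.prewhiten      = 'yes';
    cfg.mne.lambda         = 3;
    cfg.mne.scalesourcecov = 'yes';
    sources{k} = ft_sourceanalysis(cfg, tlcs{k});
end
% sources{1} is sourceleft, sources{2} is sourceright, and so on.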

Plot the source activity at a series of fixed time points

for i=1:25
    figure(i)
    bnd.pnt = sourcespace.pnt;
    bnd.tri = sourcespace.tri;
    m = sourceleft.avg.pow(:,50*i);           % source power at sample 50*i, i.e. every ~200 ms
    ft_plot_mesh(bnd, 'vertexcolor', m);
    fname = [num2str(i),'.png'];
    print(i,'-dpng',fname);
    clear fname;
    close all;
end


for i=1:25
    figure(i)
    bnd.pnt = sourcespace.pnt;
    bnd.tri = sourcespace.tri;
    m = sourceright.avg.pow(:,50*i);          % source power at sample 50*i, i.e. every ~200 ms
    ft_plot_mesh(bnd, 'vertexcolor', m);
    fname = [num2str(i),'.png'];
    print(i,'-dpng',fname);
    clear fname;
    close all;
end


for i=1:25
    figure(i)
    bnd.pnt = sourcespace.pnt;
    bnd.tri = sourcespace.tri;
    m = sourcefoot.avg.pow(:,50*i);           % source power at sample 50*i, i.e. every ~200 ms
    ft_plot_mesh(bnd, 'vertexcolor', m);
    fname = [num2str(i),'.png'];
    print(i,'-dpng',fname);
    clear fname;
    close all;
end




for i=1:25
    figure(i)
    bnd.pnt = sourcespace.pnt;
    bnd.tri = sourcespace.tri;
    m = sourcetougue.avg.pow(:,50*i);         % source power at sample 50*i, i.e. every ~200 ms
    ft_plot_mesh(bnd, 'vertexcolor', m);
    fname = [num2str(i),'.png'];
    print(i,'-dpng',fname);
    clear fname;
    close all;
end
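To turn the exported frames into the animated GIFs shown below, something like this works (a sketch; run it after each condition's loop with a different output name, since the loops reuse the same frame file names):

% Stitch the 25 exported PNG frames into an animated GIF (~200 ms per frame).
gifname = 'left.gif';                         % hypothetical output name
for i = 1:25
    [img, cmap] = rgb2ind(imread([num2str(i) '.png']), 256);
    if i == 1
        imwrite(img, cmap, gifname, 'gif', 'LoopCount', Inf, 'DelayTime', 0.2);
    else
        imwrite(img, cmap, gifname, 'gif', 'WriteMode', 'append', 'DelayTime', 0.2);
    end
end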

Here, take a look at the GIFs I made:

[GIF: left-hand imagery]

[GIF: foot imagery]

[GIF: right-hand imagery]

[GIF: tongue imagery]

The trials run from t = 0 s to t = 5 s, 1250 time points in total; one frame is plotted every 50 time points, i.e. roughly every 200 ms.

Refresh the page if you want to play the animations again.

 

 

For motor imagery, CSP is still the state of the art for classifying EEG data.

By CSP we mean common spatial patterns: spatial filters that maximise the variance difference between the two classes of signals.

In FieldTrip, we do it as follows:


clabels = subject1.trialinfo;
clabels = clabels' - 768;                     % event codes 769-772 become class labels 1-4

cfg                 = [];
cfg.method          = 'csp';
cfg.csp.classlabels = clabels;                % class labels
cfg.csp.numfilters  = 6;                      % number of patterns; check the time-frequency
                                              % differences as well to improve accuracy
data = ft_componentanalysis(cfg, subject1);
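The component time courses returned above can then be turned into classification features. A common recipe, only sketched here (it assumes MATLAB's fitcsvm from the Statistics and Machine Learning Toolbox and is not part of the FieldTrip pipeline above), is to take the log-variance of each CSP component per trial and feed that to an SVM:

% Log-variance of each CSP component per trial as features, then a linear SVM.
ntrl = numel(data.trial);
feat = zeros(ntrl, 6);                        % 6 CSP filters were requested above
for k = 1:ntrl
    feat(k,:) = log(var(data.trial{k}, 0, 2))';   % variance over time of each component
end
labels = clabels(:);                          % class labels 1-4, as built above
sel    = labels==1 | labels==2;               % binary example: left vs right
svm    = fitcsvm(feat(sel,:), labels(sel), 'KernelFunction', 'linear');
cvloss = kfoldLoss(crossval(svm, 'KFold', 10));   % 10-fold cross-validated error
fprintf('CSP + SVM accuracy (left vs right): %.2f\n', 1 - cvloss);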



cfg          = [];
cfg.viewmode = 'component';                   % note: the option is viewmode, not viewmodel
cfg.layout   = 'elec1005.lay';
ft_databrowser(cfg, data)
print(1,'-dpng','component.png');

figure(2)
cfg           = [];
cfg.component = 1:6;
cfg.layout    = 'elec1005.lay';
cfg.comment   = 'no';
ft_topoplotIC(cfg, data)


 

 

[Figure: CSP component time courses]

[Figure: CSP component topographies]

How do we classify this?

We have two choices, the sensor (time-domain) level and the frequency level, both with an SVM.

At the sensor level:

design = [ones(72,1) ; 2*ones(72,1)]';        % 72 left-hand trials and 72 right-hand trials

cfg           = [];
cfg.layout    = 'elec1005.lay';
cfg.method    = 'crossvalidate';
cfg.design    = design;
cfg.latency   = [2 2.5];                      % 2-2.5 s gives the highest accuracy, i.e. right
                                              % after the cue appears - worth thinking about
cfg.channel   = {'C3','C4'};                  % the most direct choice; with more channels
                                              % the accuracy actually drops
cfg.statistic = {'accuracy' 'binomial' 'contingency'};
stat = ft_timelockstatistics(cfg, dataleft, dataright);

stat.statistic.contingency
stat.statistic.accuracy
stat.statistic.binomial

ans =

    0.5764        % accuracy


ans =

    0.0399        % p-value (binomial test)

This is clearly above chance level, but still pretty dismal.

Let's test these two classes in the frequency domain.



cfg            = [];
cfg.method     = 'mtmconvol';
cfg.taper      = 'hanning';
cfg.foi        = 7:1:14;                          % mu band
cfg.t_ftimwin  = ones(length(cfg.foi),1).*0.5;    % 0.5 s sliding window
cfg.channel    = {'C3','C4'};
cfg.toi        = 1:0.1:4;
cfg.keeptrials = 'yes';                           % keep single trials for the classifier

freqleft  = ft_freqanalysis(cfg, dataleft);
freqright = ft_freqanalysis(cfg, dataright);




cfg=[];
cfg.layout='elec1005.lay';
cfg.method='crossvalidate';
cfg.design=design;
cfg.latency=[2.5 3];
cfg.statistic={'accuracy'  'binomial' 'contingency'};
stat=ft_freqstatistics(cfg,freqleft,freqright);

stat.statistic.contingency
stat.statistic.accuracy
stat.statistic.binomial

ans =

    0.7014         % the accuracy jumps up in the frequency domain


ans =

   7.4369e-07      % the p-value is highly significant

From the example above we can see that in the time domain, between 2 s and 2.5 s, left and right differ but are still hard to separate, while in the frequency domain, between 2.5 s and 3 s, they differ much more strongly and are easier to separate.

Let's look at it in the time domain:

% channels 8 and 12 correspond to C3 and C4 in this montage
plot(tlcleft.time, tlcleft.avg(8,:), tlcleft.time, tlcleft.avg(12,:))
line([2 2],[-5.5 5.5])                        % cue onset
line([2.5 2.5],[-5.5 5.5])
axis([0 5 -5.5 5.5])
legend('C3','C4')
title('imagery left')



plot(tlcright.time, tlcright.avg(8,:), tlcright.time, tlcright.avg(12,:))
line([2 2],[-5.5 5.5])
line([2.5 2.5],[-5.5 5.5])
axis([0 5 -5.5 5.5])
legend('C3','C4')
title('imagery right')


[Figure: C3/C4 averages, imagery left]

[Figure: C3/C4 averages, imagery right]

And in the frequency domain:

cfg            = [];
cfg.method     = 'mtmconvol';
cfg.taper      = 'hanning';
cfg.foi        = 1:1:35;
cfg.t_ftimwin  = ones(length(cfg.foi),1).*0.5;
cfg.channel    = {'C3','C4'};
cfg.toi        = 1:0.05:5;
cfg.keeptrials = 'no';

freqleft  = ft_freqanalysis(cfg, tlcleft);    % note: tlcleft/tlcright are trial averages,
freqright = ft_freqanalysis(cfg, tlcright);   % so these TFRs contain only the evoked power
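Since tlcleft and tlcright are trial averages, the TFRs above only contain the evoked, phase-locked part of the signal, while the ERD/ERS that distinguishes motor imagery is largely induced. An alternative (a sketch, reusing the single-trial data from above) is to run the TFR on the raw trials and let ft_freqanalysis average the power across trials:

% Induced power: TFR computed per trial, power averaged across trials.
cfg            = [];
cfg.method     = 'mtmconvol';
cfg.taper      = 'hanning';
cfg.foi        = 1:1:35;
cfg.t_ftimwin  = ones(length(cfg.foi),1).*0.5;
cfg.channel    = {'C3','C4'};
cfg.toi        = 1:0.05:5;
cfg.keeptrials = 'no';                        % average the single-trial power

freqleft_ind  = ft_freqanalysis(cfg, dataleft);
freqright_ind = ft_freqanalysis(cfg, dataright);
% these can be plotted with the same ft_singleplotTFR calls as below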



cfg              = [];
cfg.baseline     = [0 1];
cfg.baselinetype = 'absolute';
cfg.maskstyle    = 'saturation';
cfg.zlim         = [0 0.1];
cfg.channel      = 'C3';
subplot(2,1,1);
ft_singleplotTFR(cfg, freqleft);

cfg.zlim         = [0 0.1];
cfg.channel      = 'C4';
subplot(2,1,2);
ft_singleplotTFR(cfg, freqleft);
suptitle('imagery left');

[Figure: TFRs at C3 and C4, imagery right]

[Figure: TFRs at C3 and C4, imagery left]

It is still hard to tell them apart by eye.