{ "cells": [ { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "%matplotlib inline" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\n\n# 03. Compute inverse solution\nThis workflow mainly calls the\n`inverse solution pipeline `\nperforming source reconstruction starting from the cleaned raw data obtained\nby `preproc_meg`.\n\nThe first node of the workflow (`concat_event`) **extracts the events** from\nstimulus channel ``STI101``. The events are saved in the `concat_event`\ndirectory. For each subject, the different runs are also concatenated in\none single raw file and saved in `concat_event` directory.\nThe input of this node are the different runs taken from the\npreprocessing workflow directory, i.e. the cleaned\nraw data created by `preproc_meg`.\n\nIn the `inv_solution_node` the raw **data are epoched** accordingly to\nevents specified in ``json`` file and created in `concat_event`.\nThe evoked datasets are created by averaging the different conditions specified\nin the ``json`` file. The estimated neural time series are reconstructed\nby the evoked data for each condition by **dSPM** inverse method.\n\nFinally, the source estimates obtained by the\n`inv_solution_node` are morphed to the ``fsaverage`` brain in the\n`morphing_node`.\n\nAn overview of the workflow, its nodes and connections is given by the `graph_inverse`.\n\n

Warning

Before running this pipeline, the coregistration between the MRI\n and the MEG device needs to be performed. It is recommended to look at the MNE\n |tutorial|

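A minimal sketch of how the coregistration (``-trans.fif``) file can be
created with the MNE Python graphical tool; the subject label below is a
placeholder:

```python
# Hedged sketch: open the MNE coregistration GUI to align the subject MRI
# with the MEG device coordinates and save a -trans.fif file.
# 'sub-01' is a placeholder subject label.
import mne
mne.gui.coregistration(subject='sub-01', subjects_dir=subjects_dir)
```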
\n.. |tutorial| raw:: html\n\n tutorial\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# Authors: Annalisa Pascarella \n# License: BSD (3-clause)\n\n# sphinx_gallery_thumbnail_number = 1" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Import modules\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "import os.path as op\nimport json\nimport pprint # noqa\n\nimport nipype.pipeline.engine as pe\nfrom nipype.interfaces.utility import Function\n\nfrom ephypype.nodes import create_iterator, create_datagrabber\nfrom ephypype.pipelines.fif_to_inv_sol import create_pipeline_source_reconstruction # noqa" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Define data and variables\nLet us specify the variables specific to the data analysis (the\nmain directories where the data are stored, the list of subjects and\nsessions, ...) and the variables specific to the **inverse pipeline**\n(events_id, inverse method, ...) in a |params.json| file.\n\n.. |params.json| replace::\n :download:`json `\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "params = json.load(open(\"params.json\"))\n\npprint.pprint({'parameters': params[\"general\"]})\ndata_type = params[\"general\"][\"data_type\"]\nsubject_ids = params[\"general\"][\"subject_ids\"]\nNJOBS = params[\"general\"][\"NJOBS\"]\nsession_ids = params[\"general\"][\"session_ids\"]\nsubjects_dir = params[\"general\"][\"subjects_dir\"]\n\nis_short = params[\"general\"][\"short\"] # to analyze a shorter segment of data\n\nif \"data_path\" in params[\"general\"].keys():\n data_path = params[\"general\"][\"data_path\"]\nelse:\n data_path = op.expanduser(\"~\")\nprint(\"data_path : %s\" % data_path)\n\n# source reconstruction\npprint.pprint({'inverse parameters': params[\"inverse\"]})\nevents_id = params[\"inverse\"]['events_id']\ncondition = params[\"inverse\"]['condition']\nnew_name_condition = params[\"inverse\"][\"new_name_condition\"]\nt_min = params[\"inverse\"]['tmin']\nt_max = params[\"inverse\"]['tmax']\nspacing = params[\"inverse\"]['spacing'] # oct-6\nsnr = params[\"inverse\"]['snr']\ninv_method = params[\"inverse\"]['method'] # dSPM\nparc = params[\"inverse\"]['parcellation'] # aparc\n\ntrans_fname = op.join(data_path, params[\"inverse\"]['trans_fname'])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Define functions\nHere we define two different functions that will be encapsulated in two\ndifferent nodes (`concat_event` and `morphing_node`).\nThe ``run_events_concatenate`` function extracts events from the stimulus\nchannel, while ``compute_morph_stc`` morphs the source estimates obtained by\nthe `inv_solution_node` into the ``fsaverage`` brain.\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "def run_events_concatenate(list_ica_files, subject):\n '''\n The events are extracted from stim channel 'STI101'. The events are saved\n to the Node directory.\n For each subject, the different runs are concatenated into a single raw\n file and saved in the Node directory. We take the different runs from the\n preprocessing workflow directory, i.e. 
the cleaned raw data.\n '''\n\n print(subject, list_ica_files)\n import os\n import mne\n\n mask = 4096 + 256 # mask for excluding high order bits\n delay_item = 0.0345\n min_duration = 0.015\n\n print(f\"processing subject: {subject}\")\n\n raw_list = list()\n events_list = list()\n fname_events_files = []\n\n print(\" Loading raw data\")\n for i, run_fname in enumerate(list_ica_files):\n run = i + 1\n\n raw = mne.io.read_raw_fif(run_fname, preload=True)\n events = mne.find_events(raw, stim_channel='STI101',\n consecutive='increasing', mask=mask,\n mask_type='not_and',\n min_duration=min_duration)\n\n print(f\" S {subject} - run {run}\")\n\n fname_events = os.path.abspath(f'run_{run:02d}-eve.fif')\n mne.write_events(fname_events, events)\n fname_events_files.append(fname_events)\n\n delay = int(round(delay_item * raw.info['sfreq']))\n events[:, 0] = events[:, 0] + delay\n events_list.append(events)\n\n raw_list.append(raw)\n\n raw, events = mne.concatenate_raws(raw_list, events_list=events_list)\n raw.set_eeg_reference(projection=True)\n raw_file = os.path.abspath(f'{subject}_sss_filt_dsamp_ica-raw.fif')\n print(raw_file)\n\n raw.save(raw_file, overwrite=True)\n\n event_file = os.path.abspath(f'{subject}_sss_filt_dsamp_ica-raw-eve.fif')\n mne.write_events(event_file, events)\n\n del raw_list\n del raw\n\n return raw_file, event_file, fname_events_files" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "def compute_morph_stc(subject, conditions, cond_files, subjects_dir):\n import os.path as op\n import mne\n\n print(f\"processing subject: {subject}\")\n\n # Morph STCs\n stc_morphed_files = []\n for k, cond_file in enumerate(cond_files):\n print(conditions[k])\n print(cond_file)\n stc = mne.read_source_estimate(cond_file)\n\n morph = mne.compute_source_morph(\n stc, subject_from=subject, subject_to='fsaverage',\n subjects_dir=subjects_dir)\n stc_morphed = morph.apply(stc)\n stc_morphed_file = op.abspath('mne_dSPM_inverse_morph-%s' % (conditions[k])) # noqa\n stc_morphed.save(stc_morphed_file)\n\n stc_morphed_files.append(stc_morphed_file)\n\n return stc_morphed_files\n\n\ndef show_files(files):\n print(files)\n return files" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Specify Nodes\n### Infosource and Datasource\nWe create an ``infosource`` node to pass input filenames to the ``datasource``\nnode to grab the data.\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "infosource = create_iterator(['subject_id'], [subject_ids])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We set the directory from which to get the preprocessed MEG data.\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "preproc_wf_name = 'preprocessing_dsamp_short_workflow' if is_short \\\n else 'preprocessing_dsamp_workflow'\nica_dir = op.join(data_path, preproc_wf_name, 'preproc_meg_dsamp_pipeline')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Here we want to grab both the raw MEG data and the coregistration file,\nthus we have to specify the search patterns for ``raw_file`` and\n``trans_file``, which are parameterized with the values defined in\n``template_args``.\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "template_path = '*'\n\nraw_file = 
'_session_id_*_subject_id_%s/ica/%s*ses*_*filt*ica.fif'\ntrans_file = '../../%s/ses-meg/meg_short/%s%s.fif' if is_short else \\\n '../../%s/ses-meg/meg/%s%s.fif'\nfield_template = dict(raw_file=raw_file, trans_file=trans_file)\n\ntemplate_args = dict(\n raw_file=[['subject_id', 'subject_id']],\n trans_file=[['subject_id', 'subject_id', \"-trans\"]])\n\ndatasource = create_datagrabber(ica_dir, template_path, template_args,\n field_template=field_template,\n infields=['subject_id'],\n outfields=['raw_file', 'trans_file'])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "

Note

:func:`~ephypype.nodes.create_datagrabber` is used with different\n input parameters compared to `preproc_meg`.

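For intuition, this hedged illustration shows roughly how the DataGrabber
expands the search templates for one subject; the subject label ``sub-01``
is only a placeholder:

```python
# Hedged illustration of the template expansion the DataGrabber performs:
# each %s is filled with the corresponding entry from template_args.
subject_id = 'sub-01'  # placeholder
raw_pattern = '_session_id_*_subject_id_%s/ica/%s*ses*_*filt*ica.fif' % (
    subject_id, subject_id)
trans_pattern = '../../%s/ses-meg/meg/%s%s.fif' % (
    subject_id, subject_id, '-trans')
```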
\n\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\n### Event Node\nWe define the Node that encapsulates ``run_events_concatenate`` function\n(see `graph_inverse`) and specify the inputs and outputs of the function.\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "concat_event = pe.Node(\n Function(input_names=['list_ica_files', 'subject'],\n output_names=['raw_file', 'event_file', 'fname_events_files'],\n function=run_events_concatenate),\n name='concat_event')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\n### Inverse solution Node\nEphypype creates for us a pipeline to compute inverse solution which can be\nconnected to the other nodes we created.\nThe inverse solution pipeline is implemented by the function\n:func:`~ephypype.pipelines.create_pipeline_source_reconstruction`,\nthus to instantiate this pipeline node, we pass our parameters to it.\nSince we want to do source estimation in three different conditions\n(famous faces, unfamiliar faces and scrambled), we provide all information\nrelated to the events in the ``json`` file where we also specify as inverse\nmethod dSPM.\n\nThe inverse pipeline contains three nodes that wrap the MNE Python functions\nthat perform the source reconstruction steps. In particular, the three nodes\nare:\n\n* :func:`~ephypype.interfaces.mne.LF_computation.LFComputation` compute the\n Lead Field matrix;\n* :func:`~ephypype.interfaces.mne.Inverse_solution.NoiseCovariance` computes\n the noise covariance matrix;\n* :func:`~ephypype.interfaces.mne.Inverse_solution.InverseSolution` estimates\n the time series of the neural sources on a set of dipoles grid.\n\nThe **input node** (``raw``, ``sbj_id``, ``trans_file`` and ``events_file``)\nwill be specified once the different nodes are connected.\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "inv_sol_workflow = create_pipeline_source_reconstruction(\n data_path, subjects_dir, spacing=spacing, inv_method=inv_method,\n is_epoched=True, is_evoked=True, events_id=events_id, condition=condition,\n t_min=t_min, t_max=t_max, all_src_space=True, parc=parc, snr=snr)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\n### Morphing Node\nThe last Node we define encapsulates ``compute_morph_stc`` function.\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "morph_stc = pe.Node(\n Function(input_names=['subject', 'conditions', 'cond_files', 'subjects_dir'], # noqa\n output_names=['stc_morphed_files'],\n function=compute_morph_stc),\n name=\"morph_stc\")\n\nmorph_stc.inputs.conditions = new_name_condition\nmorph_stc.inputs.subjects_dir = subjects_dir" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Create workflow\nThen, we can create our workflow and specify the ``base_dir`` which tells\nnipype the directory in which to store the outputs.\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "src_estimation_wf_name = 'source_dsamp_short_reconstruction_' if is_short \\\n else 'source_dsamp_full_reconstruction_'\n\nsrc_reconstruction_pipeline_name = src_estimation_wf_name + \\\n inv_method + '_' + parc.replace('.', '')\n\nmain_workflow = pe.Workflow(name=src_reconstruction_pipeline_name)\nmain_workflow.base_dir = data_path" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 
Connect Nodes\nFinally, we connect the nodes two at a time. First, we connect the\noutput (``subject_id``) of the ``infosource`` node to the input of the\n``datasource`` node. So, these two nodes taken together can grab data.\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "main_workflow.connect(infosource, 'subject_id', datasource, 'subject_id')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We connect the outputs of ``datasource`` and ``infosource`` to the inputs\n(``list_ica_files``, ``subject``) of `concat_event`.\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "main_workflow.connect(datasource, ('raw_file', show_files),\n concat_event, 'list_ica_files')\nmain_workflow.connect(infosource, 'subject_id', concat_event, 'subject')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The outputs of ``infosource``, ``datasource`` and ``concat_event`` are the\ninputs of ``inv_sol_workflow``, thus we connect these nodes two at a time.\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "main_workflow.connect(infosource, ('subject_id', show_files),\n inv_sol_workflow, 'inputnode.sbj_id')\nmain_workflow.connect(datasource, 'trans_file',\n inv_sol_workflow, 'inputnode.trans_file')\nmain_workflow.connect(concat_event, ('raw_file', show_files),\n inv_sol_workflow, 'inputnode.raw')\nmain_workflow.connect(concat_event, ('event_file', show_files),\n inv_sol_workflow, 'inputnode.events_file')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Finally, we connect ``infosource`` and ``inv_sol_workflow`` to\n``morph_stc``.\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "main_workflow.connect(infosource, 'subject_id', morph_stc, 'subject')\nmain_workflow.connect(inv_sol_workflow, 'inv_solution.stc_files',\n morph_stc, 'cond_files')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Run workflow\nNow we are ready to execute our workflow.\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "main_workflow.write_graph(graph2use='colored')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\n### Workflow graph\n\nTake a moment to pause and notice how the connections\nhere correspond to how we connected the nodes.\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "import matplotlib.pyplot as plt # noqa\nimg = plt.imread(op.join(\n data_path, src_reconstruction_pipeline_name, 'graph.png'))\nplt.figure(figsize=(7, 7))\nplt.imshow(img)\nplt.axis('off')\n\nmain_workflow.config['execution'] = {'remove_unnecessary_outputs': 'false'}\nmain_workflow.run(plugin='LegacyMultiProc', plugin_args={'n_procs': NJOBS})" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Results\nThe output of this workflow is the reconstructed neural time series morphed\nto the standard FreeSurfer average subject named ``fsaverage``. The output is\nstored in the workflow directory defined by ``base_dir``. 
To plot the estimated source time series\nwe can use `plot_stc`.\n\n" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.10.8" } }, "nbformat": 4, "nbformat_minor": 0 }