{ "cells": [ { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "%matplotlib inline" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\n\n# 02. Compute ERP\nThis workflow mainly call the\n:func:`ephypype pipeline `\ncomputing N170 component from cleaned EEG data. The first Node of the workflow\n(`extract_events_node` Node) extracts the events from raw data. The events\nare saved in the Node directory.\nIn the `ERP_pipeline` the raw data are epoched accordingly to events\nextracted in `extract_events` Node.\nThe evoked datasets are created by averaging the different conditions specified\nin ``json`` file.\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# Authors: Annalisa Pascarella \n# License: BSD (3-clause)\n\n# sphinx_gallery_thumbnail_number = 2" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Import modules\nThe first step is to import the modules we need in the script. We import\nmainly from |nipype| and |ephypype| packages.\n\n.. |nipype| raw:: html\n\n nipype\n\n.. |ephypype| raw:: html\n\n ephypype\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "import os\nimport os.path as op\n\nimport json\nimport pprint # noqa\nimport ephypype\n\nimport nipype.pipeline.engine as pe\nfrom nipype.interfaces.utility import Function\n\nfrom ephypype.nodes import create_iterator, create_datagrabber\nfrom ephypype.pipelines.preproc_meeg import create_pipeline_evoked\nfrom ephypype.datasets import fetch_erpcore_dataset" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let us fetch the data first. It is around 90 MB download.\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "import ephypype\nhome_dir = op.expanduser(\"~\")\n\nbase_path = op.join(home_dir, 'workshop')\n\ntry:\n os.mkdir(base_path)\n\nexcept OSError:\n print(\"directory {} already exists\".format(base_path))\n\ndata_path = fetch_erpcore_dataset(base_path)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Define data and variables\nLet us specify the variables that are specific for the data analysis (the\nmain directories where the data are stored, the list of subjects and\nsessions, ...) and the variable specific for the particular pipeline\n(events_id, baseline, ...) 
{ "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# Read experiment params as json\nparams = json.load(open(\"params.json\"))\npprint.pprint({'parameters': params[\"general\"]})\n\ndata_type = params[\"general\"][\"data_type\"]\nsubject_ids = params[\"general\"][\"subject_ids\"]\nNJOBS = params[\"general\"][\"NJOBS\"]\nsession_ids = params[\"general\"][\"session_ids\"]\n# data_path = params[\"general\"][\"data_path\"]\n\n# ERP params\nERP_str = 'ERP'\npprint.pprint({'ERP': params[ERP_str]})\nevents_id = params[ERP_str]['events_id']\ncondition = params[ERP_str]['condition']\nbaseline = tuple(params[ERP_str]['baseline'])\nevents_file = params[ERP_str]['events_file']\nt_min = params[ERP_str]['tmin']\nt_max = params[ERP_str]['tmax']" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\n## Extract events\nThe first Node of the workflow extracts events from the raw data. The events\nare extracted by applying the function [events_from_annotations](https://mne.tools/stable/generated/mne.events_from_annotations.html)\nof [MNE-Python](https://mne.tools/stable/index.html) to the raw data.\nThe events are saved in the Node directory.\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "def get_events(raw_ica, subject):\n    '''\n    First, we get the ica file from the preprocessing workflow directory,\n    i.e. the cleaned raw data. The events are extracted from the raw\n    annotations and are saved in the Node directory.\n    '''\n    print(subject, raw_ica)\n    import mne\n\n    # the ERP CORE annotations use numeric codes; map them to readable names\n    rename_events = {\n        '201': 'response/correct',\n        '202': 'response/error'\n    }\n\n    for i in range(1, 180 + 1):\n        orig_name = f'{i}'\n\n        if 1 <= i <= 40:\n            new_name = 'stimulus/face/normal'\n        elif 41 <= i <= 80:\n            new_name = 'stimulus/car/normal'\n        elif 101 <= i <= 140:\n            new_name = 'stimulus/face/scrambled'\n        elif 141 <= i <= 180:\n            new_name = 'stimulus/car/scrambled'\n        else:\n            continue\n\n        rename_events[orig_name] = new_name\n\n    raw = mne.io.read_raw_fif(raw_ica, preload=True)\n    events_from_annot, event_dict = mne.events_from_annotations(raw)\n\n    # collect the annotation codes of normal faces and normal cars\n    faces = list()\n    car = list()\n    for key in event_dict.keys():\n        if rename_events[key] == 'stimulus/car/normal':\n            car.append(event_dict[key])\n        elif rename_events[key] == 'stimulus/face/normal':\n            faces.append(event_dict[key])\n\n    # merge all face codes into event id 1 and all car codes into event id 2\n    merged_events = mne.merge_events(events_from_annot, faces, 1)\n    merged_events = mne.merge_events(merged_events, car, 2)\n\n    event_file = raw_ica.replace('.fif', '-eve.fif')\n    mne.write_events(event_file, merged_events)\n\n    return event_file" ] },
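{ "cell_type": "markdown", "metadata": {}, "source": [ "To see what ``mne.merge_events`` does inside ``get_events``, here is a\nself-contained toy example (the event codes 5, 6, 7 and 8 below are made up\nfor illustration and are unrelated to the actual ERP CORE codes).\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "import numpy as np  # noqa\nimport mne  # noqa\n\n# toy events array: one row per event (sample, previous value, event id)\ntoy_events = np.array([[100, 0, 5],\n                       [200, 0, 7],\n                       [300, 0, 6],\n                       [400, 0, 8]])\n\n# merge the two 'face-like' codes into id 1 and the two 'car-like' codes\n# into id 2, exactly as get_events does with the real annotation codes\nmerged = mne.merge_events(toy_events, [5, 6], 1)\nmerged = mne.merge_events(merged, [7, 8], 2)\nprint(merged)" ] },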
{ "cell_type": "markdown", "metadata": {}, "source": [ "## Specify Nodes\n\nBefore creating a workflow, we have to create the [nodes](https://miykael.github.io/nipype_tutorial/notebooks/basic_nodes.html)\nthat define the workflow itself. In this example the main Nodes are\n\n* ``infosource`` is a Node that just distributes values\n* ``datasource`` is a DataGrabber Node that allows the user to define flexible search patterns which can be parameterized by user-defined inputs\n* ``extract_events`` is a Node containing the function ``get_events`` defined above\n* ``ERP_pipeline`` is a Node containing the pipeline created by :func:`~ephypype.pipelines.create_pipeline_evoked`\n\n\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Infosource and Datasource\nWe create the ``infosource`` node that iterates over subjects and sessions,\nand the ``datasource`` node that grabs the corresponding input filenames.\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "infosource = create_iterator(['subject_id', 'session_id'],\n                             [subject_ids, session_ids])" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# ``datasource`` node to grab data. The ``template_args`` in this node\n# iterate over the values in the infosource node\nica_dir = op.join(\n    data_path, 'preprocessing_workflow', 'preproc_eeg_pipeline')\ntemplate_path = \"_session_id_%s_subject_id_%s/ica/sub-%s_ses-%s_*filt_ica.fif\"\ntemplate_args = [['session_id', 'subject_id', 'subject_id', 'session_id']]\ndatasource = create_datagrabber(ica_dir, template_path, template_args)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\n### Extract events Node\nThen, we define the Node that encapsulates the ``get_events`` function\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "extract_events = pe.Node(\n    Function(input_names=['raw_ica', 'subject'],\n             output_names=['event_file'],\n             function=get_events),\n    name='extract_events')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\n### ERP Node\nFinally, we create the ephypype pipeline computing evoked data, which can\nbe connected to the nodes we created.\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "ERP_workflow = create_pipeline_evoked(\n    data_path, data_type=data_type, pipeline_name=\"ERP_pipeline\",\n    events_id=events_id, baseline=baseline,\n    condition=condition, t_min=t_min, t_max=t_max)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Specify Workflows and Connect Nodes\nNow, we create our workflow and specify the ``base_dir``, which tells\nnipype the directory in which to store the outputs.\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "ERP_pipeline_name = ERP_str + '_workflow'\n\nmain_workflow = pe.Workflow(name=ERP_pipeline_name)\nmain_workflow.base_dir = data_path" ] },
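{ "cell_type": "markdown", "metadata": {}, "source": [ "Before wiring the nodes together, it can help to check that the\n``datasource`` template actually matches files on disk. The quick sanity\ncheck below is an illustrative addition (plain Python ``glob``, not part of\nthe nipype machinery): it replaces each ``%s`` placeholder with a wildcard\nand lists the cleaned-ICA files the DataGrabber should find.\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "import glob  # noqa\n\n# turn the DataGrabber template into a plain glob pattern\npattern = op.join(ica_dir, template_path.replace('%s', '*'))\nprint(glob.glob(pattern))" ] },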
{ "cell_type": "markdown", "metadata": {}, "source": [ "We then connect the output of the ``infosource`` node to the input of\n``datasource``. So, these two nodes taken together can grab data.\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "main_workflow.connect(infosource, 'subject_id', datasource, 'subject_id')\nmain_workflow.connect(infosource, 'session_id', datasource, 'session_id')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We connect the outputs of the ``infosource`` and ``datasource`` nodes to the\ninputs of the ``extract_events`` node\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "main_workflow.connect(datasource, 'raw_file', extract_events, 'raw_ica')\nmain_workflow.connect(infosource, 'subject_id', extract_events, 'subject')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Finally, we connect the outputs of the ``infosource``, ``datasource`` and\n``extract_events`` nodes to the inputs of the ``ERP_pipeline`` node.\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "main_workflow.connect(infosource, 'subject_id',\n                      ERP_workflow, 'inputnode.sbj_id')\nmain_workflow.connect(datasource, 'raw_file',\n                      ERP_workflow, 'inputnode.raw')\nmain_workflow.connect(extract_events, 'event_file',\n                      ERP_workflow, 'inputnode.events_file')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Run workflow\nAfter we have specified all the nodes and connections of the workflow, the\nlast step is to run it by calling the ``run`` method. It\u2019s also possible to\ngenerate a static graph representing the nodes and the connections between\nthem by calling the ``write_graph`` method.\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "main_workflow.write_graph(graph2use='colored')  # optional" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Take a moment to pause and notice how the connections\nhere correspond to how we connected the nodes.\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "import matplotlib.pyplot as plt  # noqa\nimg = plt.imread(op.join(data_path, ERP_pipeline_name, 'graph.png'))\nplt.figure(figsize=(6, 6))\nplt.imshow(img)\nplt.axis('off')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Finally, we are now ready to execute our workflow.\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "main_workflow.config['execution'] = {'remove_unnecessary_outputs': 'false'}\n# Run the workflow locally, using NJOBS processes in parallel\nmain_workflow.run(plugin='LegacyMultiProc', plugin_args={'n_procs': NJOBS})" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Plot results\n\n" ] },
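{ "cell_type": "markdown", "metadata": {}, "source": [ "In the cell below, the global field power (GFP) of each evoked response is\ncomputed as the standard deviation across channels at each time point. As a\nself-contained illustration of that reduction (random toy data, added for\nclarity and not part of the original tutorial):\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "import numpy as np  # noqa\n\nrng = np.random.default_rng(0)\ntoy_data = rng.standard_normal((32, 5))  # 32 channels x 5 time points\n\n# one GFP value per time point: spatial standard deviation at each sample\nprint(toy_data.std(axis=0, ddof=0))" ] },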
{ "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "import mne  # noqa\nimport matplotlib.pyplot as plt  # noqa\nfrom ephypype.gather import get_results  # noqa\n\nevoked_files, _ = get_results(main_workflow.base_dir,\n                              main_workflow.name, pipeline='compute_evoked')\n\nfor evoked_file in evoked_files:\n    print(f'*** {evoked_file} ***\\n')\n\n    ave = mne.read_evokeds(evoked_file)\n    faces, car = ave[0], ave[1]\n\n    # GFP: standard deviation across channels at each time point\n    gfp_faces = faces.data.std(axis=0, ddof=0)\n    gfp_car = car.data.std(axis=0, ddof=0)\n\n    # compare conditions\n    contrast = mne.combine_evoked([faces, car], weights=[1, -1])\n    gfp_contrast = contrast.data.std(axis=0, ddof=0)\n\n    # plot the GFP of the two conditions and of their contrast\n    fig, ax = plt.subplots()\n    ax.plot(faces.times, gfp_faces * 1e6, color='blue')\n    ax.plot(car.times, gfp_car * 1e6, color='orange')\n    ax.plot(contrast.times, gfp_contrast * 1e6, color='green')\n    ax.legend(['Faces', 'Car', 'Contrast'])\n    ax.set_xlabel('Time (s)')\n    ax.set_ylabel('GFP (\u00b5V)')\n    fig.show()" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.10.8" } }, "nbformat": 4, "nbformat_minor": 0 }