{
"cells": [
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"%matplotlib inline"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"\n\n# Compute graph properties from a given connectivity matrix with the BCT toolbox\n\nThe `inv_ts_to_bct_graph` pipeline computes spectral connectivity over time\nseries and wraps a function from the Brain Connectivity Toolbox (BCT) as an\nalternative to the Radatools toolbox for graph metric computation.\n\nThis workflow chains two pipelines, and therefore\nrequires both graphpype AND ephypype to be installed.\n\nThe **input** data should be a time series matrix in **npy** format.\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"# Authors: David Meunier \n# License: BSD (3-clause)\n# sphinx_gallery_thumbnail_number = 2\nimport os.path as op\nimport nipype.pipeline.engine as pe\nimport nipype.interfaces.io as nio\n\nfrom ephypype.nodes import create_iterator\nfrom ephypype.nodes import get_frequency_band"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Check whether the data are available\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"from graphpype.utils_tests import load_test_data\n\ndata_path = load_test_data(\"data_inv_ts\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"First, we create our workflow and specify the `base_dir`, which tells\nnipype the directory in which to store the outputs.\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"# workflow directory within the `base_dir`\ngraph_analysis_name = 'inv_ts_to_graph_analysis'\n\nmain_workflow = pe.Workflow(name=graph_analysis_name)\nmain_workflow.base_dir = data_path"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We now describe the connectivity parameters in a json file, loaded\nas a dictionary\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"import json  # noqa\nimport pprint  # noqa\n\n# note: in a notebook `__file__` is not defined, so the json file is\n# looked up relative to the current working directory\nparams_file = op.join(op.dirname(\"__file__\"), \"params_connectivity.json\")\nwith open(params_file) as f:\n    data_con = json.load(f)\npprint.pprint({'connectivity parameters': data_con})\n\nfreq_band_names = data_con['freq_band_names']\nfreq_bands = data_con['freq_bands']\n\n# spectral connectivity parameters\ncon_method = data_con['con_method']\nepoch_window_length = data_con['epoch_window_length']\n\n# sampling frequency; when starting from raw MEG (.fif) data, this can be\n# extracted directly from the file info\nsfreq = data_con['sfreq']\n\nfrequency_node = get_frequency_band(freq_band_names, freq_bands)"
]
},
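{
"cell_type": "markdown",
"metadata": {},
"source": [
"For reference, a minimal `params_connectivity.json` could look like the\nfollowing sketch (the field names follow the code above; the values are\nillustrative, not the shipped parameters):\n\n```json\n{\n  \"freq_band_names\": [\"alpha\", \"beta\"],\n  \"freq_bands\": [[8, 12], [13, 29]],\n  \"con_method\": \"coh\",\n  \"epoch_window_length\": 3.0,\n  \"sfreq\": 300\n}\n```\n\n"
]
},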
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Then we create a node that iterates over subjects and frequency bands,\nwhose values will be passed to nipype's DataGrabber\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"subject_ids = ['sub-0003'] # 'sub-0004', 'sub-0006'\ninfosource = create_iterator(['subject_id', 'freq_band_name'],\n [subject_ids, freq_band_names])"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"and a node to grab the data. The `template_args` in this node are filled\nwith the values iterated by the infosource node\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"template_path = '*%s_task-rest_run-01_meg_0_60_raw_filt_dsamp_ica_ROI_ts.npy'\n\ndatasource = pe.Node(\n interface=nio.DataGrabber(infields=['subject_id'], outfields=['ts_file']),\n name='datasource')\n\ndatasource.inputs.base_directory = data_path\ndatasource.inputs.template = template_path\n\ndatasource.inputs.template_args = dict(ts_file=[['subject_id']])\ndatasource.inputs.sort_filelist = True"
]
},
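{
"cell_type": "markdown",
"metadata": {},
"source": [
"To see how the grabbing works: each `%s` in the template is filled, in\norder, with the values listed in `template_args`. A quick illustration in\nplain Python, independent of nipype:\n\n```python\ntemplate = '*%s_task-rest_run-01_meg_0_60_raw_filt_dsamp_ica_ROI_ts.npy'\nprint(template % 'sub-0003')\n# *sub-0003_task-rest_run-01_meg_0_60_raw_filt_dsamp_ica_ROI_ts.npy\n```\n\n"
]
},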
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We then create the ephypype pipeline that computes spectral connectivity\nfrom the time series\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"from ephypype.pipelines import create_pipeline_time_series_to_spectral_connectivity # noqa\n\nspectral_workflow = create_pipeline_time_series_to_spectral_connectivity(\n data_path, con_method=con_method,\n epoch_window_length=epoch_window_length)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Similarly, we describe the graph parameters in a second json file, loaded\nas a dictionary\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"params_file = op.join(op.dirname(\"__file__\"), \"params_bct_graph.json\")\nwith open(params_file) as f:\n    data_graph = json.load(f)\npprint.pprint({'graph parameters': data_graph})\n\n# density of the thresholded connectivity matrix\ncon_den = data_graph['con_den']\n\nfrom graphpype.pipelines import create_pipeline_bct_graph  # noqa\n\ngraph_workflow = create_pipeline_bct_graph(\n    data_path, con_den=con_den)"
]
},
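{
"cell_type": "markdown",
"metadata": {},
"source": [
"For reference, `params_bct_graph.json` only needs the threshold density\nused above; a minimal sketch (the value is illustrative) would be:\n\n```json\n{\n  \"con_den\": 0.05\n}\n```\n\n"
]
},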
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We then connect the nodes two at a time. First, we connect the output\nof the infosource node to the datasource node, so that these two nodes\ntaken together can grab data.\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"main_workflow.connect(infosource, 'subject_id',\n datasource, 'subject_id')\n\nmain_workflow.connect(infosource, 'freq_band_name',\n frequency_node, 'freq_band_name')\n\nmain_workflow.connect(datasource, 'ts_file',\n spectral_workflow, \"inputnode.ts_file\")\n\nspectral_workflow.inputs.inputnode.sfreq = sfreq\n\nmain_workflow.connect(frequency_node, 'freq_bands',\n spectral_workflow, 'inputnode.freq_band')\n\nmain_workflow.connect(spectral_workflow, 'spectral.conmat_file',\n graph_workflow, \"inputnode.conmat_file\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Before running the workflow, we can write out the workflow graph\n(optional)\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"main_workflow.write_graph(graph2use='colored')"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"and visualize it. Take a moment to pause and notice how the connections\ncorrespond to how we connected the nodes.\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"# import matplotlib.pyplot as plt  # noqa\n# img = plt.imread(op.join(data_path, graph_analysis_name, 'graph.png'))\n# plt.figure(figsize=(8, 8))\n# plt.imshow(img)\n# plt.axis('off')\n# plt.show()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Finally, we are now ready to execute our workflow.\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"# keep intermediate outputs on disk\nmain_workflow.config['execution'] = {'remove_unnecessary_outputs': 'false'}\nmain_workflow.run()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The workflow can also be run locally on 2 CPUs in parallel with\n`main_workflow.run(plugin='MultiProc', plugin_args={'n_procs': 2})`\n\nPlotting the k-core values\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"# from graphpype.utils_visbrain import visu_graph  # noqa\n# from visbrain.objects import SceneObj, BrainObj  # noqa\n\n# labels_file = op.join(data_path, \"label_names.txt\")\n# coords_file = op.join(data_path, \"label_centroid.txt\")\n\n# sc = SceneObj(size=(1500, 1500), bgcolor=(1, 1, 1))\n\n# views = [\"left\", \"top\"]\n\n# for nf, freq_band_name in enumerate(freq_band_names):\n\n#     res_path = op.join(\n#         data_path, graph_analysis_name,\n#         \"graph_bct_pipe\",\n#         \"_freq_band_name_\" + freq_band_name + \"_subject_id_sub-0003\")\n\n#     node_k_file = op.join(res_path, \"k_core\", \"coreness.npy\")\n#     bin_mat_file = op.join(res_path, \"bin_mat\", \"bin_mat.npy\")\n\n#     for i_v, view in enumerate(views):\n#         b_obj = BrainObj('B1', translucent=True)\n\n#         sc.add_to_subplot(\n#             b_obj, row=nf, col=i_v, use_this_cam=True, rotate=view,\n#             title=\"K-core nodes for {} band\".format(freq_band_name),\n#             title_size=14, title_bold=True, title_color='black')\n\n#         c_obj, s_obj = visu_graph(\n#             labels_file=labels_file, coords_file=coords_file,\n#             net_file=bin_mat_file, node_size_file=node_k_file)\n\n#         sc.add_to_subplot(c_obj, row=nf, col=i_v)\n#         sc.add_to_subplot(s_obj, row=nf, col=i_v)\n\n# sc.preview()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Plotting the k-core only\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"from graphpype.utils_visbrain import visu_graph_kcore  # noqa\nfrom visbrain.objects import SceneObj, BrainObj  # noqa\n\nlabels_file = op.join(data_path, \"label_names.txt\")\ncoords_file = op.join(data_path, \"label_centroid.txt\")\n\nsc = SceneObj(size=(1500, 1500), bgcolor=(1, 1, 1))\n\nviews = [\"left\", \"top\"]\n\nfor nf, freq_band_name in enumerate(freq_band_names):\n\n    res_path = op.join(\n        data_path, graph_analysis_name,\n        \"graph_bct_pipe\",\n        \"_freq_band_name_\" + freq_band_name + \"_subject_id_sub-0003\")\n\n    node_k_file = op.join(res_path, \"k_core\", \"coreness.npy\")\n    bin_mat_file = op.join(res_path, \"bin_mat\", \"bin_mat.npy\")\n\n    for i_v, view in enumerate(views):\n        b_obj = BrainObj('B1', translucent=True)\n\n        sc.add_to_subplot(\n            b_obj, row=nf, col=i_v, use_this_cam=True, rotate=view,\n            title=\"K-core nodes for {} band\".format(freq_band_name),\n            title_size=14, title_bold=True, title_color='black')\n\n        c_obj, s_obj, c_obj2, s_obj2 = visu_graph_kcore(\n            labels_file=labels_file, coords_file=coords_file,\n            net_file=bin_mat_file, node_size_file=node_k_file)\n\n        sc.add_to_subplot(c_obj, row=nf, col=i_v)\n        sc.add_to_subplot(s_obj, row=nf, col=i_v)\n\n        if c_obj2:\n            sc.add_to_subplot(c_obj2, row=nf, col=i_v)\n\n        sc.add_to_subplot(s_obj2, row=nf, col=i_v)\n\nsc.preview()"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.9"
}
},
"nbformat": 4,
"nbformat_minor": 0
}