Neuropycon is based on Nipype, and its wrapping mechanism keeps it open to contributions from anyone.
We provide a tutorial on how to wrap a Python package of your choice, based on the explanation provided on the Nipype website.
We provide here an example with the BCT toolbox, a package widely used in the neuroscience community for computing graph-theoretical metrics. BCT was originally a set of Matlab functions, but a Python version, called bctpy, also exists.
Here we provide an example of wrapping a single function and incorporating it in a new pipeline, in order to assess how long such a wrap takes. By itself, wrapping a single function (the K-core computation, one of the measures available in BCT but not in Radatools) took less than half an hour. A rough estimate of the wrap of the K-core function of the bctpy package amounts to ~50 lines of code (see interfaces/bct/bct.py in the graphpype project on GitHub).
The source code of the corresponding wrapped node:
import os
import numpy as np

from nipype.interfaces.base import BaseInterface, \
    BaseInterfaceInputSpec, traits, File, TraitedSpec

from bct import (kcoreness_centrality_bu, kcoreness_centrality_bd)


class KCoreInputSpec(BaseInterfaceInputSpec):

    np_mat_file = File(
        exists=True,
        desc='numpy matrix N*N to apply the data to',
        mandatory=True)

    is_directed = traits.Bool(
        False, usedefault=True, desc="Is the matrix directed?")


class KCoreOutputSpec(TraitedSpec):

    coreness_file = File(
        exists=True,
        desc="coreness vector file")

    distrib_k_file = File(
        exists=True,
        desc="distrib_k vector file")


class KCore(BaseInterface):
    """
    Description:

    Compute K-core; wraps kcoreness_centrality_bu and
    kcoreness_centrality_bd from bctpy

    Inputs:

        np_mat_file:
            type = File, exists=True,
            desc='numpy matrix N*N to apply the data to',
            mandatory=True

        is_directed:
            type = Bool, default=False,
            desc="Is the matrix directed?"

    Outputs:

        coreness_file:
            type = File, exists=True,
            desc="coreness vector file"

        distrib_k_file:
            type = File, exists=True,
            desc="distrib_k vector file"
    """
    input_spec = KCoreInputSpec
    output_spec = KCoreOutputSpec

    def _run_interface(self, runtime):

        np_mat_file = self.inputs.np_mat_file
        is_directed = self.inputs.is_directed

        # loading data
        np_mat = np.load(np_mat_file)

        # running bctpy
        if is_directed:
            coreness, distrib_k = kcoreness_centrality_bd(np_mat)
        else:
            coreness, distrib_k = kcoreness_centrality_bu(np_mat)

        # saving results as .npy files in the node working directory
        np.save(os.path.abspath("coreness.npy"), coreness)
        np.save(os.path.abspath("distrib_k.npy"), distrib_k)

        return runtime

    def _list_outputs(self):

        outputs = self._outputs().get()
        outputs["coreness_file"] = os.path.abspath("coreness.npy")
        outputs["distrib_k_file"] = os.path.abspath("distrib_k.npy")
        return outputs
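For readers unfamiliar with the measure itself, the coreness computed by the wrapped bctpy functions can be illustrated with a toy reimplementation in plain numpy (for illustration only; it is not the bctpy code and makes the usual assumptions of a binary, undirected adjacency matrix without self-loops):

```python
import numpy as np


def coreness_bu(adj):
    """Toy coreness for a binary undirected adjacency matrix:
    the coreness of a node is the largest k such that the node
    survives iterative removal of all nodes with degree < k."""
    n = adj.shape[0]
    coreness = np.zeros(n, dtype=int)
    for k in range(1, n + 1):
        alive = np.ones(n, dtype=bool)
        while True:
            # degrees within the surviving subgraph
            deg = adj[np.ix_(alive, alive)].sum(axis=1)
            weak = deg < k
            if not weak.any():
                break
            idx = np.flatnonzero(alive)
            alive[idx[weak]] = False
        coreness[alive] = k
        if not alive.any():
            break
    return coreness


# triangle (nodes 0-2) plus a pendant node 3 attached to node 0
adj = np.array([[0, 1, 1, 1],
                [1, 0, 1, 0],
                [1, 1, 0, 0],
                [1, 0, 0, 0]])
print(coreness_bu(adj))  # triangle nodes -> 2, pendant node -> 1
```

On this example the three triangle nodes belong to the 2-core, while the pendant node only belongs to the 1-core, matching the intuition that coreness identifies densely interconnected subgraphs.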
Incorporating the node in a functional pipeline (after the matrix computation and thresholding) took another hour, and corresponds to another ~10 lines of code:
import nipype.pipeline.engine as pe
import nipype.interfaces.utility as niu

# ComputeNetList and KCore (defined above) are graphpype interfaces


def create_pipeline_bct_graph(
        main_path, pipeline_name="graph_bct_pipe", con_den=1.0):
    """
    Description:

    Pipeline for computing K-core based graph properties.
    Threshold is density-based.

    Inputs (inputnode):

        * conmat_file
    """
    pipeline = pe.Workflow(name=pipeline_name)
    pipeline.base_dir = main_path

    # input node
    inputnode = pe.Node(niu.IdentityInterface(
        fields=['conmat_file']),
        name='inputnode')

    # compute binary version of the connectivity matrix,
    # thresholded at the given density
    bin_mat = pe.Node(interface=ComputeNetList(export_np_bin=True,
                                               density=con_den),
                      name="bin_mat")

    pipeline.connect(inputnode, 'conmat_file',
                     bin_mat, 'Z_cor_mat_file')

    # compute K-core
    k_core = pe.Node(
        interface=KCore(),
        name="k_core")

    k_core.inputs.is_directed = False

    pipeline.connect(bin_mat, 'np_bin_mat_file',
                     k_core, 'np_mat_file')

    return pipeline
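The density-based thresholding performed upstream of the K-core node can also be sketched in plain numpy. The sketch below is a toy illustration of the general technique (keep the strongest connections until the requested edge density is reached); the actual behavior of ComputeNetList may differ in its details:

```python
import numpy as np


def binarize_by_density(con_mat, con_den):
    """Keep the strongest connections of a symmetric weighted matrix
    so that the binary result has the requested density, i.e. the
    given fraction of all possible (undirected, non-self) edges."""
    n = con_mat.shape[0]
    # consider only the upper triangle (undirected, no self-loops)
    iu = np.triu_indices(n, k=1)
    weights = con_mat[iu]
    n_keep = int(round(con_den * len(weights)))
    # indices of the n_keep strongest connections
    order = np.argsort(weights)[::-1]
    keep = order[:n_keep]
    bin_mat = np.zeros_like(con_mat, dtype=int)
    bin_mat[iu[0][keep], iu[1][keep]] = 1
    bin_mat += bin_mat.T  # symmetrize
    return bin_mat


rng = np.random.default_rng(0)
mat = rng.random((10, 10))
mat = (mat + mat.T) / 2  # make it symmetric
bin_mat = binarize_by_density(mat, 0.2)
# 10 nodes -> 45 possible edges; density 0.2 keeps 9 of them
print(bin_mat.sum() // 2)
```

With con_den=1.0, as in the pipeline's default, all connections are kept and the binarization reduces to a simple non-zero mask.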