Home – PyHRF

PyHRF is a Python library for the analysis of fMRI data based on the study of hemodynamics.

Author: Ramon Sanders

Developer documentation

Developer documentation – This documentation summarizes tools and resources for developers.

Developing pyhrf package


Some guidelines are provided in the GitHub wiki. Before writing code, be sure to read at least the git workflow page and the Code Quality page.

Updating Website Documentation

The documentation is written in the reStructuredText markup language and is compiled into a website using Sphinx.

Remark: you need at least version 1.3.1 of Sphinx to build the website (see Installation for installation details).

To modify the website documentation, follow these steps:

  • update the documentation sources in the doc/sphinx/source folder
  • build the documentation with make html (see the example commands after this list)
  • check that the local website looks as expected (open the doc/sphinx/build/html/index.html file)
  • commit your changes to your local repository and create a pull request on the GitHub pyhrf repository (see the GitHub wiki for how)
  • copy the content of the doc/sphinx/build/html/ folder into the website folder on the Inria server (only accessible from the Inria internal network)
  • check the PyHRF website
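For reference, a typical build-and-check sequence might look like this (a sketch assuming the Makefile sits in doc/sphinx, as the paths above suggest):

$ cd doc/sphinx
$ make html
$ xdg-open build/html/index.html    # or open the file in any browser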

TODO

Add access to the website folder from outside INRIA Grenoble, so that the website can be uploaded from outside INRIA.

Releasing


A release is a stable version of the PyHRF package, available as a tagged version on the GitHub repository and installable from the Python Package Index (PyPI) using the pip command.

To upload the package to PyPI, you need to install the twine package. On most GNU/Linux systems you can install it with your package manager, or you can use pip.

To make a new PyHRF release follow these steps:

  • fill in the CHANGELOG.rst file with the latest modifications
  • change the version in setup.py
  • create an annotated tag in the git repository with the git tag -a x.y.z command (x.y.z being the version number)
  • build tar.gz package:
$ python setup.py sdist
  • to upload the built source package to PyPI, run:
$ twine upload dist/*

The last command asks for your username and password the first time you use it (it offers to save the credentials in the $HOME/.pypirc file for later uploads).
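For reference, here is a minimal sketch of what a $HOME/.pypirc file can look like (the username is a placeholder; storing the password is optional):

[distutils]
index-servers = pypi

[pypi]
username = your-pypi-username
; password = stored here only if you choose to save it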

TODO

Remove anything that breaks a wheel build, and produce a wheel build instead of a source build.

Configuration

Configuration – Package options are stored in $HOME/.pyhrf/config.cfg, which is created after the installation.

It handles global package options and the setup of parallel processing. Here is the default content of this file (section order may change):

[global]
write_texture_minf = False          ; compatibility with Anatomist for texture file
tmp_prefix = pyhrftmp               ; prefix used for temporary folders in tmp_path
verbosity = 0                       ; default of verbosity, can be changed with option -v
tmp_path = /tmp/                    ; where to write temporary files
use_mode = enduser                  ; "enduser": stable features only, "devel": +indev features
spm_path = None                     ; path to the SPM matlab toolbox (indev feature)



[parallel-cluster]                  ; Distributed computation on a cluster.
                                    ; Soma-workflow is required.
                                    ; Authentication by ssh keys must be
                                    ; configured in both ways (remote <-> local)
                                    ; e.g. copy the content of ~/.ssh/id_rsa.pub (local machine)
                                    ;    at the end of ~/.ssh/authorized_keys (remote machine)
                                    ;    Also do the converse:
                                    ;    copy the content of ~/.ssh/id_rsa.pub (remote machine)
                                    ;    at the end of ~/.ssh/authorized_keys (local machine)

server_id = None                    ; ID of the soma-workflow-engine server
server = None                       ; hostname or IP address of the server
user = None                         ; user name to log in to the server
remote_path = None                  ; path on the server where data will be stored

[parallel-local]                    ; distributed computation on the local cpu
niceness = 10                       ; niceness of the spawned jobs
nb_procs = 1                        ; number of distributed jobs; better not above
                                    ; the total number of CPUs
                                    ; 'cat /proc/cpuinfo | grep processor | wc -l' on linux
                                    ; 'sysctl hw.ncpu' on MAC OS X
                                    ; Set it to 0 if you want to auto-detect the number of
                                    ; available CPUs and cores (taking into account kernel
                                    ; restrictions such as cgroups, ulimit, ...)

[parallel-LAN]                      ; Distributed computation on a LAN
                                    ; Authentication by ssh keys must be
                                    ; configured
remote_host = None                  ; hostname or IP address of a host on the LAN
niceness = 10                       ; niceness for distributed jobs
hosts = $HOME/.pyhrf/hosts_LAN      ; plain text file containing a comma-separated list of hostnames on the LAN
user = None                         ; user name used to log in on any machine
                                    ; on the LAN
remote_path = None                  ; path readable from the machine where
                                    ; pyhrf is launched (to directly retrieve
                                    ; results)
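As an illustration, the file can be read with Python's standard configparser module; the following minimal sketch is not part of PyHRF's API, it only mirrors the option names listed above:

import os
from configparser import ConfigParser  # Python 3 standard library

cfg = ConfigParser(inline_comment_prefixes=(';',))
cfg.read(os.path.expanduser('~/.pyhrf/config.cfg'))

verbosity = cfg.getint('global', 'verbosity')        # 0 by default
tmp_path = cfg.get('global', 'tmp_path')             # /tmp/ by default
nb_procs = cfg.getint('parallel-local', 'nb_procs')  # 0 means auto-detect
print(verbosity, tmp_path, nb_procs)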

Parcellation mask

Parcellation mask – The JDE framework estimates the HRF parcel-wide. This means that you need a parcellation mask to run the joint detection-estimation. PyHRF provides some tools to generate parcellation masks, and also ships a ready-made one (see next section).

Willard atlas


The Willard atlas comes from Stanford (see the citation below).

To use it, check where it is installed by issuing:

$ pyhrf_list_datafiles

and look for the stanford_willard_parcellation_3x3x3mm.nii.gz file.
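On a Unix-like system you can filter the listing directly, for instance (assuming grep is available):

$ pyhrf_list_datafiles | grep willard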

This parcellation mask has been created from the files distributed by Stanford (above website) with a voxel resolution of 3x3x3mm and a volume shape of 53x63x52 voxels.


If you use this parcellation mask, please cite the following paper:

The citation for the 499 fROI atlas, nicknamed the “Willard” atlas after the two creators, William Shirer and Bernard Ng, is the following publication:

Richiardi J, et al.: Correlated gene expression supports synchronous activity in brain networks. Science (2015).

PyHRF commands

PyHRF commands – fMRI data analysis

pyhrf_jde_vem_analysis

Usage:

pyhrf_jde_vem_analysis [options] TR parcels_file onsets_file bold_data_run1 [bold_data_run2 ...]

Description:

Run an fMRI analysis using the JDE-VEM framework. All arguments (even positional ones) are optional; default values are defined at the beginning of the script.


Options:

positional arguments:
  tr                    Repetition time of the fMRI data
  parcels_file          Nifti integer file which define the parcels
  onsets_file           CSV onsets file
  bold_data_file        4D-Nifti file of BOLD data

optional arguments:
  -h, --help            show this help message and exit
  -c DEF_CONTRASTS_FILE, --def_contrasts_file DEF_CONTRASTS_FILE
                        JSON file defining the contrasts
  -d DT, --dt DT        time resolution of the HRF (if not defined here or in
                        the script, it is automatically computed)
  -l HRF_DURATION, --hrf-duration HRF_DURATION
                        time length of the HRF (in seconds, default: 25.0)
  -o OUTPUT_DIR, --output OUTPUT_DIR
                        output directory (created if needed, default:
                        /home/tperret/code/pyhrf)
  --nb-iter-min NB_ITER_MIN
                        minimum number of iterations of the VEM algorithm
                        (default: 5)
  --nb-iter-max NB_ITER_MAX
                        maximum number of iterations of the VEM algorithm
                        (default: 100)
  --beta BETA           (default: 1.0)
  --hrf-hyperprior HRF_HYPERPRIOR
                        (default: 1000)
  --sigma-h SIGMA_H     (default: 0.1)
  --estimate-hrf        (default: True)
  --no-estimate-hrf     explicitly disable HRFs estimation
  --zero-constraint     (default: True)
  --no-zero-constraint  explicitly disable zero constraint (default enabled)
  --drifts-type DRIFTS_TYPE
                        set the drifts type, can be 'poly' or 'cos' (default:
                        poly)
  -v {DEBUG,10,INFO,20,WARNING,30,ERROR,40,CRITICAL,50}, --log-level {DEBUG,10,INFO,20,WARNING,30,ERROR,40,CRITICAL,50}
                        (default: WARNING)
  -p, --parallel        (default: True)
  --no-parallel         explicitly disable parallel computation
  --save-processing     (default: True)
  --no-save-processing
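As an illustration, a hypothetical invocation combining several of these options (all file names are placeholders):

$ pyhrf_jde_vem_analysis -d 0.5 -o ./vem_results -c contrasts.json -v INFO 2.5 parcellation.nii paradigm.csv bold_run1.nii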

Parcellation tools

pyhrf_parcellate_glm

pyhrf_parcellate_spatial

pyhrf_parcellation_extract

Usage:

pyhrf_parcellation_extract [options] PARCELLATION_FILE PARCEL_ID1 [PARCEL_ID2 ...]

Description:

Extract a sub-parcellation comprising only PARCEL_ID1, PARCEL_ID2, … from the input parcellation PARCELLATION_FILE.

Options:

-h, --help            show this help message and exit
-v VERBOSELEVEL, --verbose=VERBOSELEVEL
                      verbosity level: 0 ('no verbose'), 1 ('minimal
                      verbose'), 2 ('main steps'), 3 ('main function
                      calls'), 4 ('main variable values'), 5 ('detailed
                      variable values'), 6 ('debug (everything)')
-o FILE, --output=FILE
                      Output parcellation file. Default is
                      <input_file>_<PARCEL_IDS>.(nii|gii)
-c, --contiguous      Make output parcel ids be contiguous

Example

The input parcellation parcellation.nii looks like:

[figure: input parcellation (pyhrf_parcellation_extract_input.png)]

$ pyhrf_parcellation_extract parcellation.nii 36 136 -o parcellation_sub.nii

The output parcellation parcellation_sub.nii will look like:

[figure: output parcellation (pyhrf_parcellation_extract_output.png)]

Misc tools

pyhrf_list_datafiles


Usage:

pyhrf_list_datafiles [options]

Description:

This command lists all data files included in the package.

Options

-h, --help       show this help message and exit
-b, --base-name  Display only basenames

Examples:

$ pyhrf_list_datafiles

 /home/user/software/pyhrf/python/pyhrf/datafiles/SPM_v12.mat.gz
 /home/user/software/pyhrf/python/pyhrf/datafiles/SPM_v5.mat.gz
 /home/user/software/pyhrf/python/pyhrf/datafiles/SPM_v8.mat.gz
 /home/user/software/pyhrf/python/pyhrf/datafiles/cortex_occipital_hrf_territories_3mm.nii
 /home/user/software/pyhrf/python/pyhrf/datafiles/cortex_occipital_hrf_territories_convex_hull.tgz
 /home/user/software/pyhrf/python/pyhrf/datafiles/cortex_occipital_right_GWmask_3mm.nii.gz
 /home/user/software/pyhrf/python/pyhrf/datafiles/cortex_occipital_white_surf.gii.gz
 /home/user/software/pyhrf/python/pyhrf/datafiles/dummySmallBOLD.nii.gz
 /home/user/software/pyhrf/python/pyhrf/datafiles/dummySmallMask.nii.gz
 /home/user/software/pyhrf/python/pyhrf/datafiles/paradigm_V4.csv
 /home/user/software/pyhrf/python/pyhrf/datafiles/paradigm_loc.csv
 /home/user/software/pyhrf/python/pyhrf/datafiles/paradigm_loc_a.csv
 /home/user/software/pyhrf/python/pyhrf/datafiles/paradigm_loc_av.csv
 /home/user/software/pyhrf/python/pyhrf/datafiles/paradigm_loc_av_comma.csv
 /home/user/software/pyhrf/python/pyhrf/datafiles/paradigm_loc_av_d.csv
 /home/user/software/pyhrf/python/pyhrf/datafiles/paradigm_loc_c_only.csv
 /home/user/software/pyhrf/python/pyhrf/datafiles/paradigm_loc_cp_only.csv
 /home/user/software/pyhrf/python/pyhrf/datafiles/paradigm_loc_cpcd.csv
 /home/user/software/pyhrf/python/pyhrf/datafiles/real_data_surf_tiny_bold.gii
 /home/user/software/pyhrf/python/pyhrf/datafiles/real_data_surf_tiny_mesh.gii
 /home/user/software/pyhrf/python/pyhrf/datafiles/real_data_surf_tiny_parcellation.gii
 /home/user/software/pyhrf/python/pyhrf/datafiles/real_data_vol_4_regions_BOLD.nii.gz
 /home/user/software/pyhrf/python/pyhrf/datafiles/real_data_vol_4_regions_anatomy.nii.gz
 /home/user/software/pyhrf/python/pyhrf/datafiles/real_data_vol_4_regions_mask.nii.gz
 /home/user/software/pyhrf/python/pyhrf/datafiles/simu.pck
 /home/user/software/pyhrf/python/pyhrf/datafiles/simu_hrf_3_territories.png
 /home/user/software/pyhrf/python/pyhrf/datafiles/simu_hrf_3_territories_8x8.png
 /home/user/software/pyhrf/python/pyhrf/datafiles/simu_hrf_4_territories.png
 /home/user/software/pyhrf/python/pyhrf/datafiles/simu_labels_activated.png
 /home/user/software/pyhrf/python/pyhrf/datafiles/simu_labels_ghost.png
 /home/user/software/pyhrf/python/pyhrf/datafiles/simu_labels_house_sun.png
 /home/user/software/pyhrf/python/pyhrf/datafiles/simu_labels_icassp13.png
 /home/user/software/pyhrf/python/pyhrf/datafiles/simu_labels_invader.png
 /home/user/software/pyhrf/python/pyhrf/datafiles/simu_labels_pacman.png
 /home/user/software/pyhrf/python/pyhrf/datafiles/simu_labels_small_spots_1.png
 /home/user/software/pyhrf/python/pyhrf/datafiles/simu_labels_small_spots_2.png
 /home/user/software/pyhrf/python/pyhrf/datafiles/simu_labels_stretched_1.png
 /home/user/software/pyhrf/python/pyhrf/datafiles/simu_labels_template.png
 /home/user/software/pyhrf/python/pyhrf/datafiles/simu_labels_tiny_1.png
 /home/user/software/pyhrf/python/pyhrf/datafiles/simu_labels_tiny_2.png
 /home/user/software/pyhrf/python/pyhrf/datafiles/simu_labels_tiny_3.png
 /home/user/software/pyhrf/python/pyhrf/datafiles/stanford_willard_parcellation_3x3x3mm.nii.gz
 /home/user/software/pyhrf/python/pyhrf/datafiles/subj0_anatomy.nii.gz
 /home/user/software/pyhrf/python/pyhrf/datafiles/subj0_bold_session0.nii.gz
 /home/user/software/pyhrf/python/pyhrf/datafiles/subj0_parcellation.nii.gz
 /home/user/software/pyhrf/python/pyhrf/datafiles/subj0_single_roi.nii.gz
$ pyhrf_list_datafiles -b

    SPM_v12.mat.gz
    SPM_v5.mat.gz
    SPM_v8.mat.gz
    cortex_occipital_hrf_territories_3mm.nii
    cortex_occipital_hrf_territories_convex_hull.tgz
    cortex_occipital_right_GWmask_3mm.nii.gz
    cortex_occipital_white_surf.gii.gz
    dummySmallBOLD.nii.gz
    dummySmallMask.nii.gz
    paradigm_V4.csv
    paradigm_loc.csv
    paradigm_loc_a.csv
    paradigm_loc_av.csv
    paradigm_loc_av_comma.csv
    paradigm_loc_av_d.csv
    paradigm_loc_c_only.csv
    paradigm_loc_cp_only.csv
    paradigm_loc_cpcd.csv
    real_data_surf_tiny_bold.gii
    real_data_surf_tiny_mesh.gii
    real_data_surf_tiny_parcellation.gii
    real_data_vol_4_regions_BOLD.nii.gz
    real_data_vol_4_regions_anatomy.nii.gz
    real_data_vol_4_regions_mask.nii.gz
    simu.pck
    simu_hrf_3_territories.png
    simu_hrf_3_territories_8x8.png
    simu_hrf_4_territories.png
    simu_labels_activated.png
    simu_labels_ghost.png
    simu_labels_house_sun.png
    simu_labels_icassp13.png
    simu_labels_invader.png
    simu_labels_pacman.png
    simu_labels_small_spots_1.png
    simu_labels_small_spots_2.png
    simu_labels_stretched_1.png
    simu_labels_template.png
    simu_labels_tiny_1.png
    simu_labels_tiny_2.png
    simu_labels_tiny_3.png
    stanford_willard_parcellation_3x3x3mm.nii.gz
    subj0_anatomy.nii.gz
    subj0_bold_session0.nii.gz
    subj0_parcellation.nii.gz
    subj0_single_roi.nii.gz

Multicore computing

Multicore computing – PyHRF can run computations on several cores using the joblib Python package. Joblib is an optional dependency of PyHRF, so if you plan to use multiprocessing you need to install it (refer to the installation page).

Configuration


You can configure the number of cores used by setting the nb_procs option in the PyHRF configuration file. If you want to auto-detect the number of processors and cores that the system can use, set this option to 0.
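For example, to enable auto-detection, the relevant lines of $HOME/.pyhrf/config.cfg would read:

[parallel-local]
nb_procs = 0                        ; 0 = auto-detect available CPUs and cores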

Use


When calling the pyhrf_jde_vem_analysis script, multiprocessing is enabled by default. If you want to disable it, use the --no-parallel command line option.

File format specification

File format specification – Paradigm CSV format

To encode the experimental paradigm, the CSV file format comprises 5 columns:

  1. Session number (integer)
  2. Experimental Condition, as a string with double quotes
  3. Onset (float, in s.)
  4. Duration (float, in s.); 0 for an event-related paradigm
  5. Amplitude (float, default is 1.)

Example (extracted from pyhrf/python/datafiles/paradigm_loc.csv):

Session   Experimental Condition   Onset        Duration
0         calculaudio              35.400000    0.000000
0         clicDaudio               143.400000   0.000000
0         clicDaudio               162.000000   0.000000
0         clicDaudio               230.000000   0.000000
0         clicDvideo               18.000000    0.000000
0         clicDvideo               69.000000    0.000000
0         clicDvideo               227.700000   0.000000

NB: Be careful not to end a float onset or duration with a trailing point (e.g. 35. instead of 35.0), because it could result in Python failing to split the paradigm correctly (end with a zero if necessary).
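As an illustration, such a file can be parsed with Python's standard csv module; this minimal sketch (with a hypothetical paradigm.csv following the five-column layout above) is not PyHRF's own loader:

import csv

with open('paradigm.csv') as f:  # hypothetical comma-separated paradigm file
    for row in csv.reader(f):
        session = int(row[0])
        condition = row[1]
        onset = float(row[2])
        duration = float(row[3])
        amplitude = float(row[4]) if len(row) > 4 else 1.0  # default amplitude is 1
        print(session, condition, onset, duration, amplitude)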

Contrasts JSON format


To encode contrast definitions, use the following JSON format:

{
    "contrast_name": "linear combination of conditions",
    "Audio-Video": "clicDaudio-clicDvideo"
}

The names used in the contrast definitions MUST correspond to the experimental conditions (case sensitive) defined in the paradigm CSV file described above.
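As an illustration, the following minimal sketch loads such a file and checks that every token in the contrast expressions is a known condition (the condition names come from the example paradigm above; this helper is hypothetical, not part of PyHRF):

import json
import re

conditions = {'calculaudio', 'clicDaudio', 'clicDvideo'}  # from the paradigm CSV

with open('contrasts.json') as f:  # hypothetical file with real entries such as
    contrasts = json.load(f)       # {"Audio-Video": "clicDaudio-clicDvideo"}

for name, expression in contrasts.items():
    for token in re.findall(r'[A-Za-z_]\w*', expression):
        if token not in conditions:
            raise ValueError('unknown condition %r in contrast %r' % (token, name))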

Visualization tools

Visualization tools – This page sums up some advice on how to visualize PyHRF results.

NeuroElf


For exploring the results, you can use the NeuroElf viewer with small modifications of the code to allow better visualization of HRFs and results. NeuroElf is a Matlab toolbox, so you will need Matlab installed on your system.

Installation

Go to the NeuroElf page and follow instructions for NeuroElf installation.

For easier use of the tool, you can modify part of the code for better HRF and PPMs visualizations. The modifications for PyHRF are available here for download (just replace the original files).

If you prefer (for advanced use), you can also download the patches.

Use

  1. (Optional) Run NeuroElf from Matlab, then open the anatomical file from the File menu.
  2. Run NeuroElf from Matlab, then open the jde_vem_hrf_mapped.nii file from the File menu.
  3. From the File menu, select the Set underlay object item and select either the loaded anatomical file or the one provided by NeuroElf.
  4. From the File menu, select the Open file as stats item and load all the jde_vem_ppm_a_nrl_condition.nii files at once.
  5. You can then select from the left menu the condition you want to visualize (you can tweak the threshold in the right panel if needed).

Nilearn


For global visualization of PyHRF results, we advise using the Nilearn plotting functions.

We will provide some examples of plotting PPMs as well as HRFs from active parcels.
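In the meantime, here is a minimal sketch using standard Nilearn plotting functions (the PPM file name is an assumption following the jde_vem_ppm_a_nrl_condition.nii pattern mentioned in the NeuroElf section):

from nilearn import plotting

# Plot one posterior probability map produced by pyhrf_jde_vem_analysis
display = plotting.plot_stat_map('jde_vem_ppm_a_nrl_audio.nii',  # assumed output name
                                 threshold=0.95,
                                 title='PPM - audio condition')
plotting.show()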

PyHRF Viewer

An old viewer for pyhrf exists that you can try (it is neither supported nor recommended anymore).

Variational (VEM) analysis

Variational (VEM) analysis – To run a JDE-VEM analysis you need:

  • a paradigm CSV file corresponding to your experimental design
  • the preprocessed (spatially UNsmoothed) BOLD data associated with a single run [4D-Nifti file]
  • the TR (in s) used at acquisition time
  • a parcellation mask (if you normalize your data into MNI space, you can use the one provided with pyhrf)

If you want to use the provided parcellation mask, check the path of the file by running:

$ pyhrf_list_datafiles

and check for the stanford_willard_parcellation_3x3x3mm.nii.gz file.

Then you can run:

$ pyhrf_jde_vem_analysis 2.5 stanford_willard_parcellation_3x3x3mm.nii.gz paradigm.csv bold_data_session1.nii

If you want to tune the parameters, check the command line options (see commands). It is advised to set the dt parameter (the temporal resolution of the HRF) and the output folder.
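For instance, adding the HRF temporal resolution and an output folder to the earlier command (the values are illustrative):

$ pyhrf_jde_vem_analysis -d 0.5 -o ./vem_output 2.5 stanford_willard_parcellation_3x3x3mm.nii.gz paradigm.csv bold_data_session1.nii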

Example:

check out the paths of the files subj0_bold_session0.nii.gz, subj0_parcellation.nii.gz and paradigm_loc.csv using:

$ pyhrf_list_datafiles

then run:

$ pyhrf_jde_vem_analysis 2.4 subj0_parcellation.nii.gz paradigm_loc.csv subj0_bold_session0.nii.gz

replacing the files by their full paths. This will create output files in the current folder. Check Visualization tools for how to inspect the output files.

  • The multirun extension is currently under development and testing

Scientific issues

Scientific issues – PyHRF aims to provide advanced tools for single-subject analysis of functional Magnetic Resonance Imaging (fMRI) data acquired during an experimental paradigm (i.e. not resting-state). The core idea of PyHRF is to estimate the dynamics of brain activation by recovering the so-called Hemodynamic Response Function (HRF) in a spatially varying manner. To this end, PyHRF implements two different approaches:

  1. a voxel-wise and condition-wise HRF estimation [1];
  2. a parcel-wise spatially adaptive joint detection-estimation (JDE) algorithm [2,3].

The second approach is more powerful since it jointly addresses (i) the localization of evoked brain activity in response to external stimulation and (ii) the estimation of parcel-wise hemodynamic filters. To this end, a parcellation (either functional or anatomical) has to be provided as an input parameter.

This tool hence provides interesting perspectives for understanding HRF differences between brain regions as well as between individuals and populations (infants, children, adults, patients …). Within the classical workflow of fMRI data analysis (preprocessing, estimation, statistical inference), PyHRF takes place at the estimation stage. However, we have also implemented posterior probability maps (PPMs) to allow the user to perform statistical inference at the subject level.

For the sake of computational efficiency, the variational expectation-maximization (VEM) algorithm [3] is used as the default algorithm for computing the solution. The results generated in this context are the following:

  • the 3D effect size maps associated with each experimental condition;
  • the 3D contrast maps specified by the user;
  • the time series describing the HRFs for the set of all brain regions (4D volume).

The analysis can also be performed on the cortical surface from projected BOLD signals; it then produces functional textures to be displayed on the input cortical mesh.

[1] P. Ciuciu, J.-B. Poline, G. Marrelec, J. Idier, Ch. Pallier, and H. Benali, "Unsupervised robust non-parametric estimation of the hemodynamic response function for any fMRI experiment", IEEE Trans. Medical Imaging; 22(10):1235-1251, Oct 2003.

[2] T. Vincent, L. Risser and P. Ciuciu, "Spatially adaptive mixture modeling for analysis of within-subject fMRI time series", IEEE Trans. Medical Imaging; 29(4):1059-1074, Apr 2010.

[3] L. Chari, T. Vincent, F. Forbes, M. Dojat and P. Ciuciu, “Fast joint detection-estimation of evoked brain activity in event-related fMRI using a variational approach”, IEEE Trans. Medical Imaging; 32(5):821-837, May 2013.

Package overview

pyhrf is mainly written in Python, with some C extensions that handle the computationally intensive parts of the algorithms. The package relies on classical scientific libraries: numpy, scipy and matplotlib, as well as Nibabel to handle input/output and NiPy, which provides tools for functional data analysis.


pyhrf can be used in a stand-alone fashion and provides a set of simple commands in a modular fashion. The setup process is handled through XML files, which the user can adapt from a set of templates. This format was chosen for its hierarchical organisation, which suits the nested nature of the algorithm parametrisations. A dedicated XML editor with a PyQt graphical interface is provided for quicker editing and a better review of the processing parameters. When such an XML setup file is generated ab initio, it defines a default analysis involving a small real data set shipped with the package. This allows for quick testing of the algorithms and is also used for demonstration purposes.

An artificial fMRI data generator is provided, with which the user can test the behaviour of the algorithms under different activation configurations, HRF shapes, nuisance types (noise, low-frequency drifts) and paradigms (slow/fast event-related or block designs).

Concerning the analysis process, which can be computationally intensive, pyhrf supports parallel computing on multi-core computers.

Changelog

Changelog – Current development

Release 0.4.3

2016/07/08

Fixes

  • Remove non-existing tests in devel mode
  • Correct typo in documentation
  • Correct list display in documentation
  • Fix bug with numpy >= 1.11.1
  • Fix bug in contrasts computation
  • Clean tmp folders after some unit tests
  • Fix VEM script example

Release 0.4.2

2016/07/07

Fixes

  • Fix VEM algorithm
    • fix convergence criteria computation
    • fix underflow and overflow in labels expectation (set labels to previous value if necessary)
  • Continue to clean setup.py
  • fix some DeprecationWarning that will become Exceptions in the future
  • fix detection of parcellation files
  • fix for scikit-learn version = 0.17
  • fix bugs with matplotlib versions 1.4
  • fix bug with Pillow latest version (see #146)
  • fix bug with numpy when installing in virtual environment (see commit a971656)
  • fix the zero constraint on HRF borders

Enhancements

  • optimize some functions in vem_tools
  • rewrite and optimize all VEM steps
  • remove old calls to the verbose module and replace them with the standard library logging module
  • update website documentation

New

  • Updating documentation
    • updating theme
    • fixing some reST and display errors
  • auto-detect the number of CPUs (mainly for use on clusters; not yet documented)
  • add covariance regularization matrix
  • load contrasts from SPM.mat
  • save contrasts in the same order as in the XML configuration file
  • compute and save PPMs
  • Add multisession support for VEM BOLD and ASL
  • add cosine drifts to VEM
  • add commandline for VEM
  • add Stanford Willard Parcellation

Release 0.4.1.post1

Fixes:

  • missing function (#135)
  • nipy version required for installation (#134)

Release 0.4.1

Fixes:

  • logging level not set by command line (#113)
  • error with VEM algorithm (#115)

Enhancements:

  • clean and update setup.py (#84)
  • update travis configuration file (#123)

Release 0.4

2015/03/19

API Changes:

  • Deprecate the verbose module and implement the logging module instead

Fixes:

  • clean up setup.py

Enhancements
