
INTERNATIONAL WORKSHOP

ON

BRAIN-INSPIRED COMPUTING

 

Computational models, algorithms and applications

 

 

 

Cetraro, Italy, June 12-16, 2017

 

 

 

 

Final Programme

 

Updated May 30th, 2017

 

 

Programme and Organisation Committee

 

 

         

Katrin Amunts

(Germany)

Lucio Grandinetti

(Italy)

Thomas Lippert

(Germany)

Nicolai Petkov

(The Netherlands)

 


 

 

 Sponsors

 

 

 

 

 

CRAY

 

 

IBM (t.b.c.)

 

 

ICAR CNR

 

 

INTEL (t.b.c.)

 

 

JUELICH SUPERCOMPUTING CENTER, Germany


 

 

Human Brain Project

 

 

ParTec


 

 

University of Calabria, Italy


 

 

Dipartimento di Ingegneria dell’Innovazione – Università del Salento


 

 

 

Speakers

 

 

25 invited keynote speakers will participate. So far the following speakers – among others – have confirmed their participation: Thomas Lippert, Tianzi Jiang, Katrin Amunts, Thomas Schulthess, Francesco Pavone, Nicolai Petkov, Thomas Sterling, Adrian Tate.

 

 

 

 

Proceedings

 

 

Refereed papers presented at the Workshop will be published as a Proceedings Volume in the Springer series Lecture Notes in Computer Science.

Interested authors are kindly invited to submit to the Programme Committee, in advance or during the workshop, the title and a one-page abstract of their intended contribution.

 

 

 

 

Workshop Agenda

 

Monday, June 12th

Session

Time

Speaker/Activity

 

9:45 – 10:00

Welcome Address

Session I

 

Brain structure and function: a neuroscience perspective I

 

10:00 – 10:45

D. PLEITER

New HPC architectures and technologies for brain research

 

10:45 – 11:30

T. Jiang

The Brainnetome Atlas of Language and its Inspiration for Natural Language Processing

 

11:30 – 12:00

COFFEE BREAK

 

12:00 – 12:30

DISCUSSION AND CONCLUDING REMARKS

Session II

 

Brain structure and function: a neuroscience perspective II

 

18:00 – 18:30

T. MANOS

Improving long lasting anti-kindling effects via coordinated reset stimulation frequency mild modulation

 

18:30 – 19:00

COFFEE BREAK

 

19:00 – 19:30

M. CANNATARO

Methods and techniques for recognizing emotions: sentiment analysis and biosignal analysis with applications in neurosciences

 

19:30 – 19:45

DISCUSSION AND CONCLUDING REMARKS

 

 

Tuesday, June 13th

 

Session

Time

Speaker/Activity

Session III

 

Brain models, simulation and brain inspired computing

 

9:30 – 10:00

F. PAVONE

A multivariate analysis of a brain disease

 

10:00 – 10:30

J. JITSEV

What computations are ‘brain-inspired’? - A view on neural information processing, functionality and learning

 

10:30 – 11:00

M. MIGLIORE

The implementation of brain-inspired cognitive architectures using large-scale realistic computational models

 

11:00 – 11:30

COFFEE BREAK

 

11:30 – 12:00

S. KUNKEL

Extremely scalable simulation code for spiking neuronal networks

 

12:00 – 12:30

DISCUSSION AND CONCLUDING REMARKS

Session IV

 

Representation Learning, Machine Learning

 

17:00 – 17:30

P. CARLONI

Multiscale simulations of key molecular events in G-protein coupled receptors-based neuronal cascades

 

17:30 – 18:00

N. PETKOV

Representation learning with trainable COSFIRE filters

 

18:00 – 18:30

M. BIEHL

Lifelong (machine) learning of drifting concepts

 

18:30 – 19:00

COFFEE BREAK

 

19:00 – 19:30

N. STRISCIUGLIO

Bio-inspired representation learning in pattern recognition

 

19:30 - 19:45

DISCUSSION AND CONCLUDING REMARKS

 

 

Wednesday, June 14th

 

Session

Time

Speaker/Activity

Session V

 

Building infrastructures for human brain research

 

9:30 – 10:00

T. LIPPERT

t.b.a.

 

10:00 – 10:30

T. STERLING

A Non von Neumann Architecture for General Neuromorphic Computing

 

10:30 – 11:00

S. YATES

Modelling spiking multi-compartment neural networks at exascale

 

11:00 – 11:30

COFFEE BREAK

 

11:30 – 12:00

K. Amunts

Decoding the multi-level brain organization - scientific and computational challenges

 

12:00 – 12:30

DISCUSSION AND CONCLUDING REMARKS

Session VI

 

 Some Philosophical Issues

 

17:30 – 18:00

D. MANDIC

Complexity Science for Physiological Data

 

18:00 – 18:30

COFFEE BREAK

 

18:30 – 19:00

G. GALLO and C. STANCATI

The Future of the Mind: Some (Past and Present) Philosophical Issues

 

19:00 – 20:00

 

PANEL DISCUSSION

 

Supercomputing the brain

 

For the next 5 to 10 years, where do we want to go? What do we expect? Which ethical problems will occur? Should humans be afraid of beyond-exascale computers? Will future supercomputers simulate consciousness?

The discussion aims to stimulate debate and exchange among researchers from different areas of knowledge.

 

 

 

Thursday, June 15th

 

Session

Time

Speaker/Activity

Session VII

 

TUTORIAL I

 

9:30 – 11:00

D. MANDIC

Hearables: In-ear EEG and vital signs monitoring of the state of body and mind

 

11:00 – 11:30

COFFEE BREAK

Session VIII

 

TUTORIAL II

 

11:30 – 13:00

M. BIEHL

Prototype-based systems in machine learning

 

 

Friday, June 16th

 

Session

Time

Speaker/Activity

Session IX

 

GROUP OF INTEREST MEETINGS

 

9:30 – 11:00

N. PETKOV

2D Gabor functions for modeling simple and complex cells in visual cortex. Use in image processing and computer vision 

 

11:00 – 11:30

COFFEE BREAK

Session X

11:30 – 13:00

GROUP OF INTEREST MEETINGS

 

 

 

 

Chairmen

 

 

SESSIONS I & II

 

Nicolai Petkov

University of Groningen

THE NETHERLANDS

 

 

SESSION III

 

Tianzi Jiang

Brainnetome Center, Institute of Automation

Chinese Academy of Sciences

Beijing

CHINA

 

 

SESSION IV

 

Michele Migliore

Institute of Biophysics, National Research Council, Palermo

ITALY

and

Department of Neuroscience

Yale University School of Medicine

New Haven, CT

USA

 

 

SESSIONS V & VI

 

Dirk Pleiter

Forschungszentrum Juelich

GERMANY

 

 

 

 

 

ABSTRACTS

 

Decoding the multi-level brain organization – scientific and computational challenges

 

Katrin Amunts

Forschungszentrum Jülich, Germany

 

The human brain has a multi-level organisation and high complexity. New approaches are necessary to decode the brain with its 86 billion nerve cells, which form complex networks. For example, 3D Polarized Light Imaging elucidates the connectional architecture of the brain at the level of axons while keeping the topography of the whole organ; it results in data sets of several petabytes per brain, which should remain actively accessible while minimizing their transport. Such ultra-high-resolution models therefore pose massive challenges in terms of data processing, visualisation and analysis. The Human Brain Project creates a cutting-edge European infrastructure to address such challenges, including cloud-based collaboration and development platforms with databases, workflow systems, petabyte storage, and supercomputers, opening new perspectives for decoding the human brain.

Back to Session V

Lifelong (machine) learning of drifting concepts

 

Michael Biehl

Johann Bernoulli Institute for Mathematics and Computer Science

University of Groningen, The Netherlands

 

Most frequently, machine learning frameworks comprise two different stages. First, in a training phase, a given set of example data is analysed, information is extracted, and a corresponding hypothesis is parameterized in terms of, say, a classifier or regression system. In a subsequent working phase, this hypothesis is then applied to novel data.

 

For many practical applications of machine learning this separation is convenient and appears natural. A by-now classical example is the automated classification of handwritten digits by a neural network that has previously been trained on a large number of labeled input examples.

 

Obviously, the conceptual and temporal separation of training and working phase is not a very plausible assumption for human and other biological learning processes.  Moreover, it becomes inappropriate if the actual task of learning, e.g. the target rule in a classification problem, changes continuously in time. In such a situation, the learning system must be able to detect and track the concept drift, i.e. forget irrelevant, older information while continuously adapting to more recent inputs.

 

In this contribution we present a mathematical model of learning drifting concepts in prototype-based classifiers trained on high-dimensional data. Methods borrowed from statistical physics allow for the study of the typical learning dynamics of different training strategies in the presence of various drift scenarios. The mathematical framework is outlined, and first results are presented and discussed.
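For illustration only, the drift-tracking idea described above can be sketched in a few lines of Python: an online LVQ1 learner whose constant learning rate acts as an implicit forgetting factor. The rotating-means data stream, the learning rate, and the initialization are illustrative assumptions, not material from the talk.

```python
import numpy as np

rng = np.random.default_rng(0)

def lvq1_online(stream, prototypes, labels, eta=0.05):
    # Online LVQ1: the constant learning rate eta acts as an implicit
    # forgetting factor, so prototypes can track a drifting target rule.
    for x, y in stream:
        d = np.linalg.norm(prototypes - x, axis=1)
        w = int(np.argmin(d))                 # winner-takes-all prototype
        sign = 1.0 if labels[w] == y else -1.0
        prototypes[w] += sign * eta * (x - prototypes[w])
    return prototypes

def drifting_stream(T=4000):
    # Toy drifting concept: two class means slowly rotating in the plane.
    for t in range(T):
        ang = 0.001 * t                       # slow, continuous drift
        mean = np.array([np.cos(ang), np.sin(ang)])
        y = int(rng.integers(2))
        x = (mean if y == 0 else -mean) + 0.3 * rng.normal(size=2)
        yield x, y

# Start the prototypes on the initial class means and let them track.
protos = np.array([[1.0, 0.0], [-1.0, 0.0]])
protos = lvq1_online(drifting_stream(), protos, np.array([0, 1]))
```

After the stream ends, each prototype lags the rotating class mean only slightly, which is the "forget old, adapt to recent" behaviour the abstract describes.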

 

Back to Session IV

Prototype-based systems in machine learning

(tutorial)

 

Michael Biehl

Johann Bernoulli Institute for Mathematics and Computer Science

University of Groningen, The Netherlands

 

An overview is given of prototype-based systems in machine learning. In this framework, observations, i.e., data, are stored in terms of typical representatives.

Together with a suitable measure of similarity, such systems can be employed in the context of unsupervised and supervised analysis of potentially high dimensional, complex datasets. We discuss basic schemes of unsupervised Vector Quantization (VQ) as well as the so-called Neural Gas (NG) approach and Kohonen’s topology-preserving Self-Organizing Map (SOM).

Supervised learning in prototype systems is exemplified in terms of Learning Vector Quantization (LVQ). Most frequently, the familiar Euclidean distance serves as a dissimilarity measure. We present extensions of the framework to nonstandard measures and give an introduction to the use of adaptive distances in so-called relevance learning schemes.
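As a minimal sketch of the relevance-learning idea mentioned above, the following Python fragment implements a simplified RLVQ-style update: a nearest-prototype classifier under an adaptive diagonal metric whose relevance weights are learned alongside the prototypes. The toy data, parameter values, and the particular update heuristic are illustrative assumptions, not material from the tutorial.

```python
import numpy as np

rng = np.random.default_rng(1)

def rlvq_step(x, y, protos, labels, lam, eta_w=0.05, eta_l=0.02):
    # Nearest prototype under an adaptive diagonal metric:
    # d(x, w) = sum_i lam[i] * (x[i] - w[i])**2
    d = (lam * (protos - x) ** 2).sum(axis=1)
    w = int(np.argmin(d))
    if labels[w] == y:
        protos[w] += eta_w * lam * (x - protos[w])   # attract correct winner
        lam -= eta_l * (x - protos[w]) ** 2          # high-variance dims lose relevance
    else:
        protos[w] -= eta_w * lam * (x - protos[w])   # repel wrong winner
        lam += eta_l * (x - protos[w]) ** 2
    np.clip(lam, 1e-6, None, out=lam)
    lam /= lam.sum()                                 # keep relevances normalized

# Dimension 0 separates the two classes; dimension 1 is pure noise.
protos = np.array([[1.0, 0.0], [-1.0, 0.0]])
labels = [0, 1]
lam = np.array([0.5, 0.5])
for _ in range(1000):
    y = int(rng.integers(2))
    x = np.array([1.0 - 2.0 * y + 0.2 * rng.normal(), rng.normal()])
    rlvq_step(x, y, protos, labels, lam)
```

After training, the relevance of the noisy dimension collapses towards zero while the discriminative dimension dominates the metric, which is the point of adaptive distances in relevance learning.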

 

 

To a large extent, this tutorial will be along the lines of our review article:

 

Michael Biehl, Barbara Hammer, Thomas Villmann

Prototype-based methods in machine learning

Advance Review, WIREs Cognitive Science 2016

available online:   http://onlinelibrary.wiley.com/doi/10.1002/wcs.1378/abstract

doi: 10.1002/wcs.1378

 

Back to Session VIII

Methods and techniques for recognizing emotions: sentiment analysis and biosignal analysis with applications in neurosciences

 

Chiara Zucco, Barbara Calabrese, Mario Cannataro

Department of Medical and Surgical Sciences

Data Analytics Research Center

University “Magna Græcia” of Catanzaro

88100 Catanzaro, ITALY

{czucco, calabreseb, cannataro}@unicz.it

 

Feelings and emotions are related to biological, social and cognitive aspects of each person’s life. With the advent of wearable devices and social networking platforms, people began to monitor their lives on a daily basis not only by recording physical signals (such as heart rate, steps, etc.), but also by expressing their emotions through text, images, video, audio, emoticons and tags.

Deriving meaning from this vast amount of data is therefore a topic that, in recent years, has received growing interest in both industrial and academic research. In particular, new computational technologies such as sentiment analysis and affective computing have found applications in many fields of knowledge, such as marketing, politics, social sciences, cognitive sciences, and medical sciences. These technologies aim to automatically extract emotions from heterogeneous data such as text, images, audio and video, and from a plethora of biosignals such as voice, facial expression, EEG, and near-infrared spectroscopy.

The paper introduces the main concepts of sentiment analysis and affective computing and presents an overview of the main methodologies and techniques used to recognize emotions from the analysis of various data sources, such as text, images, voice signals, EEG, and near-infrared spectroscopy. Finally, the paper discusses various applications of these techniques to the neurosciences.

 

Back to Session II

Multiscale simulations of key molecular events in G-protein coupled receptors-based neuronal cascades

 

Paolo Carloni

Forschungszentrum Jülich, Germany

 

G-protein coupled receptors (GPCRs) regulate fundamental brain processes, including neurotransmission. Here I will illustrate recent studies from our lab aimed at predicting the structure and energetics of agonist binding to GPCRs. In particular, we will present hybrid coarse-grain/molecular mechanics applications, which may be particularly useful for resolution models of these proteins. We will conclude with a brief overview of our investigations of GPCR-based neuronal cascades within the Human Brain Project.

 

Back to Session IV

The Future of the Mind: Some (Past and Present) Philosophical Issues

 

Giusy Gallo and Claudia Stancati

Dipartimento di Studi Umanistici, Università della Calabria

 

Before being investigated by scientific research, the issues concerning our knowledge of the world, the use of words and sentences, the acquisition of language and the definition of the self were philosophical problems. Philosophy is thus a sort of historical mirror of the issues mentioned above.

We will discuss the following open questions (see Gary Marcus, “The Computational Brain”, in The Future of the Brain, pp. 212-214), showing for each of them the ideas of two thinkers from past and contemporary philosophy:

1. If the brain is not a von Neumann stored-program machine, what kind of information processor is it? How does the brain manage to be so coordinated in the absence of a central clock? Is there a kind of neuronal algebra, a set of operations that works on arbitrary values stored in synapses? Does anyone who denies that the brain is a computer have a reasonable alternative?

2. If on certain occasions the human brain behaves like a von Neumann computer (conscious and deliberate rule application), what kind of neural systems can support the versatility of our cognition in other domains, where knowledge and instructions are both less explicit? The human brain seems to be a hybrid system, both digital and analog. To understand how the human mind works, we should start thinking about the principle of compositionality.

3. How does the brain implement variable binding?

4. Is there a single canonical form of computation, or is there a wide range of operations?

5. What format(s) does the brain use for encoding information? Something like ASCII, JPEG or GIF? We know something about space and motor space, but very little about words, sentences, images, or melodies.

6. Why does the brain contain so much diversity and detail in order to do things that a wide but simple neural network cannot do?

 

Back to Session VI

The Brainnetome Atlas of Language and its inspiration for Natural Language Processing

 

Tianzi Jiang1, 2, 3, 4, 5, 6

1 Brainnetome Center, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China

2 National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China

3 CAS Center for Excellence in Brain Science and Intelligence Technology, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China

4 Key Laboratory for NeuroInformation of the Ministry of Education, School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu 625014, China

5 The Queensland Brain Institute, University of Queensland, Brisbane, QLD 4072, Australia

6 University of Chinese Academy of Sciences, Beijing 100190, China

 

The human brain atlas plays a central role in neuroscience and clinical practice, and is a prerequisite for studying brain networks and cognitive functions at the macroscale. Using non-invasive multimodal neuroimaging techniques, we have designed a connectivity-based parcellation framework to build the human Brainnetome Atlas, which identifies the subdivisions of the entire human brain and reveals their in vivo connectivity profiles. This new brain atlas has the following four features: (A) it establishes a fine-grained brain parcellation scheme of 210 cortical and 36 subcortical regions with coherent patterns of anatomical connections; (B) it supplies a detailed map of anatomical and functional connections; (C) it decodes brain functions using a meta-analytical approach; and (D) it is an open resource for researchers to use for the analysis of whole-brain parcellations, connections, and functions. The human Brainnetome Atlas could constitute a major breakthrough in the study of human brain atlases and provides the basis for new lines of inquiry about brain organization and function. It can be regarded as a starting point that will enable the generation of future brain atlases that are more finely defined and that will advance from single anatomical descriptions to an integrated atlas that includes structure, function, and connectivity, along with other potential sources of information.

In this lecture, we first give an introduction to the human Brainnetome Atlas. We then demonstrate what new knowledge about the brain regions of language can be obtained with it. First, we defined a convergent posterior anatomical border for Wernicke’s area and showed that the brain’s functional subregions can be identified on the basis of their specific structural and functional connectivity patterns. Second, we revealed a detailed parcellation of Broca’s region on the basis of heterogeneity in intrinsic brain activity, and investigated cross-cultural consistency and diversity in the intrinsic functional organization of Broca’s region. Finally, we give a brief introduction to the potential inspiration for natural language processing.

 

Back to Session I

What computations are ‘brain-inspired’? - A view on neural information processing, functionality and learning

 

Jenia Jitsev

Juelich Research Center, Germany

 

There has been a long tradition of casting models of information processing that arrange elementary generic computing units in multiple stacked layers, performing cascades of transformations of the incoming input, as neurally inspired, or brain-like. However, what justifies calling a particular model neurally inspired is quite arbitrary and inconsistent. Often, a very limited set of properties of biological neural networks, such as their hierarchical processing organization or the spiking of single neurons, is taken to back up the claim of neural plausibility, while a vast range of other presumably relevant properties is completely ignored, e.g. the diversity of single-cell neuronal dynamics, short-term synaptic plasticity, or signal processing in active dendrites, to name only a few.

 

The same concern applies to the architectural features of brain networks, such as the various loops through subcortical structures, e.g. the thalamus or basal ganglia, or fundamental modes of brain operation such as sleep. Moreover, adding clearly biologically implausible features to such network models in order to enforce a desired functionality further obscures the terminology of brain-inspired computation. Furthermore, when arguing for certain brain-like functionality, the inputs and tasks on which the networks demonstrate their capabilities often have a very narrow and artificial character, implausible in the real-world settings in which nervous systems operate.

 

Here, to provide a perspective towards a consistent framework for building neurally inspired information processing models, I put forward the view that, in the face of daunting neural diversity on the one hand, and still quite limited techniques for recording large-scale activity across multiple spatial scales of the brain on the other, it is necessary to establish a solid foundation for the functional and computational essence of brain phenomenology before attempting to construct full-scale neural network models. Working out this functional and computational essence means specifying the generic type of problems a brain has to solve in a natural environment, together with the type of computations and the type of complex real-world input available to the brain. Error-driven learning, defined within a closed sensory-motor loop of forming and correcting predictions about the sensory input and the hidden variables most likely causing it, is one candidate framework for establishing such a generic functional description of brain-like information processing. Only then will it become possible to properly interpret different neurophysiological observations of the biological neural substrate, and to come up with basic canonical models in which both neural and functional properties reflect the principles of brain-like information processing to a satisfactory degree.
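As a toy illustration of the error-driven, prediction-correcting loop invoked above, the following Python sketch estimates a hidden cause from noisy sensory input via a delta-rule update. The constant latent value, noise level, and learning rate are illustrative assumptions, not material from the talk.

```python
import numpy as np

rng = np.random.default_rng(2)

def error_driven(sensor, w=0.0, eta=0.1):
    # Closed loop: predict the next sensory input from the current
    # estimate of its hidden cause, then correct the estimate using
    # the prediction error (delta rule).
    errs = []
    for s in sensor:
        pred = w           # prediction of the next input
        err = s - pred     # prediction error drives all learning
        w += eta * err     # correct the internal estimate
        errs.append(abs(err))
    return w, errs

# Hidden cause: a constant latent value observed through noise.
latent = 0.8
obs = latent + 0.1 * rng.normal(size=500)
w, errs = error_driven(obs)
```

The prediction error shrinks over time as the internal estimate converges on the hidden cause, which is the generic functional description the abstract argues for.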

 

Back to Session III

Extremely scalable simulation code for spiking neuronal networks

 

Susanne Kunkel

KTH Royal Institute of Technology, Stockholm, Sweden

 

Today’s simulation code for spiking neuronal networks scales well from laptops to supercomputers [1, 2], and it supports a wide range of models (e.g. [3, 4]). In my talk, I will discuss the different requirements that such simulation software needs to meet for different regimes of numbers of processes. Using the NEST simulator [5] as an example, I will discuss the limiting factors that prevent the current simulation technology [1] from scaling beyond the petascale. I will introduce recent work on algorithms and data structures that overcome these limitations, and I will also present optimizations that ensure that the new simulation code is still efficient on small compute clusters. Finally, I will show that the novel technology is not only expected to scale well into the post-petascale regime but that it also decreases the memory usage and run time of spiking neuronal network simulations on current supercomputers.

 

[1] Kunkel S, Schmidt M, Eppler JM, Masumoto G, Igarashi J, Ishii S, et al. (2014) Spiking network simulation code for petascale computers. Front. Neuroinform. 8, 78.

[2] Ippen T, Eppler JM, Plesser HE and Diesmann M (2017) Constructing Neuronal Network Models in Massively Parallel Environments. Front. Neuroinform. 11:30.

[3] Hahne J, Helias M, Kunkel S, Igarashi J, Bolten M, Frommer A and Diesmann M (2015) A unified framework for spiking and gap-junction interactions in distributed neuronal network simulations. Front. Neuroinform. 9:22.

[4] Diaz-Pier S, Naveau M, Butz-Ostendorf M and Morrison A (2016) Automatic Generation of Connectivity for Large-Scale Neuronal Network Models through Structural Plasticity. Front. Neuroanat. 10:57.

[5] Gewaltig MO and Diesmann M (2007) NEST (NEural Simulation Tool). Scholarpedia 2, 1430.

 

Back to Session III

Complexity Science for Physiological Data

 

Danilo Mandic

Imperial College London, United Kingdom

 

The complexity loss theory states that the structural complexity of the responses of living organisms decreases with constraints, such as ageing or illness. Yet, methods to quantify the degree of this loss of structural complexity are few and far between. This talk focuses on multivariate multiscale entropy, our new tool for the quantification of the complexity of physiological data. After introducing the method and validating its performance on synthetic data, the utility of the method in general neuroscience is demonstrated through examples of multimodal bodily responses (EEG, heart rate, respiration) in both clinical and out-of-clinic conditions.
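To convey the flavour of the method, the following sketch implements the univariate core of multiscale entropy (Costa-style coarse-graining plus sample entropy); the talk's tool is multivariate, and the parameter choices below are illustrative assumptions only.

```python
import numpy as np

def sample_entropy(x, m=2, r=0.2):
    # SampEn(m, r): negative log of the conditional probability that
    # template vectors matching for m points (within absolute tolerance r)
    # also match for m + 1 points. Lower values indicate more regularity.
    x = np.asarray(x, dtype=float)
    def matches(mm):
        t = np.lib.stride_tricks.sliding_window_view(x, mm)
        c = 0
        for i in range(len(t) - 1):
            c += int((np.abs(t[i + 1:] - t[i]).max(axis=1) <= r).sum())
        return c
    b, a = matches(m), matches(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else np.inf

def multiscale_entropy(x, scales=range(1, 6), m=2, r_factor=0.2):
    # Coarse-grain the signal at each scale (non-overlapping averages),
    # then compute SampEn with a tolerance fixed from the original series.
    x = np.asarray(x, dtype=float)
    r = r_factor * x.std()
    out = []
    for s in scales:
        n = len(x) // s
        cg = x[: n * s].reshape(n, s).mean(axis=1)
        out.append(sample_entropy(cg, m, r))
    return out
```

For white noise the entropy falls off with scale, whereas structurally complex physiological signals stay high across scales; a flattening of this curve is the complexity-loss signature the abstract refers to.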

 

Back to Session VI

Hearables: In-ear EEG and vital signs monitoring of the state of body and mind

 

Danilo Mandic

Imperial College London, United Kingdom

 

This tutorial brings together three main aspects of future wearable health technology: (i) adequate signal processing algorithms, (ii) miniaturised hardware for 24/7 continuous monitoring of the mind and body, and (iii) the development of applications for use in natural environments. Based upon our 10 years of experience in human-computer interfaces, we will bring together the latest advances in multiscale signal processing and complexity science, and their application in real-world scenarios for next-generation personalised healthcare, such as sleep, fatigue and stress monitoring. Our particular emphasis will be on solutions to the challenges posed by imperfect but ultra-wearable, unobtrusive, and discreet sensors. To this end, insights into the biophysics of the generation and acquisition of human physiological responses will be used as a foundation, and indeed the motivation, for the multiscale signal processing algorithms covered. We will also discuss opportunities in multi-person behavioural science, enabled by our own wearable sensing platforms, such as vital sign monitoring from inside the ear canal (ECG, EEG, respiration, etc.) and our miniaturised biosignal acquisition unit.

 

Biography: Danilo P. Mandic is a Professor in signal processing at Imperial College London, UK, and has been working in the areas of adaptive signal processing and bioengineering. He is a Fellow of the IEEE, a member of the Board of Governors of the International Neural Networks Society (INNS), a member of the Big Data Chapter within INNS, and a member of the IEEE SPS Technical Committee on Signal Processing Theory and Methods. He has received five best paper awards in Brain-Computer Interface research, runs the Smart Environments Lab at Imperial, and has more than 300 publications in journals and conferences. Prof. Mandic has received the President's Award for Excellence in Postgraduate Supervision at Imperial.

 

Back to Session VII

Improving long lasting anti-kindling effects via coordinated reset stimulation frequency mild modulation

 

Thanos Manos1, Magteld Zeitler1, Simon Eickhoff1,2, Peter A. Tass3

1Institute of Neuroscience and Medicine (INM-7), Research Center Juelich, Juelich, Germany

2Institute for Systems Neuroscience, Medical Faculty, Heinrich-Heine University Dusseldorf, Germany

3Department of Neurosurgery, Stanford University, Stanford, CA, USA

 

Keywords: coordinated reset, desynchronization, spike time-dependent plasticity, anti-kindling, rapidly varying sequence, slowly varying sequence, stimulation frequency, stimulation intensity

 

Abstract

Several brain diseases are characterized by abnormally strong neuronal synchrony. Coordinated Reset (CR) stimulation [1,2] was computationally designed to specifically counteract abnormal neuronal synchronization processes by desynchronization. In the presence of spike timing-dependent plasticity (STDP) [3] this leads to a decrease of synaptic weights and ultimately to an anti-kindling [4], i.e. unlearning of abnormal synaptic connectivity and abnormal neuronal synchrony. The long-lasting desynchronizing impact of CR stimulation has been verified in pre-clinical and clinical proof of concept studies (e.g. [5]). However, as yet it is unclear how to optimally choose the CR stimulation frequency, i.e. the repetition rate at which the CR stimuli are delivered.

This work presents a first computational study of the dependence of the long-term outcome on the CR stimulation frequency in neuronal networks with STDP, using a conductance-based Hodgkin-Huxley neuron model to describe an ensemble of spiking neurons. From a clinical standpoint, it is desirable to achieve anti-kindling with stimulation durations as short as possible. For this reason, and due to CPU time constraints, we chose a range of stimulation durations for which we were able to achieve a reasonable success rate (i.e. anti-kindling), at least for suitable stimulation frequencies. For a representative stimulation duration of this kind, we thoroughly varied the stimulation frequency; we have preliminary evidence that even for longer stimulation durations the picture does not change much. For this purpose, CR stimulation was applied with Rapidly Varying Sequences (RVS) [4] over a wide range of stimulation frequencies and intensities. A similar analysis was performed with a different type of CR signal, the Slowly Varying Sequences (SVS) CR [6].

We show that, when comparing the two different CR signals, RVS CR turns out to be more robust against variations of the stimulation frequency, whereas SVS CR can obtain stronger anti-kindling effects [7]. In cases where the initial combination of CR intensity and frequency did not perform efficiently for the majority of different network initializations, we implemented three plausible therapy-like stimulation protocols aimed at ameliorating the long-lasting effects. The first prolongs the CR-on period before ceasing stimulation completely; the second consists of repeated CR on and off trial periods with the same fixed CR frequency; and the third incorporates a control mechanism that monitors the degree of synchronization at the end of the CR-off period and adjusts the CR stimulation period for the following trials via a mild modulation. When comparing these three approaches, the last one not only manages to induce global desynchronization (for all networks tested) but also shows pronounced robustness against signal- and network-dependent variations [8]. These findings can be incorporated into stimulation protocols for first-in-man and proof-of-concept studies aiming at further improvement of CR stimulation.

Extending this work towards the integration of MRI-based neuroimaging and the analysis of inter-individual variability of neuronal dynamics in larger populations, we plan, together with Professor Simon Eickhoff, to use large-scale mathematical models for the description of brain-region mean activity. For this purpose, and in order to investigate the dynamics of resting states with appropriate preprocessing, we will use The Virtual Brain platform as well as the Juelich Supercomputing Centre facilities.
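The anti-kindling mechanism described above rests on a simple property of pair-based STDP, which the following Python sketch illustrates; the window parameters are generic textbook values chosen for illustration, not those of the study.

```python
import numpy as np

def stdp_dw(dt, a_plus=0.01, a_minus=0.012, tau=20.0):
    # Pair-based STDP window: dt = t_post - t_pre in ms.
    # Pre-before-post (dt > 0) potentiates, post-before-pre depresses;
    # a slight dominance of depression (a_minus > a_plus) means that
    # uncorrelated (desynchronized) spiking yields a net weight decrease.
    dt = np.asarray(dt, dtype=float)
    return np.where(dt > 0,
                    a_plus * np.exp(-dt / tau),
                    -a_minus * np.exp(dt / tau))

rng = np.random.default_rng(0)

# Synchronized firing: spike-time differences cluster at small positive lags.
sync_dt = rng.uniform(0.0, 5.0, 10000)
# Desynchronized firing: spike-time differences spread widely around zero.
desync_dt = rng.uniform(-50.0, 50.0, 10000)

mean_sync = stdp_dw(sync_dt).mean()      # net potentiation: synchrony is reinforced
mean_desync = stdp_dw(desync_dt).mean()  # net depression: desynchrony unlearns weights
```

Under this rule, synchronized activity reinforces the synapses that sustain it, while a desynchronizing stimulus such as CR pushes the average weight change negative, which is the unlearning of abnormal connectivity the abstract calls anti-kindling.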

 

References

[1] Tass, P.A. (2003a). A model of desynchronizing deep brain stimulation with a demand-controlled coordinated reset of neural subpopulations. Biol. Cybern. 89: 81-88.

[2] Tass, P.A. (2003b). Desynchronization by means of a Coordinated Reset of neural sub-populations. Prog. Theor.Phys. Suppl. 150: 281-296.

[3] Gerstner, W., Kempter, R., Van Hemmen, J.L, and Wagner, H. (1996). A neuronal learning rule for sub-millisecond temporal coding. Nature 383: 76-78.

[4] Tass, P.A., and Majtanik, M. (2006). Long-term anti-kindling effects of desynchronization brain stimulation: a theoretical study. Biol. Cybern. 94: 58-66.

[5] Adamchic, I., Hauptmann, C., Barnikol, U.B., Pawelcyk, N., Popovych, O.V., Barnikol, T., Silchenko, A., Volkmann, J., Deuschl, G., Meissner, W., Maarouf, M., Sturm, V., Freund, H.-J., Tass, P.A. (2014). Coordinated Reset has lasting aftereffects in patients with Parkinson’s Disease. Mov. Disord. 29: 1679-1684.

[6] Zeitler, M. and Tass, P.A. (2015) Augmented brain function by coordinated reset stimulation with slowly varying sequences. Front. Syst. Neurosci. 9: 49.

[7] Manos, T., Zeitler, M., and Tass, P.A. (2017). Effect of stimulation frequency and intensity on long-lasting anti-kindling. To be submitted.

[8] Manos, T., Zeitler, M., and Tass, P.A. (2017). Improving long lasting anti-kindling effects via coordinated reset stimulation frequency mild modulation. To be submitted.

 

Back to Session II

The implementation of brain-inspired cognitive architectures using large-scale realistic computational models

 

Michele Migliore

Institute of Biophysics, National Research Council, Palermo, Italy

Department of Neuroscience, Yale University School of Medicine, New Haven, CT, USA

 

Understanding the neural basis of brain functions and dysfunctions has a huge impact on a number of scientific, technical, and social fields. Experimental findings have given, and continue to give, important clues at different levels, from subcellular biochemical pathways to behaviors involving many brain regions. However, most of the multi-level mechanisms underlying the cognitive architecture of the involved brain regions are still largely unknown or poorly understood. This mainly depends on the practical impossibility of obtaining detailed simultaneous in vivo recordings from an appropriate set of cells, making it nearly impossible to decipher and understand the emergent properties and behavior of large neuronal networks. We are addressing this problem using large-scale computational models of biologically inspired cognitive architectures, which require substantial resources for storage, computing, and scientific visualization that are available only through large international research infrastructures. In this talk, I will present and discuss the main results and techniques, used in my lab and within the Human Brain Project, to design and use realistic models of neurons and networks implemented following their natural 3D structure. To illustrate our approach and its relevance for understanding computational and functional processes, I will show the results obtained for the hippocampus and the olfactory bulb. The main goal is to uncover the mechanisms underlying higher brain functions, helping the development of innovative therapies to treat brain diseases. Through movies and interactive simulations, I will show how and why the dynamical interaction among neurons can predict new results and account for a variety of puzzling experimental findings.

 

Selected References

Migliore R, De Simone G, Leinekugel X, Migliore M (2016) The possible consequences for cognitive functions of external electric fields at power line frequency on hippocampal CA1 pyramidal neurons, Eur. J. Neurosci., doi: 10.1111/ejn.13325.

Migliore M, Cavarretta F, Marasco A, Tulumello E, Hines ML, Shepherd GM. (2015) Synaptic clusters function as odor operators in the olfactory bulb, Proc Natl Acad Sci U S A. 112(27):8499-504.

Bianchi D, De Michele PD, Marchetti C, Tirozzi B, Cuomo S, Marie H, Migliore M (2014), Effects of increasing CREB-dependent transcription on the storage and recall processes in a hippocampal CA1 microcircuit, Hippocampus. 24:165-77.

 

Back to Session III

A multivariate analysis of a brain disease

 

Francesco Pavone

University of Florence, Physics Department, LENS, Italy

 

Neuro-rehabilitative therapy is the most effective treatment for recovering motor deficits in stroke patients. Nevertheless, the neural bases of the recovery associated with rehabilitative intervention are debated. Here, we demonstrate how multivariate analysis of brain parameters, from both a functional and a morphological point of view, can depict the damage and the rehabilitation process from different perspectives.

In particular, we show how the synergistic action of robotic rehabilitation and transient inhibition of the contralesional motor cortex molded cortical plasticity at multiple scales. By longitudinal imaging of cortical activity during training on a robotic platform for mouse rehabilitation, we demonstrated progressive recovery of motor map dedifferentiation and the rise of a stronger and faster calcium response in the peri-infarct area. The coupling of the spared cortex to the injured hemisphere was reinforced, as demonstrated by our all-optical approach. In parallel, a profound angiogenic response accompanied the stabilization of peri-infarct micro-circuitry at the synaptic level. The present work, by combining optical tools for the visualization and manipulation of neuronal activity, provides the first in vivo evidence of the deep impact of rehabilitation on cortical plasticity.

Finally, the importance of deep learning, and of machine learning more generally, is demonstrated in the analysis and processing of the information obtained.

 

Back to Session III

Representation learning with trainable COSFIRE filters

 

Nicolai Petkov

University of Groningen, Netherlands

 

In order to be effective, traditional pattern recognition methods typically require a careful manual design of features, involving considerable domain knowledge and effort by experts. The recent popularity of deep learning is largely due to the automatic configuration of effective early and intermediate representations of the data presented. The downside of this approach is that it requires a huge number of training examples and a major computational effort.

 

Trainable COSFIRE filters are an alternative to deep networks for the extraction of effective representations of data. Such a filter is configured by the automatic analysis of a single pattern. The highly non-linear filter response is computed as a combination of the responses of simpler filters, such as Difference of (color) Gaussians or Gabor filters, taken at different positions in the pattern concerned. The identification of the parameters of the simpler filters that are needed, and of the positions at which their responses are taken, is done automatically. We call this method Combination of Shifted Filter Responses (COSFIRE). An advantage of this approach is its ease of use, as it requires no programming effort and little computation: the parameters of a filter are derived automatically from a single training pattern. Hence, a large number of such filters can be configured effortlessly, and selected responses can be arranged in feature vectors that are fed into a traditional classifier.
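The combination step can be sketched in a few lines of NumPy. This is an illustrative sketch of the general idea only, not the authors' implementation: the function name, the tuple encoding (index of the contributing filter plus its learned offset from the filter centre), and the use of an unweighted geometric mean as the combination function are assumptions made for this example.

```python
import numpy as np

def cosfire_response(feature_maps, tuples):
    """Sketch of a COSFIRE-style response: shift each contributing filter
    response map so that the learned position of its sub-pattern aligns
    with the filter centre, then combine the shifted maps multiplicatively
    (geometric mean), so the output is high only where all parts of the
    learned pattern occur in the right spatial arrangement."""
    shifted = []
    for map_idx, dx, dy in tuples:  # each tuple: (which filter, offset x, offset y)
        # move the response expected at offset (dx, dy) onto the centre
        shifted.append(np.roll(feature_maps[map_idx], (-dy, -dx), axis=(0, 1)))
    stacked = np.maximum(np.stack(shifted), 1e-12)  # floor to avoid log(0)
    return np.exp(np.log(stacked).mean(axis=0))     # geometric mean across parts
```

For example, with two response maps whose peaks sit at the learned offsets (1, 0) and (0, 1) from a common point, the combined response peaks at that point, while positions where only one part responds are strongly suppressed by the multiplicative combination.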

 

This approach is illustrated by the automatic configuration of COSFIRE filters that respond to randomly selected parts of many handwritten digits. We configure automatically up to 5000 such filters and use their maximum responses to a given image of a handwritten digit to form a feature vector that is fed to a classifier. The COSFIRE approach is further illustrated by the detection and identification of traffic signs and of sounds of interest in audio signals.

 

The COSFIRE approach to representation learning and classification yields performance results that are comparable to the best results obtained with deep networks but at a much smaller computational effort. Notably, COSFIRE representations can be obtained using numbers of training examples that are many orders of magnitude smaller than those used by deep networks.

 

About the speaker:

 

Nicolai Petkov has been professor of computer science, holding the chair in intelligent systems at the University of Groningen, since 1991. From 1998 to 2009 he was scientific director of the Institute for Mathematics and Computer Science. He applies machine learning and pattern recognition to various problems.  www.cs.rug.nl/is

 

Back to Session IV

2D Gabor functions for modeling simple and complex cells in visual cortex. Use in image processing and computer vision

 

Nicolai Petkov

University of Groningen, Netherlands

 

2D Gabor functions are introduced and their relation to the properties of simple cells in the primary visual cortex is explained. Their behavior in the space and frequency domains and the role of the different parameters are discussed. Typical uses of Gabor functions in image processing and computer vision, such as edge detection and texture characterization, are considered.
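As a concrete illustration, a 2D Gabor function in the form commonly used to model simple-cell receptive fields (a cosine carrier under an elongated Gaussian envelope) can be sampled with a few lines of NumPy. The parameter names and default values below are illustrative conventions, not taken from the talk:

```python
import numpy as np

def gabor_kernel(sigma=4.0, theta=0.0, lam=10.0, gamma=0.5, phi=0.0, size=21):
    """Sample a 2D Gabor function on a size x size grid.
    sigma: width of the Gaussian envelope; theta: orientation;
    lam: wavelength of the cosine carrier; gamma: spatial aspect ratio;
    phi: phase offset of the carrier."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xp = x * np.cos(theta) + y * np.sin(theta)    # rotate coordinates by theta
    yp = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xp**2 + gamma**2 * yp**2) / (2 * sigma**2))
    carrier = np.cos(2 * np.pi * xp / lam + phi)
    return envelope * carrier

# Convolving an image with kernels at several orientations theta yields
# orientation-selective responses analogous to those of simple cells.
```

Varying theta rotates the preferred orientation, lam sets the preferred spatial frequency, and phi distinguishes even-symmetric (phi = 0) from odd-symmetric (phi = pi/2) receptive fields.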

 

About the speaker:

 

Nicolai Petkov has been professor of computer science, holding the chair in intelligent systems at the University of Groningen, since 1991. From 1998 to 2009 he was scientific director of the Institute for Mathematics and Computer Science. He applies machine learning and pattern recognition to various problems.  www.cs.rug.nl/is

 

Back to Session IX

New HPC Architectures and Technologies for Brain Research

 

Dirk Pleiter

Forschungszentrum Juelich, Germany

 

During the early phase of the Human Brain Project, a pre-commercial procurement (PCP) was launched to procure research and development services. The goal was to have commercial operators create solutions that would augment their HPC product roadmaps and make them more suitable for computational neuroscience applications. The project focused on the integration of dense memory, scalable visualisation, and dynamic resource management. In this talk we will present and discuss the outcomes. Furthermore, we will introduce the pilot systems delivered by the PCP contractors to enable testing of their solutions.

 

Back to Session I

A Non von Neumann Architecture for General Neuromorphic Computing

 

Thomas Sterling

Indiana University, USA

 

Brain-inspired computing refers both to a possible means of achieving advanced computing through methods and structures analogous to those of the human brain, and to computations intended to emulate or simulate operational properties observed of (and, in a sense, by) the human brain. There is the potential for significant overlap between the two, with computers made up of brain-like components employed to model the human brain itself. The technical approach presented here reflects an advanced cellular automata approach to neuromorphic computing, both as a means to achieve computational techniques like machine learning and, possibly, to use the same class of platforms for brain simulation. As previously reported, the ParalleX execution model is a class of Asynchronous Multi-Tasking abstract architectures that improve efficiency and scalability through dynamic adaptive computation. ParalleX has been embodied in the family of HPX runtime systems (including work at LSU and IU) for proof of concept, first reduction to practice, and prototypes. At the low-level hardware structure, cellular-like organizations named Continuum Computer Architecture (CCA) can be efficiently employed for time-varying irregular graph-related computations. Unlike classical cellular automata, CCA incorporates mechanisms that efficiently support the ParalleX parallel computational model and key functions in support of dynamic graph operations. A key property of this approach to neuromorphic computing is that the communications among cells are packet-switched through worm-hole routing, rather than line-switched as in other methods. This non von Neumann approach should provide the properties of general-purpose software control while delivering hardware-enabled performance. It is anticipated that a single semiconductor die can incorporate on the order of 2^14 primitive elements (i.e., “fontons”), with a peak operational performance of 1 exaops within 1 cubic meter.
Throughout the presentation, questions and comments from the audience will be welcomed.

 

Back to Session V

Bio-inspired representation learning in pattern recognition

 

Nicola Strisciuglio

University of Groningen, The Netherlands

 

From a very young age, we can quickly learn new concepts and distinguish between different kinds of objects or sounds. If we see a single object or hear a particular sound, we are then able to recognize that sample, and even different versions of it, in other scenarios. We learn and store representations of the world and use them to detect and understand it.

 

Representation learning is an important aspect of pattern recognition. In recent years, with the development of deep learning, it has attracted large research interest. The aim of representation learning techniques is to construct effective and reliable features directly from training samples, instead of engineering hand-crafted representations, which usually require extensive domain knowledge. Some approaches to representation learning are based on machine learning techniques, while others exploit knowledge about biological and natural systems.

 

In this presentation, I will discuss the concepts of and techniques for representation learning in pattern recognition, and present two approaches, COSFIRE and COPE, which take inspiration from functions of the human visual and auditory systems. I will present the basic idea of COSFIRE and COPE features, how they are configured from training samples, and the results achieved by their use in several image and sound analysis applications.

 

Back to Session IV

Modelling spiking multi-compartment neural networks at exascale

 

Sam Yates

CSCS, ETH Zurich, Switzerland

 

The simulation of increasingly large networks of highly detailed neuron models in turn demands the use of large-scale compute systems.

The ambition to exploit these systems — petascale today and exascale when available — has implications for the simulation software: implementations must make efficient use of diverse hardware platforms; network construction costs and communication overheads must remain constrained as models grow in scope and complexity.

The HPC landscape is changing rapidly, with the adoption of GPU accelerators and “many core” processors such as Intel's Xeon Phi line. Achieving good utilization of these diverse architectures is becoming increasingly difficult for the developers and maintainers of simulator software.

We present our work on xmc, an HPC library for neural network simulations that addresses these challenges. We describe the scalable architecture of the library and show some initial benchmarks of simulations based on this platform.

 

Back to Session V