Meetings & Conferences

Special Session 4: Abigail Morrison on Neuroscience (ISMB 2009)

Abigail Morrison: Communicating Ideas in Computational Neuroscience
Part of the Advances and Challenges in Computational Biology, hosted by PLoS Computational Biology

In computational neuroscience, the key ideas to be communicated are mathematical and computational models, as well as data analysis methods. She will mainly focus on computational models in this talk, though most of what she says holds for data analysis methods as well. The type of modelling being done is getting more complicated all the time, and yet there is no standardization of notation, simulation software, or best practices for describing models. As a result, we cannot reproduce the work of others, or critically evaluate or compare models.

A researcher comes up with an interesting model and simulation, and then they want to publish it. So they try to write down what they did in the model. Then another researcher in a similar area reads it and wants to reproduce or build on it. Then they run into problems: how do they figure out which parameters to use, and what dynamics are present? Ultimately, the system they’re running their simulations on will probably be different, and their version of the model won’t behave correctly. So she’s working towards a more standardized approach. In all her time working in the field, Abigail Morrison can think of only one model that they’ve been able to reproduce without going back to the authors.

Is it science, or is it travel reporting?

Approaches to solve this problem have to be both sociological (large collaborations with defined software and protocols) and technological (version control, high-level APIs, testing and unit testing), or even socio-technological (work together to create tools to facilitate reproducibility).
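As a toy illustration of the "testing and unit testing" practice mentioned above, here is a minimal sketch of a unit test for a model component. The neuron update rule and all parameter values are invented for illustration; they are not from the talk.

```python
# A minimal sketch of unit-testing a model component, one of the
# technological practices mentioned. The toy model here is invented:
# one Euler step of a leaky integrate-and-fire membrane equation.

def lif_update(v, input_current, dt=0.1, tau=10.0, v_rest=-65.0):
    """Advance the membrane potential v by one Euler step of dt ms."""
    dv = (-(v - v_rest) + input_current) * dt / tau
    return v + dv

def test_decays_to_rest():
    # With no input, the membrane potential should relax towards v_rest.
    v = -55.0
    for _ in range(10000):
        v = lif_update(v, input_current=0.0)
    assert abs(v - (-65.0)) < 1e-3

test_decays_to_rest()
```

Even a test this simple catches the kind of silent parameter or sign error that makes a re-implemented model "not work right" on someone else's system.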

A lot of interesting work is happening with the INCF and NeuralEnsemble/PyNN. The INCF has been running since 2005 and tries to coordinate neuroinformatics internationally (databases and data sharing, tool development and analysis, computational models); it is also involved with portals and standards, including ontologies. The Japanese node focuses on the visual side of things, and has produced Visiome, which attempts to collect papers and figures (separately), as well as model parameters, simulation scripts and figure-generation scripts. All of this can be downloaded and then, hopefully, run on your own system. Another project there is the Simulation Server Platform, intended to provide online test trials of simulation scripts on a virtual machine. All elements are reproduced in the VM, such as the OS (with hardware emulation), compilers, simulation software and viewers. In this way it supports reproducibility of results by other researchers and testing by journal reviewers.

At the German node, the main focus is to support interactions between experimental and computational neuroscientists, collaboratively developing open-source tools for data access and analysis. The problem is that there are many different recording devices and analysis tools, and no standardization. So they want to define a unified data format, implement open-source import and export functions for common data formats, and develop and provide a repository for these tools. They also want to design and implement a machine-readable declarative language to describe neural network models (like SBML); the first meeting was in March 2009, so this effort is still new.
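To make the idea of a declarative, machine-readable model description concrete, here is a purely hypothetical sketch. The INCF language was still being designed at the time, so every field name and value below is invented; the point is only that the model becomes data that any tool can parse and validate, rather than prose plus code.

```python
import json

# A hypothetical, SBML-inspired declarative description of a tiny network.
# All names and fields are invented for illustration; they are not the
# actual INCF language, which was still being designed at the time.
model_description = """
{
  "populations": [
    {"name": "excitatory", "size": 80, "neuron_model": "integrate_and_fire"},
    {"name": "inhibitory", "size": 20, "neuron_model": "integrate_and_fire"}
  ],
  "projections": [
    {"source": "excitatory", "target": "inhibitory", "weight": 0.5}
  ]
}
"""

def load_model(text):
    """Parse the declarative description and check internal consistency."""
    spec = json.loads(text)
    populations = {p["name"]: p for p in spec["populations"]}
    # Every projection must reference declared populations -- the kind of
    # check a shared format makes possible before any simulation runs.
    for proj in spec["projections"]:
        assert proj["source"] in populations and proj["target"] in populations
    return spec

spec = load_model(model_description)
```

Because the description is plain data, the same file could in principle drive different simulators, be diffed under version control, or be attached to a paper as a complete record of the model.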

NeuralEnsemble provides hosting for open-source, Python-based software projects in neuroscience, and a key project is PyNN, a common scripting interface that works across simulators. This facilitates cross-checking of results between simulators, and incremental porting of a model from one simulator to another.
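The pattern behind this can be sketched in a few lines. To be clear, this is not PyNN's real API (the mock backends and class names below are invented); it only illustrates how one simulator-independent model script can target interchangeable backends.

```python
# Illustration of the "common scripting interface over multiple backends"
# idea behind PyNN. This is NOT PyNN's actual API -- the backends and
# class names here are invented mocks, just to show the pattern.

class MockNEST:
    name = "NEST"
    def create_cells(self, size):
        return list(range(size))

class MockNEURON:
    name = "NEURON"
    def create_cells(self, size):
        return list(range(size))

class Population:
    """Simulator-independent population: delegates to the chosen backend."""
    def __init__(self, backend, size):
        self.backend = backend
        self.cells = backend.create_cells(size)

def build_network(backend):
    # The same model script runs unchanged on either backend, which is
    # what makes cross-checking of results between simulators feasible.
    return Population(backend, 100)

nest_net = build_network(MockNEST())
neuron_net = build_network(MockNEURON())
```

Running the identical `build_network` function against two backends and comparing the outputs is exactly the cross-checking workflow the talk describes.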

There’s a paper coming in PLoS later this year making a checklist of common suggestions for how network models could be described in words.

Allyson’s thoughts: what about standardization efforts for format/syntax/scope? CARMEN? MIBBI-like efforts: is the checklist effort part of MIBBI? Also, is it really “reproducibility” of results if you have to go to a VM somewhere to get it to work? Probably not, but at least it’s a first step on the road to better types of (more complete/generic) reproducibility.


Please note that this post is merely my notes on the presentation. They are not guaranteed to be correct, and unless explicitly stated are not my opinions. They do not reflect the opinions of my employers. Any errors you can happily assume to be mine and no-one else’s. I’m happy to correct any errors you may spot – just let me know!


Keynote Presentation: Computational Neuroscience: Models of the Visual System (ISMB 2009)

Tomaso A. Poggio, Massachusetts Institute of Technology

Present learning algorithms have high sample complexity and shallow architectures. One of the most obvious differences from biology is the apparent ability of people and animals to learn from very few examples (the “poverty of the stimulus” problem). Are hierarchical architectures the answer to this? The visual cortex provides a hierarchical architecture, taking us from neuroscience to a class of models. In this area, the dorsal stream is for “where” and the ventral stream is for “what”. The ventral stream in the human has at least an order of magnitude more neurons than in our close taxonomic relatives.

As you go from V1 to higher areas in the ventral stream, the optimal stimulus increases in visual complexity; by the time you get to the IT area, it is really only being stimulated by images at the complexity level of faces. The ventral stream has both feedforward and backprojection connections. How far can we push the simplest type of feedforward hierarchical model? It’s a good place to start: it takes 30–50 ms for the image to go from the retina to this area. The model of visual recognition (millions of units) is based on the neuroscience of the cortex, and the software is available online. An overcomplete dictionary of “templates” or image “patches” is learned during an unsupervised learning stage from ~10,000 natural images by tuning the S units. The preprocessing stages lead to a representation with lower sample complexity than the image itself, where sample complexity is measured as the number of labeled examples required by the classifier at the top.
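The alternation of template-matching "S" units and pooling "C" units in this family of models can be sketched very simply. The toy patches, template and numbers below are invented; S units respond selectively to a stored template, and C units take a max over positions, trading selectivity against invariance.

```python
# A toy sketch of the S/C unit alternation in hierarchical feedforward
# models of this kind: S units compute template matches, C units max-pool
# over positions. All patches, templates and values here are invented.

def s_unit_response(patch, template):
    """Simple-unit tuning: similarity of an image patch to a stored template."""
    return sum(p * t for p, t in zip(patch, template))

def c_unit_response(s_responses):
    """Complex-unit pooling: a max over a neighbourhood gives position invariance."""
    return max(s_responses)

image_patches = [[0.1, 0.9], [0.8, 0.2], [0.9, 0.9]]  # toy 2-pixel "patches"
template = [1.0, 1.0]  # in the real model, learned in the unsupervised stage

s_layer = [s_unit_response(p, template) for p in image_patches]
c_output = c_unit_response(s_layer)  # invariant to where the best match occurred
```

Stacking such S/C pairs is what makes the preferred stimulus grow in complexity from layer to layer while the response stays tolerant to position and scale.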

What can we say about how the model works? A long series of comparisons was made, based on the literature and on collaborations. It is a hierarchical feedforward model of the ventral stream, with data resulting from the model in IT, V4, V1, etc., and from psychophysics. The latter involves rapid categorization, which is a good test case because the presentation is too brief for backprojection to play a role. In terms of accuracy, the model and human observers perform similarly, with a high correlation of correct responses (images that are difficult for one are difficult for the other, and so on). It was surprising to find this kind of agreement. Compared with the computer vision systems of the time (a couple of years ago), the model based on the neuroscience of the visual cortex did a better job at labelling things correctly. Hierarchical feedforward models of the visual cortex may be wrong, but they present a challenge for “classical” learning theory.

They have started to develop a theory, which he called HKM. There are a number of fashionable models going under the name of deep learning networks. You can consider images as functions: in a vision problem, for example, a function over the image plane can be interpreted as a greyscale image. What followed was a series of technical slides on the algorithm that I didn’t quite get.

The model has been extended to videos and sequences of images. The specific system discussed looks at mice in cages, aiming to classify simple behaviours over a couple of seconds (grooming, walking, etc.). They collected ~100 hours of video and then performed an automated analysis. The system is almost as good as humans: labellers agree with each other about 70% of the time, which is about the same as the agreement between labellers and the system. They’re doing 24-hour monitoring of 4 different strains to test the system. You can infer the mouse strain from the behaviour with about 50% accuracy from 10 minutes of video.

Limits of present feedforward models: vision is more than categorization or identification; it is image understanding, inference and parsing. Our visual system can “answer” almost any kind of question about an image or video (a Turing test for vision). He doesn’t think the types of models he’s describing could handle this kind of Turing-style test.




Thomas Nowotny and Pheromones in Moths, BBSRC Systems Biology Workshop

BBSRC Systems Biology Grantholder Workshop, University of Nottingham, 16 December 2008.

Sensitivity, specificity and ratio coding: riddles of the pheromone system in moths. In the PheroSys project, the neurosciences face the same problems as other areas of biology, which could be solved by systems biology. Their model of the pheromone processing pathway goes from the antenna -> antennal lobe -> mushroom body (involved in recognition and learning) -> pre-motor areas. Can we find the optimal coding strategies? Moths have extreme specificity and sensitivity to these pheromones.

There are three work packages.

WP1 (antennal inputs into the AL):
- Single olfactory receptor neuron (ORN): model ORN responses based on cellular processes.
- Population of pheromone-responsive ORNs: describe the response patterns of ORNs and correlate them with the projection neurons (PNs), which have access to many, if not all, of the receptor neurons.
- Projection of ORNs and macro-glomerular complex (MGC) organization: describe the structure of the MGC.

WP2 (organization and function of one glomerulus): describe the neuron types and their structure-function relationships, characterising the electrical properties of projection neurons and local interneurons.

WP3 (the MGC network): investigate the role of oscillations, using extracellular recordings in Agrotis ipsilon (multi-neuron correlations).

These notes were transcribed from hand-written ones as my battery had died, therefore they aren't as complete as they would otherwise be.

These are just my notes and are not guaranteed to be correct. Please feel free to let me know about any errors, which are all my fault and not the fault of the speaker. 🙂
