Categories
Meetings & Conferences, Standards

Standards for Synthetic Biology (BioSysBio 2009)

Existing Standards for DNA Description

Guy Cochrane
EBI

For the EMBL database, they need to provide the capability for submission and for collaborator data exchange. They use SRS for text search and retrieval, dbfetch for simple sequence retrieval, and they also dump out the whole set of files. There's been a large amount of growth over the past year or so, as the new technologies allow much faster sequencing.

Personal Comment: I took fewer notes for this section as I used to work on TrEMBL (UniProt as it's called now) and am quite familiar with EMBL, so I didn't feel the need to take as many notes…!

Previous Standards Effort: SBML

Herbert Sauro
University of Washington, Seattle

In 1999 there were 5-6 different simulators, and people wanted to be able to move models from one tool to the next. SBML was originally created to represent homogeneous multi-compartment biochemical systems, and they estimate that this format can cover about 80% of the models out there. The initial version was funded by JST. Over 120 software packages now support SBML, including MATLAB and Mathematica. SBML is also accepted by many journals, including Nature, Science, and PLoS. It has since spawned many other initiatives.

Key contributing factors to its uptake: a need from the community; availability of detailed documentation; annual/biannual two-day meetings; portable software libraries enabling developers to incorporate standard capabilities into their software; and the fact that they deliberately didn't try to do everything, since covering about 80% of the community's needs at the time was enough. Because the libraries were maintained centrally, the standard didn't diverge, and extensions/modifications agreed by the community could then be easily incorporated by developers.
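As a small illustration of what those portable libraries make possible (a minimal sketch of my own using the libSBML Python bindings; "model.xml" is just a placeholder path):

import libsbml  # python-libsbml bindings

# Read a (placeholder) SBML file and report any parse/validation errors.
doc = libsbml.readSBML("model.xml")
if doc.getNumErrors() > 0:
    doc.printErrors()

# List the species and reactions the model defines.
model = doc.getModel()
print("species:", [model.getSpecies(i).getId() for i in range(model.getNumSpecies())])
print("reactions:", [model.getReaction(i).getId() for i in range(model.getNumReactions())])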

SBML has been going for 8 years, and significant changes are planned. But the exciting things are the peripheral results: BioModels (repository), KiSAO (ontology/CV), SBO (ontology/CV), TEDDY (ontology/CV), MIASE (presumptive standard for storage of simulation results), SBRML (presumptive standard), Antimony (human-readable version of SBML).

With a standard format, you can suddenly do compliance testing – do all applications produce the same results, or even succeed, when simulating all the models in BioModels? roadRunner, COPASI, BioUML, and the SBML ODE Solver perform the best.
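To give a flavour of that kind of check (a sketch only, not the actual SBML test suite, using the current libRoadRunner Python bindings with a placeholder file name):

import roadrunner  # libRoadRunner Python bindings

# Load the same (placeholder) SBML model into one simulator and run it;
# a compliance test would repeat this across tools and compare the outputs.
rr = roadrunner.RoadRunner("model.xml")
result = rr.simulate(0, 10, 101)  # start time, end time, number of points
print(result[:5])  # first few rows of the simulated time course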

Physical Standards and the BioBrick Registry

Randy Rettberg

The idea of the registry came from the TTL Data Book for design engineers. The current registry contains a wiki and more – it looks like a website, not a data book. Each BioBrick part is listed and has its own page. The number of teams in 2003 was fewer than 10; in 2008 it was 84, with 1180 people.

The quality of the parts is really important. Starting last year, they ran a specific set of quality control tests, making sure that the top 800 bricks grew, had good sequence, that users said they worked, etc.

They also worked on the overall structure of the registry. He'd like to go in the direction of a more distributed system. Future work includes: extension to a DAS interface; uploading parts; and external tool hooks for sequence analysis and for sequence and feature editors.

This session is a preface session for tomorrow's end-of-meeting standards workshop. Beer and pizza!

Tuesday Standards Session
http://friendfeed.com/rooms/biosysbio
http://conferences.theiet.org/biosysbio

Please note that this post is merely my notes on the presentation. They are not guaranteed to be correct, and unless explicitly stated are not my opinions. They do not reflect the opinions of my employers. Any errors you can happily assume to be mine and no-one else's. I'm happy to correct any errors you may spot – just let me know!


Categories
Meetings & Conferences

Poacher Turned Gamekeeper: A View of Working in Science from the Publishing Side of the Fence, BSB09

There were a number of workshops running in parallel – I decided to visit "Poacher Turned Gamekeeper: A View of Working in Science from the Publishing Side of the Fence", run by Chris Surridge, Nature, Scientific Editor. These are my notes of that session.

He wants to convince us that journal editors are human beings, too. Scientific publishing is a product we all use: we produce papers, and these are generally considered the final product. Therefore it is important to understand the process involved in scientific publishing.

About Chris Surridge: PhD in biophysics – specifically x-ray crystallography and mass spectrometry – with his PhD on microtubule assembly. He's worked at Nature for 14-15 years. How did he get there? After some postdoc work, he had a decision to make; he didn't want to be an eternal postdoc, for example. He saw an advert for a job as an editor at Nature and was offered a job with Nature Structural Biology. He's also worked on PLoS and PLoS ONE.

Why publish in a particular journal? Impact factor? Good fit? Right audience? Resulting status? Supervisor says so? But in general, it just comes down to the journal's limited resources, and not everyone who tries to get published will get published in their journal of choice. Also, many journals don't want to publish too many papers and have their impact factor suffer as a result. Is it an artificial scarcity? Yes, he says, there is a degree of it in the publishing world (though it is his opinion that Nature and Science don't do this much).

So, for whatever reason, journals are limited in the number of papers they can publish each week. Nature's resources mean that they can only really publish 10-11 biological science papers per week, and they get about 150 submissions per week. So, there is quite a lot of attrition. The job of the manuscript editor is to use the peer-review system and their understanding to sift through to get the most appropriate papers for their journals.

Papers come in and are assigned to a subject area, and then to an editor (either full-time editors or academic editors who do it part time). From reading the paper, you try to gauge its relevance, how many questions it answers, etc. He's not really worried about how it will be picked up in the press, and he doesn't look too closely at the names on the paper: it really *isn't* easier for big shots to get papers published. The simple answer is that there is a reason big shots became big shots (e.g. the quality of their work).

This is exactly what you do when you are given a paper to present at a journal club. Then it moves into the more formal area of peer review. Not everything gets sent out for peer review: there is an empirical rule that, in general, a journal publishes about half the papers it sends out for review (this holds mainly, but not completely, true). The capacity of a Nature editor is about 10-15 papers of his own, plus about 10 of his colleagues' papers to read; at PLoS it's very similar. If a paper gets rejected without review, it's generally because the editor feels that, even if true, it's not appropriate for the journal. In general, editors don't make very technical decisions – that's left up to the peer reviewers. Hence, rejection letters sent before review tend to be bland.

Editors tend to be harder on papers in subject areas they are very familiar with. Chris started working in the area he knew, but quickly branched out – he says there's nothing like 10-20 papers a week in a subject area to get you up to speed quickly in whole other fields.

So, back to the next step in the process: peer review. Referee comments tell you about technical quality. The referees should tell the editor whether or not the paper is *true* and accurate, and whether or not it is as surprising as the editor thinks it is. The technical accuracy is what you really need the referees for. Of course, people who have worked in a subject area, as much as they wish or try to be, are not completely unbiased. Therefore, it's a good idea not to rely on one referee. If you choose two, chances are they'll disagree with each other, so the ideal minimum number is three. The more referees you add, the more conflicting opinions you'll have, so you don't want too many, because it becomes harder to make a decision.

Some referees are better at judging the technical aspects, and others are stronger on knowledge of the system in question – so it's a balancing act to get the right mix of referees. Once a paper is accepted, there is a bit of a bargaining session between the authors and the referees over the requested changes: here the editor acts as mediator. Finally, the editors have to ensure that the finished version fits within the constraints of the journal.

In summary: filter (editor) -> peer review (referees) -> filter (editor) -> tweak (authors, editors, subeditors) -> publish.

Q: What does Open Access (OA) mean in publishing? OA papers are freely available, copyright is retained by the author, and they are published under a licence that allows reuse with attribution. What OA isn't is a publishing model, and it doesn't have anything to do with editorial standards (i.e. OA doesn't say anything about editorial policy).

These days, scientific publishing is virtually all on the internet. He did a quick straw poll: how many of us had read a real paper version of a journal among the last five papers we read? Two people – one Nature and one Science; other than that, no-one. Online publishing is especially useful for methods, where you don't get the full methods in the paper version because there is no room.

2-3 years ago, Nature tried out open peer review (refereeing online). It had been pioneered by Atmospheric Chemistry and Physics (or some similar name). Anyone can write a report, and after a certain amount of time the editors decide whether or not to publish. What Nature found was that no-one came and commented.

There is a different version of open peer review where the refereeing process is normal, but the referees give up their anonymity and allow their comments to be published. Some journals do this successfully.

Q: What qualifications do you need? You need to be a scientist 🙂 But there are no exact qualifications – just reply to a job advertisement. Most journal editors now have to have a PhD (it didn't use to be like that). Research experience is taken into consideration, but the amount is variable. However, there's no way to have editorial skills without doing the job, so they look at people's potential. If you get an interview, you're sent texts to assess prior to the interview.

He also said that LaTeX submissions are hard for many journals to handle. Journals that are completely LaTeX-based are fine, but supporting multiple types of submission is hard. Also, the conversion from LaTeX to the software actually used to create the print version of the journal is not easy.

Tips: give your paper some context; write for the journal's audience (specific or broad); don't overreach in your broad statements of applicability; cover letters are incredibly important – the cover letter is the first thing he reads, and it is your (the author's) personal contact with the editor; use the cover letter to focus the editor's attention on the bits you think are important.

Q: Do journal editors shape science? Almost certainly yes – they choose what gets published (at some level).

Personal Comments: This was a very interesting and useful workshop, giving us an opportunity to know how the editing process works. Thanks!

Please note that this post is merely my notes on the presentation. They are not guaranteed to be correct, and unless explicitly stated are not my opinions. They do not reflect the opinions of my employers. Any errors you can happily assume to be mine and no-one else's. I'm happy to correct any errors you may spot – just let me know!


Categories
Meetings & Conferences, Outreach

One for All and All for One: Unification and Education in Systems Biology (BioSysBio 2009)

This was the discussion session I chose. These are just notes of what was being said, so they might be a little disconnected.

+ Words don't necessarily mean what you think they mean. This can be a problem in collaborative model development.
+ This is why ontologies are so important.
+ How to get biologists to use these ontologies, when biologists generate terms and definitions, often without regard to what already exists?
+ Symbols in biology are not standardized.
+ Every science has shared words that mean different things. While there are advantages to having the same definitions from a computational perspective, we can just use whatever words are normal in the community, as long as we make the definition clear. It could be a translation issue rather than a unification issue.
+ Many people have problems with open access ontologies (i.e. someone else could change what you had spent ages doing).
+ Remember, open access != open editing.
+ What people should realize is that if you start doing interdisciplinary work, you really need to change the way you do your research. You need to pay attention to what the other disciplines say.
+ While it is an advantage to take a subject specialism into SB, everyone needs to understand that the other disciplines are useful. Nobody will be able to be a pure SB "jack of all trades". Interdisciplinarity should be taught at an earlier level. Funding bodies are stressing the need for a group of people with different skills.
+ Getting professors and other scientists to actually work for 3, 6 or 9 months or more in other disciplines (at CISBAN, for example, a statistician is working as a wet-lab biologist) is very useful.
+ Allow your scientists to sit in on undergraduate lectures, so that they can build a solid understanding of the other disciplines. People can learn how other subjects work differently, which helps them realize that the terminologies might work differently too.
+ Different disciplines allow you to train your mind in different ways.

Please note that this post is merely my notes. They are not guaranteed to be correct, and unless explicitly stated are not my opinions. They do not reflect the opinions of my employers. Any errors you can happily assume to be mine and no-one else's. I'm happy to correct any errors you may spot – just let me know!


Categories
Meetings & Conferences

Kinetic Data for Reaction Mechanism Steps in SABIO-RK (BioSysBio 2009)

Ulrike Wittig
EML Research GmbH, Germany

SABIO-RK stores information about biochemical reactions and enzyme kinetics. Reactions come mainly from KEGG and from the literature; kinetic information comes only from the literature. You can access SABIO-RK both via a user interface and via web services. She then took us on a tour of the website and how to get information from SABIO-RK. They don't just store the overall reaction, but also the intermediate steps.

There are other sources for reaction mechanisms, such as MACiE, which stores qualitative information based on the 3D crystal structure of the enzyme. The literature also has this information, but not in a standardized way – that's why SABIO-RK is so helpful. The new data model for SABIO-RK has extra features to store the intermediate steps. How does the information get into the database? She explained how information from a paper makes its way in. SABIO-RK now contains more detailed information about the mechanisms of reactions and the intermediate steps. In the future there will be a search function for mechanisms, and the ability to export in formats like SBML (SBML currently cannot handle the hierarchy of reactions used in mechanisms). Reaction mechanisms can also be used for the representation of signalling reactions (e.g. protein-ligand binding), and this will be implemented in future.
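As a rough illustration of what the programmatic (web-service) access might look like – note that the endpoint and query parameters below are placeholders of my own, not the documented SABIO-RK API, so check the project website for the real details:

import requests  # third-party 'requests' package

# Hypothetical endpoint and parameters, for illustration only.
BASE_URL = "http://sabio.example.org/webservices/searchKineticLaws"
params = {"enzyme": "hexokinase", "organism": "Homo sapiens", "format": "sbml"}

response = requests.get(BASE_URL, params=params, timeout=30)
response.raise_for_status()

# The response body would contain the matching kinetic laws (here assumed SBML).
print(response.text[:500])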

Personal Comments: A very nice tour and explanation of how to use SABIO-RK. It's good to see a data model in a talk, too.

Tuesday Session 2
http://friendfeed.com/rooms/biosysbio
http://conferences.theiet.org/biosysbio

Please note that this post is merely my notes on the presentation. They are not guaranteed to be correct, and unless explicitly stated are not my opinions. They do not reflect the opinions of my employers. Any errors you can happily assume to be mine and no-one else's. I'm happy to correct any errors you may spot – just let me know!


Categories
Meetings & Conferences, Standards

Creating, Curating and Computing with CellML (BioSysBio 2009)

Catherine Lloyd et al.
Auckland Bioengineering Institute

CellML is an XML-based markup language which leverages existing standards (e.g. MathML and RDF). Why is a standard format needed at all? The answer lies in the publishing process. A modeller starts out writing the model in whatever language they want, but when others want to access the model from a publication, how can they run it or understand it? Also, writing out the model as a series of equations or graphics can introduce errors. Why not just publish in MATLAB – why bother putting it in CellML? Well, MATLAB isn't used by everyone, and where it is used, it's a procedural language and distinct from the published paper, which contains nothing procedural.

Although they have best-practice guidelines, there are no hard requirements. This flexible structure can be used to describe a wide range of model types: electrophysiology, immunology, cell cycle, muscular contraction, synthetic biology and more. There are some limitations: CellML is good at describing models at the molecular and cellular scale, but not so good at the tissue scale. However, work on this cross-scale modelling is underway.

CellML has a modular structure, allowing models to be broken into components. CellML has an import feature that lets you stick bits of models together, like Lego bricks. SBML doesn't have this yet, though it is planned for future versions. This import feature is really useful and saves time. In CellML, models can share entities (e.g. proteins) and processes (e.g. reactions) between models. Imports are also helpful for models with repeating units: for a pacemaker cell model, a pacemaker unit can be defined once and imported many times. A sketch of what such an import might look like is given below.
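Here is a minimal sketch of that "import the same unit twice" pattern, generating a CellML 1.1 skeleton with Python's ElementTree; the file name, model name and component names are hypothetical, and a real model would of course also contain variables, units and MathML:

import xml.etree.ElementTree as ET

CELLML_NS = "http://www.cellml.org/cellml/1.1#"
XLINK_NS = "http://www.w3.org/1999/xlink"
ET.register_namespace("", CELLML_NS)
ET.register_namespace("xlink", XLINK_NS)

# A (hypothetical) top-level model that reuses one pacemaker unit twice.
model = ET.Element("{%s}model" % CELLML_NS, {"name": "pacemaker_chain"})
imp = ET.SubElement(model, "{%s}import" % CELLML_NS,
                    {"{%s}href" % XLINK_NS: "pacemaker_unit.cellml"})
ET.SubElement(imp, "{%s}component" % CELLML_NS,
              {"name": "unit_1", "component_ref": "pacemaker"})
ET.SubElement(imp, "{%s}component" % CELLML_NS,
              {"name": "unit_2", "component_ref": "pacemaker"})

ET.ElementTree(model).write("chain.cellml", xml_declaration=True, encoding="UTF-8")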

They have two tools (PCEnv and COR) to help develop CellML models. PCEnv allows development in CellML and then export to other formats such as MATLAB, C, Python etc. PCEnv runs on Windows/Linux/Mac; COR is Windows-only. Both tools can also run simulations. PCEnv also shows embedded SVG diagrams of all the models in the repository.

The CellML Model Repository: http://www.cellml.org/models

This repository has over 380 models, all free to download. The majority are from published papers. For each model entry, there is a short description, the curation status, and a schematic diagram. Model curation includes model validation and documentation. Of the 380 models, only 4 could be translated straight from the published paper into a working CellML model (i.e. without help from the curation team first). This is because there are often typographical errors in the paper, a lack of unit definitions, missing parameters, missing initial conditions, missing equations, etc. At the moment they have a star system: 0 = not curated yet; 1 = maths consistent with the published paper; 2 = model is complete and reproduces the results in the published paper; 3 = model satisfies physical constraints, e.g. conservation of mass, momentum, charge, etc. Another problem: for some older models they never had access to the original code.

There's lots of collaboration with SBML. Currently the diagrams are made manually; there's no reason why this can't be done automatically, and that's being worked on now. If we want to encourage modellers (via journals) to put their models into SBML or CellML, we need to provide really nice tools and help with making the models.

Tuesday Session 2
http://friendfeed.com/rooms/biosysbio
http://conferences.theiet.org/biosysbio

Please note that this post is merely my notes on the presentation. They are not guaranteed to be correct, and unless explicitly stated are not my opinions. They do not reflect the opinions of my employers. Any errors you can happily assume to be mine and no-one else's. I'm happy to correct any errors you may spot – just let me know!


Categories
Meetings & Conferences

Modelling Biochemical Dynamics using Time-Varying S-Systems (BioSysBio 2009)

W-H Huang et al. (presented by F-S Wang)
National Chung Cheng University

Almost no literature has so far addressed power-law models with time-varying parameters. In their model formulation, they used a time-varying S-system model, in which the rate coefficients and the kinetic orders are the time-varying parameters. Several kinds of basis functions, such as block pulse functions, Lagrange polynomials, and orthogonal polynomials, can be used to approximate the time-varying parameters. The model parameters for each time scale are constants.
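For reference, the standard S-system form, with the time dependence written in as I understood it from the talk (my own sketch, not their slide), is

\[
\frac{dX_i}{dt} \;=\; \alpha_i(t) \prod_{j=1}^{n} X_j^{\,g_{ij}(t)} \;-\; \beta_i(t) \prod_{j=1}^{n} X_j^{\,h_{ij}(t)}, \qquad i = 1, \dots, n,
\]

where the \(\alpha_i, \beta_i\) are rate coefficients and the \(g_{ij}, h_{ij}\) are kinetic orders, each approximated by a small set of basis functions, e.g. \(\alpha_i(t) \approx \sum_k a_{ik}\,\phi_k(t)\).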

There are two main challenges in parameter estimation: ODE solving and optimization. They developed a modified collocation approach, which is similar to the conventional method except for the approximation technique. Evolutionary algorithms can be applied to overcome the drawbacks of optimization with gradient-based methods; they propose a global-local search method and also describe hybrid differential evolution (HDE). They did both wet-lab and in silico experiments. For the latter, they found that the time-varying model fits the experiments very closely, much better than the time-invariant models. The wet-lab study was a kinetic model of ethanol fermentation using mixed sugars, and the same conclusion – that the time-invariant model did not closely follow the experiment while the time-varying one did – held there too.
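To make the global-local idea concrete, here is a generic sketch (not their HDE implementation) of a global evolutionary search followed by gradient-based local refinement using SciPy; the objective, the simulate() callable and the bounds are placeholders:

import numpy as np
from scipy.optimize import differential_evolution, minimize

def sse(params, t, observed, simulate):
    """Sum of squared errors between simulated and observed time courses.
    `simulate(params, t)` stands in for the user's ODE/S-system solver."""
    return np.sum((simulate(params, t) - observed) ** 2)

def global_local_fit(t, observed, simulate, bounds):
    # Global stage: differential evolution explores the whole parameter space.
    global_result = differential_evolution(sse, bounds, args=(t, observed, simulate),
                                           seed=0, maxiter=200)
    # Local stage: gradient-based refinement starting from the global optimum.
    local_result = minimize(sse, global_result.x, args=(t, observed, simulate),
                            method="L-BFGS-B", bounds=bounds)
    return local_result.x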

Differential equations with constant parameters are commonly used to model biochemical systems. Such a time-invariant model cannot cover all dynamic behaviour, so they developed a time-varying S-system model to overcome these limitations. This model is a close fit to the experimental data.

Tuesday Session 1
http://friendfeed.com/rooms/biosysbio
http://conferences.theiet.org/biosysbio

Please note that this post is merely my notes on the presentation. They are not guaranteed to be correct, and unless explicitly stated are not my opinions. They do not reflect the opinions of my employers. Any errors you can happily assume to be mine and no-one else's. I'm happy to correct any errors you may spot – just let me know!


Categories
Meetings & Conferences

Bayesian Learning of Genetic Network Structure in the Space of Differential Equations (BioSysBio 09)

D Peavoy et al. (presented by Ata Kaban)
University of Birmingham

This work is mainly a feasibility study. They would like to reverse engineer regulatory networks using time-course expression data. There are already a number of approaches, ranging from simple clustering to dynamic and regression models. It's a difficult problem because there are many unknowns in the system. "Simplification of the true complexity is inevitable." You can look at Bayesian nets using graphical models, where nodes are random variables (genes or proteins) and edges are conditional probabilities; the overall model is the joint density. In practice, there aren't enough time points. Another difficulty lies in choosing the form of all of these conditional distributions. They tried an approach inspired by graphical models, but different from them: nodes are still genes and proteins, while edges are reactions modelled as ODEs. The overall model is a set of coupled ODEs of unknown structure, with unknown parameters for the constituent ODEs. The task is to infer structure and parameters from data. They start with some synthetic data, simulated with superimposed additive noise.
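For context, the graphical-model factorization mentioned above is the standard one (my own reminder, not a slide from the talk): the joint density over all nodes factorizes into conditionals given each node's parents,

\[
p(x_1, \dots, x_n) \;=\; \prod_{i=1}^{n} p\!\left(x_i \mid \mathrm{pa}(x_i)\right).
\]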

They have basic building blocks of nonlinear ODEs. By combining Michaelis-Menten rate equations, you can build more complex dynamics. You can also model promotory dimers – dimer formation between proteins occurs before they act as TFs for the next stage of gene expression. She then described the inhibitory dimer. There are 7 different affector types that they are modelling. What followed was a thorough description of the Bayesian framework used for model inference. They generated noisy data from a model with 9 genes and 11 proteins in order to validate the proposed inference procedure. They then defined a model space for search/inference with 9 genes and 15 proteins (as the actual number of proteins is not always known), and pre-defined that at most 4 proteins are allowed to react with a gene. They then asserted a complexity prior for the model, penalizing complicated interaction models. Metropolis-Hastings sampling was used to generate new candidate models. However, parameter inference was needed to evaluate each candidate model's acceptance probability. They used a Gamma(1.3) prior on all parameters to ensure they are positive, and then used Metropolis sampling to obtain the parameter posteriors.
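As a rough sketch of that inner parameter-sampling step (my own generic random-walk Metropolis with a Gamma(1.3) prior, not the authors' code; the log-likelihood callable is a placeholder for whatever the ODE model defines):

import numpy as np
from scipy import stats

def log_posterior(theta, log_likelihood):
    """Gamma(shape=1.3) prior on each (positive) parameter, plus the model's log-likelihood."""
    if np.any(theta <= 0):
        return -np.inf
    return stats.gamma.logpdf(theta, a=1.3).sum() + log_likelihood(theta)

def metropolis(theta0, log_likelihood, n_samples=5000, step=0.05, rng=None):
    """Random-walk Metropolis over the ODE parameters of one candidate model."""
    if rng is None:
        rng = np.random.default_rng(0)
    theta = np.asarray(theta0, dtype=float)
    lp = log_posterior(theta, log_likelihood)
    samples = []
    for _ in range(n_samples):
        proposal = theta + step * rng.standard_normal(theta.shape)
        lp_new = log_posterior(proposal, log_likelihood)
        if np.log(rng.uniform()) < lp_new - lp:  # accept with probability min(1, ratio)
            theta, lp = proposal, lp_new
        samples.append(theta.copy())
    return np.array(samples)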

They then consider changing affector types from one to another, which changes the behaviour and type of the many candidate models. They evaluate a model's acceptance probability by parameter inference. They check for convergence of the models; after 40,000 samples convergence isn't perfect, but it is getting there. It makes a pretty good first step for estimation of the parameters.

The simulation presently takes 5 days on a shared cluster (50 MH chains making 500 model samples each). The model space is still huge, and inserting more biological knowledge could further refine this and make the approach quicker.

Personal comment: "Simplification of the true complexity is inevitable." What a great statement! Inevitable, but perhaps only for the moment 🙂

Tuesday Session 1
http://friendfeed.com/rooms/biosysbio
http://conferences.theiet.org/biosysbio

Please note that this post is merely my notes on the presentation. They are not guaranteed to be correct, and unless explicitly stated are not my opinions. They do not reflect the opinions of my employers. Any errors you can happily assume to be mine and no-one else's. I'm happy to correct any errors you may spot – just let me know!


Categories
Meetings & Conferences

Parameter Inference and Sensitivity Analysis on Biochemical Models (BioSysBio 2009)

K Ergueler et al.
Imperial College London

When talking about Bayesian inference, you need to think about a number of things: the prior distribution, the likelihood, and the posterior distribution; the posterior is proportional to the prior times the likelihood. Then there was a fantastic animated graph showing how these are related. He's interested in the variability in the posterior distribution. When looking into sensitivity and Fisher information, the Hessian is calculated using the sensitivity coefficients. Specifically, he looks at sensitivity coefficients for temporal analysis. Not all parameters behave the same way – some don't get as "excited" about the bifurcations. You can investigate this by looking at sensitivity profiles. Most parameters in a system are sloppy (he showed a graph of log-eigenvalues, with sloppy directions to the left/top and stiff ones to the right/bottom). He then overlaid the eigenvectors on top of this profile and coloured them based on sensitivity: red = stiff, green = sloppy. Red is more pronounced and green less pronounced in the higher eigenvectors.
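As a reminder of the standard relationships he seemed to be using (my sketch, assuming least-squares/Gaussian observations with noise covariance \(\Sigma\), not his exact slides):

\[
p(\theta \mid D) \;\propto\; p(D \mid \theta)\, p(\theta), \qquad
S_{ij}(t) = \frac{\partial y_i(t)}{\partial \theta_j}, \qquad
H \;\approx\; \mathrm{FIM} \;=\; \sum_{t} S(t)^{\mathsf{T}}\, \Sigma^{-1} S(t),
\]

where large eigenvalues of \(H\) correspond to stiff parameter combinations and small eigenvalues to sloppy ones.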

He then used the example of the circadian clock reaction network (Leloup and Goldbeter, 1999). The reactions in the middle seem to be more important to the dynamics of the system, and the TIM branch seems more important than the PER branch (according to their analysis). In conclusion, they analysed parameter sensitivities with respect to the observed time points, the global qualitative behaviour of the system, and the observed system components. Parameter sensitivities define the relative importance of different areas of the reaction network.

Personal Comments: Ah-ha! Another ICL speaker, another LaTeX presentation. Nice! I'm afraid I'm not completely up on the intricacies of Bayesian stats, so I may have gotten some things mixed up. Please let me know any errors I might have introduced! His graphs were very impressive, and very much contributed to the understanding of the topic.

Tuesday Session 1
http://friendfeed.com/rooms/biosysbio
http://conferences.theiet.org/biosysbio

Please note that this post is merely my notes on the presentation. They are not guaranteed to be correct, and unless explicitly stated are not my opinions. They do not reflect the opinions of my employers. Any errors you can happily assume to be mine and no-one else's. I'm happy to correct any errors you may spot – just let me know!


Categories
Meetings & Conferences

A Hybrid G-Protein Coupled Neurotensin Receptor (NTS1) with Wild Type EnvZ expressed in E.coli (BSB)

P F LoCascio et al.
University of Oxford

How do you choose chimeric pieces? Look for generalizable characteristics, e.g. mechanical transduction of a signal. They chose EnvZ-OmpR from E. coli. This is a generic system: phosphorylation is triggered depending on the amount of osmotic pressure. They end up with a simple custom-designed biological circuit. The components are essentially isolated, so you can change one component at a time. They kept the internal part and used a different external component (replacing the transmembrane response to osmotic pressure change). They used a conserved proline to decide what went at the front of the construct.

Why a GPCR? They are a large class of pharmaceutical targets (>30%), and a working GPCR in a microbial system could be part of a molecular sensing tool (it needs to be coupled with the WR signalling mechanism). The neurotensin receptor is a GPCR from R. norvegicus. When designing the hybrid, they need to couple the external signal with the internal transducer to activate the internal transcriptional response. The HAMP domain needs a mechanical stimulus from the linker peptide to activate auto-phosphorylation. They know that the GPCR has some sort of twisting motion, but the 3D structure is not yet known. GPCRs have 7 transmembrane helices, and there is an interesting "8th half-helix" sitting on the end of the 7th helix.

The sequencing was correct – all plasmids were verified. They plan to express the plasmids in the reporter system, test ligand binding, and test the dimerisation effect.

Monday Session 2
http://friendfeed.com/rooms/biosysbio
http://conferences.theiet.org/biosysbio

Please note that this post is merely my notes on the presentation. They are not guaranteed to be correct, and unless explicitly stated are not my opinions. They do not reflect the opinions of my employers. Any errors you can happily assume to be mine and no-one else's. I'm happy to correct any errors you may spot – just let me know!


Categories
Meetings & Conferences

Using Control Theory to Elucidate Biological Signalling Networks (BioSysBio 2009)

M A J Roberts et al.
University of Oxford

Personal comment: I think the title might have changed, but I was too slow on the title slide to get it.

His research focuses on chemotaxis pathways. In E. coli chemotaxis, the signal is sensed by the MCPs and changes the rate of CheA autophosphorylation. CheA can phosphotransfer to CheY and CheB. CheY-P interacts with the motor, leading to motor switching and a change of direction of the bacterium. CheB-P demethylates the MCPs, resulting in adaptation (memory), and CheR methylates the receptor. The pathway is less well understood in other bacteria, where there are often multiple homologues and therefore higher complexity. One example is R. sphaeroides, which has two main chemotaxis operons: CheOp2 and CheOp3. But they don't know how the pathway works or how it is connected. He's working on figuring this out without testing all possible interactions in vitro, by creating models for all possible connections and then invalidating some of them.

This chemotaxis system is useful experimentally because you can measure the live output from cells using cell tethering. They constructed sensing models where the ligand is able to directly or indirectly interact with both MCPs and Tlps. There are a number of parameters that aren't known; these were estimated by fitting wild-type data. If you vary the estimated parameters by 10%, you still get models that fit the WT data pretty well.

The next part of the model is transduction, for which they experimentally determined the parameters. Finally there is the motor binding step, where a simple binding mechanism is assumed. So they have a set of models which can all represent the WT data. They want to maximize the magnitude of the difference between the model outputs in order to discriminate best between the models. This is achieved using linearized model equations around the steady state.
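Presumably the linearization step is the standard one (my sketch, not their exact formulation): for each candidate model \(\dot{x} = f(x, u; \theta)\) with steady state \(x^{\ast}\),

\[
\delta\dot{x} \;=\; A\,\delta x + B\,\delta u, \qquad
A = \left.\frac{\partial f}{\partial x}\right|_{x^{\ast}}, \quad
B = \left.\frac{\partial f}{\partial u}\right|_{x^{\ast}},
\]

and experiments are then chosen to maximize the difference between the predicted outputs of the candidate linear systems.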

The differences between the models under the initial conditions were quite small, so they simulated perturbations in silico to try to get large differences in expression. For example, overexpressing CheY4 has little effect on the WT, so they keep only those models that behave similarly. Further tests are performed to get down to a final model. Of course, other un-modelled reactions may also be correct, so they're looking at extending the approach to find other possibilities.

Monday Session 2
http://friendfeed.com/rooms/biosysbio
http://conferences.theiet.org/biosysbio

Please note that this post is merely my notes on the presentation. They are not guaranteed to be correct, and unless explicitly stated are not my opinions. They do not reflect the opinions of my employers. Any errors you can happily assume to be mine and no-one else's. I'm happy to correct any errors you may spot – just let me know!
