
Why Secure Synthetic Biology? (BioSysBio 2009)

Piers Millett
Biological Weapons Convention Implementation Support Unit, UN

Biology is inherently dual-use: it can be applied to both beneficial and malevolent ends. Synbio is value-neutral – it is the purpose it is put to that determines whether it is good or bad. So the focus of the solution is also on intent. The global ban on malign biology covers intent: it covers all biological agents irrespective of how they are made. Ten years were spent trying to rewrite the bioweapons ban, but the outcome was inconclusive. Can we police every single synbio centre in the world? Do we narrow our view somehow (by production capacity, research area, funding type)? Neither approach is satisfactory. Hence, top-down control is not practical at the moment – and won't be until the field stabilizes.

Kofi Annan: "Preventing bioterrorism requires innovative solutions specific to the nature of the threat. Biotechnology is not like nuclear technology … The approach to fighting the abuse of biotechnology … will have more in common with measures against cybercrime than with the work to control nuclear proliferation." This approach is user-centric rather than top-down.

In contrast to the top-down approach, a bottom-up approach is possible, but difficult: security people and biologists need to work together. The BWC is ready to help with information, access to expertise, and more.

Personal Comment: A very engaging speaker with really nice pacing. On a lighter note, I liked the audience participation, the videos, and the pop culture references (Dr. Evil, Jurassic Park, Spiderman…).

Wednesday Session 2
http://friendfeed.com/rooms/biosysbio
http://conferences.theiet.org/biosysbio

Please note that this post is merely my notes on the presentation. They are not guaranteed to be correct, and unless explicitly stated are not my opinions. They do not reflect the opinions of my employers. Any errors you can happily assume to be mine and no-one else's. I'm happy to correct any errors you may spot – just let me know!


Data – Knowledge – Application – Governance (BioSysBio 2009)

Joyce Tait
ESRC Genomics Network

In her view, genetic engineering is to the 21st century what evolution was to the 20th. There is a non-linear progression from science to the marketplace; it used to be linear (wait for a product to be ready). Governance and regulation: there is a presumption of regulation for a novel area of life science. How do governments decide on regulatory approaches? What precedents do they invoke? Will GM crops be the precedent for synbio, and where will that lead? Feedback loops: regulation is what makes development expensive, and venture capital won't invest without a regulatory system in place.

Upstream engagement promises: a promissory agenda from social scientists; a more democratic approach; scientific research will not be adversely affected; citizens will become more accepting of new tech. Downsides of upstream engagement: most people have better things to do; those who do engage might have an "axe to grind", or may develop concerns that they didn't have before; some research areas will be discouraged; we can't always predict what will come out of basic research happening now – this would be speculation on a very large time scale; even when that information is available, we can't predict the products coming from it; most really innovative product developments require combined contributions from more than one area of fundamental science, but we won't know what we are missing; even doing this, you still won't avoid conflict or mistakes later on; you can't control what happens privately, so you're only inhibiting public work; we're asking today's citizens to decide for the people of the future; and under what circumstances is it legitimate to allow one societal group to foreclose options for others?

These aren't hypothetical situations – it really may block off certain areas of research. Public dialogue – rather than engagement – is an excellent thing: it helps manage expectations. She suggests standards for public engagement in terms of willingness to listen to alternative views and not knowingly presenting biased information to support one's own views. We need to avoid domination of the dialogue by ideological views that are not amenable to negotiation.

Wednesday Session 2
http://friendfeed.com/rooms/biosysbio
http://conferences.theiet.org/biosysbio


Programming RNA Devices to Control Cellular Information Processing (BioSysBio 2009)

C Smolke
Caltech

This talk is more focused on synbio. There are many natural chemicals and materials with useful properties, and it would be great to be able to do things with them. Examples are taxol from the Pacific yew, codeine and morphine from opium poppies, butanol from Clostridium, spider silk, abalone shell, and rubber from the rubber tree. It is much more efficient to produce these useful chemicals in a microbe than to extract them from the natural source. These microbial factories are one useful application area for synbio. Intelligent therapeutics is another: two biomarkers together would (via other steps) produce a programmed output. You could link these programs to biosensors, perform metabolic reprogramming, perform programmed growth, and more. The ultimate goal is to be able to engineer systems, and these systems generally need to interface with their environment.

Synbio *also* has circuitry, sensors and actuators, just like more traditional forms of engineering have. Foundational technologies (synthesis) -> engineering frameworks (standardization and composition) -> engineered biological systems (environment, health and medicine). An information processing control (IPC) molecule would have three functions, as mentioned earlier: a sensor, computation (process information from the sensor and regulate the activity of the actuator), and an actuator. There is a variety of inputs for the sensor (small molecules, proteins, RNA, DNA, metal ions, temperature, pH, etc.). The actuator could link to various mechanisms such as transcription, translation, degradation, splicing, enzyme activity, complex formation, etc. Key engineering properties to think about are scalability, portability, utility, composability, and reliability.

What type of substrate should we build these IPC systems on? What about RNA synthetic biology? You'd go from RNA parts -> RNA devices -> engineered systems. Experimental frameworks provide general rules for assembling the parts into higher-order devices. You then organize devices into systems, using in silico design frameworks for programming quantitative device performance. Why RNA? One reason is the biology of functional RNAs: noncoding regulatory RNA pathways are very useful. Secondly, you can have RNA sensor elements (aptamers), which bind a wide range of ligands with high specificity and affinity. Thirdly, RNA is a very programmable molecule.

They've developed a number of modular frameworks for assembling RNA devices, and she then gave a good explanation of one of them. In this explanation, she mentioned that the transmitter can be modified to achieve a desired gate function, and that the remaining nodes (or points of integration) can be used to assemble devices that exhibit desired information-processing operations. Sensor + transmitter + actuator = device. The transmitter component for a buffer gate works via competitive binding between two strands: as the input increases in the cell, a particular conformation is favored and gene expression is turned on. An inverter gate is the exact opposite. They wanted to make sure these frameworks are modular, and demonstrated this by swapping in a different receptor for the sensor to make the device responsive to a different molecule.
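To make the gate behaviour concrete, here is a minimal numerical sketch – not Smolke's published model – in which input binding is treated as a simple Hill-type occupancy that shifts the device between expression-off and expression-on conformations. All parameter names and values (`kd`, `basal`, `swing`) are illustrative assumptions:

```python
import numpy as np

def gate_response(ligand, kd=10.0, n=1.0, basal=0.1, swing=0.9, invert=False):
    """Fractional gene expression for a toy two-state RNA device.

    Ligand binding to the aptamer sensor shifts a competitive-strand
    equilibrium; occupancy follows a Hill curve (illustrative only).
    """
    occupancy = ligand**n / (kd**n + ligand**n)
    if invert:
        occupancy = 1.0 - occupancy   # inverter gate: the opposite conformation is favored
    return basal + swing * occupancy  # a basal expression level plus an output swing

ligand = np.logspace(-2, 3, 6)        # input concentrations (arbitrary units)
print("buffer  :", np.round(gate_response(ligand), 3))
print("inverter:", np.round(gate_response(ligand, invert=True), 3))
```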

You can also build higher-order information processing devices using these simpler modular devices. For instance, you might want to separate a gradient of an input signal into discrete parts. Another example would be the processing of multiple inputs, or cooperativity of the inputs.

The first architecture they proposed (SI 1): signal integration within the 3' UTR, with multiple devices in series. They can build AND and NOR gates, as well as bandpass signal filters and others. In the output signal filter device, the devices produce shifts in basal expression levels and output swing. Independent function is supported by matches to predicted values – the two devices linked in tandem act independently.
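If the tandem devices really do act independently, the predicted output of a composite device is just the product of the individual pass fractions, which is enough to see where the AND and NOR behaviour comes from. A toy calculation with made-up on/off pass fractions:

```python
# Tandem ribozyme devices in the 3' UTR act independently, so the
# composite output is approximated by the product of the individual
# pass fractions (fraction of transcripts left intact by each device).
def series(*fracs):
    out = 1.0
    for f in fracs:
        out *= f
    return out

ON, OFF = 0.9, 0.1  # illustrative pass fractions for a single device
for a in (0, 1):
    for b in (0, 1):
        and_gate = series(ON if a else OFF, ON if b else OFF)   # two buffer gates
        nor_gate = series(OFF if a else ON, OFF if b else ON)   # two inverter gates
        print(f"A={a} B={b}  AND~{and_gate:.2f}  NOR~{nor_gate:.2f}")
```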

SI 2: a different type of architecture where signal integration is being performed at a single ribozyme core through both stems. You can make a NAND gate by coupling two inverter gates.

SI 3: two sensor-transmitter components are coupled onto a single ribozyme stem, which allows them to work in series. You can achieve signal gain (cooperativity) as well as some gate types. With cooperativity, binding of input A to the first component modulates the second component, which allows a second molecule of input A to bind.

Modularity of the actuator domain: using an shRNA switch, which exhibits similar properties to the ribozyme device.

How do we take these components and put them into real applications? One application is immune system therapies, where RNA-based systems offer the potential for tight, programmable regulation over target protein levels. She gave a really nice example of using a series of ribozymes to tune T-cell proliferation with RNA signal filters. After you get the right response, you need to create stable cell lines. She showed this working in mice.

Personal Comments: A very clear, very interesting talk on her work. Thanks very much!

Wednesday Session 1
http://friendfeed.com/rooms/biosysbio
http://conferences.theiet.org/biosysbio


An Intuitive Automated Modelling Interface for Systems Biology (BioSysBio 2009)

O Kahramanogullari et al.
Imperial College London

He works on improving the modelling and inference step. He makes use of SPiM (the Stochastic Pi Machine), a process-algebra-based tool from Microsoft Research. Process algebras are used to study complex reactive systems, and are therefore well suited to modelling biological systems. They have used this technique to build a process model of Rho GTPases with GDIs (Kahramanogullari et al. 2009, Theoretical Computer Science, in press). They also created a process model for actin polymerisation (Kahramanogullari et al. 2009, Proc. of FBTC'08, Elsevier). Such structures can be written in process algebra when they would be extremely difficult with differential-equation techniques.

Process algebra is very difficult to use directly. So they've developed an intuitive language interface for modelling with SPiM. The underlying assumption is that biochemical species are stateful entities with connectivity interfaces to other species. Further, a species can have a number of sites through which it interacts with other species, changing its state as a result of these interactions. So they allow descriptions of the model in a natural-language-like narrative language. Their tool is available for download from their website: http://www.doc.ic.ac.uk/~ozank/pim.html .
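SPiM has its own stochastic pi-calculus syntax, which I won't try to reproduce from memory; but the flavour of the underlying semantics – stateful species interacting and changing state, simulated stochastically – can be sketched with a plain Gillespie algorithm. The species names and rate constants below are illustrative, loosely inspired by the Rho GTPase/GDI example:

```python
import random

# Stateful species: Rho in GDP- or GTP-bound form, free GDI, Rho:GDI complex.
state = {"RhoGDP": 100, "RhoGTP": 0, "GDI": 50, "RhoGDI": 0}

# Each channel: (propensity function with rate constant folded in, state update).
reactions = [
    (lambda s: 1.0  * s["RhoGDP"],            {"RhoGDP": -1, "RhoGTP": +1}),               # activation
    (lambda s: 0.5  * s["RhoGTP"],            {"RhoGTP": -1, "RhoGDP": +1}),               # hydrolysis
    (lambda s: 0.01 * s["RhoGDP"] * s["GDI"], {"RhoGDP": -1, "GDI": -1, "RhoGDI": +1}),    # GDI binds
    (lambda s: 0.2  * s["RhoGDI"],            {"RhoGDI": -1, "RhoGDP": +1, "GDI": +1}),    # GDI releases
]

t, t_end = 0.0, 10.0
while t < t_end:
    props = [f(state) for f, _ in reactions]
    total = sum(props)
    if total == 0:
        break
    t += random.expovariate(total)        # time to the next event
    r = random.uniform(0, total)          # pick an event weighted by propensity
    for p, (_, delta) in zip(props, reactions):
        if r < p:
            for species, change in delta.items():
                state[species] += change
            break
        r -= p

print(round(t, 2), state)
```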

Wednesday Session 1
http://friendfeed.com/rooms/biosysbio
http://conferences.theiet.org/biosysbio


Novel Tools for Plant Tissue Engineering (BioSysBio 2009)

L Dupuy et al.
Scottish Crop Research Institute

Plants are ideal models for the engineering of synthetic multicellular systems; however, there is a need for tools to measure, process, and design such systems.

Quantitative analysis of plant multicellular kinematics. The segmentation-of-cell-architectures approach: grow a region of pixels incrementally by raising the intensity threshold. Basins (sets of pixels) are initiated at cell centres and expand where neighbouring pixels have a lower intensity. The balloon approach: balloons are initiated at cell centres, there is a contact-search initialization, and then "physical" inflation of the balloons under certain circumstances. What is the application to cell growth kinematics? You can find really clear geometric rules which drive where and when cell divisions occur.
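The basin-growing idea is essentially a seeded watershed. A minimal sketch using scikit-image on a toy image (the image and seed placement are invented for illustration; this is not the speaker's actual pipeline):

```python
import numpy as np
from skimage.segmentation import watershed

# Toy intensity image: two dark "cell centres" separated by a bright wall,
# standing in for a membrane-stained micrograph.
img = np.ones((64, 64))
img[:, 31:33] = 5.0                   # bright cell wall between two cells
img[16, 16] = img[16, 48] = 0.0       # darkest pixels at the two cell centres

# Seeds at the cell centres, mimicking basins initiated at cell centres.
markers = np.zeros_like(img, dtype=int)
markers[16, 16], markers[16, 48] = 1, 2

# Basins grow as the "water level" rises: pixels join the neighbouring
# basin of lower intensity until the basins meet at the wall.
labels = watershed(img, markers)
print(np.unique(labels), labels[16, 20], labels[16, 44])  # -> [1 2] 1 2
```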

Integrating molecular and computational tools. Automated analysis of cell growth involves labelling the plasma membrane and nucleus simultaneously, which allows algorithms to be combined, automates the cell search, and facilitates 3D segmentation. Standardization of biological parts on a per-cell basis includes normalizing gene expression; there is also an ImageJ plugin for automating ratiometric analyses; and more.
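As a rough illustration of the arithmetic behind a ratiometric analysis (the real plugin is for ImageJ; this NumPy version, with made-up pixel values, just shows the normalization step):

```python
import numpy as np

def ratiometric(signal, reference, background=0.0, eps=1e-9):
    """Per-pixel ratio of a reporter channel to a constitutive reference
    channel after background subtraction, normalizing expression on a
    per-cell basis."""
    s = np.clip(signal - background, 0.0, None)
    r = np.clip(reference - background, 0.0, None)
    return s / (r + eps)   # eps avoids division by zero outside cells

signal    = np.array([[120.0, 40.0], [10.0, 10.0]])  # reporter channel
reference = np.array([[ 60.0, 40.0], [10.0, 10.0]])  # reference channel
print(ratiometric(signal, reference, background=10.0))
```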

Computational models for tissue growth and development. There are computational and molecular tools that can help here. Modelling tissue growth is a multiscale problem. You also have to take into account the mechanics of growth: cells are closed-walled structures maintained in tension by turgor pressure; permanent deformation of the cell wall material enables cell expansion; and the cell's genetic activity influences the cell wall material properties. CellModeller is a tool for data analysis, visualisation, simulation, and segmentation reconstruction. It uses an XML exchange format, a Python interface, and a data structure implemented in C++. He then described how CellModeller works using a trichome patterning example. Trichomes are hair-like cells on the plant epidermis, and the pattern of these cells is not random.
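The growth mechanics described here – turgor-driven expansion against a yielding wall – is classically captured by the Lockhart equation, in which the relative growth rate is proportional to how far turgor pressure exceeds a yield threshold. A tiny forward-Euler sketch with illustrative parameters (not necessarily CellModeller's actual formulation):

```python
# Lockhart model: relative growth rate = phi * (P - Y) when turgor P
# exceeds the wall yield threshold Y; no growth otherwise.
phi, P, Y = 0.2, 0.6, 0.3   # wall extensibility, turgor (MPa), yield (MPa)
L, dt = 1.0, 0.1            # initial cell length, time step

for _ in range(50):
    L += dt * phi * max(P - Y, 0.0) * L   # permanent wall deformation
print(round(L, 3))  # ~1.35: slow exponential expansion
```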

Multicellularity is key for building complexity of a system. Plant systems are ideal for engineering cell-cell interactions. There is a whole group of tools to create models, from bytes to molecules.

Personal Comment: These notes are a bit scattered, as it was just after I gave my talk, and I wasn't completely in zen note-taking mode 🙂 However, there were some great pictures of plant cells and models, and it was a well-structured talk. It was nice to see modelling tools for multiple cells.

Wednesday Session 1
http://friendfeed.com/rooms/biosysbio
http://conferences.theiet.org/biosysbio


Panel Discussion: Ethics, Public Engagement, Biosecurity and more (BioSysBio 2009)

Panel is:
Caitlin Cockerton (Chair)
Julian Savulescu
Matthew Harvey
Piers Millett
Drew Endy

Each panelist starts by giving a 10-minute talk.

Drew Endy: An Engineer's Perspective on Synthetic Biology

He's interested in synbio because of, among other things, its potential contributions to sustainability. The basics of genetic engineering haven't changed in 30+ years; synbio, however, amounts to a tools revolution. But do we need to "manage" people who are trying to "hack" the genome in their garage? Could you actually file patents on what's in the BioBricks Registry? Yes, but it would be expensive. Will there be a cultural synchronization, or a continued disconnect, between genetic engineering researchers and the anti-GMO camp in future?

Personal Comments: A drawing of Rama (I think) – great sci-fi link! Also, a slick slide presentation with very few words and lots and lots of pictures – I like it.

Matthew Harvey: Synthetic biology and public engagement

Matthew Harvey is the Senior Policy Adviser, Science Policy Centre, Royal Society, UK.

One aspect of public engagement (PE): we shouldn't force people to be engaged if they don't care. Many of the PE exercises for GM started out adversarial: people assumed that scientists were "automatically" for GM. Unlike GM, there isn't a series of products queueing up to be sold; however, the risk-assessment part remains vital. The Woodrow Wilson Center did a tentative PE study on synbio. They found that even with very low awareness of synbio, two-thirds of adults were willing to express an initial opinion on the trade-off between potential benefits and risks. People also had questions going well beyond risks and benefits (who, what, when, where, how, etc.). Based on this, institutions have been trying to move PE upstream, before products are available. This is pitched as social-intelligence gathering, and may try to anticipate problems that don't exist yet (for good or ill).

Julian Savulescu: Two Concerns about Synthetic Biology

He is from the University of Oxford. The benefits are already well covered, so he wants to raise two concerns: synbio poses a risk of malevolent use, and synbio might undermine the moral status of living things. These concerns can be understood as variants of a common concern about promoting future wrongdoing.

Regarding the first concern, Cello et al. (2002) reported the de novo synthesis of poliovirus, and Tumpey et al. (2005) reconstructed the 1918 Spanish influenza virus. For the second concern, people are worried that synbio will contribute to a feeling that life no longer has a "special status". For a more thorough look, see Cho, Magnus, Caplan and McGee (1999). But where, on the nebulous scale of "moral status", do the products of synbio belong?

A reformulation of the second concern: synbio beings are assigned great moral status, which causes a sacrifice of human/animal status for the sake of the synbios, which could lead to humans/animals being harmed.

Suppose we correctly assign great moral status to synbios: humans/animals could then be permissibly harmed. Alternatively, suppose we incorrectly assign this status: humans/animals get wrongly harmed.

Some arguments: scientific inquiry is justified by the intrinsic value of the knowledge it produces, but this assumes that the value of knowledge trumps other moral values. The second is the gunmaker's defence: a scientist is not responsible for malevolent uses; but wrongs for which we are not responsible can still be relevant to the ethical assessment of our conduct. Additionally, we can't predict the future, so any principle which requires us to do so is unworkable; but it may well be possible to identify predictors of malevolent use – we haven't even tried.

The two main concerns can be understood as variants of a more general concern about bringing about wrongdoing. The most popular way of dissolving these concerns – scientific isolationism – fails.

Challenges for regulators: minimise the risk of malevolent use. For scientists: make better predictions about how research will be used. For philosophers: ascertain criteria for moral status, and determine how to weigh the risk of future wrongdoing against the benefits of pursuing research in synbio.

Personal Comment: I don't agree that an increase in moral status (if that's the way it goes) of synbios would necessarily lead to a drop in the status of humans/animals.

Piers Millett

Personal Comment: Piers generously dropped his talk so that the panel discussion could begin. That was very nice, and very timely, as there were only 15 minutes left and the discussion hadn't started yet! A real shame to miss it, especially since we tantalizingly saw his first slide: a gigantic UN symbol with the words "Biological Weapons Convention Implementation Support Unit" underneath. It made me feel like we were in a secret meeting or something. However, it was a smart move. A tip of the hat to him.

General Discussion

Q: Are there any occasions when a political decision has been needed to prioritize types of science (including synbio) after upstream PE has been attempted? Matthew isn't aware of any such occasion. Of course, this conference and the community itself are examples of upstream discussion in general.

Q: Comment for Julian: applications in practice aren't always influenced by whether or not the technology was originally developed for military purposes. (Personal Comment: I believe the example provided was the laser.) Drew mentioned that you could of course spend loads of time thinking about military/non-military applications. It is also good to take action early, while things are still being figured out. One example is the creation of iGEM as a cooperative community contest, as opposed to something more aggressive such as a "bug wars" game. 🙂

Piers: There are at least two approaches to doing DIY bio: one is people doing biology on their kitchen tables; the other is a community model where you don't expect to have a lab in your house, but you could have a community lab in a central location that can meet regulations and where people can do things. The latter is quite interesting.

Q: Is synbio the end of evolution? How does it fit? Drew: evolution is the most successful design framework for biology, but we don't know how to deploy it yet! We can't go forward with existing frameworks for things like patents – it would overload the current system.

Overall Personal Comments: The Twitter #biosysbio feed has been quite interesting for this session.


Standards for Synthetic Biology (BioSysBio 2009)

Existing Standards for DNA Description

Guy Cochrane
EBI

For the EMBL database, they need to provide the capability for submission and collaborator data exchange. They use SRS for text search and retrieval, dbfetch for simple sequence retrieval, and they also dump the whole set of files out. There has been a large amount of growth over the past year or so, as new technologies allow much faster sequencing.
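For reference, dbfetch is a plain HTTP interface; a sketch like the following should retrieve an entry in EMBL flat-file format (the accession is arbitrary, and the parameter names are assumptions to verify against the current EBI dbfetch documentation):

```python
import urllib.request

# Fetch one nucleotide entry in EMBL flat-file format via EBI dbfetch.
url = ("https://www.ebi.ac.uk/Tools/dbfetch/dbfetch"
       "?db=embl&id=X56734&format=embl&style=raw")
with urllib.request.urlopen(url) as resp:
    print(resp.read().decode("utf-8")[:300])  # first few header lines
```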

Personal Comment: I took fewer notes for this section as I used to work on TrEMBL (now part of UniProt) and am quite familiar with EMBL, so I didn't feel the need to write much down…!

Previous Standards Effort: SBML

Herbert Sauro
University of Washington, Seattle

In 1999 there were five or six different simulators, and people wanted to be able to move models from one tool to the next. SBML was originally created to represent homogeneous multi-compartment biochemical systems; they estimate that this format can cover about 80% of the models out there. The initial version was funded by JST. Over 120 software packages now support SBML, including MATLAB and Mathematica. SBML is also accepted by many journals, including Nature, Science, and PLoS. It has since spawned many other initiatives.
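For readers who haven't seen SBML, here is roughly what a minimal Level 2 model looks like – one compartment, two species, one first-order reaction. It's wrapped in Python only so the snippet is self-contained and its well-formedness can be checked; a hand-written illustration, not a substitute for libSBML or the specification:

```python
import xml.etree.ElementTree as ET

SBML = """<sbml xmlns="http://www.sbml.org/sbml/level2" level="2" version="1">
  <model id="decay">
    <listOfCompartments>
      <compartment id="cell" size="1"/>
    </listOfCompartments>
    <listOfSpecies>
      <species id="S" compartment="cell" initialConcentration="10"/>
      <species id="P" compartment="cell" initialConcentration="0"/>
    </listOfSpecies>
    <listOfReactions>
      <reaction id="conversion" reversible="false">
        <listOfReactants><speciesReference species="S"/></listOfReactants>
        <listOfProducts><speciesReference species="P"/></listOfProducts>
        <kineticLaw>
          <math xmlns="http://www.w3.org/1998/Math/MathML">
            <apply><times/><ci>k</ci><ci>S</ci></apply>
          </math>
          <listOfParameters><parameter id="k" value="0.1"/></listOfParameters>
        </kineticLaw>
      </reaction>
    </listOfReactions>
  </model>
</sbml>"""

root = ET.fromstring(SBML)  # parses, i.e. the document is well-formed XML
print(root.tag)             # {http://www.sbml.org/sbml/level2}sbml
```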

Key contributing factors to its uptake: a need from the community; availability of detailed documentation; annual or biannual two-day meetings; portable software libraries enabling developers to incorporate standard capabilities into their software; and the fact that they deliberately didn't try to do everything, since covering about 80% of the community's needs at the time was enough. Because the libraries were maintained centrally, the standard didn't diverge, and extensions and modifications were agreed by the community and could then be easily incorporated by developers.

SBML has been going for 8 years, and significant changes are planned. But the exciting things are the peripheral results: BioModels (a model repository), KiSAO (ontology/CV), SBO (ontology/CV), TEDDY (ontology/CV), MIASE (a presumptive standard for describing simulation experiments), SBRML (a presumptive standard for simulation results), and Antimony (a human-readable version of SBML).

With a standard format you can suddenly do compliance testing: do all applications produce the same results – or even succeed – when simulating all the models in BioModels? RoadRunner, COPASI, BioUML, and SBML ODE Solver perform the best.

Physical Standards and the BioBrick Registry

Randy Rettberg

The idea of the Registry came from the TTL Data Book for design engineers. The current Registry contains a wiki and more – it looks like a website, not a data book. Each BioBrick part is listed and has its own page. The number of iGEM teams in 2003 was fewer than 10; in 2008 it was 84, with 1180 people.

The quality of the parts is really important. Starting last year, they ran a specific set of quality-control tests, making sure that the top 800 bricks grew, had good sequence, that users said they worked, and so on.

They have also worked on the overall structure of the Registry. He'd like to move in the direction of a more distributed system. Future work includes an extension to a DAS interface, uploading parts, and external tool hooks for sequence analysis and for sequence and feature editors.
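As a purely hypothetical sketch of what an external tool hook might look like from the client side: the Registry has exposed an XML view of parts, but the URL pattern and response structure below are assumptions to verify against the Registry's own documentation before use:

```python
import urllib.request
import xml.etree.ElementTree as ET

# Hypothetical: fetch the XML record for one BioBrick part.
# The endpoint pattern is an assumption, not a documented guarantee.
part = "BBa_B0034"  # a commonly cited ribosome binding site part
url = f"http://parts.igem.org/cgi/xml/part.cgi?part={part}"

with urllib.request.urlopen(url) as resp:
    root = ET.fromstring(resp.read())

for elem in root.iter():   # dump whatever tags come back;
    print(elem.tag)        # the schema is not guaranteed here
```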

This session is a preface to tomorrow's end-of-meeting standards workshop. Beer and pizza!

Tuesday Standards Session
http://friendfeed.com/rooms/biosysbio
http://conferences.theiet.org/biosysbio


Poacher Turned Gamekeeper: A View of Working in Science from the Publishing Side of the Fence, BSB09

There were a number of workshops running in parallel – I decided to visit "Poacher Turned Gamekeeper: A View of Working in Science from the Publishing Side of the Fence", run by Chris Surridge, a scientific editor at Nature. These are my notes from that session.

He wants to convince us that journal editors are human beings, too. Scientific publishing is a product we all use: we produce papers, and these are generally considered the final end product. Therefore it is important to understand the process involved in scientific publishing.

About Chris Surridge: he has a PhD in biophysics – specifically X-ray crystallography and mass spectrometry, with his PhD on microtubule assembly. He's worked at Nature for 14-15 years. How did he get there? After some postdoc work he had a decision to make; he didn't want to be an eternal postdoc, for example. He saw an advert for a job as an editor at Nature and was offered a job with Nature Structural Biology. He has also worked on PLoS and PLoS ONE.

Why publish in a particular journal? Impact factor? Good fit? Right audience? Resulting status? Because your supervisor says so? In general, it comes down to the journal's limited resources: not everyone who tries will get published in their journal of choice. Also, many journals don't want to publish too many papers and have their impact factor suffer as a result. Is this an artificial scarcity? Yes, he says, there is a degree of it in the publishing world (though in his opinion Nature and Science don't do this much).

So, for whatever reason, journals are limited in the number of papers they can publish each week. Nature's resources mean that they can only really publish 10-11 biological science papers per week, and they get about 150 submissions per week – so there is quite a lot of attrition. The job of the manuscript editor is to use the peer-review system and their own understanding to sift out the papers most appropriate for their journal.

Papers come in and are assigned to a subject area, and then to an editor (either full-time or academic editors who do it part-time). From reading a paper, you try to gauge its relevance, how many questions it answers, etc. He isn't really worried about how it will be picked up in the press, and he doesn't look too closely at the names on the paper: it really *isn't* easier for big shots to get papers published. The simple answer is that there is a reason big shots became big shots (e.g. the quality of their work).

This is exactly what you do when you are given a paper to present in a journal club. The paper then moves into the more formal stage of peer review – though not everything gets sent out for review. There is an empirical rule that a journal publishes about half the papers it sends out for peer review (this holds mainly true, but not completely). The capacity of a Nature editor is about 10-15 papers of their own, plus reading about 10 of their colleagues'. At PLoS it's very similar. If a paper gets rejected without review, it's generally because the editor feels that, even if true, it's not appropriate for the journal. In general, editors don't make very technical decisions – that's left to the peer reviewers. Hence, rejection letters sent before review tend to be bland.

Editors tend to be harder on papers in subject areas they are very familiar with. Chris started working in the area he knew but quickly branched out – he says there's nothing like reading 10-20 papers a week in a subject area to get you up to speed quickly in whole other fields.

So, back to the next step in the process: peer review. Referee comments tell you about technical quality. The referees should tell the editor whether or not the paper is *true* and accurate, and whether or not it is as surprising as the editor thinks it is. The technical accuracy is what you really need the referees for. Of course, people who have worked in a subject area, as much as they wish or try to be, are never completely unbiased; therefore it's a good idea not to rely on one referee. If you choose two, chances are they will disagree with each other, so the ideal minimum is three. The more referees you add, the more conflicting opinions you'll have, so you don't want too many, because it becomes harder to make a decision.

Some referees are better at judging technical aspects, while others are good on knowledge of the system in question – so it's a balancing act to get the right mix of referees. Once a paper is accepted, there is a bit of a bargaining session between the authors and the referees over the requested changes; here the editor acts as mediator. Finally, the editors have to ensure that the finished version fits within the constraints of the journal.

In summary: filter (editor) -> peer review (referees) -> filter (editor) -> tweak (authors, editors, subeditors) -> publish.

Q: What does Open Access (OA) mean in publishing? OA papers are freely available, copyright is retained by the author, and the paper is published under a licence that allows reuse with attribution. What OA isn't is a publishing model, and it has nothing to do with editorial standards (i.e. OA doesn't say anything about editorial policy).

These days, scientific publishing is virtually all on the internet. He did a quick straw poll: how many of us had read a real paper version of a journal among the last five papers we read? Two people – one paper was in Nature and one in Science. Other than that, no-one. Online publication is especially useful for methods, where you don't get the full methods in the paper version because there is no room.

Two or three years ago, Nature tried out open peer review (refereeing online), which had been pioneered by Atmospheric Chemistry and Physics. Anyone can write a report, and after a certain amount of time the editors decide whether or not to publish. What Nature found was that no-one came and commented.

There is a different version of open peer review where the process is normal, but the referees give up their anonymity and allow their comments to be published. Some journals do this successfully.

Q: What qualifications do you need? You need to be a scientist 🙂 But there are no exact qualifications – just reply to a job advertisement. Most journal editors now have to have a PhD (it didn't use to be like that). Research experience is taken into consideration, but the amount is variable. However, there's no way to have editorial skills without doing the job, so they look at people's potential. If you get an interview, you're sent manuscripts to assess prior to the interview.

He also said that LaTeX submissions are hard for many journals to handle. Journals that are completely LaTeX-based are fine, but supporting multiple submission formats is hard. Also, the conversion from LaTeX to the software actually used to typeset the print version of the journal is not easy.

Tips: give your paper some context; write for the journal's audience (specific or broad); don't overreach in your broad statements of applicability; and remember that cover letters are incredibly important – the cover letter is the first thing he reads, and it is your personal contact, as the author, with the editor. Use it to focus the editor's attention on the bits you think are important.

Q: Do journal editors shape science? Almost certainly yes – they choose what gets published (at some level).

Personal Comments: This was a very interesting and useful workshop, giving us an opportunity to know how the editing process works. Thanks!


One for All and All for One: Unification and Education in Systems Biology (BioSysBio 2009)

This was the discussion session I chose. These are just notes of what was being said, so they might be a little disconnected.

+ Words don't necessarily mean what you think they mean. This can be a problem in collaborative model development.
+ This is why ontologies are so important.
+ How do we get biologists to use these ontologies, when biologists generate terms and definitions, often without regard to what already exists?
+ Symbols in biology are not standardized.
+ Every science has shared words that mean different things. While there are advantages to having the same definitions from a computational perspective, we can just use whatever words are normal in the community and make the definitions clear. It could be a translation issue rather than a unification issue.
+ Many people have problems with open-access ontologies (i.e. the worry that someone else could change what you had spent ages doing).
+ Remember, open access != open editing.
+ What people should realize is that if you start doing interdisciplinary work, you really need to change the way you do your research: you need to pay attention to what the other disciplines say.
+ While it is an advantage to bring a subject specialism into systems biology, everyone needs to understand that the other disciplines are useful. Nobody will be able to be a pure systems-biology "jack of all trades". Interdisciplinarity should be taught at an earlier level, and funding bodies are stressing the need for groups of people with different skills.
+ Getting professors and other scientists to actually work for 3, 6, or 9 months or more in another discipline (at CISBAN, for example, a statistician is working as a wet-lab biologist) is very useful.
+ Allow your scientists to sit in on undergraduate lectures so they can gain a solid understanding of the other disciplines. People can learn how other subjects work, and come to realize that terminologies might work differently too.
+ Different disciplines allow you to train your mind in different ways.


Kinetic Data for Reaction Mechanism Steps in SABIO-RK (BioSysBio 2009)

Ulrike Wittig
EML Research GmbH, Germany

SABIO-RK stores information about biochemical reactions and enzyme kinetics. Reactions come mainly from KEGG and from the literature; kinetic information comes only from the literature. You can access SABIO-RK both via a user interface and via web services. She then took us through a tour of the website and how to get information out of SABIO-RK. They don't just store the overall reaction, but also the intermediate steps.
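A hedged sketch of such a web-service call (the endpoint and query syntax are based on my understanding of SABIO-RK's later REST interface, and should be verified against the current SABIO-RK documentation):

```python
import urllib.parse
import urllib.request

# Query kinetic laws for a given organism/enzyme; results come back
# as an SBML document. Endpoint and field names are assumptions.
query = 'Organism:"Homo sapiens" AND Enzymename:"hexokinase"'
url = ("http://sabiork.h-its.org/sabioRestWebServices/searchKineticLaws/sbml?"
       + urllib.parse.urlencode({"q": query}))

with urllib.request.urlopen(url) as resp:
    sbml = resp.read().decode("utf-8")
print(sbml[:200])  # start of the returned SBML
```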

Other sources for reaction mechanisms: MACiE, which stores qualitative information based on the 3D crystal structure of the enzyme. The literature also has this information, but not in a standardized way – that's why SABIO-RK is so helpful. The new data model for SABIO-RK has extra features to store the intermediate steps. How does the information get into the database? She explained how information from a paper gets into the database. SABIO-RK now contains more detailed information about reaction mechanisms and intermediate steps. In the future there will be a search function for mechanisms, and the ability to export to formats like SBML (SBML currently cannot handle the hierarchy of reactions used in mechanisms). Reaction mechanisms can also be used to represent signalling reactions (e.g. protein-ligand binding); this will be implemented in future.

Personal Comments: A very nice tour and explanation of how to use SABIO-RK. It's good to see a data model in a talk, too.

Tuesday Session 2
http://friendfeed.com/rooms/biosysbio
http://conferences.theiet.org/biosysbio
