Archive for the ‘Semantics and Ontologies’ Category

Attribution vs Citation: Do you know the difference?

July 10, 2009

This is a cross-posted, two-author item, available from both my blog and Frank Gibson’s (his post).

Often the words “attribution” and “citation” are used interchangeably. However, in the context of ensuring your work gets the referencing it deserves when others make use of it, it is important that the differences between these two concepts are clear. This article outlines the differences between attribution and citation, and suggests that what most scientists are interested in is not attribution, which can be ensured via licensing restrictions, but instead citation, which is a much tougher nut to crack.

From xkcd, at http://xkcd.com/285/

At ISMB last week, there were a number of conversations about the difference between attribution and citation. The topic came up again yesterday in a conversation between the authors of this post. It is an important distinction, and one this post explores.

First, some definitions for attribution and citation. These are not the only definitions possible, but for the purposes of this discussion, please keep these in mind.

Attribution: Acknowledgement of the use of someone else’s information, data, or other work. Crucially, while Wikipedia has a fairly straightforward definition of citation, its attribution page does not mention even the common ways in which attribution should be implemented (see the Wikipedia attribution page).

Citation: When you publish a paper that makes use of someone else’s information (data, ontology, etc.), you include in that paper a reference to the work of that other person or group. Wikipedia states that it is a “reference to a published or unpublished source” whose prime purpose is of “intellectual honesty”.

Distinguishing between attribution and citation.
You can imagine citation as a specific type of attribution, but attribution itself can be performed in any number of ways. For scientists, citation is much more useful to their careers, as a result of the publish-or-perish environment.

So, what could attribution consist of? First, let’s take as an example the re-use of someone else’s ontology, or of specific sub-parts or classes of that ontology. Each class in an ontology is identified by a URI. Is importing that URI enough? With just a URI, is it clear where you got the class from? If it’s not enough, where do you put the reference or statement that you are re-using other classes: within the overall metadata of your own ontology? Alternatively, when attributing data, is a reference to the originating paper, or to the URL from which you downloaded the data, enough? Where do you put that reference: within the metadata of your own document? As a citation? How much is enough attribution?

These questions cannot easily be answered.

A common-sense answer to the question of properly fulfilling attribution requirements is, at a minimum, to cite the originator’s information in your paper and to include the relevant URL(s)/URI(s) in your metadata. But here we get to the crux of the matter: we’ve now stated that a useful way to ensure attribution is to cite the other person. If you think carefully, though, what matters more for your impact assessments and your work? It’s the citation itself. Sure, acknowledgement via extra referencing in the metadata of the person using your information is great, but what you really need is a citation in their work. If we aren’t careful, we can easily conflate citation in papers with the marking-up of an imported, licensed piece of information: the former is what we are often scored on and what we would really like, while the latter is the only thing a license enforces. Licensing with attribution requirements is not citation; you can make use of a licensed ontology without ever being required to cite it in a paper.

Attribution: the legal entity.

Important point: It’s easy to use a license such as the CC-BY, thinking that you’ll ensure citation, when in fact all you’re doing is ensuring attribution.

What are the implications of attribution? It can quickly get out of control and become difficult to manage.
If an ontology or data file requires attribution, then anyone who imports information from it (such as a class from an ontology) into their own document must attribute the original. Continuing the ontology example, if 20-30 ontologies are being used for a single project (which is not inconceivable in the coming years), maintaining attribution for them all could become very difficult.

Important point: While licenses such as the CC-BY allow the attribution to be performed “in the manner specified by the author or licensor”, this could lead to 30 different licensors requiring potentially 30 different methods of attribution, and attribution stacking isn’t pretty.

Citation: the gentlemen’s club.

Can citation be assured? No. Well, maybe.
You can imagine citation as a gentlemen’s club, as propriety dictates that you should cite another’s work that you use, but there is no legal requirement to do so. Indeed, many believe that citation should not be enforced anyway. In contrast, attribution as required by licenses is a legal statement. However, let’s revisit the clause in CC-BY that states the author/licensor can specify the manner in which the attribution is given.

Important point: Could you use a license such as CC-BY, and state that the attribution must come in the form of, at a minimum, citation in the paper which describes the work being performed by the licensee?

Bottom line: which one is more important to you, as a scientist? Depends on the context.
This is difficult to answer; there aren’t many guidelines available for us to analyse. The OBO Foundry does have a set of principles, the first of which states that “their [the ontology(ies) and their classes] original source is always credited and that after any external alterations, they must never be redistributed under the same name or with the same identifiers”. However, how this credit should be given is unclear, as described in various blog posts (Allyson, Frank, Melanie). As a result, the following conclusions came out of the OBO Foundry workshop this summer (Monday outcomes): it is “unclear if each ontology should develop their own bespoke license or use develop ‘CC-by’; how to give attribution? Generally use own judgment, here MIREOT mechanism can help when importing external terms into an ontology, giving class level attribution” (MIREOT web page, see also OWLED 2008 paper). Therefore, while they are aware of the problem, they do not yet offer a consensus solution.

The flipside of this is that, in order to use an ontology, you would first have to write a paper citing the classes you wish to import, and only then get on with the work. If you never produce a paper, and therefore a citation, is your ontology/data illegal? Taking the example of OBI, which imports several other ontologies and is an open community of developers, would a license restriction requiring citation actually prevent the work from starting? This is probably a bit of a chicken-and-egg scenario, if it were ever to become a reality. In short, while there are some tempting possibilities, there doesn’t yet seem to be a useful solution.

In summary, it’s generally not attribution that people want (which can be licensed, even if you don’t like the layers of attribution it will require once you’re using multiple sources) but citation, which isn’t so easily licensed – yet. When deciding what sort of license to use (e.g. an open one like CC0 or an attribution-based one like CC-BY), you need to take expected usage into account. In some cases, for a leaf ontology that isn’t intended to be imported by others, perhaps CC-BY is appropriate – but you never know when your leaf will turn into something others import. Science Commons also believes that attribution is a very different beast, and shouldn’t be required when licensing data. They recently provided me with an answer on how to license ontologies that favored CC0.

So, if you really want citation and not attribution, consider an open license such as CC0 and make a gentlemanly (gentle-science-person-ly) request that if someone uses it AND publishes a paper on it, please cite it in the way you suggest. Alternatively, I’d be interested to hear if it would be possible to use an attribution-based license such as CC-BY and then require the attribution method be citation in a paper. Would this method work, and would it be polite? Your comments, please.

FriendFeed Discussion

TT47: Semantic Data Integration for Systems Biology Research (ISMB 2009)

July 2, 2009

Chris Rawlings; also speaking: Catherine Canevet and Paul Fisher

A BBSRC-funded research collaboration in Newcastle, Manchester, and Rothamsted: ONDEX and Taverna. Demo: integration and augmentation of a yeast metabolome model (Nature Biotechnology, October 2008, 26(10)). Presented: Taverna and ONDEX. In ONDEX, everything can be seen as a network. To help with this, ONDEX contains an ontology of concept classes, relation types, and additional properties. Their example is the yeast jamboree data integration. They have both specific (e.g. KEGG) and generic (e.g. tab-delimited) parsers to load in data.

When ONDEX works with Taverna, instead of using the pipeline manager you use the ONDEX web services and access ONDEX from Taverna. This means you can use Taverna to pull data into ONDEX. So, first parse the jamboree data into ONDEX and remove currency metabolites (e.g. ATP, NAD). Add publications to the graph, from which domain experts can view and manually curate the data. Next, annotate the graph using network analysis results. Then switch to Taverna and identify orphans discovered in ONDEX. Retrieve the enzymes relating to the orphans, assemble the PubMed query, and then add hits back to the ONDEX graph. Finally, have a look at the completed visualization. Use the ONDEX pipeline manager to upload data – it’s all in a GUI, which is good.

Then followed a live demo.

FriendFeed Discussion

Please note that this post is merely my notes on the presentation. They are not guaranteed to be correct, and unless explicitly stated are not my opinions. They do not reflect the opinions of my employers. Any errors you can happily assume to be mine and no-one else’s. I’m happy to correct any errors you may spot – just let me know!

TT16: Ontology Services for Semantic Applications in Healthcare and Life Sciences (ISMB 2009)

June 30, 2009

Patricia Whetzel, Outreach Coordinator for NCBO

Trish has recorded her talk as a screencast, as she wanted to do a demo and couldn’t trust the wireless – true enough! RESTful web services have been developed at the NCBO within BioPortal: http://rest.bioontology.org/bioportal (note that this is the prefix for all services; if you just go to this URL there isn’t anything visible). They chose RESTful services as they are lightweight and easy to use. The main BioPortal website is http://bioportal.bioontology.org. All information on the BioPortal site is retrieved using these web services. It can store ontologies in OWL, OBO, and Protégé frames formats.

You can search ontologies based on a number of parameters. Much help information is available via mouseover text. You can also download the ontologies that are available on BioPortal. When browsing an ontology you can see its structure, metadata, definitions, and more. There are also ontology widgets that you can put on your own site, including a jump-to feature and a term-selection widget. The latter is very useful because it allows your web app to offer term auto-completion without your having to code it yourself!

To go into the search web services a little more, search, for instance, for “protocol”. The search can be parameterized and filtered in many ways: which ontology to use, exact or non-exact matching, etc. The search function is especially important for ontology re-use: if you’re developing a new domain ontology, you want to make sure you don’t reinvent the wheel, and this is a good way to find out what’s out there. The next part of the video showed using these searches programmatically.
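As a rough sketch of what such a programmatic search might look like: only the REST prefix comes from the talk; the parameter names (`query`, `isexactmatch`, `ontologyids`) are my guesses, so check the BioPortal documentation for the real ones.

```python
from urllib.parse import urlencode

# Prefix for all NCBO BioPortal REST services, as given in the talk.
BIOPORTAL_REST = "http://rest.bioontology.org/bioportal"

def build_search_url(query, exact_match=False, ontology_ids=None):
    """Build a BioPortal search URL for a term such as 'protocol'.

    The parameter names below are illustrative assumptions only,
    not a verified API reference.
    """
    params = {"query": query, "isexactmatch": int(exact_match)}
    if ontology_ids:
        # Restrict the search to particular ontologies (hypothetical ids).
        params["ontologyids"] = ",".join(str(i) for i in ontology_ids)
    return BIOPORTAL_REST + "/search?" + urlencode(params)

print(build_search_url("protocol", exact_match=True))
```

Fetching that URL (e.g. with `urllib.request`) would then return the search results for use in your own pipeline.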

BioPortal also allows you to annotate, or add notes to, ontologies. There is also an annotation tag/term cloud in the interface, which is nice :) You may see duplicates in the tag cloud – it is designed this way, to show that more than one ontology has that term. There are also hierarchy services. You can view the parent terms of a particular term, and perform other sorts of queries that let you explore the hierarchy around a term programmatically. The web app also has a dynamic visualization of the hierarchy that you can play with.

FriendFeed Discussion

Please note that this post is merely my notes on the presentation. They are not guaranteed to be correct, and unless explicitly stated are not my opinions. They do not reflect the opinions of my employers. Any errors you can happily assume to be mine and no-one else’s. I’m happy to correct any errors you may spot – just let me know!

HL13: The Human Phenotype Ontology (ISMB 2009)

June 29, 2009

Peter Robinson

MIM started in 1966 and has been online (OMIM) for over a decade. It has been extremely difficult to use computationally in a large-scale fashion. The hierarchical structure of OMIM does not reflect that two terms are more closely related than a third. In constructing the HPO, all descriptions used at least twice (~7000) were assigned to the HPO. It now has about 9000 terms and annotations for 4813 diseases. They have a procedure which calculates phenotypic similarity between terms by finding their most-specific common ancestor.
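A minimal sketch of that similarity idea, Resnik-style: score a pair of terms by the information content (IC) of their most informative common ancestor. The tiny phenotype DAG and the IC values below are invented for illustration; the real HPO has ~9000 terms with IC derived from annotation frequencies.

```python
# Toy phenotype DAG: child -> list of parents. All terms/values invented.
PARENTS = {
    "abnormal gait": ["abnormal locomotion"],
    "ataxia": ["abnormal locomotion"],
    "abnormal locomotion": ["phenotypic abnormality"],
    "phenotypic abnormality": [],
}
IC = {  # information content: rarer (more specific) terms score higher
    "abnormal gait": 4.0,
    "ataxia": 4.2,
    "abnormal locomotion": 2.5,
    "phenotypic abnormality": 0.0,
}

def ancestors(term):
    """Return the term itself plus all of its ancestors in the DAG."""
    seen = {term}
    stack = [term]
    while stack:
        for parent in PARENTS[stack.pop()]:
            if parent not in seen:
                seen.add(parent)
                stack.append(parent)
    return seen

def similarity(t1, t2):
    """IC of the most informative (most-specific) common ancestor."""
    common = ancestors(t1) & ancestors(t2)
    return max(IC[t] for t in common)

print(similarity("abnormal gait", "ataxia"))  # IC of 'abnormal locomotion'
```

Two sibling phenotypes score the IC of their shared parent, while unrelated phenotypes fall back to the uninformative root.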

You can visualize the human phenome using the HPO. They also have a query system that allows physicians to query what’s in the ontology. There is also the Phenomizer, which is “next-generation diagnostics”: you can get a prioritized list of candidate diagnoses. To validate the approach, they took 44 syndromes, went to the literature to look at the frequency of their features, and then generated patients at random using the features of each disease. For each simulated patient, queries were generated using HPO terms. The ranks of the diseases returned by the Phenomizer were compared to the original diagnosis. Comparisons were performed with phenotypic noise. In the ideal situation (no noise and no imprecision), their approach has some advantage. When noise or imprecision is added, the p-value stays OK but the other measures drop. They also use this information to derive disease-gene families.

HPO and PATO are talking to each other. HPO is being used as a link between cellular networks and HP. They also want you to annotate your data with HPO. If you’re interested, find out more about the HPO Consortium.

FriendFeed Discussion

Please note that this post is merely my notes on the presentation. They are not guaranteed to be correct, and unless explicitly stated are not my opinions. They do not reflect the opinions of my employers. Any errors you can happily assume to be mine and no-one else’s. I’m happy to correct any errors you may spot – just let me know!

PTO6: Ontology Quality Assurance Through Analysis of Term Transformations (ISMB 2009)

June 29, 2009

Karin Verspoor

This work came out of a meeting about OBO quality assurance in GO, though the work described here is applicable to any controlled vocabulary. The key quality concern is univocality – a shared interpretation of the nature of reality – a term originally coined by Spinoza in 1677. David Hill intended it to mean something slightly different: consistency of expression of concepts within an ontology. This facilitates human usability, and computational tools can exploit the regularity.

The aim is to identify cases where there are violations of univocality: two semantically similar terms with different structure in their term labels. GO is generally very high quality, so computational tools are needed to identify the inconsistencies. They chose a simplistic approach of term transformation and clustering, as it’s good to start with the simplest things first. The first step is abstraction: substitution of embedded GO and ChEBI terms with the variables GTERM and CTERM, respectively. Then there is stopword removal (high-frequency words like “the”, “of”, “via”). Next is alphabetic reordering (to deal with word-order variation in the terms). They tried all the different combinations of transformation ordering, to see how they differed.
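A toy sketch of that transformation-and-clustering pipeline. The embedded-term lists here are tiny stand-ins (the real approach substitutes any embedded GO or ChEBI term label, which requires the full vocabularies), and the ordering shown is just one of the combinations they tried.

```python
import re
from collections import defaultdict

# Stand-in vocabularies; the real pipeline uses the full GO and ChEBI.
GO_TERMS = ["cell division"]
CHEBI_TERMS = ["glucose"]
STOPWORDS = {"the", "of", "via", "in", "to", "by"}

def normalize(label):
    """Abstraction, then stopword removal, then alphabetic reordering."""
    for t in GO_TERMS:                      # embedded GO terms -> GTERM
        label = label.replace(t, "GTERM")
    for t in CHEBI_TERMS:                   # embedded ChEBI terms -> CTERM
        label = label.replace(t, "CTERM")
    words = [w for w in re.findall(r"\w+", label)
             if w.lower() not in STOPWORDS]
    return tuple(sorted(words))             # word order no longer matters

def candidate_clusters(labels):
    """Group labels sharing a normalized form; multi-member groups are
    candidate univocality violations for manual review."""
    groups = defaultdict(list)
    for label in labels:
        groups[normalize(label)].append(label)
    return [g for g in groups.values() if len(g) > 1]

print(candidate_clusters(["regulation of cell division",
                          "cell division regulation",
                          "transport of glucose"]))
```

The two "regulation" variants collapse to the same normalized key and surface as one candidate cluster, while the glucose term stays alone.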

20% of abstraction was due to CTERMs, and 30% due to GTERMs. The distribution of cluster sizes before and after transformation changed radically: the maximum cluster size before transformation was 29, and afterwards it was ~3000. In the end, they found 237 clusters that may contain a univocality violation, by looking for terms that were in different clusters after abstraction but merged together after one of the other transformations. A further 190 clusters had to be manually assessed – still, this reduced the number of things that had to be looked at by hand. They discovered 67 true-positive violations (35%) of univocality, and already have ideas for improving this step.

The 67 clusters constitute 317 GO terms. 45% of the true-positive inconsistencies were of the form {Y of X} | {Y in X}. A further 16% of true positives had determiners (e.g. “the”) in one version but not the other. A smaller number of true positives dealt with inverses, etc. 50% of the false positives were due to the semantic import of a stopword (some stopwords actually carry meaning and shouldn’t have been removed): by removing it, they removed the real difference between the two terms.

FriendFeed Discussion

Please note that this post is merely my notes on the presentation. They are not guaranteed to be correct, and unless explicitly stated are not my opinions. They do not reflect the opinions of my employers. Any errors you can happily assume to be mine and no-one else’s. I’m happy to correct any errors you may spot – just let me know!

PTO4: Alignment of the UMLS Semantic Network with BioTop Methodology and Assessment (ISMB 2009)

June 29, 2009

Stefan Schulz

Ontology alignment is the linking of two ontologies by detecting semantic correspondences in their representational units (RUs), e.g. classes. It is mainly done via equivalence and subsumption. BioTop is a recent development created to provide formal definitions of upper-level types and relations for the biomedical domain. It is compatible with both BFO and DOLCE lite, and links to OBO ontologies. The UMLS Semantic Network (SN) is an upper-level semantic categorization framework for all concepts of the UMLS Metathesaurus. It has been mainly unchanged over the last 20 years: a tree of 135 semantic types.

If you compare the two, the main difference is in the semantics, as the BioTop semantics are explicit and use Description Logics (DL), which means you’re also subscribing to the open-world assumption (OWA). The semantics of UMLS-SN is more implicit, frame-like and may be closed world. It also has the possibility to block relation inheritance, which isn’t possible with DL.

The methodology is, first, to provide DL semantics for the UMLS SN, and second, to build the bridge between BioTop and the UMLS SN. How is the first step done? For semantic types: types extend to classes of individuals; subsumption hierarchies are assumed to be is_a hierarchies; and there are no explicit disjoint partitions. For semantic relations: reified as classes, NOT represented as OWL object properties. For triples: transformed into OWL classes with domain and range restrictions. Why convert relations to classes? They didn’t want to inflate the number of BioTop relations, and there are other structural reasons. If you reify a relation, you can place complex restrictions on it; it also means you can formally represent UMLS SN tags such as “defined not inherited” in a more rigorous way.

Mapping was fully manual, using Protégé 4, with consistency checking by FaCT++ and Pellet supported by the explanation plugin (Horridge, ISWC 2008) – they spent most of their time fighting against inconsistent TBoxes. It was an iterative process. Assessment came next. Using the SN alone, there is very low agreement with expert ratings. Using SN+BioTop there were very few rejections (only 3), and it agreed with all expert ratings. Possible reasons could relate to the DL’s OWA; for the false positives, the expert rating was done on NE while the system judgments were done on something else. There were inconsistent categorizations of UMLS SN objects which exposed hidden ambiguities (e.g. that Hospital was both a building and an organisation).

Allyson’s questions: Why decide to create BioTop and not use BFO or DOLCE lite? It’s not that I would necessarily suggest that these be used, I am just curious. Also, subsumption hierarchies are assumed to be is_a hierarchies, but is that a safe assumption in UMLS SN? For instance, in older versions of GO this would have been a problem (some things marked as subsumption were not in fact is_a, though I am pretty sure GO has fixed all of this now).

FriendFeed Discussion

Please note that this post is merely my notes on the presentation. They are not guaranteed to be correct, and unless explicitly stated are not my opinions. They do not reflect the opinions of my employers. Any errors you can happily assume to be mine and no-one else’s. I’m happy to correct any errors you may spot – just let me know!

PT02: From Disease Ontology (DO) to Disease Ontology Lite (DOLite)

June 29, 2009

Warren A. Kibbe

Allyson’s note: I missed the beginning of this talk because I was participating in the press conference. Apologies.

Integrating the clustering results is followed by a final curation of DOLite terms, where a domain expert reviews the merged clusters. In summary, DOLite is a controlled vocabulary, whereas the DO is an ontology. The purpose of this was to facilitate functional analysis based on a gene list. FunDO is a website for exploring genes using Functional Disease Ontology annotations. You take the complete DO, put in a typical gene list from a microarray study, and get a network view of clustered genes. The same query on a gene list gets better clustering with DOLite than with DO (where better == more distinct clusters, and a greater number of clusters in the example we were shown – 2 clusters rather than 1).

You can also use GeneRIFs as a source (1000 genes with GeneRIF annotations). You get slightly different answers depending on how you develop/annotate your gene list. Poorly-annotated genes, or a large percentage of genes with little or no experimental literature, will have few GeneRIFs.

Grouping ontology terms based on gene-to-ontology mappings provides an information-content (IC) method for creating “Slims” from any type of ontology. They’ll do this with GO itself and see what their version of GOSlim looks like. Functional analysis based on DOLite provides much more concise and biologically-relevant results.
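A minimal sketch of what an IC-based slim selection might look like. The annotation counts and the IC thresholds below are entirely invented for illustration; the real method works from actual gene-to-ontology mappings.

```python
import math

# Toy annotation counts: term -> number of genes annotated to it (counts
# assumed propagated up the hierarchy). All numbers are invented.
ANNOTATIONS = {
    "disease": 1000,
    "cancer": 400,
    "lung cancer": 40,
    "small cell lung carcinoma": 4,
}
TOTAL_GENES = 1000

def information_content(term):
    """IC = -log p, where p is the fraction of genes annotated to the term."""
    return -math.log(ANNOTATIONS[term] / TOTAL_GENES)

def slim_terms(lo=0.5, hi=4.0):
    """Keep mid-specificity terms: not so general as to be uninformative,
    not so specific that few genes map to them. Thresholds are arbitrary."""
    return [t for t in ANNOTATIONS if lo <= information_content(t) <= hi]

print(slim_terms())
```

The root term ("disease", annotated to everything) and the very rare leaf fall outside the band, leaving the mid-level terms as the slim – which is roughly the shape a GOSlim-style cut aims for.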

FriendFeed Discussion

Please note that this post is merely my notes on the presentation. They are not guaranteed to be correct, and unless explicitly stated are not my opinions. They do not reflect the opinions of my employers. Any errors you can happily assume to be mine and no-one else’s. I’m happy to correct any errors you may spot – just let me know!
