UKON 2016 Short Talks IV

These are my notes for the fourth session of talks at the UK Ontology Network Meeting on 14 April, 2016.


Mining informative OWL axioms with DL-Miner
Viachaslau Sazonau, Uli Sattler

A reminder: the TBox contains OWL class and property axioms, and the ABox contains individuals with their labels and relations. A common question is “What is missing in my TBox?” The ABox might provide hints. A domain expert could scan the ontology and find the additional axioms “manually”; DL-Miner instead automatically scans the ABox and generates hypotheses for the TBox.

If a hypothesis is correct, you add it to the TBox. If it is incorrect, you then check the ABox, as there might be something wrong there. An alternative would be to go outside the ontology (to the laboratory, for example) to see if there is another reason why the hypothesis has been suggested.
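
The general idea of generating TBox hypotheses from ABox data can be illustrated with a toy sketch (this is purely illustrative and not DL-Miner's actual algorithm; the data and the single hypothesis pattern, "C SubClassOf p some Thing", are my own invented example):

```python
# Toy ABox: class assertions per individual, and property assertions.
# All names here are hypothetical example data, not from the talk.
class_assertions = {
    "rex": {"Dog"},
    "felix": {"Cat"},
    "bella": {"Dog"},
}
property_assertions = {
    ("rex", "hasOwner"): "alice",
    ("bella", "hasOwner"): "bob",
}

def mine_hypotheses(class_assertions, property_assertions):
    """For each class C and property p, hypothesise 'C SubClassOf p some Thing'
    whenever every known instance of C has an asserted p-filler."""
    classes = set().union(*class_assertions.values())
    properties = {p for (_, p) in property_assertions}
    hypotheses = []
    for c in sorted(classes):
        instances = [i for i, cs in class_assertions.items() if c in cs]
        for p in sorted(properties):
            if instances and all((i, p) in property_assertions for i in instances):
                hypotheses.append(f"{c} SubClassOf {p} some Thing")
    return hypotheses

print(mine_hypotheses(class_assertions, property_assertions))
# → ['Dog SubClassOf hasOwner some Thing']
```

Here every Dog in the ABox has an owner, so the sketch suggests that Dogs have owners; a domain expert would still have to check each hypothesis before adding it to the TBox.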

Justification and Reasoner Verification
Michael Lee, Bijan Parsia, Uli Sattler

Reasoners are vital, and it is essential that they are correct, as they perform tasks we could not do ourselves. Disagreements between reasoners have occurred before (at ORE, the OWL Reasoner Evaluation); when two reasoners disagree, at least one must have made an error. The authors want to find such disagreements and resolve them, evaluating each justification for a disagreement either with a human or with a reasoner.

They looked at FaCT++ and HermiT: one reasoner makes a statement, and the other states whether or not it disagrees. If they cannot reach a decision, the case goes to a human. They did this with four reasoners (adding Pellet and JFact) across 190 ontologies: 181 agreed on classification, and 9 produced disagreements, resulting in 1,622 justifications for those disagreements. The errors they found involved datatypes and missing asserted axioms.

When you do ontology engineering, make sure you use more than one reasoner. That said, reasoners are generally stable, with a 95% level of agreement. In future, it would be worth building a service where you can submit your ontologies to be compared across reasoners.
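
The cross-checking step can be sketched minimally: treat each reasoner's classification as the set of subsumptions it entails, and flag any entailment reported by exactly one reasoner. (The data here is hypothetical; a real comparison would run actual reasoners such as HermiT and FaCT++, e.g. via the OWL API.)

```python
# Each "classification" is the set of (subclass, superclass) pairs a
# reasoner entails. These two result sets are invented example data.
reasoner_a = {("Dog", "Animal"), ("Cat", "Animal"), ("Dog", "Pet")}
reasoner_b = {("Dog", "Animal"), ("Cat", "Animal")}

def disagreements(a, b):
    """Return entailments reported by exactly one of the two reasoners.
    Each such entailment's justification would then be checked by a
    third reasoner, or by a human if the reasoners cannot decide."""
    return a ^ b  # symmetric difference of the two classification sets

for sub, sup in sorted(disagreements(reasoner_a, reasoner_b)):
    print(f"Disagreement: {sub} SubClassOf {sup}")
# → Disagreement: Dog SubClassOf Pet
```

A service for comparing reasoners, as suggested above, would essentially automate this loop over submitted ontologies.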

Antipattern Comprehension: An Empirical Evaluation
Tie Hou, Peter Chapman, Andrew Blake

Comprehension of justifications is known to be difficult even for experienced ontologists; even with reasoners, understanding is hard. The authors are trying to make this easier with visualization. Most visualization tools show only the hierarchical structure of an ontology; however, incoherence in an ontology can arise from the interaction between concepts and properties. They therefore use concept diagrams, which can be viewed individually or merged.

Does visualization make it easier to examine incoherence? A set of antipattern categorizations was extracted from the online TONES ontology repository, focusing only on the identification of logical contradictions. Participants using Protégé statements did not perform any worse than those using diagrams. They want to extend the study to help debug ontologies. The study was performed with students who had no knowledge of ontologies, and they would like to repeat it with experts.

Please note that this post is merely my notes on the presentation. I may have made mistakes: these notes are not guaranteed to be correct. Unless explicitly stated, they represent neither my opinions nor the opinions of my employers. Any errors you can assume to be mine and not the speaker’s. I’m happy to correct any errors you may spot – just let me know!

