

Gold Sponsors:

Silver Sponsors: Armed Forces, Aker Solutions

Semantic Days 2013 in Stavanger

Bios and Abstracts

Sonja Chirico Indrebø

Keynote speaker: Sonja Chirico Indrebø - Chief Information Officer – Senior Vice President IT, Statoil

Sonja Chirico Indrebø graduated from the University of Surrey, UK, in 1992 with a BEng degree in Electronic and Electrical Engineering. After graduation she worked for six years as an instrument and telecommunications engineer within the Oil and Gas business of Aker Engineering (now Aker Solutions).

Sonja joined Statoil in 1998 as a senior engineer in offshore telecommunications. Since then she has held multiple technical and leadership positions, mainly within the company's IT business. She has been responsible for large-scale projects as well as operations of the company's global IT services, covering both applications and infrastructure. Since the beginning of 2011 she has been the company's CIO.

Jim Maltby

Keynote speaker: Dr. James Maltby - YarcData

Dr. James Maltby is a Solution Architect for YarcData Inc., and specializes in mapping scientific and business applications to new computer architectures. He has an academic background in physics and engineering, specializing in radiation transport. He has worked for Cray since 2000, developing software for the massively multithreaded Cray XMT (and its MTA-1 predecessor) as well as other Cray systems. He also led the Bioinformatics practice at Cray for several years, using HPC to solve Life Science problems. His most recent project involved developing a highly parallel in-memory Semantic Database for the XMT architecture, now released as uRiKA by YarcData.

Complexity and Scalability in Semantic Graph Analysis

Semantic approaches offer a more natural way to combine disparate sources of data and to search for complex patterns. However, limitations of common computer hardware architectures can place an upper limit on the scalability of graph stores. Previous data management architectures have been very successful, but trade-offs related to their intended purposes have created built-in limitations. Classic relational databases offer storage efficiency, but performance can suffer as more data sources are added. Denormalized data warehouses improve performance but at a cost of complexity and increased maintenance. New storage paradigms such as Hadoop offer lower cost and higher parallel performance, but their highly partitioned nature limits the type of data that may be analyzed. This talk will present a new in-memory semantic graph database built over a unique multithreaded supercomputer, based on RDF and SPARQL, and describe its performance. Several real-world uses will be described, drawn from life sciences, financial services and cybersecurity.
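The graph-pattern style of querying that the abstract contrasts with relational and Hadoop-style stores can be illustrated with a minimal in-memory triple store. The data, predicate names, and matching code below are invented for illustration only; the system described in the talk evaluates SPARQL over RDF at vastly larger scale on specialized multithreaded hardware.

```python
# Minimal sketch of graph-pattern matching over triples (hypothetical data).
# A triple is a (subject, predicate, object) tuple; variables start with '?'.

triples = {
    ("gene:BRCA1", "interactsWith", "gene:TP53"),
    ("gene:TP53", "associatedWith", "disease:cancer"),
    ("gene:BRCA1", "associatedWith", "disease:cancer"),
}

def match(pattern):
    """Return one variable-binding dict per triple matching the pattern."""
    results = []
    for triple in triples:
        binding = {}
        ok = True
        for part, value in zip(pattern, triple):
            if part.startswith("?"):
                binding[part] = value
            elif part != value:
                ok = False
                break
        if ok:
            results.append(binding)
    return results

# "Which genes interact with something associated with cancer?" --
# a two-pattern join, as a SPARQL engine would perform it.
hits = [a["?g"]
        for a in match(("?g", "interactsWith", "?x"))
        for _ in match((a["?x"], "associatedWith", "disease:cancer"))]
print(hits)  # ['gene:BRCA1']
```

Joins like this across disparate sources are exactly where partitioned architectures struggle, since the second pattern may touch data on any node.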

Ian Horrocks

Keynote speaker: Prof. Ian Horrocks - Oxford University

Ian Horrocks is a Professor in the Oxford University Department of Computer Science and is a Fellow of Oriel College Oxford. His research interests include logic-based knowledge representation and reasoning and the semantic web, with a particular focus on ontology languages and applications. He was one of the key authors of the OIL, DAML+OIL, and OWL ontology language standards, chaired the W3C working group that standardised OWL 2, and developed many of the algorithms, optimisation techniques and reasoning systems that underpin OWL applications. He is a Fellow of the Royal Society, a member of Academia Europaea, an ECCAI Fellow, and is amongst the most highly cited authors in Computer Science.

Ontology Based Data Access: Theory and Practice

So called "Semantic Technologies" are rapidly becoming mainstream technologies, with RDF and OWL now being deployed in diverse application domains, and with major technology vendors starting to augment their existing systems accordingly. In this talk I will review the evolution of Semantic Technologies to date, and show how research ideas from logic based knowledge representation developed into a mainstream technology. I will then go on to examine the scalability challenges arising from deployment in large scale applications, and discuss ongoing research aimed at addressing them, focusing in particular on a layered integration of ontologies and databases known as Ontology Based Data Access.
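One core idea behind Ontology Based Data Access is query rewriting: a query phrased in ontology terms is expanded, using the ontology's axioms, into queries the underlying database can answer directly, so the data layer itself never reasons. The tiny class hierarchy and instance data below are invented to illustrate the idea.

```python
# Sketch of OBDA-style query rewriting (hypothetical ontology and data).
# A query for a class is expanded to that class plus all of its subclasses.

subclass_of = {              # direct subclass axioms: child -> parent
    "Well": "Asset",
    "Pipeline": "Asset",
    "ExplorationWell": "Well",
}

def rewrite(query_class):
    """All classes whose instances answer a query for query_class."""
    expanded = {query_class}
    changed = True
    while changed:            # take the transitive closure downwards
        changed = False
        for child, parent in subclass_of.items():
            if parent in expanded and child not in expanded:
                expanded.add(child)
                changed = True
    return expanded

data = {"w1": "ExplorationWell", "p1": "Pipeline", "x1": "Facility"}

def answer(query_class):
    classes = rewrite(query_class)
    return sorted(i for i, c in data.items() if c in classes)

print(answer("Well"))   # ['w1']  -- the ExplorationWell instance is included
print(answer("Asset"))  # ['p1', 'w1']
```

Real OBDA systems rewrite full conjunctive queries against expressive ontology languages such as OWL 2 QL, but the principle is the same: reasoning is compiled into the query rather than applied to the data.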

Stephen Brobst

Stephen Brobst - Teradata Corporation

Stephen Brobst is the Chief Technology Officer for Teradata Corporation. Stephen performed his graduate work in Computer Science at the Massachusetts Institute of Technology where his Masters and PhD research focused on high-performance parallel processing. He also completed an MBA with joint course and thesis work at the Harvard Business School and the MIT Sloan School of Management.

Stephen has been on the faculty of The Data Warehousing Institute since 1996. He was also appointed to President Obama's Council of Advisors on Science and Technology (PCAST), in the working group on Networking and Information Technology Research and Development (NITRD).

Semantic Data Modeling: The Key to Re-Usable Data

There is significant industry confusion in understanding the differences between logical, physical, and semantic data modeling. This workshop will provide a clear understanding of the role for semantic data modeling when deploying an enterprise data warehouse. An approach for development of a semantic data model will be described. In addition, we will describe the VIP methodology for realization of a semantic data model using an access layer on top of an enterprise data warehouse with an optimized combination of views, indexes, and physicalization.

  • Learn about the differences between logical, physical, and semantic data models.
  • Learn how use of semantic data modeling can reduce data redundancy in an analytic environment.
  • Learn how to make good design tradeoffs in regard to performance, maintenance, and storage costs in the realization of the access layer aligned to your semantic data model.
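The views-based access layer the workshop describes can be sketched as a mapping from semantic-model concepts to physical warehouse columns, from which view definitions are generated. The table and column names below are hypothetical, and the VIP methodology itself is not detailed here; this only illustrates the general pattern of a semantic layer realized as views over a warehouse.

```python
# Sketch: deriving an access-layer view from a semantic-model mapping.
# The semantic model names business concepts; the mapping records which
# physical warehouse columns realize each attribute (all names invented).

mapping = {
    "Customer": {                      # semantic entity
        "table": "dw.customer_dim",    # physical table behind it
        "attributes": {                # semantic name -> physical column
            "customer_id": "cust_key",
            "name": "cust_full_name",
            "segment": "mkt_segment_cd",
        },
    }
}

def view_ddl(entity):
    """Generate a CREATE VIEW statement exposing semantic names."""
    m = mapping[entity]
    cols = ",\n  ".join(f"{phys} AS {sem}"
                        for sem, phys in m["attributes"].items())
    return (f"CREATE VIEW access.{entity.lower()} AS\n"
            f"SELECT\n  {cols}\nFROM {m['table']};")

print(view_ddl("Customer"))
```

Because consumers query only the semantic names, the physical model can be reorganized (indexed, physicalized, denormalized) without breaking applications.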

Maurizio Lenzerini

Prof. Maurizio Lenzerini - Università di Roma La Sapienza

Maurizio Lenzerini is a Full Professor in Computer Science at Università di Roma La Sapienza. He conducts research in data management, knowledge representation and reasoning, information integration, and service-oriented computing. He is the author of more than 250 publications in international conferences and journals, and has been an invited speaker at many international conferences. He was the Chair of the Executive Committee of the ACM Symposium on Principles of Database Systems in 2010-2012, and is currently a member of that committee. He is a Fellow of the European Coordinating Committee for Artificial Intelligence (ECCAI), a Fellow of the Association for Computing Machinery (ACM), a member of the Academia Europaea - The Academy of Europe, and the recipient of several research awards, including an IBM University Shared Research Award and an IBM Faculty Award.

Ontology-based data management

The need to effectively manage the data sources of an organization, which are often autonomous, distributed, and heterogeneous, and to devise tools for deriving useful information and knowledge from them, is widely recognized as one of the challenging issues in modern information systems. Ontology-based data management aims at accessing, using, and maintaining data by means of an ontology, i.e., a conceptual representation of the domain of interest in the underlying information system. This new paradigm provides several interesting features, many of which have already proven effective in managing complex information systems and addressing the above-mentioned challenges. In this talk we first provide an introduction to ontology-based data management, illustrating the main ideas and techniques for using an ontology to access the data layer of an information system, and then discuss several important issues that are still the subject of extensive investigation by the Knowledge Representation and Reasoning community.

Josef Noll

Prof. Josef Noll - University of Oslo

Josef Noll is professor at the University of Oslo in the area of Mobile Services. His group, ConnectedLife, concentrates on mobile-based trust and authentication, personalised and context-aware service provisioning, and service continuation in 5G systems. He is co-founder and steering board member of the Center for Wireless Innovation Norway and Mobile Monday Norway.

In the area of Internet of Things he is project leader of the JU Artemis pSHIELD project. Previously he was Senior Advisor at Telenor R&I in the Products and Markets group, Programme manager for the UMTS++ Mobile Broadband Access programme, and project leader of several European projects.

Prof. Noll received the IARIA fellow award in 2008. He is a reviewer of several EU FP6/FP7/FP8 projects, a referee for the European Research Council (ERC) Starting Grants, and an evaluator for the EU's framework programme FP7, the Joint Undertaking (JU) Artemis, the Dutch IOP, the Austrian FIT, the Qatar National Research Fund and the Cyprus research programmes.

Measurable Security for the Internet of Things

Business intelligence is moving towards the real-time handling of information, coming from both internal and external business processes. Inclusion of sensor data in automatic process control has been a topic in industry for quite a while, but was mainly limited to closed systems. Trends in collaborative industries like oil & gas show that sensor data might contribute to automatic processes in different domains, fostered by the vision of the Internet of Things.

The presentation will address the challenges of communication across domains, focussing on the challenges of new infrastructures, new ways of communication and new devices. Two main trends are visible: (i) wireless sensors contributing to automated processes and (ii) the move of control into mobile devices. The example of "bring your own device" (BYOD) is used to exemplify the trend of devices accessing processes and information in your enterprise. In the upcoming years not only phones, tablets and computers will demand access; sensors and embedded systems will also deliver and request information. Sensors will contribute to automated processes, and thus require knowledge management.

Classic threats such as insufficient authentication and loss of devices are commonly addressed through an approach of integrating, managing and securing mobile devices. Such a short-sighted approach, as suggested by leading IT companies, is doomed to fail. A paradigm shift in handling security is required, addressing the need to secure information instead of securing infrastructure. The paradigm shift includes the need for measurable security, and calls for a metrics-based approach to the quantitative assessment of both the potential attack scenario and the security measures of the infrastructure.

Our suggested approach is based on the semantic description of a potential attack scenario, the security-related aspects of the sensors/systems involved, and semantic policies. The outcome is a methodology for measurable security that provides composable security for sensor systems. The approach is currently applied in the areas of Railway Security and UAV operation through the European Artemis project nSHIELD.
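The metrics-based idea the talk describes, quantitative scores for protection levels that compose across a system, can be illustrated with a toy aggregation rule. The scores and the min/complement-product composition below are purely illustrative assumptions, not the actual nSHIELD metrics.

```python
# Toy sketch of composable security metrics (all numbers invented).
# Each component reports a protection score in [0, 1]. A serial chain is
# only as strong as its weakest link; redundant replicas reinforce each
# other, since an attacker must defeat every one of them.

def chain(scores):
    """Serial composition: the weakest component dominates."""
    return min(scores)

def redundant(scores):
    """Parallel composition: combined failure probability shrinks."""
    fail = 1.0
    for s in scores:
        fail *= (1.0 - s)
    return 1.0 - fail

# Hypothetical component scores: a weak sensor, deployed twice for
# redundancy, behind a gateway and a backend.
sensor, gateway, backend = 0.6, 0.9, 0.95
system = chain([redundant([sensor, sensor]), gateway, backend])
print(round(system, 3))  # 0.84
```

A quantitative model like this is what makes security "measurable": the effect of adding a redundant sensor or hardening the gateway becomes a number that can be compared against the assessed attack potential.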

Graham Moore

Graham Moore - BrightstarDB Ltd.

After graduating top of his class with a First Class Honours degree in Computer Science from Southampton University, Graham worked at leading information, knowledge and content management companies (DPSL Ltd, STEP Gmbh, Empolis, Ontopia) before forming Networked Planet in 2004.

Graham has led teams of developers building semantic web, knowledge and content management products for over 10 years. His roles have included the development and productisation of leading-edge technology as well as the evangelism and partner network building required to take these products to market.

Graham has also been heavily involved in the communities around information and knowledge management technology. He spent 10 years on the ISO 13250 Topic Maps committee, editing the Topic Maps and Topic Maps Constraint Language standards, and is the author of the SDShare information integration protocol.

Introduction to SDShare

SDShare is a stream-based protocol for exposing and consuming data in a standardised way. It uses Atom and RDF as building blocks to allow servers to expose collections of data, and clients to process and update local data stores with copies of the data.

RDF provides a very flexible data model that allows data of any form to be represented. The existing RDF family of standards defines how to serialise the model, how to query the model (SPARQL) and how to update the model (SPARQL Update). However, there are no protocols that define how a machine can both expose and consume RDF in a system of interconnected nodes. A classic use case for this requirement is where one RDF server contains a collection of master data and another RDF system wishes to use that data. The master data will change over time. This protocol defines how the master data server can publish the changes to the resources it manages, and how a client can consume them to stay in sync.
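The publish/consume cycle just described can be sketched as a client polling an Atom feed of changed resources and picking out the entries newer than its last synchronisation point. The feed below is a hand-written stand-in rather than actual SDShare output, and a real client would additionally dereference each entry to fetch the changed RDF fragment and apply it to its local store.

```python
# Sketch of an SDShare-style sync step: read an Atom change feed and
# select resources updated since the last sync (feed content is invented).
import xml.etree.ElementTree as ET

ATOM = "{http://www.w3.org/2005/Atom}"

feed_xml = """<feed xmlns="http://www.w3.org/2005/Atom">
  <entry><id>urn:co:1</id><updated>2013-01-10T08:00:00Z</updated></entry>
  <entry><id>urn:co:2</id><updated>2013-02-01T09:30:00Z</updated></entry>
</feed>"""

def changed_since(feed, last_sync):
    """IDs of entries updated after last_sync.

    ISO-8601 UTC timestamps compare correctly as plain strings."""
    root = ET.fromstring(feed)
    return [entry.findtext(ATOM + "id")
            for entry in root.iter(ATOM + "entry")
            if entry.findtext(ATOM + "updated") > last_sync]

# A client that last synced mid-January only needs the second change.
print(changed_since(feed_xml, "2013-01-15T00:00:00Z"))  # ['urn:co:2']
```

Because the feed is ordinary Atom over HTTP, any system that can serve XML can participate, which is what lets non-RDF sources join the network by transforming on request.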

While the standard uses RDF for interchange, source systems need not support RDF natively; instead they can transform data into RDF on request. This makes SDShare a solution to the general problem of enterprise information integration, and it compares favourably with data integration via Web Services.

The protocol was originally developed in response to an eGovernment challenge of syndicating asset metadata between government agencies. Since then it has been refined and published as an open standard, and it has been used in many government and commercial applications. The main business uses for SDShare include enterprise search across multiple sources, master data management, data quality management, content classification, and unified data merging and querying.

This presentation will describe the technical aspects of the standard as well as provide a high-level overview of the most common use cases.

Torulf Mollestad

Torulf Mollestad - SAS Institute

Torulf Mollestad, Ph.D., is a Senior Consultant in Advanced Analytics for the SAS Information Management team in the Nordic region, working on analytics, data mining and text mining projects across numerous industries and application areas, such as churn prediction, risk modeling, fraud detection, anti-money laundering and industrial equipment failure prediction.

He was an Associate Professor at NTNU (Norges Teknisk-Naturvitenskapelige Universitet) in Trondheim, Norway, for 8 years and has been with SAS Institute for 12 years.

Text mining and semantic tagging

In this presentation, statistical text mining methods are described: what they are and how they compare to classical, constructive semantics-based methods. We will describe methods for semantic tagging of words or documents and how this information may be utilized in subsequent (statistical) analysis of the text. Moreover, we will talk about methods and tools for designing ontologies that are consistent across the organization, and for automatically classifying documents against them. Different use cases will be presented.
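The statistical side of such tagging can be sketched with plain TF-IDF: terms that are frequent within a document but rare across the collection become its tags. The toy corpus below is invented, and real text-mining tools layer stemming, phrase detection, and supervised models on top of this kind of scoring.

```python
# Sketch: TF-IDF-based tagging of documents (toy corpus, invented text).
import math
from collections import Counter

docs = {
    "d1": "drilling rig sensor data drilling",
    "d2": "sensor network security policy",
    "d3": "financial risk risk model",
}

tokenized = {d: text.split() for d, text in docs.items()}

def tfidf(doc_id, term):
    """Term frequency in the document times inverse document frequency."""
    tf = Counter(tokenized[doc_id])[term] / len(tokenized[doc_id])
    df = sum(term in toks for toks in tokenized.values())
    return tf * math.log(len(docs) / df)

def tags(doc_id, k=2):
    """The k highest-scoring terms for a document."""
    terms = set(tokenized[doc_id])
    return sorted(terms, key=lambda t: tfidf(doc_id, t), reverse=True)[:k]

print(tags("d1", k=1))  # ['drilling']
```

Tags produced this way can then feed the "subsequent statistical analysis" the abstract mentions, for example as features for document classification against an agreed ontology.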

Monika L. Eknes

Monika L. Eknes - Skaperhuset

Monika L. Eknes is an entrepreneur with a passion for transforming organizations into super-organisms, and the founder of Skaperhuset AS. She considers ICT to serve as the nervous system of groups, organizations, societies, nations and the globe, analogous to the nervous system in a human body. She believes semantic technologies have the potential to simulate the human brain's strategies for dealing with Big Data. Eknes holds a Ph.D. in offshore safety and is a certified communicologist. She has previously worked in DNV Research and held various management positions within SHE and quality, including serving as safety director of the Norwegian National Rail Administration.

Arne Dybdahl

Arne Dybdahl - Skaperhuset

Arne Dybdahl, MSc ICT, is a board member of the SIG on Semantic Technology and Information Architecture at The Norwegian Computer Society, a certified communicologist, and a partner at Skaperhuset Semansys AS. He has extensive experience covering the management of a production enterprise, research, banking and quality management.

Can semantic technologies revolutionize administrative ICT?

As the person responsible for a big organization's management system, which received low user-satisfaction scores even though it had been developed with the help of recognized consultants and documented in one of the most advanced tools available, I asked one of my co-workers how the system could be made simpler and more user-friendly at reasonable cost. He quickly returned with the answer that this would be achievable using a semantic wiki. It turned out not to be possible to convince the ICT department to explore the idea. However, four years later we realized the idea at Skaperhuset.

The presentation outlines how a management system was developed in Semantic MediaWiki, combining semantic technology, competence in "framing and anchoring", the Business Motivation Model and HSEQ competence. It also discusses future applications far beyond management systems.

David Norheim

David Norheim - Computas

David Norheim heads the Information Management Department in Computas. He has experience with applying Semantic Web technologies for various user groups and application areas since 2001 and co-founded a Semantic Web startup in 2002. He is currently involved in a number of Semantic Web projects in Norway and in Europe.

Experiences with organisational linked data

Over the last couple of years, major government bodies in Norway and Sweden have been demonstrating the value of publishing national company registries as linked open data. Organizational data containing information about management and board members, number of employees, locations and various classifications has been linked to accounting information and to each other. Various demonstrators have been built over these data, showing the power and ease of developing applications on a linked data and Semantic Web infrastructure. An innovation project known as Semicolon has been a key driver for this progress, and we continue to demonstrate the value of linked data inside the firewall. In this presentation we will show how we have collaborated, how the agencies have matured, and how demonstrators focused on business intelligence become an eye-opener.

Thomas Hubauer

Thomas Hubauer - Siemens

Thomas Hubauer is a Research Scientist in the technology field Business Analytics and Monitoring at Siemens Corporate Technology in Munich.

His research focuses on knowledge modeling and retrieval, with an emphasis on information interpretation and abductive reasoning. His current projects address the application of Semantic Web technologies for reactive and preventive data analytics in the industry and energy domains.

Semantic Data Analytics for Integrated Energy and Industry Automation Systems

Abstract TBA

Peter Lawrence

Peter Lawrence - TopQuadrant

Dr. Peter Lawrence has 30+ years of experience in applying automation, software, and information technology to create transformational solutions to technical challenges, particularly in process manufacturing.

Most recently he was CTO of Matrikon, where the Intuition product he developed (using Semantic Web technology) was instrumental in Matrikon's acquisition by Honeywell in 2010.

Logical Semantic Warehouse - Developing Your Own Semantic Ecosystem

Gartner has defined the Logical Semantic Warehouse (LSW) as a new data management architecture for analytics, combining the strengths of traditional repository warehouses with alternative data management and access strategies. According to Gartner, the LSW will form a new best practice by the end of 2015.

Organizations are reporting faster time to delivery, greater flexibility and an ability to include additional forms of processing such as virtualization and even content analytics in their analytics solutions — all managed by a common semantic layer or engine to deliver the assets to multiple different analytic applications.

Dr. Peter Lawrence from TopQuadrant will describe the evolution of the LSW from existing data warehouse and data mart technology practices, and present the semantic version of the LSW, its architecture and its benefits.

Marin Dimitrov

Marin Dimitrov - Ontotext

Marin Dimitrov is the CTO of Ontotext, a company providing solutions for data integration and information management with Semantic Technologies. He has experience with various industrial and research projects in the areas of Linked Data, text mining, semantic search and semantic databases.

From Big Data to Smart Data

Companies are increasingly realizing that they have more than a "big data" problem. What they really have is a "Big Metadata" problem: the barrier to timely and complex analytics is not necessarily processing speed or storage volume, but the fact that valuable metadata is spread across many inconsistent and heterogeneous repositories, or that insufficient metadata makes complex data analysis time-consuming and costly.

Semantic Technologies, and RDF data warehousing in particular, have the potential to help establish harmony among enterprise metadata and enable timely insight discovery.


