Information and Data Quality (IDQ)

In this tutorial you will learn about the emerging ISO 8000 standard for measuring IDQ.
Content includes:

  • The academic foundation and the industry requirements for the standard.
  • A detailed walkthrough of illustrative examples.
  • References to several real-life implementations, for example:
    • "How to secure future data quality in procurement phase"
    • "Diagnose database for business critical decisions"
    • "Requirements to Test data"
  • The relationship to ISO 9000 and ISO 15288: "Information and Data Quality - Relationship to other Standards"

Introduction to Semantic Technology

Martin Giese and Martin G. Skjæveland (University of Oslo)
This tutorial is an introduction to the practical side of Semantic Web technologies.
Attendees will learn

  • what the central ideas of semantic web technologies are
  • how knowledge is represented on the semantic web
  • how to formulate queries to a semantic data store
  • how domain modelling for the Semantic Web works
  • how a relational database can be exported as semantic data

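The representation and querying topics above can be sketched in a few lines of plain Python. This is a hypothetical toy example, not a real triple store or SPARQL engine: it only illustrates the idea that knowledge on the Semantic Web is expressed as (subject, predicate, object) triples and queried by pattern matching. All names (`ex:Oslo`, `match`, etc.) are made up for illustration.

```python
# Hypothetical sketch: RDF-style knowledge as (subject, predicate, object)
# triples, stored here as plain Python tuples.
triples = {
    ("ex:Oslo", "rdf:type", "ex:City"),
    ("ex:Oslo", "ex:locatedIn", "ex:Norway"),
    ("ex:Norway", "rdf:type", "ex:Country"),
}

def match(pattern, store):
    """Return all triples matching a single pattern; None acts as a wildcard."""
    s, p, o = pattern
    return [t for t in store
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)]

# "Which resources are cities?" -- roughly analogous to the SPARQL
# pattern ?x rdf:type ex:City
cities = [t[0] for t in match((None, "rdf:type", "ex:City"), triples)]
print(cities)  # → ['ex:Oslo']
```

A real semantic data store would additionally handle full URIs, joins over multiple patterns, and inference, which is what the tutorial's querying and modelling sections cover.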
Martin G. Skjæveland is a Ph.D. student at the University of Oslo, working in the area of applied logic and the Semantic Web. Prior to his Ph.D. studies he was employed by DNV Information Risk Management, where he worked with ontology engineering and Semantic Web technologies, mainly centered on the oil and gas standard ISO 15926.

Martin Giese holds a PhD in computer science from the University of Karlsruhe, and has been employed as a researcher at the University of Oslo since 2007. He has more than 15 years of experience with mechanised reasoning and formal methods. Since 2007 he has been involved in several projects concerning the application of semantic technologies in the Norwegian public sector and the oil and gas industry.

Big Linked Data

The slides of the tutorial are available here.

This tutorial introduces the notion of Big Data in the context of the Linked Open Data movement. The tutorial will focus on the use of Linked Data principles for large-scale distributed data integration, and explain how vast quantities of such integrated data can be processed efficiently.
Contents include:

  • Introduction to Big Data and Linked Data
  • Review of openly available datasets
  • Challenges in collecting and integrating Big Linked Data
  • Big Linked Data in the Web of Things
  • Complex events
  • Large-scale analytics
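The integration challenge in the list above rests on a core Linked Data principle: independently published datasets combine by simple union when they identify the same things with the same URIs. The following is a hypothetical sketch of that idea; the datasets, URIs, and the population figure are invented for illustration.

```python
# Hypothetical sketch: two independently published triple sets that share
# URIs integrate by set union -- no schema mapping step is needed for
# statements about the same resource.
dataset_a = {
    ("dbpedia:Oslo", "rdf:type", "dbpedia:City"),
    ("dbpedia:Oslo", "dbpedia:country", "dbpedia:Norway"),
}
dataset_b = {
    ("dbpedia:Oslo", "ex:population", "634000"),  # illustrative value
}

merged = dataset_a | dataset_b  # set union; identical triples deduplicate

# Every predicate now known about dbpedia:Oslo across both sources:
facts = sorted(p for (s, p, o) in merged if s == "dbpedia:Oslo")
print(facts)  # → ['dbpedia:country', 'ex:population', 'rdf:type']
```

At web scale the hard parts are exactly what the tutorial lists: discovering datasets, resolving cases where publishers did *not* reuse the same URIs, and processing the merged data efficiently.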

Andreas Harth works as a post-doctoral researcher at the Institute AIFB at the Karlsruhe Institute of Technology, after pursuing a Ph.D. with the Digital Enterprise Research Institute (DERI) at the National University of Ireland, Galway. His research interests are large-scale data interoperation on the Semantic Web, Linked Data, knowledge representation, computational logic, and user interaction on web data. Andreas has published over a dozen papers in these areas, and is the author of several open source software systems.

Dumitru Roman has worked for SINTEF as a research scientist since the end of 2009. He is currently the project coordinator of the Environmental Services Infrastructure with Ontologies (ENVISION) project. More recently, he has initiated Sensapp, a web-based platform for enabling access to sensor data, and has been involved in the LOD activities in Norway through the Semicolon2 and PlanetData projects.

Marko Grobelnik is an expert in the analysis of large amounts of complex data with the purpose of extracting useful knowledge. His areas of expertise include Data Mining, Text Mining, Information Extraction, Link Analysis, and Data Visualization, as well as more integrative areas such as the Semantic Web, Knowledge Management, and Artificial Intelligence.
