From Code4Lib
Revision as of 16:54, 27 January 2009 by (Talk) (Program Outline)


Aims and Overview

The aim of this tutorial is to provide participants with a detailed conceptual understanding of how to publish Linked Data on the Web, and a gentle introduction to the practical and technical steps that make up the publishing process. In addition, the tutorial will cover best practices in topics such as minting URIs for published data sets, vocabulary selection, choosing what RDF data to expose, and interlinking distributed data sets. Specific patterns will be presented for publishing different kinds of data sets, such as static files, relational databases and existing Web APIs. Participants will also be shown how to debug published data.
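As a flavor of what the publishing process involves, here is a minimal sketch of minting a URI for a record and exposing it as RDF in N-Triples form, using only the Python standard library. The base URI, record identifiers and field values below are hypothetical examples, not part of the tutorial material.

```python
# Minimal sketch: mint a URI for a book record and serialize it as
# N-Triples. BASE is a hypothetical namespace; DC is the real
# Dublin Core element set namespace.

BASE = "http://example.org/books/"       # hypothetical URI namespace
DC = "http://purl.org/dc/elements/1.1/"  # Dublin Core elements

def record_to_ntriples(record_id, title, creator):
    """Serialize one book record as two N-Triples statements."""
    subject = f"<{BASE}{record_id}>"
    return "\n".join([
        f'{subject} <{DC}title> "{title}" .',
        f'{subject} <{DC}creator> "{creator}" .',
    ])

print(record_to_ntriples("b1", "Weaving the Web", "Tim Berners-Lee"))
```

A real publishing setup would generate such triples from a database or static files and serve them at the minted URI itself.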

The second focus of the tutorial will be applications that consume Linked Data from the Web. We will give an overview of existing Linked Data browsers, Web of Data search engines and Linked Data mashups, and cover the existing software frameworks that can be used to build applications on top of the Web of Data.
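The consuming side can be sketched just as simply: an application dereferences a URI, receives RDF, and extracts triples. Below is a toy N-Triples reader using only the standard library; a real application would use a full RDF parser, and the sample data is hypothetical.

```python
import re

# Toy sketch of the consuming side: split N-Triples lines into
# (subject, predicate, object) tuples. Handles only URI and plain
# literal objects; real Linked Data apps would use an RDF library.

NT_LINE = re.compile(r'<([^>]+)>\s+<([^>]+)>\s+(?:<([^>]+)>|"([^"]*)")\s*\.')

def parse_ntriples(text):
    triples = []
    for line in text.splitlines():
        m = NT_LINE.match(line.strip())
        if m:
            s, p, o_uri, o_lit = m.groups()
            triples.append((s, p, o_uri if o_uri is not None else o_lit))
    return triples

sample = '<http://example.org/b1> <http://purl.org/dc/elements/1.1/title> "Weaving the Web" .'
print(parse_ntriples(sample))
```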

Lastly, the tutorial will provide a collaborative space for putting some of the ideas of Linked Data into practice. The idea is that people can split off into groups, or work independently, on adding Linked Data support to an existing application, exploring which vocabularies to use for particular data sets, and brainstorming about new vocabularies that may be needed in the library world.

The tutorial will combine presentations by the tutors with demonstrations, interactive sessions, and group discussion. Other than a broad technical understanding of Web development and the Web publishing process, and a basic conceptual understanding of the Semantic Web, there are no prerequisites for participation in the tutorial, which will be of interest to the full spectrum of code4lib2009 attendees, including researchers, developers, data managers/publishers and those seeking to commercially exploit the Semantic Web by publishing Linked Data.

Content, Approach and Schedule

The tutorial will be based primarily on material from the How to Publish Linked Data on the Web online tutorial, the key resource in the field of Linked Data publishing, supplemented by practical examples to illustrate key issues and give participants hands-on experience of creating Linked Data.

Program Outline

  • Tutorial
    • The Web of Data & Linked Data Principles (need a section to introduce the topic and motivate why LDOW is useful)
    • Busy Developer's Intro to RDF (iand)
    • Practical task: describing a book/journal/article (or doing FOAF) with Linked Data
    • URIs and conneg
    • Vocabulary selection (what lbjay suggested)
  • Possible Demos
    • take some pre-existing data and convert to rdf for publication as linked data (iand)
    • oai-ore
    • create a foaf file
    • authority data and semweb (corey?)
    • registries NSDL (jphipps?)
    • RDA (jphipps/corey?)
    • <link> and linked-data (dchud?)
    • openvocab (iand?)
    • (anders?)
    • discovering vocabularies (lbjay)
    • skos and rdfa (demo)
    • short history of semweb sans layer cake
  • Breakout Sessions
    • Break out into groups or work independently on putting some of the ideas into practice
    • Working with RDF in various languages: php, python, ruby, java
    • Talk about publishing some dataset you are familiar with as linked data
    • Hack linked data support into an existing application you have
    • Explore the use of existing vocabularies for library data, or brainstorm about new ones
  • Review, Conclusions and Outlook
    • Results of breakout sessions
    • Linked Data prospects in the year ahead
    • Upcoming directions and challenges for linked data in the library/archives space
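The "URIs and conneg" item in the outline above can be sketched in a few lines: an RDF-aware client and an ordinary browser dereference the same resource URI, and the server picks a representation from the HTTP Accept header. The URIs here are hypothetical, and a real Linked Data server would also issue a 303 redirect from the non-information resource to the chosen document.

```python
# Sketch of content negotiation for a hypothetical resource: choose an
# RDF document or an HTML page based on the client's Accept header.

def negotiate(accept_header):
    """Return a representation URL (hypothetical) for the resource."""
    if "application/rdf+xml" in accept_header or "text/turtle" in accept_header:
        return "http://example.org/data/b1.rdf"
    return "http://example.org/page/b1.html"

print(negotiate("text/turtle"))                  # RDF-aware client
print(negotiate("text/html,application/xhtml"))  # ordinary browser
```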

Presenters

  • Ed Summers (Library of Congress)
  • Mike Giarlo (Library of Congress)
  • Jay Luker (Ex Libris)
  • Ross Singer (Talis)
  • Corey Harper (NYU)
  • Ian Davis (Talis)
  • Dan Chudnov (hopefully)

Preparation

  • encourage people to bring data
  • pre-packaged examples
  • have data on hand for people that don't have some
  • be able to work offline if necessary