<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
		<id>https://wiki.code4lib.org/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Chelociraptor</id>
		<title>Code4Lib - User contributions [en]</title>
		<link rel="self" type="application/atom+xml" href="https://wiki.code4lib.org/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Chelociraptor"/>
		<link rel="alternate" type="text/html" href="https://wiki.code4lib.org/Special:Contributions/Chelociraptor"/>
		<updated>2026-04-09T20:53:54Z</updated>
		<subtitle>User contributions</subtitle>
		<generator>MediaWiki 1.26.2</generator>

	<entry>
		<id>https://wiki.code4lib.org/index.php?title=2015_Preconference_Proposals&amp;diff=42111</id>
		<title>2015 Preconference Proposals</title>
		<link rel="alternate" type="text/html" href="https://wiki.code4lib.org/index.php?title=2015_Preconference_Proposals&amp;diff=42111"/>
				<updated>2014-11-20T16:44:50Z</updated>
		
		<summary type="html">&lt;p&gt;Chelociraptor: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Instructions ==&lt;br /&gt;
Thank you for considering proposing a pre-conference! Here are a few details:&lt;br /&gt;
&lt;br /&gt;
* We will be taking pre-conference proposals until '''November 7, 2014'''&lt;br /&gt;
* If you cannot or do not want to edit this wiki directly, you can email your proposals to cmh2166@columbia.edu or collie@msu.edu&lt;br /&gt;
* Examples from the 2014 pre-conference proposals can be found at [[2014 preconference proposals]]&lt;br /&gt;
* If you are interested in ''attending'' a particular pre-conference, please append your name below that proposal (indicating interest in more than one proposal is fine!)&lt;br /&gt;
* If you have an idea for a pre-conference but cannot facilitate it yourself, please post the idea below and email cmh2116@columbia.edu or collie@msu.edu&lt;br /&gt;
* '''NOTE:''' Pre-conferences are NOT included in the Code4Lib Conference price and will be held on Monday, February 9, 2015 as either full day or half day sessions&lt;br /&gt;
* Please use the template for proposals provided in the pre-formatted block below&lt;br /&gt;
&lt;br /&gt;
== Pre-conference Proposals ==&lt;br /&gt;
&lt;br /&gt;
=== Delivering and Preserving GIS Data ===&lt;br /&gt;
 &lt;br /&gt;
'''Half Day [Morning]'''&lt;br /&gt;
&lt;br /&gt;
* Darren Hardy, Stanford University, drh@stanford.edu&lt;br /&gt;
* Jack Reed, Stanford University, pjreed@stanford.edu&lt;br /&gt;
&lt;br /&gt;
We will discuss how to set up a spatial data infrastructure (SDI) to deliver GIS data, to manage GIS content in a Fedora repository for preservation, and to establish metadata requirements for good spatial discovery. By the end of the workshop you will have a working SDI! This workshop is a complement to the GeoBlacklight workshop in the afternoon.&lt;br /&gt;
&lt;br /&gt;
''Interested in Attending''&lt;br /&gt;
&lt;br /&gt;
If you would be interested in attending, please indicate by adding your name (but not email address, etc.) here&lt;br /&gt;
&lt;br /&gt;
# [[User:Ssimpkin|Sarah Simpkin]]&lt;br /&gt;
# Vicky Steeves&lt;br /&gt;
# Andrew Battista&lt;br /&gt;
# Peggy Griesinger&lt;br /&gt;
#&lt;br /&gt;
#&lt;br /&gt;
#&lt;br /&gt;
#&lt;br /&gt;
#&lt;br /&gt;
#&lt;br /&gt;
&lt;br /&gt;
=== A hands-on introduction to GeoBlacklight ===&lt;br /&gt;
&lt;br /&gt;
'''Half Day [Afternoon]'''&lt;br /&gt;
&lt;br /&gt;
* Darren Hardy, Stanford University, drh@stanford.edu&lt;br /&gt;
* Jack Reed, Stanford University, pjreed@stanford.edu&lt;br /&gt;
&lt;br /&gt;
GeoBlacklight is a discovery solution for geospatial data that builds on the successful Blacklight platform. Many libraries have collections of GIS data that aren’t easily discoverable. This will be a hands-on workshop focused on installing and running GeoBlacklight, and it builds on the morning workshop &amp;quot;Delivering and Preserving GIS Data&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
''Interested in Attending''&lt;br /&gt;
&lt;br /&gt;
If you would be interested in attending, please indicate by adding your name (but not email address, etc.) here&lt;br /&gt;
&lt;br /&gt;
# [[User:Ssimpkin|Sarah Simpkin]]&lt;br /&gt;
# Vicky Steeves&lt;br /&gt;
# Andrew Battista&lt;br /&gt;
# Peggy Griesinger&lt;br /&gt;
#&lt;br /&gt;
#&lt;br /&gt;
#&lt;br /&gt;
#&lt;br /&gt;
#&lt;br /&gt;
#&lt;br /&gt;
&lt;br /&gt;
=== RailsBridge: Intro to programming in Ruby on Rails ===&lt;br /&gt;
&lt;br /&gt;
'''&amp;quot;Half-Day&amp;quot; [morning]'''&lt;br /&gt;
&lt;br /&gt;
* Contact Carolyn Cole, Penn State University, carolyn@psu.edu&lt;br /&gt;
* Additional instructors welcome&lt;br /&gt;
&lt;br /&gt;
Interested in learning how to program? Want to build your own web application? Never written a line of code before and are a little intimidated? There's no need to be! [http://www.railsbridge.org/ RailsBridge] is a friendly place to get together and learn how to write some code.&lt;br /&gt;
&lt;br /&gt;
RailsBridge is a great workshop that opens the doors to projects like [http://projectblacklight.org/ Blacklight] and [http://projecthydra.org/ Hydra] and [https://github.com/traject-project/traject Traject].&lt;br /&gt;
&lt;br /&gt;
''Interested in Attending''&lt;br /&gt;
&lt;br /&gt;
If you would be interested in attending, please indicate by adding your name (but not email address, etc.) here&lt;br /&gt;
&lt;br /&gt;
# Maura Carbone&lt;br /&gt;
#Vicky Steeves&lt;br /&gt;
# Peggy Griesinger&lt;br /&gt;
# Mike Price&lt;br /&gt;
# Jean Rainwater&lt;br /&gt;
# Coral Sheldon-Hess&lt;br /&gt;
# Margaret Heller&lt;br /&gt;
#&lt;br /&gt;
#&lt;br /&gt;
#&lt;br /&gt;
&lt;br /&gt;
=== Replace yourself with a painfully complex bash script...or try Ansible ===&lt;br /&gt;
&lt;br /&gt;
'''Half Day [Morning]'''&lt;br /&gt;
&lt;br /&gt;
* Chad Nelson, chad dot nelson @ lyrasis dot org&lt;br /&gt;
* Blake Carver, Blake dot carver @lyrasis dot org&lt;br /&gt;
&lt;br /&gt;
Abstract: &lt;br /&gt;
&lt;br /&gt;
[http://www.ansible.com Ansible] is an open source automation and [http://en.wikipedia.org/wiki/Configuration_management configuration management] tool that focuses on simplicity to help make your life as a developer, or a sysadmin, or even a full on devops-er, easier. This workshop will cover the basic building blocks used in Ansible as well as some best practices for maintaining your Ansible code. We will start by working through a simple example together, and then participants will be given time to work on their own projects with instructors providing guidance and troubleshooting along the way. By the end of the session, participants will have a working knowledge of Ansible and be able to write a working [http://docs.ansible.com/playbooks.html playbook] to meet local needs.&lt;br /&gt;
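As a taste of the building blocks covered, here is a minimal sketch of a playbook (the &amp;quot;webservers&amp;quot; host group and the nginx package are illustrative assumptions, not part of the workshop materials):&lt;br /&gt;

```yaml
# site.yml -- a minimal illustrative Ansible playbook
# Targets a hypothetical "webservers" inventory group
- hosts: webservers
  become: true
  tasks:
    - name: Ensure nginx is installed
      apt:
        name: nginx
        state: present
    - name: Ensure nginx is running
      service:
        name: nginx
        state: started
```

Run with ansible-playbook site.yml against your own inventory; tasks are applied idempotently, so re-running a playbook is safe.&lt;br /&gt;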
&lt;br /&gt;
''Interested in Attending''&lt;br /&gt;
&lt;br /&gt;
If you would be interested in attending, please indicate by adding your name (but not email address, etc.) here&lt;br /&gt;
# Ray Schwartz&lt;br /&gt;
# Coral Sheldon-Hess&lt;br /&gt;
# Kevin S. Clarke&lt;br /&gt;
# Joshua Gomez&lt;br /&gt;
# Charlie Morris&lt;br /&gt;
# Andy Mardesich&lt;br /&gt;
# Anna Headley&lt;br /&gt;
# Chelsea Lobdell&lt;br /&gt;
#&lt;br /&gt;
#&lt;br /&gt;
#&lt;br /&gt;
&lt;br /&gt;
=== Intro to Docker ===&lt;br /&gt;
&lt;br /&gt;
'''Half Day [Whenever]'''&lt;br /&gt;
&lt;br /&gt;
* John Fink, McMaster University, john dot fink at gmail dot com&lt;br /&gt;
* Francis Kayiwa, University of Maryland Libraries , francis dot kayiwa at gmail dot com&lt;br /&gt;
&lt;br /&gt;
Abstract:&lt;br /&gt;
&lt;br /&gt;
[http://docker.io Docker] ([http://journal.code4lib.org/articles/9669 jbfink code4lib journal article]) is an open source Linux operating system-level virtualization framework that has seen great uptake over the past year. This workshop will take you through the basic features of Docker, including setup, importing of containers, development workflows and deploying. Knowing when Docker is useful and when it isn't will also be covered. Ideally, every attendee will have ample experience creating and running their own Docker instances by the end.&lt;br /&gt;
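For a flavor of the basics, a container image is described by a Dockerfile; here is a minimal sketch (the base image and commands are illustrative assumptions):&lt;br /&gt;

```dockerfile
# Minimal illustrative Dockerfile
# Start from a public base image
FROM ubuntu:14.04
# Refresh package lists in a new image layer
RUN apt-get update
# Default command when a container starts
CMD ["bash"]
```

Build it with docker build -t demo . and start an interactive container with docker run -it demo (the &amp;quot;demo&amp;quot; tag is arbitrary).&lt;br /&gt;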
&lt;br /&gt;
''Interested in Attending''&lt;br /&gt;
&lt;br /&gt;
#  Jim Hahn&lt;br /&gt;
#  Joshua Gomez&lt;br /&gt;
#  Bobbi Fox&lt;br /&gt;
#  Ray Schwartz&lt;br /&gt;
#  Megan Kudzia&lt;br /&gt;
# Coral Sheldon-Hess&lt;br /&gt;
# Cary Gordon (uses Docker in production on AWS)&lt;br /&gt;
# Eric Phetteplace&lt;br /&gt;
# Esther Verreau&lt;br /&gt;
# Charlie Morris&lt;br /&gt;
# Anna Headley (voting for afternoon, complements Ansible)&lt;br /&gt;
&lt;br /&gt;
=== Code Retreat ===&lt;br /&gt;
&lt;br /&gt;
'''Full Day'''&lt;br /&gt;
&lt;br /&gt;
* Jeremy Friesen, University of Notre Dame, jfriesen at nd dot edu&lt;br /&gt;
* Additional facilitators welcome; Especially if you have CodeRetreat experience.&lt;br /&gt;
&lt;br /&gt;
Abstract:&lt;br /&gt;
&lt;br /&gt;
&amp;quot;Coderetreat is a day-long, intensive practice event, focusing on the fundamentals of software development and design.&lt;br /&gt;
By providing developers the opportunity to take part in focused practice, away from the pressures of 'getting things done', the coderetreat format has proven itself to be a highly effective means of skill improvement.&lt;br /&gt;
Practicing the basic principles of modular and object-oriented design, developers can improve their ability to write code that minimizes the cost of change over time.&amp;quot; [http://coderetreat.org/about About Code Retreat]&lt;br /&gt;
&lt;br /&gt;
''Interested in Attending''&lt;br /&gt;
&lt;br /&gt;
# Mike Giarlo&lt;br /&gt;
# Charlie Morris&lt;br /&gt;
# Devon Smith&lt;br /&gt;
#&lt;br /&gt;
#&lt;br /&gt;
#&lt;br /&gt;
#&lt;br /&gt;
#&lt;br /&gt;
#&lt;br /&gt;
#&lt;br /&gt;
&lt;br /&gt;
=== Presentations workshop ===&lt;br /&gt;
 &lt;br /&gt;
'''&amp;quot;Half Day [Afternoon]&amp;quot;'''  (but could be expanded based on interest)&lt;br /&gt;
&lt;br /&gt;
* Chris Beer, Stanford University, cabeer@stanford.edu&lt;br /&gt;
* Additional facilitators welcome.&lt;br /&gt;
&lt;br /&gt;
This is a preconference session intended for first time Code4Lib speakers, habitual procrastinators, experienced speakers, those thinking about offering lightning talks, etc. If you're preparing a talk for this year's Code4Lib, this workshop is an opportunity to rehearse your presentation, get feedback from peers, get familiar with the presentation technology, etc.&lt;br /&gt;
 &lt;br /&gt;
''Interested in Attending''&lt;br /&gt;
&lt;br /&gt;
If you would be interested in attending, please indicate by adding your name (but not email address, etc.) here&lt;br /&gt;
&lt;br /&gt;
#Vicky Steeves&lt;br /&gt;
#&lt;br /&gt;
#&lt;br /&gt;
#&lt;br /&gt;
#&lt;br /&gt;
#&lt;br /&gt;
#&lt;br /&gt;
#&lt;br /&gt;
#&lt;br /&gt;
#&lt;br /&gt;
&lt;br /&gt;
=== Dive into Hydra  ===&lt;br /&gt;
 &lt;br /&gt;
'''&amp;quot;Half Day [Afternoon]&amp;quot;''' &lt;br /&gt;
&lt;br /&gt;
* Justin Coyne, Data Curation Experts, justin@curationexperts.com&lt;br /&gt;
* Bess Sadler, Stanford University, bess@stanford.edu&lt;br /&gt;
&lt;br /&gt;
Hydra is a collaboration of over 30 educational institutions that work together to solve their repository needs by building open-source software. Dive into Hydra is a course that bootstraps you into the Hydra software framework. We'll start with the basics and walk you through the various layers of the Hydra stack. We'll conclude by installing the Worthwhile gem, enabling every participant to walk away with their own institutional repository. Participants who have prior exposure to web programming will get the most out of this course. It's recommended (but not required) that you attend &amp;quot;RailsBridge&amp;quot; prior to this workshop.&lt;br /&gt;
&lt;br /&gt;
''Interested in Attending''&lt;br /&gt;
&lt;br /&gt;
If you would be interested in attending, please indicate by adding your name (but not email address, etc.) here&lt;br /&gt;
&lt;br /&gt;
# Maura Carbone&lt;br /&gt;
# Peggy Griesinger&lt;br /&gt;
# Mike Price&lt;br /&gt;
# Jean Rainwater&lt;br /&gt;
#&lt;br /&gt;
#&lt;br /&gt;
#&lt;br /&gt;
#&lt;br /&gt;
#&lt;br /&gt;
#&lt;br /&gt;
&lt;br /&gt;
=== code4lib/Write The Docs barcamp ===&lt;br /&gt;
&lt;br /&gt;
'''&amp;quot;Full Day&amp;quot;''', with options for jumping in for half a day&lt;br /&gt;
&lt;br /&gt;
* code4lib wrangler: Becky Yoose, yoosebec at grinnell dot edu&lt;br /&gt;
* Write the Docs contacts: TBA&lt;br /&gt;
&lt;br /&gt;
Abstract&lt;br /&gt;
&lt;br /&gt;
Documentation. We all know that we need it for the things we develop, but most of us either keep putting it off or write documentation that is not maintained, clear, or concise. We're all guilty! So what's stopping us from doing better docs? Luckily, Portland is home to the North American Write the Docs conference and to many folks who live and breathe documentation. This barcamp is open to both code4lib and non-code4lib conference attendees. It is intended to provide a space where code4libbers can find practices and tools for creating better documentation, and where documentation wonks can learn how library folks can help improve documentation access and organization.&lt;br /&gt;
&lt;br /&gt;
Remember, like metadata, documentation is a love note to the future.&lt;br /&gt;
&lt;br /&gt;
More information about Write the Docs at http://conf.writethedocs.org/&lt;br /&gt;
&lt;br /&gt;
There will be a nominal fee (t/b/d) for non-Code4LibCon attendees (subject to organizer approval). &lt;br /&gt;
&lt;br /&gt;
''Interested in Attending''&lt;br /&gt;
&lt;br /&gt;
If you would be interested in attending, please indicate by adding your name (but not email address, etc.) here&lt;br /&gt;
&lt;br /&gt;
'''Full day'''&lt;br /&gt;
#&lt;br /&gt;
#&lt;br /&gt;
#&lt;br /&gt;
#&lt;br /&gt;
#&lt;br /&gt;
#&lt;br /&gt;
#&lt;br /&gt;
#&lt;br /&gt;
#&lt;br /&gt;
#&lt;br /&gt;
&lt;br /&gt;
'''Morning'''&lt;br /&gt;
# Ranti Junus&lt;br /&gt;
# Mita Williams&lt;br /&gt;
#&lt;br /&gt;
#&lt;br /&gt;
#&lt;br /&gt;
#&lt;br /&gt;
#&lt;br /&gt;
#&lt;br /&gt;
#&lt;br /&gt;
#&lt;br /&gt;
&lt;br /&gt;
'''Afternoon'''&lt;br /&gt;
# Francis Kayiwa (if my Pre-Conf is in the AM) Otherwise with Ranti if my Pre-Conf is in the afternoon. &lt;br /&gt;
# Kevin S. Clarke&lt;br /&gt;
#&lt;br /&gt;
#&lt;br /&gt;
#&lt;br /&gt;
#&lt;br /&gt;
#&lt;br /&gt;
#&lt;br /&gt;
#&lt;br /&gt;
#&lt;br /&gt;
&lt;br /&gt;
=== Linked Data Workshop ===&lt;br /&gt;
&lt;br /&gt;
'''&amp;quot;Half Day [morning]&amp;quot;''' &lt;br /&gt;
&lt;br /&gt;
* Karen Estlund, University of Oregon, kestlund@uoregon.edu&lt;br /&gt;
* Tom Johnson, DPLA, tom@dp.la&lt;br /&gt;
&lt;br /&gt;
Abstract:&lt;br /&gt;
&lt;br /&gt;
A linked data workshop aimed at developers and metadata experts. Topics covered will include: linked open data principles, converting existing data, and modeling linked data in a DAMS.&lt;br /&gt;
&lt;br /&gt;
''Interested in Attending''&lt;br /&gt;
&lt;br /&gt;
If you would be interested in attending, please indicate by adding your name (but not email address, etc.) here&lt;br /&gt;
&lt;br /&gt;
# Logan Cox&lt;br /&gt;
# Ray Schwartz&lt;br /&gt;
#&lt;br /&gt;
#&lt;br /&gt;
#&lt;br /&gt;
#&lt;br /&gt;
#&lt;br /&gt;
#&lt;br /&gt;
#&lt;br /&gt;
#&lt;br /&gt;
&lt;br /&gt;
=== Code4Arc ===&lt;br /&gt;
&lt;br /&gt;
'''&amp;quot;Full Day&amp;quot;''' (with options for half day participation)&lt;br /&gt;
&lt;br /&gt;
* Sarah Romkey, Artefactual Systems, sromkey@artefactual.com&lt;br /&gt;
* Justin Simpson, Artefactual Systems, jsimpson@artefactual.com&lt;br /&gt;
* Chris Fitzpatrick, ArchivesSpace, chris.fitzpatrick@lyrasis.org&lt;br /&gt;
* Alexandra Chassanoff, BitCurator Access, bitcurator@gmail.com&lt;br /&gt;
&lt;br /&gt;
Abstract:&lt;br /&gt;
&lt;br /&gt;
What does it mean to Code for Archives? Is it different from coding for libraries, and if so, how?&lt;br /&gt;
&lt;br /&gt;
Code4Lib is a wonderful and successful model (you must agree or you wouldn't be reading this). This workshop is an attempt to replicate that model in an archival context: a space to talk about development for archives and the particular challenges of building archival systems. Topics to discuss include integration between different archival software tools, and between archival tools/workflows and larger institutional systems such as institutional repositories and discovery and access systems.&lt;br /&gt;
&lt;br /&gt;
The schedule may include the following:&lt;br /&gt;
&lt;br /&gt;
* Panel type conversations about the State of Art in Archives &lt;br /&gt;
* Case Studies - discussion of workflows at specific institutions, including gaps in tools and how those are being addressed or could be addressed &lt;br /&gt;
* Tool Demos - access to demos of some of the open source tools used in an Archival Context (examples include ArchivesSpace, Archivematica, BitCurator, AtoM)&lt;br /&gt;
&lt;br /&gt;
Artefactual will provide demos running Archivematica and AtoM, Lyrasis will do the same for ArchivesSpace, and the BitCurator team for BitCurator. We encourage others to chime in here to expand the list of tools available to touch and play with.&lt;br /&gt;
&lt;br /&gt;
When signing up, please indicate if you are an end-user or a developer.&lt;br /&gt;
&lt;br /&gt;
''Interested in Attending''&lt;br /&gt;
&lt;br /&gt;
If you would be interested in attending, please indicate by adding your name (but not email address, etc.) here&lt;br /&gt;
&lt;br /&gt;
# Laney McGlohon - developer&lt;br /&gt;
#&lt;br /&gt;
#&lt;br /&gt;
#&lt;br /&gt;
#&lt;br /&gt;
#&lt;br /&gt;
#&lt;br /&gt;
#&lt;br /&gt;
#&lt;br /&gt;
#&lt;br /&gt;
&lt;br /&gt;
=== Fail4Lib 2015 ===&lt;br /&gt;
&lt;br /&gt;
'''Half Day [TBD, probably afternoon]'''&lt;br /&gt;
&lt;br /&gt;
* Andreas Orphanides, akorphan (at) ncsu.edu&lt;br /&gt;
* Jason Casden, jmcasden (at) ncsu.edu&lt;br /&gt;
&lt;br /&gt;
Abstract:&lt;br /&gt;
&lt;br /&gt;
Failure. Failure never changes. Since failure is an inescapable part of our professional work, it's important to be familiar with it, to acknowledge it, and to grow from it -- and, in contravention of longstanding tradition, to accept it as a fact of development life. At Fail4Lib, we'll talk about our own experiences with projects gone wrong, explore some famous design failures in the real world, and talk about how we can come to terms with the reality of failure, to make it part of our creative process -- rather than something to be shunned. Let's train ourselves to understand and embrace failure, encourage enlightened risk-taking, and seek out opportunities to fail and learn. This way, when we do what we do -- and fail at what we do -- we'll do so with grace and without fear.&lt;br /&gt;
&lt;br /&gt;
This year's preconference will include new case studies and an improved discussion format. Repeat customers are welcome! (Fail early, fail often.)&lt;br /&gt;
&lt;br /&gt;
The schedule may include the following:&lt;br /&gt;
&lt;br /&gt;
* Case studies. Avoid our own mistakes by bearing witness to the failures of others.&lt;br /&gt;
* Confessionals, for those willing to share. Let's learn from our own (and each others') failures.&lt;br /&gt;
* Group therapy. Vent about your own experiences in a judgment-free setting. Explore how we can make our organizations less risk-averse and more failure-tolerant.&lt;br /&gt;
&lt;br /&gt;
''Interested in attending''&lt;br /&gt;
&lt;br /&gt;
If you would be interested in attending, please indicate by adding your name (but not email address, etc.) here&lt;br /&gt;
&lt;br /&gt;
# Ray Schwartz&lt;br /&gt;
# Charlie Morris&lt;br /&gt;
#&lt;br /&gt;
#&lt;br /&gt;
#&lt;br /&gt;
#&lt;br /&gt;
#&lt;br /&gt;
#&lt;br /&gt;
#&lt;br /&gt;
#&lt;br /&gt;
&lt;br /&gt;
=== Coding Custom Solutions for Every Department in the Library with File Analyzer ===&lt;br /&gt;
 &lt;br /&gt;
'''&amp;quot;Half Day [Morning]&amp;quot;''' &lt;br /&gt;
&lt;br /&gt;
* Terry Brady, Georgetown University Library, twb27@georgetown.edu&lt;br /&gt;
&lt;br /&gt;
Abstract&lt;br /&gt;
&lt;br /&gt;
The Georgetown University Library has shared an application called the [http://georgetown-university-libraries.github.io/File-Analyzer/ File Analyzer] that has allowed us to build custom solutions for nearly every department in the library.&lt;br /&gt;
&lt;br /&gt;
* Analyzing Marc Records for the Cataloging department&lt;br /&gt;
* Transferring ILS invoices for the University Account System for the Acquisitions department &lt;br /&gt;
* Delivering patron fines to the Bursar’s office for the Access Service department&lt;br /&gt;
* Summarizing student worker timesheet data for the Finance department&lt;br /&gt;
* Validating counter compliant reports for the Electronic Resources department&lt;br /&gt;
* Preparing ingest packages for the Digital Services department&lt;br /&gt;
* Validating checksums for the Preservation department&lt;br /&gt;
&lt;br /&gt;
This hands-on workshop will step through the components of the application framework. Workshop participants will install File Analyzer and develop custom tasks in this session.&lt;br /&gt;
&lt;br /&gt;
The workshop agenda will loosely follow the [https://github.com/Georgetown-University-Libraries/File-Analyzer/wiki/File-Analyzer-Training----Code4Lib-2014 pre-conference agenda from Code4Lib 2014].&lt;br /&gt;
&lt;br /&gt;
''Interested in Attending''&lt;br /&gt;
&lt;br /&gt;
If you would be interested in attending, please indicate by adding your name (but not email address, etc.) here&lt;br /&gt;
&lt;br /&gt;
#  Megan Kudzia&lt;br /&gt;
#&lt;br /&gt;
#&lt;br /&gt;
#&lt;br /&gt;
#&lt;br /&gt;
#&lt;br /&gt;
#&lt;br /&gt;
#&lt;br /&gt;
#&lt;br /&gt;
#&lt;br /&gt;
&lt;br /&gt;
=== Confessions of the (Accidental) Code Hoarder: How to Make Your Code Sharable ===&lt;br /&gt;
 &lt;br /&gt;
'''Half Day [Whenever]'''&lt;br /&gt;
&lt;br /&gt;
* Karen A. Coombs, OCLC, coombsk@oclc.org&lt;br /&gt;
&lt;br /&gt;
Abstract&lt;br /&gt;
Have you built something cool and useful that you want to share with others? This preconference session will discuss techniques and tools for sharing code. Using our own OCLC Developer Network PHP authentication code libraries as an example, we will discuss a set of recommended best practices for how to share your code.&lt;br /&gt;
 &lt;br /&gt;
We’ll start with coding standards and test writing so you can be confident of the quality of your code. Next we'll discuss inline documentation as a tool for developers and how auto-generating documentation will save you time and effort. Lastly we'll provide an overview of the tricky areas of dependency and package management, and distribution tools. Along the way, we'll cover PHP coding standards, testing, and popular PHP tools including PHPDoc for documentation, Composer for smooth installations, and using GitHub and Packagist to manage distribution, updates and community feedback.&lt;br /&gt;
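As one concrete piece of the dependency-management puzzle, a Composer package is described by a composer.json file at the project root; here is a minimal sketch (the package name, namespace, and paths are illustrative assumptions, not OCLC's actual libraries):&lt;br /&gt;

```json
{
    "name": "acme/demo-auth",
    "description": "Illustrative package layout; name and autoload paths are assumptions",
    "require": {
        "php": ">=5.4"
    },
    "autoload": {
        "psr-4": { "Acme\\Demo\\": "src/" }
    }
}
```

With a file like this, composer install pulls dependencies, and publishing the repository to Packagist makes the package installable by others.&lt;br /&gt;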
&lt;br /&gt;
''Interested in Attending''&lt;br /&gt;
&lt;br /&gt;
If you would be interested in attending, please indicate by adding your name (but not email address, etc.) here&lt;br /&gt;
&lt;br /&gt;
# Peggy Griesinger&lt;br /&gt;
# Ray Schwartz&lt;br /&gt;
#&lt;br /&gt;
#&lt;br /&gt;
#&lt;br /&gt;
#&lt;br /&gt;
#&lt;br /&gt;
#&lt;br /&gt;
#&lt;br /&gt;
#&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== UXtravaganza ===&lt;br /&gt;
'''&amp;quot;Half or Full Day [Based on Interest?, Morning/Afternoon Doesn’t Matter]&amp;quot;'''&lt;br /&gt;
&lt;br /&gt;
* William Hicks, University of North Texas, William.hicks@unt.edu&lt;br /&gt;
* Volunteers?&lt;br /&gt;
&lt;br /&gt;
Abstract&lt;br /&gt;
&lt;br /&gt;
I’m envisioning a half or full day for front-end developers, content strategy people, and other misfits with an interest in user experience, where we can talk about our shared problems, use cases, and the state of current research, and play with each other’s sites. A half day seems doable, but if there’s enough interest we could push for a full day. Here are a few of the things I think might be interesting to see happen:&lt;br /&gt;
&lt;br /&gt;
* '''Analytics Share-fest:''' A few volunteers demonstrate data about their websites, catalogs, archival/digital collections. Most of us know our own sites but it would be interesting/validating to share this data with others so we can start to see commonalities between institutions, in certain kinds of systems, etc. For anyone using event tracking, or using click- or heat-maps, this would be a great opportunity to show off what people are seeing.&lt;br /&gt;
&lt;br /&gt;
* '''UX Best Practices Catch-Up:''' This spring I had the opportunity to attend a few days' worth of usability workshops from the Nielsen Norman Group, most of which were focused on mobile. I could distill a lot of the information into a short presentation. Since this is a constantly moving area of research, it would be nice to see a few people give similar short presentations on current trends/findings relevant to libraries, search, etc.&lt;br /&gt;
&lt;br /&gt;
* '''Mobile Dev Lab:''' The UNT Libraries has been collecting a small set of smartphones and tablets for testing and development. Basically an [http://labup.org Open Device Lab]. We have about a dozen devices now of varying sizes, OSes, and OS versions, plus Google Glass. I’ll bring the devices, you can bring yours, and assuming we can get the wifi up and running we can test our sites/services with our big sausage fingers rather than pretending to do so through emulators and the one or two devices we each usually have on hand. If anyone is game, they can do a tutorial on browser-based inspector tools, Browser-Cams, or other testing services.&lt;br /&gt;
&lt;br /&gt;
* '''The Eyes Have It:''' The UNT Libraries is also in the process of acquiring an eye tracker and software for usability and other gaze-based research studies. We’ll take possession of it shortly after this pre-conference proposal is due and will have a couple of months to play with it before the conference. Assuming we can get our act together learning the device and can get past the technical hurdles of setting it up at the pre-conference, we could try to do some live demos on each other’s sites; i.e., you nominate a site/service, someone in the audience volunteers to wear the device, and we all watch them struggle to do the tasks you request on a projector. Rinse. Lather. Repeat. It would hardly be scientific, but it sure would be fun. As a backup, if we have some sites nominated beforehand, I can run a few students at my library through some tasks here and we can show off the results to the crowd.&lt;br /&gt;
&lt;br /&gt;
For those of you wanting to attend and help out, I’d really like to see some discussion on typography, writing for the web, &amp;quot;dealing with business/administrative requirements from on high&amp;quot;, and maybe some prototyping exercises. Similarly, if anyone is interested in doing tutorials on Bootstrap or how-tos on running a usability test, that would be rad. But we need you to step up and steer part of the time for most of this to work, so if you are interested in some aspect, and especially if you want to volunteer to lead a bit of the time, contact me.&lt;br /&gt;
&lt;br /&gt;
''Interested in Attending''&lt;br /&gt;
&lt;br /&gt;
If you would be interested in attending, please indicate by adding your name (but not email address, etc.) here&lt;br /&gt;
&lt;br /&gt;
# Ray Schwartz&lt;br /&gt;
# Andy Mardesich&lt;br /&gt;
# Chelsea Lobdell&lt;br /&gt;
#&lt;br /&gt;
#&lt;br /&gt;
#&lt;br /&gt;
#&lt;br /&gt;
#&lt;br /&gt;
#&lt;br /&gt;
#&lt;br /&gt;
&lt;br /&gt;
=== Intro to Git &amp;amp; possibly beyond ===&lt;br /&gt;
 &lt;br /&gt;
'''Half Day [Whenever]'''&lt;br /&gt;
&lt;br /&gt;
* Erin Fahy, Stanford University, efahy@stanford.edu&lt;br /&gt;
* Shaun Trujillo, Mount Holyoke College, strujill@mtholyoke.edu&lt;br /&gt;
&lt;br /&gt;
We can start with the basics of Git and discuss ways in which it can help you version control just about any file, not just code. Points we can go over:&lt;br /&gt;
&lt;br /&gt;
* What is a Distributed Version Control System?&lt;br /&gt;
* What's the difference between Git and Github.com?&lt;br /&gt;
* How to initialize new Git projects locally and on a remote server/Github&lt;br /&gt;
* Cloning/Forking existing projects and keeping up to date&lt;br /&gt;
* The wonderful world of Git branches&lt;br /&gt;
* Interactive rebasing&lt;br /&gt;
* Contributing code to existing projects &amp;amp; what pull requests are&lt;br /&gt;
* How to handle merge conflicts&lt;br /&gt;
* Overview of workflows and branch best practices&lt;br /&gt;
* (time allowing) Advanced git: pre/post hooks, submodules, anything else?&lt;br /&gt;
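The everyday commands behind the first few bullets can be sketched as a short terminal session (the repository and branch names here are illustrative):&lt;br /&gt;

```shell
# A minimal sketch of the basic Git workflow (names are illustrative)
git init demo-project             # create a new local repository
cd demo-project
echo "meeting notes" > README.md  # version-control any file, not just code
git add README.md                 # stage the change
git commit -m "First commit"      # record a snapshot
git checkout -b my-feature        # create and switch to a branch
git log --oneline                 # view the history so far
```

From there, pushing to a remote such as GitHub and opening a pull request covers the contribution workflow discussed above.&lt;br /&gt;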
&lt;br /&gt;
''Interested in Attending''&lt;br /&gt;
&lt;br /&gt;
If you would be interested in attending, please indicate by adding your name (but not email address, etc.) here&lt;br /&gt;
&lt;br /&gt;
# Jeannie Graham&lt;br /&gt;
#&lt;br /&gt;
#&lt;br /&gt;
#&lt;br /&gt;
#&lt;br /&gt;
#&lt;br /&gt;
#&lt;br /&gt;
#&lt;br /&gt;
#&lt;br /&gt;
#&lt;br /&gt;
&lt;br /&gt;
=== Visualizing Library Data ===&lt;br /&gt;
 &lt;br /&gt;
'''&amp;quot;Half Day [Morning||Afternoon]&amp;quot;''' &lt;br /&gt;
&lt;br /&gt;
* Matt Miller, matthewmiller@nypl.org, New York Public Library, NYPL Labs&lt;br /&gt;
&lt;br /&gt;
Visualizing your institution’s data can give new insight into your holdings’ strengths, weaknesses, and outliers. It can also provide potential new avenues for discovery and access. This half-day session will focus on programmatically visualizing library metadata. Emphasis will be on creating web-based visualizations using libraries such as d3.js, with attention paid to visualizing large datasets while keeping them web accessible. By the end of the session, participants will have templates, sample code, and methodologies enabling them to start producing visualizations with their own data.&lt;br /&gt;
&lt;br /&gt;
''Interested in Attending''&lt;br /&gt;
&lt;br /&gt;
If you would be interested in attending, please indicate by adding your name (but not email address, etc.) here&lt;br /&gt;
&lt;br /&gt;
# Ashley Blewer!&lt;br /&gt;
# Bobbi Fox&lt;br /&gt;
# Ray Schwartz&lt;br /&gt;
# Ranti Junus&lt;br /&gt;
# Eric Phetteplace&lt;br /&gt;
# Joshua Gomez&lt;br /&gt;
# Charlie Morris&lt;br /&gt;
# Andy Mardesich&lt;br /&gt;
# Tao Zhao&lt;br /&gt;
#&lt;br /&gt;
&lt;br /&gt;
=== CollectionSpace: Getting it up and running at your museum ===&lt;br /&gt;
 &lt;br /&gt;
'''Half Day [Afternoon]'''&lt;br /&gt;
&lt;br /&gt;
* Richard Millet, CollectionSpace.org, richard.millet@lyrasis.org&lt;br /&gt;
* TBD&lt;br /&gt;
&lt;br /&gt;
This workshop is designed for anyone interested in or tasked with the technical setup and configuration of CollectionSpace for use in any collections environment (museum, library, special collection, gallery, etc.). For more information about CollectionSpace, visit http://www.collectionspace.org&lt;br /&gt;
&lt;br /&gt;
Participants will be walked through the process of installing the software and performing basic configuration work on a stand-alone instance of CollectionSpace. Participants will learn how to create user accounts, set up basic roles and permissions, and may then catalog or otherwise document sample objects from their collections. Materials distributed prior to the workshop will cover hardware and system requirements for participants.&lt;br /&gt;
&lt;br /&gt;
''Interested in Attending''&lt;br /&gt;
If you would be interested in attending, please indicate by adding your name (but not email address, etc.) here&lt;br /&gt;
&lt;br /&gt;
#&lt;br /&gt;
#&lt;br /&gt;
#&lt;br /&gt;
#&lt;br /&gt;
#&lt;br /&gt;
#&lt;br /&gt;
#&lt;br /&gt;
#&lt;br /&gt;
#&lt;br /&gt;
#&lt;br /&gt;
&lt;br /&gt;
=== DPLA API Workshop ===&lt;br /&gt;
 &lt;br /&gt;
'''Half Day [Afternoon]''' &lt;br /&gt;
&lt;br /&gt;
* Audrey Altman, DPLA&lt;br /&gt;
* Mark Breedlove, DPLA&lt;br /&gt;
* Mark Matienzo, DPLA&lt;br /&gt;
* Tom Johnson, DPLA&lt;br /&gt;
&lt;br /&gt;
The Digital Public Library of America API workshop guides attendees through the process of creating an app based on DPLA's free, public API. The API provides access to over 8 million [http://creativecommons.org/publicdomain/zero/1.0/ CC0] licensed metadata records from America’s libraries, archives, and museums in a common metadata format. This workshop is designed for people of all technical skill levels and will cover API basics, the capabilities of the DPLA API, available toolsets, and tips for using records from the API effectively. Members of DPLA's technology team will be on hand to help the group build their first application, and answer questions about tools and content.&lt;br /&gt;
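As a small illustration of the API basics the workshop covers, the sketch below builds a search request URL against the DPLA API's items endpoint. The query term and the placeholder key are illustrative; the endpoint and <code>api_key</code> parameter follow the public DPLA API documentation.

```python
from urllib.parse import urlencode

def dpla_items_url(query, api_key, page_size=10):
    """Build a DPLA API v2 item-search URL (no request is sent here)."""
    base = "https://api.dp.la/v2/items"
    params = urlencode({"q": query, "page_size": page_size, "api_key": api_key})
    return base + "?" + params

# "YOUR_API_KEY" is a placeholder; DPLA issues keys on request.
url = dpla_items_url("civil war maps", "YOUR_API_KEY")
```

Sending the resulting URL with any HTTP client returns a JSON document whose records share the common metadata format the abstract describes.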
&lt;br /&gt;
If you would be interested in attending, please indicate by adding your name (but not email address, etc.) here&lt;br /&gt;
&lt;br /&gt;
# Ranti Junus&lt;br /&gt;
# Jean Rainwater&lt;br /&gt;
# Mita Williams&lt;br /&gt;
# Margaret Heller&lt;br /&gt;
#&lt;br /&gt;
#&lt;br /&gt;
#&lt;br /&gt;
#&lt;br /&gt;
#&lt;br /&gt;
#&lt;br /&gt;
&lt;br /&gt;
[[Category:Code4Lib2015]]&lt;/div&gt;</summary>
		<author><name>Chelociraptor</name></author>	</entry>

	<entry>
		<id>https://wiki.code4lib.org/index.php?title=2012_talks_proposals&amp;diff=9785</id>
		<title>2012 talks proposals</title>
		<link rel="alternate" type="text/html" href="https://wiki.code4lib.org/index.php?title=2012_talks_proposals&amp;diff=9785"/>
				<updated>2011-11-18T17:09:27Z</updated>
		
		<summary type="html">&lt;p&gt;Chelociraptor: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Deadline for talk submission is ''Sunday, November 20''.&lt;br /&gt;
&lt;br /&gt;
Prepared talks are 20 minutes (including setup and questions), and focus on one or more of the following areas:&lt;br /&gt;
 * tools (some cool new software, software library or integration platform)&lt;br /&gt;
 * specs (how to get the most out of some protocols, or proposals for new ones)&lt;br /&gt;
 * challenges (one or more big problems we should collectively address)&lt;br /&gt;
&lt;br /&gt;
The community will vote on proposals using the criteria of:&lt;br /&gt;
 * usefulness&lt;br /&gt;
 * newness&lt;br /&gt;
 * geekiness&lt;br /&gt;
 * diversity of topics&lt;br /&gt;
&lt;br /&gt;
Please follow the formatting guidelines:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Talk Title: ==&lt;br /&gt;
 &lt;br /&gt;
* Speaker's name, affiliation, and email address&lt;br /&gt;
* Second speaker's name, affiliation, email address, if second speaker&lt;br /&gt;
&lt;br /&gt;
Abstract of no more than 500 words.&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== VuFind 2.0: Why and How? ==&lt;br /&gt;
&lt;br /&gt;
* Demian Katz, Villanova University, demian.katz@villanova.edu&lt;br /&gt;
&lt;br /&gt;
A major new version of the VuFind discovery software is currently in development.  While VuFind 1.x remains extremely popular, some of its components are beginning to show their age.  VuFind 2.0 aims to retain all the strengths of the previous version of the software while making the architecture cleaner, more modern and more standards-based.  This presentation will examine the motivation behind the update, preview some of the new features to look forward to, and discuss the challenges of creating a developer-friendly open source package in PHP.&lt;br /&gt;
&lt;br /&gt;
== Open Source Software Registry ==&lt;br /&gt;
&lt;br /&gt;
* [[User:DataGazetteer|Peter Murray]], LYRASIS, Peter.Murray@lyrasis.org&lt;br /&gt;
&lt;br /&gt;
LYRASIS is creating and shepherding a [[Registry_E-R_Diagram|registry of library open source software]] as part of its [http://www.lyrasis.org/News/Press-Releases/2011/LYRASIS-Receives-Grant-to-Support-Open-Source.aspx grant from the Mellon Foundation to support the adoption of open source software by libraries].  &lt;br /&gt;
The goal of the grant is to help libraries of all types determine if open source software is right for them, and what combination of software, hosting, training, and consulting works for their situation.  &lt;br /&gt;
The registry is intended to become a community exchange point and stimulant for growth of the library open source ecosystem by connecting libraries with projects, service providers, and events.&lt;br /&gt;
&lt;br /&gt;
The first half of this session will demonstrate the registry functions and describe how projects and providers can get involved.  &lt;br /&gt;
The second half of the session will be a brainstorm on how to expand the functionality and usefulness of the registry.&lt;br /&gt;
&lt;br /&gt;
== Property Graphs And TinkerPop Applications in Digital Libraries ==&lt;br /&gt;
 &lt;br /&gt;
* Brian Tingle, California Digital Library, brian.tingle.cdlib.org@gmail.com&lt;br /&gt;
&lt;br /&gt;
[http://www.tinkerpop.com/ TinkerPop] is an open source software development group focusing on technologies in the [http://en.wikipedia.org/wiki/Graph_database graph database] space.   &lt;br /&gt;
This talk will provide a general introduction to the TinkerPop Graph Stack and the [https://github.com/tinkerpop/gremlin/wiki/Defining-a-Property-Graph property graph model] it uses.  The introduction will include code examples and explanations of the property graph models used by the [http://socialarchive.iath.virginia.edu/ Social Networks in Archival Context] project and show how the historical social graph is exposed as a JSON/REST API implemented by a TinkerPop [https://github.com/tinkerpop/rexster rexster] [https://github.com/tinkerpop/rexster-kibbles Kibble] that contains the application's graph theory logic.  Other graph database applications possible with TinkerPop, such as RDF support and citation analysis, will also be discussed.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Security in Mind ==&lt;br /&gt;
 &lt;br /&gt;
* Erin Germ, United States Naval Academy, Nimitz Library, germ@usna.edu&lt;br /&gt;
&lt;br /&gt;
I would like to talk about security of library software.&lt;br /&gt;
&lt;br /&gt;
Over the summer, I discovered a critical vulnerability in a vendor’s software that allowed me to assume any user’s identity for that site (verified), switch to any user (verified), and assume the role of any user at any other library that used this particular vendor's software (unverified, meaning I did not perform this, as I didn’t want to “hack” another library’s site).&lt;br /&gt;
&lt;br /&gt;
Within a 3 hour period, I discovered two vulnerabilities: 1) a minor one allowing me to access any backups from any library site, and 2) a critical one.  From start to finish, the examination, discovery of the vulnerability, and execution of a working exploit took less than 2 hours. The vulnerability was a result of poor cookie implementation. The exploit itself revolved around modifying the cookie, and then altering the browser’s permissions by assuming the role of another user.&lt;br /&gt;
&lt;br /&gt;
I do not intend to state which vendor it was, but I will show how I was able to perform this. If needed, I can do further research and “investigation” into other vendors’ software to see what I can “find”.&lt;br /&gt;
&lt;br /&gt;
''If selected, I will contact the vendor to inform them that I will present about this at C4L2012. I do not intend to release the name of the vendor.''&lt;br /&gt;
&lt;br /&gt;
== Search Engines and Libraries ==&lt;br /&gt;
 &lt;br /&gt;
* Greg Lindahl, blekko CTO, greg@blekko.com&lt;br /&gt;
&lt;br /&gt;
[https://blekko.com blekko] is a new web-scale search engine which enables end-users to create vertical search engines through a feature called [http://help.blekko.com/index.php/category/slashtags/ slashtags]. Slashtags can contain as few as one or as many as tens of thousands of websites relevant to a narrow or broad topic. We have an extensive set of slashtags curated by a combination of volunteers and an in-house librarian team, or end-users can create and share their own. This talk will cover examples of slashtag creation relevant to libraries, and show how to embed this search into a library website, either using JavaScript or via our API.&lt;br /&gt;
&lt;br /&gt;
''We have exhibited at a couple of library conferences, and have received a lot of interest. blekko is a free service.''&lt;br /&gt;
&lt;br /&gt;
== Beyond code. Versioning data with Git and Mercurial. ==&lt;br /&gt;
&lt;br /&gt;
* Stephanie Collett, California Digital Library, stephanie.collett@ucop.edu&lt;br /&gt;
* Martin Haye, California Digital Library, martin.haye@ucop.edu&lt;br /&gt;
&lt;br /&gt;
Within a relatively short time since their introduction, [http://en.wikipedia.org/wiki/Distributed_Version_Control_System distributed version control systems] (DVCS) like [http://git-scm.com/ Git] and [http://mercurial.selenic.com/ Mercurial] have enjoyed widespread adoption for versioning code. It didn’t take long for the library development community to start discussing the potential for using DVCS within our applications and repositories to version data. After all, many of the features that have made some of these systems popular in the open source community to version code (e.g. lightweight, file-based, compressed, reliable) also make them compelling options for versioning data.  And why write an entire versioning system from scratch if a DVCS solution can be a drop-in solution? At the [http://www.cdlib.org/ California Digital Library] (CDL) we’ve started using Git and Mercurial in some of our applications to version data. This has proven effective in some situations and unworkable in others. This presentation will be a practical case study of CDL’s experiences with using DVCS to version data. We will explain how we’re incorporating Git and Mercurial in our applications, describe our successes and failures and consider the issues involved in repurposing these systems for data versioning.&lt;br /&gt;
&lt;br /&gt;
==Design for Developers==&lt;br /&gt;
&lt;br /&gt;
*Lisa Kurt, University of Nevada, Reno, lkurt@unr.edu&lt;br /&gt;
&lt;br /&gt;
Users expect good design. This talk will delve into what makes really great design, what to look for, and how to do it. Learn the principles of great design to take your applications, user interfaces, and projects to a higher level. With years of experience in graphic design and illustration, Lisa will discuss design principles, trends, process, tools, and development. Design examples will be from her own projects as well as a variety from industry. You’ll walk away with design knowledge that you can apply immediately to a variety of applications and a number of top notch go-to resources to get you up and running.&lt;br /&gt;
&lt;br /&gt;
==Building research applications with Mendeley==&lt;br /&gt;
&lt;br /&gt;
* William Gunn, Mendeley, william.gunn@mendeley.com (@mrgunn)&lt;br /&gt;
&lt;br /&gt;
This is partly a tool talk and partly a big idea one.&lt;br /&gt;
&lt;br /&gt;
Mendeley has built the world's largest open database of research and we've now begun to collect some interesting social metadata around the document metadata. I would like to share with the Code4Lib attendees information about using this resource to do things within your application that have previously been impossible for the library community, or in some cases impossible without expensive database subscriptions. One thing that's now possible is to augment catalog search by surfacing information about content usage, allowing people to find not only things matching a query, but popular things or things read by their colleagues. In addition to augmenting search, you can also use this information to augment discovery. Imagine an online exhibit of artifacts from a newly discovered dig not just linking to papers which discuss the artifact, but linking to really good, interesting papers about the place and the people who made the artifacts. So the big idea is, &amp;quot;How will looking at the literature from a broader perspective than simple citation analysis change how research is done and communicated? How can we build tools that make this process easier and faster?&amp;quot; I can show some examples of applications that have been built using the Mendeley and PLoS APIs to begin to address this question, and I can also present results from Mendeley's developer challenge, which show what kinds of applications researchers are looking for and what kinds of applications people are building, and illustrate some interesting places where the two don't overlap.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Your UI can make or break the application (to the user, anyway)==&lt;br /&gt;
&lt;br /&gt;
* Robin Schaaf, University of Notre Dame, schaaf.4@nd.edu&lt;br /&gt;
&lt;br /&gt;
UI development is hard and too often ends up as an afterthought for computer programmers - if you were a CS major in college I'll bet you didn't have many, if any, design courses.  I'll talk about how to involve users up front in design and some common pitfalls of this approach.  I'll also make a case for why you should do the screen design before a single line of code is written.  And I'll throw in some ideas for increasing the usability and attractiveness of your web applications.  I'd like to make a case study of the UI development of our open source ERMS.&lt;br /&gt;
&lt;br /&gt;
==Why Nobody Knows How Big The Library Really Is - Perspective of a Library Outside Turned Insider==&lt;br /&gt;
&lt;br /&gt;
* Patrick Berry, California State University, Chico, pberry@csuchico.edu&lt;br /&gt;
&lt;br /&gt;
In this talk I would like to bring the perspective of an &amp;quot;outsider&amp;quot; (although an avowed IT insider) to let you know that people don't understand the full scope of the library.  As we &amp;quot;rethink education&amp;quot;, it is incumbent upon us to help educate our institutions as to the scope of the library.  I will present some of the tactics I'm employing to help people outside, and in some cases inside, the library to understand our size and the value we bring to the institution.&lt;br /&gt;
&lt;br /&gt;
==Building a URL Management Module using the Concrete5 Package Architecture==&lt;br /&gt;
&lt;br /&gt;
* David Uspal, Villanova University, david.uspal@villanova.edu&lt;br /&gt;
&lt;br /&gt;
Keeping track of URLs utilized across a large website such as a university library's, and keeping that content up to date for subject and course guides, can be a pain, and as an open source shop, we’d like to have an open source solution for this issue.  For this talk, I intend to detail our solution by walking step-by-step through the building process for our URL Management module -- including why a new solution was necessary; a quick rundown of our CMS ([http://www.concrete5.org Concrete5], a CMS that isn’t Drupal); utilizing the Concrete5 APIs to isolate our solution from core code (to avoid complications caused by core updates); how our solution was integrated into the CMS architecture for easy installation; and our future plans for the project.&lt;br /&gt;
&lt;br /&gt;
==Building an NCIP connector to OpenSRF to facilitate resource sharing==&lt;br /&gt;
&lt;br /&gt;
* Jon Scott, Lyrasis, jon_scott@wsu.edu and Kyle Banerjee, Orbis Cascade Alliance, banerjek@uoregon.edu &lt;br /&gt;
&lt;br /&gt;
How do you reverse engineer any protocol to provide a new service? Humans (and worse yet, committees) often design verbose protocols built around use cases that don't line up with current reality. To compound difficulties, the contents of protocol containers are not sufficiently defined/predictable, and the only assistance available is sketchy documentation and kind individuals on the internet willing to share what they learned via trial by fire.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
NCIP (NISO Circulation Interchange Protocol) is an open standard that defines a set of messages to support the exchange of circulation data between disparate circulation, interlibrary loan, and related applications -- widespread adoption of NCIP would eliminate huge amounts of duplicate processing in separate systems. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
This presentation discusses how we learned enough about NCIP and OpenSRF from scratch to build an NCIP responder for Evergreen to facilitate resource sharing in a large consortium that relies on over 20 different ILSes.&lt;br /&gt;
&lt;br /&gt;
==Practical Agile: What's Working for Stanford, Blacklight, and Hydra==&lt;br /&gt;
&lt;br /&gt;
* Naomi Dushay, Stanford University Libraries, ndushay@stanford.edu&lt;br /&gt;
&lt;br /&gt;
Agile development techniques can be difficult to adopt in the context of library software development.  Maybe your shop has only one or two developers, or you always have too many simultaneous projects.   Maybe your new projects can’t be started until 27 librarians reach consensus on the specifications.&lt;br /&gt;
&lt;br /&gt;
This talk will present successful Agile- and Silicon-Valley-inspired practices we’ve adopted at Stanford and/or in the Blacklight and Hydra projects.  We’ve targeted developer happiness as well as improved productivity with our recent changes.  User stories, dead week, sight lines … it’ll be a grab bag of goodies to bring back to your institution, including some ideas on how to adopt these practices without overt management buy-in.&lt;br /&gt;
&lt;br /&gt;
==Quick and &amp;lt;strike&amp;gt;Dirty&amp;lt;/strike&amp;gt; Clean Usability: Rapid Prototyping with Bootstrap==&lt;br /&gt;
&lt;br /&gt;
* Shaun Ellis, Princeton University Libraries, shaune@princeton.edu &lt;br /&gt;
&lt;br /&gt;
''&amp;quot;The code itself is unimportant; a project is only as useful as people actually find it.&amp;quot;  - Linus Torvalds'' [http://bit.ly/p4uuyy]&lt;br /&gt;
&lt;br /&gt;
Usability has been a buzzword for some time now, but what is the process for making the transition toward a better user experience and, hence, better-designed library sites?  I will discuss one facet of the process my team is using to redesign the Finding Aids site for Princeton University Libraries (still in development).  The approach involves the use of rapid prototyping, with Bootstrap [http://twitter.github.com/bootstrap/], to make sure we are on track with what users and stakeholders expect up front and throughout the development process.&lt;br /&gt;
&lt;br /&gt;
Because Bootstrap allows for early and iterative user feedback, it is more effective than the historic Photoshop mockups/wireframe technique.  The Photoshop approach allows stakeholders to test the look, but not the feel -- and often leaves developers scratching their heads.  Being a CSS/HTML/Javascript grid-based framework, Bootstrap makes it easy for anyone with a bit of HTML/CSS chops to quickly build slick, interactive prototypes right in the browser -- tangible solutions which can be shared, evaluated, revised, and followed by all stakeholders (see Minimum Viable Products [http://en.wikipedia.org/wiki/Minimum_viable_product]).  Efficiency is multiplied because the customized prototypes can flow directly into production use, as is the goal with iterative development approaches, such as the Agile methodology.&lt;br /&gt;
&lt;br /&gt;
While Bootstrap is not the only framework that offers grid-based layout, development is expedited and usability is enhanced by Bootstrap's use of &amp;quot;prefabbed&amp;quot; conventional UI patterns, clean typography, and lean Javascript for interactivity.   Furthermore, out-of-the-box Bootstrap comes in a fairly neutral palette, so focus remains on usability and does not devolve into premature discussions of color or branding choices.  Finally, Less can be a powerful tool in conjunction with Bootstrap, but is not necessary.  I will discuss the pros and cons, and offer examples of how to get up and running with or without Less.&lt;br /&gt;
&lt;br /&gt;
==Search Engine Relevancy Tuning - A Static Rank Framework for Solr/Lucene==&lt;br /&gt;
&lt;br /&gt;
* Mike Schultz, Amazon.com (formerly Summon Search Architect) mike.schultz@gmail.com&lt;br /&gt;
&lt;br /&gt;
Solr/Lucene provides a lot of flexibility for adjusting relevancy scoring and improving search results.  Roughly speaking there are two areas of concern: Firstly, a 'dynamic rank' calculation that is a function of the user query and document text fields.  And secondly, a 'static rank' which is independent of the query and generally is a function of non-text document metadata.  In this talk I will outline an easily understood, hand-tunable static rank system with a minimal number of parameters.&lt;br /&gt;
&lt;br /&gt;
The obvious major feature of a search engine is to return results relevant to a user query.  Perhaps less obvious is the huge role query independent document features play in achieving that. Google's PageRank is an example of a static ranking of web pages based on links and other secret sauce.  In the Summon service, our 800 million documents have features like publication date, document type, citation count and Boolean features like the-article-is-peer-reviewed.  These fields aren't textual and remain 'static' from query to query, but need to influence a document's relevancy score.  In our search results, with all query related features being equal, we'd rather have more recent documents above older ones, Journals above Newspapers, and articles that are peer reviewed above those that are not. The static rank system I will describe achieves this and has the following features:&lt;br /&gt;
&lt;br /&gt;
* Query-time only calculation - nothing is baked into the index - with parameters adjustable at query time.&lt;br /&gt;
* The system is based on a signal metaphor where components are 'wired' together.  System components allow multiplexing, amplifying, summing, tunable band-pass filtering, string-to-value-mapping all with a bare minimum of parameters.&lt;br /&gt;
* An intuitive approach for mixing dynamic and static rank that is more effective than simple adding or multiplying.&lt;br /&gt;
* A way of equating disparate static metadata types that leads to understandable results ordering.&lt;br /&gt;
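A generic illustration of the idea (not the Summon system's actual components or parameters): map one query-independent feature, publication age, through a tunable decay into a static score, then blend it with the dynamic text score. The half-life and blend weight here are invented for the sketch.

```python
def static_recency(age_years, half_life=10.0):
    """Static score in (0, 1]: halves every `half_life` years (illustrative)."""
    return 0.5 ** (age_years / half_life)

def blended_score(dynamic, static, weight=0.3):
    # Simple linear blend; the talk argues for subtler mixing than adding
    # or multiplying, so treat this as a baseline, not the proposed method.
    return (1 - weight) * dynamic + weight * static

# A 10-year-old document with a dynamic (text-match) score of 0.8:
score = blended_score(dynamic=0.8, static=static_recency(10))
```

Keeping the static calculation query-time only, as the list above requires, means parameters like `half_life` and `weight` can be adjusted per request without reindexing.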
&lt;br /&gt;
==Submitting Digitized Book-like things to the Internet Archive==&lt;br /&gt;
&lt;br /&gt;
* Joel Richard, Smithsonian Institution Libraries, richardjm@si.edu&lt;br /&gt;
&lt;br /&gt;
The Smithsonian Libraries has submitted thousands of out-of-copyright items to the Internet Archive over the years. Specifically in relation to the Biodiversity Heritage Library, we have developed an in-house boutique scanning and upload process that became a learning experience in automated uploading to the Archive. As part of the software development, we created a whitepaper that details the combined learning experiences of the Smithsonian Libraries and the Missouri Botanical Garden. We will discuss some of the contents of this whitepaper in the context of our scanning process and the manner in which we upload items to the Archive. &lt;br /&gt;
&lt;br /&gt;
Our talk will include a discussion of the types of files and their formats used by the Archive, processes that the Archive performs on uploaded items, ways of interacting and affecting those processes, potential pitfalls and solutions that you may encounter when uploading, and tools that the Archive provides to help monitor and manage your uploaded documents. &lt;br /&gt;
&lt;br /&gt;
Finally, we'll wrap up with a brief summary of how to use things that are on the Internet Archive in your own websites.&lt;br /&gt;
&lt;br /&gt;
== So... you think you want to Host a Code4Lib National Conference, do you? ==&lt;br /&gt;
&lt;br /&gt;
* Elizabeth Duell, Orbis Cascade Alliance, eduell@uoregon.edu&lt;br /&gt;
&lt;br /&gt;
Are you interested in hosting your own Code4Lib Conference? Do you know what it would take? What does BEO stand for? What does F&amp;amp;B Minimum mean? Who would you talk to for support/mentoring? There are so many things to think about: internet support, venue size, rooming blocks, contracts, dietary restrictions, and coffee (can't forget the coffee!), just to name a few. Putting together a conference of any size can look daunting, so let's take the scary out of it and replace it with a can-do attitude!&lt;br /&gt;
&lt;br /&gt;
Be a step ahead of the game by learning from the people behind the curtain. Ask questions and be given templates/cheat sheets! &lt;br /&gt;
&lt;br /&gt;
== HTML5 Microdata and Schema.org ==&lt;br /&gt;
 &lt;br /&gt;
* Jason Ronallo, North Carolina State University Libraries, jason_ronallo@ncsu.edu&lt;br /&gt;
&lt;br /&gt;
When the big search engines announced support for HTML5 microdata and the schema.org vocabularies, the balance of power for semantic markup in HTML shifted. &lt;br /&gt;
* What is microdata? &lt;br /&gt;
* Where does microdata fit with regards to other approaches like RDFa and microformats? &lt;br /&gt;
* Where do libraries stand in the worldview of Schema.org and what can they do about it? &lt;br /&gt;
* How can implementing microdata and schema.org optimize your sites for search engines?&lt;br /&gt;
* What tools are available?&lt;br /&gt;
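To make the questions above concrete, here is a tiny illustrative schema.org microdata fragment (a Book with `name` and `author` properties) and a naive Python check that extracts the `itemprop` names from it. The markup and parser are this sketch's own, not examples from the talk.

```python
from html.parser import HTMLParser

# Illustrative schema.org/Book microdata; itemscope/itemtype/itemprop
# are the HTML5 microdata attributes the talk discusses.
HTML = """
<div itemscope itemtype="http://schema.org/Book">
  <span itemprop="name">Moby-Dick</span>
  <span itemprop="author">Herman Melville</span>
</div>
"""

class ItempropCollector(HTMLParser):
    """Collect itemprop attribute values in document order."""
    def __init__(self):
        super().__init__()
        self.props = []

    def handle_starttag(self, tag, attrs):
        for name, value in attrs:
            if name == "itemprop":
                self.props.append(value)

parser = ItempropCollector()
parser.feed(HTML)
```

Real extraction tools walk the full item tree (nested `itemscope`s, `itemref`); this only shows where the vocabulary terms live in the markup.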
&lt;br /&gt;
== Stack View: A Library Browsing Tool ==&lt;br /&gt;
 &lt;br /&gt;
* Annie Cain, Harvard Library Innovation Lab, acain@law.harvard.edu&lt;br /&gt;
&lt;br /&gt;
In an effort to recreate and build upon the traditional method of browsing a physical library, we used catalog data, including dimensions and page count, to create a [http://librarylab.law.harvard.edu/projects/stackview/ virtual shelf].&lt;br /&gt;
&lt;br /&gt;
This CSS and JavaScript backed visualization allows items to sit on any number of different shelves, really taking advantage of its digital nature.  See how we built Stack View on top of our data and learn how you can create shelves of your own using our open source code.&lt;br /&gt;
&lt;br /&gt;
== “Linked-Data-Ready” Software for Libraries ==&lt;br /&gt;
&lt;br /&gt;
* Jennifer Bowen, University of Rochester River Campus Libraries, jbowen@library.rochester.edu&lt;br /&gt;
&lt;br /&gt;
Linked data is poised to replace MARC as the basis for the new library bibliographic framework.  For libraries to benefit from linked data, they must learn about it, experiment with it, demonstrate its usefulness, and take a leadership role in its deployment. &lt;br /&gt;
&lt;br /&gt;
The eXtensible Catalog Organization (XCO) offers open-source software for libraries that is “linked-data-ready.” XC software prepares MARC and Dublin Core metadata for exposure to the semantic web, incorporating FRBR Group 1 entities and registered vocabularies for RDA elements and roles. This presentation will include a software demonstration, proposed software architecture for creation and management of linked data, a vision for how libraries can migrate from MARC to linked data, and an update on XCO progress toward linked data goals.&lt;br /&gt;
&lt;br /&gt;
== How people search the library from a single search box ==&lt;br /&gt;
&lt;br /&gt;
* Cory Lown, North Carolina State University Libraries, cory_lown@ncsu.edu&lt;br /&gt;
&lt;br /&gt;
Searching the library is complex. There's the catalog, article databases, journal title and database title look-ups, the library website, finding aids, knowledge bases, etc. How would users search if they could get to all of these resources from a single search box? I'll share what we've learned about single search at NCSU Libraries by tracking use of QuickSearch (http://www.lib.ncsu.edu/search/index.php?q=aerospace+engineering), our home-grown unified search application. As part of this talk I will suggest low-cost ways to collect real world use data that can be applied to improve search. I will try to convince you that data collection must be carefully planned and designed to be an effective tool to help you understand what your users are telling you through their behavior. I will talk about how the fragmented library resource environment challenges us to provide useful and understandable search environments. Finally, I will share findings from analyzing millions of user transactions about how people search the library from a production single search box at a large university library.&lt;br /&gt;
&lt;br /&gt;
== An Incremental Approach to Archival Description and Access ==&lt;br /&gt;
&lt;br /&gt;
* Chela Scott Weber, New York University Libraries, chelascott@gmail.com&lt;br /&gt;
* Mark A. Matienzo, Yale University Library, mark@matienzo.org&lt;br /&gt;
&lt;br /&gt;
''This is placeholder text; description coming shortly''&lt;br /&gt;
&lt;br /&gt;
== Making the Easy Things Easy: A Generic ILS API ==&lt;br /&gt;
&lt;br /&gt;
* Wayne Schneider, Hennepin County Library, wschneider@hclib.org&lt;br /&gt;
&lt;br /&gt;
Some stuff we try to do is complicated, because, let's face it, library data is hard. Some stuff, on the other hand, should be easy. Given an item identifier, I should be able to look at item availability. Given a title identifier, I should be able to place a request. And no, I shouldn't have to parse through the NCIP specification or write a SIP client to do it.&lt;br /&gt;
&lt;br /&gt;
This talk will present work we have done on a web services approach to an API for traditional library transactional data, including example applications.&lt;br /&gt;
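As a sketch of what "easy" could look like, the JSON shape below shows an availability lookup response of the kind the abstract argues for. The field names are invented for illustration and are not Hennepin County Library's actual API.

```python
import json

def availability_response(item_id, available, location):
    """Hypothetical response body for GET /items/{item_id}/availability."""
    return json.dumps({
        "itemId": item_id,
        "available": available,
        "location": location,
    })

# A client needs only an item identifier and a JSON parser -- no NCIP
# message parsing, no SIP client.
resp = json.loads(availability_response("i1234", True, "Main Library"))
```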
&lt;br /&gt;
== Your Catalog in Linked Data==&lt;br /&gt;
&lt;br /&gt;
* Tom Johnson, Oregon State University Libraries, thomas.johnson@oregonstate.edu&lt;br /&gt;
&lt;br /&gt;
Linked Library Data activity over the last year has seen bibliographic data sets and vocabularies proliferating from traditional library&lt;br /&gt;
sources. We've reached a point where regular libraries don't have to go it alone to be on the Semantic Web. There is a quickly growing pool of things we can actually ''link to'', and everyone's existing data can be immediately enriched by participating.&lt;br /&gt;
&lt;br /&gt;
This is a quick and dirty road to getting your catalog onto the Linked Data web. The talk  will take you from start to finish, using Free Software tools to establish a namespace, put up a SPARQL endpoint, make a simple data model, convert MARC records to RDF, and link the results to major existing data sets (skipping conveniently over pesky processing time). A small amount of &amp;quot;why linked data?&amp;quot; content will be covered, but the primary goal is to leave you able to reproduce the process and start linking your catalog into the web of data. Appropriate documentation will be on the web.&lt;br /&gt;
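The MARC-to-RDF step in the road map above can be sketched with a hand-rolled record-to-Turtle function. The namespace and author URI below are placeholders invented for the example; the Dublin Core Terms predicates are real vocabulary URIs.

```python
def record_to_turtle(record_id, title, author_uri):
    """Emit two Turtle triples for one catalog record (illustrative model)."""
    subject = f"<http://example.org/catalog/{record_id}>"
    lines = [
        f'{subject} <http://purl.org/dc/terms/title> "{title}" .',
        f"{subject} <http://purl.org/dc/terms/creator> <{author_uri}> .",
    ]
    return "\n".join(lines)

# The author URI would point at an external data set (VIAF, id.loc.gov,
# etc.); "12345" is a placeholder identifier, not a real record.
ttl = record_to_turtle("b1234", "Moby-Dick", "http://viaf.org/viaf/12345")
```

Linking the creator to an external URI rather than a literal string is what makes the record part of the web of data; a real conversion would use an RDF library and a worked-out data model rather than string formatting.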
&lt;br /&gt;
== Getting the Library into the Learning Management System using Basic LTI == &lt;br /&gt;
&lt;br /&gt;
* David Walker, California State University, dwalker@calstate.edu&lt;br /&gt;
&lt;br /&gt;
The integration of library resources into learning management systems (LMS) has long been something of a holy grail for academic libraries.  The ability to deliver targeted library systems and services to students and faculty directly within their online course would greatly simplify access to library resources.  Yet, the technical barriers to achieving that goal have to date been formidable.  &lt;br /&gt;
&lt;br /&gt;
The recently released Learning Tool Interoperability (LTI) protocol, developed by IMS, now greatly simplifies this process by allowing libraries (and others) to develop and maintain “tools” that function like a native plugin or building block within the LMS, but ultimately live outside of it.  In this presentation, David will provide an overview of Basic LTI, a simplified subset (or profile) of the wider LTI protocol, showing how libraries can use this to easily integrate their external systems into any major LMS.  He’ll showcase the work Cal State has done to do just that.&lt;br /&gt;
&lt;br /&gt;
== Turn your Library Proxy Server into a Honeypot ==&lt;br /&gt;
 &lt;br /&gt;
* Calvin Mah, Simon Fraser University, calvinm@sfu.ca (@calvinmah)&lt;br /&gt;
&lt;br /&gt;
EZproxy has provided libraries with a useful tool for giving patrons offsite online access to licensed electronic resources.  This has not gone unnoticed by the unscrupulous users of the Internet who are either unwilling or unable to obtain legitimate access to these materials for themselves.  Instead, they buy or share hacked university computing accounts for unauthorized access.  When undetected, abuse of compromised university accounts can lead to abuse of vendor resources, which can lead to the blocking of the entire campus IP range from accessing that resource.&lt;br /&gt;
&lt;br /&gt;
Simon Fraser University Library has been proactively detecting and thwarting unauthorized attempts through log analysis.  Since SFU began analysing its EZproxy logs, the number of new SFU login credentials posted and shared in publicly accessible forums has dropped to zero.  Since our log monitoring began in 2008, we have found an annual average of 140 compromised or hacked SFU login credentials.  Instead of being a single point of weakness in campus IT security, the library’s proxy server is a honeypot exposing weak passwords, keystroke-logging trojans installed on patron PCs, and campus network password sniffers.&lt;br /&gt;
&lt;br /&gt;
This talk will discuss techniques such as geomapping login attempts, strategies such as seeding phishing attempts, and tools such as statistical log analysis used to detect compromised login credentials.&lt;br /&gt;
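&lt;br /&gt;
One statistical technique of this kind can be sketched in a few lines: flag any account whose logins arrive from an implausible number of distinct IP addresses. The log pattern assumes a common-log-style EZproxy line, and the threshold is illustrative:&lt;br /&gt;

```python
import re
from collections import defaultdict

# common-log-style proxy line: "ip - username [timestamp] ..." (an assumed format)
LOG_LINE = re.compile(r"^(?P<ip>\S+) \S+ (?P<user>\S+) \[")

def flag_suspect_accounts(lines, max_ips=3):
    """Map each username to its set of source IPs, keeping only accounts
    seen from max_ips or more distinct addresses."""
    seen = defaultdict(set)
    for line in lines:
        m = LOG_LINE.match(line)
        if m and m.group("user") != "-":
            seen[m.group("user")].add(m.group("ip"))
    return {user: ips for user, ips in seen.items() if len(ips) >= max_ips}
```

Geomapping goes a step further: resolve each IP to a country and alert when one account logs in from several countries within a few hours.&lt;br /&gt;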
&lt;br /&gt;
== Relevance Ranking in the Scholarly Domain ==&lt;br /&gt;
&lt;br /&gt;
* Tamar Sadeh, PhD, Ex Libris Group, tamar.sadeh@exlibrisgroup.com&lt;br /&gt;
&lt;br /&gt;
The greatest challenge for discovery systems is how to provide users with the most relevant search results, given the immense landscape of available content. In a manner that is similar to human interaction between two parties, in which each person adjusts to the other in tone, language, and subject matter, discovery systems would ideally be sophisticated and flexible enough to adjust their algorithms to individual users and each user’s information needs. &lt;br /&gt;
&lt;br /&gt;
When evaluating the relevance of an item to a specific user in a specific context, relevance-ranking algorithms need to take into account, in addition to the degree to which the item matches the query, information that is not embodied in the item itself. Such information, which includes the item’s scholarly value, the type of search that the user is conducting (e.g., an exploratory search or a known-item search), and other factors, enables a discovery system to fulfill user expectations that have been shaped by experience with Web search engines.  &lt;br /&gt;
&lt;br /&gt;
The session will focus on the challenges of developing and evaluating relevance-ranking algorithms for the scholarly domain. Examples will be drawn mainly from the relevance-ranking technology deployed by the Ex Libris Primo discovery solution. &lt;br /&gt;
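&lt;br /&gt;
To make the "information not embodied in the item" point concrete, here is a deliberately toy scorer that blends a text-match score with a scholarly-value proxy and a known-item bonus; every weight and signal below is invented for illustration and is not Primo's algorithm.&lt;br /&gt;

```python
def blended_score(text_score, citations, is_known_item, title_exact_match,
                  w_text=0.6, w_value=0.3, w_known=0.1):
    """Toy blend of query-document match with signals outside the item text.
    A citation count stands in for "scholarly value"; a known-item search
    rewards exact title matches. All weights are illustrative."""
    value = min(citations, 100) / 100.0  # crude normalization of scholarly value
    known = 1.0 if (is_known_item and title_exact_match) else 0.0
    return w_text * text_score + w_value * value + w_known * known
```

The same text match can thus rank quite differently for an exploratory search than for a known-item search, which is exactly the adjustment the abstract describes.&lt;br /&gt;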
&lt;br /&gt;
== Mobile Library Catalog using Z39.50 ==&lt;br /&gt;
 &lt;br /&gt;
* James Paul Muir, The Ohio State University, muir.29@osu.edu&lt;br /&gt;
&lt;br /&gt;
This talk puts a new spin on an age-old technology: a universal interface that exposes any Z39.50-capable library catalog as a simple, useful REST API for use in native mobile apps and the mobile web.&lt;br /&gt;
&lt;br /&gt;
The talk includes the exploration and demonstration of the Ohio State University’s native app “OSU Mobile” for iOS and Android and shows how the library catalog search was integrated.&lt;br /&gt;
&lt;br /&gt;
The backbone of the project is a REST API, which was created in a weekend using a PHP framework that translates OPAC XML results from the Z39.50 interface into mobile-friendly JSON formatting.&lt;br /&gt;
&lt;br /&gt;
Raw Z39.50 search results contain all MARC information as well as local holdings.  &lt;br /&gt;
Configurable search fields and the ability to select which fields to include in the JSON output make this solution a perfect fit for any Z39.50-capable library catalog.&lt;br /&gt;
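&lt;br /&gt;
The XML-to-JSON translation at the core of such an API can be sketched in a few lines: flatten each MARCXML record into a dictionary keyed by a configurable list of field tags (the tags shown are an assumed configuration, not OSU's):&lt;br /&gt;

```python
import json
import xml.etree.ElementTree as ET

MARC = "{http://www.loc.gov/MARC21/slim}"  # MARCXML namespace

def marcxml_to_json(record_xml, wanted=("100", "245", "260")):
    """Flatten one MARCXML record into mobile-friendly JSON,
    keeping only the tags listed in `wanted`."""
    root = ET.fromstring(record_xml)
    out = {}
    for field in root.iter(MARC + "datafield"):
        tag = field.get("tag")
        if tag in wanted:
            # join all subfield values into one display string
            text = " ".join(sf.text or "" for sf in field)
            out.setdefault(tag, []).append(text.strip())
    return json.dumps(out)
```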
  &lt;br /&gt;
Looking forward, possibilities for expansion include the use of Off Campus Sign-In for online resources, so mobile patrons can directly access online resources from a smartphone (already included in the Android version of OSU Mobile), as well as integration with library patron accounts.&lt;br /&gt;
&lt;br /&gt;
Enjoy this alternative to writing a custom OPAC adapter or using a 3rd party service for exposing library records and use the proven and universal Z39.50 interface directly against your library catalog. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== DMPTool: Guidance and Resources for your data management plan ==&lt;br /&gt;
 &lt;br /&gt;
* Marisa Strong, California Digital Library, marisa.strong@ucop.edu&lt;br /&gt;
&lt;br /&gt;
A number of U.S. funding agencies, such as the National Science Foundation, require researchers to supply detailed, cost-effective plans for managing research data, called Data Management Plans.  To help researchers meet this requirement, several organizations (the California Digital Library, University of Illinois, University of Virginia, Smithsonian Institution, the DataONE consortium, and the UK Digital Curation Centre) came together to develop the DMPTool. The goal of the DMPTool is to provide researchers with guidance, links to resources, and help with writing data management plans.&lt;br /&gt;
&lt;br /&gt;
The tool presents the requirements of the specific funding agency a researcher is applying to, along with detailed help for each section.  Users can create a plan, preview it, export it in various formats, and make it freely accessible for others to read. Users who are members of participating institutions also benefit from institution-specific help for each section, suggested answers, and resources for managing their data.  Institutions can announce events, workshops, and data management information via the DMPTool blog available from within the tool.&lt;br /&gt;
&lt;br /&gt;
This open-source software tool supports federated login using Shibboleth, which allows users to log in via their home institutions. It is a Ruby on Rails application hosted on a SLES VM.  We had a geographically distributed development team sharing code on Bitbucket.&lt;br /&gt;
&lt;br /&gt;
This talk will demo the features of the application as well as highlight the development practices and infrastructure used in building the application.&lt;br /&gt;
&lt;br /&gt;
== Lies, Damned Lies, and Lines of Code Per Day ==&lt;br /&gt;
 &lt;br /&gt;
* James Stuart, Columbia University, james.stuart@columbia.edu&lt;br /&gt;
&lt;br /&gt;
We've all heard about that one study that showed that Pair Programming was 20% more efficient than working alone. Or maybe you saw on a blog that study showing that programmers who write fewer lines of code per day are more efficient...or was it less efficient? And of course, we all know that programmers who work in (Ruby|Python|Java|C|Erlang) have been shown to be more efficient.&lt;br /&gt;
&lt;br /&gt;
A quick examination of some of the research surrounding programming efficiency and methodology, with a focus on personal productivity, and how to incorporate the more believable research into your own team's workflow.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==An Anatomy of a Book Viewer==&lt;br /&gt;
&lt;br /&gt;
*Mohammed Abuouda, Bibliotheca Alexandrina, mohammed.abuouda@bibalex.org&lt;br /&gt;
&lt;br /&gt;
Bibliotheca Alexandrina (BA) hosts 210,000 digital books in different languages, available at http://dar.bibalex.org, including the largest collection of digitized Arabic books. Using open source tools, BA has developed a modular book viewer that can be deployed in any environment to provide users with a highly personalized reading experience. BA’s book viewer provides several services that make this possible: morphological search in different languages, localization, server load balancing, scalability, and image processing. Personalization features include different types of annotation, such as sticky notes, highlighting, and underlining. The viewer can also be embedded in any webpage, and its skin can be changed.&lt;br /&gt;
&lt;br /&gt;
In this talk we will describe the book viewer architecture, its modular design and how to incorporate it in your current environment.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Carrier: Digital Signage System ==&lt;br /&gt;
 &lt;br /&gt;
* [[User:jmspargu|Justin Spargur]], The University of Arizona, spargurj@u.library.arizona.edu&lt;br /&gt;
 &lt;br /&gt;
Carrier is a web-based digital signage application written in JavaScript, PHP, and MySQL that can be used on any device with an internet connection and a web browser. Used across the University of Arizona Libraries' campuses, Carrier can display any web-based content, allowing users to promote new library collections and services via images, web pages, or videos. Users can easily manage the order in which slides are delivered, control how long each slide is displayed, set dates for when slides should be shown, and even specify the locations where slides should be presented.&lt;br /&gt;
 &lt;br /&gt;
In addition to marketing purposes, Carrier can be used to send both low and high priority alerts to patrons. Alerts can be sent through the administrative interface, via RSS feeds, and even through a Twitter feed, allowing for easy integration with existing campus emergency notification systems.&lt;br /&gt;
 &lt;br /&gt;
I will describe the technical underpinnings of Carrier, challenges that we’ve faced since its implementation, enhancements planned for the next release of the software, and discuss our plans for releasing this software for others to use '''for free'''.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== We Built It.  They Came.  Now What? ==&lt;br /&gt;
 &lt;br /&gt;
* [[User:evviva|Evviva Weinraub]], Oregon State University, evviva.weinraub@oregonstate.edu&lt;br /&gt;
 &lt;br /&gt;
You have a great idea for something new or useful.  You build it, put it out there on GitHub, do a couple of presentations, maybe a press release and BAM, suddenly you’ve created a successful Open Source tool that others are using.  Great!&lt;br /&gt;
&lt;br /&gt;
Fast-forward 3 years. &lt;br /&gt;
&lt;br /&gt;
You still believe in the product, but you can no longer be solely responsible for taking care of it.  Just putting it out there has made it a tool others use, but how do you find a community of folks who believe in the product as much as you do and are willing to commit the time and energy to building, sustaining, and moving the project forward?  And how do you figure out whether it's worth trying at all?&lt;br /&gt;
&lt;br /&gt;
In 2006, OSU Libraries built an Interactive Course Assignment system called Library a la Carte – think LibGuides, only open source.  We now find ourselves in just this predicament.&lt;br /&gt;
&lt;br /&gt;
What can we do as a community to move beyond our build-first-ask-questions-later mentality and embed sustainability into our new and existing ideas and products without moving toward commercialization?  I fully expect we’ll end up with more questions than answers, but let’s spend some time talking about our predicament and yours and think about how we can come out the other side.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Contextually Rich Collections Without the Risk: Digital Forensics and Automated Data Triage for Digital Collections ==&lt;br /&gt;
&lt;br /&gt;
* [[User:kamwoods|Kam Woods]], University of North Carolina at Chapel Hill, kamwoods@email.unc.edu&lt;br /&gt;
* Cal Lee, University of North Carolina at Chapel Hill, callee -- at -- ils -- unc -- edu&lt;br /&gt;
* Matthew Kirschenbaum, University of Maryland, mkirschenbaum@gmail.com&lt;br /&gt;
&lt;br /&gt;
Digital libraries and archives are increasingly faced with a significant backlog of unprocessed data along with an accelerating stream of incoming material. These data often arrive from donor organizations, institutions, and individuals on hard drives, optical and magnetic disks, flash memory devices, and even complete hardware (traditional desktop computers and mobile systems). &lt;br /&gt;
&lt;br /&gt;
Information on these devices may be sensitive, obscured by operating system arcana, or require specialized tools and procedures to parse. Furthermore, the sheer volume of materials being handled means that even simple tasks such as providing useful content reports can be impractical (or impossible) in current workflows.&lt;br /&gt;
&lt;br /&gt;
Many of the tasks currently associated with data triage and analysis can be simplified and performed with improved coverage and accuracy through the use of open source digital forensics tools. In this talk we will discuss recent developments in providing digital librarians and archivists with simple, open source tools to accomplish these tasks.  We will discuss tools and methods being tested, developed and packaged as part of the [http://bitcurator.net BitCurator] project.  These tools can be used to reduce or eliminate laborious, error-prone tasks in existing workflows and put valuable time back into the hands of digital librarians and archivists -- time better used to identify and tackle complex tasks that *cannot* be solved by software.&lt;br /&gt;
&lt;br /&gt;
== Finding Movies with FRBR and Facets ==&lt;br /&gt;
 &lt;br /&gt;
* Kelley McGrath, University of Oregon, kelleym@uoregon.edu&lt;br /&gt;
&lt;br /&gt;
How might the Functional Requirements for Bibliographic Records (FRBR) model and faceted navigation improve access to film and video in libraries? I will describe the design and implementation of a FRBR-inspired prototype discovery interface ([http://blazing-sunset-24.heroku.com/ http://blazing-sunset-24.heroku.com/]) using Solr and Blacklight. This approach demonstrates how FRBR can enable a work-centric view that is focused on the original movie or program while supporting users in selecting an appropriate version.&lt;br /&gt;
&lt;br /&gt;
The prototype features two sets of facets, which independently address two important information needs: (1) &amp;quot;What kind of movie or program do you want to watch?&amp;quot; (e.g., a 1970s TV sitcom, something directed by Kurosawa, or an early German horror film); (2) &amp;quot;How do you want to watch it? Where do you want to get it from?&amp;quot; (e.g., on Blu-ray, with Spanish subtitles, available at the local public library). This structure enables patrons to narrow, broaden and pivot across facet values instead of limiting them to the tree-structured hierarchy common with existing FRBR applications. &lt;br /&gt;
&lt;br /&gt;
This type of interface requires controlled data values mapped to FRBR group 1 entities, which in many cases are not available in existing MARC bibliographic records. I will discuss ongoing work using the XC Metadata Services Toolkit ([http://www.extensiblecatalog.org/ http://www.extensiblecatalog.org/]) to extract and normalize data from existing MARC records for videos in order to populate a FRBRized, faceted discovery interface.&lt;br /&gt;
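&lt;br /&gt;
In Solr terms, the two facet groups amount to one request carrying two sets of facet.field parameters. A sketch of building such a request, with hypothetical field names standing in for the prototype's actual schema:&lt;br /&gt;

```python
from urllib.parse import urlencode

def movie_facet_query(q, filters=None):
    """Build Solr query parameters for the two facet groups described above.
    Field names (decade, director, format, ...) are hypothetical stand-ins."""
    params = [("q", q), ("facet", "true"), ("wt", "json")]
    # group 1: what kind of movie or program? (work-level facets)
    for f in ("decade", "director", "genre", "country"):
        params.append(("facet.field", f))
    # group 2: how and where do you want to watch it? (manifestation-level facets)
    for f in ("format", "subtitle_language", "holding_library"):
        params.append(("facet.field", f))
    for fq in (filters or []):
        params.append(("fq", fq))  # e.g. 'decade:"1970s"' to narrow or pivot
    return urlencode(params)
```

Because both groups are plain facets on one index, a patron can narrow by a work-level value and pivot on a manifestation-level one in either order, with no fixed tree to descend.&lt;br /&gt;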
&lt;br /&gt;
==Escaping the Black Box — Building a Platform to Foster Collaborative Innovation==&lt;br /&gt;
&lt;br /&gt;
* Karen Coombs, OCLC, coombsk@oclc.org&lt;br /&gt;
* Kathryn Harnish, OCLC harnishk@oclc.org&lt;br /&gt;
&lt;br /&gt;
Exposed Web services offer an unprecedented opportunity for collaborative innovation — that’s one of the hallmarks of Web-based services like Amazon, Google, and Facebook.  These environments are popular not only for their native feature sets, but also for the array of community-developed apps that can run in them.  The creativity of the development communities that work in these systems brings new value to all types of users.&lt;br /&gt;
&lt;br /&gt;
What if the library community could realize this same level of collaborative innovation around its systems?  What kinds of support would be necessary to transform library systems from “black boxes” to more open, accessible environments in which value is created and multiplied by the user community?&lt;br /&gt;
&lt;br /&gt;
In this session, we’ll discuss the challenges and opportunities OCLC faced in creating just that kind of environment.  The recently released OCLC “cooperative platform” provides improved access to a wide variety of OCLC’s data and services, allowing library developers and other interested partners to collaborate, innovate, and share new solutions with fellow libraries.  We’ll describe the open standards and technologies we’ve put in play as we:&lt;br /&gt;
* exposed robust Web services that provide access to both data and business logic; &lt;br /&gt;
* created an architecture for integrating community-built applications in OCLC (and other) products; and &lt;br /&gt;
* developed an infrastructure to support community development, collaboration, and app sharing&lt;br /&gt;
&lt;br /&gt;
Learn how OCLC is helping to open the “black box” -- and give libraries the freedom to become true partners in the evolution of their library systems.&lt;br /&gt;
&lt;br /&gt;
== Code inheritance; or, The Ghosts of Perls Past  ==&lt;br /&gt;
&lt;br /&gt;
* Jon Gorman, University of Illinois, jtgorman@illinois.edu&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Any organization has a history not found in its archives or museums. Mysteries exist whose origins are lost to collective institutional knowledge.  Despite what humans have forgotten, our servers and computers keep running. Instructions crafted long ago execute like digital ghosts, following the orders of masters who have long since left.&lt;br /&gt;
&lt;br /&gt;
The University of Illinois has a fair amount of Perl code created by several different developers. This code includes software that handles our data feeds coming both in and out of campus, reports against our Voyager system, some web applications, and more.&lt;br /&gt;
&lt;br /&gt;
I'll touch a little on this historical legacy and why Perl is used. From there I'll share some tips, best practices, and some of the mistakes I've made in trying to maintain this code. Most of the advice will translate to any language, but the code and libraries discussed will be Perl. The presentation will also touch on our internal debate over whether or not to port parts of our Perl codebase.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Recorded Radio/TV broadcasts streamed for library users ==&lt;br /&gt;
&lt;br /&gt;
* Kåre Fiedler Christiansen, The State and University Library Denmark, kfc@statsbiblioteket.dk&lt;br /&gt;
* Mads Villadsen, The State and University Library Denmark, mv@statsbiblioteket.dk&lt;br /&gt;
&lt;br /&gt;
&amp;quot;Provide online access to the Radio/TV collection,&amp;quot; my boss said. About 500,000&lt;br /&gt;
hours of Danish broadcast radio and TV. Easy, right? Well, half a year later&lt;br /&gt;
we'd done it, but it turned out to involve practically every IT employee in the&lt;br /&gt;
library and quite a few non-technical people as well.&lt;br /&gt;
&lt;br /&gt;
Combining our Fedora-based DOMS repository system with our Lucene-based Summa&lt;br /&gt;
search system with our WAYF-based single-signon system with an upgrade of our&lt;br /&gt;
SAN system for enough speed to deliver the content with an ffmpeg-based &lt;br /&gt;
transcoding workflow system with a Wowza-based streaming server, and sprinkling&lt;br /&gt;
it all with a nice user-friendly web frontend turned out to be quite a challenge,&lt;br /&gt;
but also one of the most engaging experiences in a long time.&lt;br /&gt;
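&lt;br /&gt;
The ffmpeg piece of such a transcoding workflow reduces to command construction. A sketch, with codec and bitrate choices that are illustrative web-streaming defaults rather than the library's actual profiles:&lt;br /&gt;

```python
def transcode_cmd(src, dst, video_kbps=800, audio_kbps=128):
    """Build an ffmpeg command that turns a broadcast capture into a
    web-streamable H.264/AAC MP4. Codec choices are illustrative."""
    return [
        "ffmpeg", "-i", src,
        "-c:v", "libx264", "-b:v", f"{video_kbps}k",
        "-c:a", "aac", "-b:a", f"{audio_kbps}k",
        "-movflags", "+faststart",  # move the moov atom up front for streaming
        dst,
    ]
```

A workflow system then only has to queue one such command per recording and hand the output to the streaming server.&lt;br /&gt;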
&lt;br /&gt;
Of course we were immediately shut down, since the legal details weren't quite&lt;br /&gt;
as clear as we thought they were, but take an exclusive preview at &lt;br /&gt;
http://developer.statsbiblioteket.dk/kultur/ - username/password: code4lib.&lt;br /&gt;
&lt;br /&gt;
== NoSQL Bibliographic Records: Implementing a Native FRBR Datastore with Redis ==&lt;br /&gt;
&lt;br /&gt;
* Jeremy Nelson, Colorado College, jeremy.nelson@coloradocollege.edu&lt;br /&gt;
&lt;br /&gt;
In October, the Library of Congress issued a news release, &amp;quot;A Bibliographic Framework for the Digital Age&amp;quot; outlining a list of requirements for a New Bibliographic Framework Environment. Responding to this challenge, this talk will demonstrate a Redis (http://redis.io) FRBR datastore proof-of-concept that, with a lightweight python-based interface, can meet these requirements. &lt;br /&gt;
&lt;br /&gt;
Because FRBR is an entity-relationship model, it is easily implemented as key-value pairs within the primitive data structures provided by Redis.  Redis' flexibility makes it easy to associate arbitrary metadata and vocabularies, like MARC, METS, VRA or MODS, with FRBR entities and to interoperate with legacy and emerging standards and practices like RDA Vocabularies and Linked Data.&lt;br /&gt;
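&lt;br /&gt;
A sketch of that key-value mapping. The MiniRedis class is a tiny in-memory stand-in for the handful of redis-py calls used, so the example runs without a server (a real deployment would use redis.StrictRedis); the frbr:* key naming is an assumed convention, not a published schema.&lt;br /&gt;

```python
class MiniRedis:
    """In-memory stand-in for the few redis-py calls used below."""
    def __init__(self):
        self.data = {}
    def incr(self, key):
        self.data[key] = self.data.get(key, 0) + 1
        return self.data[key]
    def hset(self, key, field, value):
        self.data.setdefault(key, {})[field] = value
    def hgetall(self, key):
        return dict(self.data.get(key, {}))
    def sadd(self, key, member):
        self.data.setdefault(key, set()).add(member)
    def smembers(self, key):
        return set(self.data.get(key, set()))

def create_entity(r, kind, **props):
    """Store a FRBR entity (work/expression/manifestation/item) as a
    Redis hash under a key like frbr:work:1."""
    n = r.incr(f"frbr:{kind}:counter")  # INCR doubles as an id generator
    key = f"frbr:{kind}:{n}"
    for field, value in props.items():
        r.hset(key, field, value)
    return key

def relate(r, parent_key, child_key):
    """Record a one-to-many FRBR relationship as a Redis set."""
    r.sadd(parent_key + ":children", child_key)
```

Arbitrary vocabularies attach the same way: a MODS or RDA property is just another hash field on the entity's key.&lt;br /&gt;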
&lt;br /&gt;
&lt;br /&gt;
== Upgrading from Catalog to Discovery Environment: A Consortial Approach ==&lt;br /&gt;
 &lt;br /&gt;
* Spencer Lamm, Swarthmore College, slamm1@swarthmore.edu&lt;br /&gt;
* Chelsea Lobdell, Swarthmore College, clobdel1@swarthmore.edu&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Almost two years ago the Tri-College Consortium of Haverford, Swarthmore, and Bryn Mawr Colleges embarked upon a journey to provide an enhanced end-user experience and discoverability in our library applications. Our solution was to integrate Ex Libris's Primo Central into Villanova's VuFind for a dual-channel searching experience. We present a case study of the collaborative and technical aspects of our process.&lt;br /&gt;
&lt;br /&gt;
At a high level we will describe our approach to project management and decision making.  We used a multi-tiered structure of working groups with an iterative design-feedback implementation cycle.  We will relay lessons learned from our experience: successes, failures, and unexpected hurdles.&lt;br /&gt;
&lt;br /&gt;
At a lower, technical level, we will discuss the VuFind search module architecture; the workflow of creating a new search channel; a Primo API parser; and the data structures of the Primo API response and the Primo SearchObject. Time permitting, we will also outline how we modified VuFind's Innovative driver to work with our ILS.&lt;br /&gt;
&lt;br /&gt;
[[Category: Code4Lib2012]]&lt;/div&gt;</summary>
		<author><name>Chelociraptor</name></author>	</entry>

	<entry>
		<id>https://wiki.code4lib.org/index.php?title=2012_talks_proposals&amp;diff=9783</id>
		<title>2012 talks proposals</title>
		<link rel="alternate" type="text/html" href="https://wiki.code4lib.org/index.php?title=2012_talks_proposals&amp;diff=9783"/>
				<updated>2011-11-18T17:08:49Z</updated>
		
		<summary type="html">&lt;p&gt;Chelociraptor: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Deadline for talk submission is ''Sunday, November 20''.&lt;br /&gt;
&lt;br /&gt;
Prepared talks are 20 minutes (including setup and questions), and focus on one or more of the following areas:&lt;br /&gt;
 * tools (some cool new software, software library or integration platform)&lt;br /&gt;
 * specs (how to get the most out of some protocols, or proposals for new ones)&lt;br /&gt;
 * challenges (one or more big problems we should collectively address)&lt;br /&gt;
&lt;br /&gt;
The community will vote on proposals using the criteria of:&lt;br /&gt;
 * usefulness&lt;br /&gt;
 * newness&lt;br /&gt;
 * geekiness&lt;br /&gt;
 * diversity of topics&lt;br /&gt;
&lt;br /&gt;
Please follow the formatting guidelines:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Talk Title: ==&lt;br /&gt;
 &lt;br /&gt;
* Speaker's name, affiliation, and email address&lt;br /&gt;
* Second speaker's name, affiliation, email address, if second speaker&lt;br /&gt;
&lt;br /&gt;
Abstract of no more than 500 words.&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== VuFind 2.0: Why and How? ==&lt;br /&gt;
&lt;br /&gt;
* Demian Katz, Villanova University, demian.katz@villanova.edu&lt;br /&gt;
&lt;br /&gt;
A major new version of the VuFind discovery software is currently in development.  While VuFind 1.x remains extremely popular, some of its components are beginning to show their age.  VuFind 2.0 aims to retain all the strengths of the previous version of the software while making the architecture cleaner, more modern and more standards-based.  This presentation will examine the motivation behind the update, preview some of the new features to look forward to, and discuss the challenges of creating a developer-friendly open source package in PHP.&lt;br /&gt;
&lt;br /&gt;
== Open Source Software Registry ==&lt;br /&gt;
&lt;br /&gt;
* [[User:DataGazetteer|Peter Murray]], LYRASIS, Peter.Murray@lyrasis.org&lt;br /&gt;
&lt;br /&gt;
LYRASIS is creating and shepherding a [[Registry_E-R_Diagram|registry of library open source software]] as part of its [http://www.lyrasis.org/News/Press-Releases/2011/LYRASIS-Receives-Grant-to-Support-Open-Source.aspx grant from the Mellon Foundation to support the adoption of open source software by libraries].  &lt;br /&gt;
The goal of the grant is to help libraries of all types determine if open source software is right for them, and what combination of software, hosting, training, and consulting works for their situation.  &lt;br /&gt;
The registry is intended to become a community exchange point and stimulant for growth of the library open source ecosystem by connecting libraries with projects, service providers, and events.&lt;br /&gt;
&lt;br /&gt;
The first half of this session will demonstrate the registry functions and describe how projects and providers can get involved.  &lt;br /&gt;
The second half of the session will be a brainstorming discussion of how to expand the functionality and usefulness of the registry.&lt;br /&gt;
&lt;br /&gt;
== Property Graphs And TinkerPop Applications in Digital Libraries ==&lt;br /&gt;
 &lt;br /&gt;
* Brian Tingle, California Digital Library, brian.tingle.cdlib.org@gmail.com&lt;br /&gt;
&lt;br /&gt;
[http://www.tinkerpop.com/ TinkerPop] is an open source software development group focusing on technologies in the [http://en.wikipedia.org/wiki/Graph_database graph database] space.   &lt;br /&gt;
This talk will provide a general introduction to the TinkerPop Graph Stack and the [https://github.com/tinkerpop/gremlin/wiki/Defining-a-Property-Graph property graph model] it uses.  The introduction will include code examples and explanations of the property graph models used by the [http://socialarchive.iath.virginia.edu/ Social Networks in Archival Context] project and show how the historical social graph is exposed as a JSON/REST API implemented by a TinkerPop [https://github.com/tinkerpop/rexster rexster] [https://github.com/tinkerpop/rexster-kibbles Kibble] that contains the application's graph theory logic.  Other graph database applications possible with TinkerPop, such as RDF support and citation analysis, will also be discussed.&lt;br /&gt;
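&lt;br /&gt;
The property graph model itself is small enough to sketch directly: vertices and labeled, directed edges, each carrying an arbitrary set of key-value properties. A language-neutral Python sketch (TinkerPop itself is JVM-based and traversed with Gremlin):&lt;br /&gt;

```python
class PropertyGraph:
    """Minimal property graph: vertices and edges both carry properties."""
    def __init__(self):
        self.vertices = {}  # vertex id to property dict
        self.edges = []     # (out_id, label, in_id, property dict)

    def add_vertex(self, vid, **props):
        self.vertices[vid] = props

    def add_edge(self, out_id, label, in_id, **props):
        self.edges.append((out_id, label, in_id, props))

    def out(self, vid, label):
        """Gremlin-style step: follow outgoing edges with the given label."""
        return [i for (o, l, i, _) in self.edges if o == vid and l == label]

# a tiny, made-up slice of a historical correspondence graph
g = PropertyGraph()
g.add_vertex("person:a", name="Author A")
g.add_vertex("person:b", name="Author B")
g.add_edge("person:a", "correspondedWith", "person:b", letters=12)
```

A REST layer like rexster essentially serializes the results of such traversals as JSON.&lt;br /&gt;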
&lt;br /&gt;
&lt;br /&gt;
== Security in Mind ==&lt;br /&gt;
 &lt;br /&gt;
* Erin Germ, United States Naval Academy, Nimitz Library, germ@usna.edu&lt;br /&gt;
&lt;br /&gt;
I would like to talk about the security of library software.&lt;br /&gt;
&lt;br /&gt;
Over the summer, I discovered a critical vulnerability in a vendor’s software that allowed me to assume any user’s identity on that site (verified), switch to any user (verified), and assume the role of any user at any other library using this particular vendor's software (unverified, meaning I did not perform this, as I didn’t want to “hack” another library’s site).&lt;br /&gt;
&lt;br /&gt;
Within a 3-hour period, I discovered two vulnerabilities: 1) a minor one allowing me to access backups from any library site, and 2) a critical one.  From start to finish, the examination, discovery of the vulnerability, and execution of a working exploit took less than 2 hours. The vulnerability was a result of poor cookie implementation. The exploit itself revolved around modifying the cookie and thereby escalating the browser’s permissions to assume the role of another user.&lt;br /&gt;
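&lt;br /&gt;
The standard defense against this class of flaw is to make cookies tamper-evident by binding each value to a server-side MAC, so an altered cookie simply fails verification. A minimal sketch of the idea (not the vendor's code):&lt;br /&gt;

```python
import hashlib
import hmac

SERVER_SECRET = b"rotate-me"  # known only to the server, never sent to browsers

def issue_cookie(user_id):
    """Bind the cookie value to an HMAC so the browser cannot alter it undetected."""
    mac = hmac.new(SERVER_SECRET, user_id.encode(), hashlib.sha256).hexdigest()
    return user_id + "|" + mac

def authenticate_cookie(cookie):
    """Return the user id if the MAC checks out, otherwise None."""
    user_id, _, mac = cookie.rpartition("|")
    good = hmac.new(SERVER_SECRET, user_id.encode(), hashlib.sha256).hexdigest()
    return user_id if hmac.compare_digest(mac, good) else None
```

With this in place, editing the cookie to claim another user's role yields None instead of an escalated session.&lt;br /&gt;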
&lt;br /&gt;
I do not intend to state which vendor it was, but I will show how I was able to perform this. If needed, I can do further research and “investigation” into other vendors' software to see what I can “find”.&lt;br /&gt;
&lt;br /&gt;
''If selected, I will contact the vendor to inform them that I will present about this at C4L2012. I do not intend to release the name of the vendor.''&lt;br /&gt;
&lt;br /&gt;
== Search Engines and Libraries ==&lt;br /&gt;
 &lt;br /&gt;
* Greg Lindahl, blekko CTO, greg@blekko.com&lt;br /&gt;
&lt;br /&gt;
[https://blekko.com blekko] is a new web-scale search engine that enables end-users to create vertical search engines through a feature called [http://help.blekko.com/index.php/category/slashtags/ slashtags]. Slashtags can contain as few as one or as many as tens of thousands of websites relevant to a narrow or broad topic. We have an extensive set of slashtags curated by a combination of volunteers and an in-house librarian team, or end-users can create and share their own. This talk will cover examples of slashtag creation relevant to libraries, and show how to embed this search into a library website, either using JavaScript or via our API.&lt;br /&gt;
&lt;br /&gt;
''We have exhibited at a couple of library conferences, and have received a lot of interest. blekko is a free service.''&lt;br /&gt;
&lt;br /&gt;
== Beyond code. Versioning data with Git and Mercurial. ==&lt;br /&gt;
&lt;br /&gt;
* Stephanie Collett, California Digital Library, stephanie.collett@ucop.edu&lt;br /&gt;
* Martin Haye, California Digital Library, martin.haye@ucop.edu&lt;br /&gt;
&lt;br /&gt;
Within a relatively short time since their introduction, [http://en.wikipedia.org/wiki/Distributed_Version_Control_System distributed version control systems] (DVCS) like [http://git-scm.com/ Git] and [http://mercurial.selenic.com/ Mercurial] have enjoyed widespread adoption for versioning code. It didn’t take long for the library development community to start discussing the potential for using DVCS within our applications and repositories to version data. After all, many of the features that have made some of these systems popular in the open source community to version code (e.g. lightweight, file-based, compressed, reliable) also make them compelling options for versioning data.  And why write an entire versioning system from scratch if a DVCS solution can be a drop-in solution? At the [http://www.cdlib.org/ California Digital Library] (CDL) we’ve started using Git and Mercurial in some of our applications to version data. This has proven effective in some situations and unworkable in others. This presentation will be a practical case study of CDL’s experiences with using DVCS to version data. We will explain how we’re incorporating Git and Mercurial in our applications, describe our successes and failures and consider the issues involved in repurposing these systems for data versioning.&lt;br /&gt;
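&lt;br /&gt;
Part of why a DVCS is plausible as a drop-in data-versioning layer is content-addressed storage: every version of a file is stored under a hash of its content, so identical versions deduplicate for free. A toy sketch of Git's blob scheme:&lt;br /&gt;

```python
import hashlib

class ContentStore:
    """Toy content-addressed store using Git's blob hashing scheme."""
    def __init__(self):
        self.objects = {}  # sha1 hex digest to content bytes
        self.history = []  # digests in commit order

    def commit(self, content: bytes) -> str:
        header = b"blob %d\x00" % len(content)  # the same header git prepends
        sha = hashlib.sha1(header + content).hexdigest()
        self.objects.setdefault(sha, content)   # identical versions stored once
        self.history.append(sha)
        return sha

    def checkout(self, sha: str) -> bytes:
        return self.objects[sha]
```

CDL's applications drive real Git and Mercurial repositories rather than reimplementing this; the sketch only shows why the storage model suits data as well as code.&lt;br /&gt;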
&lt;br /&gt;
==Design for Developers==&lt;br /&gt;
&lt;br /&gt;
*Lisa Kurt, University of Nevada, Reno, lkurt@unr.edu&lt;br /&gt;
&lt;br /&gt;
Users expect good design. This talk will delve into what makes really great design, what to look for, and how to do it. Learn the principles of great design to take your applications, user interfaces, and projects to a higher level. With years of experience in graphic design and illustration, Lisa will discuss design principles, trends, process, tools, and development. Design examples will be from her own projects as well as a variety from industry. You’ll walk away with design knowledge that you can apply immediately to a variety of applications and a number of top notch go-to resources to get you up and running.&lt;br /&gt;
&lt;br /&gt;
==Building research applications with Mendeley==&lt;br /&gt;
&lt;br /&gt;
* William Gunn, Mendeley, william.gunn@mendeley.com (@mrgunn)&lt;br /&gt;
&lt;br /&gt;
This is partly a tool talk and partly a big idea one.&lt;br /&gt;
&lt;br /&gt;
Mendeley has built the world's largest open database of research and we've now begun to collect some interesting social metadata around the document metadata. I would like to share with the Code4Lib attendees information about using this resource to do things within your application that have previously been impossible for the library community, or in some cases impossible without expensive database subscriptions. One thing that's now possible is to augment catalog search by surfacing information about content usage, allowing people not only to find things matching a query, but popular things or things read by their colleagues. In addition to augmenting search, you can also use this information to augment discovery. Imagine an online exhibit of artifacts from a newly discovered dig not just linking to papers which discuss the artifact, but linking to really good, interesting papers about the place and the people who made the artifacts. So the big idea is, &amp;quot;How will looking at the literature from a broader perspective than simple citation analysis change how research is done and communicated? How can we build tools that make this process easier and faster?&amp;quot; I can show some examples of applications that have been built using the Mendeley and PLoS APIs to begin to address this question, and I can also present results from Mendeley's developer challenge, which show what kinds of applications researchers are looking for and what kinds of applications people are building, and illustrate some interesting places where the two don't overlap.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Your UI can make or break the application (to the user, anyway)==&lt;br /&gt;
&lt;br /&gt;
* Robin Schaaf, University of Notre Dame, schaaf.4@nd.edu&lt;br /&gt;
&lt;br /&gt;
UI development is hard and too often ends up as an afterthought for computer programmers - if you were a CS major in college, I'll bet you didn't have many, if any, design courses.  I'll talk about how to involve users up front in design and some common pitfalls of this approach.  I'll also make a case for why you should do the screen design before a single line of code is written.  And I'll throw in some ideas for increasing the usability and attractiveness of your web applications.  I'll use the UI development of our open source ERMS as a case study.&lt;br /&gt;
&lt;br /&gt;
==Why Nobody Knows How Big The Library Really Is - Perspective of a Library Outsider Turned Insider==&lt;br /&gt;
&lt;br /&gt;
* Patrick Berry, California State University, Chico, pberry@csuchico.edu&lt;br /&gt;
&lt;br /&gt;
In this talk I would like to bring the perspective of an &amp;quot;outsider&amp;quot; (although an avowed IT insider) to let you know that people don't understand the full scope of the library.  As we &amp;quot;rethink education&amp;quot;, it is incumbent upon us to help educate our institutions as to the scope of the library.  I will present some of the tactics I'm employing to help people outside, and in some cases inside, the library to understand our size and the value we bring to the institution.&lt;br /&gt;
&lt;br /&gt;
==Building a URL Management Module using the Concrete5 Package Architecture==&lt;br /&gt;
&lt;br /&gt;
* David Uspal, Villanova University, david.uspal@villanova.edu&lt;br /&gt;
&lt;br /&gt;
Keeping track of URLs utilized across a large website such as a university library site, and keeping that content up to date for subject and course guides, can be a pain, and as an open source shop, we’d like to have an open source solution for this issue.  For this talk, I intend to detail our solution by walking step-by-step through the building process for our URL Management module -- including why a new solution was necessary; a quick rundown of our CMS ([http://www.concrete5.org Concrete5], a CMS that isn’t Drupal); utilizing the Concrete5 APIs to isolate our solution from core code (to avoid complications caused by core updates); how our solution was integrated into the CMS architecture for easy installation; and our future plans for the project.&lt;br /&gt;
&lt;br /&gt;
==Building an NCIP connector to OpenSRF to facilitate resource sharing==&lt;br /&gt;
&lt;br /&gt;
* Jon Scott, Lyrasis, jon_scott@wsu.edu and Kyle Banerjee, Orbis Cascade Alliance, banerjek@uoregon.edu &lt;br /&gt;
&lt;br /&gt;
How do you reverse engineer any protocol to provide a new service? Humans (and worse yet, committees) often design verbose protocols built around use cases that don't line up with current reality. To compound difficulties, the contents of protocol containers are not sufficiently defined or predictable, and the only assistance available is sketchy documentation and kind individuals on the internet willing to share what they learned by trial by fire.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
NCIP (NISO Circulation Interchange Protocol) is an open standard that defines a set of messages to support the exchange of circulation data between disparate circulation, interlibrary loan, and related applications -- widespread adoption of NCIP would eliminate huge amounts of duplicate processing in separate systems. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
This presentation discusses how we learned enough about NCIP and OpenSRF from scratch to build an NCIP responder for Evergreen to facilitate resource sharing in a large consortium that relies on over 20 different ILSes.&lt;br /&gt;
&lt;br /&gt;
==Practical Agile: What's Working for Stanford, Blacklight, and Hydra==&lt;br /&gt;
&lt;br /&gt;
* Naomi Dushay, Stanford University Libraries, ndushay@stanford.edu&lt;br /&gt;
&lt;br /&gt;
Agile development techniques can be difficult to adopt in the context of library software development.  Maybe your shop has only one or two developers, or you always have too many simultaneous projects.   Maybe your new projects can’t be started until 27 librarians reach consensus on the specifications.&lt;br /&gt;
&lt;br /&gt;
This talk will present successful Agile- and Silicon-Valley-inspired practices we’ve adopted at Stanford and/or in the Blacklight and Hydra projects.  We’ve targeted developer happiness as well as improved productivity with our recent changes.  User stories, dead week, sight lines … it’ll be a grab bag of goodies to bring back to your institution, including some ideas on how to adopt these practices without overt management buy-in.&lt;br /&gt;
&lt;br /&gt;
==Quick and &amp;lt;strike&amp;gt;Dirty&amp;lt;/strike&amp;gt; Clean Usability: Rapid Prototyping with Bootstrap==&lt;br /&gt;
&lt;br /&gt;
* Shaun Ellis, Princeton University Libraries, shaune@princeton.edu &lt;br /&gt;
&lt;br /&gt;
''&amp;quot;The code itself is unimportant; a project is only as useful as people actually find it.&amp;quot;  - Linus Torvalds'' [http://bit.ly/p4uuyy]&lt;br /&gt;
&lt;br /&gt;
Usability has been a buzzword for some time now, but what is the process for making the transition toward a better user experience, and hence, better designed library sites?  I will discuss one facet of the process my team is using to redesign the Finding Aids site for Princeton University Libraries (still in development).  The approach involves the use of rapid prototyping, with Bootstrap [http://twitter.github.com/bootstrap/], to make sure we are on track with what users and stakeholders expect up front and throughout the development process.&lt;br /&gt;
&lt;br /&gt;
Because Bootstrap allows for early and iterative user feedback, it is more effective than the historic Photoshop mockups/wireframe technique.  The Photoshop approach allows stakeholders to test the look, but not the feel -- and often leaves developers scratching their heads.  Being a CSS/HTML/Javascript grid-based framework, Bootstrap makes it easy for anyone with a bit of HTML/CSS chops to quickly build slick, interactive prototypes right in the browser -- tangible solutions which can be shared, evaluated, revised, and followed by all stakeholders (see Minimum Viable Products [http://en.wikipedia.org/wiki/Minimum_viable_product]).  Efficiency is multiplied because the customized prototypes can flow directly into production use, as is the goal with iterative development approaches, such as the Agile methodology.&lt;br /&gt;
&lt;br /&gt;
While Bootstrap is not the only framework that offers grid-based layout, development is expedited and usability is enhanced by Bootstrap's use of &amp;quot;prefabbed&amp;quot; conventional UI patterns, clean typography, and lean Javascript for interactivity.   Furthermore, out-of-the-box Bootstrap comes in a fairly neutral palette, so focus remains on usability and does not devolve into premature discussions of color or branding choices.  Finally, Less can be a powerful tool in conjunction with Bootstrap, but it is not necessary.  I will discuss the pros and cons, and offer examples of how to get up and running with or without Less.&lt;br /&gt;
&lt;br /&gt;
==Search Engine Relevancy Tuning - A Static Rank Framework for Solr/Lucene==&lt;br /&gt;
&lt;br /&gt;
* Mike Schultz, Amazon.com (formerly Summon Search Architect) mike.schultz@gmail.com&lt;br /&gt;
&lt;br /&gt;
Solr/Lucene provides a lot of flexibility for adjusting relevancy scoring and improving search results.  Roughly speaking there are two areas of concern: Firstly, a 'dynamic rank' calculation that is a function of the user query and document text fields.  And secondly, a 'static rank' which is independent of the query and generally is a function of non-text document metadata.  In this talk I will outline an easily understood, hand-tunable static rank system with a minimal number of parameters.&lt;br /&gt;
&lt;br /&gt;
The obvious major feature of a search engine is to return results relevant to a user query.  Perhaps less obvious is the huge role query independent document features play in achieving that. Google's PageRank is an example of a static ranking of web pages based on links and other secret sauce.  In the Summon service, our 800 million documents have features like publication date, document type, citation count and Boolean features like the-article-is-peer-reviewed.  These fields aren't textual and remain 'static' from query to query, but need to influence a document's relevancy score.  In our search results, with all query related features being equal, we'd rather have more recent documents above older ones, Journals above Newspapers, and articles that are peer reviewed above those that are not. The static rank system I will describe achieves this and has the following features:&lt;br /&gt;
&lt;br /&gt;
* Query-time only calculation - nothing is baked into the index - with parameters adjustable at query time.&lt;br /&gt;
* The system is based on a signal metaphor where components are 'wired' together.  System components allow multiplexing, amplifying, summing, tunable band-pass filtering, string-to-value-mapping all with a bare minimum of parameters.&lt;br /&gt;
* An intuitive approach for mixing dynamic and static rank that is more effective than simple adding or multiplying.&lt;br /&gt;
* A way of equating disparate static metadata types that leads to understandable results ordering.&lt;br /&gt;
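&lt;br /&gt;
To make the signal metaphor concrete, here is a small Python sketch of the general idea; the component names, weights, and scales are invented for illustration and are not the Summon implementation:&lt;br /&gt;

```python
# Hypothetical sketch of a signal-style static rank (component names,
# weights, and scales are invented, not the Summon implementation).
# Each component maps one metadata field to a score in [0, 1]; a
# weighted sum gives the static rank.

WEIGHTS = {"recency": 0.5, "type": 0.3, "peer_reviewed": 0.2}

def recency_signal(pub_year, center=2012, width=30):
    """Band-pass-like filter: documents near 'center' score near 1.0."""
    return max(0.0, 1.0 - abs(center - pub_year) / width)

def type_signal(doc_type):
    """String-to-value mapping: prefer journals over newspapers."""
    return {"journal": 1.0, "book": 0.8, "newspaper": 0.4}.get(doc_type, 0.5)

def static_rank(doc):
    signals = {
        "recency": recency_signal(doc["year"]),
        "type": type_signal(doc["type"]),
        "peer_reviewed": 1.0 if doc["peer_reviewed"] else 0.0,
    }
    return sum(WEIGHTS[name] * value for name, value in signals.items())

def blend(dynamic_score, static_score, static_weight=0.3):
    """Mix query-dependent and static scores multiplicatively, so a
    document with no text match cannot be rescued by static rank."""
    return dynamic_score * (1.0 - static_weight + static_weight * static_score)
```

With everything computed at query time, the weights and filter parameters can be tuned per request without rebuilding the index.&lt;br /&gt;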
&lt;br /&gt;
==Submitting Digitized Book-like things to the Internet Archive==&lt;br /&gt;
&lt;br /&gt;
* Joel Richard, Smithsonian Institution Libraries, richardjm@si.edu&lt;br /&gt;
&lt;br /&gt;
The Smithsonian Libraries has submitted thousands of out-of-copyright items to the Internet Archive over the years. Specifically in relation to the Biodiversity Heritage Library, we have developed an in-house boutique scanning and upload process that became a learning experience in automated uploading to the Archive. As part of the software development, we created a whitepaper that details the combined learning experiences of the Smithsonian Libraries and the Missouri Botanical Garden. We will discuss some of the contents of this whitepaper in the context of our scanning process and the manner in which we upload items to the Archive. &lt;br /&gt;
&lt;br /&gt;
Our talk will include a discussion of the types of files and their formats used by the Archive, processes that the Archive performs on uploaded items, ways of interacting and affecting those processes, potential pitfalls and solutions that you may encounter when uploading, and tools that the Archive provides to help monitor and manage your uploaded documents. &lt;br /&gt;
&lt;br /&gt;
Finally, we'll wrap up with a brief summary of how to use things that are on the Internet Archive in your own websites.&lt;br /&gt;
&lt;br /&gt;
== So... you think you want to Host a Code4Lib National Conference, do you? ==&lt;br /&gt;
&lt;br /&gt;
* Elizabeth Duell, Orbis Cascade Alliance, eduell@uoregon.edu&lt;br /&gt;
&lt;br /&gt;
Are you interested in hosting your own Code4Lib Conference? Do you know what it would take? What does BEO stand for? What does F&amp;amp;B Minimum mean? Who would you talk to for support/mentoring? There are so many things to think about: internet support, venue size, rooming blocks, contracts, dietary restrictions and coffee (can't forget the coffee!), just to name a few. Putting together a conference of any size can look daunting, so let's take the scary out of it and replace it with a can-do attitude!&lt;br /&gt;
&lt;br /&gt;
Be a step ahead of the game by learning from the people behind the curtain. Ask questions and come away with templates and cheat sheets! &lt;br /&gt;
&lt;br /&gt;
== HTML5 Microdata and Schema.org ==&lt;br /&gt;
 &lt;br /&gt;
* Jason Ronallo, North Carolina State University Libraries, jason_ronallo@ncsu.edu&lt;br /&gt;
&lt;br /&gt;
When the big search engines announced support for HTML5 microdata and the schema.org vocabularies, the balance of power for semantic markup in HTML shifted. &lt;br /&gt;
* What is microdata? &lt;br /&gt;
* Where does microdata fit with regards to other approaches like RDFa and microformats? &lt;br /&gt;
* Where do libraries stand in the worldview of Schema.org and what can they do about it? &lt;br /&gt;
* How can implementing microdata and schema.org optimize your sites for search engines?&lt;br /&gt;
* What tools are available?&lt;br /&gt;
&lt;br /&gt;
== Stack View: A Library Browsing Tool ==&lt;br /&gt;
 &lt;br /&gt;
* Annie Cain, Harvard Library Innovation Lab, acain@law.harvard.edu&lt;br /&gt;
&lt;br /&gt;
In an effort to recreate and build upon the traditional method of browsing a physical library, we used catalog data, including dimensions and page count, to create a [http://librarylab.law.harvard.edu/projects/stackview/ virtual shelf].&lt;br /&gt;
&lt;br /&gt;
This CSS and JavaScript backed visualization allows items to sit on any number of different shelves, really taking advantage of its digital nature.  See how we built Stack View on top of our data and learn how you can create shelves of your own using our open source code.&lt;br /&gt;
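&lt;br /&gt;
As a rough illustration of how catalog data might drive such a visualization, here is a hypothetical Python sketch; the scaling constants are invented, and the real Stack View does this with CSS and JavaScript:&lt;br /&gt;

```python
# A simplified, hypothetical sketch of the core idea (scaling constants
# are invented; the real project works in CSS/JavaScript): map a book's
# physical height and page count from the catalog record onto on-screen
# spine dimensions.

MAX_HEIGHT_CM = 29       # tallest expected book (assumption)
MAX_HEIGHT_PX = 120      # tallest on-screen spine (assumption)
PX_PER_100_PAGES = 14    # thickness scale (assumption)

def spine_dimensions(height_cm, pages):
    """Return (height_px, width_px) for one book's spine."""
    height_px = round(height_cm / MAX_HEIGHT_CM * MAX_HEIGHT_PX)
    width_px = max(8, round(pages / 100 * PX_PER_100_PAGES))  # 8px floor
    return height_px, width_px
```

Each spine then becomes a block element sized from real catalog data, so a shelf of mixed folios and pamphlets looks much as it would in the stacks.&lt;br /&gt;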
&lt;br /&gt;
== “Linked-Data-Ready” Software for Libraries ==&lt;br /&gt;
&lt;br /&gt;
* Jennifer Bowen, University of Rochester River Campus Libraries, jbowen@library.rochester.edu&lt;br /&gt;
&lt;br /&gt;
Linked data is poised to replace MARC as the basis for the new library bibliographic framework.  For libraries to benefit from linked data, they must learn about it, experiment with it, demonstrate its usefulness, and take a leadership role in its deployment. &lt;br /&gt;
&lt;br /&gt;
The eXtensible Catalog Organization (XCO) offers open-source software for libraries that is “linked-data-ready.” XC software prepares MARC and Dublin Core metadata for exposure to the semantic web, incorporating FRBR Group 1 entities and registered vocabularies for RDA elements and roles. This presentation will include a software demonstration, proposed software architecture for creation and management of linked data, a vision for how libraries can migrate from MARC to linked data, and an update on XCO progress toward linked data goals.&lt;br /&gt;
&lt;br /&gt;
== How people search the library from a single search box ==&lt;br /&gt;
&lt;br /&gt;
* Cory Lown, North Carolina State University Libraries, cory_lown@ncsu.edu&lt;br /&gt;
&lt;br /&gt;
Searching the library is complex. There's the catalog, article databases, journal title and database title look-ups, the library website, finding aids, knowledge bases, etc. How would users search if they could get to all of these resources from a single search box? I'll share what we've learned about single search at NCSU Libraries by tracking use of QuickSearch (http://www.lib.ncsu.edu/search/index.php?q=aerospace+engineering), our home-grown unified search application. As part of this talk I will suggest low-cost ways to collect real world use data that can be applied to improve search. I will try to convince you that data collection must be carefully planned and designed to be an effective tool to help you understand what your users are telling you through their behavior. I will talk about how the fragmented library resource environment challenges us to provide useful and understandable search environments. Finally, I will share findings from analyzing millions of user transactions about how people search the library from a production single search box at a large university library.&lt;br /&gt;
&lt;br /&gt;
== An Incremental Approach to Archival Description and Access ==&lt;br /&gt;
&lt;br /&gt;
* Chela Scott Weber, New York University Libraries, chelascott@gmail.com&lt;br /&gt;
* Mark A. Matienzo, Yale University Library, mark@matienzo.org&lt;br /&gt;
&lt;br /&gt;
''This is placeholder text; description coming shortly''&lt;br /&gt;
&lt;br /&gt;
== Making the Easy Things Easy: A Generic ILS API ==&lt;br /&gt;
&lt;br /&gt;
* Wayne Schneider, Hennepin County Library, wschneider@hclib.org&lt;br /&gt;
&lt;br /&gt;
Some stuff we try to do is complicated because, let's face it, library data is hard. Some stuff, on the other hand, should be easy. Given an item identifier, I should be able to look up item availability. Given a title identifier, I should be able to place a request. And no, I shouldn't have to parse the NCIP specification or write a SIP client to do it.&lt;br /&gt;
&lt;br /&gt;
This talk will present work we have done on a web services approach to an API for traditional library transactional data, including example applications.&lt;br /&gt;
&lt;br /&gt;
== Your Catalog in Linked Data==&lt;br /&gt;
&lt;br /&gt;
* Tom Johnson, Oregon State University Libraries, thomas.johnson@oregonstate.edu&lt;br /&gt;
&lt;br /&gt;
Linked Library Data activity over the last year has seen bibliographic data sets and vocabularies proliferating from traditional library sources. We've reached a point where regular libraries don't have to go it alone to be on the Semantic Web. There is a quickly growing pool of things we can actually ''link to'', and everyone's existing data can be immediately enriched by participating.&lt;br /&gt;
&lt;br /&gt;
This is a quick and dirty road to getting your catalog onto the Linked Data web. The talk  will take you from start to finish, using Free Software tools to establish a namespace, put up a SPARQL endpoint, make a simple data model, convert MARC records to RDF, and link the results to major existing data sets (skipping conveniently over pesky processing time). A small amount of &amp;quot;why linked data?&amp;quot; content will be covered, but the primary goal is to leave you able to reproduce the process and start linking your catalog into the web of data. Appropriate documentation will be on the web.&lt;br /&gt;
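&lt;br /&gt;
As an illustration of the record-conversion step, the following Python sketch maps a flattened stand-in for a MARC record to Dublin Core triples; the namespace, record shape, and field mapping are all invented for this example:&lt;br /&gt;

```python
# A toy sketch of the MARC-to-RDF conversion step. The namespace, the
# flattened record shape, and the field mapping are all invented for
# illustration; a real conversion would work from full MARC records.

BASE = "http://example.org/catalog/"      # your minted namespace (assumption)
DC = "http://purl.org/dc/terms/"          # Dublin Core terms

# Stand-in for a parsed MARC record, flattened to tag -> value
RECORD = {"001": "ocm12345", "245a": "Moby Dick", "100a": "Melville, Herman"}

# Which MARC fields map to which RDF predicates
MAPPING = {"245a": DC + "title", "100a": DC + "creator"}

def record_to_triples(record):
    """Emit (subject, predicate, object) triples for one record."""
    subject = BASE + record["001"]        # mint a URI from the record ID
    return [(subject, MAPPING[tag], record[tag])
            for tag in MAPPING if tag in record]
```

Serializing these tuples as N-Triples or Turtle and loading them behind a SPARQL endpoint is then a mechanical step.&lt;br /&gt;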
&lt;br /&gt;
== Getting the Library into the Learning Management System using Basic LTI == &lt;br /&gt;
&lt;br /&gt;
* David Walker, California State University, dwalker@calstate.edu&lt;br /&gt;
&lt;br /&gt;
The integration of library resources into learning management systems (LMS) has long been something of a holy grail for academic libraries.  The ability to deliver targeted library systems and services to students and faculty directly within their online course would greatly simplify access to library resources.  Yet, the technical barriers to achieving that goal have to date been formidable.  &lt;br /&gt;
&lt;br /&gt;
The recently released Learning Tools Interoperability (LTI) protocol, developed by IMS, now greatly simplifies this process by allowing libraries (and others) to develop and maintain “tools” that function like a native plugin or building block within the LMS, but ultimately live outside of it.  In this presentation, David will provide an overview of Basic LTI, a simplified subset (or profile) of the wider LTI protocol, showing how libraries can use it to easily integrate their external systems into any major LMS.  He’ll showcase the work Cal State has done to do just that.&lt;br /&gt;
&lt;br /&gt;
== Turn your Library Proxy Server into a Honeypot ==&lt;br /&gt;
 &lt;br /&gt;
* Calvin Mah, Simon Fraser University, calvinm@sfu.ca (@calvinmah)&lt;br /&gt;
&lt;br /&gt;
EZproxy has provided libraries with a useful tool for providing patrons with offsite online access to licensed electronic resources.  This has not gone unnoticed by the unscrupulous users of the Internet who are either unwilling or unable to obtain legitimate access to these materials for themselves.  Instead, they buy or share hacked university computing accounts to gain unauthorized access.  When undetected, abuse of compromised university accounts can lead to abuse of vendor resources, which in turn can lead to an entire campus block of IP addresses being barred from accessing that resource.&lt;br /&gt;
&lt;br /&gt;
Simon Fraser University Library has been proactively detecting and thwarting unauthorized access attempts through log analysis.  Since SFU began analysing its EZproxy logs, the number of new SFU login credentials posted and shared in publicly accessible forums has dropped to zero.   Since our log monitoring began in 2008, we have detected an annual average of 140 compromised or hacked SFU login credentials.  Instead of being a single point of weakness in campus IT security, the library’s proxy server is a honeypot, exposing weak passwords, keystroke-logging trojans installed on patron PCs, and campus network password sniffers.&lt;br /&gt;
&lt;br /&gt;
This talk will discuss techniques such as geomapping login attempts, strategies such as seeding phishing attempts and tools such as statistical log analysis used in detecting compromised login credentials.  &lt;br /&gt;
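&lt;br /&gt;
One statistical check of this kind can be sketched in a few lines of Python; the data, threshold, and GeoIP stand-in below are invented for illustration, not SFU's actual tooling:&lt;br /&gt;

```python
# Illustrative sketch of one statistical check (data, threshold, and
# the GeoIP stand-in are invented, not SFU's actual tooling): flag any
# account whose logins in one day come from an implausible number of
# distinct countries.
from collections import defaultdict

IP_COUNTRY = {   # stand-in for a real GeoIP lookup
    "142.58.0.1": "CA", "5.9.0.1": "DE", "91.200.0.1": "UA",
    "200.1.0.1": "BR", "1.2.3.4": "CN",
}

def suspicious_users(logins, max_countries=2):
    """logins: (username, source_ip) pairs from one day of proxy logs."""
    countries = defaultdict(set)
    for user, ip in logins:
        countries[user].add(IP_COUNTRY.get(ip, "??"))
    return sorted(u for u, seen in countries.items()
                  if len(seen) > max_countries)

LOGINS = [("alice", "142.58.0.1"), ("alice", "142.58.0.1"),
          ("mallory", "5.9.0.1"), ("mallory", "91.200.0.1"),
          ("mallory", "200.1.0.1"), ("mallory", "1.2.3.4")]
```

A flagged account can then be locked and its owner contacted before a vendor notices the abuse.&lt;br /&gt;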
&lt;br /&gt;
== Relevance Ranking in the Scholarly Domain ==&lt;br /&gt;
&lt;br /&gt;
* Tamar Sadeh, PhD, Ex Libris Group, tamar.sadeh@exlibrisgroup.com&lt;br /&gt;
&lt;br /&gt;
The greatest challenge for discovery systems is how to provide users with the most relevant search results, given the immense landscape of available content. In a manner that is similar to human interaction between two parties, in which each person adjusts to the other in tone, language, and subject matter, discovery systems would ideally be sophisticated and flexible enough to adjust their algorithms to individual users and each user’s information needs. &lt;br /&gt;
&lt;br /&gt;
When evaluating the relevance of an item to a specific user in a specific context, relevance-ranking algorithms need to take into account, in addition to the degree to which the item matches the query, information that is not embodied in the item itself. Such information, which includes the item’s scholarly value, the type of search that the user is conducting (e.g., an exploratory search or a known-item search), and other factors, enables a discovery system to fulfill user expectations that have been shaped by experience with Web search engines.  &lt;br /&gt;
&lt;br /&gt;
The session will focus on the challenges of developing and evaluating relevance-ranking algorithms for the scholarly domain. Examples will be drawn mainly from the relevance-ranking technology deployed by the Ex Libris Primo discovery solution. &lt;br /&gt;
&lt;br /&gt;
== Mobile Library Catalog using Z39.50 ==&lt;br /&gt;
 &lt;br /&gt;
* James Paul Muir, The Ohio State University, muir.29@osu.edu&lt;br /&gt;
&lt;br /&gt;
This talk puts a new spin on an age-old technology, creating a universal interface that exposes any Z39.50-capable library catalog as a simple, useful, and universal REST API for use in native mobile apps and on the mobile web.&lt;br /&gt;
&lt;br /&gt;
The talk includes the exploration and demonstration of the Ohio State University’s native app “OSU Mobile” for iOS and Android and shows how the library catalog search was integrated.&lt;br /&gt;
&lt;br /&gt;
The backbone of the project is a REST API, which was created in a weekend using a PHP framework that translates OPAC XML results from the Z39.50 interface into mobile-friendly JSON formatting.&lt;br /&gt;
&lt;br /&gt;
Raw Z39.50 search results contain all MARC information as well as local holdings.  &lt;br /&gt;
Configurable search fields and the ability to select which fields to include in the JSON output make this solution a perfect fit for any Z39.50-capable library catalog.&lt;br /&gt;
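&lt;br /&gt;
The translation layer can be sketched roughly as follows; the original is PHP, and the XML shape and field configuration here are invented for illustration:&lt;br /&gt;

```python
# A rough sketch of the translation layer. The original is PHP, and the
# XML shape and field configuration here are invented: pull a
# configurable set of MARC fields out of OPAC XML and emit
# mobile-friendly JSON.
import json
import xml.etree.ElementTree as ET

# Build a stand-in OPAC record programmatically
record = ET.Element("record")
for tag, text in [("245", "Moby Dick"),
                  ("100", "Melville, Herman"),
                  ("300", "xvi, 635 p.")]:
    df = ET.SubElement(record, "datafield", {"tag": tag})
    ET.SubElement(df, "subfield", {"code": "a"}).text = text

# Which MARC tags to expose, and under which JSON keys (configurable)
FIELDS = {"245": "title", "100": "author"}

def record_to_json(root, fields=FIELDS):
    """Keep only the configured fields; drop everything else."""
    out = {}
    for df in root.iter("datafield"):
        key = fields.get(df.get("tag"))
        if key:
            out[key] = " ".join(sf.text for sf in df.iter("subfield"))
    return json.dumps(out)
```

Because the field list is configuration, the same endpoint can serve a terse result list and a fuller detail view from the same Z39.50 response.&lt;br /&gt;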
  &lt;br /&gt;
Looking forward, possibilities for expansion include the use of Off Campus Sign-In so mobile patrons can directly access online resources from a smartphone (already included in the Android version of OSU Mobile), as well as integration with library patron accounts.&lt;br /&gt;
&lt;br /&gt;
Enjoy this alternative to writing a custom OPAC adapter or using a 3rd party service for exposing library records and use the proven and universal Z39.50 interface directly against your library catalog. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== DMPTool: Guidance and Resources for your data management plan ==&lt;br /&gt;
 &lt;br /&gt;
* Marisa Strong, California Digital Libary, marisa.strong@ucop.edu&lt;br /&gt;
&lt;br /&gt;
A number of U.S. funding agencies, such as the National Science Foundation, require researchers to supply detailed, cost-effective plans for managing research data, called Data Management Plans.  To help researchers with this requirement, several organizations (the California Digital Library, University of Illinois, University of Virginia, Smithsonian Institution, the DataONE consortium, and the UK Digital Curation Centre) came together to develop the DMPTool. The goal of the DMPTool is to provide researchers with guidance, links to resources, and help with writing data management plans.&lt;br /&gt;
&lt;br /&gt;
This tool presents the requirements specific to the funding agency to which researchers are applying, along with detailed help for each section.  Users can create a plan, preview it, export it in various formats, and make it freely accessible for others to read. Users who are members of participating institutions benefit from institution-specific help for each section, suggested answers, and resources for the management of their data.  Institutions can also announce events, workshops, and data management information via the DMPTool blog, available from within the tool.&lt;br /&gt;
&lt;br /&gt;
This open-source software tool is integrated with federated login using Shibboleth which allows users to login via their home institutions. It is a Ruby/Rails application hosted on a SLES VM.  We had a geographically distributed development team sharing code on Bitbucket. &lt;br /&gt;
&lt;br /&gt;
This talk will demo the features of the application as well as highlight the development practices and infrastructure used in building the application.&lt;br /&gt;
&lt;br /&gt;
== Lies, Damned Lies, and Lines of Code Per Day ==&lt;br /&gt;
 &lt;br /&gt;
* James Stuart, Columbia University, james.stuart@columbia.edu&lt;br /&gt;
&lt;br /&gt;
We've all heard about that one study that showed that Pair Programming was 20% more efficient than working alone. Or maybe you saw on a blog a study showing that programmers who write fewer lines of code per day are more efficient...or was it less efficient? And of course, we all know that programmers who work in (Ruby|Python|Java|C|Erlang) have been shown to be more efficient.&lt;br /&gt;
&lt;br /&gt;
This talk is a quick examination of some of the research surrounding programming efficiency and methodology, with a focus on personal productivity and on how to incorporate the more believable research into your own team's workflow.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==An Anatomy of a Book Viewer==&lt;br /&gt;
&lt;br /&gt;
*Mohammed Abuouda, Bibliotheca Alexandrina, mohammed.abuouda@bibalex.org&lt;br /&gt;
&lt;br /&gt;
Bibliotheca Alexandrina (BA) hosts 210,000 digital books in different languages, available at http://dar.bibalex.org. The collection includes the largest set of digitized Arabic books. Using open source tools, BA has developed a modular book viewer that can be deployed in any environment to provide users with a great personalized reading experience. BA’s book viewer provides several services that make this possible: morphological search in different languages, localization, server load balancing, scalability, and image processing. Personalization features include different types of annotation, such as sticky notes, highlighting, and underlining. The viewer can also be embedded in any webpage, and its skin can be changed.&lt;br /&gt;
&lt;br /&gt;
In this talk we will describe the book viewer architecture, its modular design and how to incorporate it in your current environment.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Carrier: Digital Signage System ==&lt;br /&gt;
 &lt;br /&gt;
* [[User:jmspargu|Justin Spargur]], The University of Arizona, spargurj@u.library.arizona.edu&lt;br /&gt;
 &lt;br /&gt;
Carrier is a web-based digital signage application built with JavaScript, PHP, and MySQL that can be used on any device with an internet connection and a web browser. Used across the University of Arizona Libraries campuses, Carrier can display any web-based content, allowing users to promote new library collections and services via images, web pages, or videos. Users can easily manage the order in which slides are delivered, control how long slides are displayed, set dates for when slides should be shown, and even specify the locations where slides should be presented. &lt;br /&gt;
 &lt;br /&gt;
In addition to marketing purposes, Carrier can be used to send both low and high priority alerts to patrons. Alerts can be sent through the administrative interface, via RSS feeds, and even through a Twitter feed, allowing for easy integration with existing campus emergency notification systems.&lt;br /&gt;
 &lt;br /&gt;
I will describe the technical underpinnings of Carrier, challenges that we’ve faced since its implementation, enhancements planned for the next release of the software, and discuss our plans for releasing this software for others to use '''for free'''.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== We Built It.  They Came.  Now What? ==&lt;br /&gt;
 &lt;br /&gt;
* [[User:evviva|Evviva Weinraub]], Oregon State University, evviva.weinraub@oregonstate.edu&lt;br /&gt;
 &lt;br /&gt;
You have a great idea for something new or useful.  You build it, put it out there on GitHub, do a couple of presentations, maybe a press release and BAM, suddenly you’ve created a successful Open Source tool that others are using.  Great!&lt;br /&gt;
&lt;br /&gt;
Fast-forward 3 years. &lt;br /&gt;
&lt;br /&gt;
You still believe in the product, but you can no longer be solely responsible for taking care of it.  Just putting it out there has made it a tool others use, but how do you find a community of folks who believe in the product as much as you do and are willing to commit the time and energy to building, sustaining, and moving the project forward?  And how do you figure out whether you should bother trying?&lt;br /&gt;
&lt;br /&gt;
In 2006, OSU Libraries built an Interactive Course Assignment system called Library a la Carte – think LibGuides only Open Source.  We now find ourselves in just this predicament.  &lt;br /&gt;
&lt;br /&gt;
What can we do as a community to move beyond our build-first-ask-questions-later mentality and embed sustainability into our new and existing ideas and products without moving toward commercialization?  I fully expect we’ll end up with more questions than answers, but let’s spend some time talking about our predicament and yours, and think about how we can come out the other side. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Contextually Rich Collections Without the Risk: Digital Forensics and Automated Data Triage for Digital Collections ==&lt;br /&gt;
&lt;br /&gt;
* [[User:kamwoods|Kam Woods]], University of North Carolina at Chapel Hill, kamwoods@email.unc.edu&lt;br /&gt;
* Cal Lee, University of North Carolina at Chapel Hill, callee -- at -- ils -- unc -- edu&lt;br /&gt;
* Matthew Kirschenbaum, University of Maryland, mkirschenbaum@gmail.com&lt;br /&gt;
&lt;br /&gt;
Digital libraries and archives are increasingly faced with a significant backlog of unprocessed data along with an accelerating stream of incoming material. These data often arrive from donor organizations, institutions, and individuals on hard drives, optical and magnetic disks, flash memory devices, and even complete hardware (traditional desktop computers and mobile systems). &lt;br /&gt;
&lt;br /&gt;
Information on these devices may be sensitive, obscured by operating system arcana, or require specialized tools and procedures to parse. Furthermore, the sheer volume of materials being handled means that even simple tasks such as providing useful content reports can be impractical (or impossible) in current workflows.&lt;br /&gt;
&lt;br /&gt;
Many of the tasks currently associated with data triage and analysis can be simplified and performed with improved coverage and accuracy through the use of open source digital forensics tools. In this talk we will discuss recent developments in providing digital librarians and archivists with simple, open source tools to accomplish these tasks.  We will discuss tools and methods being tested, developed, and packaged as part of the [http://bitcurator.net BitCurator] project.  These tools can be used to reduce or eliminate laborious, error-prone tasks in existing workflows and put valuable time back into the hands of digital librarians and archivists -- time better used to identify and tackle complex tasks that *cannot* be solved by software.&lt;br /&gt;
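To illustrate the kind of task such tools automate, here is a minimal content-report sketch in plain Python (a generic illustration, not part of BitCurator or its actual toolset): it walks a directory tree, such as a mounted disk image, and tallies file counts and sizes per file extension.&lt;br /&gt;

```python
import os
from collections import Counter

def content_report(root):
    """Walk a directory tree (e.g. a mounted disk image) and
    tally file counts and total bytes per file extension."""
    counts, sizes = Counter(), Counter()
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            ext = os.path.splitext(name)[1].lower() or "(none)"
            path = os.path.join(dirpath, name)
            try:
                size = os.path.getsize(path)
            except OSError:
                continue  # unreadable entry; skip rather than abort
            counts[ext] += 1
            sizes[ext] += size
    return counts, sizes
```

Even a report this simple answers the "what is on this drive?" question that is impractical to answer by hand at scale; real forensics tools add format identification, sensitive-data scanning, and provenance capture on top of it.&lt;br /&gt;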
&lt;br /&gt;
== Finding Movies with FRBR and Facets ==&lt;br /&gt;
 &lt;br /&gt;
* Kelley McGrath, University of Oregon, kelleym@uoregon.edu&lt;br /&gt;
&lt;br /&gt;
How might the Functional Requirements for Bibliographic Records (FRBR) model and faceted navigation improve access to film and video in libraries? I will describe the design and implementation of a FRBR-inspired prototype discovery interface ([http://blazing-sunset-24.heroku.com/ http://blazing-sunset-24.heroku.com/]) using Solr and Blacklight. This approach demonstrates how FRBR can enable a work-centric view that is focused on the original movie or program while supporting users in selecting an appropriate version.&lt;br /&gt;
&lt;br /&gt;
The prototype features two sets of facets, which independently address two important information needs: (1) &amp;quot;What kind of movie or program do you want to watch?&amp;quot; (e.g., a 1970s TV sitcom, something directed by Kurosawa, or an early German horror film); (2) &amp;quot;How do you want to watch it? Where do you want to get it from?&amp;quot; (e.g., on Blu-ray, with Spanish subtitles, available at the local public library). This structure enables patrons to narrow, broaden and pivot across facet values instead of limiting them to the tree-structured hierarchy common with existing FRBR applications. &lt;br /&gt;
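In Solr terms, the two independent facet groups can be requested together in a single query; the following sketch shows illustrative facet parameters (the field names here are hypothetical, not the prototype's actual schema).&lt;br /&gt;

```python
# Illustrative Solr facet parameters for the two facet groups.
# All field names (decade, director, country, genre, format,
# subtitle_language, holding_library) are hypothetical examples.
work_facets = ["decade", "director", "country", "genre"]            # what to watch
version_facets = ["format", "subtitle_language", "holding_library"]  # how/where to get it

params = {
    "q": "*:*",
    "facet": "true",
    "facet.field": work_facets + version_facets,
    "facet.mincount": 1,  # hide facet values with no matching records
}
```

Because both groups are ordinary Solr fields, a patron can constrain a version facet without first drilling down through work-level facets, which is what allows the narrow/broaden/pivot behavior described above.&lt;br /&gt;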
&lt;br /&gt;
This type of interface requires controlled data values mapped to FRBR group 1 entities, which in many cases are not available in existing MARC bibliographic records. I will discuss ongoing work using the XC Metadata Services Toolkit ([http://www.extensiblecatalog.org/ http://www.extensiblecatalog.org/]) to extract and normalize data from existing MARC records for videos in order to populate a FRBRized, faceted discovery interface.&lt;br /&gt;
&lt;br /&gt;
==Escaping the Black Box — Building a Platform to Foster Collaborative Innovation==&lt;br /&gt;
&lt;br /&gt;
* Karen Coombs, OCLC, coombsk@oclc.org&lt;br /&gt;
* Kathryn Harnish, OCLC harnishk@oclc.org&lt;br /&gt;
&lt;br /&gt;
Exposed Web services offer an unprecedented opportunity for collaborative innovation — that’s one of the hallmarks of Web-based services like Amazon, Google, and Facebook.  These environments are popular not only for their native feature sets, but also for the array of community-developed apps that can run in them.  The creativity of the development communities that work in these systems brings new value to all types of users.&lt;br /&gt;
&lt;br /&gt;
What if the library community could realize this same level of collaborative innovation around its systems?  What kinds of support would be necessary to transform library systems from “black boxes” to more open, accessible environments in which value is created and multiplied by the user community?&lt;br /&gt;
&lt;br /&gt;
In this session, we’ll discuss the challenges and opportunities OCLC faced in creating just that kind of environment.  The recently-released OCLC “cooperative platform” provides improved access to a wide variety of OCLC’s data and services, allowing library developers and other interested partners to collaborate, innovate, and share new solutions with fellow libraries.  We’ll describe the open standards and technologies we’ve put in play as we:&lt;br /&gt;
* exposed robust Web services that provide access to both data and business logic; &lt;br /&gt;
* created an architecture for integrating community-built applications in OCLC (and other) products; and &lt;br /&gt;
* developed an infrastructure to support community development, collaboration, and app sharing&lt;br /&gt;
&lt;br /&gt;
Learn how OCLC is helping to open the “black box” -- and give libraries the freedom to become true partners in the evolution of their library systems.&lt;br /&gt;
&lt;br /&gt;
== Code inheritance; or, The Ghosts of Perls Past  ==&lt;br /&gt;
&lt;br /&gt;
* Jon Gorman, University of Illinois, jtgorman@illinois.edu&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Any organization has a history not found in its archives or museums. Mysteries exist whose origins are lost to collective institutional knowledge.  Despite what humans have forgotten, our servers and computers keep running. Instructions crafted long ago execute like digital ghosts following the orders of masters who have long since left.&lt;br /&gt;
&lt;br /&gt;
The University of Illinois has a fair amount of Perl code created by several different developers. This code includes software that handles our data feeds coming both in and out of campus, reports against our Voyager system, some web applications, and more.&lt;br /&gt;
&lt;br /&gt;
I'll touch a little on the historical legacy and why Perl is used. From there I'll share some tips, best practices, and some of the mistakes I've made in trying to maintain this code. Most of the advice will translate to any language, but the code and libraries discussed will be Perl. The presentation will also touch on some internal debate on whether or not to port parts of our Perl codebase.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Recorded Radio/TV broadcasts streamed for library users ==&lt;br /&gt;
&lt;br /&gt;
* Kåre Fiedler Christiansen, The State and University Library Denmark, kfc@statsbiblioteket.dk&lt;br /&gt;
* Mads Villadsen, The State and University Library Denmark, mv@statsbiblioteket.dk&lt;br /&gt;
&lt;br /&gt;
&amp;quot;Provide online access to the Radio/TV collection,&amp;quot; my boss said. About 500,000&lt;br /&gt;
hours of Danish broadcast radio and TV. Easy, right? Well, half a year later &lt;br /&gt;
we'd done it, but it turned out to involve practically every IT employee in the &lt;br /&gt;
library and quite a few non-technical people as well.&lt;br /&gt;
&lt;br /&gt;
Combining our Fedora-based DOMS repository system with our Lucene-based Summa&lt;br /&gt;
search system with our WAYF-based single sign-on system with an upgrade of our&lt;br /&gt;
SAN system for enough speed to deliver the content with an ffmpeg-based &lt;br /&gt;
transcoding workflow system with a Wowza-based streaming server, and sprinkling&lt;br /&gt;
it all with a nice user-friendly web frontend turned out to be quite a challenge,&lt;br /&gt;
but also one of the most engaging experiences in a long time.&lt;br /&gt;
&lt;br /&gt;
Of course we were immediately shut down, since the legal details weren't quite&lt;br /&gt;
as clear as we thought they were, but get an exclusive preview at &lt;br /&gt;
http://developer.statsbiblioteket.dk/kultur/ - username/password: code4lib.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Upgrading from Catalog to Discovery Environment: A Consortial Approach ==&lt;br /&gt;
 &lt;br /&gt;
* Spencer Lamm, Swarthmore College, slamm1@swarthmore.edu&lt;br /&gt;
* Chelsea Lobdell, Swarthmore College, clobdel1@swarthmore.edu&lt;br /&gt;
&lt;br /&gt;
Almost two years ago the Tri-College Consortium of Haverford, Swarthmore, and Bryn Mawr Colleges embarked upon a journey to provide an enhanced end-user experience and discoverability with our library applications. Our solution was to implement an integration of Ex Libris's Primo Central into Villanova's VuFind for a dual-channel searching experience. We present a case study of the collaborative and technical aspects of our process.&lt;br /&gt;
&lt;br /&gt;
At a high level we will describe our approach to project management and decision making.  We used a multi-tiered structure of working groups with an iterative design-feedback implementation cycle.  We will relay lessons learned from our experience: successes, failures, and unexpected hurdles.&lt;br /&gt;
&lt;br /&gt;
At a lower, technical level we will discuss the VuFind search module architecture; the workflow of creating a new search channel; a Primo API parser; and the data structures of the Primo API response and the Primo SearchObject. Time permitting, we will also outline how we modified VuFind's Innovative driver to work with our ILS.&lt;br /&gt;
&lt;br /&gt;
[[Category: Code4Lib2012]]&lt;/div&gt;</summary>
		<author><name>Chelociraptor</name></author>	</entry>

	</feed>