<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
		<id>https://wiki.code4lib.org/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Rchyla</id>
		<title>Code4Lib - User contributions [en]</title>
		<link rel="self" type="application/atom+xml" href="https://wiki.code4lib.org/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Rchyla"/>
		<link rel="alternate" type="text/html" href="https://wiki.code4lib.org/Special:Contributions/Rchyla"/>
		<updated>2026-04-10T01:34:23Z</updated>
		<subtitle>User contributions</subtitle>
		<generator>MediaWiki 1.26.2</generator>

	<entry>
		<id>https://wiki.code4lib.org/index.php?title=2014_Prepared_Talk_Proposals&amp;diff=39851</id>
		<title>2014 Prepared Talk Proposals</title>
		<link rel="alternate" type="text/html" href="https://wiki.code4lib.org/index.php?title=2014_Prepared_Talk_Proposals&amp;diff=39851"/>
				<updated>2013-11-08T20:14:23Z</updated>
		
		<summary type="html">&lt;p&gt;Rchyla: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;'''Proposals for Prepared Talks:'''&lt;br /&gt;
&lt;br /&gt;
Prepared talks are 20 minutes (including setup and questions), and should focus on one or more of the following areas:&lt;br /&gt;
 &lt;br /&gt;
* ''Projects'' you've worked on which incorporate innovative implementation of existing technologies and/or development of new software&lt;br /&gt;
* ''Tools and technologies'' – How to get the most out of existing tools, standards and protocols (and ideas on how to make them better)&lt;br /&gt;
* ''Technical issues'' - Big issues in library technology that should be addressed or better understood&lt;br /&gt;
* ''Relevant non-technical issues'' – Concerns of interest to the Code4Lib community which are not strictly technical in nature, e.g. collaboration, diversity, organizational challenges, etc.&lt;br /&gt;
&lt;br /&gt;
'''To Propose a Talk'''&lt;br /&gt;
* Log in to the wiki in order to submit a proposal. If you are not already registered, follow the instructions to do so.&lt;br /&gt;
* Provide a title and brief (500 words or fewer) description of your proposed talk.&lt;br /&gt;
* If you so choose, you may also indicate when, if ever, you have presented at a prior Code4Lib conference. This information is completely optional, but it may assist us in opening the conference to new presenters.&lt;br /&gt;
&lt;br /&gt;
As in past years, the Code4Lib community will vote on proposals that they would like to see included in the program. This year, however, only the top 10 proposals will be guaranteed a slot at the conference. Additional presentations will be selected by the Program Committee in an effort to ensure diversity in program content. Community votes will, of course, still weigh heavily in these decisions.&lt;br /&gt;
&lt;br /&gt;
Presenters whose proposals are selected for inclusion in the program will be guaranteed an opportunity to register for the conference. The standard conference registration fee will still apply.&lt;br /&gt;
&lt;br /&gt;
''Proposals can be submitted through '''Friday, November 8, 2013, at 5pm PST'''''. Voting will commence on November 18, 2013 and continue through December 6, 2013. The final line-up of presentations will be announced in early January, 2014.&lt;br /&gt;
&lt;br /&gt;
'''Talk Proposals'''&lt;br /&gt;
&lt;br /&gt;
==Creating a new Greek-Dutch dictionary==&lt;br /&gt;
* Caspar Treijtel, University of Amsterdam, c.treijtel@uva.nl&lt;br /&gt;
&lt;br /&gt;
At present, no complete dictionary of (ancient) Greek-Dutch is available online. A new dictionary is currently under construction at Leiden University, with software being developed at the University of Amsterdam. The team in Leiden has already begun preparing the data, with about 6,000 lemmas approved so far. The ultimate goal is to produce both a print version and an online open-access version from the same source documents. The software needed for this was developed in a project funded by CLARIN-NL.&lt;br /&gt;
&lt;br /&gt;
Migrator&lt;br /&gt;
&lt;br /&gt;
For the production of lemmas we have implemented an advanced workflow. The (generally non-technical) users create lemmas in MS Word, which is both familiar and easy to use. We have developed a custom software module that carefully migrates the Word documents into deeply structured XML by analyzing the structure and semantics of the lemmas, falling back on heuristics in ambiguous cases. While we initially envisioned the oXygen XML Author component as the main tool for creating new lemmas, we obtained excellent results with the migrator module and therefore decided to continue using MS Word as the primary composition tool. The main advantage of this is that the editors are much more familiar with Word than with any other WYSIWYG editor. Lemmas that have been migrated to XML are stored in an XML database and can be further edited using oXygen XML Author.&lt;br /&gt;
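The style-based migration idea can be sketched in a few lines. The rules, element names, and input format below are invented for illustration; the actual module analyzes real Word documents rather than simple (style, text) pairs.&lt;br /&gt;

```python
# Hypothetical sketch of the migrator's core idea: map styled runs from a
# Word document to structured XML elements, with a heuristic fallback for
# unstyled runs. (All names and rules here are illustrative.)
import xml.etree.ElementTree as ET

# Illustrative style-to-element rules for a dictionary lemma.
STYLE_MAP = {
    "Headword": "form",
    "Grammar": "pos",
    "Translation": "sense",
}

def migrate_runs(runs):
    """Turn a list of (style, text) runs into a structured lemma element."""
    lemma = ET.Element("lemma")
    for style, text in runs:
        tag = STYLE_MAP.get(style)
        if tag is None:
            # Heuristic fallback: runs containing Greek characters are
            # treated as the headword, Latin text as a translation.
            tag = "form" if any(ord(ch) > 0x370 for ch in text) else "sense"
        ET.SubElement(lemma, tag).text = text
    return lemma

lemma = migrate_runs([("Headword", "λύω"), (None, "to loosen, release")])
```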
&lt;br /&gt;
Lemmatizer&lt;br /&gt;
&lt;br /&gt;
Greek morphology is complicated. To use a dictionary effectively, a user needs a rather high level of initial language competence to relate the word form he or she finds in a text to the correct basic lemma form, where the definition of the word can be found. Using a Greek morphological database, we have been able to facilitate the search for lemmas: a ‘lemmatizer’ module gives the possible parsings of a word form and the lemmas it can be derived from. This enables the user to type in the word as found in the text and be redirected to the correct lemma.&lt;br /&gt;
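The lemmatizer's lookup can be illustrated with a toy table. The entries below are made up for the sketch; the real module draws on a full morphological database.&lt;br /&gt;

```python
# A toy illustration of the lemmatizer idea: a morphological database maps
# inflected Greek forms to their possible parses and the lemmas they
# derive from. (These three entries are invented examples.)
MORPH_DB = {
    "λύει": [("pres ind act 3rd sg", "λύω")],
    "λόγους": [("masc acc pl", "λόγος")],
    "ἦν": [("impf ind act 3rd sg", "εἰμί")],
}

def lemmatize(form):
    """Return the candidate (parse, lemma) pairs for an inflected form."""
    return MORPH_DB.get(form, [])

# The reader types the form found in the text and is sent to the lemma:
for parse, lemma in lemmatize("λύει"):
    print(lemma, "-", parse)
```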
&lt;br /&gt;
Visualization&lt;br /&gt;
&lt;br /&gt;
For the online dictionary we have implemented a visualization module that allows the user to view multiple lemmas at once. This module is built with the JavaScript framework MooTools. The result is a viewer that performs well and is backed by maintainable JavaScript code.&lt;br /&gt;
&lt;br /&gt;
The online dictionary is still under development; have a look at http://www.woordenboekgrieks.nl/ for the beta version. A newer test version with additional features can be found at http://angel.ic.uva.nl:8600/.&lt;br /&gt;
&lt;br /&gt;
Credits&lt;br /&gt;
&lt;br /&gt;
* construction of the dictionary: Prof. Ineke Sluiter, Classics department of Leiden University; Prof. Albert Rijksbaron, University of Amsterdam&lt;br /&gt;
* publisher of the dictionary: Amsterdam University Press&lt;br /&gt;
* design/typesetting dictionary: TaT Zetwerk (http://www.tatzetwerk.nl/)&lt;br /&gt;
* software development: Digital Production Center, University Library, University of Amsterdam&lt;br /&gt;
* project funding: CLARIN-NL (http://www.clarin.nl/)&lt;br /&gt;
* morphological database for use by the lemmatizer: courtesy of Prof. Helma Dik, University of Chicago (based on data of the Perseus Project)&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
== Using Drupal to drive alternative presentation systems ==&lt;br /&gt;
 &lt;br /&gt;
* [[User:Highermath|Cary Gordon]], The Cherry Hill Company, cgordon@chillco.com&lt;br /&gt;
&lt;br /&gt;
Recently, we have been building systems that use angular.js, Rails, or other systems for presentation, while leveraging Drupal's sophisticated content management capabilities on the back end.&lt;br /&gt;
&lt;br /&gt;
So far, these have been one-way systems, but as we move to Drupal 8 we are beginning to explore ways to further decouple the presentation and CMS functions.&lt;br /&gt;
&lt;br /&gt;
== A Book, a Web Browser and a Tablet: How Bibliotheca Alexandrina's Book Viewer Framework Makes It Possible ==&lt;br /&gt;
 &lt;br /&gt;
* [[User:Mohammed.abuouda|Mohammed Abu ouda]], Bibliotheca Alexandrina (The new Library of Alexandria)&lt;br /&gt;
&lt;br /&gt;
Many institutions around the world are engaged in digitization projects aimed at preserving the human knowledge held in books and making it available, through multiple channels, to people around the globe. These efforts will surely help close the digital divide, particularly with the arrival of affordable e-readers, mobile phones and network coverage. However, the digital reading experience has not yet reached its full potential. Many readers miss features they like in their good old books and wish to find them in their digital counterparts. In an attempt to create a unique digital reading experience, Bibliotheca Alexandrina (BA) created a flexible book viewing framework that is currently used to access its collection of more than 300,000 digital books in five languages, including the largest collection of digitized Arabic books.&lt;br /&gt;
&lt;br /&gt;
Using open source tools, BA used the framework to develop a modular book viewer that can be deployed in different environments and is currently at the heart of various BA projects. The book viewer provides several features that create a more natural reading experience. As with physical books, readers can personalize the books they read by adding annotations such as highlights, underlines and sticky notes to capture their thoughts and ideas, and they can share the book with friends on social networks. The reader can also search across the content of the book, with search results highlighted within its pages. More features can be added to the book viewer through its plugin architecture.&lt;br /&gt;
&lt;br /&gt;
== Structured data NOW: seeding schema.org in library systems ==&lt;br /&gt;
 &lt;br /&gt;
* [http://coffeecode.net Dan Scott], Laurentian University&lt;br /&gt;
** Previous code4lib presentations: [https://archive.org/details/code4lib.conf.2008.pres.CouchDBsacrilege CouchDB is sacrilege... mmm, delicious sacrilege] at Code4Lib 2008&lt;br /&gt;
&lt;br /&gt;
The semantic web, linked data, and structured data are all fantastic ideas with a barrier imposed by implementation constraints. It does not matter how enthused a given library might be about publishing structured data: if its system does not allow customizations, or the institution lacks the skilled staff, it will not happen. However, if the software in use simply publishes structured data by default, then the web will be populated for free. Really! No extra resources necessary.&lt;br /&gt;
&lt;br /&gt;
This presentation highlights Dan's work with systems such as Evergreen, Koha, and VuFind to enable the publication of schema.org structured data out-of-the-box. Along the way, we reflect on the current state of the W3C Schema.org Bibliographic Extension community group's efforts to shape the evolution of the schema.org vocabulary. Finally, hold on tight as we contemplate next steps and the possibilities of a world where structured data is the norm on the web.&lt;br /&gt;
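As a flavour of what publishing structured data by default means, here is a catalogue record expressed in the schema.org vocabulary as JSON-LD. The record itself is invented, and JSON-LD is chosen here only to keep the sketch free of HTML markup; catalogue pages typically embed the same vocabulary directly in their HTML.&lt;br /&gt;

```python
# Serialize an (invented) catalogue record as schema.org JSON-LD.
import json

record = {
    "@context": "http://schema.org",
    "@type": "Book",
    "name": "An Example Catalogue Record",
    "author": {"@type": "Person", "name": "A. N. Author"},
    "numberOfPages": 256,
}

jsonld = json.dumps(record, indent=2)
print(jsonld)
```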
&lt;br /&gt;
== Towards Pasta Code Nirvana: Using JavaScript MVC to Fill Your Programming Ravioli ==&lt;br /&gt;
&lt;br /&gt;
* Bret Davidson, North Carolina State University Libraries, bret_davidson@ncsu.edu&lt;br /&gt;
** Previous Code4Lib Presentations: [http://wiki.code4lib.org/index.php/2013_talks_proposals#Data-Driven_Documents:_Visualizing_library_data_with_D3.js Visualizing library data with D3.js] at Code4Lib 2013&lt;br /&gt;
&lt;br /&gt;
JavaScript MVC frameworks are ushering in a golden age of robust and responsive web applications that take advantage of evergreen browsers, performant JS engines, and the unprecedented reach provided by billions of personal computing devices. The web browser has emerged as the world’s most popular application runtime, and the complexity[1] and scope of JavaScript applications have exploded accordingly. Server-side web frameworks like Rails and Django have helped developers adhere to best practices like modularity, dependency injection, and unit testing for years, practices that are now being applied to JavaScript development through projects like Backbone[2], Ember[3], and Angular[4].&lt;br /&gt;
&lt;br /&gt;
This talk will discuss the issues JavaScript MVC frameworks are trying to solve, common features like data binding, implications for the future of web development[5], and the appropriateness of JavaScript MVC for library applications.&lt;br /&gt;
&lt;br /&gt;
*[1]http://en.wikipedia.org/wiki/Spaghetti_code&lt;br /&gt;
*[2]http://backbonejs.org&lt;br /&gt;
*[3]http://emberjs.com&lt;br /&gt;
*[4]http://angularjs.org&lt;br /&gt;
*[5]http://tomdale.net/2013/09/progressive-enhancement-is-dead/&lt;br /&gt;
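The data binding these frameworks share can be boiled down to an observer pattern: when a model property changes, every bound view is re-rendered automatically. The sketch below is a framework-free illustration of the idea, not any particular framework's API.&lt;br /&gt;

```python
# A minimal sketch of one-way data binding: setting a model field
# notifies every bound render callback. (Pure illustration.)
class Observable:
    def __init__(self, **fields):
        object.__setattr__(self, "_fields", dict(fields))
        object.__setattr__(self, "_watchers", [])

    def bind(self, render):
        self._watchers.append(render)
        render(self._fields)            # render once with the initial state

    def __getattr__(self, name):
        return self._fields[name]

    def __setattr__(self, name, value):
        self._fields[name] = value
        for render in self._watchers:   # notify every bound view
            render(self._fields)

model = Observable(title="Code4Lib 2014")
seen = []
model.bind(lambda fields: seen.append(fields["title"]))
model.title = "Code4Lib 2014, Raleigh"  # triggers a re-render
```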
&lt;br /&gt;
== WebSockets for Real-Time and Interactive Interfaces ==&lt;br /&gt;
&lt;br /&gt;
* [http://ronallo.com Jason Ronallo], NCSU Libraries, jason_ronallo@ncsu.edu&lt;br /&gt;
&lt;br /&gt;
Previous Code4Lib presentations:&lt;br /&gt;
* [http://code4lib.org/conference/2012/ronallo HTML5 Microdata and Schema.org] 2012&lt;br /&gt;
* [http://code4lib.org/conference/2013/ronallo HTML5 Video Now!] 2013&lt;br /&gt;
&lt;br /&gt;
Watching the Google Analytics Real-Time dashboard for the first time was mesmerizing. As soon as someone visited a site, I could see what page they were on. For a digital collections site with a lot of images, it was fun to see what visitors were looking at. But getting from Google Analytics to the image or other content of what was currently being viewed was cumbersome. The real-time experience was something I wanted to share with others. I'll show you how I used a WebSocket service to create a real-time interface to digital collections.&lt;br /&gt;
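At its core, such a real-time interface is publish/subscribe fan-out: every page-view event is pushed to all connected dashboard clients. The sketch below models the hub in plain Python, with callbacks standing in for WebSocket connections (illustrative only, not the actual service).&lt;br /&gt;

```python
# An in-memory fan-out hub: publish() pushes each event to every
# subscriber. In production each subscriber would be a WebSocket
# connection's send() rather than a plain callback.
class RealTimeHub:
    def __init__(self):
        self.subscribers = []

    def subscribe(self, send):
        """Register a client; 'send' stands in for a WebSocket send()."""
        self.subscribers.append(send)

    def publish(self, event):
        """Broadcast a page-view event to every connected client."""
        for send in list(self.subscribers):
            send(event)

hub = RealTimeHub()
received = []
hub.subscribe(received.append)
hub.publish({"page": "/catalog/123", "thumbnail": "123.jpg"})
```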
&lt;br /&gt;
In the Hunt Library at NCSU we have some large video walls. I wanted to make HTML-based exhibits that featured viewer interactions. I'll show you how I converted Listen to Wikipedia [1] into a bring-your-own-device interactive exhibit. With WebSockets, any HTML page can be remote-controlled by any internet-connected device.&lt;br /&gt;
&lt;br /&gt;
I will attempt to include real-time audience participation.&lt;br /&gt;
&lt;br /&gt;
[1] http://listen.hatnote.com/&lt;br /&gt;
&lt;br /&gt;
== Rapid Development of Automated Tasks with the File Analyzer ==&lt;br /&gt;
&lt;br /&gt;
* Terry Brady, Georgetown University Libraries, twb27@georgetown.edu&lt;br /&gt;
&lt;br /&gt;
The Georgetown University Libraries have customized the File Analyzer and Metadata Harvester application (https://github.com/Georgetown-University-Libraries/File-Analyzer) to solve a number of library automation challenges:&lt;br /&gt;
* validating digitized and reformatted files&lt;br /&gt;
* validating vendor statistics for COUNTER compliance&lt;br /&gt;
* preparing collections of digital files for archiving and ingest&lt;br /&gt;
* manipulating ILS import and export files&lt;br /&gt;
&lt;br /&gt;
The File Analyzer application was used by the US National Archives to validate 3.5 million digitized images from the 1940 Census.  After implementing a customized ingest workflow within the File Analyzer, the Georgetown University Libraries were able to process an ingest backlog of over a thousand files of digital resources into DigitalGeorgetown, the Libraries’ Digital Collections and Institutional Repository platform.  Georgetown is currently developing customized workflows that integrate Apache Tika, BagIt, and MARC conversion utilities.&lt;br /&gt;
&lt;br /&gt;
The File Analyzer is a desktop application with a powerful framework for implementing customized file validation and transformation rules.  As new rules are deployed, they are presented to users within a user interface that is easy (and powerful) to use.&lt;br /&gt;
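The rule-plus-registry pattern described above might look roughly like this. This is a hypothetical Python sketch with invented rule names; the real File Analyzer is a Java desktop application.&lt;br /&gt;

```python
# A registry of validation rules: registering a new rule makes it part of
# every subsequent validation run, mirroring how new File Analyzer rules
# appear in its UI once deployed. (Rule names are invented.)
RULES = []

def rule(func):
    """Register a validation rule; each rule returns a list of problems."""
    RULES.append(func)
    return func

@rule
def has_checksum_sidecar(files):
    return [] if "manifest-md5.txt" in files else ["missing checksum manifest"]

@rule
def tiff_masters_present(files):
    tiffs = [f for f in files if f.endswith(".tif")]
    return [] if tiffs else ["no TIFF master images found"]

def validate(files):
    problems = []
    for check in RULES:
        problems.extend(check(files))
    return problems

report = validate(["0001.tif", "0001.jpg"])
```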
&lt;br /&gt;
Learn about the functionality that is available for download, see how you can use this tool to automate workflows from digital collections to ILS ingests to electronic resource statistics, and discuss opportunities to collaborate on enhancements to this application!&lt;br /&gt;
&lt;br /&gt;
== GeoHydra: How to Build a Geospatial Digital Library with Fedora ==&lt;br /&gt;
 &lt;br /&gt;
* [http://stanford.edu/~drh Darren Hardy], Stanford University, drh@stanford.edu&lt;br /&gt;
&lt;br /&gt;
Geographically rich data are exploding, striking fear into those tasked with integrating them into existing digital library infrastructures. Building a spatial data infrastructure that integrates with your digital library infrastructure need not be a daunting task. We have successfully deployed a geospatial digital library infrastructure using Fedora and open-source geospatial software [1]. We'll discuss the primary design decisions and technologies that led to a production deployment within a few months. Briefly, our architecture revolves around discovery, delivery, and metadata pipelines using the open-source OpenGeoPortal [2], Solr [3], GeoServer [4], PostGIS [5], and GeoNetwork [6] technologies, plus the proprietary ESRI ArcMap [7] -- the GIS industry's workhorse. Finally, we'll discuss the key skillsets needed to build and maintain a spatial data infrastructure.&lt;br /&gt;
&lt;br /&gt;
[1] http://foss4g.org&lt;br /&gt;
[2] http://opengeoportal.org&lt;br /&gt;
[3] http://lucene.apache.org/solr&lt;br /&gt;
[4] http://geoserver.org&lt;br /&gt;
[5] http://postgis.net&lt;br /&gt;
[6] http://geonetwork-opensource.org&lt;br /&gt;
[7] http://esri.com&lt;br /&gt;
&lt;br /&gt;
==Under the Hood of Hadoop Processing at OCLC Research ==&lt;br /&gt;
&lt;br /&gt;
[http://roytennant.com/ Roy Tennant]&lt;br /&gt;
&lt;br /&gt;
* Previous Code4Lib presentations: 2006: &amp;quot;The Case for Code4Lib 501c(3)&amp;quot;&lt;br /&gt;
&lt;br /&gt;
[http://hadoop.apache.org/ Apache Hadoop], an open-source implementation of Google's MapReduce, is widely used by Yahoo! and many others to process massive amounts of data quickly. OCLC Research uses a 40-node compute cluster with Hadoop and HBase to process the 300 million MARC records of WorldCat in various ways. This presentation will explain how Hadoop MapReduce works and illustrate it with specific examples and code. The role of the jobtracker in both monitoring and reporting on processes will be explained. String searching WorldCat will also be demonstrated live.&lt;br /&gt;
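The MapReduce flow can be made concrete with the canonical word count, written here in plain Python: map emits (key, 1) pairs, the shuffle groups them by key, and reduce sums each group. Hadoop distributes exactly this flow across a cluster; this toy version is an illustration, not OCLC's code.&lt;br /&gt;

```python
# Word count as map / shuffle / reduce, the model Hadoop parallelizes.
from collections import defaultdict

def map_phase(records):
    # Map: emit a (word, 1) pair for every word in every record.
    for record in records:
        for word in record.split():
            yield (word.lower(), 1)

def shuffle(pairs):
    # Shuffle: group all values by key (Hadoop does this between phases).
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    # Reduce: combine each key's values into a single result.
    return {key: sum(values) for key, values in groups.items()}

titles = ["The Hobbit", "The Silmarillion"]
counts = reduce_phase(shuffle(map_phase(titles)))
```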
&lt;br /&gt;
== Quick and Easy Data Visualization with Google Visualization API and Google Chart Libraries ==&lt;br /&gt;
 &lt;br /&gt;
[http://bohyunkim.net/blog Bohyun Kim], Florida International University, bohyun.kim@fiu.edu&lt;br /&gt;
* No previous Code4Lib presentations&lt;br /&gt;
&lt;br /&gt;
Does most of the data that your library collects stay in spreadsheets or get published as static tables of boring numbers? Do your library stakeholders spend more time collecting the data than using it as a decision-making tool because the data is presented in a way that makes it hard for them [http://developers.google.com/chart/interactive/docs/gallery to quickly grasp its significance]?&lt;br /&gt;
&lt;br /&gt;
This talk will provide an overview of the [http://developers.google.com/chart/interactive/docs/reference Google Visualization API] and [http://developers.google.com/chart/ Google Chart Libraries] to get you started on the way to quickly querying and visualizing your library data from remote data sources (e.g. a Google Spreadsheet or your own database) with (or without) cool-looking user controls, animation effects, and even a dashboard.&lt;br /&gt;
&lt;br /&gt;
== Leap Motion + Rare Books: A hands-free way to view and interact with rare books in 3D ==&lt;br /&gt;
 &lt;br /&gt;
[http://www.youtube.com/user/jpdenzer Juan Denzer], Binghamton University, jdenzer@binghamton.edu&lt;br /&gt;
* No previous Code4Lib presentations&lt;br /&gt;
&lt;br /&gt;
As rare books become more delicate over time, making them available to the public becomes harder.  We at Binghamton University Library have developed an application that makes it easier to view rare books without ever having to touch them.  We have combined the Leap Motion hands-free device and 3D rendered models to create a new virtual experience for the viewer.&lt;br /&gt;
&lt;br /&gt;
The application allows the user to rotate and zoom in on a 3D representation of a rare book.  The user can also ‘open’ the virtual book and flip through it using a natural user interface, such as swiping a hand left or right to turn the page.&lt;br /&gt;
&lt;br /&gt;
The application is built on the .NET Framework and is written in C#.  3D models are created using simple 3D software such as SketchUp or Blender.  Scans of the book cover and spine are created using simple flatbed scanners.  The inside pages are scanned using overhead scanners. &lt;br /&gt;
&lt;br /&gt;
This talk will discuss the technologies used in developing the application and show how virtually any library could implement it with almost no coding at all. The presentation will include a demonstration of the software and a chance for audience members to experience the Rare Book Leap Motion App themselves.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Course Reserves Unleashed! ==&lt;br /&gt;
 &lt;br /&gt;
* Bobbi Fox, Library Technology Services, Harvard University, bobbi_fox@harvard.edu&lt;br /&gt;
* Gloria Korsman, Andover-Harvard Theological Library&lt;br /&gt;
** No previous Code4Lib presentations &lt;br /&gt;
&lt;br /&gt;
Hey kids!  Remember when SOAP was used for something other than washing?  Our sophisticated (and highly functional) Course Reserves Request system does!&lt;br /&gt;
&lt;br /&gt;
However, while the system is great for submitting and processing course reserve requests, the student-facing presentation through Harvard’s home-grown -- and soon to be replaced -- LMS leaves a lot to be desired.  &lt;br /&gt;
&lt;br /&gt;
Follow along as we leverage Solr 4 as a No-SQL database, along with more progressive RESTful API techniques, to release Reserves data into the wild without interfering with reserves request processing -- and, in the process, open up the opportunity for other schools to feed their data in as well.&lt;br /&gt;
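Reading reserves data back out of Solr then amounts to a parameterized GET against the select handler. In this sketch the host, core, and field names are invented; only the request URL is built, leaving the actual fetch to an HTTP client.&lt;br /&gt;

```python
# Build a Solr select URL for a (hypothetical) course reserves core.
from urllib.parse import urlencode

def reserves_query(course_number, rows=20):
    params = {
        "q": "course_number:" + course_number,   # invented field name
        "fl": "title,author,call_number",        # fields to return
        "wt": "json",                            # response format
        "rows": rows,
    }
    return "http://solr.example.edu/solr/reserves/select?" + urlencode(params)

url = reserves_query("HDS-2233")
```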
&lt;br /&gt;
== We Are All Disabled! Universal Web Design Making Web Services Accessible for Everyone ==&lt;br /&gt;
 &lt;br /&gt;
Cynthia Ng, Accessibility Librarian, CILS at Langara College&lt;br /&gt;
* No previous Code4Lib presentations (not counting lightning talks)&lt;br /&gt;
&lt;br /&gt;
We’re building and improving tools and services all the time, but do you develop only for the “average” user, or add things for “disabled” users? We all use “assistive” technology, accessing information in a multitude of ways with different platforms, devices, etc. Let’s focus on providing web services that are accessible to everyone without being onerous or ugly. The aim is to get you thinking about what you can do to make web-based services and content more accessible for all, from the beginning or with small amounts of effort, whether you're a developer or not.&lt;br /&gt;
&lt;br /&gt;
The goal of the presentation is to provide both developers and content creators with simple, practical ways to make web content and web services more accessible. Rather than thinking about putting in extra effort or making adjustments for those with disabilities, I want to help people think about how to make their websites more accessible for all users through universal web design.&lt;br /&gt;
&lt;br /&gt;
== Personalize your Google Analytics Data with Custom Events and Variables ==&lt;br /&gt;
&lt;br /&gt;
[http://joshwilson.net Josh Wilson], Systems Integration Librarian, State Library of North Carolina - joshwilsonnc@gmail.com&lt;br /&gt;
&lt;br /&gt;
At the State Library of North Carolina, we had more specific questions about the use of our digital collections than standard GA could provide. A few implementations of custom events and custom variables later, we have our answers.&lt;br /&gt;
&lt;br /&gt;
I'll demonstrate how these analytics add-ons work, and why implementation can sometimes be more complicated than just adding a few lines of JavaScript to your ga.js. I'll discuss some specific examples in use at the SLNC:&lt;br /&gt;
&lt;br /&gt;
* Capturing the content of specific metadata fields in CONTENTdm as Custom Events &lt;br /&gt;
* Recording Drupal taxonomy terms as Custom Variables&lt;br /&gt;
&lt;br /&gt;
In both instances, this data deepened our understanding of how our sites and collections were being used, and in turn, we were able to report usage more accurately to content contributors and other stakeholders.&lt;br /&gt;
&lt;br /&gt;
More on: [https://developers.google.com/analytics/devguides/collection/gajs/eventTrackerGuide GA Custom Events] | [https://developers.google.com/analytics/devguides/collection/gajs/gaTrackingCustomVariables GA Custom Variables]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Behold Fedora 4: The Incredible Shrinking Repository! ==&lt;br /&gt;
&lt;br /&gt;
Esmé Cowles, UC San Diego Library.  Previous talk: [http://code4lib.org/conference/2013/cowles-critchlow-westbrook All Teh Metadatas Re-Revisited] (2013)&lt;br /&gt;
&lt;br /&gt;
* One repository contains untold numbers of digital objects and powers many Hydra and Islandora apps&lt;br /&gt;
* It speaks RDF, but contains no triplestore! (triplestores sold separately, SPARQL Update may be involved, some restrictions apply)&lt;br /&gt;
* Flexible enough to tie itself in knots implementing storage and access control policies&lt;br /&gt;
* Witness feats of strength and scalability, with dramatically increased performance and clustering&lt;br /&gt;
* Plumb the depths of bottomless hierarchies, and marvel at the metadata woven into the very fabric of the repository&lt;br /&gt;
* Ponder the paradox of ingesting large files by not ingesting them&lt;br /&gt;
* Be amazed as Fedora 4 swallows other systems whole (including Fedora 3 repositories)&lt;br /&gt;
* Watch novice developers set up Fedora 4 from scratch, with just a handful of incantations to Git and Maven&lt;br /&gt;
&lt;br /&gt;
The Fedora Commons Repository is the foundation of many digital collections, e-research, digital library, archives, digital preservation, institutional repository and open access publishing systems.  This talk will focus on how Fedora 4 improves core repository functionality, adds new features, maintains backwards compatibility, and addresses the shortcomings of Fedora 3.&lt;br /&gt;
&lt;br /&gt;
== Organic Free-Range API Development - Making Web Services That You Will Actually Want to Consume ==&lt;br /&gt;
&lt;br /&gt;
Steve Meyer and Karen Coombs, OCLC&lt;br /&gt;
&lt;br /&gt;
Building web services can have great benefits by providing reusability of data and functionality. Underpinning your applications with a web service will allow you to write code once and support multiple environments: your library's web app, mobile applications, the embedded widget in your campus portal. However, building a web service is its own kind of artful programming. Doing it well requires attention to many of the same techniques and requirements as building web applications, though with different outcomes. &lt;br /&gt;
&lt;br /&gt;
So what are the usability principles for web services? How do you build a web service that you (and others) will actually want to use? In this talk, we’ll share some of the lessons learned - the good, the bad, and the ugly - through OCLC's work on the WorldCat Metadata API. This web service is a sophisticated API that provides external clients with read and write access to WorldCat data. It provides a model to help aspiring API creators navigate the potential complications of crafting a web service. We'll cover:&lt;br /&gt;
&lt;br /&gt;
* Loose coupling of data assets and resource-oriented data modeling at the core&lt;br /&gt;
* Coding to standards vs. exposure of an internal data model&lt;br /&gt;
* Authentication and security for web services: API Keys, Digital Signing, OAuth Flows&lt;br /&gt;
* Building web services that behave as a suite so it looks like the left hand knows what the right hand is doing&lt;br /&gt;
&lt;br /&gt;
So at the end of the day, your team will know your API is a very good egg after all. &lt;br /&gt;
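Of the authentication options listed above, digital signing can be sketched generically: client and server share a secret, and each request carries an HMAC over its significant parts. This shows the general pattern only, not OCLC's actual signing scheme; the key, URL, and signed fields are invented.&lt;br /&gt;

```python
# Generic HMAC request signing: the server recomputes the same digest
# from the shared secret and rejects requests whose signature differs.
import hashlib
import hmac

SECRET = b"shared-secret-key"  # invented; normally issued with the API key

def sign_request(method, url, timestamp, nonce):
    # Sign the parts of the request that must not be tampered with.
    message = "\n".join([method, url, timestamp, nonce]).encode("utf-8")
    return hmac.new(SECRET, message, hashlib.sha256).hexdigest()

signature = sign_request(
    "GET",
    "https://worldcat.example.org/bib/data/41266045",
    "1386000000",
    "df83a1b4",
)
```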
&lt;br /&gt;
If accepted, the presenters intend to produce and share a Quick Guide for building a web service that will reflect content presented in the talk.&lt;br /&gt;
&lt;br /&gt;
== Lucene's Latest (for Libraries) ==&lt;br /&gt;
&lt;br /&gt;
erik.hatcher@lucidworks.com&lt;br /&gt;
&lt;br /&gt;
Lucene powers the search capabilities of practically all library discovery platforms, by way of Solr, etc.  The Lucene project evolves rapidly, and it's a full-time job to keep up with the ever-improving features and scalability.  This talk will distill and showcase the most relevant(!) advancements to date.&lt;br /&gt;
&lt;br /&gt;
== The Why and How of Very Large Displays in Libraries. ==&lt;br /&gt;
&lt;br /&gt;
* Cory Lown, NCSU Libraries, cwlown@ncsu.edu&lt;br /&gt;
&lt;br /&gt;
Previous Code4Lib Presentations:&lt;br /&gt;
* [http://code4lib.org/conference/2012/lown How People Search the Library from a Single Search Box]  2012&lt;br /&gt;
* [http://code4lib.org/conference/2010/orphanides_lown_lynema Enhancing Discoverability with Virtual Shelf Browse] 2010&lt;br /&gt;
&lt;br /&gt;
Built into the walls of NC State's new Hunt Library are several [http://www.christiedigital.com/en-us/digital-signage/products/microtiles/pages/microtiles-digital-signage-video-wall.aspx Christie MicroTile Display Wall Systems]. What does a library do with a display that's seven feet tall and over twenty feet wide? I'll talk about why libraries might want large displays like this, what we're doing with them right now, and what we might do with them in the future. I'll talk about how these displays factor into planning for new and existing web projects. And I'll get into the fun details of how you build web applications that scale from the very small browser window on a phone all the way up to a browser window with about 14 million pixels (about 10 million more than a dual 24&amp;quot; monitor desktop setup).&lt;br /&gt;
&lt;br /&gt;
== Discovering your Discovery System in Real Time. ==&lt;br /&gt;
&lt;br /&gt;
* Godmar Back, Virginia Tech, gback@vt.edu&lt;br /&gt;
* Annette Bailey, Virginia Tech, afbailey@vt.edu&lt;br /&gt;
&lt;br /&gt;
Practically all libraries today provide web-based discovery systems to their users; users discover items and peruse or check them out by clicking on links. Unlike the traditional transaction of checking out a book at the circulation desk, this interaction is largely invisible. We have built a system that records users' interaction with Summon in real time, processes the resulting data with minimal delay, and visualizes it in various ways using Google Charts and various d3.js modules, such as word clouds, tree maps, and others.&lt;br /&gt;
&lt;br /&gt;
These visualizations can be embedded in web sites, but are also suitable for projection via large-scale displays or projectors right into the 'Learning Spaces' many libraries are being converted into. The goal of this talk is to share the technology and advocate the building of a cloud-based infrastructure that would make this technology available to any library that uses a discovery system, rather than just those who have the technological prowess to develop such systems and visualizations in-house.&lt;br /&gt;
&lt;br /&gt;
Previous presentations at Code4Lib:&lt;br /&gt;
* Talk: Code4Lib 2009 [http://code4lib.org/files/LibX2.0-Code4Lib-2009AsPresented.ppt LibX 2.0]&lt;br /&gt;
* Preconference: [http://wiki.code4lib.org/index.php/LibX_Preconference LibX 2.0, 2009]&lt;br /&gt;
* Preconference: Code4Lib 2010, On Widgets and Web Services&lt;br /&gt;
&lt;br /&gt;
== Your Library, Anywhere: A Modern, Responsive Library Catalogue at University of Toronto Libraries ==&lt;br /&gt;
&lt;br /&gt;
* Bilal Khalid, Gordon Belray, Lisa Gayhart (lisa.gayhart@utoronto.ca)&lt;br /&gt;
&lt;br /&gt;
* No previous Code4Lib presentations&lt;br /&gt;
&lt;br /&gt;
With the recent surge in the mobile device market and an ever expanding patron base with increasingly divergent levels of technical ability, the University of Toronto Libraries embarked on the development of a new catalogue discovery layer to fit the needs of its diverse users. &lt;br /&gt;
&lt;br /&gt;
[http://search.library.utoronto.ca The result]: a mobile-friendly, flexible, and intuitive web application, built on Responsive Web Design principles, that brings the full power of a faceted library catalogue to users without compromising quality or performance. This talk will discuss: application development; service improvements; interface design; and user outreach, testing, and project communications. Feedback and questions from the audience are very welcome. If time runs short, we will be available for questions and conversation after the presentation.&lt;br /&gt;
&lt;br /&gt;
Note: A version of this content has been provisionally accepted as an article for Code4Lib Journal, January 2014 publication.&lt;br /&gt;
&lt;br /&gt;
== All Tiled Up ==&lt;br /&gt;
&lt;br /&gt;
* Mike Graves, MIT Libraries (mgraves@mit.edu)&lt;br /&gt;
&lt;br /&gt;
You've got maps. You even scanned and georeferenced them. Now what? Running a full GIS stack can be expensive, and in some cases overkill. The good news is that you have far more options now than you did just a few years ago. I'd like to present some lighter-weight solutions for making georeferenced images available on the Web.&lt;br /&gt;
&lt;br /&gt;
This talk will provide an introduction to MBTiles. I'll go over what they are, how you create them, how you use them and why you would use them.&lt;br /&gt;
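To give a feel for why MBTiles are so lightweight: an MBTiles file is simply a SQLite database with a standard <code>tiles</code> table and TMS-style row numbering, as specified by the MBTiles spec. The sketch below (with invented sample data; the real talk will of course go deeper) builds a tiny in-memory MBTiles store and reads a tile back using the XYZ coordinates most slippy-map clients send:&lt;br /&gt;
&lt;br /&gt;
```python
import sqlite3

# An MBTiles file is just a SQLite database; ":memory:" stands in
# for a real .mbtiles file on disk.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE metadata (name TEXT, value TEXT);
    CREATE TABLE tiles (zoom_level INTEGER, tile_column INTEGER,
                        tile_row INTEGER, tile_data BLOB);
""")
conn.execute("INSERT INTO metadata VALUES ('name', 'Sample georeferenced map')")
conn.execute("INSERT INTO metadata VALUES ('format', 'png')")
# Store one fake tile at zoom 2, column 1, TMS row 2.
conn.execute("INSERT INTO tiles VALUES (2, 1, 2, ?)", (b"\x89PNG...fake bytes",))

def get_tile(conn, z, x, y):
    """Fetch a tile using web-mercator XYZ coordinates.

    MBTiles stores rows in TMS order, so the Y axis is flipped
    relative to the XYZ scheme most web map clients use.
    """
    tms_row = (2 ** z) - 1 - y
    row = conn.execute(
        "SELECT tile_data FROM tiles WHERE zoom_level=? "
        "AND tile_column=? AND tile_row=?",
        (z, x, tms_row),
    ).fetchone()
    return row[0] if row else None

tile = get_tile(conn, 2, 1, 1)  # XYZ y=1 maps to TMS row 2 at zoom 2
```
&lt;br /&gt;
Serving tiles is then little more than this lookup behind a URL pattern, which is why no full GIS stack is required.&lt;br /&gt;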
&lt;br /&gt;
== The Great War: Image Interoperability to Facebook ==&lt;br /&gt;
&lt;br /&gt;
* Rob Sanderson, Los Alamos National Laboratory (azaroth42@gmail.com)&lt;br /&gt;
** (Code4Lib 2006: [http://www.code4lib.org/2006/sanderson Library Text Mining])&lt;br /&gt;
* Rob Warren, Carleton University&lt;br /&gt;
** No previous presentations&lt;br /&gt;
&lt;br /&gt;
Using a pipeline constructed from Linked Open Data and other interoperability specifications, it is possible to merge and re-use image and textual data from distributed library collections to build new, useful tools and applications.  Starting with the OAI-PMH interface to CONTENTdm, we will take you on a tour through the International Image Interoperability Framework and Shared Canvas, to a cross-institutional viewer, and on to image analysis for the purpose of building a historical Facebook by finding and tagging people in photographs.  The World War One collections are drawn from multiple institutions and merged by the machine learning code.&lt;br /&gt;
&lt;br /&gt;
The presentation will focus on the (open source) toolchain and the benefits of the use of standards throughout:  OAI-PMH to get the metadata, IIIF for interaction with the images, the Shared Canvas ontology for describing collections of digitized objects, Open Annotation for tagging things in the images and specialized ontologies that are specific to the contents.  The tools include standard RDF / OWL technologies, JSON-LD, ImageMagick and OpenCV for image analysis.&lt;br /&gt;
&lt;br /&gt;
== Visualizing Solr Search Results with D3.js for User-Friendly Navigation of Large Result Sets ==&lt;br /&gt;
&lt;br /&gt;
*Julia Bauder, Grinnell College Libraries (bauderj-at-grinnell-dot-edu)&lt;br /&gt;
*No previous presentations at national Code4Lib conferences&lt;br /&gt;
&lt;br /&gt;
As the corpus of articles, books, and other resources searched by discovery systems continues to get bigger, searchers are more and more frequently confronted with unmanageably large numbers of results. How can we help users make sense of 10,000 hits and find the ones they actually want? Facets help, but making sense of a gigantic sidebar of facets is not an easy task for users, either.&lt;br /&gt;
During this talk, I will explain how we will soon be using Solr 4’s pivot queries and hierarchical visualizations (e.g., treemaps) from D3.js to let patrons view and manipulate search results. We will be doing this with our VuFind 2.0 catalog, but this technique will work with any system running Solr 4. I will also talk about early student reaction to our tests of these visualization features.&lt;br /&gt;
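The glue between the two sides is small: Solr 4 pivot facets come back as nested field/value/count records, and D3 hierarchy layouts such as treemaps want a nested {name, children/value} structure. A simplified sketch of that transform (my own illustration with invented sample values, not Grinnell's actual code):&lt;br /&gt;
&lt;br /&gt;
```python
def pivot_to_d3(pivots):
    """Convert a Solr 4 pivot-facet list into the nested {name, children}
    structure that d3.js hierarchy layouts (e.g. treemaps) expect."""
    nodes = []
    for p in pivots:
        node = {"name": str(p["value"])}
        if p.get("pivot"):            # inner pivot level -> recurse
            node["children"] = pivot_to_d3(p["pivot"])
        else:                         # leaf -> treemaps size leaves by "value"
            node["value"] = p["count"]
        nodes.append(node)
    return nodes

# Shaped like facet_counts.facet_pivot["format,topic"] in a Solr response
# (the values themselves are invented).
sample = [
    {"field": "format", "value": "Book", "count": 7000, "pivot": [
        {"field": "topic", "value": "History", "count": 4000},
        {"field": "topic", "value": "Biology", "count": 3000},
    ]},
    {"field": "format", "value": "Journal", "count": 3000},
]
tree = {"name": "results", "children": pivot_to_d3(sample)}
```
&lt;br /&gt;
The resulting tree can be handed directly to a d3.js treemap, letting patrons drill from format into topic visually instead of scanning a facet sidebar.&lt;br /&gt;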
&lt;br /&gt;
== PeerLibrary – an open source, cloud-based collaborative library ==&lt;br /&gt;
&lt;br /&gt;
[https://github.com/peerlibrary/peerlibrary PeerLibrary is a new open source project] and a cloud service for collaborative reading, sharing, and storing of publications. Users can upload publications they want to read (currently in PDF format), read them in the browser in real time with others, highlight, annotate, and organize their own or a collaborative library. PeerLibrary provides a search engine over all uploaded open access publications. Additionally, it aims to collaboratively aggregate an open layer of knowledge on top of these publications through the public annotations and references users add to them. In this way publications would not just be available to read, but also accessible to the general public. Currently it is aimed at the scientific community and scientific publications.&lt;br /&gt;
&lt;br /&gt;
See [http://blog.peerlibrary.org/post/63458789185/screencast-previewing-the-peerlibrary-project screencast here].&lt;br /&gt;
&lt;br /&gt;
It is still in development; a beta launch is planned for the end of November.&lt;br /&gt;
&lt;br /&gt;
== Who was where when, or finding biographical articles on Wikipedia by place and time ==&lt;br /&gt;
&lt;br /&gt;
* [http://morton-owens.info Emily Morton-Owens], The Seattle Public Library (presenting on work from NYU)&lt;br /&gt;
* No previous c4l presentations&lt;br /&gt;
&lt;br /&gt;
It's easy to answer the question &amp;quot;What important people were in Paris in 1939?&amp;quot; But what about Virginia in the 1750s or Scandinavia in the 14th century? I created a tool that allows you to search for biographies in a generally applicable way, using a map interface. I would like to present updates to my thesis project, which combines a crawler written in Java that extracts information from Wikipedia articles, with a MongoDB data store and a frontend in Python.&lt;br /&gt;
&lt;br /&gt;
The input to the project is the free text of entire Wikipedia articles; this is important to allow us to pick up Benjamin Franklin not just in the single most obvious place, Philadelphia, but also in London, Paris, Boston, etc. I can talk about my experiments with disambiguating place names (approaches pioneered on newspaper articles were actually unhelpful on this type of text) and with setting up a processing queue that does not become mired in the biographies of every human who ever played soccer. I also want to revisit some of the implementation choices I made under my academic deadline and improve the accuracy and usability.&lt;br /&gt;
&lt;br /&gt;
What I hope to show is that I was able to develop a novel and useful reference tool automatically, using fairly simple heuristics that are a far cry from the hand-cataloging familiar to many librarians.&lt;br /&gt;
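The flavor of heuristic involved might look like the following sketch (entirely my own illustration, not the project's actual code): scan free text for gazetteer place names that appear near a four-digit year, yielding (place, year) candidates for a biography.&lt;br /&gt;
&lt;br /&gt;
```python
import re

# Toy gazetteer; the real project disambiguates place names against a
# far larger authority source, which is where the hard problems live.
GAZETTEER = {"Philadelphia", "London", "Paris", "Boston"}

def place_year_pairs(text, window=30):
    """Return sorted (place, year) pairs where a known place name appears
    within `window` characters of a four-digit year in the text."""
    pairs = set()
    for m in re.finditer(r"\b(1[0-9]{3}|20[0-9]{2})\b", text):
        year = int(m.group())
        nearby = text[max(0, m.start() - window):m.end() + window]
        for place in GAZETTEER:
            if place in nearby:
                pairs.add((place, year))
    return sorted(pairs)

bio = ("Franklin arrived in Philadelphia in 1723 and later spent "
       "many years in London, returning in 1775.")
pairs = place_year_pairs(bio)
```
&lt;br /&gt;
Even this crude proximity rule illustrates why newspaper-trained approaches can misfire on encyclopedic prose: one sentence often spans several places and decades.&lt;br /&gt;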
&lt;br /&gt;
You can try out [http://linserv1.cims.nyu.edu:48866/ the original version] (this server is inconveniently set to be updated/rebooted on 11/8--may be temporarily unavailable)&lt;br /&gt;
&lt;br /&gt;
== Good!, DRY, and Dynamic: Content Strategy for Libraries (Especially the Big Ones) ==&lt;br /&gt;
&lt;br /&gt;
*Michael Schofield, Nova Southeastern University Libraries, mschofield@nova.edu&lt;br /&gt;
*No previous code4lib presentations.&lt;br /&gt;
&lt;br /&gt;
The responsibilities of the #libweb are exploding [it’s a good thing] and it is no longer uncommon for libraries to manage or even home-grow multiple applications and sites. Often it is at this point that the web people begin to suffer from the absence of a content strategy when, say, business hours need to be updated sitewide a half-dozen times.&lt;br /&gt;
&lt;br /&gt;
We were already feeling this crunch when we decided to further complicate the Nova Southeastern University Libraries by splitting the main library website into two. The Alvin Sherman Library, Research, and Information Technology Center is a unique joint-use facility that serves not only the academic community but the public of Broward County - and marketing a hyperblend of content through one portal just wasn't cutting it. With a web team of two, we knew that managing all this rehashed, disparate content was totally unsustainable.&lt;br /&gt;
&lt;br /&gt;
I want to share in this talk how I went about making our library content DRY (“don’t repeat yourself”): input content in one place--blurbs, policies, featured events, featured databases, book reviews, business hours, and so on--and syndicate it everywhere, even, sometimes, dynamically targeting that content for specific audiences or contexts. It is a presentation that is a little about workflow, a little more about browser and context detection, a tangent about content-modeling the CMS, and a lot about APIs, syndication, and performance.&lt;br /&gt;
&lt;br /&gt;
== No code, no root, no problem? Adventures in SaaS and library discovery ==&lt;br /&gt;
&lt;br /&gt;
*[mailto:erwhite@vcu.edu Erin White, VCU]&lt;br /&gt;
*No previous C4L presentations&lt;br /&gt;
&lt;br /&gt;
In 2012 VCU was an eager early adopter of Ex Libris' cloud service Alma as an ILS, ERM, link resolver, and single-stop, de-silo'd public-facing discovery tool. This has been a disruptive change that has shifted our systems staff's day-to-day work, relationships with others in the library, and relationships with vendors.&lt;br /&gt;
&lt;br /&gt;
I'll share some of our experiences and takeaways from implementing and maintaining a cloud service:&lt;br /&gt;
* Seeking disruption and finding it&lt;br /&gt;
* Changing expectations of service and the reality of unplanned downtime&lt;br /&gt;
* Communication and problem resolution with non-IT library staff&lt;br /&gt;
* Working with a vendor that uses agile development methodology&lt;br /&gt;
* Benefits and pitfalls of creating customizations and code workarounds&lt;br /&gt;
* Changes in library IT/coders' roles with SaaS&lt;br /&gt;
&lt;br /&gt;
...as well as thoughts on the philosophy of library discovery vs real-life experiences in moving to a single-search model.&lt;br /&gt;
&lt;br /&gt;
== Building for others (and ourselves):  the Avalon Media System ==&lt;br /&gt;
* [mailto:michael.klein@northwestern.edu Michael B Klein], Senior Software Developer, Northwestern University &lt;br /&gt;
** [http://code4lib.org/conference/2010/metz_klein Public Datasets in the Cloud] (code4lib 2010)&lt;br /&gt;
** [http://code4lib.org/conference/2013/klein-rogers The Avalon Media System: A Next Generation Hydra Head For Audio and Video Delivery] (code4lib 2013)&lt;br /&gt;
* [mailto:j-rudder@northwestern.edu Julie Rudder], Digital Initiatives Project Manager, Northwestern University&lt;br /&gt;
** no previous code4lib presentations&lt;br /&gt;
&lt;br /&gt;
[http://www.avalonmediasystem.org/ Avalon Media System] is a collaborative effort between development teams at Northwestern and Indiana Universities. Our goal is to produce an open source media management platform that works well for us, but is also widely adopted and contributed to by other institutions. We believe that building a strong user and contributor community is vital to the success and longevity of the project, and have developed the system with this goal in mind. We will share lessons learned, pains and successes we’ve had releasing two versions of the application since last year.  &lt;br /&gt;
&lt;br /&gt;
Our presentation will cover our experiences:&lt;br /&gt;
* providing flexible, admin-friendly distribution and installation options&lt;br /&gt;
* building with abstraction, customization and local integrations in mind&lt;br /&gt;
* prioritizing features (user stories)&lt;br /&gt;
* attracting code contributions from other institutions&lt;br /&gt;
* gathering community feedback &lt;br /&gt;
* creating a product rather than a bag of parts&lt;br /&gt;
&lt;br /&gt;
== How to check your data to provide a great data product? Data quality as a key product feature at Europeana ==&lt;br /&gt;
&lt;br /&gt;
*[mailto:Peter.Kiraly@kb.nl Péter Király] portal backend developer, Europeana&lt;br /&gt;
*No previous C4L presentations&lt;br /&gt;
&lt;br /&gt;
[http://Europeana.eu/ Europeana.eu] - Europe's digital library, archive and museum - aggregates more than 30 million metadata records from more than 2200 institutions.  The records come from libraries, archives, museums and every other kind of cultural institution, from very different systems and metadata schemas, and are typically transformed several times before they are ingested into the Europeana data repository.  Europeana builds a consolidated database from these records, creating reliable and consistent services for end users (a search portal, search widget, mobile apps, thematic sites etc.) and an API, which supports our strategic goal of data for reuse in education, creative industries, and the cultural sector.  A reliable &amp;quot;data product&amp;quot; is thus at the core of our own software products, as well as those of our API partners.&lt;br /&gt;
&lt;br /&gt;
Much effort is needed to smooth out local differences in the metadata curation practice of our data providers. We need a solid framework to measure the consistency of our data and provide feedback to decision-makers inside and outside the organisation. We can also use this metrics framework to ask content providers to improve their own metadata. Of course, a data-quality-driven approach requires that we also improve the data transformation steps of the Europeana ingestion process itself. Data quality issues heavily define what new features we are able to create in our user interfaces and API, and might actually affect the design and implementation of our underlying data structure, the Europeana Data Model.&lt;br /&gt;
&lt;br /&gt;
In the presentation I will briefly describe the Europeana metadata ingestion process, show the data quality metrics and measuring techniques (using the Europeana API, Solr, and MongoDB queries) along with some typical problems (both trivial and difficult), and finally present the feedback mechanism we propose to deploy.&lt;br /&gt;
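As a rough illustration of one such metric (a minimal sketch of my own, not Europeana's actual framework): field completeness can be measured by counting how often each mapped field is actually populated across a batch of records, which is exactly the kind of number you can feed back to data providers.&lt;br /&gt;
&lt;br /&gt;
```python
from collections import Counter

def completeness(records, fields):
    """Return the fill rate (0.0-1.0) of each field across a batch of
    metadata records; empty strings and missing keys count as unfilled."""
    filled = Counter()
    for rec in records:
        for f in fields:
            if rec.get(f):
                filled[f] += 1
    return {f: filled[f] / len(records) for f in fields}

# Invented sample records, loosely shaped like simplified EDM fields.
batch = [
    {"dc:title": "Carte de France", "dc:creator": "Cassini", "edm:rights": ""},
    {"dc:title": "Stadsplan Amsterdam", "dc:creator": "", "edm:rights": "CC BY"},
    {"dc:title": "Mappa di Roma", "dc:creator": "Anon.", "edm:rights": "CC BY"},
    {"dc:title": ""},
]
scores = completeness(batch, ["dc:title", "dc:creator", "edm:rights"])
```
&lt;br /&gt;
At Europeana's scale the same counting would of course be pushed down into Solr or MongoDB aggregation queries rather than done record by record in application code.&lt;br /&gt;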
&lt;br /&gt;
Keywords: Europeana, data quality, EDM, API, Apache Solr, MongoDB, #opendata, #openglam&lt;br /&gt;
&lt;br /&gt;
== Teach your Fedora to Fly: scaling out your digital repository ==&lt;br /&gt;
&lt;br /&gt;
*[mailto:acoburn@amherst.edu Aaron Coburn], Software Developer, Amherst College&lt;br /&gt;
*No previous C4L presentations&lt;br /&gt;
&lt;br /&gt;
Fedora is a great repository system for managing large collections of digital objects, but what happens when a popular food magazine begins directing a large number of readers to a manuscript showing Emily Dickinson’s own recipe for doughnuts? While Fedora excels in its support of XML-based metadata, it doesn’t always perform well under a high volume of traffic. Nor is it especially tolerant of network or hardware failures.&lt;br /&gt;
&lt;br /&gt;
This presentation will show how we are making heavy use of a Fedora repository while at the same time insulating it almost entirely from any web traffic. Starting with a distributed web front-end built with Node.js, and caching most of the user-accessible content from Fedora in an elastic, fault-tolerant Riak (NoSQL) cluster, we have eliminated nearly all single points of failure in the system. It also means that our production system is spread across twelve separate servers, where asynchrony and Map-Reduce are king. And aside from being blazing fast, it is also entirely Hydra-compliant.&lt;br /&gt;
&lt;br /&gt;
Furthermore, we will attempt to answer the question: if Fedora crashes and the visitors to your site don’t notice, did it really fail?&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Using Open Source Software and Freeware to Preserve and Deliver Digital Videos ==&lt;br /&gt;
* [mailto:wfang@kinoy.rutgers.edu Wei Fang], Head of Digital Services, Rutgers University Law Library&lt;br /&gt;
* Jiebei Luo, Digital Projects Initiative Intern, Rutgers University&lt;br /&gt;
*No previous C4L presentations&lt;br /&gt;
&lt;br /&gt;
The Rutgers University Law Library is the official digital repository of the New Jersey Supreme Court oral arguments since 2002. This large video collection contains approximately 3,000 videos with a total of 400 GB or 6,000 viewing hours. With the expansion of this collection, the existing database and the static website could not efficiently support the library’s daily operations and meet its patrons’ search needs. &lt;br /&gt;
By utilizing open source software and freeware such as Ubuntu, FFmpeg, Solr, and Drupal, the library was able to develop a complete solution for re-encoding videos, embedding subtitles, incorporating the Solr search engine and a content management system to support full-text subtitle search, automatically updating video metadata records in the library catalog system, and eventually providing a plugin-free HTML5-based Web interface for patrons to view the videos online.&lt;br /&gt;
The aspects below will be presented in detail at the conference:&lt;br /&gt;
* Video codec comparison&lt;br /&gt;
* Server-end batch video encoding/re-encoding&lt;br /&gt;
* The HTML5 video tag and embedding subtitles&lt;br /&gt;
* Incorporating the Solr search engine and the Drupal content management tool with the database to retrieve videos by full-text search, especially in subtitle files&lt;br /&gt;
* Incorporating video metadata with the library catalog system&lt;br /&gt;
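The batch encoding step can be sketched roughly as follows (file names and encoding choices are illustrative assumptions, not the actual Rutgers pipeline): a script builds an FFmpeg command line per video, producing HTML5-friendly H.264/AAC MP4 output with the SRT file muxed in as a soft subtitle track.&lt;br /&gt;
&lt;br /&gt;
```python
from pathlib import Path

def ffmpeg_cmd(video, subtitles, out_dir):
    """Build an FFmpeg argv list that re-encodes a source video to
    H.264/AAC in MP4 and muxes an SRT file in as a soft subtitle track
    (mov_text is the subtitle codec MP4 containers accept)."""
    out = Path(out_dir) / (Path(video).stem + ".mp4")
    return [
        "ffmpeg", "-i", str(video), "-i", str(subtitles),
        # take video/audio from the first input, subtitles from the second
        "-map", "0:v", "-map", "0:a", "-map", "1:s",
        "-c:v", "libx264", "-c:a", "aac", "-c:s", "mov_text",
        str(out),
    ]

cmd = ffmpeg_cmd("oral_argument_2002_001.mpg",
                 "oral_argument_2002_001.srt", "/var/www/videos")
# A batch run would loop over the collection, handing each list to
# subprocess.run() on the server.
```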
&lt;br /&gt;
== Shared Vision, Shared Resources: the Curate Institutional Repository ==&lt;br /&gt;
* Dan Brubaker Horst, University of Notre Dame &lt;br /&gt;
** [http://code4lib.org/conference/2011/JohnsonHorst A Community-Based Approach to Developing a Digital Exhibit at Notre Dame Using the Hydra Framework] &lt;br /&gt;
* Julie Rudder, Northwestern University&lt;br /&gt;
** no previous presentations&lt;br /&gt;
&lt;br /&gt;
Curate is being collaboratively developed by several institutions in the Hydra community that share the need and vision for a Fedora-backed institutional repository. The first release of Curate was a collaboration between Notre Dame and Northwestern University, along with Digital Curation Experts (DCE), a vendor hired jointly by our two institutions. Built on the Hydra engine Sufia, the first version of Curate, released in October 2013, provides a basic self-deposit system with support for various content types, collection building, DOI minting, and user profile creation. From the very beginning we have built Curate to be easy to theme and extend, in order to ease installation and use by other institutions.&lt;br /&gt;
&lt;br /&gt;
In December 2013, additional partners will join the project, including Indiana University, the University of Cincinnati, and the University of Virginia. Each institution contributes resources to the project in order to further our common goal: to create a product that fits our needs and has a sustainable future. Together we will tackle additional content types (like complex data, software, and media), administrative collections, and more. &lt;br /&gt;
&lt;br /&gt;
Our presentation will include:&lt;br /&gt;
* a brief demonstration of Curate and technical overview&lt;br /&gt;
* why and how we work together&lt;br /&gt;
* why build Curate&lt;br /&gt;
* the future of the project&lt;br /&gt;
&lt;br /&gt;
== Solr, Cloud and Blacklight ==&lt;br /&gt;
* David Jiao, Library Information Systems, Indiana University at Bloomington, djiao@indiana.edu&lt;br /&gt;
** No previous code4lib presentations&lt;br /&gt;
&lt;br /&gt;
SolrCloud refers to the distributed capabilities in Solr 4. It is designed to offer a highly available, fault-tolerant environment by organizing data into multiple shards that can be hosted on multiple machines with replicas, and by providing centralized cluster configuration and management. &lt;br /&gt;
&lt;br /&gt;
At Indiana University, we are upgrading the Solr backend of our recently released Blacklight-based OPAC from Solr 1.4 to Solr 4, and we are also building a private cloud of Solr 4 servers. In this talk, I will present features of SolrCloud, including distributed requests, fault tolerance, near-real-time indexing/searching, and configuration management with ZooKeeper, along with our experiences using these features to provide better performance and architecture for our OPAC, which serves over 7 million bibliographic records to over 100,000 students and faculty members. I will also discuss practical lessons learned from our SolrCloud setup/upgrade and the integration of SolrCloud with our customized Blacklight system.  &lt;br /&gt;
&lt;br /&gt;
== Leveraging XSDs for Reflective, Live Dataset Support in Institutional Repositories ==&lt;br /&gt;
* [mailto:msulliva@ufl.edu Mark Sullivan], Library Information Technology, University of Florida&lt;br /&gt;
** No previous code4lib presentations&lt;br /&gt;
&lt;br /&gt;
The University of Florida Libraries are currently adding support for active datasets to our METS-based institutional repository software.  This ongoing project enables the library to be a partner in current, or long-running, data-driven projects around the university by providing tangible short-term and long-term benefits to those projects.  The system assists project teams by storing and providing access to their data, while supporting online filtering and sorting of the data, custom queries, and adding and editing of the data by authorized users.  We are also exploring simple data visualizations to allow users to perform basic graphical and geographic queries.  Several different schemas were explored, including DDI and EML, but ultimately the streamlined approach of using XSDs with some custom attributes was chosen, with all other data residing in the METS file portions.  Currently the system is being developed using XSDs describing XML datasets, but this model should scale easily to support SQL datasets or large datasets backed by Hadoop or iRODS.&lt;br /&gt;
&lt;br /&gt;
This work is being integrated in the open source [http://sobek.ufl.edu SobekCM Digital Content Management System], which is built on a pair-tree structure of METS resources with [http://ufdc.ufl.edu/design/webcontent/sobekcm/SobekCM_Resource_Object.pdf rich metadata support] including DC, MODS, MARC, VRACore, DarwinCore, IEEE LOM, GML/KML, schema.org microdata, and many other standard schemas.  The system has emphasized online, distributed creation and maintenance of resources, including geo-placement and geographic searching of resources, building structure maps (tables of contents) visually online, and a broad suite of curator tools.  &lt;br /&gt;
&lt;br /&gt;
This work is presented as a model that could be implemented in other systems as well.  We will demonstrate current support and discuss our roadmap toward complete support.&lt;br /&gt;
&lt;br /&gt;
== Dead-simple Video Content Management: Let Your Filesystem Do The Work ==&lt;br /&gt;
&lt;br /&gt;
* Andreas Orphanides, NCSU Libraries (akorphan (at) ncsu.edu)&lt;br /&gt;
** (never led or soloed a C4L presentation)&lt;br /&gt;
&lt;br /&gt;
Content management is hard. To keep all the moving parts in order, and to maintain a layer of separation between the system and content creators (who are frequently not technical experts), we typically turn to content management systems like Drupal. But even Drupal and its kin require significant overhead and present a not inconsiderable learning curve for nontechnical users.&lt;br /&gt;
&lt;br /&gt;
In some contexts it's possible -- and desirable -- to manage content in a more streamlined, lightweight way, with a minimum of fuss and technical infrastructure. In this presentation I'll share a simple MVC-like architecture for managing video content for playback on the web, which uses a combination of Apache's mod_rewrite module and your server's filesystem structure to provide an automated approach to video content management that's easy to implement and provides a low barrier to content updates: friendly to content creators and technology implementors alike. Even better, the basic method is HTML5-friendly, and can be integrated into your favorite content management system if you've got permissions for creating templates.&lt;br /&gt;
&lt;br /&gt;
In the presentation I'll go into detail about the system structure and logic required to implement this approach. I'll detail the benefits and limitations of the system, as well as the challenges I encountered in developing its implementation. Audience members should come away with sufficient background to implement a similar system on their own servers. Implementation documentation and genericized code will also be shared, as available.&lt;br /&gt;
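The core trick can be sketched like this (my own simplified reconstruction, not the presenter's code): a mod_rewrite rule funnels every request to one controller, which maps the URL path onto the directory tree, so dropping a video file into a folder "publishes" it with no database involved.&lt;br /&gt;
&lt;br /&gt;
```python
# Toy stand-in for the server's document root: directories are dicts,
# video files are string values (their web-accessible media paths).
DOCROOT = {
    "lectures": {
        "intro.mp4": "/media/lectures/intro.mp4",
        "advanced.mp4": "/media/lectures/advanced.mp4",
    },
    "tutorials": {
        "search.mp4": "/media/tutorials/search.mp4",
    },
}

def route(path, tree=DOCROOT):
    """Resolve a request path the way the mod_rewrite + controller combo
    does: a directory yields a generated menu of its contents, a file
    yields a player page for that video, anything else a 404."""
    node = tree
    for part in [p for p in path.split("/") if p]:
        if not isinstance(node, dict) or part not in node:
            return ("404", None)
        node = node[part]
    if isinstance(node, dict):
        return ("menu", sorted(node))   # list the directory as a menu
    return ("player", node)             # render an HTML5 <video> page

kind, payload = route("/lectures/intro.mp4")
```
&lt;br /&gt;
Content creators never touch the controller; they only add or rename files, and the menus and player pages follow automatically.&lt;br /&gt;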
&lt;br /&gt;
== Managing Discovery ==&lt;br /&gt;
&lt;br /&gt;
* Andrew Pasterfield, Senior Programmer/Systems Analyst, University of Calgary Library, ampaster@ucalgary.ca&lt;br /&gt;
**No previous code4lib presentations&lt;br /&gt;
In fall 2012 the University of Calgary Library launched a new home page that incorporated a Summon-powered&lt;br /&gt;
single search box with a customized “bento box” results display. Search at the U of C now combines a range of&lt;br /&gt;
metadata sources for discovery, with customized mapping of a database recommender and LibGuide into a unified&lt;br /&gt;
display.  Further customizations include a non-Google-Analytics, non-proxy method for logging clicks.&lt;br /&gt;
&lt;br /&gt;
This presentation will discuss the technical details of bringing the various systems together into one display interface to increase discovery at the U of C Library.&lt;br /&gt;
&lt;br /&gt;
http://library.ucalgary.ca&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Sorting it out: a piece of the User-Centered Design Process ==&lt;br /&gt;
&lt;br /&gt;
* Cindy Beggs, [http://www.akendi.com/aboutus/management/ Akendi], cindy@akendi.com&lt;br /&gt;
&lt;br /&gt;
This talk is about how to apply a user-centered design (UCD) methodology to the process of creating an information architecture.  Participants will learn the fundamentals of UCD and how card sorting and reverse card sorting enable us to isolate the content we present on screen from the layouts and visuals of those screens.  We will talk about ways to identify who will be using the information architecture you are creating and why we need to know how it will be used.&lt;br /&gt;
 &lt;br /&gt;
What will attendees take away from this talk?&lt;br /&gt;
The criticality of involving “real” end users in the process of creating an information architecture.  The basics of following a user-centered design process in the creation of best-in-class, content-rich digital products.&lt;br /&gt;
&lt;br /&gt;
Cindy Beggs has been working in the “information industry” for over 25 years.  A librarian by profession, she has spent decades helping users figure out how to find their way through large bodies of content.  Her insights into how people seek information, her empathy for those who find it a challenge and her practical experience helping organizations figure out how to best structure their content contribute to her success as an information architect with both clients and trainees.  (http://www.akendi.com/aboutus/management/)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Implementation of ArchivesSpace at the University of Richmond==&lt;br /&gt;
&lt;br /&gt;
*Birong Ho, bho@richmond.edu&lt;br /&gt;
&lt;br /&gt;
The University of Richmond implemented ArchivesSpace as its archival collection management system in fall 2013. As a charter member, with the Head of Special Collections serving on the Board, implementing this open source software became a priority. &lt;br /&gt;
&lt;br /&gt;
Several aspects of the implementation will be addressed in the talk, among them: collections and repository setup, the storage layer (including data formats), system resource requirements, technical architecture, customization, scaling, and integration with other systems in the library.&lt;br /&gt;
&lt;br /&gt;
Customization, scaling, and integration with other campus systems such as Archon and eXist have been particular concerns, and these will be the focus of the talk.&lt;br /&gt;
&lt;br /&gt;
==Easy Wins for Modern Web Technologies in Libraries==&lt;br /&gt;
&lt;br /&gt;
*[mailto:trey.terrell@oregonstate.edu Trey Terrell], Analyst Programmer, Oregon State University&lt;br /&gt;
** No previous Code4Lib presentations &lt;br /&gt;
&lt;br /&gt;
Oregon State University is currently implementing an updated version of its room reservation system. In its development we've come across and implemented a variety of &amp;quot;easy wins&amp;quot; to make it more responsive, easier to maintain, less expensive to run, and just cooler to experience. While our particular system was in Ruby on Rails, this talk will address general methods and example utilities which can be used no matter your stack.&lt;br /&gt;
&lt;br /&gt;
I'll be talking about things like cache management, reverse proxies, publish/subscribe servers, WebSockets, responsive design, asynchronous processing, and keeping complicated stacks up and running with minimal effort.&lt;br /&gt;
&lt;br /&gt;
==Implementing Islandora at a Small Institution==&lt;br /&gt;
&lt;br /&gt;
*Megan Kudzia, Albion College Library&lt;br /&gt;
*Eddie Bachle, Albion College IT&lt;br /&gt;
**No previous Code4Lib presentations&lt;br /&gt;
&lt;br /&gt;
Albion College (and particularly the Library/Archives and Special Collections) has a variety of needs which could be met by an open-source Institutional Repository system. Several months and lots of conversations later, we’re continuing to troubleshoot our way through Islandora. We’d like to talk about what has worked for us, where our frustrations have been, whether it’s even possible to install and develop a system like this at a small institution, and where the process has stalled. &lt;br /&gt;
&lt;br /&gt;
As of right now, we do have a semi-working installation. We’re not sure when it will be ready for our end users, but we'll talk about our development process and evaluate our progress.&lt;br /&gt;
''Contributions also by Nicole Smeltekop, Albion College Archives &amp;amp; Special Collections''&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== PhantomJS+Selenium: Easy Automated Testing of AJAX-y UIs ==&lt;br /&gt;
&lt;br /&gt;
* Martin Haye, California Digital Library, martin.haye@ucop.edu&lt;br /&gt;
** Previous Code4Lib Presentation: [http://code4lib.org/conference/2012/collett Beyond code: Versioning data with Git and Mercurial] at Code4Lib 2012 (Martin co-presenting with Stephanie Collett)&lt;br /&gt;
* Mark Redar, California Digital Library, mark.redar@ucop.edu&lt;br /&gt;
&lt;br /&gt;
Web user interfaces demand ever more dynamism and polish, combining HTML5, AJAX, lots of CSS and jQuery (or its ilk) to create autocomplete drop-downs, intelligent buttons, stylish alert dialogs, etc. How can you make automated tests for these highly complex and interactive UIs?&lt;br /&gt;
&lt;br /&gt;
Part of the answer is PhantomJS. It’s a modern WebKit browser that’s “headless” (meaning it has no display) and can be driven from command-line Selenium unit tests. PhantomJS is dead simple to install, and its blazing speed and server-friendliness make continuous integration testing easy. You can write UI unit tests in {language-of-your-choice} and run them not just in PhantomJS but in Firefox and Chrome, plus a zillion browser/OS combinations at places like SauceLabs, TestingBot and BrowserStack.&lt;br /&gt;
&lt;br /&gt;
In this double-team live code talk, we’ll explain all that while we demonstrate the following in real time:&lt;br /&gt;
&lt;br /&gt;
* Start with nothing.&lt;br /&gt;
* Install Selenium bindings for Ruby and Python.&lt;br /&gt;
* In each language write a small test of an AJAX-y UI.&lt;br /&gt;
* Run the tests in Firefox, and fix bugs (in the test or UI) as needed.&lt;br /&gt;
* Install PhantomJS.&lt;br /&gt;
* Show the same tests running headless as part of a server-friendly test suite. &lt;br /&gt;
* (Wifi permitting) Show the same tests running on a couple different browser/OS combinations on the server cloud at SauceLabs – talking through a tunnel to the local firewalled application.&lt;br /&gt;
&lt;br /&gt;
==New Technologies, Collaboration, &amp;amp; Entrepreneurship in Libraries:  Harnessing Their Power to Help Your Library==&lt;br /&gt;
&lt;br /&gt;
* Stephanie Walker – swalker@brooklyn.cuny.edu&lt;br /&gt;
* Howard Spivak – howards@brooklyn.cuny.edu&lt;br /&gt;
* Alex - Alex@brooklyn.cuny.edu&lt;br /&gt;
&lt;br /&gt;
Academic libraries are caught in budget squeezes and often struggle to find ways to communicate value to senior administration and others.  At Brooklyn College Library, we have taken an unusual, possibly unique, approach to these issues.  Our technology staff have long worked directly with librarians to develop products that meet library, faculty, and student needs, and we have shared many of our products with colleagues, including an award-winning website, e-resource, and content management system we call 4MyLibrary, which we shared for free with 8 CUNY colleges, and also an easy-to-use book scanner, which has proven overwhelmingly popular with students, faculty, other librarians, and numerous campus offices.  Recently, motivated by budget cuts, we decided that what worked for us might interest other libraries, and, working with our Office of Technology Commercialization, we started selling two products:  our book scanners (at half the price of commercial alternatives), and a hosting service, whereby we host and support 4MyLibrary for libraries with minimal technology staff.  Both succeeded, and yielded major benefits:  a steady revenue stream and the admiration and serious goodwill of our senior administration and others.   However, this presentation is neither a basic how-to, nor an advertisement.  With this presentation, we hope to spur a conversation about broader collaboration, especially regarding new technologies, among libraries.  We all have some level of technical expertise, most of us are struggling with rising prices and tight budgets, and many of us are unhappy with various technology products we use, from scanners to our ILS.  We believe – and can demonstrate – that with collaboration, we can solve many of our problems, and provide better services to boot. &lt;br /&gt;
&lt;br /&gt;
== Identifiers, Data, and Norse Gods ==&lt;br /&gt;
&lt;br /&gt;
* Ryan Scherle, Dryad Digital Repository, ryan@datadryad.org&lt;br /&gt;
&lt;br /&gt;
ORCID and DataCite provide stable identifiers for researchers and data, respectively. Each system does a fine job of providing value to its users. But wouldn't it be great if they could link their systems to create something much more powerful? Perhaps even as powerful as a god?&lt;br /&gt;
&lt;br /&gt;
Enter [http://odin-project.eu/ ODIN], The ORCID and DataCite Interoperability Network. ODIN is a two-year project to unleash the power of persistent identifiers for researchers and the research they create. This talk will present recent work from the ODIN project, including several tools that can be used to unleash the godlike power of identifiers at your institution.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Armed Bandits in the Digital Library ==&lt;br /&gt;
&lt;br /&gt;
* Roman Chyla, [http://labs.adsabs.harvard.edu/adsabs/ Astrophysics Data System], rchyla@cfa.harvard.edu&lt;br /&gt;
** Previous Code4Lib: [http://code4lib.org/conference/2013/chyla Citation search in SOLR and second-order operators]&lt;br /&gt;
&lt;br /&gt;
Many of us use the excellent Lucene library (or the SOLR search server) to provide search functionality. These systems contain a number of features for adjusting the relevancy ranking of hits, but we may not know how to use them. In this presentation, I'll cover the available options - e.g. what the default ranking model is (the vector space model), what the alternatives are (e.g. BM25), and what other options we have to tweak and adjust the ranking of hits (e.g. boost factors and functions). But even if we know how to deploy these adjustments and tweaks, we are still left in the dark: we do not know whether the change we've just rolled out had a statistically significant effect, or whether it was just a waste of time and resources. A/B testing is one option, but there may be a much better one - the so-called &amp;quot;Multi-Armed Bandits Approach&amp;quot;. In this talk I'd like to show how we are experimenting with this strategy to tune the [http://labs.adsabs.harvard.edu/adsabs/ ADS search engine].&lt;br /&gt;
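The multi-armed bandit idea can be pictured as an epsilon-greedy loop over competing ranking configurations: mostly serve the configuration with the best observed click-through rate, but occasionally explore the others. The sketch below is a hypothetical illustration only (the arm names and click probabilities are invented, and this is not the ADS implementation):

```python
import random

def epsilon_greedy(arms, trials, epsilon=0.1, seed=42):
    """Allocate trials among ranking configurations ('arms').

    arms maps a config name to its true click probability -- unknown
    to the algorithm, used here only to simulate user feedback.
    """
    rng = random.Random(seed)
    pulls = {name: 0 for name in arms}
    wins = {name: 0 for name in arms}
    for _ in range(trials):
        if rng.random() < epsilon:           # explore: pick a random arm
            name = rng.choice(list(arms))
        else:                                # exploit: best observed rate so far
            name = max(arms, key=lambda a: wins[a] / pulls[a] if pulls[a] else 0.0)
        pulls[name] += 1
        if rng.random() < arms[name]:        # simulated user click
            wins[name] += 1
    return pulls

# Two hypothetical ranking configs: plain BM25 vs. BM25 with a boost function.
counts = epsilon_greedy({'bm25': 0.05, 'bm25+boost': 0.12}, trials=5000)
```

Unlike a fixed 50/50 A/B test, the bandit shifts most traffic to the better configuration while the experiment is still running, so less of the user population sees the worse ranking.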
&lt;br /&gt;
[[:Category:Code4Lib2014]]&lt;/div&gt;</summary>
		<author><name>Rchyla</name></author>	</entry>

	<entry>
		<id>https://wiki.code4lib.org/index.php?title=2013_talks_proposals&amp;diff=28002</id>
		<title>2013 talks proposals</title>
		<link rel="alternate" type="text/html" href="https://wiki.code4lib.org/index.php?title=2013_talks_proposals&amp;diff=28002"/>
				<updated>2012-11-02T19:53:39Z</updated>
		
		<summary type="html">&lt;p&gt;Rchyla: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;'''Deadline has been extended by request due to the hurricane/storm.'''&lt;br /&gt;
&lt;br /&gt;
Deadline for talk submission is ''Friday, November 9'' at 11:59pm ET. We ask that no changes be made after this point, so that every voter reads the same thing. You can update your description again after voting closes.&lt;br /&gt;
&lt;br /&gt;
Prepared talks are 20 minutes (including setup and questions), and focus on one or more of the following areas:&lt;br /&gt;
* tools (some cool new software, software library or integration platform)&lt;br /&gt;
* specs (how to get the most out of some protocols, or proposals for new ones)&lt;br /&gt;
* challenges (one or more big problems we should collectively address)&lt;br /&gt;
&lt;br /&gt;
The community will vote on proposals using the criteria of:&lt;br /&gt;
* usefulness&lt;br /&gt;
* newness&lt;br /&gt;
* geekiness&lt;br /&gt;
* uniqueness&lt;br /&gt;
* awesomeness&lt;br /&gt;
&lt;br /&gt;
Please follow the formatting guidelines:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
== Talk Title ==&lt;br /&gt;
 &lt;br /&gt;
* Speaker's name, affiliation, and email address&lt;br /&gt;
* Second speaker's name, affiliation, email address, if applicable&lt;br /&gt;
&lt;br /&gt;
Abstract of no more than 500 words.&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Modernizing VuFind with Zend Framework 2 ==&lt;br /&gt;
&lt;br /&gt;
* Demian Katz, Villanova University, demian DOT katz AT villanova DOT edu&lt;br /&gt;
&lt;br /&gt;
When setting goals for a new major release of VuFind, use of an existing web framework was an important decision to encourage standardization and avoid reinvention of the wheel.  Zend Framework 2 was selected as providing the best balance between the cutting-edge (ZF2 was released in 2012) and stability (ZF1 has a long history and many adopters).  This talk will examine some of the architecture and features of the new framework and discuss how it has been used to improve the VuFind project.&lt;br /&gt;
&lt;br /&gt;
== Did You Really Say That Out Loud?  Tools and Techniques for Safe Public WiFi Computing  ==&lt;br /&gt;
 &lt;br /&gt;
* [[User:DataGazetteer|Peter Murray]], LYRASIS, Peter.Murray@lyrasis.org&lt;br /&gt;
&lt;br /&gt;
Public WiFi networks, even those that have passwords, are nothing more than an old-time [https://en.wikipedia.org/wiki/Party_line_(telephony) party line]: whatever you say can be easily heard by anyone nearby.  &lt;br /&gt;
Remember [https://en.wikipedia.org/wiki/Firesheep Firesheep]?  &lt;br /&gt;
It was an extension to Firefox that demonstrated how easy it was to snag session cookies and impersonate someone else.&lt;br /&gt;
So what are you sending out over the airwaves, and what techniques are available to prevent eavesdropping?&lt;br /&gt;
This talk will demonstrate tools and techniques for desktop and mobile operating systems that you should be using right now -- right here at Code4Lib -- to protect your data and your network activity.&lt;br /&gt;
&lt;br /&gt;
== Drupal 8 Preview — Symfony and Twig ==&lt;br /&gt;
 &lt;br /&gt;
* [[User:Highermath|Cary Gordon]], The Cherry Hill Company, cgordon@chillco.com&lt;br /&gt;
&lt;br /&gt;
Drupal is a great platform for building web applications. Last year, the core developers decided to adopt the Symfony PHP framework, because it would lay the groundwork for the modernization (and de-PHP4ification) of the Drupal codebase. As I write this, the Symfony ClassLoader and HttpFoundation libraries are committed to Drupal core, with more elements likely before Drupal 8 code freeze.&lt;br /&gt;
&lt;br /&gt;
It seems almost certain that the Twig templating engine will supplant PHPtemplate as the core Drupal template engine. Twig is a powerful, secure theme building tool that removes PHP from the templating system, the result being a very concise and powerful theme layer.&lt;br /&gt;
&lt;br /&gt;
Symfony and Twig have a common creator, Fabien Potencier, whose overall goal is to rid the world of the excesses of PHP 4.&lt;br /&gt;
&lt;br /&gt;
== Neat! But How Do We Do It? - The Real-world Problem of Digitizing Complex Corporate Digital Objects ==&lt;br /&gt;
&lt;br /&gt;
* Matthew Mariner, University of Colorado Denver, Auraria Library, matthew.mariner@ucdenver.edu&lt;br /&gt;
&lt;br /&gt;
Isn't it neat when you discover that you are the steward of dozens of Sanborn Fire Insurance Maps, hundreds of issues of a city directory, and thousands of photographs of persons in either aforementioned medium? And it's even cooler when you decide, &amp;quot;Let's digitize these together and make them one big awesome project to support public urban history&amp;quot;?  Unfortunately it's a far more difficult process than one imagines at inception and, sadly, doesn't always come to fruition.  My goal here is to discuss the technological (and philosophical) problems librarians and archivists face when trying to create ultra-rich complex corporate digital projects, or, rather, projects consisting of at least three facets interrelated by theme.  I intend to address these problems by suggesting management solutions, web workarounds, and, perhaps, a philosophy that might help in determining whether to even move forward or not.  Expect a few case studies of &amp;quot;grand ideas crushed by technological limitations&amp;quot; and &amp;quot;projects on the right track&amp;quot; to follow.   &lt;br /&gt;
 &lt;br /&gt;
== ResCarta Tools building a standard format for audio archiving, discovery and display ==&lt;br /&gt;
&lt;br /&gt;
* [[User:sarney|John Sarnowski]], The ResCarta Foundation, john.sarnowski@rescarta.org&lt;br /&gt;
&lt;br /&gt;
The free ResCarta Toolkit has been used by libraries and archives around the world to host city directories, newspapers, and historic photographs and by aerospace companies to search and find millions of engineering documents.  Now the ResCarta team has released audio additions to the toolkit. &lt;br /&gt;
&lt;br /&gt;
Create full-text-searchable oral histories, news stories, and interviews, or build an archive of lectures, all done to Library of Congress standards.  The included transcription editor allows for accurate correction of the data conversion tool’s output.  Build true archives of text, photos, and audio.  A single audio file carries the embedded Axml metadata, transcription, and word location information, and it checks out in FADGI's BWF MetaEdit.&lt;br /&gt;
&lt;br /&gt;
ResCarta-Web presents your audio to IE, Chrome, Firefox, Safari, and Opera browsers with full playback and word search capability. Display format is OGG!! &lt;br /&gt;
&lt;br /&gt;
You have to see this tool in action.  Twenty minutes from an audio file to transcribed, text-searchable website.  Be there or be L seven (Yeah, I’m that old)   &lt;br /&gt;
&lt;br /&gt;
== Format Designation in MARC Records: A Trip Down the Rabbit-Hole ==&lt;br /&gt;
 &lt;br /&gt;
* Michael Doran, University of Texas at Arlington, doran@uta.edu&lt;br /&gt;
&lt;br /&gt;
This presentation will use a seemingly simple data point, the &amp;quot;format&amp;quot; of the item being described, to illustrate some of the complexities and challenges inherent in the parsing of MARC records.  I will talk about abstract vs. concrete forms; format designation in the Leader, 006, 007, and 008 fixed fields as well as the 245 and 300 variable fields; pseudo-formats; what is mandatory vs. optional in respect to format designation in cataloging practice; and the differences between cataloging theory and practice as observed via format-related data mining of a mid-size academic library collection. &lt;br /&gt;
&lt;br /&gt;
I understand that most of us go to code4lib to hear about the latest sexy technologies.  While MARC isn't sexy, many of the new tools being discussed still need to be populated with data gleaned from MARC records.  MARC format designation has ramifications for search and retrieval, limits, and facets, both in the ILS and further downstream in next generation OPACs and web-scale discovery tools.  Even veteran library coders will learn something from this session. &lt;br /&gt;
&lt;br /&gt;
== Touch Kiosk 2: Piezoelectric Boogaloo ==&lt;br /&gt;
&lt;br /&gt;
* Andreas Orphanides, North Carolina State University Libraries, akorphan@ncsu.edu&lt;br /&gt;
&lt;br /&gt;
At the NCSU Libraries, we provide realtime access to information on library spaces and services through an interactive touchscreen kiosk in our Learning Commons. In the summer of 2012, two years after its initial deployment, I redeveloped the kiosk application from the ground up, with an entirely new codebase and a completely redesigned user interface. The changes I implemented were designed to remedy previously identified shortcomings in the code and the interface design [1], and to enhance overall stability and performance of the application.&lt;br /&gt;
&lt;br /&gt;
In this presentation I will outline my revision process, highlighting the lessons I learned and the practices I implemented in the course of redevelopment. I will highlight the key features of the HTML/Javascript codebase that allow for increased stability, flexibility, and ease of maintenance; and identify the changes to the user interface that resulted from the usability findings I uncovered in my previous research. Finally, I will compare the usage patterns of the new interface to the analysis of the previous implementation to examine the practical effect of the implemented changes.&lt;br /&gt;
&lt;br /&gt;
I will also provide access to a genericized version of the interface code for others to build their own implementations of similar kiosk applications.&lt;br /&gt;
&lt;br /&gt;
[1] http://journal.code4lib.org/articles/5832&lt;br /&gt;
&lt;br /&gt;
== Wayfinding in a Cloud: Location Service for libraries ==&lt;br /&gt;
 &lt;br /&gt;
* Petteri Kivimäki, The National Library of Finland, petteri.kivimaki@helsinki.fi&lt;br /&gt;
&lt;br /&gt;
Searching for books in large libraries can be a difficult task for a novice library user. This talk presents the Location Service, a software-as-a-service (SaaS) wayfinding application developed and managed by The National Library of Finland and aimed at all libraries. The service provides additional information and map-based guidance to books and collections by showing their location on a map, and it can be integrated with any library management system: the integration happens by adding a link to the service in the search interface. The service is being developed continuously based on feedback received from users.&lt;br /&gt;
&lt;br /&gt;
The service has two user interfaces: one for customers and one for library staff, who manage the information related to the locations. The customer UI is fully customizable by each library; customization is done via template files using HTML, CSS, and Javascript/jQuery. The service supports multiple languages, and libraries have full control over which languages they want to support in their environment.&lt;br /&gt;
&lt;br /&gt;
The service is written in Java and uses the Spring and Hibernate frameworks. The data is stored in a PostgreSQL database shared by all the libraries. Libraries do not have direct access to the database, but the service offers an interface for retrieving XML data over HTTP. Modification of the data via the admin UI is restricted, however, and access to other libraries’ data is blocked.&lt;br /&gt;
&lt;br /&gt;
== Empowering Collection Owners with Automated Bulk Ingest Tools for DSpace ==&lt;br /&gt;
&lt;br /&gt;
* Terry Brady, Georgetown University, twb27@georgetown.edu&lt;br /&gt;
&lt;br /&gt;
The Georgetown University Library has developed a number of applications to expedite the process of ingesting content into DSpace.&lt;br /&gt;
* Automatically inventory a collection of documents or images to be uploaded&lt;br /&gt;
* Generate a spreadsheet for metadata capture based on the inventory&lt;br /&gt;
* Generate item-level ingest folders, contents files and Dublin Core metadata for the items to be ingested&lt;br /&gt;
* Validate the contents of ingest folders prior to initiating the ingest to DSpace&lt;br /&gt;
* Present users with a simple, web-based form to initiate the batch ingest process&lt;br /&gt;
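The item-level ingest folders described above correspond to DSpace's Simple Archive Format: one directory per item, holding a contents file that lists the bitstreams and a dublin_core.xml with the descriptive metadata. A minimal sketch of generating one such folder (the item name, filenames, and metadata values are invented for illustration) might look like:

```python
import os
import tempfile
from xml.sax.saxutils import escape

def write_saf_item(base_dir, item_name, files, metadata):
    """Write one DSpace Simple Archive Format (SAF) item folder.

    metadata is a list of (element, qualifier_or_None, value) tuples
    that become dcvalue entries in dublin_core.xml.
    """
    item_dir = os.path.join(base_dir, item_name)
    os.makedirs(item_dir, exist_ok=True)
    # The 'contents' file lists one bitstream filename per line.
    with open(os.path.join(item_dir, 'contents'), 'w') as f:
        f.write('\n'.join(files) + '\n')
    # dublin_core.xml holds the item's descriptive metadata.
    lines = ['<dublin_core>']
    for element, qualifier, value in metadata:
        qual = ' qualifier="%s"' % qualifier if qualifier else ''
        lines.append('  <dcvalue element="%s"%s>%s</dcvalue>'
                     % (element, qual, escape(value)))
    lines.append('</dublin_core>')
    with open(os.path.join(item_dir, 'dublin_core.xml'), 'w') as f:
        f.write('\n'.join(lines) + '\n')
    return item_dir

# One hypothetical item: a scanned city-directory page.
item_dir = write_saf_item(tempfile.mkdtemp(), 'item_0001',
                          ['page001.tif'],
                          [('title', None, 'Sample City Directory'),
                           ('date', 'issued', '1923')])
```

Generating these folders from a spreadsheet row is what lets the validation and web-form steps above hand a ready-made batch to DSpace's standard batch importer.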
&lt;br /&gt;
The applications have eliminated a number of error-prone steps from the ingest workflow and have significantly reduced the amount of tedious data editing.  These applications have empowered content experts to be in charge of their own collections. &lt;br /&gt;
&lt;br /&gt;
In this presentation, I will provide a demonstration of the tools that were built and discuss the development process that was followed.&lt;br /&gt;
&lt;br /&gt;
== Quality Assurance Reports for DSpace Collections ==&lt;br /&gt;
&lt;br /&gt;
* Terry Brady, Georgetown University, twb27@georgetown.edu&lt;br /&gt;
&lt;br /&gt;
The Georgetown University Library has developed a collection of quality assurance reports to improve the consistency of the metadata in our DSpace collections.  The report infrastructure permits the creation of query snippets to test for possible consistency errors within the repository, such as items missing thumbnails, items with multiple thumbnails, items missing a creation date, items containing improperly formatted dates, items with duplicated metadata fields, and items recently added across the repository, a community, or a collection.&lt;br /&gt;
&lt;br /&gt;
These reports have served to prioritize programmatic and manual data cleanup tasks.  They have served as a progress tracker for data cleanup work and will provide ongoing monitoring of the metadata consistency of the repository.&lt;br /&gt;
&lt;br /&gt;
In this presentation, I will provide a demonstration of the tools that were built and discuss the development process that was followed.&lt;br /&gt;
&lt;br /&gt;
== A Hybrid Solution for Improving Single Sign-On to a Proxy Service with Squid and EZproxy through Shibboleth and ExLibris’ Aleph X-Server ==&lt;br /&gt;
&lt;br /&gt;
* Alexander Jerabek, UQAM - Université du Québec à Montréal, jerabek.alexander_j@uqam.ca&lt;br /&gt;
* Minh-Quang Nguyen, UQAM - Université du Québec à Montréal, nguyen.minh-quang@uqam.ca&lt;br /&gt;
&lt;br /&gt;
In this talk, we will describe how we developed and implemented a hybrid solution for improving single sign-on in conjunction with the library’s proxy service. This hybrid solution consists of integrating the disparate elements of EZproxy, the Squid workflow, Shibboleth, and the Aleph X-Server. We will report how this new integrated service improves the user experience. To our knowledge, this new service is unique and has not been implemented anywhere else. We will also present some statistics after approximately one year in production.&lt;br /&gt;
&lt;br /&gt;
See article: http://journal.code4lib.org/articles/7470&lt;br /&gt;
&lt;br /&gt;
== HTML5 Video Now! ==&lt;br /&gt;
&lt;br /&gt;
* Jason Ronallo, North Carolina State University Libraries, jnronall@ncsu.edu&lt;br /&gt;
&lt;br /&gt;
Can you use HTML5 video now? Yes.&lt;br /&gt;
&lt;br /&gt;
I'll show you how to get started using HTML5 video, including gotchas, tips, and tricks. Beyond the basics we'll see the power of having video integrated into HTML and the browser. Finally, we'll look at examples that push the limits and show the exciting future of video on the Web.&lt;br /&gt;
&lt;br /&gt;
My experience comes from technical development of an oral history video clips project. I developed the technical aspects of the project, including video processing, server configuration, development of a public site, creation of an administrative interface, and video engagement analytics. Major portions of this work have been open sourced under an MIT license.&lt;br /&gt;
&lt;br /&gt;
== Hybrid Archival Collections Using Blacklight and Hydra ==&lt;br /&gt;
&lt;br /&gt;
* Adam Wead, Rock and Roll Hall of Fame and Museum, awead@rockhall.org&lt;br /&gt;
&lt;br /&gt;
At the Library and Archives of the Rock and Roll Hall of Fame, we use available tools such as Archivists' Toolkit to create EAD finding aids of our collections.  However, managing digital content created from these materials and the born-digital content that is also part of these collections represents a significant challenge.  In my presentation, I will discuss how we solve the problem of our hybrid collections by using Hydra as a digital asset manager and Blacklight as a unified presentation and discovery interface for all our materials.&lt;br /&gt;
&lt;br /&gt;
Our strategy centers around indexing EAD XML into Solr as multiple documents: one for each collection, and one for every series, sub-series and item contained within a collection.  For discovery, we use this strategy to offer item-level searching of archival collections alongside our traditional library content.  For digital collections, we use this same technique to represent a finding aid in Hydra as a set of linked objects using RDF.  New digital items are then linked to these parent objects at the collection and series level.  Once this is done, the items can be exported back out to the Blacklight Solr index and the digital content appears along with the rest of the items in the collection.&lt;br /&gt;
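The multiple-document strategy described above amounts to flattening the finding aid's tree into one Solr document per node, with each document keeping a pointer to its parent so the hierarchy (or the RDF links in Hydra) can be reassembled. A hypothetical sketch, with invented field names and sample data rather than the Rock Hall's actual schema:

```python
def flatten_finding_aid(node, parent_id=None, docs=None):
    """Turn a nested collection/series/item tree into flat Solr documents.

    Each node is a dict with 'id', 'title', 'level', and optional
    'children'; every output document carries a parent_id link.
    """
    if docs is None:
        docs = []
    docs.append({'id': node['id'],
                 'title': node['title'],
                 'level': node['level'],
                 'parent_id': parent_id})
    for child in node.get('children', []):
        flatten_finding_aid(child, parent_id=node['id'], docs=docs)
    return docs

# A tiny invented finding aid: collection -> series -> item.
ead = {'id': 'coll-1', 'title': 'Concert Posters', 'level': 'collection',
       'children': [
           {'id': 'ser-1', 'title': 'Series I: 1960s', 'level': 'series',
            'children': [
                {'id': 'item-1', 'title': '1969 Festival Poster',
                 'level': 'item'}]}]}
docs = flatten_finding_aid(ead)
```

Because every series and item becomes its own document, a search can match at the item level directly, while the parent_id field lets the display layer show each hit in the context of its collection.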
&lt;br /&gt;
== Making the Web Accessible through Solid Design ==&lt;br /&gt;
 &lt;br /&gt;
* [[User:Cynthia|Cynthia Ng]] from Ryerson University Library &amp;amp; Archives&lt;br /&gt;
&lt;br /&gt;
In libraries, we are always trying our best to be accessible to everyone and we make every effort to do so physically, but what about our websites? Web designers are great at talking about the user experience and how to improve it, but what sometimes gets overlooked is how to make a site more accessible and meet accessibility guidelines. While guidelines are necessary to cover a minimum standard, web accessibility should come from good web design without ‘sacrificing’ features. While it's difficult to make a website fully accessible to everyone, there are easy, practical ways to make a site as accessible as possible.&lt;br /&gt;
&lt;br /&gt;
While the focus will be on websites and meeting the Web Content Accessibility Guidelines (WCAG), the presentation will also touch on how to make custom web interfaces accessible.&lt;br /&gt;
&lt;br /&gt;
== Getting People to What They Need Fast! A Wayfinding Tool to Locate Books &amp;amp; Much More ==&lt;br /&gt;
 &lt;br /&gt;
* Steven Marsden, Ryerson University Library &amp;amp; Archives, steven dot marsden at ryerson dot ca&lt;br /&gt;
* [[User:Cynthia|Cynthia Ng]], Ryerson University Library &amp;amp; Archives&lt;br /&gt;
&lt;br /&gt;
Having a bewildered, lost user in the building or stacks is a common occurrence, but we can help our users find their way through enhanced maps and floor plans.  While not a new concept, these maps are integrated into the user’s flow of information without having to load a special app. The map not only highlights the location, but also provides all the related information with a link back to the detailed item view. During the first stage of the project, it has only been implemented for books (and other physical items), but the 'RULA Finder' is built to help users find just about anything and everything in the library, including study rooms, computer labs, and staff. A simple-to-use admin interface makes it easy for everyone, staff and users alike. &lt;br /&gt;
&lt;br /&gt;
The application is written in PHP with data stored in a MySQL database. The end-user interface involves jQuery, JSON, and the library's discovery layer (Summon) API.&lt;br /&gt;
&lt;br /&gt;
The presentation will not only cover the technical aspects, but also the implementation and usability findings.&lt;br /&gt;
&lt;br /&gt;
== De-sucking the Library User Experience ==&lt;br /&gt;
 &lt;br /&gt;
* Jeremy Prevost, Northwestern University, j-prevost {AT} northwestern [DOT] edu&lt;br /&gt;
&lt;br /&gt;
Have you ever thought that library vendors purposely create the worst possible user experience they can imagine because they just hate users? Have you ever thought that your own library website feels like it was created by committee rather than for users because, well, it was? I’ll talk about how we used vendor supplied APIs to our ILS and Discovery tool to create an experience for our users that sucks at least a little bit less.&lt;br /&gt;
&lt;br /&gt;
The talk will provide specific examples of how inefficient or confusing vendor supplied solutions are from a user perspective along with our specific streamlined solutions to the same problems. Code examples will be minimal as the focus will be on improving user experience rather than any one code solution of doing that. Examples may include the seemingly simple tasks of renewing a book or requesting an item from another campus library.&lt;br /&gt;
&lt;br /&gt;
== Solr Testing Is Easy with Rspec-Solr Gem ==&lt;br /&gt;
&lt;br /&gt;
* Naomi Dushay, Stanford University, ndushay AT stanford DOT edu&lt;br /&gt;
&lt;br /&gt;
How do you know if &lt;br /&gt;
&lt;br /&gt;
* your idea for &amp;quot;left anchoring&amp;quot; searches actually works?&lt;br /&gt;
* your field analysis for LC call numbers accommodates a suffix between the first and second cutter without breaking the rest of LC call number parsing?&lt;br /&gt;
* tweaking Solr configs to improve, say, Chinese searching, won't break Turkish and Cyrillic?&lt;br /&gt;
* changes to your solrconfig file accomplish what you wanted without breaking anything else?&lt;br /&gt;
&lt;br /&gt;
Avoid the whole app stack when writing Solr acceptance/relevancy/regression tests!  Forget cucumber and capybara.  This gem lets you easily (only 4 short files needed!) write tests like this, passing arbitrary parameters to Solr:&lt;br /&gt;
&lt;br /&gt;
  it &amp;quot;unstemmed author name Zare should precede stemmed variants&amp;quot; do&lt;br /&gt;
    resp = solr_response(author_search_args('Zare').merge({'fl'=&amp;gt;'id,author_person_display', 'facet'=&amp;gt;false}))&lt;br /&gt;
    resp.should include(&amp;quot;author_person_display&amp;quot; =&amp;gt; /\bZare\W/).in_each_of_first(3).documents&lt;br /&gt;
    resp.should_not include(&amp;quot;author_person_display&amp;quot; =&amp;gt; /Zaring/).in_each_of_first(20).documents&lt;br /&gt;
  end&lt;br /&gt;
      &lt;br /&gt;
  it &amp;quot;Cyrillic searching should work:  Восемьсoт семьдесят один день&amp;quot; do&lt;br /&gt;
    resp = solr_resp_doc_ids_only({'q'=&amp;gt;'Восемьсoт семьдесят один день'})&lt;br /&gt;
    resp.should include(&amp;quot;9091779&amp;quot;)&lt;br /&gt;
  end&lt;br /&gt;
   &lt;br /&gt;
  it &amp;quot;q of 'String quartets Parts' and variants should be plausible &amp;quot; do&lt;br /&gt;
    resp = solr_resp_doc_ids_only({'q'=&amp;gt;'String quartets Parts'})&lt;br /&gt;
    resp.should have_at_least(2000).documents&lt;br /&gt;
    resp.should have_the_same_number_of_results_as(solr_resp_doc_ids_only({'q'=&amp;gt;'(String quartets Parts)'}))&lt;br /&gt;
    resp.should have_more_results_than(solr_resp_doc_ids_only({'q'=&amp;gt;'&amp;quot;String quartets Parts&amp;quot;'}))&lt;br /&gt;
  end&lt;br /&gt;
   &lt;br /&gt;
  it &amp;quot;Traditional Chinese chars 三國誌 should get the same results as simplified chars 三国志&amp;quot; do&lt;br /&gt;
    resp = solr_response({'q'=&amp;gt;'三國誌', 'fl'=&amp;gt;'id', 'facet'=&amp;gt;false}) &lt;br /&gt;
    resp.should have_at_least(240).documents&lt;br /&gt;
    resp.should have_the_same_number_of_results_as(solr_resp_doc_ids_only({'q'=&amp;gt;'三国志'})) &lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
See&lt;br /&gt;
   http://rubydoc.info/github/sul-dlss/rspec-solr/frames&lt;br /&gt;
   https://github.com/sul-dlss/rspec-solr&lt;br /&gt;
&lt;br /&gt;
and our production relevancy/acceptance/regression tests slowly migrating from cucumber to:&lt;br /&gt;
   https://github.com/sul-dlss/sw_index_tests&lt;br /&gt;
&lt;br /&gt;
== Northwestern's Digital Image Library ==&lt;br /&gt;
&lt;br /&gt;
*Mike Stroming, Northwestern University Library, m-stroming AT northwestern DOT edu&lt;br /&gt;
*Edgar Garcia, Northwestern University Library, edgar-garcia AT northwestern DOT edu&lt;br /&gt;
&lt;br /&gt;
At Northwestern University Library, we are about to release a beta version of our Digital Image Library (DIL).  DIL is an implementation of the Hydra technology that provides a Fedora repository solution for discovery of and access to over 100,000 images for staff, students, and scholars. Some important features are:&lt;br /&gt;
&lt;br /&gt;
*Build custom collections of images using drag-and-drop&lt;br /&gt;
*Re-order images within a collection using drag-and-drop&lt;br /&gt;
*Nest collections within other collections&lt;br /&gt;
*Create details/crops of images&lt;br /&gt;
*Zoom, rotate images&lt;br /&gt;
*Upload personal images&lt;br /&gt;
*Retrieve your own uploads and details from a collection&lt;br /&gt;
*Export a collection to a PowerPoint presentation&lt;br /&gt;
*Create a group of users and authorize access to your images&lt;br /&gt;
*Batch edit image metadata&lt;br /&gt;
&lt;br /&gt;
Our presentation will include a demo, explanation of the architecture, and a discussion of the benefits of being a part of the Hydra open-source community.&lt;br /&gt;
&lt;br /&gt;
== Two standards in a software (to say nothing of Normarc) ==&lt;br /&gt;
&lt;br /&gt;
*Zeno Tajoli, CINECA (Italy), z DOT tajoli AT cineca DOT it&lt;br /&gt;
&lt;br /&gt;
With this presentation I want to show how the Koha ILS handles support for three different MARC dialects:&lt;br /&gt;
MARC21, Unimarc and Normarc. The main points of the presentation:&lt;br /&gt;
&lt;br /&gt;
*Three MARC dialects at the MySQL level&lt;br /&gt;
*Three MARC dialects at the API level&lt;br /&gt;
*Three MARC dialects at display time&lt;br /&gt;
*Can I add a new format?&lt;br /&gt;
&lt;br /&gt;
== Future Friendly Web Design for Libraries ==&lt;br /&gt;
&lt;br /&gt;
*[[User:michaelschofield|Michael Schofield]], Alvin Sherman Library, Research, and Information Technology Center, mschofied[dot]nova[dot]edu&lt;br /&gt;
&lt;br /&gt;
Libraries on the web are afterthoughts. Often their design is stymied on one hand by red tape imposed by the larger institution and on the other by an overload of too-democratic input from colleagues. Slashed budgets / staff stretched too thin foul up the R-word (that'd be &amp;quot;redesign&amp;quot;) - but things are getting pretty strange. Notions about the Web (and where it can be accessed) are changing. &lt;br /&gt;
&lt;br /&gt;
So libraries can put off refabbing their fixed-width desktop and jQuery Mobile m-dot websites for only so long before desktop users evaporate and demand from patrons with web-ready refrigerators becomes deafening. Just when we have largely hopped on the bandwagon and gotten enthusiastic about being online, our users expect a library's site to look and perform great on everything. &lt;br /&gt;
&lt;br /&gt;
Our presence on the web should be built to weather ever-increasing device complexity. To meet users at their point of need, libraries must start thinking Future Friendly.&lt;br /&gt;
&lt;br /&gt;
This overview rehashes the approach and philosophy of library web design, re-orienting it for maximum accessibility and maximum efficiency of design. In just 20 minutes, we'll mull over techniques like mobile-first responsive web design, modular CSS, browser feature detection for progressive enhancement, and lots of nifty tricks.&lt;br /&gt;
&lt;br /&gt;
==BYU's discovery layer service aggregator==&lt;br /&gt;
&lt;br /&gt;
*Curtis Thacker, Brigham Young University, curtis.thacker AT byu DOT edu&lt;br /&gt;
&lt;br /&gt;
It is clear that libraries will continue to experience rapid change driven by the speed of technology. To acknowledge this new reality and to respond rapidly to shifting end-user paradigms, BYU has developed a custom service aggregator. At first our vendors looked at us a bit funny; however, in the last year they have been astonished by the fluid implementation of new services – here’s the short list:&lt;br /&gt;
&lt;br /&gt;
*filmfinder - a tool for browsing and searching films&lt;br /&gt;
*A custom book recommender service based on checkout data&lt;br /&gt;
*Integrated library services like personnel, library hours, the study room scheduler and the database finder through a custom adwords system.&lt;br /&gt;
*A very geeky and powerful utility for converting MARC XML into Primo-compliant XML.&lt;br /&gt;
*Embedded floormaps&lt;br /&gt;
*A responsive web design&lt;br /&gt;
*Bing did-you-mean&lt;br /&gt;
*And many more.&lt;br /&gt;
&lt;br /&gt;
I will demo the system, review the architecture and talk about future plans.&lt;br /&gt;
&lt;br /&gt;
==The Avalon Media System: A Next Generation Hydra Head For Audio and Video Delivery==&lt;br /&gt;
&lt;br /&gt;
* Michael Klein, Senior Software Developer, Northwestern University Library, michael.klein AT northwestern DOT edu&lt;br /&gt;
* Nathan Rogers, Programmer/Analyst, Indiana University, rogersna AT indiana DOT edu&lt;br /&gt;
&lt;br /&gt;
Based on the success of the [http://www.dml.indiana.edu/ Variations] digital music platform, Indiana University and Northwestern University have developed a next generation educational tool for delivering multimedia resources to the classroom. The Avalon Media System (formerly Variations on Video) supports the ingest, media processing, management, and access-controlled delivery of library-managed video and audio collections. To do so, the system draws on several existing, mature, open source technologies:&lt;br /&gt;
&lt;br /&gt;
* The ingest, search, and discovery functionality of the Hydra framework&lt;br /&gt;
* The powerful multimedia workflow management features of Opencast Matterhorn&lt;br /&gt;
* The flexible Engage audio/video player&lt;br /&gt;
* The streaming capabilities of both Red5 Media Server (open source) and Adobe Flash Media Server (proprietary)&lt;br /&gt;
&lt;br /&gt;
Extensive customization options are built into the framework for tailoring the application to the needs of a specific institution.&lt;br /&gt;
&lt;br /&gt;
Our goal is to create an open platform that can be used by other institutions to serve the needs of the academic community. Release 1 is planned for a late February launch with future versions released every couple of months following. For more information visit http://avalonmediasystem.org/ and https://github.com/variations-on-video/hydrant.&lt;br /&gt;
&lt;br /&gt;
== The DH Curation Guide: Building a Community Resource == &lt;br /&gt;
&lt;br /&gt;
*Robin Davis, John Jay College of Criminal Justice, robdavis AT jjay.cuny.edu &lt;br /&gt;
*James Little, University of Illinois Urbana-Champaign, little9 AT illinois.edu  &lt;br /&gt;
&lt;br /&gt;
Data curation for the digital humanities is an emerging area of research and practice. The DH Curation Guide, launched in July 2012, is an educational resource that addresses aspects of humanities data curation in a series of expert-written articles. Each provides a succinct introduction to a topic with annotated lists of useful tools, projects, standards, and good examples of data curation done right. The DH Curation Guide is intended to be a go-to resource for data curation practitioners and learners in libraries, archives, museums, and academic institutions.  &lt;br /&gt;
&lt;br /&gt;
Because it's a growing field, we designed the DH Curation Guide to be a community-driven, living document. We developed a granular commenting system that encourages data curation community members to contribute remarks on articles, article sections, and article paragraphs. Moreover, we built in a way for readers to contribute and annotate resources for other data curation practitioners.  &lt;br /&gt;
&lt;br /&gt;
This talk will address how the DH Curation Guide is currently used and will include a sneak peek at the articles that are in store for the Guide’s future. We will talk about the difficulties and successes of launching a site that encourages community. We are all builders here, so we will also walk through developing the granular commenting/annotation system and the XSLT-powered publication workflow. &lt;br /&gt;
&lt;br /&gt;
== Solr Update == &lt;br /&gt;
&lt;br /&gt;
*Erik Hatcher, LucidWorks, erik.hatcher AT lucidworks.com &lt;br /&gt;
&lt;br /&gt;
Solr is continually improving.  Solr 4 was recently released, bringing dramatic changes in the underlying Lucene library and Solr-level features.  It's tough for us all to keep up with the various versions and capabilities.&lt;br /&gt;
&lt;br /&gt;
This talk will blaze through the highlights of new features and improvements in Solr 4 (and up).  Topics will include: SolrCloud, direct spell checking, surround query parser, and many other features.  We will focus on the features library coders really need to know about.&lt;br /&gt;
&lt;br /&gt;
== Reports for the People == &lt;br /&gt;
&lt;br /&gt;
*Kara Young, Keene State College, NH, kyoung1 at keene.edu&lt;br /&gt;
*Dana Clark, Keene State College, NH, dclark5 at keene.edu&lt;br /&gt;
&lt;br /&gt;
Libraries are increasingly being called upon to provide information on how our programs and services are moving our institutional strategic goals forward.  In support of College and departmental Information Literacy learning outcomes, Mason Library Systems at Keene State College developed an assessment database to record and report assessment activities by Library faculty.  Frustrated by the lack of freely available options for intuitively recording, accounting for, and outputting useful reports on instructional activities, Librarians requested a tool to make capturing and reporting activities (and their lives) easier.  Library Systems was able to respond to this need by working with librarians to identify what information is necessary to capture, where other assessment tools had fallen short, and ultimately by developing an application that supports current reporting imperatives while providing flexibility for future changes.&lt;br /&gt;
&lt;br /&gt;
The result of our efforts was an in-house, browser-based Assessment Database to improve the process of data collection and analysis.  The application is written in PHP, stores its data in a MySQL database, and is presented in the browser, making extensive use of jQuery and jQuery plug-ins for data collection, manipulation, and presentation. &lt;br /&gt;
The presentation will outline the process undertaken to build a successful collaboration with Library faculty from conception to implementation, as well as the technical aspects of our trial-and-error approach. Plus: cool charts and graphs!&lt;br /&gt;
&lt;br /&gt;
==  Network Analyses of Library Catalog Data ==&lt;br /&gt;
 &lt;br /&gt;
* Kirk Hess, University of Illinois at Urbana-Champaign, kirkhess AT illinois.edu&lt;br /&gt;
* Harriett Green, University of Illinois at Urbana-Champaign, green19 AT illinois.edu &lt;br /&gt;
&lt;br /&gt;
Library collections are all too often like icebergs:  The amount exposed on the surface is only a fraction of the actual amount of content, and we’d like to recommend relevant items from deep within the catalog to users. With the assistance of an XSEDE Allocation grant (http://xsede.org), we’ve used R to reconstitute anonymous circulation data from the University of Illinois’s library catalog into separate user transactions. The transaction data is incorporated into subject analyses that use XSEDE supercomputing resources to generate predictive network analyses and visualizations of subject areas searched by library users using Gephi (https://gephi.org/). The test data set for developing the subject analyses consisted of approximately 38,000 items from the Literatures and Languages Library that contained 110,000 headings and 130,620 transactions. We’re currently working on developing a recommender system within VuFind to display the results of these analyses.&lt;br /&gt;
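The co-occurrence analysis described above can be illustrated with a minimal sketch (hypothetical data and function names; the actual project uses R and Gephi on XSEDE resources): build a weighted graph of subject headings from reconstituted circulation transactions, then rank related subjects by edge weight.

```python
from collections import Counter
from itertools import combinations

# Hypothetical circulation transactions: each one is the set of subject
# headings attached to the items a user checked out together.
transactions = [
    {"Poetry", "French literature"},
    {"Poetry", "Translation"},
    {"French literature", "Translation"},
    {"Poetry", "French literature"},
]

# Weight each pair of headings by how often they co-occur in a transaction.
edge_weights = Counter()
for headings in transactions:
    for pair in combinations(sorted(headings), 2):
        edge_weights[pair] += 1

def related_subjects(subject):
    """Rank other headings by total co-occurrence weight with `subject`."""
    scores = Counter()
    for (a, b), weight in edge_weights.items():
        if a == subject:
            scores[b] += weight
        elif b == subject:
            scores[a] += weight
    return [heading for heading, _ in scores.most_common()]
```

The weighted edge list could then be exported (for example as CSV) for visualization in a tool such as Gephi, or used directly as input to a recommender.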
&lt;br /&gt;
== Pitfall! Working with Legacy Born Digital Materials in Special Collections ==&lt;br /&gt;
&lt;br /&gt;
* Donald Mennerich, The New York Public Library, don.mennerich AT gmail.com&lt;br /&gt;
* Mark A. Matienzo, Yale University Library, mark AT matienzo.org&lt;br /&gt;
&lt;br /&gt;
Archives and special collections are faced with a growing abundance of born digital material, as well as many promising tools for managing it. However, one must consider the potential problems that can arise when approaching a collection containing legacy materials (from roughly the pre-internet era). Many of the tried and true, &amp;quot;best of breed&amp;quot; tools for digital preservation don't always work as they do for more recent materials, requiring a fair amount of ingenuity and use of &amp;quot;word of mouth tradecraft and knowledge exchanged through serendipitous contacts, backchannel conversations, and beer&amp;quot; (Kirschenbaum, &amp;quot;Breaking &amp;lt;code&amp;gt;badflag&amp;lt;/code&amp;gt;&amp;quot;).&lt;br /&gt;
&lt;br /&gt;
Our presentation will focus on some of the strange problems encountered and creative solutions devised by two digital archivists in the course of preserving, processing, and providing access to collections at their institutions. We'll place particular emphasis on the pitfalls and crocodiles we've learned to swing over safely, collecting treasure in the process. We'll address working with CP/M disks in collections of authors' papers, reconstructing a multipart hard drive backup spread across floppy disks, and more. &lt;br /&gt;
&lt;br /&gt;
== Project &amp;lt;s&amp;gt;foobar&amp;lt;/s&amp;gt; FUBAR ==&lt;br /&gt;
&lt;br /&gt;
* Becky Yoose, Grinnell College, yoosebec AT grinnell DOT edu&lt;br /&gt;
&lt;br /&gt;
Be it mandated from Those In A Higher Pay Grade Than You or self-inflicted, many of us deal with managing major library-related technology projects [1]. It’s common nowadays to manage multiple technology projects, and generally external and internal issues can be planned for to minimize project timeline shifts and quality of deliverables. Life, however, has other plans for you, and all your major library technology infrastructure projects pile on top of each other at the same time. How do you and your staff survive a train wreck of technology projects and produce deliverables to project stakeholders without having to go into the library IT version of the United States Federal Witness Protection Program?&lt;br /&gt;
&lt;br /&gt;
This session covers my experience with the collision of three major library technology projects - including a new institutional repository and an integrated library system migration - and how we dealt with external and internal factors, implemented damage control, and overall lessened the damage from the epic crash. You might laugh, you might cry, you will probably have flashbacks from previous projects, but you will come out of this session with a set of tools to use when you’re managing mission-critical projects.&lt;br /&gt;
&lt;br /&gt;
[1] Past code4lib talks have covered specific project management strategies, such as Agile, for application development. I will be focusing on general project management practices in relation to various library technology projects; many of these practices are incorporated into those strategies' own structures.&lt;br /&gt;
&lt;br /&gt;
== Implementing RFID in an Academic Library == &lt;br /&gt;
&lt;br /&gt;
* Scott Bacon, Coastal Carolina University, sbacon AT coastal DOT edu&lt;br /&gt;
&lt;br /&gt;
Coastal Carolina University’s Kimbel Library recently implemented RFID to increase security, provide better inventory control over library materials and enable do-it-yourself patron services such as self checkout. &lt;br /&gt;
&lt;br /&gt;
I’ll give a quick overview of RFID and the components involved and then will talk about how our library utilized the technology. It takes a lot of research, time, money and no small amount of resourcefulness to make your library RFID-ready. I’ll show how we developed our project timeline, how we assessed and evaluated vendors and how we navigated the bid process. I’ll also talk about hardware and software installation, configuration and troubleshooting and will discuss our book and media collection encoding process. &lt;br /&gt;
&lt;br /&gt;
We encountered myriad issues with our vendor, the hardware and the software. Would we do it all over again? Should your library consider RFID? Caveats abound...&lt;br /&gt;
&lt;br /&gt;
== Coding an Academic Library Intranet in Drupal: Now We're Getting Organizized... ==&lt;br /&gt;
&lt;br /&gt;
* Scott Bacon, Coastal Carolina University, sbacon AT coastal DOT edu&lt;br /&gt;
&lt;br /&gt;
The Kimbel Library Intranet is coded in Drupal 7, and was created to increase staff communication and store documentation. This presentation will contain an overview of our intranet project, including the modules we used, implementation issues, and possible directions in future development phases. I won’t forget to talk about the slew of tasty development issues we faced, including dealing with our university IT department, user buy-in, site navigation, user roles, project management, training and mobile modules (or the lack thereof). And some other fun (mostly) true anecdotes will surely be shared. &lt;br /&gt;
&lt;br /&gt;
The main functions of Phase I of this project were to increase communication across departments and committees, facilitate project management and revise the library's shared drive. Another important function of this first phase was to host mission-critical documentation such as strategic goals, policies and procedures. Phase II of this project will focus on porting employee tasks into the centralized intranet environment. This development phase, which aims to replicate and automate the bulk of staff workflows within a content management system, will be a huge undertaking. &lt;br /&gt;
&lt;br /&gt;
We chose Drupal as our intranet platform because of its extensibility, flexibility and community support. We are also moving our entire library web presence to Drupal in 2013 and will be soliciting any advice on which modules to use/avoid and which third-party services to wrangle into the Drupal environment. Should we use Drupal as the back-end to our entire Web presence? Why or why not?&lt;br /&gt;
&lt;br /&gt;
== Hands off! Best Practices and Top Ten Lists for Code Handoffs ==&lt;br /&gt;
 &lt;br /&gt;
* Naomi Dushay, Stanford University Library, ndushay@stanford.edu&lt;br /&gt;
* Bess Sadler, Stanford University Library, bess@stanford.edu&lt;br /&gt;
&lt;br /&gt;
Transition points in who is the primary developer on an actively developing code base can be a source of frustration for everyone involved. We've tried to minimize that pain point as much as possible through the use of agile methods like test driven development, continuous integration, and modular design. Has optimizing for developer happiness brought us happiness? What's worked, what hasn't, and what's worth adopting? How do you keep your project in a state where you can easily hand it off? &lt;br /&gt;
&lt;br /&gt;
== How to be an effective evangelist for your open source project ==&lt;br /&gt;
 &lt;br /&gt;
* Bess Sadler, Stanford University Library, bess@stanford.edu&lt;br /&gt;
&lt;br /&gt;
The difference between an open source software project that gets new adopters and new contributing community members (which is to say, a project that goes on existing for any length of time) and a project that doesn't often isn't a question of superior design or technology. It's more often a question of whether the advocates for the project can convince institutional leaders AND front line developers that the project is stable and trustworthy. What are successful strategies for attracting development partners? I'll try to answer that and talk about what we could do as a community to make collaboration easier.  &lt;br /&gt;
&lt;br /&gt;
== What does it mean to be a &amp;quot;good&amp;quot; vendor in an open source meritocracy? ==&lt;br /&gt;
&lt;br /&gt;
* Matt Zumwalt, Data Curation Experts / MediaShelf / Hydra Project, matt@curationexperts.com&lt;br /&gt;
&lt;br /&gt;
What is the role of vendors in open source?  What should be the position of vendors in a meritocracy?  What are the avenues for encouraging great vendors who contribute to open source communities in valuable ways?  How you answer these questions has a huge impact on a community, and in order to formulate strong answers, you need to be well informed.  Let’s take a glimpse at the business practicalities of this situation, beginning with 1) an overview of the viable profit models for open-source software, 2) some of the realities of vendor involvement in open source, and 3) an account of the ins &amp;amp; outs of compensation &amp;amp; equity structures within for-profit corporations.&lt;br /&gt;
&lt;br /&gt;
The topics of power &amp;amp; influence, fairness, community participation, software quality, employment and personal profit are fair game, along with software licensing, sponsorship, closed source software and the role of sales people.&lt;br /&gt;
&lt;br /&gt;
This presentation will draw on personal experience from the past seven years spent bootstrapping and running MediaShelf, a small but prolific for-profit consulting company that focuses entirely on open source digital repository software.  MediaShelf has played an active role in creating the Hydra Framework and continuously contributes to maintenance of Fedora. Those contributions have been funded through consulting contracts for authoring &amp;amp; implementing open source software on behalf of organizations around the world.&lt;br /&gt;
&lt;br /&gt;
==Occam’s Reader: A system that allows the sharing of eBooks via Interlibrary Loan==&lt;br /&gt;
&lt;br /&gt;
*Ryan Litsey, Texas Tech University, Ryan DOT Litsey AT ttu.edu&lt;br /&gt;
*Kenny Ketner, Texas Tech University, Kenny DOT Ketner AT ttu.edu&lt;br /&gt;
&lt;br /&gt;
Occam’s Reader is a software platform that allows the transfer and sharing of electronic books between libraries via existing interlibrary loan software. Occam’s Reader allows libraries to meet the growing need to be able to share our electronic resources. In the ever-increasing digital world, many of our collection development plans now include eBook platforms. The problem with eBooks, however, is that they are resources that are locked into the home library. With Occam’s Reader we can continue the centuries-old tradition of resource sharing and also keep up with the changing digital landscape. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Using Puppet for configuration management when no two servers look alike ==&lt;br /&gt;
* Eugene Vilensky, Senior Systems Administrator, Northwestern University Library, evilensky northwestern edu&lt;br /&gt;
&lt;br /&gt;
Configuration management is hot because it allows one to scale to thousands of machines, all of which look alike, and tightly manage changes across the nodes. Infrastructure as code, implement all changes programmatically, yadda yadda yadda.&lt;br /&gt;
&lt;br /&gt;
Unfortunately, servers which have gone unmanaged for a long time do not look very similar to each other.  Variables come in many forms, usually because of some or all of the following: Who installed the server, where it was installed, where the image was sourced from, when it was installed, where additional packages were sourced, and what kind of software was hosted on it.&lt;br /&gt;
&lt;br /&gt;
Bringing such machines into your configuration management platform is no harder and no easier than some or all of the following options: 1) Blow such machines away, start from scratch, and migrate your data. 2) Find the lowest common baseline between the current state and the ideal state and start the work there. 3) Implement new features/services on existing unmanaged machines, but manage those new features/services.&lt;br /&gt;
&lt;br /&gt;
I will describe our experiences at the library for all three options using the Puppet open-source tool on Enterprise Linux 5 and 6.&lt;br /&gt;
&lt;br /&gt;
== REST &amp;lt;b&amp;gt;IS&amp;lt;/b&amp;gt; Your Mobile Strategy ==&lt;br /&gt;
&lt;br /&gt;
* Richard Wolf, University of Illinois at Chicago, richwolf@uic.edu&lt;br /&gt;
&lt;br /&gt;
Mobile is the new hotness ... and you can't be one of the cool kids unless you've got your own mobile app ... but the road to mobility is daunting.  I'll argue that it's actually easier than it seems ... and that the simplest way to mobility is to bring your data to the party, create a REST API around the data, tell developers about your API, and then let the magic happen.  To make my argument concrete, I'll show (lord help me!) how to go from an interesting REST API to a fun iOS tool for librarians and the general public in twenty minutes.&lt;br /&gt;
&lt;br /&gt;
== ScholarSphere: How We Built a Repository App That Doesn't Feel Like Yet Another Janky Old Repository App ==&lt;br /&gt;
&lt;br /&gt;
* Dan Coughlin, Penn State University, danny@psu.edu&lt;br /&gt;
* Mike Giarlo, Penn State University, michael@psu.edu&lt;br /&gt;
&lt;br /&gt;
ScholarSphere is a web application that allows the Penn State research community to deposit, share, and manage its scholarly works.  It is also, as some of our users and our peers have observed, a repository app that feels much more like Google Docs or GitHub than earlier-generation repository applications.  ScholarSphere is built upon the Hydra framework (Fedora Commons, Solr, Blacklight, Ruby on Rails), MySQL, Redis, Resque, FITS, ImageMagick, jQuery, Bootstrap, and FontAwesome.  We'll talk about techniques we used to:&lt;br /&gt;
&lt;br /&gt;
* eliminate Fedora-isms in the application&lt;br /&gt;
* model and expose RDF metadata in ways that users find unobtrusive&lt;br /&gt;
* manage permissions via a UI widget that doesn't stab you in the face&lt;br /&gt;
* harvest and connect controlled vocabularies (such as LCSH) to forms&lt;br /&gt;
* make URIs cool&lt;br /&gt;
* keep the app snappy without venturing into the architectural labyrinth of YAGNI&lt;br /&gt;
* build and queue background jobs&lt;br /&gt;
* expose social features and populate activity streams&lt;br /&gt;
* tie checksum verification, characterization, and version control to the UI&lt;br /&gt;
* let users upload and edit multiple files at once&lt;br /&gt;
&lt;br /&gt;
The application will be demonstrated; code will be shown; and we solemnly commit to showing ABSOLUTELY NO XML.&lt;br /&gt;
&lt;br /&gt;
==Coding with Mittens==&lt;br /&gt;
&lt;br /&gt;
*Jim LeFager, DePaul University Library, jlefager@depaul.edu&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Working in an environment where developers have restricted access to servers and development areas, or where you are primarily working in multiple hosted systems with limited access, can be a challenge when you are attempting to incorporate new functionality or improve existing features.  Hosted web services offer the benefit that staff time is not dedicated to server maintenance and development, but customization can be difficult and at times impossible.  In many cases, incorporating any current API functionality requires additional work beyond the original development work, which can be frustrating and inefficient.  The result can be a Frankenstein monster of web services that is confusing to the user and difficult to navigate.  &lt;br /&gt;
&lt;br /&gt;
This talk will focus on some effective best practices, and some maybe-not-so-great but necessary practices, that we have adopted to develop and improve our users’ experience, using JavaScript/jQuery and CSS to manipulate our hosted environments.  This will include a review of available tools that allow collaborative development in the cloud, as well as examples of jQuery methods that have let us take additional control of these hosted environments and track them using Google Analytics.  Included will be examples from Springshare Campus Guides, CONTENTdm and other hosted web spaces that have been ‘hacked’ to improve the UI.  &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Hacking the DPLA ==&lt;br /&gt;
* Nate Hill, Chattanooga Public Library,  nathanielhill AT gmail.com&lt;br /&gt;
* Sam Klein, Wikipedia, metasj AT gmail.com&lt;br /&gt;
&lt;br /&gt;
The Digital Public Library of America is a growing open-source platform to support digital libraries and archives of all kinds.  DPLA-alpha is available for testing, with data from six initial Hubs.  New APIs and data feeds are in development, with the next release scheduled for April.   &lt;br /&gt;
&lt;br /&gt;
Come learn what we are doing, how to contribute or hack the DPLA roadmap, and how you (or your favorite institution) can draw from and publish through it.  Larger institutions can join as a (content or service) hub, helping to aggregate and share metadata and services from across their {region, field, archive-type}.   We will discuss current challenges and possibilities (UI and API suggestions wanted!), apps being built on the platform, and related digitization efforts.&lt;br /&gt;
&lt;br /&gt;
DPLA has a transparent community and planning process; new participants are always welcome.  Half the time will be for suggestions and discussion.   Please bring proposals, problems, partnerships and possible paradoxes to discuss.&lt;br /&gt;
&lt;br /&gt;
== Introduction to SilverStripe 3.0 ==&lt;br /&gt;
 &lt;br /&gt;
* Ian Walls, University of Massachusetts Amherst, iwalls AT library DOT umass DOT edu&lt;br /&gt;
&lt;br /&gt;
SilverStripe is an open source Content Management System/development framework out of New Zealand, written in PHP, with a solid MVC structure.  This presentation will cover everything you need to know to get started with SilverStripe, including:&lt;br /&gt;
* Features (and why you should consider SilverStripe)&lt;br /&gt;
* Requirements &amp;amp; Installation&lt;br /&gt;
* Model-View-Controller&lt;br /&gt;
* Key data types &amp;amp; configuration settings&lt;br /&gt;
* Modules&lt;br /&gt;
* Where to start with customization&lt;br /&gt;
* Community support and participation&lt;br /&gt;
&lt;br /&gt;
== Citation search in SOLR and second-order operators ==&lt;br /&gt;
 &lt;br /&gt;
* Roman Chyla, Astrophysics Data System, roman.chyla AT (cfa.harvad.edu|gmail.com)&lt;br /&gt;
&lt;br /&gt;
Citation search is basically about connections (Is a paper read by a friend of mine more important than others? Get me a paper read by somebody who cites many papers or is cited by many papers?), but the implementation of citation search turns out to be surprisingly useful in many other areas.&lt;br /&gt;
&lt;br /&gt;
I will show the 'guts' of the new citation search for astrophysics; it is generic and can be applied recursively to any Lucene query. Some people would call it a second-order operation, because it works with the results of the previous (search) function. The talk will cover technical details of the special query class and its collectors, how to add a new search operator, and how to influence relevance scores. Then you can type along with me: friends_of(friends_of(cited_for(keyword:&amp;quot;black holes&amp;quot;) AND keyword:&amp;quot;red dwarf&amp;quot;))&lt;br /&gt;
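The flavor of a second-order operator can be sketched outside Lucene with plain Python sets (hypothetical data and operator names; the real implementation is a custom Lucene query class with its own collectors): each operator consumes the result set of the previous query, so operators nest and compose.

```python
# Hypothetical citation graph: paper id -> set of paper ids it cites.
cites = {
    "A": {"B", "C"},
    "B": {"C"},
    "C": set(),
    "D": {"A"},
}

def cited_for(papers):
    """Second-order step: map a result set to everything those papers cite."""
    return set().union(*(cites[p] for p in papers)) if papers else set()

def citing(papers):
    """Second-order step: papers whose reference lists touch the result set."""
    return {p for p, refs in cites.items() if refs & papers}

# Because each operator takes a result set and returns a result set,
# queries compose recursively, e.g. citing(cited_for(initial_results)).
```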
&lt;br /&gt;
[[Category:Code4Lib2013]]&lt;/div&gt;</summary>
		<author><name>Rchyla</name></author>	</entry>

	</feed>