<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
		<id>https://wiki.code4lib.org/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Jkeck</id>
		<title>Code4Lib - User contributions [en]</title>
		<link rel="self" type="application/atom+xml" href="https://wiki.code4lib.org/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Jkeck"/>
		<link rel="alternate" type="text/html" href="https://wiki.code4lib.org/Special:Contributions/Jkeck"/>
		<updated>2026-04-09T06:30:41Z</updated>
		<subtitle>User contributions</subtitle>
		<generator>MediaWiki 1.26.2</generator>

	<entry>
		<id>https://wiki.code4lib.org/index.php?title=2015_Prepared_Talk_Proposals&amp;diff=41954</id>
		<title>2015 Prepared Talk Proposals</title>
		<link rel="alternate" type="text/html" href="https://wiki.code4lib.org/index.php?title=2015_Prepared_Talk_Proposals&amp;diff=41954"/>
				<updated>2014-11-07T04:01:25Z</updated>
		
		<summary type="html">&lt;p&gt;Jkeck: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Code4lib 2015 is a loosely-structured conference that provides people working at the intersection of libraries/archives/museums/cultural heritage and technology with a chance to share ideas, be inspired, and forge collaborations. For more information about the Code4lib community, please visit http://code4lib.org/about/. &lt;br /&gt;
The conference will be held at the Portland Hilton &amp;amp; Executive Tower in Portland, Oregon, from February 9-12, 2015.&lt;br /&gt;
&lt;br /&gt;
'''Proposals for Prepared Talks:'''&lt;br /&gt;
&lt;br /&gt;
We encourage everyone to propose a talk.&lt;br /&gt;
 &lt;br /&gt;
Prepared talks are 20 minutes (including setup and questions), and should focus on one or more of the following areas:&lt;br /&gt;
* Projects you've worked on which incorporate innovative implementation of existing technologies and/or development of new software&lt;br /&gt;
* Tools and technologies – How to get the most out of existing tools, standards and protocols (and ideas on how to make them better)&lt;br /&gt;
* Technical issues - Big issues in library technology that should be addressed or better understood&lt;br /&gt;
* Relevant non-technical issues – Concerns of interest to the Code4Lib community which are not strictly technical in nature, e.g. collaboration, diversity, organizational challenges, etc.&lt;br /&gt;
&lt;br /&gt;
Proposals can be submitted through Friday, November 7, 2014 at 5pm PST (GMT−8). Voting will start on November 11, 2014 and continue through November 25, 2014. The URL to submit votes will be announced on the Code4Lib website and mailing list and will require an active code4lib.org account to participate. The final list of presentations will be announced in early- to mid-December.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''To Propose a Prepared Talk:'''&lt;br /&gt;
&lt;br /&gt;
Log in to the Code4lib wiki and edit this wiki page using the prescribed format. If you are not already registered, follow the instructions to do so.&lt;br /&gt;
Provide a title and brief (500 words or fewer) description of your proposed talk.&lt;br /&gt;
If you so choose, you may also indicate when, if ever, you have presented at a prior Code4Lib conference. This information is completely optional, but it may assist voters in opening the conference to new presenters.&lt;br /&gt;
&lt;br /&gt;
Please follow the formatting guidelines:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Talk Title: ==&lt;br /&gt;
 &lt;br /&gt;
* Speaker's name,  email address, and (optional) affiliation&lt;br /&gt;
* Second speaker's name, email address, and affiliation, if second speaker&lt;br /&gt;
&lt;br /&gt;
Abstract of no more than 500 words.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''Talk Proposals'''&lt;br /&gt;
&lt;br /&gt;
== Do the Semantic FRBRoo ==&lt;br /&gt;
* Rosie Le Faive, rlefaive@upei.ca, University of Prince Edward Island&lt;br /&gt;
&lt;br /&gt;
[http://www.islandora.ca Islandora] is great for creating repositories of any data type, but how can you model meaningful relationships between digital objects and use them to tell a story?&lt;br /&gt;
&lt;br /&gt;
At UPEI, I’m assembling an ethnography of Prince Edward Island’s traditional fiddle music that includes musical clips, video clips, oral histories, musical notation, images, and ethnographic commentaries. In order to present an exhibition-style site, I’m tying these digital objects together via the people, places, events, tunes and topics that they share or describe. &lt;br /&gt;
&lt;br /&gt;
To describe the relationships, I’m extending Islandora to use [http://www.cidoc-crm.org/frbr_inro.html FRBRoo], a vocabulary that combines the FRBR model with CIDOC-CRM, the object-oriented museum documentation ontology. The modules being developed will allow other researchers to create a structured, navigable digital repository of diverse object types that uses Islandora as an exhibition platform. &lt;br /&gt;
&lt;br /&gt;
== Our $50,000 Problem: Why Library School? ==&lt;br /&gt;
* Jennie Rose Halperin, jhalperin@mozilla.com, Mozilla Corporation&lt;br /&gt;
&lt;br /&gt;
57 library schools in the United States are churning out approximately 100 graduates per year, many with debt upwards of $50,000.  According to ONet, [http://www.inthelibrarywiththeleadpipe.org/2011/is-the-united-states-training-too-many-librarians-or-too-few-part-1/ 84% of library jobs in the US require an MLS]. The library profession is [http://dpeaflcio.org/programs-publications/issue-fact-sheets/library-workers-facts-figures/ 92% white and 82% female], and entry-level librarians can expect to make $32,500 per year.&lt;br /&gt;
&lt;br /&gt;
Contrasted with developers, who are almost [http://www.ncwit.org/blog/did-you-know-demographics-technical-women 90% male] and can expect to make [http://www.forbes.com/sites/jennagoudreau/2011/06/01/best-entry-level-jobs/ $70,000 in an entry-level position], these numbers are dismal.&lt;br /&gt;
&lt;br /&gt;
According to a recent survey, the top skill outgoing library students want to learn is “programming,” and yet many MLS programs still consider Microsoft Word an essential technology skill.&lt;br /&gt;
&lt;br /&gt;
What is going on here? Why do we accept this fate, where mostly female, debt-burdened professionals continue to be thrown into the workforce without the education their expensive degrees promised?&lt;br /&gt;
&lt;br /&gt;
As a community, we need to come together to stop this cycle. We need to provide better support and mentorship to diversify the profession, keep it relevant, and help librarianship move into the future it deserves.&lt;br /&gt;
&lt;br /&gt;
This talk will walk through the challenges of navigating a hostile employment environment as well as present models for better development and future state imagining.&lt;br /&gt;
&lt;br /&gt;
== No cataloging software? Need more than Dublin Core? No problem!: Experiences with CollectiveAccess ==&lt;br /&gt;
* [[User:SeanHendricks|Sean Q. Hendricks]], sqhendr@clemson.edu, Clemson University&lt;br /&gt;
* Rachel Wittmann, rwittma@clemson.edu, Clemson University&lt;br /&gt;
&lt;br /&gt;
Clemson University Libraries has implemented the open-source software CollectiveAccess for customized digital collection needs. CollectiveAccess is an open-source project with the goal of providing a flexible way to manage and publish museum and archival collections. Several applications are associated with the project; the most used are Providence (for cataloging and entering metadata) and Pawtucket (for displaying objects in a collection to the public). It ships with many installation profiles for existing library standards, such as Dublin Core, and there is a robust syntax for creating your own profiles to fit custom metadata schemas. Plus, the user interface allows you to modify the metadata profile quickly and easily.&lt;br /&gt;
&lt;br /&gt;
In this talk, we will discuss:&lt;br /&gt;
* Our experiences with installing Providence and creating an installation profile that satisfies the needs of many of the Clemson Libraries digital archiving processes. &lt;br /&gt;
* The stumbling blocks experienced in that process and how they were resolved.&lt;br /&gt;
* The available plugins sourcing widely used authorities, such as Library of Congress thesauri and GeoNames.org, and how they have been used by our projects. &lt;br /&gt;
* A brief overview of the export and import functions and also current workflow practices within Providence.&lt;br /&gt;
* Future plans &amp;amp; the role of CollectiveAccess at Clemson University Libraries&lt;br /&gt;
&lt;br /&gt;
== Getting ContentDM and Wordpress to Play Together ==&lt;br /&gt;
* [[User:SeanHendricks|Sean Q. Hendricks]], sqhendr@clemson.edu, Clemson University&lt;br /&gt;
&lt;br /&gt;
Clemson University Libraries has a very strong program for digitizing and archiving photographs, and the Digital Imaging team processes many hundreds of photographs every month. These images are managed using different methods, including ContentDM, a digital collection manager.&lt;br /&gt;
&lt;br /&gt;
ContentDM provides various methods for searching and displaying photographs, along with their metadata. However, recent initiatives have created a need to leverage those collections into exhibits displayed on other library-related websites, such as that of our Special Collections unit. Clemson Libraries has invested heavily in WordPress as our content management system of choice, and it seemed most efficient to avoid exporting and re-importing images into our WordPress sites in order to display exhibits.&lt;br /&gt;
&lt;br /&gt;
Fortunately, ContentDM provides an API for many of its functions, allowing the extraction of metadata and even rescaled images through URLs. This project has been developing a plugin for WordPress that integrates with ContentDM through shortcodes that WordPress editors can easily include in their content. These shortcodes allow editors to choose how many images to display, which images from which collections, thumbnail sizes, and so on, in different gallery styles. Future plans include integration with other plugins such as Fancybox and Masonry.&lt;br /&gt;
&lt;br /&gt;
In this presentation, I will demonstrate the current state of the plugin and discuss future plans. &lt;br /&gt;
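To make the shortcode mechanism above concrete, here is a minimal sketch of the attribute-extraction step. The shortcode name (cdm_gallery) and its attributes are invented stand-ins, not the plugin's actual interface; in WordPress itself this is done in PHP via the real add_shortcode() API.&lt;br /&gt;

```python
import re

# Hypothetical shortcode an editor might type into a page, e.g.:
#   [cdm_gallery collection="p123" count="6" size="thumb"]
SHORTCODE_RE = re.compile(r'\[cdm_gallery\s+([^\]]*)\]')
ATTR_RE = re.compile(r'(\w+)="([^"]*)"')

def parse_shortcode(text):
    """Return the attribute dict of the first cdm_gallery shortcode, or None."""
    match = SHORTCODE_RE.search(text)
    if match is None:
        return None
    return dict(ATTR_RE.findall(match.group(1)))

attrs = parse_shortcode('Intro [cdm_gallery collection="p123" count="6" size="thumb"] outro')
print(attrs)  # {'collection': 'p123', 'count': '6', 'size': 'thumb'}
```

The plugin would then use attributes like these to request metadata and rescaled images from the ContentDM API and render a gallery in place of the shortcode.&lt;br /&gt;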
&lt;br /&gt;
==Refinery — An open source locally deployable web platform for the analysis of large document collections==&lt;br /&gt;
 &lt;br /&gt;
* [[User:DaeilKim|Daeil Kim]], The New York Times, daeil.kim@nytimes.com&lt;br /&gt;
&lt;br /&gt;
Refinery is an open source web platform for the analysis of large unstructured document collections. It extracts meaningful semantic themes within documents, also known as &quot;topics,&quot; which can be thought of as word clouds composed of terms that highly co-occur with one another. Once this semantic index is formed, one can extract relevant documents related to these topics and further refine their contents through a summarization process that allows users to search for phrases that are relevant to them within the corpus. The goal of Refinery is to make this whole process easier and to provide some of the latest scalable versions of these learning algorithms in an intuitive web-based interface. Refinery is also meant to be run locally, thus bypassing the need for securing document collections over the internet. The talk will go through some of the technologies involved and include a demo of the app.&lt;br /&gt;
&lt;br /&gt;
For more info check out http://www.docrefinery.org.&lt;br /&gt;
&lt;br /&gt;
==Drupal 8 — Evolution &amp;amp; Revolution==&lt;br /&gt;
 &lt;br /&gt;
* [[User:Highermath|Cary Gordon]], The Cherry Hill Company, cgordon@chillco.com&lt;br /&gt;
&lt;br /&gt;
Drupal 8 is in beta and nearing release. Among its many features, it has notably become more developer-friendly through its adoption of the Symfony PHP framework, along with Symfony's outstanding set of libraries (like Guzzle) and tools (like Composer). And, in implementing the Twig theming system, it can begin to escape PHPTemplate. These moves also make it easier to create headless systems that use Angular.js and other systems for presentation, or even forgo presentation entirely.&lt;br /&gt;
&lt;br /&gt;
From the site-builder's perspective, Drupal 8 provides a much smoother experience and makes it easier to build and implement site recipes.&lt;br /&gt;
&lt;br /&gt;
==Using GameSalad to Build a Gamified Information Literacy Mobile App for Higher Education==&lt;br /&gt;
 &lt;br /&gt;
* [[User:StanBogdanov|Stanislav 'Stan' Bogdanov]],  stan@stanrb.com, Adelphi University and [http://bogliollc.com Boglio LLC]&lt;br /&gt;
&lt;br /&gt;
GameSalad is a popular tool for developing mobile and desktop games with little actual programming. In this presentation, Stan Bogdanov breaks down the development process he followed while building [https://github.com/stanrb/mobiLit mobiLit], a mobile app with the goal of being the first open-source gamified information literacy app to be used as part of a college-level information literacy curriculum. He will go through the basics of using GameSalad to create an app that can be easily customized by non-programmers and the instructional principles used to teach the material in a mobile medium. Stan will also go through two qualitative design studies he did on the app and discuss their results and the lessons learned from building mobiLit. The session will conclude with an overview of the next steps for the [https://github.com/stanrb/mobiLit mobiLit project].&lt;br /&gt;
&lt;br /&gt;
==The Impossible Search: Pulling data from multiple unknown sources==&lt;br /&gt;
 &lt;br /&gt;
* Riley Childs, no official affiliation (currently a Senior in High School at Charlotte United Christian Academy), rchilds (AT) cucawarriors.com &lt;br /&gt;
&lt;br /&gt;
It's easy to search data whose structure you know, but what if you need to pull in data from sources that don't share a standard structure? Searching community events alongside your standard catalog results is one example, but often the only way to pull in these events is through XML, JSON, (insert structured format here), or even just raw HTML. So how do you get that structure? That simple question is what makes this &quot;impossible.&quot; Defining and processing this structure takes a lot of manual labor, especially if the data you are pulling is just HTML, and every time you add data to the index you have to run it through a script that converts it into a format Solr or another index can use. This talk will focus on Solr, but the principles explained will apply to many other indexes.&lt;br /&gt;
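As a minimal sketch of the conversion step described above, the Python below normalizes two differently structured event sources into one flat schema suitable for a Solr update request. The input shapes are invented for illustration, and the field names follow Solr's dynamic-field naming convention (`_t`, `_dt`, `_s` suffixes).&lt;br /&gt;

```python
import json

# Target schema shared by every document, whatever its origin.
COMMON_FIELDS = ("id", "title_t", "date_dt", "source_s")

def from_json_feed(raw):
    """Map a (hypothetical) JSON event-feed record onto the common schema."""
    rec = json.loads(raw)
    return {
        "id": "jsonfeed-%s" % rec["guid"],
        "title_t": rec["name"].strip(),
        "date_dt": rec["start"],  # assumed already ISO 8601
        "source_s": "community-calendar",
    }

def from_scraped_row(cells):
    """Map a row scraped out of unstructured HTML (already split into
    cells by an earlier scraping step) onto the same schema."""
    title, month, day, year = cells
    return {
        "id": "scrape-%s-%s%s%s" % (title.lower().replace(" ", "-"), year, month, day),
        "title_t": title,
        "date_dt": "%s-%s-%sT00:00:00Z" % (year, month, day),
        "source_s": "library-site-scrape",
    }

docs = [
    from_json_feed('{"guid": "42", "name": " Book Sale ", "start": "2015-02-09T10:00:00Z"}'),
    from_scraped_row(["Open Mic Night", "02", "10", "2015"]),
]
# Every document now exposes the same fields, so one Solr update
# request (e.g. a POST to /update with this list as JSON) indexes both.
assert all(set(d) == set(COMMON_FIELDS) for d in docs)
```

The manual labor the abstract mentions lives in writing one such mapping function per source; the index itself stays uniform.&lt;br /&gt;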
&lt;br /&gt;
==What! You're Not Using Docker?==&lt;br /&gt;
 &lt;br /&gt;
* [[User:Highermath|Cary Gordon]], The Cherry Hill Company, cgordon@chillco.com&lt;br /&gt;
&lt;br /&gt;
Boring part: Docker[1] is a container system that provides benefits similar to virtualization with only a fraction of the overhead. Scintillating part: on a given piece of hardware, Docker can host four to six times as many service instances as systems such as Xen or VMware. But that's not all! Docker also makes it simple(r) to create transportable instances, so you can spin up development servers on your laptop.&lt;br /&gt;
&lt;br /&gt;
*[1]https://www.docker.com/&lt;br /&gt;
&lt;br /&gt;
== Video Accessibility, WebVTT, and Timed Text Track Tricks ==&lt;br /&gt;
&lt;br /&gt;
* Jason Ronallo, jronallo@gmail.com, NCSU Libraries&lt;br /&gt;
&lt;br /&gt;
Video on the Web presents new challenges and opportunities. How do you make your video more accessible to those with various disabilities and needs? I'll show you how. This presentation will focus on how to write and deliver captions, subtitles, audio descriptions, and timed metadata tracks for Web video using the WebVTT W3C standard. Encoding timed text tracks in this way opens up opportunities for new functionality on your websites beyond accessibility. The presentation will show some examples of the potential for using timed text tracks in creative ways. I'll cover all the HTML and JavaScript you will need to know as well as some of the CSS and other bits you could probably do without but are too fun to pass up.&lt;br /&gt;
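For readers new to the format, a WebVTT caption file is plain text: a WEBVTT header line, then cues, each consisting of an optional identifier, a start time, an arrow, an end time, and the caption text. The cue text and timings below are invented for illustration.&lt;br /&gt;

```
WEBVTT

1
00:00:01.000 --> 00:00:04.000
Welcome to the library's video tour.

2
00:00:04.500 --> 00:00:07.000
Captions like these are one kind of timed text track.
```

In HTML, the file is attached to a video with a &lt;track kind="captions" src="tour.vtt" srclang="en"&gt; element inside the &lt;video&gt; element; other kind values (subtitles, descriptions, metadata) enable the further uses the talk describes.&lt;br /&gt;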
&lt;br /&gt;
== Categorizing Records with Random Forests ==&lt;br /&gt;
 &lt;br /&gt;
* Geoffrey Boushey, geoffrey.boushey@ucsf.edu, UCSF Library&lt;br /&gt;
&lt;br /&gt;
Academic libraries are increasingly responsible for providing ingest, search, discovery, and analysis for data sets.  Emerging techniques from data science and machine learning can provide librarians and developers with an opportunity to generate new insights and services from these document collections.  This presentation will provide a brief overview of common machine learning classification techniques, then dive into a more detailed example using a random forest to assign keywords to research data sets.  The talk will emphasize the insight that can be gained from machine learning rather than the inner workings of the algorithms.  The overall goal of this presentation is to provide librarians and developers with the context to recognize an opportunity to apply machine learning categorization techniques at their home campuses and organizations.  &lt;br /&gt;
&lt;br /&gt;
== Data Science in Libraries ==&lt;br /&gt;
 &lt;br /&gt;
* Devon Smith, smithde@oclc.org, OCLC&lt;br /&gt;
&lt;br /&gt;
Data Science is increasing in buzz and hype. I'll go over what it is, what it isn't, and how it fits in libraries.&lt;br /&gt;
&lt;br /&gt;
== PDF metadata extraction for academic literature == &lt;br /&gt;
&lt;br /&gt;
* Kevin Savage, kevin.savage at mendeley.com, Mendeley&lt;br /&gt;
* Joyce Stack, joyce.stack at mendeley.com, Mendeley&lt;br /&gt;
&lt;br /&gt;
Mendeley recently added a &quot;document from file&quot; endpoint to its API, which attempts to extract metadata such as title and authors directly from PDF files. This talk will describe at a high level the machine learning methods we used, including how we measured and tuned our model. We will then delve more deeply into our stack, the tools we used, some of the things that didn't work, and why PDFs are the worst thing ever to compute over.&lt;br /&gt;
&lt;br /&gt;
== Giving Users What They Want: Record Grouping in VuFind ==&lt;br /&gt;
 &lt;br /&gt;
* Mark Noble,  mark@marmot.org, [//www.marmot.org Marmot Library Network]&lt;br /&gt;
&lt;br /&gt;
In 2013, Marmot did extensive usability studies with patrons to determine what was difficult in the catalog.  Many patrons had problems sifting through all of the various formats and editions of a title.  In 2014 we developed a method for [//mercury.marmot.org/Union/Search?lookfor=divergent grouping records] so only a single work is shown in search results and all formats and editions are listed under that work.  We will discuss our definition of a 'work' based on FRBR principles; combining metadata from MARC records with metadata from other sources like OverDrive; the technical details of record grouping; the design decisions made during implementation; and the reaction from users and staff.&lt;br /&gt;
&lt;br /&gt;
== Topic Space: a mobile augmented reality recommendation app ==&lt;br /&gt;
&lt;br /&gt;
* Jim Hahn, jimhahn@illinois.edu, University of Illinois at Urbana-Champaign&lt;br /&gt;
&lt;br /&gt;
The Topic Space module (http://minrvaproject.org/modules_topicspace.php ) was developed with an IMLS Sparks! Grant to investigate augmented reality technologies for in-library recommendations. The funding allowed for sustained university community collaboration by the University Library, the Graduate School of Library and Information Science, as well as graduate student programmers sourced from the Department of Computer Science. Collaborators designed app functionality and identified relevant open source libraries that could power optical character recognition (OCR) functionality from within the mobile phone.&lt;br /&gt;
&lt;br /&gt;
Topic Space allows a user to take a picture of an item's call number in the book stacks. The module will show the user other books that are relevant but not shelved nearby. It can also show users books that are normally shelved at that location but are currently checked out. Recommendations are based on Library of Congress subject headings and ILS circulation data, which indicate recommendation candidates based on total check-outs. &lt;br /&gt;
&lt;br /&gt;
Research questions included development of back end (server-side) pattern matching algorithms for recommendations, and a rapid formative evaluation of interface design that would provide optimal user experience for navigation of the book stacks as a context to recommendations.&lt;br /&gt;
&lt;br /&gt;
Along with the Topic Space native app, grant collaborators prototyped web based recommendations which could serve as a new way of providing readers advisory and “more like this” recommendations from discovery interfaces accessed through desktop browsers. Outcomes of the grant include the availability of the [https://play.google.com/store/apps/details?id=edu.illinois.ugl.minrva Topic Spaces module within Minrva app on the Android Play store] and an experimental [http://backbonejs.org/ Backbone.js] based [http://minrva-dev.library.illinois.edu Topic Space web app].&lt;br /&gt;
&lt;br /&gt;
== Leveling Up Your Git Workflow ==&lt;br /&gt;
&lt;br /&gt;
* Megan Kudzia, moneill@albion.edu, Albion College Library&lt;br /&gt;
* Kate Sears, eks11@albion.edu, Albion College Library&lt;br /&gt;
&lt;br /&gt;
Have you started experimenting with Git on your own, but now you need to include others in your projects? Learn from our mistakes! Transitioning from a one-person Git workflow and repo structure to one that includes multiple people (including student workers) is not for the faint of heart. We'll talk about why we decided to work this way, our path to developing a Git culture amongst ourselves, the conceptual and technical difficulties we've faced, what we learned, and where we are now. Also with pretty pictures (aka workflow drawings).&lt;br /&gt;
&lt;br /&gt;
== Drone Loaning Program: Because Laptops are so last century ==&lt;br /&gt;
&lt;br /&gt;
 * Uche Enwesi, uenwesi@umd.edu, University of Maryland Libraries&lt;br /&gt;
 * Francis Kayiwa, fkayiwa@umd.edu, University of Maryland Libraries&lt;br /&gt;
&lt;br /&gt;
At the University of Maryland, we are in the very early stages of looking into letting our student body get their hands on a drone. Yes, that's right: we will let students take out a drone for n hours to work on projects of their choosing. The talk will cover the logistics of getting a program of this sort from concept to &quot;Is the drone available?&quot;. If people sign waivers, we also promise not to crash the drone into code4lib attendees.&lt;br /&gt;
&lt;br /&gt;
== Got Git? Getting More Out of Your GitHub Repositories ==&lt;br /&gt;
&lt;br /&gt;
 * Terry Brady, twb27@georgetown.edu, Georgetown University Library&lt;br /&gt;
&lt;br /&gt;
This presentation will discuss how librarians, developers, and system administrators at Georgetown University are maximizing their use of the public and private GitHub repositories. &lt;br /&gt;
&lt;br /&gt;
In addition to all of the great benefits of using Git for code management, the GitHub interface provides a powerful set of tools to showcase a project and to keep your users informed of developments to your project.  These tools can assist with marketing and outreach - turning your code repository into a focus of conversation!&lt;br /&gt;
&lt;br /&gt;
* [http://georgetown-university-libraries.github.io/File-Analyzer/ Style-able Project Pages]&lt;br /&gt;
* [https://github.com/Georgetown-University-Libraries/File-Analyzer/wiki Project Wikis]&lt;br /&gt;
* [https://github.com/Georgetown-University-Libraries/Georgetown-University-Libraries-Code/releases Project Release Notes/Portfolios]&lt;br /&gt;
* [https://rawgit.com/Georgetown-University-Libraries/Georgetown-University-Libraries-Code/master/samples/GoogleSpreadsheetFilter.html Web Resources That Can Be Directly Requested]&lt;br /&gt;
* Gists for code sharing&lt;br /&gt;
* Private Repositories and Organizational Groups&lt;br /&gt;
* Pull Request Conversation Tracking&lt;br /&gt;
* Customized Issue management&lt;br /&gt;
&lt;br /&gt;
== Quick Wins for Every Department in the Library - File Analyzer! ==&lt;br /&gt;
&lt;br /&gt;
 * Terry Brady, twb27@georgetown.edu, Georgetown University Library&lt;br /&gt;
&lt;br /&gt;
The Georgetown University Library has customized workflows for nearly every department in our library with a single code base.&lt;br /&gt;
* Analyzing MARC records for the Cataloging department&lt;br /&gt;
* Transferring ILS invoices to the University Account System for the Acquisitions department &lt;br /&gt;
* Delivering patron fines to the Bursar’s office for the Access Services department&lt;br /&gt;
* Summarizing student worker timesheet data for the Finance department&lt;br /&gt;
* Validating COUNTER compliant reports for the Electronic Resources department&lt;br /&gt;
* Generating ingest packages for the Digital Services department&lt;br /&gt;
* Validating checksums for the Preservation department&lt;br /&gt;
&lt;br /&gt;
Learn how you can customize the [http://georgetown-university-libraries.github.io/File-Analyzer/ File Analyzer] to become a hero in your library!&lt;br /&gt;
&lt;br /&gt;
==The Geospatial World is Moving from Maps *on* the Web to Maps *of* the web. Libraries can too==&lt;br /&gt;
 &lt;br /&gt;
* [[User:Copystar|Mita Williams]], mita@uwindsor.ca, User Experience Librarian, University of Windsor&lt;br /&gt;
&lt;br /&gt;
The transition from paper maps to digital ones changed much more than the maps themselves; it changed the very foundation of how we work and how we find each other. Now maps are transforming again. The geospatial world is moving away from GIS systems that are institutionally focused, expensive, and feature-burdened, and that bind data into complicated, demanding, user-hostile interfaces. This transition from digital to web-based geospatial tools has brought growth and development in new forms of map-based investigative journalism, activism, scholarship, and business ventures. This talk will highlight the conditions and strategies that made these changes possible, as a means to draw a path that librarians, through our own work, may follow, dragons notwithstanding. &lt;br /&gt;
&lt;br /&gt;
== Building Your Own Federated Search ==&lt;br /&gt;
&lt;br /&gt;
* Rich Trott, Richard.Trott@ucsf.edu, UC San Francisco&lt;br /&gt;
&lt;br /&gt;
Advances in modern browsers have created some interesting possibilities for federated search. This presentation will cover common techniques and pitfalls in building a federated search. We will discuss what principles guided our decisions when implementing our own federated search. We will show tools we've built and our findings from building and using experimental prototypes.&lt;br /&gt;
&lt;br /&gt;
Your higher education institution likely offers dozens of online resources for educators, students, researchers, and the public. And each of these online resources likely has its own search tool. But users can't be expected to search in dozens of different interfaces to find what they're looking for. A typical solution for this issue is federated search. &lt;br /&gt;
&lt;br /&gt;
==  Indexing Linked Data with LDPath ==&lt;br /&gt;
&lt;br /&gt;
* Chris Beer, cabeer@stanford.edu, Stanford University Libraries&lt;br /&gt;
&lt;br /&gt;
LDPath [1] is a simple query language for indexing linked open data, with support for caching, content negotiation, and integration with non-RDF endpoints. This talk will demonstrate the features and potential of the language and framework by indexing a resource with links into id.loc.gov, viaf.org, geonames.org, etc., to build an application-ready document.&lt;br /&gt;
&lt;br /&gt;
[1] http://marmotta.apache.org/ldpath/language.html&lt;br /&gt;
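As a taste of the language, a minimal LDPath program might look like the sketch below. The field names are invented, and it assumes the common rdfs/skos/xsd prefixes are predefined (as in Marmotta's implementation); otherwise they would need @prefix declarations.&lt;br /&gt;

```
label   = rdfs:label :: xsd:string ;
broader = skos:broader / skos:prefLabel :: xsd:string ;
```

Each line defines one index field: the path on the right is evaluated from the starting resource, with the `/` operator traversing links (the framework dereferences and caches remote resources as needed), and `::` declares the type of the resulting values.&lt;br /&gt;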
&lt;br /&gt;
== Show Me the Money: Integrating an LMS with Payment Providers ==&lt;br /&gt;
 &lt;br /&gt;
* Josh Weisman,  Josh.Weisman@exlibrisgroup.com, Development Director-Resources Management, Ex Libris Group&lt;br /&gt;
&lt;br /&gt;
In order to provide an easy and convenient way for patrons to pay fines, we are exploring ways to integrate the library management system with online payment providers such as PayPal. With many LMS systems being designed and developed for the cloud, we should be able to provide the frictionless user experience our patrons have come to expect from online transactions. In this session we'll discuss strategies for integration and review a sample application which uses REST APIs from a library management system to integrate with PayPal.&lt;br /&gt;
&lt;br /&gt;
== Shibboleth Federated Authentication for Library Applications ==&lt;br /&gt;
&lt;br /&gt;
* Scott Fisher, scott.fisher@ucop.edu, California Digital Library&lt;br /&gt;
* Ken Weiss, ken.weiss@ucop.edu, California Digital Library&lt;br /&gt;
&lt;br /&gt;
Shibboleth is the most widely-used method to provide single-sign-on authentication to academic applications where users come from many different institutions. Shibboleth, the InCommon education and research trust framework, and the SAML protocol comprise a very powerful - but very complicated - solution to this very complicated problem. Scott and Ken have implemented Shibboleth for multiple library applications. They will share their understanding of the good, the bad, and the underlying spaghetti that makes it all work. Ken will discuss some of the technical aspects of the solution, touching on optimal and non-optimal use cases, administrative challenges, and authorization concerns. Scott will describe the implementation pattern for multi-institution single-sign-on that the California Digital Library has evolved, using the recently released Dash application (http://dash.cdlib.org) as an example.&lt;br /&gt;
&lt;br /&gt;
==Scientific Data: A Needs Assessment Journey==&lt;br /&gt;
 &lt;br /&gt;
*[[User:VickySteeves| Vicky Steeves]], vsteeves@amnh.org, American Museum of Natural History&lt;br /&gt;
&lt;br /&gt;
While surveying digital research and collections data in the research science divisions at the American Museum of Natural History in NYC (as part of my [http://ndsr.nycdigital.org/ National Digital Stewardship Residency] project), I have come across the big data hogs (genome sequencing and CT scanning) and the little pieces of data (images, publications), all equally important not only to scientific discovery but also as nodes in the history of science. &lt;br /&gt;
&lt;br /&gt;
In this session, I will discuss the development of my needs assessment surveys for scientific datasets and the interview process with Museum curators and researchers as background, seguing into an explanation of the results. I will then combine my findings into preliminary selection criteria for choosing tools for digital preservation and management unique to scientific datasets. This will open a discussion on emerging standards, tools, and technologies in big data specific to research science. &lt;br /&gt;
&lt;br /&gt;
I will conclude with preliminary findings on emerging technology that can be used to address concerns surrounding the management and digital preservation of these data. I am hoping the Q&amp;amp;A session can be used both to answer questions about my project and to let you (the larger tech-savvy library community) discuss the tools I’ve touched on in this talk. &lt;br /&gt;
&lt;br /&gt;
== Feminist Human Computer Interaction (HCI) in Library Software ==&lt;br /&gt;
 &lt;br /&gt;
* Bess Sadler,  bess@stanford.edu, Stanford University Libraries&lt;br /&gt;
&lt;br /&gt;
Libraries are not neutral repositories of knowledge. Library classification systems and search technologies tend to reflect the inequalities, biases, ethnocentrism, and power imbalances of the societies in which they are built [1]. How might we better resist these tendencies in the library software we create? This talk will examine some qualities of feminist HCI (pluralism, self-disclosure, participation, ecology, advocacy, and embodiment) [2] through the lens of library software. &lt;br /&gt;
&lt;br /&gt;
[1] Olson, Hope A. (2002). The Power to Name: Locating the Limits of Subject Representation in Libraries. Dordrecht, The Netherlands: Kluwer Academic Publishers.&lt;br /&gt;
&lt;br /&gt;
[2] Bardzell, Shaowen. Feminist HCI: Taking Stock and Outlining an Agenda for Design. CHI 2010: HCI For All. http://dmrussell.net/CHI2010/docs/p1301.pdf&lt;br /&gt;
&lt;br /&gt;
== Heiðrún: DPLA's Metadata Harvesting, Mapping and Enhancement System ==&lt;br /&gt;
&lt;br /&gt;
* Audrey Altman, audrey at dp.la, Digital Public Library of America&lt;br /&gt;
* Gretchen Gueguen, gretchen at dp.la, Digital Public Library of America&lt;br /&gt;
* Mark Breedlove, mb at dp.la, Digital Public Library of America&lt;br /&gt;
&lt;br /&gt;
The Digital Public Library of America aggregates metadata for over 8 million objects from more than 24 direct partners, or Hubs, using its Metadata Application Profile (MAP), an RDF metadata application profile based on the Europeana Data Model. After working with the initial system for harvesting, mapping and enhancing our Hubs’ metadata for a year, we realized that it was inadequate for working with data at this scale. There were architectural issues; it was opaque to non-developer and partner staff; the tools for quality assurance and analysis were inadequate; and the system was unaware that it was working with RDF data. As the network of Hubs expanded and we ingested more metadata, it became harder and harder to know when or why a harvest, a mapping task, or an enrichment went wrong. &lt;br /&gt;
&lt;br /&gt;
The DPLA Content and Technology teams decided to develop a new system from the ground up to address those problems. Development of Heidrun, the internal version of the new system, started in October 2014. Heidrun’s goals are to make it easier for us to harvest and map metadata from various sources and in a variety of schemas to the DPLA MAP, to better enrich that metadata using external data sources, and to actively involve our partners in the ingestion process through access to better QA tools. Heidrun and its componentry are built on Ruby on Rails, Blacklight, and ActiveTriples. Our presentation will give some background on our design principles and processes used during development, the architecture of the system, and its functionality. We plan to release a version of Heidrun and its components as a generalized metadata aggregation system for use by DPLA Hubs and others working to aggregate cultural heritage metadata.&lt;br /&gt;
&lt;br /&gt;
== OS or GTFO: Program or Perish ==&lt;br /&gt;
*Tessa Fallon, tessa.fallon@gmail.com&lt;br /&gt;
&lt;br /&gt;
Description TBD&lt;br /&gt;
&lt;br /&gt;
== Creating Dynamic— and Cheap!— Digital Displays with HTML 5 Authoring Software ==&lt;br /&gt;
* Chris Woodall, cmwoodall@salisbury.edu, Salisbury University Libraries&lt;br /&gt;
Would your library like to have large digital signage that displays dynamic information such as library hours, weather, room availability, and more? Have you looked into purchasing large digital signage, only to be turned off by the high price tag and lack of customization available with commercial solutions? Our library has developed a cheap and effective alternative to these systems using HTML 5 authoring software, a large TV, and freely-available APIs from Google, Springshare, and others. At this session, you’ll learn about the system that we have in place for displaying dynamic and easily-updatable information on our library’s large digital display, and how you can easily create something similar for your library.&lt;br /&gt;
&lt;br /&gt;
== REPOX: Metadata Blender ==&lt;br /&gt;
 &lt;br /&gt;
* John Mignault, jmignault@metro.org, Empire State Digital Network&lt;br /&gt;
&lt;br /&gt;
As the number of hubs providing metadata to the Digital Public Library of America has grown, many of them have adopted REPOX, a tool originally created for the Europeana project, to aggregate disparate metadata feeds and transform them into formats suitable for ingest into DPLA. The Empire State Digital Network, the forthcoming DPLA service hub for New York State, is using it to prepare for our first ingest into DPLA in early 2015. We'll take a look at REPOX, its capabilities, and how it can be useful for ingesting and transforming metadata, and also discuss some things we've learned in massaging widely varied metadata feeds.&lt;br /&gt;
&lt;br /&gt;
== Beyond Open Source ==&lt;br /&gt;
&lt;br /&gt;
* Jason Casden, jmcasden@ncsu.edu, NCSU Libraries&lt;br /&gt;
* Bret Davidson, bddavids@ncsu.edu, NCSU Libraries&lt;br /&gt;
&lt;br /&gt;
The Code4Lib community has produced an increasingly impressive collection of open source software over the last decade, but much of this creative work remains out of reach for large portions of the library community. Do the relatively privileged institutions represented by a majority of Code4Lib participants have a professional responsibility to support the adoption of their innovations?&lt;br /&gt;
&lt;br /&gt;
Drawing from old and new software packaging and distribution approaches (from freeware to Docker), we will propose extending the open source software values of collaboration and transparency to include the wide and affordable distribution of software. We believe this will not only simplify the process of sharing our applications within the Code4Lib community, but also make it possible for less well-resourced institutions to actually use our software. We will identify areas of need, present our experiences with the users of our own open source projects, discuss our attempts to go beyond open source, and make an argument for the internal value of supporting and encouraging a vibrant library ecosystem.&lt;br /&gt;
&lt;br /&gt;
[[Category:Code4Lib2015]] &lt;br /&gt;
[[Category:Talk Proposals]]&lt;br /&gt;
&lt;br /&gt;
== Making It Work: Problem Solving Using Open Source at a Small Academic Library ==&lt;br /&gt;
 &lt;br /&gt;
* Adam Strohm, astrohm@iit.edu, Illinois Institute of Technology&lt;br /&gt;
* Max King, mking9@iit.edu, Illinois Institute of Technology&lt;br /&gt;
&lt;br /&gt;
The Illinois Institute of Technology campus was added to the National Register of Historic Places in 2005, and contains a building, Mies van der Rohe's S.R. Crown Hall, that was named a National Historic Landmark in 2001. Creating a digital resource that can adequately showcase the campus and its architecture is challenge enough in and of itself, but doing so as a two-person team of relative newcomers, at a university library without dedicated programmers on staff, ups the ante considerably.&lt;br /&gt;
The challenges of technical know-how, staff time, and funding are nothing new to anyone working on digital projects at a university library, and are amplified when doing so at a smaller institution. This talk covers the conception, development, and design of the campus map site that was built, concentrating on the problem-solving strategies developed to cope with limited technical and financial resources.&lt;br /&gt;
We'll talk about our approach to development with Open Source software, including Omeka, along with the Neatline and Simile Timeline plugins. We'll also discuss the juggling act of designing for mobile mapping functionality without sacrificing desktop design, weighing the costs of increased functionality versus our ability to time-effectively include that functionality, and the challenge of building a site that could be developed iteratively, with an eye towards future enhancement and sustainability. Finally, we’ll provide recommendations for other librarians at smaller institutions for their own efforts at digital development.&lt;br /&gt;
&lt;br /&gt;
== Recording Digitization History: Metadata Options for the Process History of Audiovisual Materials ==&lt;br /&gt;
 &lt;br /&gt;
* Peggy Griesinger, peggy_griesinger@moma.org, Museum of Modern Art&lt;br /&gt;
&lt;br /&gt;
The Museum of Modern Art has amassed a large collection of audiovisual materials over its many decades of existence. In order to preserve these materials, much of the audiovisual collection has been digitized. This is a complex process involving numerous steps and devices, and the methods used for digitization can have an effect on the quality of the file that is preserved. Therefore, knowing exactly how something was digitized is critical for future stewards of these objects to be able to properly care for and preserve them. However, detailed technical information about the processes involved in the digitization of audiovisual materials is not defined explicitly in most metadata schemas used for audiovisual materials. In order to record process history using existing metadata standards, some level of creativity is required to allow existing standards to express this information.&lt;br /&gt;
&lt;br /&gt;
This talk will detail different metadata standards, including PBCore, PREMIS, and reVTMD, that can be implemented as methods of recording this information. Specifically, the talk will examine efforts to integrate this metadata into the Museum of Modern Art’s new digital repository, the DRMC. This talk will provide background on the DRMC as well as MoMA’s specific institutional needs for process history metadata, then discuss different metadata implementations we have considered to document process history.&lt;br /&gt;
&lt;br /&gt;
== Pig Kisses Elephant: Building Research Data Services for Web Archives ==&lt;br /&gt;
 &lt;br /&gt;
* Jefferson Bailey,  jefferson@archive.org, Internet Archive&lt;br /&gt;
* Vinay Goel, vinay@archive.org, Internet Archive&lt;br /&gt;
&lt;br /&gt;
More and more libraries and archives are creating web archiving programs. For both new and established programs, these archives can consist of hundreds of thousands, if not millions, of born-digital resources within a single collection; as such, they are ideally suited for large-scale computational study and analysis. Yet current access methods for web archives consist largely of browsing the archived web in the same manner as browsing the live web, and the size of these collections and the complexity of the WARC format can make aggregate analysis difficult. This talk will describe a project to create new ways for users and researchers to access and study web archives by offering extracted and post-processed datasets derived from web collections. Working with the 325+ institutions and their 2600+ collections within the Archive-It service, the Internet Archive is building methods to deliver a variety of datasets culled from collections of web content, including extracted metadata packaged in JSON, longitudinal link graph data, named entities, and other types of data. The talk will cover the technical details of building dataset production pipelines with Apache Pig, Hadoop, and tools like Stanford NER, the programmatic aspects of building data services for archives and researchers, and ongoing work to create new ways to access and study web archives.&lt;br /&gt;
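&lt;br /&gt;
As a rough illustration of the link-graph dataset idea described above, the sketch below rolls page-level outlinks up into weighted host-to-host edges. The JSON record shape (fields named &quot;url&quot; and &quot;links&quot;) is an assumption for illustration only, not the Archive-It data format:&lt;br /&gt;

```python
import json
from collections import Counter
from urllib.parse import urlparse

def host_link_graph(json_lines):
    """Aggregate page-level outlinks (one JSON record per line)
    into weighted host-to-host edges, skipping self-links."""
    edges = Counter()
    for line in json_lines:
        record = json.loads(line)
        source = urlparse(record["url"]).netloc
        for link in record["links"]:
            target = urlparse(link).netloc
            if target and target != source:
                edges[(source, target)] += 1
    return edges

# Two invented page records from the same host
lines = [
    '{"url": "http://a.org/p1", "links": ["http://b.org/x", "http://a.org/p2"]}',
    '{"url": "http://a.org/p2", "links": ["http://b.org/y"]}',
]
print(host_link_graph(lines))  # → Counter({('a.org', 'b.org'): 2})
```

In a production pipeline the same aggregation would be expressed as Pig/Hadoop jobs over millions of records rather than an in-memory loop.&lt;br /&gt;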
&lt;br /&gt;
== Awesome Pi, LOL! ==&lt;br /&gt;
&lt;br /&gt;
* Matt Connolly, mconnolly@cornell.edu, Cornell University Library&lt;br /&gt;
* Jennifer Colt, jrc88@cornell.edu, Cornell University Library&lt;br /&gt;
&lt;br /&gt;
Inspired by Harvard Library Lab’s “Awesome Box” project, Cornell’s Library Outside the Library (LOL) group is piloting a more automated approach to letting our users tell us which materials they find particularly stunning. Armed with a Raspberry Pi, a barcode scanner, and some bits of kit that flash and glow, we have ventured into the foreign world of hardware development. This talk will discuss what it’s like for software developers and designers to get their hands dirty, how patrons are reacting to the Awesomizer, and LOL’s not-afraid-to-fail philosophy of experimentation.&lt;br /&gt;
&lt;br /&gt;
== You Gotta Keep 'em Separated: The Case for &amp;quot;Bento Box&amp;quot; Discovery Interfaces ==&lt;br /&gt;
 &lt;br /&gt;
* Jason Thomale,  jason.thomale@unt.edu, University of North Texas Libraries&lt;br /&gt;
&lt;br /&gt;
I know, I know--proposing a talk about Resource Discovery is like, ''so'' 2010.&lt;br /&gt;
&lt;br /&gt;
The thing is, practically all of us--in academic libraries at least--have a similar setup for discovery, with just a few variations, and so talking about it still seems useful. Stop me if this sounds familiar. You've got a single search box on the library homepage as a starting point for discovery. And it's probably a tabbed affair, with an option for searching the catalog for books, an option for searching a discovery service for articles, an option for searching databases, and maybe a few others. Maybe you have an option to search everything at once--probably the default, if you have it. And, if you're a crazy hepcat, maybe you ''only'' have your one search that searches everything, with no tabs.&lt;br /&gt;
&lt;br /&gt;
Now, the question is, for your &amp;quot;everything&amp;quot; search, are you doing a combined list of results, or are you doing it bento-box style, with a short results list from each category displayed in its own compartment?&lt;br /&gt;
&lt;br /&gt;
At UNT, we've been holding off on implementing an &amp;quot;everything&amp;quot; search, for various reasons. One reason is that the evidence for either style hasn't been very clear. There's this persistent paradox that we just can't reconcile: users tell us, through word and action, that they prefer searching Google, yet, libraries aren't Google, and there are valid design reasons why we shouldn't try to oversimplify our discovery interfaces to be like Google. And there's user data that supports both sides.&lt;br /&gt;
&lt;br /&gt;
Holding off on making this decision has granted us 2 years of data on how people use our tabbed search interface that does ''not'' include an &amp;quot;everything&amp;quot; search. Recently I conducted a thorough analysis of this data--specifically the usage and query data for our catalog and discovery system (Summon). And I think it helps make the case for a bento box style discovery interface. To be clear, it isn't exactly the smoking gun that I was hoping for, but the picture it paints is, I think, telling. At the very least, it points away from a combined-results approach.&lt;br /&gt;
&lt;br /&gt;
I'm proposing a talk discussing the data we've collected, the trends we've seen, and what I think it all means--plus other reasons that we're jumping on the &amp;quot;bento box&amp;quot; discovery bandwagon and why I think &amp;quot;bento box&amp;quot; is at this point the path that least sells our souls.&lt;br /&gt;
&lt;br /&gt;
== Don’t know about you, but I’m feeling like SHA-2!: Checksumming with Taylor Swift ==&lt;br /&gt;
 &lt;br /&gt;
* Ashley Blewer!, ashley.blewer@gmail.com&lt;br /&gt;
&lt;br /&gt;
Checksum technology is used all over the place, from git commits to authenticating Linux packages. In the digital preservation field it is most commonly used to monitor materials in storage for changes that occur over time, or to verify files during transmission and duplication. But do you even checksum, bro? I want this talk to move checksums from a position of mysterious macho jargon to something everyone can understand and want to use. I think a lot of people have heard of checksums but don’t know where to begin when it comes to actually using them at their institution. And cryptography is hella intimidating! This talk will cover what checksums are, how they can be integrated into a library or archival workflow, protecting collections requiring additional levels of security, algorithms used to verify file fixity and how they differ, and other aspects of cryptographic technology. Oh, and please note that all points in this talk will be emphasized or lightly performed through Taylor Swift lyrics. Seriously, this talk will consist of at least 50% Taylor Swift. Can you, like, even?&lt;br /&gt;
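&lt;br /&gt;
For the uninitiated: a fixity check boils down to hashing a file and comparing the digest to one recorded earlier. A minimal Python sketch using the standard-library hashlib (the paths and recorded digests here are illustrative, not from any particular workflow):&lt;br /&gt;

```python
import hashlib

def sha256_checksum(path, chunk_size=65536):
    """Compute a file's SHA-256 digest, reading in chunks so large
    preservation masters never need to fit in memory at once."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_fixity(path, recorded_digest):
    """Re-hash the file and compare against the checksum recorded
    at ingest; False means the file has changed (or the record is wrong)."""
    return sha256_checksum(path) == recorded_digest
```

Run on a schedule over a storage volume, the second function is the whole fixity-monitoring idea in one line.&lt;br /&gt;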
&lt;br /&gt;
== Level Up Your Coding with Code Club (yes, you can talk about it) ==&lt;br /&gt;
&lt;br /&gt;
* Coral Sheldon-Hess, coral@sheldon-hess.org&lt;br /&gt;
&lt;br /&gt;
Reading code is a necessary part of becoming a better developer. It gives you more experience and more insight into How Things Are (or Aren't) Done; it builds your intuition about how to solve problems with code; and it increases your confidence that you, too, can tackle whatever technological problems you're facing.&lt;br /&gt;
&lt;br /&gt;
But you don't have to read code alone! (Which is good. It's really not fun to read code alone.) &lt;br /&gt;
&lt;br /&gt;
In late 2014, a group of librarians formed two Code Clubs, inspired by [http://bloggytoons.com/code-club/ this talk by Saron] (of Bloggytoons fame). I'd like to tell you about how we've structured our Code Clubs, what has gone well, what we've learned, and what you need to do to form your own Code Club. I'll share a list of the codebases we've looked at, too, to help you get your own Code Club off the ground! &lt;br /&gt;
&lt;br /&gt;
== The Growth of a Programmer ==&lt;br /&gt;
 &lt;br /&gt;
* [[User:jgo | Joshua Gomez]], Getty Research Institute, jgomez@getty.edu&lt;br /&gt;
&lt;br /&gt;
Like practitioners of other creative endeavors, software developers can experience periods of great productivity or find themselves in a rut. After contemplating the alternating periods in my own career, I've noticed several factors that have affected my own professional growth and happiness, including mentorship, structure, community, teamwork, environment, and formal education. Not all of the factors need to be present at all times, but some mixture of them is critical for continued growth. In this talk, I will articulate these factors, discuss how they can affect a developer's career, and explain how they can be sought out when missing. This talk is aimed at both new developers looking to strike their own path and the veterans who lead or mentor them.&lt;br /&gt;
&lt;br /&gt;
== Developing a Fedora 4.0 Content Model for Disk Images ==&lt;br /&gt;
&lt;br /&gt;
* Matthew Farrell, matthew.j.farrell@duke.edu, Duke University Libraries&lt;br /&gt;
* Alexandra Chassanoff, achass@email.unc.edu, BitCurator Access Project Manager&lt;br /&gt;
&lt;br /&gt;
As the acquisition of born-digital materials grows, institutions are seeking methods to facilitate easy ingest into their repositories and provide access to disk images and files derived or extracted from disk images. In this session, we describe our development of a Fedora 4.0 content model for disk images, including acceptable image file formats and the rationale behind those choices. We will also discuss efforts to integrate the disk image content model into the BitCurator Access environment. Unlike generalized, format-agnostic content models that might treat the disk image as a generic bitstream, a content model designed for disk images enables expression of relationships among associated content in the collection, such as files extracted from images and other born-digital and digitized material associated with the same creator. It also enables capture of file-system attributes such as file paths, timestamps, and whether files are allocated or deleted. Further, a disk image content model suggests further steps repositories can take to transform and re-use associated metadata generated during the creation and forensic analysis of the disk image.&lt;br /&gt;
&lt;br /&gt;
== Data acquisition and publishing tools in R ==&lt;br /&gt;
&lt;br /&gt;
* Scott Chamberlain,  scott@ropensci.org, rOpenSci/UC Berkeley - first-time presenter&lt;br /&gt;
&lt;br /&gt;
R is an open source programming environment that is widely used among researchers in many fields. R is powerful because it's free, increasingly robust, and facilitates reproducible research, an increasingly sought-after goal in academia. Although tools for data manipulation/visualization/analysis are well developed in R, data acquisition and publishing tools are not. rOpenSci is a collaborative effort to create the tools necessary to complete the reproducible research workflow. This presentation discusses the need for these tools, with examples including interacting with the repositories Mendeley, Dryad, DataONE, and Figshare. In addition, we are building tools for searching scholarly metadata and acquiring the full text of open access articles in a standardized way across metadata providers (e.g., Crossref, DataCite, DPLA) and publishers (e.g., PLOS, PeerJ, BMC, PubMed). Last, we are building out tools for reading and writing data in Ecological Metadata Language (EML).&lt;br /&gt;
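&lt;br /&gt;
By way of illustration (in Python rather than R), a scholarly metadata search against a provider like Crossref is just a parameterized HTTP request to its public REST API. This hypothetical helper only builds the request URL, leaving the fetch and JSON parsing to the caller:&lt;br /&gt;

```python
from urllib.parse import urlencode

# Public Crossref REST API endpoint for bibliographic works
CROSSREF_WORKS = "https://api.crossref.org/works"

def crossref_query_url(query, rows=5):
    """Build a Crossref free-text metadata search URL; `rows`
    caps how many records the API returns."""
    return CROSSREF_WORKS + "?" + urlencode({"query": query, "rows": rows})

print(crossref_query_url("ecological metadata language"))
# → https://api.crossref.org/works?query=ecological+metadata+language&rows=5
```

Wrapping each provider's URL scheme and response shape behind a common function signature is the standardization idea in miniature.&lt;br /&gt;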
&lt;br /&gt;
== SPLUNK: Log File Analysis ==&lt;br /&gt;
&lt;br /&gt;
* Jim LeFager, jlefager@depaul.edu, DePaul University Library&lt;br /&gt;
&lt;br /&gt;
DePaul University Library took over monitoring and maintaining the library's EZproxy servers this past year. Using Splunk, a machine data analysis tool, we are able to gather information and statistics on our electronic resource usage in addition to monitoring the servers. Splunk can collect, analyze, and visualize log files and other machine data in real time, which has allowed us to gather real-time usage statistics for our electronic resources, filtered by multiple facets such as IP range and group membership (student, faculty), so that we can see who is accessing our resources and from where. Splunk also allows our library to query our data, build rich custom dashboards, and create alerts that trigger when certain conditions are met, such as error codes, sending an email to a group of users. We will be leveraging Splunk to monitor all library web applications going forward.&lt;br /&gt;
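&lt;br /&gt;
The kind of aggregation a log analysis tool performs can be sketched in a few lines of Python: tally requests per client IP from common-log-format lines, where the IP is the first whitespace-delimited field. The sample log lines below are invented, not real EZproxy output:&lt;br /&gt;

```python
from collections import Counter

def count_by_ip(log_lines):
    """Tally requests per client IP from common-log-format lines
    (the client IP is the first whitespace-delimited field)."""
    return Counter(line.split()[0] for line in log_lines if line.strip())

# Invented sample lines in common log format
logs = [
    '140.192.1.10 - u1 [01/Feb/2015:10:00:00 -0600] "GET /login?url=x HTTP/1.1" 200 512',
    '140.192.1.10 - u1 [01/Feb/2015:10:00:05 -0600] "GET /connect HTTP/1.1" 200 2048',
    '10.0.0.7 - u2 [01/Feb/2015:10:01:00 -0600] "GET /menu HTTP/1.1" 200 1024',
]
print(count_by_ip(logs).most_common(1))  # → [('140.192.1.10', 2)]
```

A tool like Splunk adds real-time indexing, faceting, dashboards, and alerting on top of exactly this sort of counting.&lt;br /&gt;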
&lt;br /&gt;
== Your code does not exist in a vacuum ==&lt;br /&gt;
* Becky Yoose, yoosebec at grinnell dot edu, Grinnell College&lt;br /&gt;
(Done a lightning talk, MC duties, but have not presented a prepared talk)&lt;br /&gt;
&lt;br /&gt;
“If you have something to say, then say it in code…” - Sebastian Hammer, code4lib 2009&lt;br /&gt;
&lt;br /&gt;
In its 10-year run, code4lib has covered the spectrum of libtech development, from search to repositories to interfaces. However, during this time there has been little discussion of one little fact about development: code does not exist in a vacuum. &lt;br /&gt;
&lt;br /&gt;
Like the comment above, code has something to say. A person’s or organization’s culture and beliefs influence code at every step of the development cycle. What development method you use, your tools, programming languages, licenses: everything is interconnected with and influenced by the philosophies, economics, social structures, and cultural beliefs of the developer and their organization/community.&lt;br /&gt;
&lt;br /&gt;
This talk will discuss these interconnections and influences when one develops code for libraries, focusing on several development practices (such as “Fail Fast, Fail Often” and Agile) and licensing choices (such as open source) that libtech has either tried to model or incorporate into mainstream libtech practices. It’ll only scratch the surface of the many influences present in libtech development, but it will give folks a starting point to further investigate these connections at their own organizations and as a community as a whole.&lt;br /&gt;
&lt;br /&gt;
tl;dr - this will be a messy theoretical talk about technology and libraries. No shiny code slides, no live demos. You might come out of this talk feeling uncomfortable. Your code does not exist in a vacuum. Then again, you don’t exist in a vacuum either.&lt;br /&gt;
&lt;br /&gt;
== The Metadata Hopper: Mapping and Merging Metadata Standards for Simple, User-Friendly Access ==&lt;br /&gt;
&lt;br /&gt;
* Tracy Seneca, tjseneca@uic.edu, University of Illinois at Chicago&lt;br /&gt;
* Esther Verreau: verreau1@uic.edu, University of Illinois at Chicago&lt;br /&gt;
&lt;br /&gt;
The Chicago Collections Consortium: 15 institutions and growing!  8 distinct EAD standards! At least 3 permutations of MARC, and we lost count of the varieties of custom CONTENTdm image collections.  Not to mention the 14,730 unique subject terms, nearly all of which lead our poor end-users to exactly one organization's content. &lt;br /&gt;
&lt;br /&gt;
All large content aggregation projects have faced this challenge, and there are a few emerging tools to help us wrangle disparate metadata into new contexts.  The Metadata Hopper is one such tool. The Metadata Hopper enables archivists to map their local metadata standards to standardized deposit records, and tags those materials using a shared vocabulary, integrating them into a user-friendly portal without disrupting local practices. In last year's Code4Lib lightning talk we described the challenges that the Chicago Collections Consortium faces in creating shared, in-depth access to archival and digital collections about Chicago history and culture across CCC member organizations. This year, thanks to the Andrew W. Mellon Foundation, we have a working Django application to demonstrate.  In this talk we'll discuss the design that enables multiple layers of flexibility, from the ability to accept a variety of metadata standards to designing for an open source audience.&lt;br /&gt;
&lt;br /&gt;
http://chicagocollectionsconsortium.org&lt;br /&gt;
&lt;br /&gt;
== Programmers are not projects: lessons learned from managing humans ==&lt;br /&gt;
&lt;br /&gt;
* Erin White, erwhite@vcu.edu, Virginia Commonwealth University - first-time presenter&lt;br /&gt;
&lt;br /&gt;
Managing projects is one thing, but managing people is another. Whether we’re hired as managers or grow “organically” into management roles, sometimes technical people end up leading technical teams (gasp!). I’ll talk about lessons I’ve learned about hiring, retaining, and working long-term and day-to-day with highly tech-competent humans. I’ll also talk about navigating the politics of libraryland, juggling different types of projects, and working with constrained budgets to make good things and keep talented people engaged.&lt;br /&gt;
&lt;br /&gt;
== Practical Strategies for Picking Low-Hanging Fruits to Improve Your Library's Web Usability and UX ==&lt;br /&gt;
&lt;br /&gt;
* Bohyun Kim, bkim@hshsl.umaryland.edu, University of Maryland, Baltimore&lt;br /&gt;
&lt;br /&gt;
Have you ever tried to fix an obvious (to you at least!) problem in Web usability or UX (user experience) only to face strong resistance from the library staff? Are you a strong advocate for making library resources, systems, services, and space as usable as possible, but do you often find yourself struggling to get the point across and/or obtain the crucial buy-in from colleagues and administrators? &lt;br /&gt;
&lt;br /&gt;
There is no shortage of Web usability and UX guidelines. But applying them to a library and implementing desired changes often involve a long and slow process. To tackle this issue, this talk will focus on how to utilize the 'expert review' process (aka 'heuristic evaluation') as a preliminary or even preparatory step before embarking on more time-and-labor-intensive usability testing and user research. Several examples, from simple fixes to more nuanced usability and UX issues in libraries, will be discussed. The goal of this talk is to provide practical strategies for picking as many low-hanging fruits as possible to make a real (albeit small) difference to your library's Web usability and UX effectively and efficiently.&lt;br /&gt;
&lt;br /&gt;
== A Semantic Makeover for CMS Data ==&lt;br /&gt;
&lt;br /&gt;
* Bill Levay, wjlevay@gmail.com, Linked Jazz Project&lt;br /&gt;
&lt;br /&gt;
How can we take semi-structured but messy metadata from a repository like CONTENTdm and transform it into rich linked data? Working with metadata from Tulane’s Hogan Jazz Archive Photography Collection, the Linked Jazz Project used Open Refine and Python scripts to tease out proper names, match them with name authority URIs, and specify FOAF relationships between musicians who appear together in photographs. Additional RDF triples were created for any dates associated with the photos, and for those images with place information we employed GeoNames URIs. Historical images and data that were siloed can now interact with other datasets, like Linked Jazz’s rich set of names and personal relationships, and can be visualized [link to come] or otherwise presented on the web in any number of ways. I have not previously presented at a Code4Lib conference.&lt;br /&gt;
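&lt;br /&gt;
The core transformation can be sketched in a few lines of Python: musicians co-appearing in one photograph become pairwise, symmetric foaf:knows statements in N-Triples form. The authority URIs below are hypothetical placeholders, not Linked Jazz data:&lt;br /&gt;

```python
from itertools import combinations

# The FOAF "knows" predicate, written as an N-Triples IRI
FOAF_KNOWS = "<http://xmlns.com/foaf/0.1/knows>"

def photo_to_triples(musician_uris):
    """Turn co-appearance in one photograph into symmetric
    foaf:knows N-Triples statements, one line per triple."""
    triples = []
    for a, b in combinations(sorted(musician_uris), 2):
        triples.append("<%s> %s <%s> ." % (a, FOAF_KNOWS, b))
        triples.append("<%s> %s <%s> ." % (b, FOAF_KNOWS, a))
    return triples

# Hypothetical name-authority URIs for two musicians in one photo
photo = ["http://example.org/name/armstrong",
         "http://example.org/name/barker"]
for t in photo_to_triples(photo):
    print(t)
```

The real workflow layers name disambiguation (Open Refine, authority matching) on top, but the output shape is the same: plain triples ready to merge with other linked datasets.&lt;br /&gt;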
&lt;br /&gt;
== Taking User Experience (UX) to new heights ==&lt;br /&gt;
 &lt;br /&gt;
* Kayne Richens, kayne.richens@deakin.edu.au, Deakin University&lt;br /&gt;
&lt;br /&gt;
User Experience, or &amp;quot;UX&amp;quot;, is for more than just websites. At Deakin University Library we're exploring ways to improve the user experience inside our campus library spaces, by putting new technologies front and centre in the overall experience for our students. How are we doing this? We’re collaborating with the University's IT department and exploring the following Library-changing opportunities:&lt;br /&gt;
&lt;br /&gt;
- Augmented Reality for Way-finding: We’re tackling that infamous thing that all libraries can't get right – way-finding. We're enhancing library tour information and way-finding experiences by introducing augmented reality solutions.&lt;br /&gt;
 &lt;br /&gt;
- Heat mapping the library with wi-fi: We’re using our existing wi-fi infrastructure to present &amp;quot;heat maps&amp;quot; of library space utilisation, allowing our users to easily locate the space that best suits their needs, whether it be busy spaces to collaborate, or quiet spaces to study. And by overlaying computer usage and group study room bookings, users can quickly locate the space they need.&lt;br /&gt;
 &lt;br /&gt;
- Video chat library service: We’re piloting video-conferencing facilities in our group study rooms and spaces, connecting users and librarians and other professionals.&lt;br /&gt;
         &lt;br /&gt;
This talk will look at how these different technologies will be brought together to provide improved user experiences, as well as some of the evidence and reasoning that helped us identify our needs, so you can do the same.&lt;br /&gt;
&lt;br /&gt;
==How to Hack it as a Working Parent: or, Should Your Face be Bathed in the Blue Glow of a Phone at 2 AM?==&lt;br /&gt;
&lt;br /&gt;
*Margaret Heller, Loyola University Chicago, mheller1@luc.edu&lt;br /&gt;
*Christina Salazar, California State University Channel Islands, christina.salazar@csuci.edu&lt;br /&gt;
*May Yan, Ryerson University, may.yan@ryerson.ca&lt;br /&gt;
&lt;br /&gt;
Modern technology has made it easier than ever for parents employed in technical environments to keep up with work at all hours and in all locations. This makes it possible to work a flexible schedule, but also may lead to problems with work/life balance and furthering unreasonable expectations about working hours. Add to that shifting gender roles and limited paid parental leave in the United States and you have potential for burnout and a certainty for anxiety. It raises the additional question of whether the “always connected” mindset puts up a barrier to some populations who otherwise might be better represented in open source and library technology communities. &lt;br /&gt;
&lt;br /&gt;
This presentation will address tools that are useful for working parents in technical library positions, and share some lessons learned about using these tools while maintaining a reasonable work/life balance. We will consider a question that Karen Coyle raised back in 1996: &lt;br /&gt;
“What if the thousands of hours of graveyard shift amateur hacking wasn't really the best way to get the job done? That would be unthinkable.” &lt;br /&gt;
&lt;br /&gt;
For those who are able to take an extended parental leave, we will present strategies for minimizing the impact on your career and your employer. Those (particularly in the United States) who can only take a short leave will require different strategies. Whatever the level of preparation, planning for leave is a useful exercise in succession planning: reviewing workloads, cross-training personnel, hiring contract replacements, and devising creative divisions of labor all make for a stronger workplace and a better future ability to work a flexible schedule. Such preparation makes work better for everyone, kids or no kids.&lt;br /&gt;
&lt;br /&gt;
==Making your digital objects embeddable around the web==&lt;br /&gt;
 &lt;br /&gt;
* Jessie Keck, jkeck@stanford.edu, Stanford University Libraries&lt;br /&gt;
* Jack Reed, pjreed@stanford.edu, Stanford University Libraries&lt;br /&gt;
&lt;br /&gt;
With more and more content from our digital repositories making its way into our discovery environments, we quickly realized that we were repeatedly re-inventing the wheel when it comes to creating “viewers” for these digital objects.  With several types of viewers necessary (books, images, audio, video, geospatial data, etc.) the burden of getting these viewers into various environments (topic guides, blogs, catalogs, etc.) multiplies.&lt;br /&gt;
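One lightweight pattern for this problem is oEmbed: a provider exposes a single endpoint that returns embeddable HTML for any resource URL it recognizes. A minimal Ruby sketch of the pattern follows; the endpoint and response values here are hypothetical illustrations, not Stanford's actual service.&lt;br /&gt;

```ruby
require 'json'
require 'uri'

# Sketch of the oEmbed pattern (endpoint and values are hypothetical, not
# Stanford's actual service). A consumer asks one provider endpoint for
# embeddable HTML for a resource URL.
def oembed_request_url(endpoint, resource_url, maxwidth: 560)
  query = URI.encode_www_form(url: resource_url, format: 'json', maxwidth: maxwidth)
  "#{endpoint}?#{query}"
end

url = oembed_request_url('https://example.org/oembed',
                         'https://example.org/object/abc123')

# Per the oEmbed spec, a response of type "rich" carries ready-to-paste HTML:
response = JSON.parse(<<~JSON)
  {"version": "1.0", "type": "rich", "width": 560, "height": 420,
   "html": "<iframe src='https://example.org/embed/abc123'></iframe>"}
JSON
```

Because every viewer sits behind the same endpoint, a catalog, topic guide, or blog needs only one integration, and the provider decides which viewer (book, image, audio, geospatial) to return.&lt;br /&gt;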
&lt;br /&gt;
In this talk we’ll discuss how Stanford University Libraries implemented an oEmbed service to create an extensible viewer framework for all of its digital content. Using this service we’ve been able to integrate viewers into various discovery applications, and end users who discover our objects can easily embed customized versions into their own websites and blogs.&lt;/div&gt;</summary>
		<author><name>Jkeck</name></author>	</entry>

	<entry>
		<id>https://wiki.code4lib.org/index.php?title=2013_talks_proposals&amp;diff=28296</id>
		<title>2013 talks proposals</title>
		<link rel="alternate" type="text/html" href="https://wiki.code4lib.org/index.php?title=2013_talks_proposals&amp;diff=28296"/>
				<updated>2012-11-09T06:06:37Z</updated>
		
		<summary type="html">&lt;p&gt;Jkeck: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;'''Deadline has been extended by request due to the hurricane/storm.'''&lt;br /&gt;
&lt;br /&gt;
Deadline for talk submission is ''Friday, November 9'' at 11:59pm ET. We ask that no changes be made after this point, so that every voter reads the same thing. You can update your description again after voting closes.&lt;br /&gt;
&lt;br /&gt;
Prepared talks are 20 minutes (including setup and questions), and focus on one or more of the following areas:&lt;br /&gt;
* tools (some cool new software, software library or integration platform)&lt;br /&gt;
* specs (how to get the most out of some protocols, or proposals for new ones)&lt;br /&gt;
* challenges (one or more big problems we should collectively address)&lt;br /&gt;
&lt;br /&gt;
The community will vote on proposals using the criteria of:&lt;br /&gt;
* usefulness&lt;br /&gt;
* newness&lt;br /&gt;
* geekiness&lt;br /&gt;
* uniqueness&lt;br /&gt;
* awesomeness&lt;br /&gt;
&lt;br /&gt;
Please follow the formatting guidelines:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
== Talk Title ==&lt;br /&gt;
 &lt;br /&gt;
* Speaker's name, affiliation, and email address&lt;br /&gt;
* Second speaker's name, affiliation, email address, if applicable&lt;br /&gt;
&lt;br /&gt;
Abstract of no more than 500 words.&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== All Teh Metadatas Re-Revisited ==&lt;br /&gt;
 &lt;br /&gt;
* Esme Cowles, UC San Diego Library, escowles AT ucsd DOT edu&lt;br /&gt;
* Matt Critchlow, UC San Diego Library, mcritchlow AT ucsd DOT edu&lt;br /&gt;
* Bradley Westbrook, UC San Diego Library, bdwestbrook AT ucsd DOT edu&lt;br /&gt;
&lt;br /&gt;
Last year Declan Fleming presented ALL TEH METADATAS and reviewed our UC&lt;br /&gt;
San Diego Library Digital Asset Management system and RDF data model. You&lt;br /&gt;
may be shocked to hear that all that metadata wasn't quite enough to&lt;br /&gt;
handle increasingly complex digital library and research data in an&lt;br /&gt;
elegant way. Our ad-hoc, 8-year-old data model has also been added to in&lt;br /&gt;
inconsistent ways and our librarians and developers have not always been&lt;br /&gt;
perfectly in sync in understanding how the data model has evolved over&lt;br /&gt;
time.&lt;br /&gt;
&lt;br /&gt;
 &lt;br /&gt;
In this presentation we'll review our process of locking a team of&lt;br /&gt;
librarians and developers in a room to figure out a new data model, from&lt;br /&gt;
domain definition through building and testing an OWL ontology. We'll also&lt;br /&gt;
cover the challenges we ran into, including the review of existing&lt;br /&gt;
controlled vocabularies and ontologies, or lack thereof, and the decisions&lt;br /&gt;
made to cover the gaps. Finally, we'll discuss how we engaged the digital&lt;br /&gt;
library community for feedback and what we have to do next. We all know&lt;br /&gt;
that Things Fall Apart; this is our attempt at Doing Better This Time.&lt;br /&gt;
&lt;br /&gt;
== Modernizing VuFind with Zend Framework 2 ==&lt;br /&gt;
&lt;br /&gt;
* Demian Katz, Villanova University, demian DOT katz AT villanova DOT edu&lt;br /&gt;
&lt;br /&gt;
When setting goals for a new major release of VuFind, use of an existing web framework was an important decision to encourage standardization and avoid reinvention of the wheel.  Zend Framework 2 was selected as providing the best balance between the cutting-edge (ZF2 was released in 2012) and stability (ZF1 has a long history and many adopters).  This talk will examine some of the architecture and features of the new framework and discuss how it has been used to improve the VuFind project.&lt;br /&gt;
&lt;br /&gt;
== Did You Really Say That Out Loud?  Tools and Techniques for Safe Public WiFi Computing  ==&lt;br /&gt;
 &lt;br /&gt;
* [[User:DataGazetteer|Peter Murray]], LYRASIS, Peter.Murray@lyrasis.org&lt;br /&gt;
&lt;br /&gt;
Public WiFi networks, even those that have passwords, are nothing more than an old-time [https://en.wikipedia.org/wiki/Party_line_(telephony) party line]: whatever you say can be easily heard by anyone nearby.  &lt;br /&gt;
Remember [https://en.wikipedia.org/wiki/Firesheep Firesheep]?  &lt;br /&gt;
It was an extension to Firefox that demonstrated how easy it was to snag session cookies and impersonate someone else.&lt;br /&gt;
So what are you sending out over the airwaves, and what techniques are available to prevent eavesdropping?&lt;br /&gt;
This talk will demonstrate tools and techniques for desktop and mobile operating systems that you should be using right now -- right here at Code4Lib -- to protect your data and your network activity.&lt;br /&gt;
&lt;br /&gt;
== Drupal 8 Preview — Symfony and Twig ==&lt;br /&gt;
 &lt;br /&gt;
* [[User:Highermath|Cary Gordon]], The Cherry Hill Company, cgordon@chillco.com&lt;br /&gt;
&lt;br /&gt;
Drupal is a great platform for building web applications. Last year, the core developers decided to adopt the Symfony PHP framework, because it would lay the groundwork for the modernization (and de-PHP4ification) of the Drupal codebase. As I write this, the Symfony ClassLoader and HttpFoundation libraries are committed to Drupal core, with more elements likely before Drupal 8 code freeze.&lt;br /&gt;
&lt;br /&gt;
It seems almost certain that the Twig templating engine will supplant PHPtemplate as the core Drupal template engine. Twig is a powerful, secure theme building tool that removes PHP from the templating system, the result being a very concise and powerful theme layer.&lt;br /&gt;
&lt;br /&gt;
Symfony and Twig have a common creator, Fabien Potencier, whose overall goal is to rid the world of the excesses of PHP 4.&lt;br /&gt;
&lt;br /&gt;
== Neat! But How Do We Do It? - The Real-world Problem of Digitizing Complex Corporate Digital Objects ==&lt;br /&gt;
&lt;br /&gt;
* Matthew Mariner, University of Colorado Denver, Auraria Library, matthew.mariner@ucdenver.edu&lt;br /&gt;
&lt;br /&gt;
Isn't it neat when you discover that you are the steward of dozens of Sanborn Fire Insurance Maps, hundreds of issues of a city directory, and thousands of photographs of persons in either aforementioned medium? And it's even cooler when you decide, &amp;quot;Let's digitize these together and make them one big awesome project to support public urban history&amp;quot;?  Unfortunately it's a far more difficult process than one imagines at inception and, sadly, doesn't always come to fruition.  My goal here is to discuss the technological (and philosophical) problems librarians and archivists face when trying to create ultra-rich complex corporate digital projects, or, rather, projects consisting of at least three facets interrelated by theme.  I intend to address these problems by suggesting management solutions, web workarounds, and, perhaps, a philosophy that might help in determining whether to even move forward or not.  Expect a few case studies of &amp;quot;grand ideas crushed by technological limitations&amp;quot; and &amp;quot;projects on the right track&amp;quot; to follow.   &lt;br /&gt;
 &lt;br /&gt;
== ResCarta Tools building a standard format for audio archiving, discovery and display ==&lt;br /&gt;
&lt;br /&gt;
* [[User:sarney|John Sarnowski]], The ResCarta Foundation, john.sarnowski@rescarta.org&lt;br /&gt;
&lt;br /&gt;
The free ResCarta Toolkit has been used by libraries and archives around the world to host city directories, newspapers, and historic photographs and by aerospace companies to search and find millions of engineering documents.  Now the ResCarta team has released audio additions to the toolkit. &lt;br /&gt;
&lt;br /&gt;
Create full-text searchable oral histories, news stories, and interviews, or build an archive of lectures, all done to Library of Congress standards.  The included transcription editor allows for accurate correction of the data conversion tool’s output.  Build true archives of text, photos, and audio.  A single audio file carries the embedded Axml metadata, transcription, and word location information, and can be checked with FADGI's BWF MetaEdit.&lt;br /&gt;
&lt;br /&gt;
ResCarta-Web presents your audio to IE, Chrome, Firefox, Safari, and Opera browsers with full playback and word search capability. The display format is OGG!&lt;br /&gt;
&lt;br /&gt;
You have to see this tool in action.  Twenty minutes from an audio file to transcribed, text-searchable website.  Be there or be L seven (Yeah, I’m that old)   &lt;br /&gt;
&lt;br /&gt;
== Format Designation in MARC Records: A Trip Down the Rabbit-Hole ==&lt;br /&gt;
 &lt;br /&gt;
* Michael Doran, University of Texas at Arlington, doran@uta.edu&lt;br /&gt;
&lt;br /&gt;
This presentation will use a seemingly simple data point, the &amp;quot;format&amp;quot; of the item being described, to illustrate some of the complexities and challenges inherent in the parsing of MARC records.  I will talk about abstract vs. concrete forms; format designation in the Leader, 006, 007, and 008 fixed fields as well as the 245 and 300 variable fields; pseudo-formats; what is mandatory vs. optional in respect to format designation in cataloging practice; and the differences between cataloging theory and practice as observed via format-related data mining of a mid-size academic library collection. &lt;br /&gt;
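To make the fixed-field part of this concrete, here is a small Ruby sketch (my own illustration, not the presenter's code) of deriving a coarse format from Leader positions 06 and 07. Real-world parsing must also consult the 006/007/008 fixed fields and the 245/300 variable fields, which is where the rabbit-hole deepens.&lt;br /&gt;

```ruby
# Illustrative sketch only: a coarse "format" from MARC Leader position 06
# (type of record) and position 07 (bibliographic level).
LEADER_06 = {
  'a' => 'language material',
  'c' => 'notated music',
  'e' => 'cartographic material',
  'g' => 'projected medium',
  'i' => 'nonmusical sound recording',
  'j' => 'musical sound recording',
  'm' => 'computer file'
}.freeze

def coarse_format(leader)
  type, level = leader[6], leader[7]
  # Leader/07 distinguishes a monograph ('m') from a serial ('s') for
  # textual records; other types map straight from Leader/06.
  return 'serial' if type == 'a' && level == 's'
  return 'book'   if type == 'a' && level == 'm'
  LEADER_06.fetch(type, 'unknown')
end
```

Even this toy version shows why the data point is not so simple: "book" is not a value stored anywhere in the record, but an abstraction computed from several positions at once.&lt;br /&gt;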
&lt;br /&gt;
I understand that most of us go to code4lib to hear about the latest sexy technologies.  While MARC isn't sexy, many of the new tools being discussed still need to be populated with data gleaned from MARC records.  MARC format designation has ramifications for search and retrieval, limits, and facets, both in the ILS and further downstream in next generation OPACs and web-scale discovery tools.  Even veteran library coders will learn something from this session. &lt;br /&gt;
&lt;br /&gt;
== Touch Kiosk 2: Piezoelectric Boogaloo ==&lt;br /&gt;
&lt;br /&gt;
* Andreas Orphanides, North Carolina State University Libraries, akorphan@ncsu.edu&lt;br /&gt;
&lt;br /&gt;
At the NCSU Libraries, we provide realtime access to information on library spaces and services through an interactive touchscreen kiosk in our Learning Commons. In the summer of 2012, two years after its initial deployment, I redeveloped the kiosk application from the ground up, with an entirely new codebase and a completely redesigned user interface. The changes I implemented were designed to remedy previously identified shortcomings in the code and the interface design [1], and to enhance overall stability and performance of the application.&lt;br /&gt;
&lt;br /&gt;
In this presentation I will outline my revision process, highlighting the lessons I learned and the practices I implemented in the course of redevelopment. I will highlight the key features of the HTML/Javascript codebase that allow for increased stability, flexibility, and ease of maintenance; and identify the changes to the user interface that resulted from the usability findings I uncovered in my previous research. Finally, I will compare the usage patterns of the new interface to the analysis of the previous implementation to examine the practical effect of the implemented changes.&lt;br /&gt;
&lt;br /&gt;
I will also provide access to a genericized version of the interface code for others to build their own implementations of similar kiosk applications.&lt;br /&gt;
&lt;br /&gt;
[1] http://journal.code4lib.org/articles/5832&lt;br /&gt;
&lt;br /&gt;
== Wayfinding in a Cloud: Location Service for libraries ==&lt;br /&gt;
 &lt;br /&gt;
* Petteri Kivimäki, The National Library of Finland, petteri.kivimaki@helsinki.fi&lt;br /&gt;
&lt;br /&gt;
Searching for books in a large library can be a difficult task for a novice library user. This talk presents the Location Service, a software-as-a-service (SaaS) wayfinding application developed and managed by the National Library of Finland and targeted at all libraries. The service provides additional information and map-based guidance to books and collections by showing their location on a map, and it can be integrated with any library management system: the integration happens by adding a link to the service in the search interface. The service is being developed continuously based on the feedback received from users.&lt;br /&gt;
&lt;br /&gt;
The service has two user interfaces: one for customers and one for library staff to manage the information related to the locations. The customer UI is fully customizable by each library; customization is done via template files using HTML, CSS, and JavaScript/jQuery. The service supports multiple languages, and libraries have full control over which languages they want to support in their environment.&lt;br /&gt;
&lt;br /&gt;
The service is written in Java and uses the Spring and Hibernate frameworks. The data is stored in a PostgreSQL database shared by all the libraries. Libraries do not have direct access to the database, but the service offers an interface that makes it possible to retrieve XML data over HTTP. Modification of the data via the admin UI, however, is restricted, and access to other libraries’ data is blocked.&lt;br /&gt;
&lt;br /&gt;
== Empowering Collection Owners with Automated Bulk Ingest Tools for DSpace ==&lt;br /&gt;
&lt;br /&gt;
* Terry Brady, Georgetown University, twb27@georgetown.edu&lt;br /&gt;
&lt;br /&gt;
The Georgetown University Library has developed a number of applications to expedite the process of ingesting content into DSpace.&lt;br /&gt;
* Automatically inventory a collection of documents or images to be uploaded&lt;br /&gt;
* Generate a spreadsheet for metadata capture based on the inventory&lt;br /&gt;
* Generate item-level ingest folders, contents files and dublin core metadata for the items to be ingested&lt;br /&gt;
* Validate the contents of ingest folders prior to initiating the ingest to DSpace&lt;br /&gt;
* Present users with a simple, web-based form to initiate the batch ingest process&lt;br /&gt;
&lt;br /&gt;
The applications have eliminated a number of error-prone steps from the ingest workflow and have significantly reduced the amount of tedious data editing.  These applications have empowered content experts to be in charge of their own collections. &lt;br /&gt;
&lt;br /&gt;
In this presentation, I will provide a demonstration of the tools that were built and discuss the development process that was followed.&lt;br /&gt;
&lt;br /&gt;
== Quality Assurance Reports for DSpace Collections ==&lt;br /&gt;
&lt;br /&gt;
* Terry Brady, Georgetown University, twb27@georgetown.edu&lt;br /&gt;
&lt;br /&gt;
The Georgetown University Library has developed a collection of quality assurance reports to improve the consistency of the metadata in our DSpace collections.  The report infrastructure permits the creation of query snippets to test for possible consistency errors within the repository, such as items missing thumbnails, items with multiple thumbnails, items missing a creation date, items containing improperly formatted dates, items with duplicated metadata fields, and items recently added across the repository, a community, or a collection.&lt;br /&gt;
&lt;br /&gt;
These reports have served to prioritize both programmatic and manual data cleanup tasks, have acted as a progress tracker for cleanup work, and will provide ongoing monitoring of the metadata consistency of the repository.&lt;br /&gt;
&lt;br /&gt;
In this presentation, I will provide a demonstration of the tools that were built and discuss the development process that was followed.&lt;br /&gt;
&lt;br /&gt;
== A Hybrid Solution for Improving Single Sign-On to a Proxy Service with Squid and EZproxy through Shibboleth and ExLibris’ Aleph X-Server ==&lt;br /&gt;
&lt;br /&gt;
* Alexander Jerabek, UQAM - Université du Québec à Montréal, jerabek.alexander_j@uqam.ca&lt;br /&gt;
* Minh-Quang Nguyen, UQAM - Université du Québec à Montréal, nguyen.minh-quang@uqam.ca&lt;br /&gt;
&lt;br /&gt;
In this talk, we will describe how we developed and implemented a hybrid solution for improving single sign-on in conjunction with the library’s proxy service. This hybrid solution consists of integrating the disparate elements of EZproxy, the Squid workflow, Shibboleth, and the Aleph X-Server. We will report how this new integrated service improves the user experience. To our knowledge, this new service is unique and has not been implemented anywhere else. We will also present some statistics after approximately one year in production.&lt;br /&gt;
&lt;br /&gt;
See article: http://journal.code4lib.org/articles/7470&lt;br /&gt;
&lt;br /&gt;
== HTML5 Video Now! ==&lt;br /&gt;
&lt;br /&gt;
* Jason Ronallo, North Carolina State University Libraries, jnronall@ncsu.edu&lt;br /&gt;
&lt;br /&gt;
Can you use HTML5 video now? Yes.&lt;br /&gt;
&lt;br /&gt;
I'll show you how to get started using HTML5 video, including gotchas, tips, and tricks. Beyond the basics we'll see the power of having video integrated into HTML and the browser. Finally, we'll look at examples that push the limits and show the exciting future of video on the Web.&lt;br /&gt;
&lt;br /&gt;
My experience comes from technical development of an oral history video clips project. I developed the technical aspects of the project, including video processing, server configuration, development of a public site, creation of an administrative interface, and video engagement analytics. Major portions of this work have been open sourced under an MIT license.&lt;br /&gt;
&lt;br /&gt;
== Hybrid Archival Collections Using Blacklight and Hydra ==&lt;br /&gt;
&lt;br /&gt;
* Adam Wead, Rock and Roll Hall of Fame and Museum, awead@rockhall.org&lt;br /&gt;
&lt;br /&gt;
At the Library and Archives of the Rock and Roll Hall of Fame, we use available tools such as Archivists' Toolkit to create EAD finding aids of our collections.  However, managing digital content created from these materials and the born-digital content that is also part of these collections represents a significant challenge.  In my presentation, I will discuss how we solve the problem of our hybrid collections by using Hydra as a digital asset manager and Blacklight as a unified presentation and discovery interface for all our materials.&lt;br /&gt;
&lt;br /&gt;
Our strategy centers on indexing EAD XML into Solr as multiple documents: one for each collection, and one for every series, sub-series, and item contained within a collection.  For discovery, we use this strategy to leverage item-level searching of archival collections alongside our traditional library content.  For digital collections, we use this same technique to represent a finding aid in Hydra as a set of linked objects using RDF.  New digital items are then linked to these parent objects at the collection and series level.  Once this is done, the items can be exported back out to the Blacklight Solr index, and the digital content appears along with the rest of the items in the collection.&lt;br /&gt;
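The multiple-documents idea can be sketched in Ruby with stdlib REXML and a toy EAD fragment; the Rock Hall's actual indexing code and Solr schema will differ.&lt;br /&gt;

```ruby
require 'rexml/document'

# Toy EAD finding aid: one collection with two series components.
EAD = <<~XML
  <ead>
    <archdesc level="collection">
      <did><unittitle>Jane Doe Papers</unittitle><unitid>MSS.001</unitid></did>
      <dsc>
        <c level="series"><did><unittitle>Correspondence</unittitle><unitid>MSS.001.1</unitid></did></c>
        <c level="series"><did><unittitle>Photographs</unittitle><unitid>MSS.001.2</unitid></did></c>
      </dsc>
    </archdesc>
  </ead>
XML

# Split one finding aid into multiple Solr documents: one for the
# collection plus one per <c> component, each child linked to its parent.
def solr_docs_for(ead_xml)
  doc     = REXML::Document.new(ead_xml)
  coll    = REXML::XPath.first(doc, '//archdesc/did')
  coll_id = coll.elements['unitid'].text
  docs = [{ 'id' => coll_id, 'level' => 'collection',
            'title' => coll.elements['unittitle'].text }]
  REXML::XPath.each(doc, '//c') do |c|
    did = c.elements['did']
    docs << { 'id' => did.elements['unitid'].text,
              'level' => c.attributes['level'],
              'title' => did.elements['unittitle'].text,
              'collection_id' => coll_id } # link back to the parent
  end
  docs
end
```

Each hash would then be posted to Solr (for example via RSolr's `solr.add`), so collections, series, and items all become individually searchable documents.&lt;br /&gt;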
&lt;br /&gt;
== Making the Web Accessible through Solid Design ==&lt;br /&gt;
 &lt;br /&gt;
* [[User:Cynthia|Cynthia Ng]] from Ryerson University Library &amp;amp; Archives&lt;br /&gt;
&lt;br /&gt;
In libraries, we are always trying our best to be accessible to everyone and we make every effort to do so physically, but what about our websites? Web designers are great at talking about the user experience and how to improve it, but what sometimes gets overlooked is how to make a site more accessible and meet accessibility guidelines. While guidelines are necessary to cover a minimum standard, web accessibility should come from good web design without ‘sacrificing’ features. While it's difficult to make a website fully accessible to everyone, there are easy, practical ways to make a site as accessible as possible.&lt;br /&gt;
&lt;br /&gt;
While the focus will be on websites and meeting the Web Content Accessibility Guidelines (WCAG), the presentation will also touch on how to make custom web interfaces accessible.&lt;br /&gt;
&lt;br /&gt;
== Getting People to What They Need Fast! A Wayfinding Tool to Locate Books &amp;amp; Much More ==&lt;br /&gt;
 &lt;br /&gt;
* Steven Marsden, Ryerson University Library &amp;amp; Archives, steven dot marsden at ryerson dot ca&lt;br /&gt;
* [[User:Cynthia|Cynthia Ng]], Ryerson University Library &amp;amp; Archives&lt;br /&gt;
&lt;br /&gt;
Having a bewildered, lost user in the building or stacks is a common occurrence, but we can help our users find their way through enhanced maps and floor plans.  While not a new concept, these maps are integrated into the user’s flow of information without a special app to load. The map not only highlights the location, but also provides all the related information with a link back to the detailed item view. During the first stage of the project it has only been implemented for books (and other physical items), but the 'RULA Finder' is built to help users find just about anything and everything in the library, including study rooms, computer labs, and staff. A simple-to-use admin interface makes it easy for everyone, staff and users alike. &lt;br /&gt;
&lt;br /&gt;
The application is written in PHP with data stored in a MySQL database. The end-user interface involves jQuery, JSON, and the library's discovery layer (Summon) API.&lt;br /&gt;
&lt;br /&gt;
The presentation will not only cover the technical aspects, but also the implementation and usability findings.&lt;br /&gt;
&lt;br /&gt;
== De-sucking the Library User Experience ==&lt;br /&gt;
 &lt;br /&gt;
* Jeremy Prevost, Northwestern University, j-prevost {AT} northwestern [DOT] edu&lt;br /&gt;
&lt;br /&gt;
Have you ever thought that library vendors purposely create the worst possible user experience they can imagine because they just hate users? Have you ever thought that your own library website feels like it was created by committee rather than for users because, well, it was? I’ll talk about how we used vendor supplied APIs to our ILS and Discovery tool to create an experience for our users that sucks at least a little bit less.&lt;br /&gt;
&lt;br /&gt;
The talk will provide specific examples of how inefficient or confusing vendor-supplied solutions are from a user perspective, along with our specific streamlined solutions to the same problems. Code examples will be minimal, as the focus will be on improving user experience rather than any one code solution. Examples may include the seemingly simple tasks of renewing a book or requesting an item from another campus library.&lt;br /&gt;
&lt;br /&gt;
== Solr Testing Is Easy with Rspec-Solr Gem ==&lt;br /&gt;
&lt;br /&gt;
* Naomi Dushay, Stanford University, ndushay AT stanford DOT edu&lt;br /&gt;
&lt;br /&gt;
How do you know if &lt;br /&gt;
&lt;br /&gt;
* your idea for &amp;quot;left anchoring&amp;quot; searches actually works?&lt;br /&gt;
* your field analysis for LC call numbers accommodates a suffix between the first and second cutter without breaking the rest of LC call number parsing?&lt;br /&gt;
* tweaking Solr configs to improve, say, Chinese searching, won't break Turkish and Cyrillic?&lt;br /&gt;
* changes to your solrconfig file accomplish what you wanted without breaking anything else?&lt;br /&gt;
&lt;br /&gt;
Avoid the whole app stack when writing Solr acceptance/relevancy/regression tests!  Forget cucumber and capybara.  This gem lets you easily (only 4 short files needed!) write tests like this, passing arbitrary parameters to Solr:&lt;br /&gt;
&lt;br /&gt;
  it &amp;quot;unstemmed author name Zare should precede stemmed variants&amp;quot; do&lt;br /&gt;
    resp = solr_response(author_search_args('Zare').merge({'fl'=&amp;gt;'id,author_person_display', 'facet'=&amp;gt;false}))&lt;br /&gt;
    resp.should include(&amp;quot;author_person_display&amp;quot; =&amp;gt; /\bZare\W/).in_each_of_first(3).documents&lt;br /&gt;
    resp.should_not include(&amp;quot;author_person_display&amp;quot; =&amp;gt; /Zaring/).in_each_of_first(20).documents&lt;br /&gt;
  end&lt;br /&gt;
      &lt;br /&gt;
  it &amp;quot;Cyrillic searching should work:  Восемьсoт семьдесят один день&amp;quot; do&lt;br /&gt;
    resp = solr_resp_doc_ids_only({'q'=&amp;gt;'Восемьсoт семьдесят один день'})&lt;br /&gt;
    resp.should include(&amp;quot;9091779&amp;quot;)&lt;br /&gt;
  end&lt;br /&gt;
   &lt;br /&gt;
  it &amp;quot;q of 'String quartets Parts' and variants should be plausible &amp;quot; do&lt;br /&gt;
    resp = solr_resp_doc_ids_only({'q'=&amp;gt;'String quartets Parts'})&lt;br /&gt;
    resp.should have_at_least(2000).documents&lt;br /&gt;
    resp.should have_the_same_number_of_results_as(solr_resp_doc_ids_only({'q'=&amp;gt;'(String quartets Parts)'}))&lt;br /&gt;
    resp.should have_more_results_than(solr_resp_doc_ids_only({'q'=&amp;gt;'&amp;quot;String quartets Parts&amp;quot;'}))&lt;br /&gt;
  end&lt;br /&gt;
   &lt;br /&gt;
  it &amp;quot;Traditional Chinese chars 三國誌 should get the same results as simplified chars 三国志&amp;quot; do&lt;br /&gt;
    resp = solr_response({'q'=&amp;gt;'三國誌', 'fl'=&amp;gt;'id', 'facet'=&amp;gt;false}) &lt;br /&gt;
    resp.should have_at_least(240).documents&lt;br /&gt;
    resp.should have_the_same_number_of_results_as(solr_resp_doc_ids_only({'q'=&amp;gt;'三国志'})) &lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
See&lt;br /&gt;
   http://rubydoc.info/github/sul-dlss/rspec-solr/frames&lt;br /&gt;
   https://github.com/sul-dlss/rspec-solr&lt;br /&gt;
&lt;br /&gt;
and our production relevancy/acceptance/regression tests slowly migrating from cucumber to:&lt;br /&gt;
   https://github.com/sul-dlss/sw_index_tests&lt;br /&gt;
&lt;br /&gt;
== Northwestern's Digital Image Library ==&lt;br /&gt;
&lt;br /&gt;
*Mike Stroming, Northwestern University Library, m-stroming AT northwestern DOT edu&lt;br /&gt;
*Edgar Garcia, Northwestern University Library, edgar-garcia AT northwestern DOT edu&lt;br /&gt;
&lt;br /&gt;
At Northwestern University Library, we are about to release a beta version of our Digital Image Library (DIL).  DIL is an implementation of the Hydra technology that provides a Fedora repository solution for discovery of and access to over 100,000 images for staff, students, and scholars. Some important features are:&lt;br /&gt;
&lt;br /&gt;
*Build custom collections of images using drag-and-drop&lt;br /&gt;
*Re-order images within a collection using drag-and-drop&lt;br /&gt;
*Nest collections within other collections&lt;br /&gt;
*Create details/crops of images&lt;br /&gt;
*Zoom, rotate images&lt;br /&gt;
*Upload personal images&lt;br /&gt;
*Retrieve your own uploads and details from a collection&lt;br /&gt;
*Export a collection to a PowerPoint presentation&lt;br /&gt;
*Create a group of users and authorize access to your images&lt;br /&gt;
*Batch edit image metadata&lt;br /&gt;
&lt;br /&gt;
Our presentation will include a demo, explanation of the architecture, and a discussion of the benefits of being a part of the Hydra open-source community.&lt;br /&gt;
&lt;br /&gt;
== Two standards in a software (to say nothing of Normarc) ==&lt;br /&gt;
&lt;br /&gt;
*Zeno Tajoli, CINECA (Italy), z DOT tajoli AT cineca DOT it&lt;br /&gt;
&lt;br /&gt;
With this presentation I want to show how the Koha ILS handles support for three different MARC dialects:&lt;br /&gt;
MARC21, UNIMARC, and NORMARC. The main points of the presentation:&lt;br /&gt;
&lt;br /&gt;
*Three MARC dialects at the MySQL level&lt;br /&gt;
*Three MARC dialects at the API level&lt;br /&gt;
*Three MARC dialects at display time&lt;br /&gt;
*Can I add a new format?&lt;br /&gt;
&lt;br /&gt;
== Future Friendly Web Design for Libraries ==&lt;br /&gt;
&lt;br /&gt;
*[[User:michaelschofield|Michael Schofield]], Alvin Sherman Library, Research, and Information Technology Center, mschofied[at]nova[dot]edu&lt;br /&gt;
&lt;br /&gt;
Libraries on the web are afterthoughts. Often their design is stymied on one hand by red tape imposed by the larger institution and on the other by an overload of too-democratic input from colleagues. Slashed budgets and staff stretched too thin foul up the R-word (that'd be &amp;quot;redesign&amp;quot;), but things are getting pretty strange. Notions about the Web (and where it can be accessed) are changing. &lt;br /&gt;
&lt;br /&gt;
So libraries can avoid refabbing their fixed-width desktop and jQuery Mobile m-dot websites for only so long before desktop users evaporate and demand from patrons with web-ready refrigerators becomes deafening. Just when we have largely hopped on the bandwagon and gotten enthusiastic about being online, our users expect a library's site to look and perform great on everything. &lt;br /&gt;
&lt;br /&gt;
Our presence on the web should be built to weather ever-increasing device complexity. To meet users at their point of need, libraries must start thinking Future Friendly.&lt;br /&gt;
&lt;br /&gt;
This overview rehashes the approach and philosophy of library web design, re-orienting it for maximum accessibility and maximum efficiency of design. While just 20 minutes, we'll mull over techniques like mobile-first responsive web design, modular CSS, browser feature detection for progressive enhancement, and lots of nifty tricks.&lt;br /&gt;
&lt;br /&gt;
==BYU's discovery layer service aggregator==&lt;br /&gt;
&lt;br /&gt;
*Curtis	Thacker, Brigham Young University, curtis.thacker AT byu DOT edu&lt;br /&gt;
&lt;br /&gt;
It is clear that libraries will continue to experience rapid change driven by the pace of technology. To acknowledge this new reality and to respond rapidly to shifting end-user paradigms, BYU has developed a custom service aggregator. At first our vendors looked at us a bit funny; however, in the last year they have been astonished by the fluid implementation of new services – here’s the short list:&lt;br /&gt;
&lt;br /&gt;
*filmfinder - a tool for browsing and searching films&lt;br /&gt;
*A custom book recommender service based on checkout data&lt;br /&gt;
*Integrated library services like personnel, library hours, the study room scheduler and the database finder through a custom adwords system.&lt;br /&gt;
*A very geeky and powerful utility for converting MARC XML into Primo-compliant XML.&lt;br /&gt;
*Embedded floormaps&lt;br /&gt;
*A responsive web design&lt;br /&gt;
*Bing did-you-mean&lt;br /&gt;
*And many more.&lt;br /&gt;
&lt;br /&gt;
I will demo the system, review the architecture and talk about future plans.&lt;br /&gt;
&lt;br /&gt;
==The Avalon Media System: A Next Generation Hydra Head For Audio and Video Delivery==&lt;br /&gt;
&lt;br /&gt;
* Michael Klein, Senior Software Developer, Northwestern University Library, michael.klein AT northwestern DOT edu&lt;br /&gt;
* Nathan Rogers, Programmer/Analyst, Indiana University, rogersna AT indiana DOT edu&lt;br /&gt;
&lt;br /&gt;
Based on the success of the [http://www.dml.indiana.edu/ Variations] digital music platform, Indiana University and Northwestern University have developed a next generation educational tool for delivering multimedia resources to the classroom. The Avalon Media System (formerly Variations on Video) supports the ingest, media processing, management, and access-controlled delivery of library-managed video and audio collections. To do so, the system draws on several existing, mature, open source technologies:&lt;br /&gt;
&lt;br /&gt;
* The ingest, search, and discovery functionality of the Hydra framework&lt;br /&gt;
* The powerful multimedia workflow management features of Opencast Matterhorn&lt;br /&gt;
* The flexible Engage audio/video player&lt;br /&gt;
* The streaming capabilities of both Red5 Media Server (open source) and Adobe Flash Media Server (proprietary)&lt;br /&gt;
&lt;br /&gt;
Extensive customization options are built into the framework for tailoring the application to the needs of a specific institution.&lt;br /&gt;
&lt;br /&gt;
Our goal is to create an open platform that can be used by other institutions to serve the needs of the academic community. Release 1 is planned for a late February launch with future versions released every couple of months following. For more information visit http://avalonmediasystem.org/ and https://github.com/variations-on-video/hydrant.&lt;br /&gt;
&lt;br /&gt;
== The DH Curation Guide: Building a Community Resource == &lt;br /&gt;
&lt;br /&gt;
*Robin Davis, John Jay College of Criminal Justice, robdavis AT jjay.cuny.edu &lt;br /&gt;
*James Little, University of Illinois Urbana-Champaign, little9 AT illinois.edu  &lt;br /&gt;
&lt;br /&gt;
Data curation for the digital humanities is an emerging area of research and practice. The DH Curation Guide, launched in July 2012, is an educational resource that addresses aspects of humanities data curation in a series of expert-written articles. Each provides a succinct introduction to a topic with annotated lists of useful tools, projects, standards, and good examples of data curation done right. The DH Curation Guide is intended to be a go-to resource for data curation practitioners and learners in libraries, archives, museums, and academic institutions.  &lt;br /&gt;
&lt;br /&gt;
Because it's a growing field, we designed the DH Curation Guide to be a community-driven, living document. We developed a granular commenting system that encourages data curation community members to contribute remarks on articles, article sections, and article paragraphs. Moreover, we built in a way for readers to contribute and annotate resources for other data curation practitioners.  &lt;br /&gt;
&lt;br /&gt;
This talk will address how the DH Curation Guide is currently used and will include a sneak peek at the articles that are in store for the Guide’s future. We will talk about the difficulties and successes of launching a site that encourages community. We are all builders here, so we will also walk through developing the granular commenting/annotation system and the XSLT-powered publication workflow. &lt;br /&gt;
&lt;br /&gt;
== Solr Update == &lt;br /&gt;
&lt;br /&gt;
*Erik Hatcher, LucidWorks, erik.hatcher AT lucidworks.com &lt;br /&gt;
&lt;br /&gt;
Solr is continually improving.  Solr 4 was recently released, bringing dramatic changes in the underlying Lucene library and Solr-level features.  It's tough for us all to keep up with the various versions and capabilities.&lt;br /&gt;
&lt;br /&gt;
This talk will blaze through the highlights of new features and improvements in Solr 4 (and up).  Topics will include: SolrCloud, direct spell checking, surround query parser, and many other features.  We will focus on the features library coders really need to know about.&lt;br /&gt;
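As a taste of one of the Solr features listed above, a client request that switches on the spellcheck component can be built like this. The host, core name, and collation setting shown are hypothetical example values; `spellcheck` and `spellcheck.collate` are standard Solr request parameters:

```javascript
// Sketch: build a Solr /select URL with the spellcheck component enabled.
// Base URL and core name ('catalog') are illustrative assumptions.
function solrSpellcheckUrl(base, query) {
  var params = [
    'q=' + encodeURIComponent(query),
    'wt=json',                  // JSON response writer
    'spellcheck=true',          // turn on the spellcheck component
    'spellcheck.collate=true'   // ask Solr for a rewritten whole-query suggestion
  ];
  return base + '/select?' + params.join('&');
}

var url = solrSpellcheckUrl('http://localhost:8983/solr/catalog', 'libary history');
// url: http://localhost:8983/solr/catalog/select?q=libary%20history&wt=json&spellcheck=true&spellcheck.collate=true
```

The suggestions come back in the response's `spellcheck` section alongside the normal results, which is what makes did-you-mean features cheap to layer onto an existing search page.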
&lt;br /&gt;
== Reports for the People == &lt;br /&gt;
&lt;br /&gt;
*Kara Young, Keene State College, NH, kyoung1 at keene.edu&lt;br /&gt;
*Dana Clark, Keene State College, NH, dclark5 at keene.edu&lt;br /&gt;
&lt;br /&gt;
Libraries are increasingly being called upon to provide information on how our programs and services are moving our institutional strategic goals forward.  In support of College and departmental Information Literacy learning outcomes, Mason Library Systems at Keene State College developed an assessment database to record and report assessment activities by Library faculty.  Frustrated by the lack of freely available options for intuitively recording, accounting for, and outputting useful reports on instructional activities, Librarians requested a tool to make capturing and reporting activities (and their lives) easier.  Library Systems was able to respond to this need by working with librarians to identify what information is necessary to capture, where other assessment tools had fallen short, and ultimately by developing an application that supports current reporting imperatives while providing flexibility for future changes.&lt;br /&gt;
&lt;br /&gt;
The result of our efforts was an in-house, browser-based Assessment Database that improves the process of data collection and analysis.  The application is written in PHP, stores its data in a MySQL database, and is presented via the browser, making extensive use of jQuery and jQuery plug-ins for data collection, manipulation, and presentation. &lt;br /&gt;
The presentation will outline the process undertaken to build a successful collaboration with Library faculty from conception to implementation, as well as the technical aspects of our trial-and-error approach. Plus: cool charts and graphs!&lt;br /&gt;
&lt;br /&gt;
==  Network Analyses of Library Catalog Data ==&lt;br /&gt;
 &lt;br /&gt;
* Kirk Hess, University of Illinois at Urbana-Champaign, kirkhess AT illinois.edu&lt;br /&gt;
* Harriett Green, University of Illinois at Urbana-Champaign, green19 AT illinois.edu &lt;br /&gt;
&lt;br /&gt;
Library collections are all too often like icebergs:  The amount exposed on the surface is only a fraction of the actual amount of content, and we’d like to recommend relevant items from deep within the catalog to users. With the assistance of an XSEDE Allocation grant (http://xsede.org), we’ve used R to reconstitute anonymous circulation data from the University of Illinois’s library catalog into separate user transactions. The transaction data is incorporated into subject analyses that use XSEDE supercomputing resources to generate predictive network analyses and visualizations of subject areas searched by library users using Gephi (https://gephi.org/). The test data set for developing the subject analyses consisted of approximately 38,000 items from the Literatures and Languages Library that contained 110,000 headings and 130,620 transactions. We’re currently working on developing a recommender system within VuFind to display the results of these analyses.&lt;br /&gt;
&lt;br /&gt;
== Pitfall! Working with Legacy Born Digital Materials in Special Collections ==&lt;br /&gt;
&lt;br /&gt;
* Donald Mennerich, The New York Public Library, don.mennerich AT gmail.com&lt;br /&gt;
* Mark A. Matienzo, Yale University Library, mark AT matienzo.org&lt;br /&gt;
&lt;br /&gt;
Archives and special collections are faced with a growing abundance of born-digital material, as well as a growing number of promising tools for managing it. However, one must consider the potential problems that can arise when approaching a collection containing legacy materials (from roughly the pre-internet era). Many of the tried and true, &amp;quot;best of breed&amp;quot; tools for digital preservation don't always work as they do for more recent materials, requiring a fair amount of ingenuity and use of &amp;quot;word of mouth tradecraft and knowledge exchanged through serendipitous contacts, backchannel conversations, and beer&amp;quot; (Kirschenbaum, &amp;quot;Breaking &amp;lt;code&amp;gt;badflag&amp;lt;/code&amp;gt;&amp;quot;).&lt;br /&gt;
&lt;br /&gt;
Our presentation will focus on some of the strange problems encountered and creative solutions devised by two digital archivists in the course of preserving, processing, and providing access to collections at their institutions. We'll place particular emphasis on the pitfalls and crocodiles we've learned to swing over safely, collecting treasure in the process. We'll address working with CP/M disks in collections of authors' papers, reconstructing a multipart hard drive backup spread across floppy disks, and more. &lt;br /&gt;
&lt;br /&gt;
== Project &amp;lt;s&amp;gt;foobar&amp;lt;/s&amp;gt; FUBAR ==&lt;br /&gt;
&lt;br /&gt;
* Becky Yoose, Grinnell College, yoosebec AT grinnell DOT edu&lt;br /&gt;
&lt;br /&gt;
Be it mandated from Those In A Higher Pay Grade Than You or self-inflicted, many of us deal with managing major library-related technology projects [1]. It’s common nowadays to manage multiple technology projects, and generally external and internal issues can be planned for to minimize project timeline shifts and quality of deliverables. Life, however, has other plans for you, and all your major library technology infrastructure projects pile on top of each other at the same time. How do you and your staff survive a train wreck of technology projects and produce deliverables to project stakeholders without having to go into the library IT version of the United States Federal Witness Protection Program?&lt;br /&gt;
&lt;br /&gt;
This session covers my experience with the collision of three major library technology projects - including a new institutional repository and an integrated library system migration - and how we dealt with external and internal factors, implemented damage control, and lessened the damage from the epic crash. You might laugh, you might cry, you will probably have flashbacks from previous projects, but you will come out of this session with a set of tools to use when you’re managing mission-critical projects.&lt;br /&gt;
&lt;br /&gt;
[1] Past code4lib talks have covered specific project management strategies, such as Agile, for application development. I will focus on general project management practices in relation to various library technology projects - practices that many of these strategies incorporate in their own structures.&lt;br /&gt;
&lt;br /&gt;
== Implementing RFID in an Academic Library == &lt;br /&gt;
&lt;br /&gt;
* Scott Bacon, Coastal Carolina University, sbacon AT coastal DOT edu&lt;br /&gt;
&lt;br /&gt;
Coastal Carolina University’s Kimbel Library recently implemented RFID to increase security, provide better inventory control over library materials and enable do-it-yourself patron services such as self checkout. &lt;br /&gt;
&lt;br /&gt;
I’ll give a quick overview of RFID and the components involved, and then talk about how our library utilized the technology. It takes a lot of research, time, money and no small amount of resourcefulness to make your library RFID-ready. I’ll show how we developed our project timeline, how we assessed and evaluated vendors and how we navigated the bid process. I’ll also talk about hardware and software installation, configuration and troubleshooting and will discuss our book and media collection encoding process. &lt;br /&gt;
&lt;br /&gt;
We encountered myriad issues with our vendor, the hardware and the software. Would we do it all over again? Should your library consider RFID? Caveats abound...&lt;br /&gt;
&lt;br /&gt;
== Coding an Academic Library Intranet in Drupal: Now We're Getting Organizized... ==&lt;br /&gt;
&lt;br /&gt;
* Scott Bacon, Coastal Carolina University, sbacon AT coastal DOT edu&lt;br /&gt;
&lt;br /&gt;
The Kimbel Library Intranet is coded in Drupal 7, and was created to increase staff communication and store documentation. This presentation will contain an overview of our intranet project, including the modules we used, implementation issues, and possible directions in future development phases. I won’t forget to talk about the slew of tasty development issues we faced, including dealing with our university IT department, user buy-in, site navigation, user roles, project management, training and mobile modules (or the lack thereof). And some other fun (mostly) true anecdotes will surely be shared. &lt;br /&gt;
&lt;br /&gt;
The main functions of Phase I of this project were to increase communication across departments and committees, facilitate project management and revise the library's shared drive. Another important function of this first phase was to host mission-critical documentation such as strategic goals, policies and procedures. Phase II of this project will focus on porting employee tasks into the centralized intranet environment. This development phase, which aims to replicate and automate the bulk of staff workflows within a content management system, will be a huge undertaking. &lt;br /&gt;
&lt;br /&gt;
We chose Drupal as our intranet platform because of its extensibility, flexibility and community support. We are also moving our entire library web presence to Drupal in 2013 and will be soliciting any advice on which modules to use/avoid and which third-party services to wrangle into the Drupal environment. Should we use Drupal as the back-end to our entire Web presence? Why or why not?&lt;br /&gt;
&lt;br /&gt;
== Hands off! Best Practices and Top Ten Lists for Code Handoffs ==&lt;br /&gt;
 &lt;br /&gt;
* Naomi Dushay, Stanford University Library, ndushay@stanford.edu&lt;br /&gt;
* Bess Sadler, Stanford University Library, bess@stanford.edu&lt;br /&gt;
&lt;br /&gt;
Transitions in who is the primary developer on an actively developed code base can be a source of frustration for everyone involved. We've tried to minimize that pain point as much as possible through the use of agile methods like test-driven development, continuous integration, and modular design. Has optimizing for developer happiness brought us happiness? What's worked, what hasn't, and what's worth adopting? How do you keep your project in a state where you can easily hand it off? &lt;br /&gt;
&lt;br /&gt;
== How to be an effective evangelist for your open source project ==&lt;br /&gt;
 &lt;br /&gt;
* Bess Sadler, Stanford University Library, bess@stanford.edu&lt;br /&gt;
&lt;br /&gt;
The difference between an open source software project that gains new adopters and new contributing community members (which is to say, a project that goes on existing for any length of time) and a project that doesn't often isn't a question of superior design or technology. It's more often a question of whether the advocates for the project can convince institutional leaders AND front-line developers that the project is stable and trustworthy. What are successful strategies for attracting development partners? I'll try to answer that and talk about what we could do as a community to make collaboration easier.  &lt;br /&gt;
&lt;br /&gt;
== Thoughts from an open source vendor - What makes a &amp;quot;good&amp;quot; vendor in a meritocracy? ==&lt;br /&gt;
&lt;br /&gt;
* Matt Zumwalt, Data Curation Experts / MediaShelf / Hydra Project, matt@curationexperts.com&lt;br /&gt;
&lt;br /&gt;
What is the role of vendors in open source?  What should be the position of vendors in a meritocracy?  What are the avenues for encouraging great vendors who contribute to open source communities in valuable ways?  How you answer these questions has a huge impact on a community, and in order to formulate strong answers, you need to be well informed.  Let’s glance at the business practicalities of this situation, beginning with 1) an overview of the viable profit models for open-source software, 2) some of the realities of vendor involvement in open source, and 3) an account of the ins &amp;amp; outs of compensation &amp;amp; equity structures within for-profit corporations.&lt;br /&gt;
&lt;br /&gt;
The topics of power &amp;amp; influence, fairness, community participation, software quality, employment and personal profit are fair game, along with software licensing, support,  sponsorship, closed source software and the role of sales people.&lt;br /&gt;
&lt;br /&gt;
This presentation will draw on personal experience from the past seven years spent bootstrapping and running MediaShelf, a small but prolific for-profit consulting company that focuses entirely on open source digital repository software.  MediaShelf has played an active role in creating the Hydra Framework and continuously contributes to maintenance of Fedora and Blacklight. Those contributions have been funded through consulting contracts for authoring &amp;amp; implementing open source software on behalf of organizations around the world.&lt;br /&gt;
&lt;br /&gt;
==Occam’s Reader: A system that allows the sharing of eBooks via Interlibrary Loan==&lt;br /&gt;
&lt;br /&gt;
*Ryan Litsey, Texas Tech University, Ryan DOT Litsey AT ttu.edu&lt;br /&gt;
*Kenny Ketner, Texas Tech University, Kenny DOT Ketner AT ttu.edu&lt;br /&gt;
&lt;br /&gt;
Occam’s Reader is a software platform that allows the transfer and sharing of electronic books between libraries via existing interlibrary loan software. Occam’s Reader allows libraries to meet the growing need to be able to share our electronic resources. In the ever-increasing digital world, many of our collection development plans now include eBook platforms. The problem with eBooks, however, is that they are resources that are locked into the home library. With Occam’s Reader we can continue the centuries-old tradition of resource sharing and also keep up with the changing digital landscape. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Using Puppet for configuration management when no two servers look alike ==&lt;br /&gt;
* Eugene Vilensky, Senior Systems Administrator, Northwestern University Library, evilensky northwestern edu&lt;br /&gt;
&lt;br /&gt;
Configuration management is hot because it allows one to scale to thousands of machines, all of which look alike, and tightly manage changes across the nodes. Infrastructure as code, implement all changes programmatically, yadda yadda yadda.&lt;br /&gt;
&lt;br /&gt;
Unfortunately, servers which have gone unmanaged for a long time do not look very similar to each other.  Variables come in many forms, usually because of some or all of the following: Who installed the server, where it was installed, where the image was sourced from, when it was installed, where additional packages were sourced, and what kind of software was hosted on it.&lt;br /&gt;
&lt;br /&gt;
Bringing such machines into your configuration management platform is no harder and no easier than some or all of the following options: 1) blow such machines away, start from scratch, and migrate your data; 2) find the lowest common baseline between the current state and the ideal state and start the work there; 3) implement new features/services on existing unmanaged machines, but manage the new features/services.&lt;br /&gt;
&lt;br /&gt;
I will describe our experiences at the library for all three options using the Puppet open-source tool on Enterprise Linux 5 and 6.&lt;br /&gt;
&lt;br /&gt;
== REST &amp;lt;b&amp;gt;IS&amp;lt;/b&amp;gt; Your Mobile Strategy ==&lt;br /&gt;
&lt;br /&gt;
* Richard Wolf, University of Illinois at Chicago, richwolf@uic.edu&lt;br /&gt;
&lt;br /&gt;
Mobile is the new hotness ... and you can't be one of the cool kids unless you've got your own mobile app ... but the road to mobility is daunting.  I'll argue that it's actually easier than it seems ... and that the simplest way to mobility is to bring your data to the party, create a REST API around the data, tell developers about your API, and then let the magic happen.  To make my argument concrete, I'll show (lord help me!) how to go from an interesting REST API to a fun iOS tool for librarians and the general public in twenty minutes.&lt;br /&gt;
&lt;br /&gt;
== ARCHITECTING ScholarSphere: How We Built a Repository App That Doesn't Feel Like Yet Another Janky Old Repository App ==&lt;br /&gt;
&lt;br /&gt;
* Dan Coughlin, Penn State University, danny@psu.edu&lt;br /&gt;
* Mike Giarlo, Penn State University, michael@psu.edu&lt;br /&gt;
&lt;br /&gt;
ScholarSphere is a web application that allows the Penn State research community to deposit, share, and manage its scholarly works.  It is also, as some of our users and our peers have observed, a repository app that feels much more like Google Docs or GitHub than earlier-generation repository applications.  ScholarSphere is built upon the Hydra framework (Fedora Commons, Solr, Blacklight, Ruby on Rails), MySQL, Redis, Resque, FITS, ImageMagick, jQuery, Bootstrap, and FontAwesome.  We'll talk about techniques we used to:&lt;br /&gt;
&lt;br /&gt;
* eliminate Fedora-isms in the application&lt;br /&gt;
* model and expose RDF metadata in ways that users find unobtrusive&lt;br /&gt;
* manage permissions via a UI widget that doesn't stab you in the face&lt;br /&gt;
* harvest and connect controlled vocabularies (such as LCSH) to forms&lt;br /&gt;
* make URIs cool&lt;br /&gt;
* keep the app snappy without venturing into the architectural labyrinth of YAGNI&lt;br /&gt;
* build and queue background jobs&lt;br /&gt;
* expose social features and populate activity streams&lt;br /&gt;
* tie checksum verification, characterization, and version control to the UI&lt;br /&gt;
* let users upload and edit multiple files at once&lt;br /&gt;
&lt;br /&gt;
The application will be demonstrated; code will be shown; and we solemnly commit to showing ABSOLUTELY NO XML.&lt;br /&gt;
&lt;br /&gt;
==Coding with Mittens==&lt;br /&gt;
&lt;br /&gt;
*Jim LeFager, DePaul University Library, jlefager@depaul.edu&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Working in an environment where developers have restricted access to servers and development areas, or where you are primarily working in multiple hosted systems with limited access, can be a challenge when you are attempting to incorporate new functionality or improve existing functionality.  Hosted web services offer the benefit that staff time is not dedicated to server maintenance and development, but customization can be difficult and at times impossible.  In many cases, incorporating current API functionality requires additional work beyond the original development work, which can be frustrating and inefficient.  The result can be a Frankenstein monster of web services that is confusing to the user and difficult to navigate.  &lt;br /&gt;
&lt;br /&gt;
This talk will focus on some effective best practices - and some not-so-great but necessary practices - that we have adopted to develop and improve our users’ experience, using JavaScript/jQuery and CSS to manipulate our hosted environments.  This will include a review of available tools that allow collaborative development in the cloud, examples of jQuery methods that have allowed us to take additional control of these hosted environments, and ways to track them using Google Analytics.  Included will be examples from Springshare Campus Guides, CONTENTdm and other hosted web spaces that have been ‘hacked’ to improve the UI.  &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Hacking the DPLA ==&lt;br /&gt;
* Nate Hill, Chattanooga Public Library,  nathanielhill AT gmail.com&lt;br /&gt;
* Sam Klein, Wikipedia, metasj AT gmail.com&lt;br /&gt;
&lt;br /&gt;
The Digital Public Library of America is a growing open-source platform to support digital libraries and archives of all kinds.  DPLA-alpha is available for testing, with data from six initial Hubs.  New APIs and data feeds are in development, with the next release scheduled for April.   &lt;br /&gt;
&lt;br /&gt;
Come learn what we are doing, how to contribute or hack the DPLA roadmap, and how you (or your favorite institution) can draw from and publish through it.  Larger institutions can join as a (content or service) hub, helping to aggregate and share metadata and services from across their {region, field, archive-type}.   We will discuss current challenges and possibilities (UI and API suggestions wanted!), apps being built on the platform, and related digitization efforts.&lt;br /&gt;
&lt;br /&gt;
DPLA has a transparent community and planning process; new participants are always welcome.  Half the time will be for suggestions and discussion.   Please bring proposals, problems, partnerships and possible paradoxes to discuss.&lt;br /&gt;
&lt;br /&gt;
== Introduction to SilverStripe 3.0 ==&lt;br /&gt;
 &lt;br /&gt;
* Ian Walls, University of Massachusetts Amherst, iwalls AT library DOT umass DOT edu&lt;br /&gt;
&lt;br /&gt;
SilverStripe is an open source Content Management System/development framework out of New Zealand, written in PHP, with a solid MVC structure.  This presentation will cover everything you need to know to get started with SilverStripe, including&lt;br /&gt;
* Features (and why you should consider SilverStripe)&lt;br /&gt;
* Requirements &amp;amp; Installation&lt;br /&gt;
* Model-View-Controller&lt;br /&gt;
* Key data types &amp;amp; configuration settings&lt;br /&gt;
* Modules&lt;br /&gt;
* Where to start with customization&lt;br /&gt;
* Community support and participation&lt;br /&gt;
&lt;br /&gt;
== Citation search in SOLR and second-order operators ==&lt;br /&gt;
 &lt;br /&gt;
* Roman Chyla, Astrophysics Data System, roman.chyla AT (cfa.harvard.edu|gmail.com)&lt;br /&gt;
&lt;br /&gt;
Citation search is basically about connections (Is the paper read by a friend of mine more important than others? Get me a paper read by somebody who cites many papers/is cited by many papers?), but the machinery behind citation search is surprisingly useful in many other areas.&lt;br /&gt;
&lt;br /&gt;
I will show the 'guts' of the new citation search for astrophysics; it is generic and can be applied recursively to any Lucene query. Some people would call it a second-order operation because it works with the results of the previous (search) function. The talk will cover technical details of the special query class and its collectors, how to add a new search operator, and how to influence relevance scores. Then you can type with me: friends_of(friends_of(cited_for(keyword:&amp;quot;black holes&amp;quot;) AND keyword:&amp;quot;red dwarf&amp;quot;))&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Managing Segmented Images and Hierarchical Collections with Fedora-Commons and Solr ==&lt;br /&gt;
&lt;br /&gt;
* David Lacy, Villanova University, david DOT lacy AT villanova.edu&lt;br /&gt;
&lt;br /&gt;
Many of the resources within our digital library are split into parts -- newspapers, scrapbooks and journals being examples of collections of individual scanned pages.  In some cases, groups of pages within a collection, or segments within a particular page, may also represent chapters or articles.&lt;br /&gt;
&lt;br /&gt;
We recently devised a procedure to extract these &amp;quot;segmented resources&amp;quot; into their own objects within our repository, and index them individually in our Discovery Layer.&lt;br /&gt;
&lt;br /&gt;
In this talk I will explain how we dissected and organized these newly created resources with an extension to our Fedora Model, and how we make them discoverable through Solr configurations that facilitate browsable hierarchical relationships and field-collapsed results that group items within relevant resources.&lt;br /&gt;
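The field-collapsed results described above rely on Solr's result grouping. A client-side sketch of the request parameters might look like this; the field name `parent_id_s` is a hypothetical stand-in for whatever field links a page-level record to its parent resource (`group`, `group.field`, `group.limit`, and `group.ngroups` are standard Solr grouping parameters):

```javascript
// Sketch: parameters for a Solr field-collapsing (result grouping) query that
// groups page/segment records under their parent resource.
function groupedQueryParams(field, limit) {
  return {
    group: 'true',
    'group.field': field,          // collapse hits sharing the same parent id
    'group.limit': String(limit),  // child records to show per group
    'group.ngroups': 'true'        // report the total group count for paging
  };
}

var params = groupedQueryParams('parent_id_s', 3);
// e.g. appended to /select along with q=..., these params make Solr return one
// group per parent resource, each containing up to 3 matching child records.
```

Grouping keeps the relevance ranking at the resource level while still surfacing the specific pages or segments that matched, which is exactly the behavior a discovery layer wants for multi-part objects.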
&lt;br /&gt;
== Google Analytics, Event Tracking and Discovery Tools==&lt;br /&gt;
 &lt;br /&gt;
* Emily Lynema, North Carolina State University Libraries. ejlynema AT ncsu DOT edu&lt;br /&gt;
* Adam Constabaris, North Carolina State University Libraries, ajconsta AT ncsu DOT edu&lt;br /&gt;
&lt;br /&gt;
The NCSU Libraries is using Google Analytics increasingly across its website as a replacement for usage tracking via Urchin. More recently, we have also begun to use the event tracking features in Google Analytics. This has allowed us to gather usage statistics for activities that don’t initiate new requests to the server, such as clicks that hide and show already-loaded content (as in many tabbed interfaces).  Aggregating these events together with pageview tracking in Google Analytics presents a more unified picture of patron activity and can help improve design of tools like the library catalog.  While assuming a basic understanding of the use of Google Analytics pageview tracking, this presentation will start with an introduction to the event tracking capabilities that may be less widely known. &lt;br /&gt;
&lt;br /&gt;
We’ll share library catalog usage data pulled from Google Analytics, including information about  features that are common across the newest wave of catalog interfaces, such as tabbed content, Google Preview, and shelf browse. We will also cover the approach taken for the technical implementation of this data-intensive JavaScript event tracking.&lt;br /&gt;
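The event-tracking calls described above can be sketched with the classic (ga.js-era) asynchronous API, where events are queued on the global `_gaq` array; the category, action, and label values here are illustrative, not NCSU's actual instrumentation:

```javascript
// Classic Google Analytics async event tracking: the _gaq queue is processed
// by ga.js once it loads, so calls are safe to make at any time.
var _gaq = _gaq || [];

function trackTabClick(tabName) {
  // Records an interaction (e.g. showing already-loaded tabbed content)
  // without any new request to the application server.
  _gaq.push(['_trackEvent', 'Catalog', 'tab-click', tabName]);
}

trackTabClick('Shelf Browse');
```

Because the event fires client-side, interactions that never hit the server (tab switches, show/hide toggles) still show up in reports alongside ordinary pageviews.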
&lt;br /&gt;
As a counterpart, we can demonstrate how we have begun to use Google Analytics event tracking in a proprietary vendor discovery tool (Serials Solutions Summon). While the same technical ideas govern this implementation, we can highlight the differences (read: challenges) inherent in utilizing this type of event tracking in a vendor-owned application vs. a locally developed application.&lt;br /&gt;
&lt;br /&gt;
Along the way, hopefully you’ll learn a little about why you might (or might not) want to use Google Analytics event tracking yourself and see some interesting catalog usage stats.&lt;br /&gt;
&lt;br /&gt;
== Actions speak louder than words: Analyzing large-scale query logs to improve the research experience ==&lt;br /&gt;
&lt;br /&gt;
* Raman Chandrasekar, Serials Solutions, Raman DOT Chandrasekar AT serialssolutions DOT com&lt;br /&gt;
* Ted Diamond, Serials Solutions, Ted DOT Diamond AT serialssolutions DOT com&lt;br /&gt;
&lt;br /&gt;
Analyzing anonymized query and click through logs leads to a better understanding of user behaviors and intentions and provides great opportunities to respond to users with an improved search experience. A large-scale provider of SaaS services, Serials Solutions is uniquely positioned to learn from the dataset of queries aggregated from the Summon service generated by millions of users at hundreds of libraries around the world.&lt;br /&gt;
 &lt;br /&gt;
In this session, we will describe our Relevance Metrics Framework and provide examples of insights gained during its development and implementation. We will also cover recent product changes inspired by these insights. Chandra and Ted, from the Summon dev team, will share insights and outcomes from this ongoing process and highlight how analysis of large-scale query logs helps improve the academic research experience.&lt;br /&gt;
&lt;br /&gt;
== Supporting Gaming in the College Classroom == &lt;br /&gt;
&lt;br /&gt;
*Megan O'Neill, Albion College, moneill AT albion DOT edu&lt;br /&gt;
&lt;br /&gt;
Faculty are increasingly interested both in teaching with games and in gamifying their courses. Introducing digital games and game support for faculty through the library makes a lot of sense, but it comes with a thorny set of issues. This talk will discuss our library's initial steps toward creating a digital gamerspace and game support infrastructure in the library, including:&lt;br /&gt;
1) The scope and acquisitions decisions that make the most sense for us, and 2) Some difficulties we've discovered in trying to get our collection, physical-, digital-, and head-space, and infrastructure up and going.&lt;br /&gt;
There will also be an extremely brief overview of WHY we decided to teach with games and to support gamification, what (if anything) to do about mobile gaming, and where games in education might be going.&lt;br /&gt;
&lt;br /&gt;
== Codecraft ==&lt;br /&gt;
 &lt;br /&gt;
* Devon Smith, OCLC Research, smithde@oclc.org&lt;br /&gt;
&lt;br /&gt;
We can think of and talk about software development as science, engineering, and craft. In this presentation, I'll talk about the craft aspect of software. From Wikipedia[1]: &amp;quot;In English, to describe something as a craft is to describe it as lying somewhere between an art (which relies on talent and technique) and a science (which relies on knowledge). In this sense, the English word craft is roughly equivalent to the ancient Greek term techne.&amp;quot; Of the questions who, what, where, why, when, and how, I will focus on why and how, with a minor in where.&lt;br /&gt;
&lt;br /&gt;
'''N.B.''': This will be a NON-TECHNICAL talk.&lt;br /&gt;
&lt;br /&gt;
[1] https://en.wikipedia.org/wiki/Craft#Classification&lt;br /&gt;
&lt;br /&gt;
== KnowBot: A Tool to Manage Reference and Beyond == &lt;br /&gt;
&lt;br /&gt;
* Sarah Park, Northwest Missouri State University&lt;br /&gt;
* Hong Gyu Han, Northwest Missouri State University&lt;br /&gt;
* Lori Mardis, Northwest Missouri State University&lt;br /&gt;
&lt;br /&gt;
Northwest Missouri State University has developed and used RefPole for collecting and analyzing reference statistics since 2005. RefPole was a tool to answer librarians’ needs to manage reference statistics and knowledge among librarians. It was an analysis tool for library leaders to make decisions on library operations. RefPole was adequate for internal use; however, it was developed for local access, which keeps the collective reference knowledge from being shared beyond the desktop and from being accessed by students and faculty. &lt;br /&gt;
&lt;br /&gt;
In 2011, responding to growing internal and external need, the library developed a web-based knowledge base management system, KnowBot, in Ruby on Rails. KnowBot offers public searching, rating, cloud tagging, librarian, and reporting interfaces. With the additional public interfaces, it also extended reference service to 24/7 availability. Librarians can record responses to questions with graphics and multimedia. The reporting interface features not only simple transactional data but also a multi-dimensional analytic tool that works in real time.&lt;br /&gt;
&lt;br /&gt;
The presenters will demonstrate KnowBot, share the source code, and discuss the use of the knowledge base to meet organizational and public needs.&lt;br /&gt;
&lt;br /&gt;
== Creating a (mostly) integrated Patron Account with SirsiDynix Symphony and ILLiad ==&lt;br /&gt;
&lt;br /&gt;
* Emily Lynema, North Carolina State University Libraries, ejlynema AT ncsu DOT edu&lt;br /&gt;
* Jason Raitz, North Carolina State University Libraries, jcraitz AT ncsu DOT edu&lt;br /&gt;
&lt;br /&gt;
In 2012, the NCSU Libraries at long last replaced a vendor “my account” tool that had been running unsupported for years. With the opportunity to create something new, one of the initial goals was a user experience that more seamlessly combined ILS data from SirsiDynix Symphony with ILL data from ILLiad. As a Kuali OLE beta partner, the NCSU Libraries is looking at an ILS migration within the next few years, so another goal was to build the interface on top of a standard so it would not have to be re-written as part of the migration. And the icing on the cake was a transition from a local Perl-based authentication system to the newer campus-wide Shibboleth authentication.&lt;br /&gt;
&lt;br /&gt;
This presentation will start with our design goals for a new user interface, include a demonstration, and describe the simple techniques used to provide a more integrated view of Symphony and ILLiad patron data. The backbone of the actual application is built using Zend’s PHP Framework and integrates eXtensible Catalog’s NCIP Toolkit to reach out to Symphony for patron data. In addition, we can talk about our successes (and difficulties) using jQuery Mobile to create a mobile view using the same underlying code as the web version. As one of our first Shibboleth applications here in the Libraries, this experience also taught us first-hand about some of the challenges of this type of single sign-on.&lt;br /&gt;
&lt;br /&gt;
== SKOS Name Authority in a DSpace Institutional Repository ==&lt;br /&gt;
&lt;br /&gt;
* Tom Johnson, Oregon State University, thomas.johnson@oregonstate.edu&lt;br /&gt;
&lt;br /&gt;
Name ambiguity is widespread in institutional repositories. Searching by author, users are typically greeted by a variety of misspellings and permutations of initials, collision between contributors with similar names, and other problems inherent in uncontrolled (often user-submitted) data. While DSpace has the technical capacity to use controlled names, it relies on outside authority files (from LoC, for example) to do the heavy lifting. For institutional authors, this leaves a major coverage gap and creates namespace pollution on a vast scale (try searching [http://authorities.loc.gov authorities.loc.gov] for &amp;quot;Johnson, John&amp;quot;, sometime). &lt;br /&gt;
&lt;br /&gt;
OSU is solving this problem with an institutionally scoped, low maintenance SKOS/FOAF &amp;quot;name authority file&amp;quot;. People in the IR are assigned URIs, names are maintained as skos:prefLabel, altLabel, or hiddenLabel. We've developed a simple Python application allowing staff to update individual &amp;quot;records&amp;quot;, and code on the DSpace side to access the dataset over SPARQL. This presentation will walk you through where we are now, limitations we've run into, and possibilities for the future.&lt;br /&gt;
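&lt;br /&gt;
As a minimal sketch of the idea (invented data, not OSU's implementation), the three SKOS label properties let many name variants resolve to a single URI:&lt;br /&gt;

```python
# Minimal sketch with invented data, not OSU's implementation. Each person
# gets a URI; name variants are kept as SKOS-style label properties.
records = {
    "http://ir.example.edu/person/42": {
        "prefLabel": "Johnson, Thomas",               # display form
        "altLabel": ["Johnson, Tom", "Johnson, T."],  # known variants
        "hiddenLabel": ["Jonson, Thomas"],            # misspellings, never displayed
    },
}

def resolve(name):
    # Return the URI whose preferred, alternate, or hidden label matches.
    for uri, labels in records.items():
        if (name == labels["prefLabel"]
                or name in labels["altLabel"]
                or name in labels["hiddenLabel"]):
            return uri
    return None
```

In practice the labels live in an RDF store and are matched over SPARQL, as described above, but the resolution logic is the same.&lt;br /&gt;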
&lt;br /&gt;
== Meta-Harvesting: Harvesting the Harvesters ==&lt;br /&gt;
&lt;br /&gt;
* Steven Anderson, Boston Public Library, sanderson AT bpl DOT org&lt;br /&gt;
* Eben English, Boston Public Library, eenglish AT bpl DOT org&lt;br /&gt;
&lt;br /&gt;
The emerging Digital Public Library of America (http://dp.la/) has proposed to aggregate digital content for search and discovery from several regional &amp;quot;service hubs&amp;quot; that will provide metadata via an as-yet-unspecified harvest process. As these service hubs are already harvesters of digital content from myriad sources themselves, the potential for &amp;quot;telephone game&amp;quot;-esque data loss and/or transmutation is a significant danger.&lt;br /&gt;
&lt;br /&gt;
This talk will discuss the experience of Digital Commonwealth (http://www.digitalcommonwealth.org/), a statewide digital repository currently in the process of being revamped, refactored, and redesigned by the Boston Public Library using the Hydra Framework. The repository, which aggregates data from over 20 institutions (some of which are themselves aggregators), is also undergoing a massive metadata cleanup effort as records are prepared to be ingested into the DPLA as one of the regional service hubs. Topics will include automated and manual processes for data crosswalking and cleanup, advanced OAI-PMH chops, and the implications of the (at this time still-emerging) metadata standards and APIs being created by the DPLA.&lt;br /&gt;
&lt;br /&gt;
Every crosswalk, transformation, migration, harvest, or export/ingest of metadata requires informed decision making and precise attention to detail. This talk will provide insight into key decision points and potential quagmires, as well as a discussion of the challenges of dealing with heterogeneous data from a wide variety of institutions.&lt;br /&gt;
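&lt;br /&gt;
To illustrate the harvesting plumbing involved (a minimal sketch, not Digital Commonwealth's code; the endpoint URL is hypothetical), OAI-PMH requests are plain HTTP GETs, with resumption tokens for flow control:&lt;br /&gt;

```python
from urllib.parse import urlencode

# Hedged sketch of OAI-PMH request construction; the endpoint URL in the
# example below is hypothetical.
def oai_url(base, verb, **params):
    # Once a resumptionToken is in play, OAI-PMH requires it to be the only
    # argument besides the verb.
    if "resumptionToken" in params:
        params = {"resumptionToken": params["resumptionToken"]}
    return base + "?" + urlencode({"verb": verb, **params})

first = oai_url("https://example.org/oai", "ListRecords", metadataPrefix="oai_dc")
```

A harvester GETs the first URL, then keeps requesting with the returned resumptionToken until the server stops returning one.&lt;br /&gt;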
&lt;br /&gt;
== Pay No More Than £3 // DIY Digital Curation ==&lt;br /&gt;
 &lt;br /&gt;
* Chris Fitzpatrick, World Maritime University, cf AT wmu DOT se&lt;br /&gt;
&lt;br /&gt;
Are you a small library or archive? &amp;lt;br&amp;gt;&lt;br /&gt;
Do you feel you are being held back by limited technical resources?&amp;lt;br&amp;gt;&lt;br /&gt;
Tired of waiting around for the Google Books Library people to reply to your emails? &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Join the club. Open-source software, hackerspaces, dirt cheap storage, cloud computing, and social media make it possible for any institution to start curating digitally. Today.&lt;br /&gt;
This talk will cover some of the guerrilla tactics being employed to drag a small university's large collection into the internet age. &lt;br /&gt;
&lt;br /&gt;
Topics will include: &lt;br /&gt;
*Cheap and effective document scanning methods.&lt;br /&gt;
*Valuable resources found at your local hackerspace / makerspace / fablab.&lt;br /&gt;
*Metadata enrichment for the not-so-rich and NLP for the people.&lt;br /&gt;
*Utilizing social media to crowdsource your collection building.&lt;br /&gt;
*How to post-process, OCR, PDF, and ePub your documents using Free software.&lt;br /&gt;
*Ways to build out a digital repository with no servers, code, or large 2-year grants required. (ok, maybe some code).&lt;br /&gt;
&lt;br /&gt;
== IIIF: One Image Delivery API to Rule Them All ==&lt;br /&gt;
&lt;br /&gt;
* Willy Mene, Stanford University Libraries, wmene AT stanford DOT edu&lt;br /&gt;
* Stuart Snydman, Stanford University Libraries, snydman AT stanford DOT edu&lt;br /&gt;
&lt;br /&gt;
The International Image Interoperability Framework was conceived by a group of research and national libraries determined to achieve the holy grail of seamless sharing and reuse of images in digital image repositories and applications.  By converging on common APIs for image delivery, metadata transmission, and search, it is catalyzing the development of a new wave of interoperable image delivery software that will surpass the current crop of image viewers, page turners, and navigation systems, and in so doing give scholars an unprecedented level of consistent and rich access to image-based resources across participating repositories.&lt;br /&gt;
&lt;br /&gt;
The IIIF Image API (http://library.stanford.edu/iiif/image-api) specifies a web service that returns an image in response to a standard http or https request. The URL can specify the region, size, rotation, quality characteristics and format of the requested image. A URL can also be constructed to request basic technical information about the image to support client applications.  The API could be adopted by any image repository or service, and can be used to retrieve static images in response to a properly constructed URL.&lt;br /&gt;
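&lt;br /&gt;
A minimal sketch of that URL construction (the server prefix and identifier here are hypothetical; the quality name and defaults follow version 1 of the API):&lt;br /&gt;

```python
# Hedged sketch of IIIF Image API (v1) URL construction; server prefix and
# identifier are hypothetical. The pattern is:
#   {base}/{identifier}/{region}/{size}/{rotation}/{quality}.{format}
def iiif_image_url(base, identifier, region="full", size="full",
                   rotation=0, quality="native", fmt="jpg"):
    return "%s/%s/%s/%s/%s/%s.%s" % (base, identifier, region, size,
                                     rotation, quality, fmt)

def iiif_info_url(base, identifier):
    # Basic technical information about the image, for client applications.
    return "%s/%s/info.json" % (base, identifier)

# Request a 600x400-pixel region, scaled to 300 pixels wide.
url = iiif_image_url("https://example.edu/iiif", "page-1",
                     region="0,0,600,400", size="300,")
```

Any compliant server should answer both URL shapes, which is what makes viewers interchangeable across repositories.&lt;br /&gt;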
&lt;br /&gt;
In this presentation we will review version 1 of the IIIF image api and validator, demonstrate applications by daring early adopters, and encourage widespread adoption.&lt;br /&gt;
&lt;br /&gt;
== Data-Driven Documents: Visualizing library data with D3.js ==&lt;br /&gt;
&lt;br /&gt;
* Bret Davidson, North Carolina State University Libraries, bret_davidson@ncsu.edu&lt;br /&gt;
&lt;br /&gt;
Several JavaScript libraries have emerged over the past few years for creating rich, interactive visualizations using web standards. Few are as powerful and flexible as D3.js[1]. D3 stands apart by merging web standards with a rich API and a unique approach to binding data to DOM elements, allowing you to apply data-driven transformations to a document. This emphasis on data over presentation has made D3 very popular; D3 is used by several prominent organizations including the New York Times[2], GOV.UK[3], and Trulia[4].&lt;br /&gt;
&lt;br /&gt;
Power usually comes at a cost, and D3 makes you pay with a steeper learning curve than many alternatives. In this talk, I will get you over the hump by introducing the core construct of D3, the Data-Join. I will also discuss when you might want to use D3.js, share some examples, and explore some advanced utilities like scales and shapes. I will close with a brief overview of how we are successfully using D3 at NCSU[5] and why investing time in learning D3 might make sense for your library.&lt;br /&gt;
&lt;br /&gt;
*[1]http://d3js.org/&lt;br /&gt;
*[2]http://www.nytimes.com/interactive/2012/08/24/us/drought-crops.html&lt;br /&gt;
*[3]https://www.gov.uk/performance/dashboard&lt;br /&gt;
*[4]http://trends.truliablog.com/vis/pricerange-boston/&lt;br /&gt;
*[5]http://www.lib.ncsu.edu/dli/projects/spaceassesstool&lt;br /&gt;
&lt;br /&gt;
== ''n'' Characters in Search of an Author ==&lt;br /&gt;
&lt;br /&gt;
* Jay Luker, IT Specialist, Smithsonian Astrophysics Data System, jluker@cfa.harvard.edu&lt;br /&gt;
&lt;br /&gt;
When it comes to author names, the disconnect between our metadata and what a user might enter into a search box presents challenges when trying to maximize both precision and recall [0]. When indexing a paper written by &amp;quot;Wäterwheels, A&amp;quot;, a goal should be to preserve as much of the original information as possible. However, users searching by author name may frequently omit the diaeresis and simply search for &amp;quot;Waterwheels&amp;quot;. The reverse of this scenario is also possible, i.e., your decrepit metadata contains only the ASCII &amp;quot;Supybot, Zoia&amp;quot;, whereas the user enters &amp;quot;Supybot, Zóia&amp;quot;. If recall is your highest priority, the simple solution is to always downgrade to ASCII when indexing and querying. However, this strategy sacrifices precision, as you will be unable to provide an &amp;quot;exact&amp;quot; search, necessary in cases where &amp;quot;Hacker, J&amp;quot; and &amp;quot;Häcker, J&amp;quot; really are two distinct authors.&lt;br /&gt;
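&lt;br /&gt;
The downgrade-to-ASCII step described above can be sketched in a few lines (a minimal version; production analyzers such as Lucene's ASCIIFoldingFilter handle many more mappings):&lt;br /&gt;

```python
import unicodedata

# Minimal sketch of the ASCII "downgrade" described above; production
# analyzers (e.g. Lucene's ASCIIFoldingFilter) cover many more mappings.
def ascii_fold(name):
    # Decompose accented characters (NFKD), then drop the combining marks.
    decomposed = unicodedata.normalize("NFKD", name)
    return decomposed.encode("ascii", "ignore").decode("ascii")
```

Indexing both the folded and the original forms preserves recall for unaccented queries while still allowing an exact search to tell Hacker, J and Häcker, J apart.&lt;br /&gt;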
&lt;br /&gt;
This talk will describe the strategy ADS[1] has devised for addressing common and edge-case problems faced when dealing with author name indexing and searching. I will cover not only our approach to the transliteration issue described above, but also how we deal with author initials vs. full first and/or middle names, authors who have published under different forms of their name, and authors who change their names (wha? people get married?!). Our implementation relies on Solr/Lucene[2], but my goal is an 80/20 mix of high- vs. low-level details to keep things both useful and stackgnostic [3].&lt;br /&gt;
&lt;br /&gt;
*[0] http://en.wikipedia.org/wiki/Precision_and_recall&lt;br /&gt;
*[1] http://www.adsabs.harvard.edu/&lt;br /&gt;
*[2] http://lucene.apache.org/solr/&lt;br /&gt;
*[3] http://en.wikipedia.org/wiki/Portmanteau&lt;br /&gt;
&lt;br /&gt;
== But, does it all still work? Testing Drupal with SimpleTest and CasperJS ==&lt;br /&gt;
&lt;br /&gt;
* David Kinzer - Lead Developer, Jenkins Law Library, dkinzer@jenkinslaw.org&lt;br /&gt;
* Chad Nelson  - Developer, Jenkins Law Library, cnelson@jenkinslaw.org&lt;br /&gt;
&lt;br /&gt;
Most developers know that they should be writing tests along with their code, but not every developer knows how or where to get started. This talk will walk through the nuts and bolts of testing a medium-sized Drupal site with many integrated moving parts. We’ll talk about unit testing of individual functions with [http://www.simpletest.org/en/overview.html SimpleTest] (and how that has changed how we write functions) and functional testing of the user interface with [http://casperjs.org/ casperjs]. We will discuss automating deployment with [http://www.phing.info/ phing], [http://drupal.org/project/drush drush], [http://jenkins-ci.org/ jenkins-ci] &amp;amp; github, which, combined with our tests, removes the “hold-your-breath” feeling before updating our live site. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Category:Code4Lib2013]]&lt;br /&gt;
&lt;br /&gt;
== Relations, Recommendations and PostgreSQL ==&lt;br /&gt;
&lt;br /&gt;
* William Denton, Web Librarian, York University, wdenton@yorku.ca&lt;br /&gt;
* Dan Scott, Systems Librarian, Laurentian University, dscott@laurentian.ca&lt;br /&gt;
&lt;br /&gt;
In 2012, a ragtag group of library hackers from various Ontario &lt;br /&gt;
universities, funded with only train tickets and fueled with Tim Hortons &lt;br /&gt;
coffee, assembled under the Scholars Portal banner to build a common &lt;br /&gt;
circulation data repository and recommendation engine: the Scholars &lt;br /&gt;
Portal Library Usage-based Recommendation Engine (SPLURGE). PostgreSQL, &lt;br /&gt;
the emerging darling of the old-school relational database world, is the &lt;br /&gt;
heart of SPLURGE, and the circulation data for Ontario's 400,000 &lt;br /&gt;
university students is its blood. Two of the contributors to this effort explore the PostgreSQL features &lt;br /&gt;
that SPLURGE uses to ease administration efforts, simplify application &lt;br /&gt;
development, and deliver high performance results. If you don't use &lt;br /&gt;
PostgreSQL for your data, you might want to try it after this &lt;br /&gt;
presentation; if you already do, you'll pick up some new tips and tricks.&lt;br /&gt;
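&lt;br /&gt;
As an illustration (the circ table and column names are invented, not SPLURGE's actual schema), a usage-based recommendation of this kind can be expressed as a single SQL aggregate, here wrapped in Python:&lt;br /&gt;

```python
# Illustration only: table and column names are invented, not SPLURGE's schema.
# "Patrons who borrowed this item also borrowed..." as one SQL aggregate.
RECOMMEND_SQL = """
SELECT c2.item_id, COUNT(*) AS strength
FROM circ c1
JOIN circ c2
  ON c1.patron_id = c2.patron_id
 AND c1.item_id != c2.item_id
WHERE c1.item_id = %(seed)s
GROUP BY c2.item_id
ORDER BY strength DESC
LIMIT 10;
"""
```

Run it with a bound parameter (for example via a psycopg2 cursor) rather than string interpolation; a real engine would add weighting and privacy filtering on top of raw co-occurrence counts.&lt;br /&gt;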
&lt;br /&gt;
&lt;br /&gt;
== A Cure for Romnesia: SiteStory Web Archiving ==&lt;br /&gt;
&lt;br /&gt;
* Harihar Shankar, Research Library, Los Alamos National Laboratory, harihar@lanl.gov&lt;br /&gt;
&lt;br /&gt;
The web changes constantly, erasing both inconvenient facts and&lt;br /&gt;
fictions.  At web-scale, preservation organizations cannot be expected&lt;br /&gt;
to keep up by using traditional crawling, and they already miss many&lt;br /&gt;
important versions.  The cure for this is to capture the interactions&lt;br /&gt;
between real browsers and the server, and push these into an archive&lt;br /&gt;
for safe keeping rather than trying to guess when pages change.&lt;br /&gt;
&lt;br /&gt;
Every time the Apache Web Server sends data to a browser, SiteStory’s&lt;br /&gt;
Apache Module also pushes this data to the SiteStory Web Archive. The&lt;br /&gt;
same version of a resource will not be archived more than once, no&lt;br /&gt;
matter how many times it has been requested.  The resulting archive is&lt;br /&gt;
effectively representative of a server's entire history, although&lt;br /&gt;
versions of resources that are never requested by a browser will also&lt;br /&gt;
never be archived.&lt;br /&gt;
&lt;br /&gt;
In this presentation I will give an overview of SiteStory, an&lt;br /&gt;
Open-Source project written in Java that runs as an application under&lt;br /&gt;
Tomcat 6 or greater. SiteStory’s Apache Module is written in C. I will&lt;br /&gt;
also demonstrate the TimeMap tool that visualizes versions of a&lt;br /&gt;
resource available in the SiteStory archive. The TimeMap tool is a&lt;br /&gt;
Firefox browser extension that plots versions of a resource on a&lt;br /&gt;
SIMILE timeline. Since the tool uses the Memento protocol, it can&lt;br /&gt;
also display versions of resources available in Memento compliant web&lt;br /&gt;
archives and content management systems.&lt;br /&gt;
&lt;br /&gt;
== Practical Relevance Ranking for 10 million books ==&lt;br /&gt;
 &lt;br /&gt;
* Tom Burton-West, University of Michigan Library, tburtonw@umich.edu&lt;br /&gt;
&lt;br /&gt;
[http://www.hathitrust.org/ HathiTrust Full-text search] indexes the full-text and metadata for over 10 million books.  There are many challenges in tuning relevance ranking for a collection of this size.  This talk will discuss some of the underlying issues, some of our experiments to improve relevance ranking, and our ongoing efforts to develop a principled framework for testing changes to relevance ranking.&lt;br /&gt;
&lt;br /&gt;
Some of the topics covered will include:&lt;br /&gt;
&lt;br /&gt;
* Length normalization for indexing the full-text of book-length documents&lt;br /&gt;
* Indexing granularity for books&lt;br /&gt;
&lt;br /&gt;
*Testing new features in Solr 4.0:&lt;br /&gt;
**New ranking formulas that should work better with book-length documents: BM25 and DFR.&lt;br /&gt;
**Grouping/Field Collapsing.  Can we index 3 billion pages and then use Solr's field collapsing feature to rank books according to the most relevant page(s)?&lt;br /&gt;
**Finite state automata/block trees for storing the in-memory term index. Will this allow us to support wildcards/truncation despite over 2 billion unique terms per index?&lt;br /&gt;
&lt;br /&gt;
*Relevance testing methodologies: query log analysis, click models, interleaving, A/B testing, and test-collection-based evaluation.&lt;br /&gt;
&lt;br /&gt;
*Testing of a new high-performance storage system to be installed in early 2013. We will report on any tests we are able to run prior to conference time.&lt;br /&gt;
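&lt;br /&gt;
Of the ranking formulas above, BM25 can be sketched in a few lines (a standard textbook form with the usual free parameters k1 and b; the length-normalization term is what makes document length matter for book-length texts):&lt;br /&gt;

```python
import math

# Sketch of BM25 for a single query term; k1 and b are the usual free
# parameters. tf: term frequency in the document; df: document frequency;
# n_docs: collection size; dl/avgdl: document length vs. the average.
def bm25_term_score(tf, df, n_docs, dl, avgdl, k1=1.2, b=0.75):
    idf = math.log(1 + (n_docs - df + 0.5) / (df + 0.5))
    length_norm = 1 - b + b * (dl / avgdl)
    return idf * tf * (k1 + 1) / (tf + k1 * length_norm)
```

Setting b=0 disables length normalization entirely, one of the knobs experiments like those above can turn when pages, chapters, and whole books are all candidate index units.&lt;br /&gt;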
&lt;br /&gt;
== Browser/Javascript Integration Testing with Ruby ==&lt;br /&gt;
&lt;br /&gt;
* Jessie Keck, Stanford University, jkeck at stanford dot edu&lt;br /&gt;
&lt;br /&gt;
It's nearly impossible to build a rich web application without JavaScript. We have a lot of great patterns to follow, such as progressive enhancement, to make sure our rich web applications are usable, accessible, and testable. However, when JavaScript is involved, the possibility exists that bugs can be introduced that won't get caught by most unit and integration testing frameworks.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
This is where Watir (pronounced water) comes in. Watir can be used with popular Ruby testing frameworks like RSpec and Capybara. This talk will show how to use the combination of these tools to write RSpec tests that use Watir to spin up an application in a variety of browsers, navigate the application, and make assertions about the page using Capybara.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Tests using Watir are written in Ruby, but they don't necessarily need to test a Ruby application. You can test any application that you can point a browser at, so there are a wide variety of potential uses for tests written with Watir.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Category:Code4Lib2013]]&lt;/div&gt;</summary>
		<author><name>Jkeck</name></author>	</entry>

	<entry>
		<id>https://wiki.code4lib.org/index.php?title=2012_Craft_Brew_Drinkup&amp;diff=10817</id>
		<title>2012 Craft Brew Drinkup</title>
		<link rel="alternate" type="text/html" href="https://wiki.code4lib.org/index.php?title=2012_Craft_Brew_Drinkup&amp;diff=10817"/>
				<updated>2012-02-03T18:59:37Z</updated>
		
		<summary type="html">&lt;p&gt;Jkeck: /* Sign up */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;'''Wednesday, February 8, after 9 PM, in hospitality suite'''&lt;br /&gt;
&lt;br /&gt;
The Craft Brew Drinkup at Code4lib 2012 is all about sharing and enjoying good beer with fellow conference attendees. The idea is to bring bottles of your favorite beers.&lt;br /&gt;
&lt;br /&gt;
While you're not obligated to bring ''local beers'' from wherever you're from, participants are definitely encouraged to bring beer that you think is special and might be somewhat hard for others outside your area to find. Homebrew is especially welcome. Sign up below with your name, where you're from, and list a few brews or bottles you're thinking about (but not necessarily committing to) bringing along. You can also request that people bring specific beer if you so desire, but don't necessarily expect that your wishes will be granted.&lt;br /&gt;
&lt;br /&gt;
''If you do not check bags or otherwise cannot arrange to bring beer from where you call home, you may be interested in buying beer from a local beer store. See the &amp;quot;Buying Beer in Seattle&amp;quot; section below for suggestions.''&lt;br /&gt;
&lt;br /&gt;
__TOC__&lt;br /&gt;
&lt;br /&gt;
=== Sign up ===&lt;br /&gt;
&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; cellpadding=&amp;quot;5&amp;quot; class=&amp;quot;sortable&amp;quot; &lt;br /&gt;
! Name&lt;br /&gt;
! Location&lt;br /&gt;
! Brews or Breweries I might bring&lt;br /&gt;
! Requests&lt;br /&gt;
|-&lt;br /&gt;
| anarchivist&lt;br /&gt;
| New Haven CT/Brooklyn NY&lt;br /&gt;
| '''Purchased''': Element Brewing Dark Element, Element Brewing Extra Special Oak, Olde Burnside Ten Penny Ale Reserve, Cisco Captain Swain's Extra Stout&lt;br /&gt;
'''Special bottles''': Who knows? Something special.&lt;br /&gt;
| Imperial porters/stouts; really funky-/Brett-tasting beers or wild ales; highly-hopped stuff; interesting session beers&lt;br /&gt;
|-&lt;br /&gt;
| kayiwa&lt;br /&gt;
| Chicago IL&lt;br /&gt;
| Bourbon County Stout; New Glarus Barleywine&lt;br /&gt;
| Barleywines; Aged Stouts; Anything from Deschutes&lt;br /&gt;
(psst, Francis: I've got some homebrewed barleywine aging in the basement; i won't be in Seattle but I'll bring some to C4L-Midwest -[[User:Kenirwin|Kenirwin]] 13:26, 29 January 2012 (PST))&lt;br /&gt;
|-&lt;br /&gt;
| danwho&lt;br /&gt;
| San Diego, CA&lt;br /&gt;
| Alpine Brewery Exponential Hoppiness; Iron Fist; maybe Lost Abbey; Bud Light&lt;br /&gt;
| hoppy imperials, sours, funky farmhouses.  Also, I'd vote Wednesday or Tuesday evening since a lot of folks are doing the Microsoft tour and/or newcomer dinners Monday&lt;br /&gt;
|-&lt;br /&gt;
| declan&lt;br /&gt;
| San Diego, CA&lt;br /&gt;
| hmm, looking over the cellar... Parabola, Black Tuesday, Cherry Adam, Angel Share, Captain stout, Silva.... we'll see!&lt;br /&gt;
| dark, black stuff.  like my heart.  Or sours.  Or Belgies.  Founders, Bells, New Glarus, Goose Island.&lt;br /&gt;
|-&lt;br /&gt;
| awead&lt;br /&gt;
| Cleveland, OH&lt;br /&gt;
| Founders Porter, some new IPA I found...&lt;br /&gt;
| Stuff that doesn't suck.&lt;br /&gt;
|-&lt;br /&gt;
|bibliotechy&lt;br /&gt;
|Atlanta, Ga&lt;br /&gt;
|Some Terrapin beers... Hopsecutioner,  Sweetwater Brewery Exodus Porter if it is still around&lt;br /&gt;
|Boreale noire, rousse or cuivre from Montreal! &lt;br /&gt;
|-&lt;br /&gt;
|sdellis&lt;br /&gt;
|Lambertville, NJ&lt;br /&gt;
|Riverhorse... (possibly Hop Hazard, but I'll see what's fresh).  Maybe Lionshead (pilsner) from Doylestown, PA (legend has it you can drink as much as you want and never get a hangover).&lt;br /&gt;
|Bitters, pub style, IPAs, brown ales&lt;br /&gt;
|-&lt;br /&gt;
|jastirn&lt;br /&gt;
|Kansas City, KS&lt;br /&gt;
|Whatever I can get from Wilderness Brewing (KC), Free State (Lawrence, KS), Schlafly Imperial Stout (St. Louis), and Blvd Smokestack (KC) (for Danwho)&lt;br /&gt;
| More blueberry stout, stouts, lagers, spicy&lt;br /&gt;
|-&lt;br /&gt;
|HLPitts&lt;br /&gt;
|Salem, OR&lt;br /&gt;
|Hopworks barleywine, Rogue Chocolate Stout, Seven Brides porter, Wandering Aengus cider, and a small variety from Deschutes (including Obsidian for anarchivist)&lt;br /&gt;
|stouts/porters, sours, red ales&lt;br /&gt;
|-&lt;br /&gt;
|bohyunkim&lt;br /&gt;
|Miami, FL&lt;br /&gt;
|same as last year - canned beers from Oskar Blues brewery in Colorado unless I spot something better&lt;br /&gt;
|cider, Rogue Dead Guy, malty, fruity, blonde/golden ale &lt;br /&gt;
|-&lt;br /&gt;
|carmendarlene&lt;br /&gt;
|San Diego, CA&lt;br /&gt;
|something from SoCal...Maybe more Alpine. Going shopping at the Best Damn Beer Store later this week.&lt;br /&gt;
|New Glarus, Goose Island, Three Floyds, Cantillon...stuff that I can't get in San Diego. &lt;br /&gt;
|-&lt;br /&gt;
|flyingzumwalt &amp;amp; jcoyne&lt;br /&gt;
|Minneapolis, MN&lt;br /&gt;
|Surly Coffee Bender &amp;amp; Surly Cynic, Bell's Two Hearted, Lift Bridge Farm Girl, Crispin Cider&lt;br /&gt;
|Revivalist beers (i.e. [http://www.yardsbrewing.com/ales_poor-richards-tavern-spruce.asp Yard's Revolutionary Beers]), New Glarus, Yuengling&lt;br /&gt;
|-&lt;br /&gt;
|singlesoliloquy&lt;br /&gt;
|St. Louis, MO&lt;br /&gt;
|Schlafly, Six Row.&lt;br /&gt;
|Good pilsners.&lt;br /&gt;
|-&lt;br /&gt;
|pberry&lt;br /&gt;
|Chico, CA&lt;br /&gt;
|Hope to buy Chico stuff in SEA, Bigfoot was just released.&lt;br /&gt;
|Ales&lt;br /&gt;
|-&lt;br /&gt;
|calvinmah&lt;br /&gt;
|Vancouver, Canada&lt;br /&gt;
|driving to SEA so I'll bring a crate&lt;br /&gt;
|Beer&lt;br /&gt;
|-&lt;br /&gt;
|tara robertson&lt;br /&gt;
|Vancouver, Canada&lt;br /&gt;
|two limited release beers from [http://gib.ca/beer/ Granville Island Brewing]: Fresh Hop ESP, Imperial IPA&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|younga, ward, jeff&lt;br /&gt;
|Seattle, WA&lt;br /&gt;
|Random assortment of growlers: Georgetown Brewery, Big Time, Fremont, Epic Ales.&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|lrobare&lt;br /&gt;
|Eugene, OR&lt;br /&gt;
|Ninkasi, probably Total Domination and something else&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|scollett&lt;br /&gt;
|Seattle, WA&lt;br /&gt;
|Live in Berkeley, CA, but will buy local or raid the beer stash of my Seattle relatives.&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|lisapisa77&lt;br /&gt;
|Reno, NV&lt;br /&gt;
|Ichthyosaur &amp;quot;Icky&amp;quot; IPA from Great Basin and probably something else&lt;br /&gt;
|Alagash or Victory or brown ales&lt;br /&gt;
|-&lt;br /&gt;
|chrpr&lt;br /&gt;
|New York, NY&lt;br /&gt;
|Brooklyn Sorachi Ace, Southampton Saison, Probably some other stuff&lt;br /&gt;
|Sours, Farmhouse, Misc. high abv goodness...&lt;br /&gt;
|-&lt;br /&gt;
|carboy&lt;br /&gt;
|Arlington, TX&lt;br /&gt;
|Yeti, Mephistopheles&lt;br /&gt;
|Imperial stout, IPA, barleywine&lt;br /&gt;
|-&lt;br /&gt;
|mbaggett&lt;br /&gt;
|Knoxville, TN&lt;br /&gt;
|I won't be checking a bag, but I'll be raiding all the Seattle beer spots this weekend. I hope to surprise everyone with a bottle of Pliny the Elder or at least the new Oak Aged Espresso Yeti.&lt;br /&gt;
|Double IPAs, West Coast IPAs, Saisons and Sours&lt;br /&gt;
|-&lt;br /&gt;
|dlovins&lt;br /&gt;
|New York,  NY&lt;br /&gt;
| Not sure. Something local&lt;br /&gt;
|&amp;lt;del&amp;gt;Maybe a hefeweizen of some sort&amp;lt;/del&amp;gt; something good in any case&lt;br /&gt;
|-&lt;br /&gt;
|saverkamp&lt;br /&gt;
|Iowa City, IA&lt;br /&gt;
|Something from Good People (AL), Back Forty (AL), maybe also Millstream (IA) or Peacetree (IA)&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|dileshni&lt;br /&gt;
|Toronto, ON&lt;br /&gt;
|Muskoka cream ale &amp;amp; maybe some Flying Monkey/ Great Lakes&lt;br /&gt;
|Cookies.&lt;br /&gt;
|-&lt;br /&gt;
|chick&lt;br /&gt;
|Berkeley&lt;br /&gt;
|Best I can find between now and then&lt;br /&gt;
|Chocolate Bacon Candy&lt;br /&gt;
|-&lt;br /&gt;
|jeg&lt;br /&gt;
|Charlottetown, PEI&lt;br /&gt;
|Gahan IPA, Brown, Might pickup something else on the way&lt;br /&gt;
|Hops. Enough hops to peel paint off walls.&lt;br /&gt;
|-&lt;br /&gt;
|jkeck&lt;br /&gt;
|SF Bay Area&lt;br /&gt;
|Won't be checking baggage, so I will pick up something local.&lt;br /&gt;
|All kinds of IPAs. Hoppy beers. Bacon.&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
=== Suggestions ===&lt;br /&gt;
&lt;br /&gt;
'''Add using the following format: (person who you are making the request of): (your request) - (your name)'''&lt;br /&gt;
&lt;br /&gt;
* Unnamed persons from the Keystone State: Sly Fox, any special Weyerbacher - anarchivist&lt;br /&gt;
* anyone: Boulevard smokestack series - danwho&lt;br /&gt;
* whosoever brought blueberry stout last year, more of that please - jastirn&lt;br /&gt;
* Oregonians/PNW folks: Deschutes Hop Henge (cuz it's seasonal) and Obsidian (cuz I like it) - anarchivist&lt;br /&gt;
* younga/Seattlites: Georgetown Donkey Deux; Georgetown Braggott - anarchivist&lt;br /&gt;
* if by chance anyone is coming from Salt Lake City: I would love Big Bad Baptist from Epic. Or the Wit if it's available again (I think it's the wrong season though). - HLPitts&lt;br /&gt;
&lt;br /&gt;
=== Buying Beer in Seattle ===&lt;br /&gt;
&lt;br /&gt;
from an email to the code4lib list: &lt;br /&gt;
&lt;br /&gt;
: I can think of three good bottleshops (all w/ taps in case you want a growler) that are located on bus lines from downtown:&lt;br /&gt;
:&lt;br /&gt;
: * [http://bottleworksbeerstore.blogspot.com/ Bottleworks]: Probably the shop I frequent the most. Take the 16 to Wallingford.&lt;br /&gt;
: * [http://www.lastdropbeershop.com/ Last Drop]: Take the 71, 72, or 73 north from downtown and get off at 80th.&lt;br /&gt;
: * [http://www.seattlebeerauthority.com/ Beer Authority]: Probably the quickest trip from downtown on the 522. Get off at the 125th St stop in Lake City and walk north a couple of blocks.&lt;br /&gt;
: * [http://www.fullthrottlebottles.com/ Full Throttle Bottles]: Buses 131, 106, or 23; about a 30-minute ride.&lt;br /&gt;
: * Also, QFC (large grocery store chain) usually has a great selection.&lt;br /&gt;
: * Lots of other pub/beer places noted on [http://g.co/maps/4m5pk the map]&lt;br /&gt;
&lt;br /&gt;
=== Disclaimers === &lt;br /&gt;
&lt;br /&gt;
* This is an unofficial event organized by attendees of Code4lib 2012.&lt;br /&gt;
* All guests at the Drinkup must be 21 years of age or over with a [http://www.cherylslastcall.com/pdfs/Acceptable-ID-Forms.pdf valid form of ID].&lt;br /&gt;
* Any participation in the Drinkup is at your own risk.&lt;br /&gt;
* All guests are expected to drink responsibly and behave appropriately.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Packing and Shipping Beer ===&lt;br /&gt;
&lt;br /&gt;
If you're flying to Code4lib, you will have to consider how to bring your beer. Some attendees in past years have packed beer in their checked luggage, and others have purchased a beer shipper that was checked separately as luggage. In any event, '''you will not be able to bring beer in carry-on luggage.'''&lt;br /&gt;
&lt;br /&gt;
The following are links to resources that provide info on packing your beer for transit.&lt;br /&gt;
&lt;br /&gt;
* [http://barlowbrewing.com/2010/11/11/how-to-pack-and-ship-beer/ How to pack and ship beer]&lt;br /&gt;
* [http://baltimoresnacker.blogspot.com/2009/06/how-to-pack-beer-and-wine-into-your.html How to pack beer and wine into your luggage]&lt;br /&gt;
* [http://beeradvocate.com/forum/read/3880083 Flying With Beer (Beer Advocate forums)]&lt;br /&gt;
* [http://beeradvocate.com/forum/read/4364472 Shipping beer while on business travel (Beer Advocate forums)]&lt;br /&gt;
* [http://www.mrboxonline.com/bottle-styrofoam-beer-shipper-p-7579.html A sample styrofoam beer shipper/box combo]&lt;/div&gt;</summary>
		<author><name>Jkeck</name></author>	</entry>

	<entry>
		<id>https://wiki.code4lib.org/index.php?title=2012_preconference_proposals&amp;diff=10735</id>
		<title>2012 preconference proposals</title>
		<link rel="alternate" type="text/html" href="https://wiki.code4lib.org/index.php?title=2012_preconference_proposals&amp;diff=10735"/>
				<updated>2012-02-02T00:51:35Z</updated>
		
		<summary type="html">&lt;p&gt;Jkeck: /* Blacklight */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Proposals for 2012 Code4LibCon Preconferences=&lt;br /&gt;
Proposals closed Sunday, November 20, 2011, so that we could finalize the list and add them to registration. (The deadline for preconference proposals has passed.)&lt;br /&gt;
&lt;br /&gt;
Spaces available: main meeting room (max 275) + 5 breakout rooms (max 30-50). &lt;br /&gt;
&lt;br /&gt;
'''Please include a &amp;quot;Contact/Responsible Individual&amp;quot; name and email address so we know who is willing to put on the proposed precon.&lt;br /&gt;
'''&lt;br /&gt;
==Full Day==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Developing applications using REST web services ===&lt;br /&gt;
&lt;br /&gt;
Been hearing about web services but don’t know where to start to build something? Have you built applications that use read services but are stumped by OAuth, Content Negotiation and HTTP Headers? Come dig in and learn how to build applications that interact with both read and write REST services. We’ll cover the basic principles and practices of REST services and discuss the Atom Publishing Protocol as a REST service and its extensibility. The group will examine and test the CouchDB HTTP API by building a simple list creation tool. You’ll learn how OCLC’s platform web services leverage Atom to expose the data and business processes from OCLC’s library systems. By the end of the session, you’ll know the basic principles of REST services, be able to perform Create, Read, Update and Delete operations via REST and be able to authenticate to REST services via API keys and OAuth.&lt;br /&gt;
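The Create, Read, Update and Delete operations described above can be sketched with nothing but Python's standard library. This is an illustrative sketch, not part of the session materials: the CouchDB URL, database name, and document names are hypothetical placeholders, and the requests are only constructed here (actually sending them requires a running server).&lt;br /&gt;

```python
import json
import urllib.request

# Hypothetical CouchDB endpoint and database name -- placeholders only.
BASE = "http://localhost:5984/lists"

def make_request(method, doc_id, body=None):
    """Build (but do not send) an HTTP request for one CRUD operation."""
    data = json.dumps(body).encode("utf-8") if body is not None else None
    req = urllib.request.Request(f"{BASE}/{doc_id}", data=data, method=method)
    req.add_header("Content-Type", "application/json")
    return req

create = make_request("PUT", "groceries", {"items": ["milk", "hops"]})   # Create
read   = make_request("GET", "groceries")                                # Read
update = make_request("PUT", "groceries", {"items": [], "_rev": "1-x"})  # Update
delete = make_request("DELETE", "groceries?rev=1-x")                     # Delete

# Against a live server, each would be sent with urllib.request.urlopen(req).
print(create.get_method(), create.full_url)
```

API-key authentication, one of the session topics, would typically be one more `add_header` call on each request; OAuth adds a token-acquisition step before that.&lt;br /&gt;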
&lt;br /&gt;
Come ready to learn and code!&lt;br /&gt;
&lt;br /&gt;
Presenter: Karen Coombs - coombsk at oclc dot org&lt;br /&gt;
&lt;br /&gt;
==== Interest in Attending ====&lt;br /&gt;
&lt;br /&gt;
*Sam Kome&lt;br /&gt;
*Ray Schwartz (schwartzr2@wpunj.edu)&lt;br /&gt;
*Jim Robinson&lt;br /&gt;
*David Bucknum&lt;br /&gt;
*Jean Rainwater&lt;br /&gt;
*Joshua Gomez&lt;br /&gt;
*Andy Kohler&lt;br /&gt;
*Michael North&lt;br /&gt;
* Tom Keays (keaysht at lemoyne dot edu)&lt;br /&gt;
*Charlie Morris&lt;br /&gt;
*Michael Lindsey&lt;br /&gt;
* Kåre Fiedler Christiansen (morning only)&lt;br /&gt;
* Jørn Thøgersen&lt;br /&gt;
* Michael Poltorak Nielsen&lt;br /&gt;
* Dre&lt;br /&gt;
* Andrew Darby&lt;br /&gt;
* Timothy Clarke (tclarke@muhlenberg.edu)&lt;br /&gt;
* Keith Folsom&lt;br /&gt;
* Rebecca Jones&lt;br /&gt;
* Michael Doran (doran@uta.edu)&lt;br /&gt;
* Ray Henry (ray dot henry at pcc dot edu)&lt;br /&gt;
* Stephanie Collett&lt;br /&gt;
* Bohyun Kim&lt;br /&gt;
* Matt Connolly&lt;br /&gt;
* Cynthia Ng&lt;br /&gt;
* Justin Littman&lt;br /&gt;
&lt;br /&gt;
==Half Day Morning==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Linkfest ===&lt;br /&gt;
&lt;br /&gt;
We've had talks and sessions galore about Linked Data at code4lib in past years.  Let's focus on linking.  Bring data you want to publish, link to, or link from, along with your ideas about new ways we can make data linking part of our regular approach to putting our libraries' content and services on the web.  At the start of the session we'll run a quick poll to see who wants to link to what and how, and we'll pair or group up and get to work from there.  May a kajillion links bloom!&lt;br /&gt;
&lt;br /&gt;
If you need an &amp;quot;intro to linked data&amp;quot; we can prep a good list of readings/talks to review before you come.  But please come ready to link!&lt;br /&gt;
&lt;br /&gt;
Organizer type person:  Dan Chudnov, GWU Libraries, @dchud or dchud at gwu edu&lt;br /&gt;
&lt;br /&gt;
==== Interest in Attending ====&lt;br /&gt;
* Becky Yoose&lt;br /&gt;
* Tom Johnson&lt;br /&gt;
* Ed Summers&lt;br /&gt;
* bernardo gomez ( bgomez at emory dot edu )&lt;br /&gt;
* William Gunn&lt;br /&gt;
* Jason Ronallo&lt;br /&gt;
* Keri Thompson&lt;br /&gt;
* David Lacy&lt;br /&gt;
* Corey A Harper&lt;br /&gt;
* Matt Phillips (mphillips@law.harvard.edu)&lt;br /&gt;
* Declan Fleming&lt;br /&gt;
* Shaun Ellis (shaune@princeton.edu)&lt;br /&gt;
* Wendy Robertson&lt;br /&gt;
* Joel Richard (richardjm AT si DOT edu)&lt;br /&gt;
* Devon Smith&lt;br /&gt;
* Ron Peterson (ronp@udel.edu)&lt;br /&gt;
* Scott Hanrath (shanrath AT ku DOT edu)&lt;br /&gt;
* Jason Stirnaman (jstirnaman AT kumc DOT edu)&lt;br /&gt;
* Sean Chen&lt;br /&gt;
* Laura Smart&lt;br /&gt;
* Tommy Ingulfsen&lt;br /&gt;
&lt;br /&gt;
=== What's New in Solr ===&lt;br /&gt;
&lt;br /&gt;
UPDATE: Erik won't be making it to Seattle, but will tune in and call in during that time slot as desired.  Discuss Solr!!!  I'll be lurking and helping out however I can.&lt;br /&gt;
&lt;br /&gt;
This session will bring folks up to speed on the latest developments in Lucene and Solr.  There are always lots of new capabilities, as well as tips and tricks on using Solr in clever and powerful ways.&lt;br /&gt;
&lt;br /&gt;
Presenter: Erik Hatcher - erik . hatcher @ lucidimagination dot com (remotely calling in and/or via IRC)&lt;br /&gt;
&lt;br /&gt;
==== Interest in Attending ====&lt;br /&gt;
* &amp;quot;Gabriel Farrell&amp;quot; &amp;lt;gsf24@drexel.edu&amp;gt;&lt;br /&gt;
* &amp;quot;Erik Hetzner&amp;quot; &amp;lt;erik.hetzner AT ucop BORK edu&amp;gt;&lt;br /&gt;
* &amp;quot;Michael B. Klein&amp;quot; &amp;lt;mbklein@gmail&amp;gt;&lt;br /&gt;
* Demian Katz (demian DOT katz AT villanova DOT edu)&lt;br /&gt;
* &amp;quot;Mark Mounts&amp;quot; &amp;lt;mark.mounts@dartmouth.edu&amp;gt;&lt;br /&gt;
* Anoop Atre ~ anoop.atre AT mnsu . edu&lt;br /&gt;
* David Isaak &amp;lt;david.isaak@kpchr.org&amp;gt;&lt;br /&gt;
* John Pillans &amp;lt;jpillan@indiana.edu&amp;gt;&lt;br /&gt;
* John Wynstra (john.wynstra@uni.edu)&lt;br /&gt;
* mark a. matienzo (mark at matienzo dot oh are gee)&lt;br /&gt;
* Sepehr Mavedati (sepehr DOT mavedati AT utoronto DOT ca)&lt;br /&gt;
* Mads Villadsen&lt;br /&gt;
* Jonathan Rochkind&lt;br /&gt;
* Shahin Sahebi (shahin.ezzatsahebi at utoronto dot ca)&lt;br /&gt;
* Naomi Dushay (ndushay at stanford dot edu)&lt;br /&gt;
* Jeremy Nelson&lt;br /&gt;
* Kirk Hess &amp;lt;kirkhess@illinois.edu&amp;gt;&lt;br /&gt;
* Gary Thompson&lt;br /&gt;
* Larry Baerveldt &amp;lt;lrbaerveldt@gmail.com&amp;gt;&lt;br /&gt;
* Dennis Schafroth &amp;lt;dennis @ indexdata.com&amp;gt;&lt;br /&gt;
* Bobbi Fox &amp;lt;bobbi_fox at harvard dot edu&amp;gt;&lt;br /&gt;
* Ed Fugikawa &amp;lt;ed at coalliance dot org&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Knocking Down Silos: Tools and Approaches for Simplifying Discovery ===&lt;br /&gt;
&lt;br /&gt;
What strategies have you used to merge silos to give users a more streamlined search experience? How are libraries using tools like Drupal, Islandora, Dublin Core, Solr and Blacklight to make article, catalog and/or repository content discoverable via a single interface? If you’re interested in these issues, challenges and conundrums, join us for a morning of thinking, dreaming and scheming.&lt;br /&gt;
&lt;br /&gt;
Speakers/Facilitators will be:&lt;br /&gt;
 - Thom Cox - Manager of Library Information Technology Services - Tufts University&lt;br /&gt;
 - Ken Varnum – Web Systems Manager - University of Michigan Libraries&lt;br /&gt;
 - Stephen Westman – Analyst Programmer, Emerging Technologies and Services - Oregon State University Libraries &lt;br /&gt;
&lt;br /&gt;
Contact:  Margaret Mellinger - margaret dot mellinger at oregonstate dot edu&lt;br /&gt;
&lt;br /&gt;
==== Interest in Attending ====&lt;br /&gt;
&lt;br /&gt;
*David Uspal (david DOT uspal AT villanova DOT edu)&lt;br /&gt;
*Tammy Allgood Wolf&lt;br /&gt;
*Wayne Schneider&lt;br /&gt;
*Laney McGlohon&lt;br /&gt;
*Andrea Schurr (Andrea-Schurr AT utc DOT edu)&lt;br /&gt;
* &amp;quot;Kevin S. Clarke&amp;quot; &amp;lt;ksclarke@gmail&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Half Day Afternoon==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Git -r done === &lt;br /&gt;
&lt;br /&gt;
A session to cover all things Git, everyone's favorite distributed version control system.  This session should cover a little bit of the history of Git, how it works, and how it differs from other version control systems like SVN.  Practical application should also be covered, including how to clone existing repos and contribute code back to them, how to host your own repository, and best practices for setting up a distributed network.&lt;br /&gt;
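For flavor, the basics mentioned above (creating a repository, committing, inspecting history) can be sketched by driving git from Python's standard library. This is an illustrative sketch, not session material: it assumes the git binary is on your PATH, and the repository location, user identity, and file name are placeholders.&lt;br /&gt;

```python
import os
import subprocess
import tempfile

repo = tempfile.mkdtemp()  # throwaway repository location

def git(*args):
    """Run a git command inside the demo repo and return its stdout."""
    return subprocess.run(["git", *args], cwd=repo, check=True,
                          capture_output=True, text=True).stdout

git("init", "-q")
git("config", "user.email", "you@example.com")  # identity required to commit
git("config", "user.name", "Demo User")

with open(os.path.join(repo, "README"), "w") as f:
    f.write("hello code4lib\n")

git("add", "README")                         # stage the change
git("commit", "-q", "-m", "Initial commit")  # record it
print(git("log", "--oneline"))               # one line of history

# Contributing to an existing project swaps "init" for "clone <url>",
# followed eventually by "push" once you have somewhere to push to.
```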
&lt;br /&gt;
We're looking for attendees with real-life Git experience to share it, so we can all broaden our understanding of possible use cases and nifty advanced features.&lt;br /&gt;
&lt;br /&gt;
Coordinator:  &amp;lt;del&amp;gt;Ian Walls, ByWater Solutions, @sekjal or ian.walls at bywatersolutions com&amp;lt;/del&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Coordinator Stand-In: Michael B. Klein, Stanford University Libraries, @mbklein or mbklein at stanford.edu&lt;br /&gt;
&lt;br /&gt;
Helper: Cary Gordon, Cherry Hill Company, @highermath / cgordon@chillco.com&lt;br /&gt;
&lt;br /&gt;
==== Interest in Attending ====&lt;br /&gt;
&lt;br /&gt;
* Patrick Berry (pberry@csuchico.edu)&lt;br /&gt;
* Chris Sharp (csharp@georgialibraries.org)&lt;br /&gt;
* Matt Critchlow (mcritchlow@ucsd.edu)&lt;br /&gt;
* Peter Murray (Peter.Murray@lyrasis.org)&lt;br /&gt;
* Margaret Heller (mheller@dom.edu)&lt;br /&gt;
* Kevin S. Clarke (ksclarke@gmail)&lt;br /&gt;
* Michael B. Klein (mbklein@gmail)&lt;br /&gt;
* Demian Katz (demian DOT katz AT villanova DOT edu)&lt;br /&gt;
* Benjamin Shum (bshum@biblio.org)&lt;br /&gt;
* Sibyl Schaefer (sschaefer@rockarch.org)&lt;br /&gt;
* Tammy Allgood Wolf (tammy.allgood@asu.edu)&lt;br /&gt;
* Chad Nelson (cnelson17 AT gsu DOT edu)&lt;br /&gt;
* Lisa Kurt (lkurt@unr.edu)&lt;br /&gt;
* Matt Phillips (mphillips@law.harvard.edu)&lt;br /&gt;
* Dileshni Jayasinghe (d.jayasinghe@utoronto.ca)&lt;br /&gt;
* John Wynstra (john.wynstra@uni.edu)&lt;br /&gt;
* Declan Fleming&lt;br /&gt;
* Shaun Ellis (shaune@princeton.edu)&lt;br /&gt;
* Mads Villadsen&lt;br /&gt;
* Kåre Fiedler Christiansen&lt;br /&gt;
* Shahin Sahebi (shahin.ezzatsahebi@utoronto.ca)&lt;br /&gt;
* Devon Smith&lt;br /&gt;
* Jeremy Nelson&lt;br /&gt;
* Stephanie Collett&lt;br /&gt;
* Ron Peterson (ronp@udel.edu)&lt;br /&gt;
* Gary Thompson&lt;br /&gt;
* Brian McBride (brian.mcbride at utah.edu)&lt;br /&gt;
* Jacob Reed (jacob.reed at utah.edu)&lt;br /&gt;
* Bohyun Kim (bohyun.kim at fiu.edu)&lt;br /&gt;
* Larry Baerveldt &amp;lt;lrbaerveldt@gmail.com&amp;gt;&lt;br /&gt;
* Wayne Schneider&lt;br /&gt;
* Matt Connolly&lt;br /&gt;
* ernesto valencia&lt;br /&gt;
* Ed Fugikawa &amp;lt;ed at coalliance dot org&amp;gt;&lt;br /&gt;
* Andrea Schurr (Andrea-Schurr at utc dot edu)&lt;br /&gt;
&lt;br /&gt;
=== Blacklight ===&lt;br /&gt;
&lt;br /&gt;
This session will be a walk-through of the architecture of Blacklight and what we have been improving since the Rails 3 upgrade.  In addition to the architecture of the software, we will also briefly discuss the architecture of the Blacklight community and what has made it successful so far.&lt;br /&gt;
&lt;br /&gt;
For part of the session we will install Blacklight live and get it up and running.  This install demo will include a How-To on basic customizations in Blacklight using a test-driven approach (one of the cornerstones of the Blacklight community).&lt;br /&gt;
&lt;br /&gt;
For more information about Blacklight see our wiki ( http://projectblacklight.org/ ) and our GitHub repo ( https://github.com/projectblacklight/blacklight ).  We will also send out some brief instructions beforehand for those who would like to set up their environments to follow along and get Blacklight up and running on their local machines.&lt;br /&gt;
&lt;br /&gt;
Installation screencast: https://www.youtube.com/watch?v=VLuHuoB8Z6w&lt;br /&gt;
&lt;br /&gt;
Presenters: Jessie Keck, Stanford University - jkeck at stanford dot edu | Molly Pickral, University of Virginia - mpc3c at virginia dot edu&lt;br /&gt;
&lt;br /&gt;
==== Interest in Attending ====&lt;br /&gt;
* bernardo gomez ( bgomez at emory dot edu )&lt;br /&gt;
* Mark Mounts &amp;lt;mark.mounts@dartmouth.edu&amp;gt;&lt;br /&gt;
* Sibyl Schaefer (sschaefer@rockarch.org)&lt;br /&gt;
* John Pillans (jpillan@indiana.edu)&lt;br /&gt;
* Mang Sun (mang.dot sun at rice dot edu)&lt;br /&gt;
* Emily Lynema (emily_lynema at ncsu dot edu)&lt;br /&gt;
* mark a. matienzo (mark at matienzo dot oh are gee)&lt;br /&gt;
* Daniel Lovins (daniel dot lovins at nyu dot edu)&lt;br /&gt;
* Jonathan Rochkind&lt;br /&gt;
* Keith Folsom&lt;br /&gt;
* Kirk Hess &amp;lt;kirkhess@illinois.edu&amp;gt;&lt;br /&gt;
* Jason Stirnaman (jstirnaman AT kumc DOT edu)&lt;br /&gt;
* David Drexler &amp;lt;ddrexler@eou.edu&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== DACS and EAD Overview ===&lt;br /&gt;
&lt;br /&gt;
This session will look at what DACS (Describing Archives: a Content Standard) is and describe the ten required elements.  Then there will be an overview of what EAD is, how it works, and the required elements.  The final part will be a practice session on taking a paper finding aid and coding it using DACS and EAD.&lt;br /&gt;
&lt;br /&gt;
Presenter:  Doris Munson, Eastern Washington University, dmunson at ewu dot edu&lt;br /&gt;
(please feel free to contact me if you are interested in being a co-presenter)&lt;br /&gt;
&lt;br /&gt;
==== Interest in Attending ====&lt;br /&gt;
* Francis Kayiwa ( kayiwa@ YouEyeSee dot edu )&lt;br /&gt;
* Carmen Mitchell (carmenmitchell at gmail dot com)&lt;br /&gt;
&lt;br /&gt;
=== [[Digging into metadata: context, code, and collaboration]] ===&lt;br /&gt;
&lt;br /&gt;
Working with library/archival metadata is difficult. This preconference will tackle pressing questions and will show some of the intricacies of metadata (including AACR2/MARC) with exercises to demonstrate why inconsistencies exist in the data. What steps can the cataloging &amp;amp; metadata community take to help improve the quality of this data?  What tools &amp;amp; techniques could help?  Rules have evolved over time, leaving dirty legacy data.  Systems have impacted--and will continue to impact--data structure &amp;amp; design.  How can this data be aggregated and refined for use in new and emerging data environments?  What assumptions can safely be made, and when do you need to inquire about local practice?  We will end with a hack-fest where you can ask questions of experienced catalogers and get help with your metadata-related problems.  Bring your laptops and data.&lt;br /&gt;
&lt;br /&gt;
Person Herder: Becky Yoose, Grinnell College, yoosebec at grinnell dot edu&lt;br /&gt;
&lt;br /&gt;
Collaborators/Facilitators: Corey Harper, New York University - corey dot harper at nyu dot edu | Shana L. McDanold, University of Pennsylvania - mcdanold at pobox dot upenn dot edu | Laura Smart, Caltech - laura at library dot caltech dot edu&lt;br /&gt;
&lt;br /&gt;
==== Interest in Attending ====&lt;br /&gt;
* Peter Green (pmgreen@princeton.edu)&lt;br /&gt;
* David Isaak (david.isaak@kpchr.org)&lt;br /&gt;
* Alex Rolfe (arolfe@georgefox.edu)&lt;br /&gt;
* mark a. matienzo (mark at matienzo dot oh are gee)&lt;br /&gt;
* Sarah Johnston (johnsts@stolaf.edu)&lt;br /&gt;
* Derek Merleaux (derek@merleaux d0t net)&lt;br /&gt;
* Adam Wead (awead {at} rockhall d.t 0 R G)&lt;br /&gt;
* Tania Fersenheim (tania dot fersenheim at gmail) (I'm only a maybe because I may have a conflict in this time slot)&lt;br /&gt;
* Robin Dean (robin at coalliance dot org)&lt;br /&gt;
&lt;br /&gt;
=== &amp;quot;Geo&amp;quot; ===&lt;br /&gt;
This session will explore, we hope collaboratively, the presentation of objects on maps.  There will be a section on workflow, a section on discovering objects via &amp;quot;geobrowse,&amp;quot; a section on discovery of objects via &amp;quot;geosearch,&amp;quot; and an exploration of the discovery and presentation of geo-referenced images (e.g. historic maps). There will be open discussion on other approaches to map-based discovery.  Emphasis will be placed on simplicity of workflow and implementation.  Technologies include: Atom, Django, Solr, and OpenLayers.&lt;br /&gt;
&lt;br /&gt;
Presenters:  Mike Graves, UNC Chapel Hill, gravm at email dot unc dot edu; Tim Shearer, UNC Chapel Hill, tshearer at email dot unc dot edu&lt;br /&gt;
(please feel free to contact Tim if you are interested in being a co-presenter)&lt;br /&gt;
&lt;br /&gt;
==== Interest in Attending ====&lt;br /&gt;
* &amp;quot;Gabriel Farrell&amp;quot; &amp;lt;gsf24@drexel.edu&amp;gt;&lt;br /&gt;
* Anoop Atre ~ anoop.atre AT mnsu . edu&lt;br /&gt;
* Chad Nelson (cnelson17 AT gsu DOT edu)&lt;br /&gt;
* Jason Casden (jmcasden AT ncsu DOT edu)&lt;br /&gt;
* Dileshni Jayasinghe (d.jayasinghe@utoronto.ca)&lt;br /&gt;
* Sepehr Mavedati (sepehr DOT mavedati AT utoronto DOT ca)&lt;br /&gt;
* Michael Poltorak Nielsen&lt;br /&gt;
* Wendy Robertson&lt;br /&gt;
* Joel Richard (richardjm AT si DOT edu)&lt;br /&gt;
* Jonathan Rochkind&lt;br /&gt;
* Naomi Dushay (ndushay at stanford dot edu)&lt;br /&gt;
* Scott Hanrath (shanrath AT ku DOT edu)&lt;br /&gt;
* Aaron Collier (acollier AT csufresno DOT edu)&lt;br /&gt;
* David Lacy (david DOT lacy AT villanova DOT edu)&lt;br /&gt;
* Jen Weintraub (jweintraub AT library dot ucla dot edu)&lt;br /&gt;
* Sean Chen&lt;br /&gt;
* Bobbi Fox (bobbi_fox AT harvard dot edu)&lt;br /&gt;
&lt;br /&gt;
== Half-day Evening ==&lt;br /&gt;
&lt;br /&gt;
=== Microsoft Campus Visit ===&lt;br /&gt;
Join us for a trip across Lake Washington to Microsoft Headquarters.  The bus will depart from the conference hotel at 4:15pm on Monday. We will visit the Microsoft Home, the Envisioning Lab, and/or the MS Library.  Then we'll head over to Microsoft Research for drinks and appetizers, and you'll see some great demos of cool new (and free!) technologies coming out of MSR.  The bus will get back to the hotel by 9:00pm, leaving plenty of time to hit a pub.  You'll learn about:&lt;br /&gt;
&lt;br /&gt;
1. Layerscape -[http://communities.worldwidetelescope.org/]&lt;br /&gt;
&lt;br /&gt;
2. ChronoZoom - [http://research.microsoft.com/chronozoom/]&lt;br /&gt;
&lt;br /&gt;
3. F# - [http://www.tryfsharp.org]&lt;br /&gt;
&lt;br /&gt;
4. Microsoft Academic Search - [http://academic.research.microsoft.com]&lt;br /&gt;
&lt;br /&gt;
5. Microsoft Audio Visual Indexing System - [http://research.microsoft.com/mavis] &lt;br /&gt;
&lt;br /&gt;
Space is limited, so reserve your seat today. Email Alex at the address below.&lt;br /&gt;
&lt;br /&gt;
Coordinator: Alex Wade, Microsoft Research, awade at microsoft dot com &lt;br /&gt;
&lt;br /&gt;
Presenters: Behrooz Chitsaz; Rob Fatland; Christophe Poulain; Michael Zyskowski &lt;br /&gt;
&lt;br /&gt;
==== Interest in Attending (Registration closed! We are now at capacity.)   ====&lt;br /&gt;
* Declan Fleming&lt;br /&gt;
* Matt Critchlow&lt;br /&gt;
* Tom Keays (keaysht at lemoyne dot edu)&lt;br /&gt;
* mark a. matienzo (mark at matienzo dot oh are gee)&lt;br /&gt;
* Mark Mounts &amp;lt;mark.mounts@dartmouth.edu&amp;gt;&lt;br /&gt;
* Kyle Banerjee &amp;lt;banerjek@uoregon.edu&amp;gt;&lt;br /&gt;
* Evviva Weinraub&lt;br /&gt;
* Emily Lynema &amp;lt;emily_lynema at ncsu dot edu&amp;gt;&lt;br /&gt;
* Jason Casden &amp;lt;jmcasden AT ncsu DOT edu&amp;gt;&lt;br /&gt;
* Daniel Lovins &amp;lt;daniel.lovins@nyu.edu&amp;gt;&lt;br /&gt;
* Cynthia Ng&lt;br /&gt;
* &amp;quot;Gabriel Farrell&amp;quot; &amp;lt;gsf24@drexel.edu&amp;gt;&lt;br /&gt;
* Shaun Ellis (shaune AT princeton DOT edu)&lt;br /&gt;
* Derek Merleaux (derek@merleaux d0t net)&lt;br /&gt;
* Mads Villadsen&lt;br /&gt;
* Kåre Fiedler Christiansen&lt;br /&gt;
* Jørn Thøgersen&lt;br /&gt;
* Michael Poltorak Nielsen&lt;br /&gt;
* Dileshni Jayasinghe&lt;br /&gt;
* Matt Phillips (mphillips@law.harvard.edu)&lt;br /&gt;
* Wendy Robertson&lt;br /&gt;
* Shahin Sahebi&lt;br /&gt;
* Matt Connolly &amp;lt;mjc12 AT cornell dot edu&amp;gt;&lt;br /&gt;
* Jeremy Nelson&lt;br /&gt;
* Naomi Dushay (ndushay at stanford dot edu)&lt;br /&gt;
* Dre&lt;br /&gt;
* Ken Varnum (varnum umich edu)&lt;br /&gt;
* Andrew Darby (agdarby at miami dot edu)&lt;br /&gt;
* David Uspal (david DOT uspal AT villanova DOT edu)&lt;br /&gt;
* REGISTRATION IS NOW CLOSED&lt;br /&gt;
&lt;br /&gt;
[[Category: Code4Lib2012]]&lt;/div&gt;</summary>
		<author><name>Jkeck</name></author>	</entry>

	<entry>
		<id>https://wiki.code4lib.org/index.php?title=2012_preconference_proposals&amp;diff=9713</id>
		<title>2012 preconference proposals</title>
		<link rel="alternate" type="text/html" href="https://wiki.code4lib.org/index.php?title=2012_preconference_proposals&amp;diff=9713"/>
				<updated>2011-11-15T23:34:30Z</updated>
		
		<summary type="html">&lt;p&gt;Jkeck: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Proposals for 2012 Code4LibCon Preconferences=&lt;br /&gt;
Proposals will close Sunday, November 20, so we can finalize the list and add them to registration!&lt;br /&gt;
&lt;br /&gt;
Spaces available: main meeting room (max 275) + 5 breakout rooms (max 30-50). &lt;br /&gt;
&lt;br /&gt;
'''Please include a &amp;quot;Contact/Responsible Individual&amp;quot; name and email address so we know who is willing to put on the proposed precon.&lt;br /&gt;
'''&lt;br /&gt;
==Full Day==&lt;br /&gt;
&lt;br /&gt;
=== Hackfest ===&lt;br /&gt;
&lt;br /&gt;
Like the hackfests at Access, let's get together and do something.  An informal gathering of developers, librarians, and variants in-between.    Got code or an idea?  Bring it and have others help you at the Hackfest where anything and everything goes! &lt;br /&gt;
&lt;br /&gt;
Bring a laptop or portable device, coffee or beverage of choice and we'll spend a day bodging, tinkering, tweaking, coding, chatting, and having fun.  We'll spawn clusters around proposals at the start and then everyone can break off.  Reports / presentations / demos at code4lib greatly encouraged but not mandatory.&lt;br /&gt;
&lt;br /&gt;
[[Hackfest Barn Raising (idea page)]]&lt;br /&gt;
&lt;br /&gt;
Farmer and Ranch-hand:  Jason Fowler, UBC Library Systems, jason dot fowler at ubc dot ca&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Hacking Content ===&lt;br /&gt;
&lt;br /&gt;
What is the future of getting library information and resources into users’ hands at the right time and with appropriate context and relevancy?  Learning management systems, library guides, Web-scale discovery systems: plenty of tools to choose from, and still we see lots of opportunities for improvement. Let’s pick them apart and brainstorm ideas for projects that could address weaknesses in one or all of these systems. If you’re interested in these issues, challenges and conundrums, join us for a day of thinking, dreaming and scheming. All skill sets and backgrounds needed.&lt;br /&gt;
&lt;br /&gt;
Speakers/Facilitators will be:&lt;br /&gt;
 - Thom Cox - Manager of Library Information Technology Services - Tufts University&lt;br /&gt;
 - Ken Varnum – Web Systems Manager - University of Michigan Libraries&lt;br /&gt;
 - Evviva Weinraub – Director, Emerging Technologies and Services - Oregon State University Libraries &lt;br /&gt;
&lt;br /&gt;
Contact:  Margaret Mellinger - margaret dot mellinger at oregonstate dot edu&lt;br /&gt;
&lt;br /&gt;
==Half Day Morning==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Linkfest ===&lt;br /&gt;
&lt;br /&gt;
We've had talks and sessions galore about Linked Data at code4lib in past years.  Let's focus on linking.  Bring data you want to publish, link to, or link from, along with your ideas about new ways we can make data linking part of our regular approach to putting our libraries' content and services on the web.  At the start of the session we'll run a quick poll to see who wants to link to what and how, and we'll pair or group up and get to work from there.  May a kajillion links bloom!&lt;br /&gt;
&lt;br /&gt;
If you need an &amp;quot;intro to linked data&amp;quot; we can prep a good list of readings/talks to review before you come.  But please come ready to link!&lt;br /&gt;
&lt;br /&gt;
Organizer type person:  Dan Chudnov, GWU Libraries, @dchud or dchud at gwu edu&lt;br /&gt;
&lt;br /&gt;
=== What's New in Solr ===&lt;br /&gt;
&lt;br /&gt;
This session will bring folks up to speed on the latest developments in Lucene and Solr.  There are always lots of new capabilities, as well as tips and tricks on using Solr in clever and powerful ways.&lt;br /&gt;
&lt;br /&gt;
Presenter: Erik Hatcher - erik . hatcher @ lucidimagination dot com&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Half Day Afternoon==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Git -r done === &lt;br /&gt;
&lt;br /&gt;
A session to cover all things Git, everyone's favourite distributed version control system.  This session should cover a little bit of the history of Git, how it works, and how it differs from other version control systems like SVN.  Practical application should also be covered, including how to clone existing repos and contribute code back to them, how to host your own repository, and best practices for setting up a distributed network.&lt;br /&gt;
&lt;br /&gt;
We're looking for attendees with real-life Git experience to share it, so we can all broaden our understanding of possible use cases and nifty advanced features.&lt;br /&gt;
&lt;br /&gt;
Coordinator:  Ian Walls, ByWater Solutions, @sekjal or ian.walls at bywatersolutions com&lt;br /&gt;
&lt;br /&gt;
=== Blacklight ===&lt;br /&gt;
&lt;br /&gt;
This session will be a walk-through of the architecture of Blacklight and what we have been improving since the Rails 3 upgrade.  In addition to the architecture of the software, we will also briefly discuss the architecture of the Blacklight community and what has made it successful so far.&lt;br /&gt;
&lt;br /&gt;
For part of the session we will install Blacklight live and get it up and running.  This install demo will include a How-To on basic customizations in Blacklight using a test-driven approach (one of the cornerstones of the Blacklight community).&lt;br /&gt;
&lt;br /&gt;
For more information about Blacklight see our wiki ( http://projectblacklight.org/ ) and our GitHub repo ( https://github.com/projectblacklight/blacklight ).  We will also send out some brief instructions beforehand for those who would like to set up their environments to follow along and get Blacklight up and running on their local machines.&lt;br /&gt;
&lt;br /&gt;
Presenters: Jessie Keck, Stanford University - jkeck at stanford dot edu | Molly Pickral, University of Virginia - mpc3c at virginia dot edu&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Category: Code4Lib2012]]&lt;/div&gt;</summary>
		<author><name>Jkeck</name></author>	</entry>

	</feed>