<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
		<id>https://wiki.code4lib.org/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Michaelhagedon</id>
		<title>Code4Lib - User contributions [en]</title>
		<link rel="self" type="application/atom+xml" href="https://wiki.code4lib.org/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Michaelhagedon"/>
		<link rel="alternate" type="text/html" href="https://wiki.code4lib.org/Special:Contributions/Michaelhagedon"/>
		<updated>2026-04-29T20:01:07Z</updated>
		<subtitle>User contributions</subtitle>
		<generator>MediaWiki 1.26.2</generator>

	<entry>
		<id>https://wiki.code4lib.org/index.php?title=2015_Prepared_Talk_Proposals&amp;diff=41996</id>
		<title>2015 Prepared Talk Proposals</title>
		<link rel="alternate" type="text/html" href="https://wiki.code4lib.org/index.php?title=2015_Prepared_Talk_Proposals&amp;diff=41996"/>
				<updated>2014-11-07T20:09:46Z</updated>
		
		<summary type="html">&lt;p&gt;Michaelhagedon: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Code4lib 2015 is a loosely-structured conference that provides people working at the intersection of libraries/archives/museums/cultural heritage and technology with a chance to share ideas, be inspired, and forge collaborations. For more information about the Code4lib community, please visit http://code4lib.org/about/. &lt;br /&gt;
The conference will be held at the Portland Hilton &amp;amp; Executive Tower in Portland, Oregon, from February 9-12, 2015.&lt;br /&gt;
&lt;br /&gt;
'''Proposals for Prepared Talks:'''&lt;br /&gt;
&lt;br /&gt;
We encourage everyone to propose a talk.&lt;br /&gt;
 &lt;br /&gt;
Prepared talks are 20 minutes (including setup and questions), and should focus on one or more of the following areas:&lt;br /&gt;
* Projects you've worked on which incorporate innovative implementation of existing technologies and/or development of new software&lt;br /&gt;
* Tools and technologies – How to get the most out of existing tools, standards and protocols (and ideas on how to make them better)&lt;br /&gt;
* Technical issues - Big issues in library technology that should be addressed or better understood&lt;br /&gt;
* Relevant non-technical issues – Concerns of interest to the Code4Lib community which are not strictly technical in nature, e.g. collaboration, diversity, organizational challenges, etc.&lt;br /&gt;
&lt;br /&gt;
Proposals can be submitted through Friday, November 7, 2014 at 5pm PST (GMT−8). Voting will start on November 11, 2014 and continue through November 25, 2014. The URL to submit votes will be announced on the Code4Lib website and mailing list and will require an active code4lib.org account to participate. The final list of presentations will be announced in early- to mid-December.&lt;br /&gt;
&lt;br /&gt;
'''Proposals for Prepared Talks:'''&lt;br /&gt;
&lt;br /&gt;
Log in to the Code4lib wiki and edit this wiki page using the prescribed format. If you are not already registered, follow the instructions to do so.&lt;br /&gt;
Provide a title and brief (500 words or fewer) description of your proposed talk.&lt;br /&gt;
If you so choose, you may also indicate when, if ever, you have presented at a prior Code4Lib conference. This information is completely optional, but it may assist voters in opening the conference to new presenters.&lt;br /&gt;
&lt;br /&gt;
Please follow the formatting guidelines:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Talk Title: ==&lt;br /&gt;
 &lt;br /&gt;
* Speaker's name,  email address, and (optional) affiliation&lt;br /&gt;
* Second speaker's name, email address, and affiliation, if second speaker&lt;br /&gt;
&lt;br /&gt;
Abstract of no more than 500 words.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''Talk Proposals'''&lt;br /&gt;
== Zines + Gamification = Awesomest Metadata Literacy Outreach Event Ever! ==&lt;br /&gt;
 &lt;br /&gt;
* [http://www.JenniferHecker.info Jennifer Hecker], jenniferraehecker@gmail.com, [http://www.lib.utexas.edu/subject/zines University of Texas Libraries] &amp;amp; [http://www.AustinFanzineProject.org Austin Fanzine Project]&lt;br /&gt;
* [http://anomalily.net/ Lillian Karabaic], librarian@iprc.org, [http://www.iprc.org/ Independent Publishing Resource Center] (Portland)&lt;br /&gt;
 &lt;br /&gt;
In academic libraries, and elsewhere, the popularity of zine (a magazine produced for love, not profit) collections is on the rise. At the same time, metadata literacy is becoming an increasingly important skill, helping people navigate and understand digital environments and interactions. We have found a way to teach metadata literacy to the general public that isn’t super-boring – in fact, we’ve made it downright fun!&lt;br /&gt;
&lt;br /&gt;
First, volunteer zine librarian Lillian Karabaic of Portland’s Independent Publishing Resource Center facilitated the creation of a gamified cataloging interface for the IPRC’s annual Raiders of the Lost Archives backlog-busting 24-hour volunteer cataloging event.&lt;br /&gt;
&lt;br /&gt;
Then, archivist Jennifer Hecker facilitated the adaptation of the IPRC’s game for use in a similar, but also very different context – promoting UT Libraries newly-acquired zine collections. The main goal of the academic-library-based event was increasing excitement around the collections, but with the side goal of building metadata literacy, and introducing an understanding of library cataloging issues.&lt;br /&gt;
&lt;br /&gt;
The Texas modification also conforms to the xZINECOREx metadata schema developed by the national [http://zinelibraries.info/ Zine Librarians Interest Group], and triggered interesting conversations with the Libraries’s cataloging department about evolving metadata standards and how to incorporate the products of crowd-sourcing projects into existing workflows.&lt;br /&gt;
&lt;br /&gt;
Both games will be demoed.&lt;br /&gt;
&lt;br /&gt;
We have never presented at Code4lib.&lt;br /&gt;
&lt;br /&gt;
== Do the Semantic FRBRoo ==&lt;br /&gt;
* Rosie Le Faive, rlefaive@upei.ca, University of Prince Edward Island&lt;br /&gt;
&lt;br /&gt;
[http://www.islandora.ca Islandora] is great for creating repositories of any data type, but how can you model meaningful relationships between digital objects and use them to tell a story?&lt;br /&gt;
&lt;br /&gt;
At UPEI, I’m assembling an ethnography of Prince Edward Island’s traditional fiddle music that includes musical clips, video clips, oral histories, musical notation, images, and ethnographic commentaries. In order to present an exhibition-style site, I’m tying these digital objects together via the people, places, events, tunes and topics that they share or describe. &lt;br /&gt;
&lt;br /&gt;
To describe the relationships, I’m extending Islandora to use [http://www.cidoc-crm.org/frbr_inro.html FRBRoo], a vocabulary that combines the FRBR model with CIDOC-CRM, the the object-oriented museum documentation ontology. These modules being developed will allow other researchers to create a structured, navigable digital repository of diverse object types, that uses Islandora as an exhibition platform. &lt;br /&gt;
&lt;br /&gt;
== Our $50,000 Problem: Why Library School? ==&lt;br /&gt;
* Jennie Rose Halperin, jhalperin@mozilla.com, Mozilla Corporation&lt;br /&gt;
&lt;br /&gt;
57 library schools in the United States are churning out approximately 100 graduates per year, many with debt upwards of $50,000.  According to ONet, [http://www.inthelibrarywiththeleadpipe.org/2011/is-the-united-states-training-too-many-librarians-or-too-few-part-1/ 84% of library jobs in the US require an MLS.] The library profession is [http://dpeaflcio.org/programs-publications/issue-fact-sheets/library-workers-facts-figures/) 92% white and 82% female and entry-level librarians can expect to make $32,500 per year.]&lt;br /&gt;
&lt;br /&gt;
Contrasted with developers, who are almost [http://www.ncwit.org/blog/did-you-know-demographics-technical-women 90% male] and can expect to make [http://www.forbes.com/sites/jennagoudreau/2011/06/01/best-entry-level-jobs/ $70,000 in an entry-level position,] these numbers are dismal.&lt;br /&gt;
&lt;br /&gt;
According to a recent survey, the top skill that outgoing library students want to know is “programming” and yet many MLS programs still consider Microsoft Word an essential technology skill.&lt;br /&gt;
&lt;br /&gt;
What is going on here? Why do we accept this fate, where mostly female debt-burdened professionals continue to be thrown onto the work force without the education their expensive degrees promised?&lt;br /&gt;
&lt;br /&gt;
As a community we need to come together to stop this cycle. We need to provide better support and mentorship to diversify and keep the profession relevant and help librarianship move into the future it deserves.&lt;br /&gt;
&lt;br /&gt;
This talk will walk through the challenges of navigating a hostile employment environment as well as present models for better development and future state imagining.&lt;br /&gt;
&lt;br /&gt;
== No cataloging software? Need more than Dublin Core? No problem!: Experiences with CollectiveAccess ==&lt;br /&gt;
* [[User:SeanHendricks|Sean Q. Hendricks]], sqhendr@clemson.edu, Clemson University&lt;br /&gt;
* Rachel Wittmann, rwittma@clemson.edu, Clemson University&lt;br /&gt;
&lt;br /&gt;
Clemson University Libraries has implemented the open-source software CollectiveAccess for customized digital collection needs. CollectiveAccess is an open-source project with the goal of providing a flexible way to manage and publish museum and archival collections. There are several applications associated with the projects; most used are: Providence (for cataloging and entering metadata) and Pawtucket (for displaying objects in a collection for the public). It has many profiles readily available for installing with existing library standards, such as Dublin Core, and there is a robust syntax for creating your own profiles to fit custom tailored metadata schemas. Plus, the user interface allows you to modify the metadata profile quickly and easily.&lt;br /&gt;
&lt;br /&gt;
In this talk, we will discuss:&lt;br /&gt;
* Our experiences with installing Providence and creating an installation profile that satisfies the needs of many of the Clemson Libraries digital archiving processes. &lt;br /&gt;
* The stumbling blocks experienced in that process and how they were resolved.&lt;br /&gt;
* The available plugins sourcing widely used authorities, such as Library of Congress thesauri and GeoNames.org, and how they have been used by our projects. &lt;br /&gt;
* A brief overview of the export and import functions and also current workflow practices within Providence.&lt;br /&gt;
* Future plans &amp;amp; the role of CollectiveAccess at Clemson University Libraries&lt;br /&gt;
&lt;br /&gt;
== Getting ContentDM and Wordpress to Play Together ==&lt;br /&gt;
* [[User:SeanHendricks|Sean Q. Hendricks]], sqhendr@clemson.edu, Clemson University&lt;br /&gt;
&lt;br /&gt;
Clemson University Libraries has a very strong program for digitizing and archiving photographs, and the Digital Imaging team processes many hundreds of photographs every month. These images are managed using different methods, including ContentDM, a digital collection manager.&lt;br /&gt;
&lt;br /&gt;
ContentDM provides various methods for searching and displaying photographs, along with their metadata. However, recent initiatives have resulted in the need to leverage those collections into exhibits displayed on other library-related websites, such as our Special Collections unit. The Clemson Libraries has invested heavily in Wordpress as our content management system of choice, and it seemed most efficient not to have to export and import images into our Wordpress sites in order to provide exhibited images.&lt;br /&gt;
&lt;br /&gt;
Fortunately, ContentDM has provided an API to many of their functions, allowing the extraction of metadata and even rescaled images through URLs. This project has been developing a plugin for Wordpress that integrates with ContentDM through shortcodes that Wordpress editors can easily include in their content. These shortcodes allow editors to choose how many images, which images from which collections, thumbnail sizes, etc. to display in different gallery styles. Plans are for it to allow integration with different plugins such as Fancybox and Masonry.&lt;br /&gt;
&lt;br /&gt;
In this presentation, I will demonstrate the current state of the plugin and discuss future plans. &lt;br /&gt;
&lt;br /&gt;
==Refinery — An open source locally deployable web platform for the analysis of large document collections==&lt;br /&gt;
 &lt;br /&gt;
* [[User:DaeilKim|Daeil Kim]], The New York Times, daeil.kim@nytimes.com&lt;br /&gt;
&lt;br /&gt;
Refinery is an open source web platform for the analysis of large unstructured document collections. It extracts meaningful semantic themes within documents also known as &amp;quot;topics&amp;quot; which can be thought of as word clouds composed of terms that highly co-occur with one another. Once this semantic index is formed, one can extract relevant documents related to these topics and further refine their contents through a summarization process that allows users to search for phrases that are relevant to them within the corpus. The goal of Refinery is to make this whole process easier and to provide some of the latest scalable versions of these learning algorithms in an intuitive web-based interface. Refinery is also meant to be run locally, thus bypassing the need for securing document collections over the internet. The talk will go through some of the technologies involved and a demo of the app.&lt;br /&gt;
&lt;br /&gt;
For more info check out http://www.docrefinery.org.&lt;br /&gt;
&lt;br /&gt;
==Drupal 8 — Evolution &amp;amp; Revolution==&lt;br /&gt;
 &lt;br /&gt;
* [[User:Highermath|Cary Gordon]], The Cherry Hill Company, cgordon@chillco.com&lt;br /&gt;
&lt;br /&gt;
Drupal 8 is in beta and nearing release. Among its many features, it notably has become more developer friendly through its adoption of the Symfony PHP framework along with Symfony's outstanding set of libraries (like Guzzle) and tools (like Composer). And, in implementing the Twig theming system, it is can begin to escape PHPtemplate. These moves also make it easier to create headless systems that uses Angular.js and other systems for presentation, or even forgo presentation entirely.&lt;br /&gt;
&lt;br /&gt;
From the site-builder's perspective, Drupal 8 provides a much smother experience and makes it easier to build and implement site recipes.&lt;br /&gt;
&lt;br /&gt;
==Using GameSalad to Build a Gamified Information Literacy Mobile App for Higher Education==&lt;br /&gt;
 &lt;br /&gt;
* [[User:StanBogdanov|Stanislav 'Stan' Bogdanov]],  stan@stanrb.com, Adelphi University and [http://bogliollc.com Boglio LLC]&lt;br /&gt;
&lt;br /&gt;
GameSalad is a popular tool for developing mobile and desktop games with little actual programming. In this presentation, Stan Bogdanov breaks down the development process he followed while building [https://github.com/stanrb/mobiLit mobiLit], a mobile app with the goal of being the first open-source gamified information literacy app to be used as part of a college-level information literacy curriculum. He will go through the basics of using GameSalad to create an app that can be easily customized by non-programmers and the instructional principles used to teach the material in a mobile medium. Stan will also go through two qualitative design studies he did on the app and discuss their results and the lessons learned from building mobiLit. The session will conclude with an overview of the next steps for the [https://github.com/stanrb/mobiLit mobiLit project].&lt;br /&gt;
&lt;br /&gt;
==The Impossible Search: Pulling data from multiple unknown sources==&lt;br /&gt;
 &lt;br /&gt;
* Riley Childs, no official affiliation (currently a Senior in High School at Charlotte United Christian Academy), rchilds (AT) cucawarriors.com &lt;br /&gt;
&lt;br /&gt;
It's easy to search data you know the structure of, but what if you need to pull in data from sources that don't have a standard structure. The ability to search community events along with your standard catalog search results is an example, but often the only way to pull these events is through XML, JSON, (Insert structured format here), or even just raw html. But how do you get that structure? That simple question is what makes this impossible. The process to define and process this structure takes a lot of manual labor, especially if the data you are pulling is just HTML, and then every time you add data to the index you have to run all the data through a script to pull in data in a format Solr or an other index can use. This talk will focus on Solr, but the principles explained will apply to many other indexes.&lt;br /&gt;
&lt;br /&gt;
==What! You're Not Using Docker?==&lt;br /&gt;
 &lt;br /&gt;
* [[User:Highermath|Cary Gordon]], The Cherry Hill Company, cgordon@chillco.com&lt;br /&gt;
&lt;br /&gt;
Boring part: Docker[1] is a container system that provides benefits similar to virtualization with only a fraction of the overhead. Scintillating part: Docker can host between four to six times the number of service instances than systems such as Xen or VMWare on a given piece of hardware. But thats not all! Docker also makes it simple(r) to create transportable instances, so you can spin up development servers on your laptop.&lt;br /&gt;
&lt;br /&gt;
*[1]https://www.docker.com/&lt;br /&gt;
&lt;br /&gt;
== Video Accessibility, WebVTT, and Timed Text Track Tricks ==&lt;br /&gt;
&lt;br /&gt;
* Jason Ronallo, jronallo@gmail.com, NCSU Libraries&lt;br /&gt;
&lt;br /&gt;
Video on the Web presents new challenges and opportunities. How do you make your video more accessible to those with various disabilities and needs? I'll show you how. This presentation will focus on how to write and deliver captions, subtitles, audio descriptions, and timed metadata tracks for Web video using the WebVTT W3C standard. Encoding timed text tracks in this way opens up opportunities for new functionality on your websites beyond accessibility. The presentation will show some examples of the potential for using timed text tracks in creative ways. I'll cover all the HTML and JavaScript you will need to know as well as some of the CSS and other bits you could probably do without but are too fun to pass up.&lt;br /&gt;
&lt;br /&gt;
== Categorizing Records with Random Forests ==&lt;br /&gt;
 &lt;br /&gt;
* Geoffrey Boushey, geoffrey.boushey@ucsf.edu, UCSF Library&lt;br /&gt;
Academic libraries are increasingly responsible for providing ingest, search, discovery, and analysis for data sets.  Emerging techniques from data science and machine learning can provide librarians and developers with an opportunity to generate new insights and services from these document collections.  This presentation will provide a brief overview of common machine learning classification techniques, then dive into a more detailed example using a random forest to assign keywords to research data sets.  The talk will emphasize the insight that can be gained from machine learning rather than the inner workings of the algorithms.  The overall goal of this presentation is to provide librarians and developers with the context to recognize an opportunity to apply machine learning categorization techniques at their home campuses and organizations.  &lt;br /&gt;
&lt;br /&gt;
== Data Science in Libraries ==&lt;br /&gt;
 &lt;br /&gt;
* Devon Smith, smithde@oclc.org, OCLC&lt;br /&gt;
&lt;br /&gt;
Data Science is increasing in buzz and hype. I'll go over what it is, what it isn't, and how it fits in libraries.&lt;br /&gt;
&lt;br /&gt;
== PDF metadata extraction for academic literature == &lt;br /&gt;
&lt;br /&gt;
* Kevin Savage, kevin.savage at mendeley.com, Mendeley&lt;br /&gt;
* Joyce Stack, joyce.stack at mendeley.com, Mendeley&lt;br /&gt;
&lt;br /&gt;
Mendeley recently added a, &amp;quot;document from file,&amp;quot; endpoint to its API which attempts to extract metadata such as title and authors directly from PDF files. This talk will describe at a high level the machine learning methods we used including how we measured and tuned our model. We will then delve more deeply into our stack, the tools we used, some of the things that didn't work and why PDFs are the worst thing ever to compute over.&lt;br /&gt;
&lt;br /&gt;
== Giving Users What They Want: Record Grouping in VuFind ==&lt;br /&gt;
 &lt;br /&gt;
* Mark Noble,  mark@marmot.org, [//www.marmot.org Marmot Library Network]&lt;br /&gt;
&lt;br /&gt;
In 2013, Marmot did extensive usability studies with patrons to determine what was difficult in the catalog.  Many patrons had problems sifting through all of the various formats and editions of a title.  In 2014 we developed a method for [//mercury.marmot.org/Union/Search?lookfor=divergent grouping records] so only a single work is shown in search results and all formats and editions are listed under that work.  We will discuss our definition of a 'work' based on FRBR principles; combining meta data from MARC records with metadata from other sources like OverDrive; the technical details of Record Grouping; the design decisions made during implementation; and the reaction from users and staff.&lt;br /&gt;
&lt;br /&gt;
== Topic Space: a mobile augmented reality recommendation app ==&lt;br /&gt;
&lt;br /&gt;
* Jim Hahn, jimhahn@illinois.edu, University of Illinois at Urbana-Champaign&lt;br /&gt;
&lt;br /&gt;
The Topic Space module (http://minrvaproject.org/modules_topicspace.php ) was developed with an IMLS Sparks! Grant to investigate augmented reality technologies for in-library recommendations. The funding allowed for sustained university community collaboration by the University Library, the Graduate School of Library and Information Science, as well as graduate student programmers sourced from the Department of Computer Science. Collaborators designed app functionality and identified relevant open source libraries that could power optical character recognition (OCR) functionality from within the mobile phone.&lt;br /&gt;
&lt;br /&gt;
Topic space allows a user to take a picture of an item's call number in the book stacks. The module will show the user other books that are relevant but that are not shelved nearby. It can also show users books that are normally shelved here but that are currently checked out. Recommendations are based on Library of Congress subject headings and ILS circulation data which indicate recommendation candidates based on total check-outs. &lt;br /&gt;
&lt;br /&gt;
Research questions included development of back end (server-side) pattern matching algorithms for recommendations, and a rapid formative evaluation of interface design that would provide optimal user experience for navigation of the book stacks as a context to recommendations.&lt;br /&gt;
&lt;br /&gt;
Along with the Topic Space native app, grant collaborators prototyped web based recommendations which could serve as a new way of providing readers advisory and “more like this” recommendations from discovery interfaces accessed through desktop browsers. Outcomes of the grant include the availability of the [https://play.google.com/store/apps/details?id=edu.illinois.ugl.minrva Topic Spaces module within Minrva app on the Android Play store] and an experimental [http://backbonejs.org/ Backbone.js] based [http://minrva-dev.library.illinois.edu Topic Space web app].&lt;br /&gt;
&lt;br /&gt;
== Leveling Up Your Git Workflow ==&lt;br /&gt;
&lt;br /&gt;
* Megan Kudzia, moneill@albion.edu, Albion College Library&lt;br /&gt;
* Kate Sears, eks11@albion.edu, Albion College Library&lt;br /&gt;
&lt;br /&gt;
Have you started experimenting with Git on your own, but now you need to include others in your projects? Learn from our mistakes! Transitioning from a one-person git workflow and repo structure, to a structure that includes multiple people (including student workers), is not for the faint of heart. We'll talk about why we decided to work this way, our path to developing a git culture amongst ourselves, conceptual and technical difficulties we've faced, what we learned, and where we are now. Also with pretty pictures (aka workflow drawings).&lt;br /&gt;
&lt;br /&gt;
== Drone Loaning Program: Because Laptops are so last century ==&lt;br /&gt;
&lt;br /&gt;
 * Uche Enwesi, uenwesi@umd.edu, University of Maryland Libraries&lt;br /&gt;
 * Francis Kayiwa, fkayiwa@umd.edu, University of Maryland Libraries&lt;br /&gt;
&lt;br /&gt;
At Univ. Maryland we are in the very early stages of looking into allowing our student body get their hands on a drone. Yes that's right we will let students take out a drone for n amount of hours to work on projects of their choosing. The talk will talk about the logistics of getting a program of this sort from concept to &amp;quot;Is the drone available?&amp;quot;. If people sign waivers we will also promise not to crash the drone into code4lib attendees.&lt;br /&gt;
&lt;br /&gt;
== Got Git? Getting More Out of Your GitHub Repositories ==&lt;br /&gt;
&lt;br /&gt;
 * Terry Brady, twb27@georgetown.edu, Georgetown University Library&lt;br /&gt;
&lt;br /&gt;
This presentation will discuss how librarians, developers, and system administrators at Georgetown University are maximizing their use of the public and private GitHub repositories. &lt;br /&gt;
&lt;br /&gt;
In additional to all of the great benefits of using Git for code management, the GitHub interface provides a powerful set of tools to showcase a project and to keep your users informed of developments to your project.  These tools can assist with marketing and outreach - turning your code repository into a focus of conversation!&lt;br /&gt;
&lt;br /&gt;
* [http://georgetown-university-libraries.github.io/File-Analyzer/ Style-able Project Pages]&lt;br /&gt;
* [https://github.com/Georgetown-University-Libraries/File-Analyzer/wiki Project Wikis]&lt;br /&gt;
* [https://github.com/Georgetown-University-Libraries/Georgetown-University-Libraries-Code/releases Project Release Notes/Portfolios]&lt;br /&gt;
* [https://rawgit.com/Georgetown-University-Libraries/Georgetown-University-Libraries-Code/master/samples/GoogleSpreadsheetFilter.html Web Resources That Can Be Directly Requested]&lt;br /&gt;
* Gists for code sharing&lt;br /&gt;
* Private Repositories and Organizational Groups&lt;br /&gt;
* Pull Request Conversation Tracking&lt;br /&gt;
* Customized Issue management&lt;br /&gt;
&lt;br /&gt;
== Quick Wins for Every Department in the Library - File Analyzer! ==&lt;br /&gt;
&lt;br /&gt;
 * Terry Brady, twb27@georgetown.edu, Georgetown University Library&lt;br /&gt;
&lt;br /&gt;
The Georgetown University Library has customized workflows for nearly every department in our library with a single code base.&lt;br /&gt;
* Analyzing Marc Records for the Cataloging department&lt;br /&gt;
* Transferring ILS invoices for the University Account System for the Acquisitions department &lt;br /&gt;
* Delivering patron fines to the Bursar’s office for the Access Service department&lt;br /&gt;
* Summarizing student worker timesheet data for the Finance department&lt;br /&gt;
* Validating COUNTER compliant reports for the Electronic Resources department&lt;br /&gt;
* Generating ingest packages for the Digital Services department&lt;br /&gt;
* Validating checksums for the Preservation department&lt;br /&gt;
&lt;br /&gt;
Learn how you can customize the [http://georgetown-university-libraries.github.io/File-Analyzer/ File Analyzer] to become a hero in your library!&lt;br /&gt;
&lt;br /&gt;
==The Geospatial World is Moving from Maps *on* the Web to Maps *of* the web. Libraries can too==&lt;br /&gt;
 &lt;br /&gt;
* [[User:Copystar|Mita Williams]], mita@uwindsor.ca, User Experience Librarian, University of Windsor&lt;br /&gt;
&lt;br /&gt;
The transition from paper maps to digital ones changed much more than the maps themselves; it changed the very foundation of how we work and how we find each other. Now maps are transforming again.  The Geospatial World is moving from GIS systems that are institutionally-focused, expensive, feature-burdened, and binds data into a complicated and demanding user-hostile interface. From this transition from digital to web-based digital geospatial tools has come growth and development in new forms of map-based investigative journalism, activism, scholarship, and business ventures. This talk will highlight the conditions and strategies that made these changes possible as a means to draw a path by which librarians through our own work may follow, dragons notwithstanding. &lt;br /&gt;
&lt;br /&gt;
== Building Your Own Federated Search ==&lt;br /&gt;
&lt;br /&gt;
* Rich Trott, Richard.Trott@ucsf.edu, UC San Francisco&lt;br /&gt;
&lt;br /&gt;
Advances in modern browsers have created some interesting possibilities for federated search. This presentation will cover common techniques and pitfalls in building a federated search. We will discuss what principles guided our decisions when implementing our own federated search. We will show tools we've built and our findings from building and using experimental prototypes.&lt;br /&gt;
&lt;br /&gt;
Your higher education institution likely offers dozens of online resources for educators, students, researchers, and the public. And each of these online resources likely has its own search tool. But users can't be expected to search in dozens of different interfaces to find what they're looking for. A typical solution for this issue is federated search. &lt;br /&gt;
&lt;br /&gt;
==  Indexing Linked Data with LDPath ==&lt;br /&gt;
&lt;br /&gt;
* Chris Beer, cabeer@stanford.edu, Stanford University Libraries&lt;br /&gt;
&lt;br /&gt;
LDPath [1] is a simple query language for indexing linked open data, with support for caching, content negotiation, and integration with non-RDF endpoints. This talk will demonstrate the features and potential of the language and framework to index a resource with links into id.loc.gov, viaf.org, geonames.org, etc to build an application-ready document.&lt;br /&gt;
&lt;br /&gt;
[1] http://marmotta.apache.org/ldpath/language.html&lt;br /&gt;
&lt;br /&gt;
== Show Me the Money: Integrating an LMS with Payment Providers ==&lt;br /&gt;
 &lt;br /&gt;
* Josh Weisman,  Josh.Weisman@exlibrisgroup.com, Development Director-Resources Management, Ex Libris Group&lt;br /&gt;
&lt;br /&gt;
In order to provide an easy and convenient way for patrons to pay fines, we are exploring ways to integrate the library management system with online payment providers such as PayPal. With many LMS systems being designed and developed for the cloud, we should be able to provide the frictionless user experience our patrons have come to expect from online transactions. In this session we'll discuss strategies for integration and review a sample application which uses REST APIs from a library management system to integrate with PayPal.&lt;br /&gt;
&lt;br /&gt;
== Shibboleth Federated Authentication for Library Applications: ==&lt;br /&gt;
&lt;br /&gt;
* Scott Fisher, scott.fisher@ucop.edu, California Digital Library&lt;br /&gt;
* Ken Weiss, ken.weiss@ucop.edu, California Digital Library&lt;br /&gt;
&lt;br /&gt;
Shibboleth is the most widely-used method to provide single-sign-on authentication to academic applications where users come from many different institutions. Shibboleth, the InCommon education and research trust framework, and the SAML protocol comprise a very powerful - but very complicated - solution to this very complicated problem. Scott and Ken have implemented Shibboleth for multiple library applications. They will share their understanding of the good, the bad, and the underlying spaghetti that makes it all work. Ken will discuss some of the technical aspects of the solution, touching on optimal and non-optimal use cases, administrative challenges, and authorization concerns. Scott will describe the implementation pattern for multi-institution single-sign-on that the California Digital Library has evolved, using the recently released Dash application (http://dash.cdlib.org) as an example.&lt;br /&gt;
&lt;br /&gt;
==Scientific Data: A Needs Assessment Journey==&lt;br /&gt;
 &lt;br /&gt;
*[[User:VickySteeves| Vicky Steeves]], vsteeves@amnh.org, American Museum of Natural History&lt;br /&gt;
&lt;br /&gt;
While surveying digital research and collections data in the research science divisions at the American Museum of Natural History in NYC (as a part of my [http://ndsr.nycdigital.org/ National Digital Stewardship Residency] project), I have come across the big data hogs (genome sequencing and CT scanning) and the little pieces of data (images, publications), all equally important to not only scientific discovery, but as nodes in the history of science. &lt;br /&gt;
&lt;br /&gt;
In this session, I will discuss the development of my needs assessment surveys for scientific datasets and the interview process with Museum curators and researchers as background, seguing into an explanation of the results. I will then combine my findings into preliminary selection criteria to choose tools for digital preservation and management unique to scientific datasets. This will brooke a discussion on emerging standards, tools, and technologies in big data, specific to research science. &lt;br /&gt;
&lt;br /&gt;
I will conclude with preliminary findings on emerging technology that can be used to answer concerns surrounding the management and digital preservation of these data. I am hoping the Q&amp;amp;A session can be used to both answer questions about my project, and function as a way for you (the larger tech-savy library community)  to discuss the tools I’ve touched on in this talk. &lt;br /&gt;
&lt;br /&gt;
== Feminist Human Computer Interaction (HCI) in Library Software ==&lt;br /&gt;
 &lt;br /&gt;
* Bess Sadler,  bess@stanford.edu, Stanford University Libraries&lt;br /&gt;
&lt;br /&gt;
Libraries are not neutral repositories of knowledge. Library classification systems and search technologies tend to reflect the inequalities, biases, ethnocentrism, and power imbalances of the societies in which they are built [1]. How might we better resist these tendencies in the library software we create? This talk will examine some qualities of feminist HCI (pluralism, self-disclosure, participation, ecology, advocacy, and embodiment) [2] through the lens of library software. &lt;br /&gt;
&lt;br /&gt;
[1] Olson, Hope A. (2002). The Power to Name: Locating the Limits of Subject Representation in Libraries. Dordrecht, The Netherlands: Kluwer Academic Publishers.&lt;br /&gt;
&lt;br /&gt;
[2] Bardzell, Shaowen. Feminist HCI: Taking Stock and Outlining an Agenda for Design. CHI 2010: HCI For All. http://dmrussell.net/CHI2010/docs/p1301.pdf&lt;br /&gt;
&lt;br /&gt;
== Heiðrún: DPLA's Metadata Harvesting, Mapping and Enhancement System ==&lt;br /&gt;
&lt;br /&gt;
* Audrey Altman, audrey at dp.la, Digital Public Library of America&lt;br /&gt;
* Gretchen Gueguen, gretchen at dp.la, Digital Public Library of America&lt;br /&gt;
* Mark Breedlove, mb at dp.la, Digital Public Library of America&lt;br /&gt;
&lt;br /&gt;
The Digital Public Library of America aggregates metadata for over 8 million objects from more than 24 direct partners, or Hubs, using its Metadata Application Profile (MAP), an RDF metadata application profile based on the Europeana Data Model. After working with the initial system for harvesting, mapping and enhancing our Hub’s metadata for a year, we realized that it was inadequate for working with data at this scale. There were architectural issues; it was opaque to non-developer and partner staff; there were inadequate tools for quality assurance and analysis; and the system was unaware that it was working with RDF data. As the network of Hubs expanded and we ingested more metadata, it became harder and harder to know when or why a harvest, a mapping task, or an enrichment went wrong because the tools for quality assurance were largely inadequate. &lt;br /&gt;
&lt;br /&gt;
The DPLA Content and Technology teams decided to develop a new system from the ground up to address those problems. Development of Heidrun, the internal version of the new system, started in October 2014. Heidrun’s goals are to make it easier for us to harvest and map metadata from various sources and in variety of schemas to the DPLA MAP, to better enrich that metadata using external data sources, and to actively involve our partners in the ingestion process through access to better QA tools. Heidrun and its componentry are built on Ruby on Rails, Blacklight, and ActiveTriples. Our presentation will give some background on our design principles and processes used during development, the architecture of the system, and its functionality. We plan to release a version of Heidrun and its components as a generalized metadata aggregation system for use by DPLA Hubs and others working to aggregate cultural heritage metadata.&lt;br /&gt;
&lt;br /&gt;
== OS or GTFO: Program or Perish ==&lt;br /&gt;
*Tessa Fallon, tessa.fallon@gmail.com&lt;br /&gt;
&lt;br /&gt;
Description TBD&lt;br /&gt;
&lt;br /&gt;
== Creating Dynamic— and Cheap!— Digital Displays with HTML 5 Authoring Software ==&lt;br /&gt;
* Chris Woodall, cmwoodall@salisbury.edu, Salisbury University Libraries&lt;br /&gt;
Would your library like to have large digital signage that displays dynamic information such as library hours, weather, room availability, and more? Have you looked into purchasing large digital signage, only to be turned off by the high price tag and lack of customization available with commercial solutions? Our library has developed a cheap and effective alternative to these systems using HTML 5 authoring software, a large TV, and freely-available APIs from Google, Springshare, and others. At this session, you’ll learn about the system that we have in place for displaying dynamic and easily-updatable information on our library’s large digital display, and how you can easily create something similar for your library.&lt;br /&gt;
&lt;br /&gt;
== REPOX: Metadata Blender ==&lt;br /&gt;
 &lt;br /&gt;
* John Mignault, jmignault@metro.org, Empire State Digital Network&lt;br /&gt;
&lt;br /&gt;
With the growth in the number of hubs providing metadata to the Digital Public Library of America, many of them are using REPOX, a tool originally created for the Europeana project, to aggregate disparate metadata feeds and transform them into formats suitable for ingest into DPLA. The Empire State Digital Network, the forthcoming DPLA service hub for NY state, is using it to prepare for our first ingest into DPLA in early 2015.  We'll take a look at REPOX and its capabilities and how it can be useful for ingesting and transforming metadata, and also discuss some things we've learned in massaging widely varied metadata feeds.&lt;br /&gt;
&lt;br /&gt;
== Beyond Open Source ==&lt;br /&gt;
&lt;br /&gt;
* Jason Casden, jmcasden@ncsu.edu, NCSU Libraries&lt;br /&gt;
* Bret Davidson, bddavids@ncsu.edu, NCSU Libraries&lt;br /&gt;
&lt;br /&gt;
The Code4Lib community has produced an increasingly impressive collection of open source software over the last decade, but much of this creative work remains out of reach for large portions of the library community. Do the relatively privileged institutions represented by a majority of Code4Lib participants have a professional responsibility to support the adoption of their innovations?&lt;br /&gt;
&lt;br /&gt;
Drawing from old and new software packaging and distribution approaches (from freeware to Docker), we will propose extending the open source software values of collaboration and transparency to include the wide and affordable distribution of software. We believe this will not only simplify the process of sharing our applications within the Code4Lib community, but also make it possible for less well resourced institutions to actually use our software. We will identify areas of need, present our experiences with the users of our own open source projects, discuss our attempts to go beyond open source, and make an argument for the internal value of supporting and encouraging a vibrant library ecosystem.&lt;br /&gt;
&lt;br /&gt;
[[Category:Code4Lib2015]] &lt;br /&gt;
[[Category:Talk Proposals]]&lt;br /&gt;
&lt;br /&gt;
== Making It Work: Problem Solving Using Open Source at a Small Academic Library ==&lt;br /&gt;
 &lt;br /&gt;
* Adam Strohm, astrohm@iit.edu, Illinois Institute of Technology&lt;br /&gt;
* Max King, mking9@iit.edu, Illinois Institute of Technology&lt;br /&gt;
&lt;br /&gt;
The Illinois Institute of Technology campus was added to the National Register of Historic Places in 2005, and contains a building, Mies van der Rohe's S.R. Crown Hall, that was named a National Historic Landmark in 2001. Creating a digital resource that can adequately showcase the campus and its architecture is challenge enough in and of itself, but doing so as a two-person team of relative newcomers, at a university library without dedicated programmers on staff, ups the ante considerably.&lt;br /&gt;
The challenges of technical know-how, staff time, and funding are nothing new to anyone working on digital projects at a university library, and are amplified when doing so at a smaller institution. This talk covers the conception, development, and design of the campus map site that was built, concentrating on the problem-solving strategies developed to cope with limited technical and financial resources.&lt;br /&gt;
We'll talk about our approach to development with Open Source software, including Omeka, along with the Neatline and Simile Timeline plugins. We'll also discuss the juggling act of designing for mobile mapping functionality without sacrificing desktop design, weighing the costs of increased functionality versus our ability to time-effectively include that functionality, and the challenge of building a site that could be developed iteratively, with an eye towards future enhancement and sustainability. Finally, we’ll provide recommendations for other librarians at smaller institutions for their own efforts at digital development.&lt;br /&gt;
&lt;br /&gt;
== Recording Digitization History: Metadata Options for the Process History of Audiovisual Materials ==&lt;br /&gt;
 &lt;br /&gt;
* Peggy Griesinger, peggy_griesinger@moma.org, Museum of Modern Art&lt;br /&gt;
&lt;br /&gt;
The Museum of Modern Art has amassed a large collection of audiovisual materials over its many decades of existence. In order to preserve these materials, much of the audiovisual collection has been digitized. This is a complex process involving numerous steps and devices, and the methods used for digitization can have an effect on the quality of the file that is preserved. Therefore, knowing exactly how something was digitized is critical for future stewards of these objects to be able to properly care for and preserve them. However, detailed technical information about the processes involved in the digitization of audiovisual materials is not defined explicitly in most metadata schemas used for audiovisual materials. In order to record process history using existing metadata standards, some level of creativity is required to allow existing standards to express this information.&lt;br /&gt;
&lt;br /&gt;
This talk will detail different metadata standards, including PBCore, PREMIS, and reVTMD, that can be implemented as methods of recording this information. Specifically, the talk will examine efforts to integrate this metadata into the Museum of Modern Art’s new digital repository, the DRMC. This talk will provide background on the DRMC as well as MoMA’s specific institutional needs for process history metadata, then discuss different metadata implementations we have considered to document process history.&lt;br /&gt;
&lt;br /&gt;
== Pig Kisses Elephant: Building Research Data Services for Web Archives ==&lt;br /&gt;
 &lt;br /&gt;
* Jefferson Bailey,  jefferson@archive.org, Internet Archive&lt;br /&gt;
* Vinay Goel, vinay@archive.org, Internet Archive&lt;br /&gt;
&lt;br /&gt;
More and more libraries and archives are creating web archiving programs.  For both new and established programs, these archives can consist of hundreds of thousands, if not millions, of born-digital resources within a single collection; as such, they are ideally suited for large-scale computational study and analysis. Yet current access methods for web archives consist largely of browsing the archived web in the same manner as browsing the live web and the size of these collections and complexity of the WARC format can make aggregate analysis difficult. This talk will describe a project to create new ways for users and researchers to access and study web archives by offering extracted and post-processed datasets derived from web collections. Working with the 325+ institutions and their 2600+ collections within the Archive-It service, the Internet Archive is building methods to deliver a variety of datasets culled from collections of web content, including extracted metadata packaged in JSON, longitudinal link graph data, named entities, and other types of data. The talk will cover the technical details of building dataset production pipelines with Apache Pig, Hadoop, and tools like Stanford NER, the programmatic aspects of building data services for archives and researchers, and ongoing work to create new ways to access and study web archives.&lt;br /&gt;
&lt;br /&gt;
== Awesome Pi, LOL! ==&lt;br /&gt;
&lt;br /&gt;
* Matt Connolly, mconnolly@cornell.edu, Cornell University Library&lt;br /&gt;
* Jennifer Colt, jrc88@cornell.edu, Cornell University Library&lt;br /&gt;
&lt;br /&gt;
Inspired by Harvard Library Lab’s “Awesome Box” project, Cornell’s Library Outside the Library (LOL) group is piloting a more automated approach to letting our users tell us which materials they find particularly stunning. Armed with a Raspberry Pi, a barcode scanner, and some bits of kit that flash and glow, we have ventured into the foreign world of hardware development. This talk will discuss what it’s like for software developers and designers to get their hands dirty, how patrons are reacting to the Awesomizer, and LOL’s not-afraid-to-fail philosophy of experimentation.&lt;br /&gt;
&lt;br /&gt;
== You Gotta Keep 'em Separated: The Case for &amp;quot;Bento Box&amp;quot; Discovery Interfaces ==&lt;br /&gt;
 &lt;br /&gt;
* Jason Thomale,  jason.thomale@unt.edu, University of North Texas Libraries&lt;br /&gt;
&lt;br /&gt;
I know, I know--proposing a talk about Resource Discovery is like, ''so'' 2010.&lt;br /&gt;
&lt;br /&gt;
The thing is, practically all of us--in academic libraries at least--have a similar set up for discovery, with just a few variations, and so talking about it still seems useful. Stop me if this sounds familiar. You've got a single search box on the library homepage as a starting point for discovery. And it's probably a tabbed affair, with an option for searching the catalog for books, an option for searching a discovery service for articles, an option for searching databases, and maybe a few others. Maybe you have an option to search everything at once--probably the default, if you have it. And, if you're a crazy hepcat, maybe you ''only'' have your one search that searches everything, with no tabs.&lt;br /&gt;
&lt;br /&gt;
Now, the question is, for your &amp;quot;everything&amp;quot; search, are you doing a combined list of results, or are you doing it bento-box style, with a short results list from each category displayed in its own compartment?&lt;br /&gt;
&lt;br /&gt;
At UNT, we've been holding off on implementing an &amp;quot;everything&amp;quot; search, for various reasons. One reason is that the evidence for either style hasn't been very clear. There's this persistent paradox that we just can't reconcile: users tell us, through word and action, that they prefer searching Google, yet, libraries aren't Google, and there are valid design reasons why we shouldn't try to oversimplify our discovery interfaces to be like Google. And there's user data that supports both sides.&lt;br /&gt;
&lt;br /&gt;
Holding off on making this decision has granted us 2 years of data on how people use our tabbed search interface that does ''not'' include an &amp;quot;everything&amp;quot; search. Recently I conducted a thorough analysis of this data--specifically the usage and query data for our catalog and discovery system (Summon). And I think it helps make the case for a bento box style discovery interface. To be clear, it isn't exactly the smoking gun that I was hoping for, but the picture it paints I think is telling. At the very least, it points away from a combined-results approach.&lt;br /&gt;
&lt;br /&gt;
I'm proposing a talk discussing the data we've collected, the trends we've seen, and what I think it all means--plus other reasons that we're jumping on the &amp;quot;bento box&amp;quot; discovery bandwagon and why I think &amp;quot;bento box&amp;quot; is at this point the path that least sells our souls.&lt;br /&gt;
&lt;br /&gt;
== Don’t know about you, but I’m feeling like SHA-2!: Checksumming with Taylor Swift ==&lt;br /&gt;
 &lt;br /&gt;
* Ashley Blewer!, ashley.blewer@gmail.com&lt;br /&gt;
&lt;br /&gt;
Checksum technology is used all over the place, from git commits to authenticating Linux packages. It is most commonly used in the digital preservation field to monitor materials in storage for changes that will occur over time or used in the transmission of files during duplication. But do you even checksum, bro? I want this talk to move checksums from a position of mysterious macho jargon to something everyone can understand and want to use. I think a lot of people have heard of checksum but don’t know where to begin when it comes to actually using it at their institution. And cryptography is hella intimidating! This talk will cover what checksums are, how they can be integrated into a library or archival workflow, protecting collections requiring additional levels of security, algorithms used to verify file fixity and how they are different, and other aspects of cryptographic technology. Oh, and please note that all points in this talk will be emphasized or lightly performed through Taylor Swift lyrics. Seriously, this talk will consist of at least 50% Taylor Swift. Can you, like, even?&lt;br /&gt;
&lt;br /&gt;
== Level Up Your Coding with Code Club (yes, you can talk about it) ==&lt;br /&gt;
&lt;br /&gt;
* Coral Sheldon-Hess, coral@sheldon-hess.org&lt;br /&gt;
&lt;br /&gt;
Reading code is a necessary part of becoming a better developer. It gives you more experience and more insight into How Things Are (or Aren't) Done; it builds your intuition about how to solve problems with code; and it increases your confidence that you, too, can tackle whatever technological problems you're facing.&lt;br /&gt;
&lt;br /&gt;
But you don't have to read code alone! (Which is good. It's really not fun to read code alone.) &lt;br /&gt;
&lt;br /&gt;
In late 2014, a group of librarians formed two Code Clubs, inspired by [http://bloggytoons.com/code-club/ this talk by Saron] (of Bloggytoons fame). I'd like to tell you about how we've structured our Code Clubs, what has gone well, what we've learned, and what you need to do to form your own Code Club. I'll share a list of the codebases we've looked at, too, to help you get your own Code Club off the ground! &lt;br /&gt;
&lt;br /&gt;
== The Growth of a Programmer ==&lt;br /&gt;
 &lt;br /&gt;
* [[User:jgo | Joshua Gomez]], Getty Research Institute, jgomez@getty.edu&lt;br /&gt;
&lt;br /&gt;
Just like other creative endeavors, software developers can experience periods of great productivity or find themselves in a rut. After contemplating the alternating periods in my own career I've noticed several factors that have effected my own professional growth and happiness, including: mentorship, structure, community, teamwork, environment, formal education, etc. Not all of the factors need to be present at all times; but some mixture of them is critical for continued growth. In this talk, I will articulate these factors, discuss how they can effect a developer's career, and how they can be sought out when missing. This talk is aimed at both new developers looking to strike their own path as well as the veterans that lead or mentor them.&lt;br /&gt;
&lt;br /&gt;
== Developing a Fedora 4.0 Content Model for Disk Images ==&lt;br /&gt;
&lt;br /&gt;
* Matthew Farrell, matthew.j.farrell@duke.edu, Duke University Libraries&lt;br /&gt;
* Alexandra Chassanoff, achass@email.unc.edu, BitCurator Access Project Manager&lt;br /&gt;
&lt;br /&gt;
As the acquisition of born-digital materials grows, institutions are seeking methods to facilitate easy ingest into their repositories and provide access to disk images and files derived or extracted from disk images. In this session, we describe our development of a Fedora 4.0 Content model for disk images, including acceptable image file formats and the rationale behind those choices.  We will also discuss efforts to integrate the disk image content model into the BitCurator Access environment. Unlike generalized, format-agnostic content models which might treat the disk image as a generic bitstream, a content model designed for disk images enables expression of relationships among associated content in the collection such as files extracted from images and other born-digital and digitized material associated with the same creator.  It also enables capture of file-system attributes such as file paths, timestamps, whether files are allocated/deleted, etc.  Further, a disk image content model suggests further steps repositories can take in order to transform and re-use associated metadata generated during the creation and forensic analysis of the disk image.&lt;br /&gt;
&lt;br /&gt;
== Data acquisition and publishing tools in R ==&lt;br /&gt;
&lt;br /&gt;
* Scott Chamberlain,  scott@ropensci.org, rOpenSci/UC Berkeley - first-time presenter&lt;br /&gt;
&lt;br /&gt;
R is an open source programming environment that is widely used among researchers in many fields. R is powerful because it's free, increasingly robust, and facilitates reproducible research, an increasingly sought after goal in academia. Although tools for data manipulation/visualization/analysis are well developed in R, data acquisition and publishing tools are not. rOpenSci is a collaborative effort to create the tools necessary to complete the reproducible research workflow. This presentation discusses the need for these tools, including examples, including interacting with the repositories Mendeley, Dryad, DataONE, and Figshare. In addition, we are building tools for searching scholarly metadata and acuiring full text of open access articles in a standarized way across metadata providers (e.g., Crossref, DataCite, DPLA) and publishers (e.g., PLOS, PeerJ, BMC, Pubmed). Last, we are building out tools for data reading and writing in Ecologial Metadata Language (EML).&lt;br /&gt;
&lt;br /&gt;
== SPLUNK: Log File Analysis ==&lt;br /&gt;
&lt;br /&gt;
* Jim LeFager, jlefager@depaul.edu, DePaul University Library&lt;br /&gt;
DePaul University Library recently took over monitoring and maintaining of the library EZproxy servers this past year and using Splunk, a machine data analysis tool, we are able to gather information and statistics on our electronic resource usage in addition to monitoring the servers. Splunk is a tool that can collect, analyze, and visualize log files and other machine data in real time and this has allowed for gathering realtime usage statistics for our electronic resources allowing us to filter by multiple facets including IP Range, Group Membership (student, faculty), so that we can see who is accessing our resources and from where. Splunk allows our library to query our data and create rich custom dashboards as well as create alerts that can be triggered when certain conditions are met, such as error codes, which can send an email alert to a group of users. We will be leveraging Splunk to monitor all library web applications going forward. This talk will review setting up Splunk and best practices in using the available features and customizations available including creating queries, alerts, and custom dashboards.  &lt;br /&gt;
&lt;br /&gt;
== Your code does not exist in a vacuum ==&lt;br /&gt;
* Becky Yoose, yoosebec at grinnell dot edu, Grinnell College (Done a lightning talk, MC duties, but have not presented a prepared talk)&lt;br /&gt;
&lt;br /&gt;
“If you have something to say, then say it in code…” - Sebastian Hammer, code4lib 2009&lt;br /&gt;
&lt;br /&gt;
In its 10 year run, code4lib has covered the spectrum of libtech development, from search to repositories to interfaces. However, during this time there has been little discussion about this one little fact about development - code does not exist in a vacuum. &lt;br /&gt;
&lt;br /&gt;
Like the comment above, code has something to say. A person’s or organization’s culture and beliefs influences code in all steps of the development cycle. What development method you use, tools, programming languages, licenses - everything is interconnected with and influenced by the philosophies, economics, social structures, and cultural beliefs of the developer and their organization/community.&lt;br /&gt;
&lt;br /&gt;
This talk will discuss these interconnections and influences when one develops code for libraries, focusing on several development practices (such as “Fail Fast, Fail Often” and Agile)   and licensing choices (such as open source) that libtech has either tried to model or incorporate into mainstream libtech practices. It’ll only scratch the surface of the many influences present in libtech development, but it will give folks a starting point to further investigate these connections at their own organizations and as a community as a whole.&lt;br /&gt;
&lt;br /&gt;
tl;dr - this will be a messy theoretical talk about technology and libraries. No shiny code slides, no live demos. You might come out of this talk feeling uncomfortable. Your code does not exist in a vacuum. Then again, you don’t exist in a vacuum either.&lt;br /&gt;
&lt;br /&gt;
== The Metadata Hopper: Mapping and Merging Metadata Standards for Simple, User-Friendly Access ==&lt;br /&gt;
&lt;br /&gt;
* Tracy Seneca, tjseneca@uic.edu, University of Illinois at Chicago&lt;br /&gt;
* Esther Verreau: verreau1@uic.edu, University of Illinois at Chicago&lt;br /&gt;
&lt;br /&gt;
The Chicago Collections Consortium: 15 institutions and growing!  8 distinct EAD standards! At least 3 permutations of MARC, and we lost count of the varieties of custom CONTENTdm image collections.  Not to mention the 14,730 unique subject terms, nearly all of which lead our poor end-users to exactly one organization's content. &lt;br /&gt;
&lt;br /&gt;
All large content aggregation projects have faced this challenge, and there are a few emerging tools to help us wrangle disparate metadata into new contexts.  The Metadata Hopper is one such tool. The Metadata Hopper enables archivists to map their local metadata standards to standardized deposit records, and tags those materials using a shared vocabulary, integrating them into a user-friendly portal without disrupting local practices. In last year's Code4Lib lightning talk we described the challenges that the Chicago Collections Consortium faces in creating shared, in-depth access to archival and digital collections about Chicago history and culture across CCC member organizations. This year, thanks to the Andrew W. Mellon Foundation, we have a working Django application to demonstrate.  In this talk we'll discuss the design that enables multiple layers of flexibility, from the ability to accept a variety of metadata standards to designing for an open source audience.&lt;br /&gt;
&lt;br /&gt;
http://chicagocollectionsconsortium.org&lt;br /&gt;
&lt;br /&gt;
== Programmers are not projects: lessons learned from managing humans ==&lt;br /&gt;
&lt;br /&gt;
* Erin White, erwhite@vcu.edu, Virginia Commonwealth University - first-time presenter&lt;br /&gt;
&lt;br /&gt;
Managing projects is one thing, but managing people is another. Whether we’re hired as managers or grow “organically” into management roles, sometimes technical people end up leading technical teams (gasp!). I’ll talk about lessons I’ve learned about hiring, retaining, and working long-term and day-to-day with highly tech-competent humans. I’ll also talk about navigating the politics of libraryland, juggling different types of projects, and working with constrained budgets to make good things and keep talented people engaged.&lt;br /&gt;
&lt;br /&gt;
== Practical Strategies for Picking Low-Hanging Fruits to Improve Your Library's Web Usability and UX ==&lt;br /&gt;
&lt;br /&gt;
* Bohyun Kim, bkim@hshsl.umaryland.edu, University of Maryland, Baltimore&lt;br /&gt;
&lt;br /&gt;
Have you ever tried to fix an obvious (to you at least!) problem in Web usability or UX (user experience) only to face strong resistance from the library staff? Are you a strong advocate for making library resources, systems, services, and space as usable as possible, but do you often find yourself struggling to get the point across and/or obtain the crucial buy-in from colleagues and administrators? &lt;br /&gt;
&lt;br /&gt;
There is no shortage of Web usability and UX guidelines. But applying them to a library and implementing desired changes often involve a long and slow process. To tackle this issue, this talk will focus on how to utilize the 'expert review' process (aka 'heuristic evaluation') as a preliminary or even preparatory step before embarking on more time-and-labor-intensive usability testing and user research. Several examples from  simple fixes to more nuanced usability and UX issues in libraries will be discussed to your heart's content. The goal of this talk is to provide practical strategies for picking as many low-hanging fruits as possible to make a real (albeit small) difference to your library's Web usability and UX effectively and efficiently.&lt;br /&gt;
&lt;br /&gt;
== A Semantic Makeover for CMS Data ==&lt;br /&gt;
&lt;br /&gt;
* Bill Levay, wjlevay@gmail.com, Linked Jazz Project&lt;br /&gt;
&lt;br /&gt;
How can we take semi-structured but messy metadata from a repository like CONTENTdm and transform it into rich linked data? Working with metadata from Tulane’s Hogan Jazz Archive Photography Collection, the Linked Jazz Project used Open Refine and Python scripts to tease out proper names, match them with name authority URIs, and specify FOAF relationships between musicians who appear together in photographs. Additional RDF triples were created for any dates associated with the photos, and for those images with place information we employed GeoNames URIs. Historical images and data that were siloed can now interact with other datasets, like Linked Jazz’s rich set of names and personal relationships, and can be visualized [link to come] or otherwise presented on the web in any number of ways. I have not previously presented at a Code4Lib conference.&lt;br /&gt;
&lt;br /&gt;
== Taking User Experience (UX) to new heights ==&lt;br /&gt;
 &lt;br /&gt;
* Kayne Richens, kayne.richens@deakin.edu.au, Deakin University&lt;br /&gt;
&lt;br /&gt;
User Experience, or &amp;quot;UX&amp;quot;, is for more than just websites. At Deakin University Library we're exploring ways to improve the user experience inside our campus library spaces, by putting new technologies front and centre in the overall experience for our students. How are we doing this? We’re collaborating with the University's IT department and exploring the following Library-changing opportunities:&lt;br /&gt;
&lt;br /&gt;
- Augmented Reality for Way-finding: We’re tackling that infamous thing that all Libraries can't get right – way-finding. We're enhancing library tour information and way-finding experiences by introducing augmented reality solutions.&lt;br /&gt;
 &lt;br /&gt;
- Heat mapping the library with wi-fi: We’re using our existing wi-fi infrastructure to present &amp;quot;heat maps&amp;quot; of library space utilisation, allowing our users to easily locate the space that best suits their needs, whether it be busy spaces to collaborate, or quiet spaces to study. And by overlaying computer usage and group study room bookings, users can quickly locate the space they need.&lt;br /&gt;
 &lt;br /&gt;
- Video chat library service: We’re piloting video-conferencing facilities in our group study rooms and spaces, connecting users and librarians and other professionals.&lt;br /&gt;
         &lt;br /&gt;
This talk will look at how these different technologies will be brought together to provide improved user experiences, as well some of the evidence and reasons that helped us to identify our needs, so you can too.&lt;br /&gt;
&lt;br /&gt;
==How to Hack it as a Working Parent: or, Should Your Face be Bathed in the Blue Glow of a Phone at 2 AM?==&lt;br /&gt;
&lt;br /&gt;
*Margaret Heller, Loyola University Chicago, mheller1@luc.edu&lt;br /&gt;
*Christina Salazar, California State University Channel Islands, christina.salazar@csuci.edu&lt;br /&gt;
*May Yan, Ryerson University, may.yan@ryerson.ca&lt;br /&gt;
&lt;br /&gt;
Modern technology has made it easier than ever for parents employed in technical environments to keep up with work at all hours and in all locations. This makes it possible to work a flexible schedule, but also may lead to problems with work/life balance and furthering unreasonable expectations about working hours. Add to that shifting gender roles and limited paid parental leave in the United States and you have potential for burnout and a certainty for anxiety. It raises the additioal question of whether the “always connected” mindset puts up a barrier to some populations who otherwise might be better represented in open source and library technology communities. &lt;br /&gt;
&lt;br /&gt;
This presentation will address tools that are useful for working parents in technical library positions, and share some lessons learned about using these tools while maintaining a reasonable work/life balance. We will consider a question that Karen Coyle raised back in 1996: &lt;br /&gt;
“What if the thousands of hours of graveyard shift amateur hacking wasn't really the best way to get the job done? That would be unthinkable.” &lt;br /&gt;
&lt;br /&gt;
For those who are able to take an extended parental leave, we will present strategies for minimizing the impact to your career and your employer. For those (particularly in the United States) who are only able to take a short leave will require different strategies. Despite different levels of preparation, all are useful exercises in succession planning and making a stronger workplace and future ability to work a flexible schedule through reviewing workloads, cross-training personnel, hiring contract replacements, and creative divisions of labor. Such preparation makes work better for everyone, kids or no kids or caretakers of any kind.&lt;br /&gt;
&lt;br /&gt;
==Making your digital objects embeddable around the web==&lt;br /&gt;
 &lt;br /&gt;
* Jessie Keck, jkeck@stanford.edu, Stanford University Libraries&lt;br /&gt;
* Jack Reed, pjreed@stanford.edu, Stanford University Libraries&lt;br /&gt;
&lt;br /&gt;
With more and more content from our digital repositories making their way into our discovery environments we quickly realize that we’re repeatedly re-inventing the wheel when it comes to creating “Viewers” for these digital objects.  With various different types of viewers necessary (books, images, audio, video, geospatial data, etc) the burden of getting these viewers into various environments (topic guides, blogs, catalogs, etc) becomes exponential.&lt;br /&gt;
&lt;br /&gt;
In this talk we’ll discuss how Stanford University Libraries implemented an oEmbed service to create an extensible viewer framework for all of its digital content. Using this service we’ve been able to easily integrate viewers into various discovery applications as well as make it easy for end users who discover our objects to easily embed customized versions into their own websites and blogs.&lt;br /&gt;
&lt;br /&gt;
==So you want to make your geospatial data discoverable==&lt;br /&gt;
 &lt;br /&gt;
* Jack Reed, pjreed@stanford.edu, Stanford University Libraries&lt;br /&gt;
&lt;br /&gt;
Finding data for research or coursework can be one of the most time intensive tasks for a scholar or student. We introduce GeoBlacklight, an open source, multi-institutional software project focused on solving these common challenges at institutions across the world. GeoBlacklight prioritizes user experience, integrates with many GIS tools, and streamlines the use and organization of geospatial data. This talk will provide an introduction to the software, demonstrate current functionality, and provide a road map for future work.&lt;br /&gt;
&lt;br /&gt;
== Clueless-Driven Development: How I learned to migrate to Fedora 4 ==&lt;br /&gt;
&lt;br /&gt;
* Adam Wead, awead@psu.edu, Penn State University&lt;br /&gt;
&lt;br /&gt;
Recently I was tasked with migrating the content from our Fedora3 repository to the new Fedora4 repository architecture.&lt;br /&gt;
Despite a wealth of community support, I had no idea how to approach, or even begin to solve this problem. I knew I&lt;br /&gt;
wanted to follow best practices and use test-driven  development to build my solution, but had no idea where to start.&lt;br /&gt;
Despite this initial setback, I was able to start writing tests with only a  vague understanding of the problem. As my&lt;br /&gt;
tests exposed where my understanding of the problem was flawed, my code evolved, and within a week I had arrived  at a&lt;br /&gt;
working solution that exhibited all the hallmarks of good testing and software design.&lt;br /&gt;
&lt;br /&gt;
This talk recounts the process I went through from starting with practically nothing, to arriving at a working solution.&lt;br /&gt;
You can follow the rules of  test-driven development, but you can write tests in an expressive way to describe the&lt;br /&gt;
problem instead of just describing what the code should do. It was also essential to begin testing from an integration&lt;br /&gt;
viewpoint as opposed to a unit one, because at the outset the units were unknown and were later realized through further&lt;br /&gt;
development. For the presentation, I will be demonstrating using RSpec and Ruby. All the code examples will be related&lt;br /&gt;
to the Hydra software stack; however, I hope to show  that the processes at work will be applicable in any context.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Designing and Leading a Kick A** Tech Team ==&lt;br /&gt;
 &lt;br /&gt;
* Sibyl Schaefer,  sschaefer@rockarch.org, Rockefeller Archive Center&lt;br /&gt;
&lt;br /&gt;
New managers are often promoted without receiving management training, yet management is not something you just figure out. The experience of being expected to know how to manage, yet not being trained to do so often results in new managers feeling isolated and unsure how to move from making to managing. In this talk I’ll focus on my own managerial experience of designing and leading an archival tech team in a small independent archives. Topics covered will include hiring, delegating, creating a team culture, and leading people whose specialized knowledge exceeds your own. The talk take-aways should be applicable to managers and employees at large and small institutions alike.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==American (Archives) Horror Story: LTO Failure and Data Loss ==&lt;br /&gt;
 &lt;br /&gt;
* Rebecca Fraimow, rebecca_fraimow@wgbh.org, NDSR Resident, WGBH&lt;br /&gt;
* Casey Davis, casey_davis@wgbh.org, Project Manager, American Archive of Public Broadcasting, WGBH&lt;br /&gt;
&lt;br /&gt;
Here’s a story to send shivers down archival spines: when transferring video files off LTO for the American Archive project, WGBH got an initial failure rate of 57%.   After repeat tries, the rates improved; still, an unnervingly large percentage of files were never able to be transferred successfully.   Even more unnerving, going public with our horror story got a big response from other archives using LTO -- it seems like many institutions are having similarly scary results.   What are the real risks with LTO tape?  Are there steps that archives should be taking to better circumvent those risks?  This presentation will share information about LTO storage failures across archives world and discuss the process of investigating the problem at WGBH by testing different methods of data retrieval from LTO (direct and networked downloads, individual file retrieval and bulk data dump, use of LTO 4 and LTO 6 decks) and using checksum comparisons and file analysis and characterization tools such as ffprobe, mediainfo and exiftool to analyze failed files.  We'll also present whatever results we’ve managed to turn up by the time of Code4Lib!&lt;br /&gt;
&lt;br /&gt;
== PBCore in Action: Three Words, Not Two! ==&lt;br /&gt;
 &lt;br /&gt;
* Casey E. Davis,  casey_davis@wgbh.org, Project Manager, American Archive of Public Broadcasting, WGBH&lt;br /&gt;
* Andrew (Drew) Myers, andrew_myers@wgbh.org, Supervising Developer, WGBH&lt;br /&gt;
&lt;br /&gt;
In 2001, public media representatives developed the PBCore XML schema to establish a common language for managing metadata about their analog and digital audio and video. Since then, PBCore has been adopted by a number of organizations and archivists in the moving image archival community. The schema has also undergone a few revisions, but on more than one occasion it was left orphaned and with little to no support.&lt;br /&gt;
 &lt;br /&gt;
Times have changed. You may have heard the news that PBCore is back in action as part of the American Archive of Public Broadcasting initiative and via the Association of Moving Image Archivists (AMIA) PBCore Advisory Subcommittee. A group of archivists, public media stakeholders, and engaged users have come together to provide necessary support for the standard and to see to its further development. &lt;br /&gt;
 &lt;br /&gt;
At this session, we'll discuss the scope and uses of PBCore in digital preservation and access, report on the progress and goals of the PBCore Advisory Subcommittee, and share how the group (by the time of the conference) will have transformed the XML schema into an RDF ontology, bringing PBCore into the second decade of the 21st century. #PBHardcore&lt;br /&gt;
&lt;br /&gt;
==Collaborating to Avert the Digital Graveyard==&lt;br /&gt;
&lt;br /&gt;
* Harish Nayak, hnayak@library.rochester.edu, University of Rochester Libraries &lt;br /&gt;
* Sean Morris, smorris@library.rochester.edu, University of Rochester Libraries &lt;br /&gt;
&lt;br /&gt;
In 1995, the Robbins Library at the University of Rochester created a digital collection of Arthurian texts, images, and bibliographies. Together with medieval scholars, we recently completed the redesign and development of an interface for this collection. Using FRBR concepts, we re-conceptualized organization and editing workflow from the ground up in a mobile-first Drupal-based project. &lt;br /&gt;
&lt;br /&gt;
In this talk we will describe the project as well as how we utilized the techniques of work practice study and user centered design to maintain engagement with reluctant stakeholders, nontechnical scholars, and VERY meticulous graduate students.  Neither of us have previously presented at a Code4Lib conference.&lt;br /&gt;
&lt;br /&gt;
==Docker? VMs? EC2? Yes! With Packer.io==&lt;br /&gt;
&lt;br /&gt;
* Kevin S. Clarke, ksclarke@gmail.com, Digital Library Programmer, UCLA&lt;br /&gt;
&lt;br /&gt;
There are a lot of exciting ways to deploy a software stack nowadays. Many of our library systems are fully virtualized. Docker is a compelling alternative, and there are also cloud options like Amazon's EC2. This talk will introduce Packer.io, a tool for creating identical machine images for multiple platforms (e.g., Docker, VMWare, VirtualBox, EC2, GCE, OpenStack, et al.) all from a single source configuration.  It works well with Ansible, Chef, Puppet, Salt, and plain old Bash scripts. And, it's designed to be scriptable so that builds can be automated. This presentation will show how easy it is to use Packer.io to bring up a set of related services like Fedora 4, Grinder (for stress testing), and Graphite (for charting metrics). As an added value, all the buzzwords in this proposal will be defined and explained!&lt;br /&gt;
&lt;br /&gt;
== Technology on your Wrist: Cross-platform Smartwatch Development for Libraries ==&lt;br /&gt;
 &lt;br /&gt;
* [[User:sanderson|Steven Carl Anderson]], sanderson@bpl.org, Boston Public Library (no previously accepted prepared talks but have done lightning talks in the past)&lt;br /&gt;
&lt;br /&gt;
I'll be the first to admit: smartwatches are unlikely to completely revolutionize how a library provides online services. But I believe they still represent an opportunity to further enhance existing library services and resources in a unique way.&lt;br /&gt;
&lt;br /&gt;
At the Boston Public Library (BPL), we're in the initial phases of designing a modest smartwatch app to provide notifications for circulation availability and checked-out-material due-date alerts by the end of current year. We're starting small, but we plan to evolve the concept over time as we see what (if any) traction such an application gets with potential users. For example, we plan to explore the possibility of adding &amp;quot;nearest branch to my current location&amp;quot; functionality to this app.&lt;br /&gt;
&lt;br /&gt;
Despite the &amp;quot;development phase&amp;quot; of this application as of this writing, this talk is not being given by a novice. As a technology enthusiast, I've released [http://www.phdgaming.com/smartwatch_projects/ five smartwatch applications] and have had two of those be finalists in a [http://www.phdgaming.com/samsung_challenge/ Samsung sponsored development challenge]. This experience not only will allow for the BPL to avoid many beginner mistakes in its smartwatch app development but also gives a much more complete understanding of the smartwatch development ecosystem.&lt;br /&gt;
&lt;br /&gt;
This talk will explore the following questions:&lt;br /&gt;
&lt;br /&gt;
* What kinds of online library services could potentially be transformed or translated into the smartwatch/wearable domain? What kinds of services are better left alone? These questions are currently being explored and I'll talk about our plans and experiences. Included will be any statistical information from our application launch along with statistics from my personal development.&lt;br /&gt;
&lt;br /&gt;
* How to support all the different operating systems these devices run without painful modifications to your codebase. (There's Tizen that is used by Samsung's Gear 2 and Gear S, Android Wear that is used by most other non-Apple manufacturers, then there is Apple's upcoming smartwatch itself, etc.)&lt;br /&gt;
&lt;br /&gt;
* How to support different screen resolutions on such a small device. From round to rectangular to perfectly square, smartwatches come in all different shapes these days.&lt;br /&gt;
&lt;br /&gt;
* What are the app stores like on these platforms? As I support multiple applications through different distribution networks, a guide to navigating how to distribute one's app is included and I'll reveal how these systems work “behind the curtain.”&lt;br /&gt;
&lt;br /&gt;
* What are common issues and pitfalls to avoid when doing development? Tips on broken APIs and how to cope or optimizing your code will be included.&lt;br /&gt;
&lt;br /&gt;
==Seeing the Forest From the Trees: The Art of Creating Workflows for Digital Projects ==&lt;br /&gt;
 &lt;br /&gt;
* Jen LaBarbera, j.labarbera@neu.edu, NDSR Resident, Northeastern University&lt;br /&gt;
* Joey Heinen, joseph_heinen@harvard.edu, NDSR Resident, Harvard University&lt;br /&gt;
* Rebecca Fraimow, rebecca_fraimow@wgbh.org, NDSR Resident, WGBH&lt;br /&gt;
* Tricia Patterson, triciap@mit.edu, NDSR Resident, MIT&lt;br /&gt;
&lt;br /&gt;
We have to &amp;quot;turn projects into programs&amp;quot; in order to create a solid and sustainable digital preservation initiative...but what the heck does that even mean? What does that look like?&lt;br /&gt;
&lt;br /&gt;
In this talk, members of the inaugural Boston cohort of the National Digital Stewardship Residency will discuss one piece of our digital preservation test kitchen: our stabs at creating digital workflows that will (hopefully) help our institutions turn digital preservation projects into programs. Specifically, we will talk about how difficult it is to create a general and overarching workflow for digital preservation tasks (e.g. ingest into repositories, format migrations, etc.) that incorporates various technical tools while also taking into account the myriad and unending list of possible exceptions or special scenarios. Turning these complicated, specific processes into a simplified and generalized workflow is an art. We haven't necessarily perfected that art yet, but in this talk, we'll share what has worked for us -- and what hasn't. We’ll also touch on the importance of documentation, and achieving that delicate balance of adequately thorough documentation that doesn’t pose the risk of information avalanche. These processes often create more questions than answers, but we'll share the answers that we (and our mentors) have found along the way!&lt;br /&gt;
&lt;br /&gt;
== Annotations as Linked Data with Fedora4 and Triannon (a Real Use Case for RDF!) ==&lt;br /&gt;
&lt;br /&gt;
* Rob Sanderson, azaroth@stanford.edu,  Stanford University Libraries&lt;br /&gt;
* Naomi Dushay, ndushay@stanford.edu,  Stanford University Libraries&lt;br /&gt;
&lt;br /&gt;
Annotations on content resources allow users to contribute knowledge within the digital repository space.  W3C Open Annotation provides a comprehensive model for web annotation on all types of content, using Linked Data as a fundamental framework.  Annotation clients generate instances of this model, typically using a JSON serialization, but need to store that data somewhere using a standard interaction pattern so that best of breed clients, servers, and data can be mixed and matched.&lt;br /&gt;
&lt;br /&gt;
Stanford is using Fedora4 for managing Open Annotations, via a middleware component called Triannon.  Triannon receives the JSON data from the annotation client, and uses the Linked Data Platform API implementation in Fedora4 to create, retrieve, update and delete the constituent resources.  Triannon could be easily modified to use other LDP implementations, or could be modified to work with linked data other than annotations.&lt;br /&gt;
&lt;br /&gt;
== Helping Google (and scholars, researchers, educators, &amp;amp; the public) find archival audio ==&lt;br /&gt;
&lt;br /&gt;
* Anne Wootton, anne@popuparchive.org, Pop Up Archive (www.popuparchive.org)&lt;br /&gt;
&lt;br /&gt;
Culturally significant digital audio collections are hard to discover on the web. There are major barriers keeping this valuable media from scholars, researchers, and the general public:&lt;br /&gt;
&lt;br /&gt;
Audio is opaque: you can’t picture sound, or skim the words in a recording. &lt;br /&gt;
Audio is hard to share: there’s no text to interact with. &lt;br /&gt;
Audio is not text: but since text is the medium of the web, there’s no path for audiences to find content-rich audio.&lt;br /&gt;
Audio metadata is inconsistent and incomplete.&lt;br /&gt;
&lt;br /&gt;
At Pop Up Archive, we're helping solve this problem making the spoken word searchable. We began as a UC-Berkeley School of Information Master's thesis to provide better access to recorded sound for audio producers, journalists, and historians. Today, Pop Up Archive processes thousands of hours of sound from all over the web to create automatic, timestamped transcripts and keywords, working with media companies and institutions like NPR, KQED, HuffPost Live, Princeton, and Stanford. We're building collections of sound from journalists, media organizations, and oral history archives from around the world. Pop Up Archive is supported by the John S. and James L. Knight Foundation, the National Endowment for the Humanities, and 500 Startups.&lt;br /&gt;
&lt;br /&gt;
== Digital Content Integrated with ILS Data for User Discovery:  Lessons Learned ==&lt;br /&gt;
&lt;br /&gt;
* Naomi Dushay, ndushay@stanford.edu,  Stanford University Libraries&lt;br /&gt;
* Laney McGlohon, laneymcg@stanford.edu,  Stanford University Libraries&lt;br /&gt;
&lt;br /&gt;
So you want to expose your digital content in your discovery interface, integrated with the data from your ILS?  How do you make the best information user searchable?  How do you present complete, up to date search results with a minimum of duplicate entries?&lt;br /&gt;
&lt;br /&gt;
At Stanford, we have these cases and more:&lt;br /&gt;
* digital content with no metadata in ILS&lt;br /&gt;
* digital content for metadata in ILS&lt;br /&gt;
* digital content with its own metadata derived from ILS metadata.&lt;br /&gt;
&lt;br /&gt;
We will describe our efforts to accommodate multiple updatable metadata sources for materials in the ILS and our Digital Object Repository while presenting users with reduced duplication in SearchWorks.  Included will be some failures, some successes, and an honest assessment of where we are now.&lt;br /&gt;
&lt;br /&gt;
== Show All the Things: Kanban for Libraries == &lt;br /&gt;
&lt;br /&gt;
* Mike Hagedon, mhagedon@email.arizona.edu, University of Arizona Libraries (first-time presenter)&lt;br /&gt;
&lt;br /&gt;
The web developers at the University of Arizona Libraries had a problem: we were working on a major website rebuild project with no clear way to prioritize it against our other work. We knew we wanted to follow Agile principles and initially chose Scrum to organize and communicate about our work. But we found that certain core pieces of Scrum did not work for our team. Then we discovered Kanban, an Agile meta-process for organizing work (team or individual) that treats the work more as a flow than as a series of fixed time boxes. I’ll be talking about our journey toward finding a process that works for our team and how we’ve applied the principles of Kanban to better get our work done. Specifically, I'll discuss principles like how to visualize all your work, how to limit how much you’re doing (to get more done!), and how to optimize the flow of your work.&lt;/div&gt;</summary>
		<author><name>Michaelhagedon</name></author>	</entry>

	<entry>
		<id>https://wiki.code4lib.org/index.php?title=2015_Prepared_Talk_Proposals&amp;diff=41995</id>
		<title>2015 Prepared Talk Proposals</title>
		<link rel="alternate" type="text/html" href="https://wiki.code4lib.org/index.php?title=2015_Prepared_Talk_Proposals&amp;diff=41995"/>
				<updated>2014-11-07T20:08:36Z</updated>
		
		<summary type="html">&lt;p&gt;Michaelhagedon: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Code4lib 2015 is a loosely-structured conference that provides people working at the intersection of libraries/archives/museums/cultural heritage and technology with a chance to share ideas, be inspired, and forge collaborations. For more information about the Code4lib community, please visit http://code4lib.org/about/. &lt;br /&gt;
The conference will be held at the Portland Hilton &amp;amp; Executive Tower in Portland, Oregon, from February 9-12, 2015.&lt;br /&gt;
&lt;br /&gt;
'''Proposals for Prepared Talks:'''&lt;br /&gt;
&lt;br /&gt;
We encourage everyone to propose a talk.&lt;br /&gt;
 &lt;br /&gt;
Prepared talks are 20 minutes (including setup and questions), and should focus on one or more of the following areas:&lt;br /&gt;
* Projects you've worked on which incorporate innovative implementation of existing technologies and/or development of new software&lt;br /&gt;
* Tools and technologies – How to get the most out of existing tools, standards and protocols (and ideas on how to make them better)&lt;br /&gt;
* Technical issues - Big issues in library technology that should be addressed or better understood&lt;br /&gt;
* Relevant non-technical issues – Concerns of interest to the Code4Lib community which are not strictly technical in nature, e.g. collaboration, diversity, organizational challenges, etc.&lt;br /&gt;
&lt;br /&gt;
Proposals can be submitted through Friday, November 7, 2014 at 5pm PST (GMT−8). Voting will start on November 11, 2014 and continue through November 25, 2014. The URL to submit votes will be announced on the Code4Lib website and mailing list and will require an active code4lib.org account to participate. The final list of presentations will be announced in early- to mid-December.&lt;br /&gt;
&lt;br /&gt;
'''Proposals for Prepared Talks:'''&lt;br /&gt;
&lt;br /&gt;
Log in to the Code4lib wiki and edit this wiki page using the prescribed format. If you are not already registered, follow the instructions to do so.&lt;br /&gt;
Provide a title and brief (500 words or fewer) description of your proposed talk.&lt;br /&gt;
If you so choose, you may also indicate when, if ever, you have presented at a prior Code4Lib conference. This information is completely optional, but it may assist voters in opening the conference to new presenters.&lt;br /&gt;
&lt;br /&gt;
Please follow the formatting guidelines:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Talk Title: ==&lt;br /&gt;
 &lt;br /&gt;
* Speaker's name,  email address, and (optional) affiliation&lt;br /&gt;
* Second speaker's name, email address, and affiliation, if second speaker&lt;br /&gt;
&lt;br /&gt;
Abstract of no more than 500 words.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''Talk Proposals'''&lt;br /&gt;
== Zines + Gamification = Awesomest Metadata Literacy Outreach Event Ever! ==&lt;br /&gt;
 &lt;br /&gt;
* [http://www.JenniferHecker.info Jennifer Hecker], jenniferraehecker@gmail.com, [http://www.lib.utexas.edu/subject/zines University of Texas Libraries] &amp;amp; [http://www.AustinFanzineProject.org Austin Fanzine Project]&lt;br /&gt;
* [http://anomalily.net/ Lillian Karabaic], librarian@iprc.org, [http://www.iprc.org/ Independent Publishing Resource Center] (Portland)&lt;br /&gt;
 &lt;br /&gt;
In academic libraries, and elsewhere, the popularity of zine (a magazine produced for love, not profit) collections is on the rise. At the same time, metadata literacy is becoming an increasingly important skill, helping people navigate and understand digital environments and interactions. We have found a way to teach metadata literacy to the general public that isn’t super-boring – in fact, we’ve made it downright fun!&lt;br /&gt;
&lt;br /&gt;
First, volunteer zine librarian Lillian Karabaic of Portland’s Independent Publishing Resource Center facilitated the creation of a gamified cataloging interface for the IPRC’s annual Raiders of the Lost Archives backlog-busting 24-hour volunteer cataloging event.&lt;br /&gt;
&lt;br /&gt;
Then, archivist Jennifer Hecker facilitated the adaptation of the IPRC’s game for use in a similar, but also very different context – promoting UT Libraries newly-acquired zine collections. The main goal of the academic-library-based event was increasing excitement around the collections, but with the side goal of building metadata literacy, and introducing an understanding of library cataloging issues.&lt;br /&gt;
&lt;br /&gt;
The Texas modification also conforms to the xZINECOREx metadata schema developed by the national [http://zinelibraries.info/ Zine Librarians Interest Group], and triggered interesting conversations with the Libraries’s cataloging department about evolving metadata standards and how to incorporate the products of crowd-sourcing projects into existing workflows.&lt;br /&gt;
&lt;br /&gt;
Both games will be demoed.&lt;br /&gt;
&lt;br /&gt;
We have never presented at Code4lib.&lt;br /&gt;
&lt;br /&gt;
== Do the Semantic FRBRoo ==&lt;br /&gt;
* Rosie Le Faive, rlefaive@upei.ca, University of Prince Edward Island&lt;br /&gt;
&lt;br /&gt;
[http://www.islandora.ca Islandora] is great for creating repositories of any data type, but how can you model meaningful relationships between digital objects and use them to tell a story?&lt;br /&gt;
&lt;br /&gt;
At UPEI, I’m assembling an ethnography of Prince Edward Island’s traditional fiddle music that includes musical clips, video clips, oral histories, musical notation, images, and ethnographic commentaries. In order to present an exhibition-style site, I’m tying these digital objects together via the people, places, events, tunes and topics that they share or describe. &lt;br /&gt;
&lt;br /&gt;
To describe the relationships, I’m extending Islandora to use [http://www.cidoc-crm.org/frbr_inro.html FRBRoo], a vocabulary that combines the FRBR model with CIDOC-CRM, the the object-oriented museum documentation ontology. These modules being developed will allow other researchers to create a structured, navigable digital repository of diverse object types, that uses Islandora as an exhibition platform. &lt;br /&gt;
&lt;br /&gt;
== Our $50,000 Problem: Why Library School? ==&lt;br /&gt;
* Jennie Rose Halperin, jhalperin@mozilla.com, Mozilla Corporation&lt;br /&gt;
&lt;br /&gt;
57 library schools in the United States are churning out approximately 100 graduates per year, many with debt upwards of $50,000.  According to ONet, [http://www.inthelibrarywiththeleadpipe.org/2011/is-the-united-states-training-too-many-librarians-or-too-few-part-1/ 84% of library jobs in the US require an MLS.] The library profession is [http://dpeaflcio.org/programs-publications/issue-fact-sheets/library-workers-facts-figures/) 92% white and 82% female and entry-level librarians can expect to make $32,500 per year.]&lt;br /&gt;
&lt;br /&gt;
Contrasted with developers, who are almost [http://www.ncwit.org/blog/did-you-know-demographics-technical-women 90% male] and can expect to make [http://www.forbes.com/sites/jennagoudreau/2011/06/01/best-entry-level-jobs/ $70,000 in an entry-level position,] these numbers are dismal.&lt;br /&gt;
&lt;br /&gt;
According to a recent survey, the top skill that outgoing library students want to know is “programming” and yet many MLS programs still consider Microsoft Word an essential technology skill.&lt;br /&gt;
&lt;br /&gt;
What is going on here? Why do we accept this fate, where mostly female debt-burdened professionals continue to be thrown onto the work force without the education their expensive degrees promised?&lt;br /&gt;
&lt;br /&gt;
As a community we need to come together to stop this cycle. We need to provide better support and mentorship to diversify and keep the profession relevant and help librarianship move into the future it deserves.&lt;br /&gt;
&lt;br /&gt;
This talk will walk through the challenges of navigating a hostile employment environment as well as present models for better development and future state imagining.&lt;br /&gt;
&lt;br /&gt;
== No cataloging software? Need more than Dublin Core? No problem!: Experiences with CollectiveAccess ==&lt;br /&gt;
* [[User:SeanHendricks|Sean Q. Hendricks]], sqhendr@clemson.edu, Clemson University&lt;br /&gt;
* Rachel Wittmann, rwittma@clemson.edu, Clemson University&lt;br /&gt;
&lt;br /&gt;
Clemson University Libraries has implemented the open-source software CollectiveAccess for customized digital collection needs. CollectiveAccess is an open-source project with the goal of providing a flexible way to manage and publish museum and archival collections. There are several applications associated with the projects; most used are: Providence (for cataloging and entering metadata) and Pawtucket (for displaying objects in a collection for the public). It has many profiles readily available for installing with existing library standards, such as Dublin Core, and there is a robust syntax for creating your own profiles to fit custom tailored metadata schemas. Plus, the user interface allows you to modify the metadata profile quickly and easily.&lt;br /&gt;
&lt;br /&gt;
In this talk, we will discuss:&lt;br /&gt;
* Our experiences with installing Providence and creating an installation profile that satisfies the needs of many of the Clemson Libraries digital archiving processes. &lt;br /&gt;
* The stumbling blocks experienced in that process and how they were resolved.&lt;br /&gt;
* The available plugins sourcing widely used authorities, such as Library of Congress thesauri and GeoNames.org, and how they have been used by our projects. &lt;br /&gt;
* A brief overview of the export and import functions and also current workflow practices within Providence.&lt;br /&gt;
* Future plans &amp;amp; the role of CollectiveAccess at Clemson University Libraries&lt;br /&gt;
&lt;br /&gt;
== Getting ContentDM and Wordpress to Play Together ==&lt;br /&gt;
* [[User:SeanHendricks|Sean Q. Hendricks]], sqhendr@clemson.edu, Clemson University&lt;br /&gt;
&lt;br /&gt;
Clemson University Libraries has a very strong program for digitizing and archiving photographs, and the Digital Imaging team processes many hundreds of photographs every month. These images are managed using different methods, including ContentDM, a digital collection manager.&lt;br /&gt;
&lt;br /&gt;
ContentDM provides various methods for searching and displaying photographs, along with their metadata. However, recent initiatives have resulted in the need to leverage those collections into exhibits displayed on other library-related websites, such as our Special Collections unit. The Clemson Libraries has invested heavily in Wordpress as our content management system of choice, and it seemed most efficient not to have to export and import images into our Wordpress sites in order to provide exhibited images.&lt;br /&gt;
&lt;br /&gt;
Fortunately, ContentDM has provided an API to many of their functions, allowing the extraction of metadata and even rescaled images through URLs. This project has been developing a plugin for Wordpress that integrates with ContentDM through shortcodes that Wordpress editors can easily include in their content. These shortcodes allow editors to choose how many images, which images from which collections, thumbnail sizes, etc. to display in different gallery styles. Plans are for it to allow integration with different plugins such as Fancybox and Masonry.&lt;br /&gt;
&lt;br /&gt;
In this presentation, I will demonstrate the current state of the plugin and discuss future plans. &lt;br /&gt;
&lt;br /&gt;
==Refinery — An open source locally deployable web platform for the analysis of large document collections==&lt;br /&gt;
 &lt;br /&gt;
* [[User:DaeilKim|Daeil Kim]], The New York Times, daeil.kim@nytimes.com&lt;br /&gt;
&lt;br /&gt;
Refinery is an open source web platform for the analysis of large unstructured document collections. It extracts meaningful semantic themes within documents also known as &amp;quot;topics&amp;quot; which can be thought of as word clouds composed of terms that highly co-occur with one another. Once this semantic index is formed, one can extract relevant documents related to these topics and further refine their contents through a summarization process that allows users to search for phrases that are relevant to them within the corpus. The goal of Refinery is to make this whole process easier and to provide some of the latest scalable versions of these learning algorithms in an intuitive web-based interface. Refinery is also meant to be run locally, thus bypassing the need for securing document collections over the internet. The talk will go through some of the technologies involved and a demo of the app.&lt;br /&gt;
&lt;br /&gt;
For more info check out http://www.docrefinery.org.&lt;br /&gt;
&lt;br /&gt;
==Drupal 8 — Evolution &amp;amp; Revolution==&lt;br /&gt;
 &lt;br /&gt;
* [[User:Highermath|Cary Gordon]], The Cherry Hill Company, cgordon@chillco.com&lt;br /&gt;
&lt;br /&gt;
Drupal 8 is in beta and nearing release. Among its many features, it notably has become more developer friendly through its adoption of the Symfony PHP framework along with Symfony's outstanding set of libraries (like Guzzle) and tools (like Composer). And, in implementing the Twig theming system, it is can begin to escape PHPtemplate. These moves also make it easier to create headless systems that uses Angular.js and other systems for presentation, or even forgo presentation entirely.&lt;br /&gt;
&lt;br /&gt;
From the site-builder's perspective, Drupal 8 provides a much smother experience and makes it easier to build and implement site recipes.&lt;br /&gt;
&lt;br /&gt;
==Using GameSalad to Build a Gamified Information Literacy Mobile App for Higher Education==&lt;br /&gt;
 &lt;br /&gt;
* [[User:StanBogdanov|Stanislav 'Stan' Bogdanov]],  stan@stanrb.com, Adelphi University and [http://bogliollc.com Boglio LLC]&lt;br /&gt;
&lt;br /&gt;
GameSalad is a popular tool for developing mobile and desktop games with little actual programming. In this presentation, Stan Bogdanov breaks down the development process he followed while building [https://github.com/stanrb/mobiLit mobiLit], a mobile app with the goal of being the first open-source gamified information literacy app to be used as part of a college-level information literacy curriculum. He will go through the basics of using GameSalad to create an app that can be easily customized by non-programmers and the instructional principles used to teach the material in a mobile medium. Stan will also go through two qualitative design studies he did on the app and discuss their results and the lessons learned from building mobiLit. The session will conclude with an overview of the next steps for the [https://github.com/stanrb/mobiLit mobiLit project].&lt;br /&gt;
&lt;br /&gt;
==The Impossible Search: Pulling data from multiple unknown sources==&lt;br /&gt;
 &lt;br /&gt;
* Riley Childs, no official affiliation (currently a Senior in High School at Charlotte United Christian Academy), rchilds (AT) cucawarriors.com &lt;br /&gt;
&lt;br /&gt;
It's easy to search data you know the structure of, but what if you need to pull in data from sources that don't have a standard structure. The ability to search community events along with your standard catalog search results is an example, but often the only way to pull these events is through XML, JSON, (Insert structured format here), or even just raw html. But how do you get that structure? That simple question is what makes this impossible. The process to define and process this structure takes a lot of manual labor, especially if the data you are pulling is just HTML, and then every time you add data to the index you have to run all the data through a script to pull in data in a format Solr or an other index can use. This talk will focus on Solr, but the principles explained will apply to many other indexes.&lt;br /&gt;
&lt;br /&gt;
==What! You're Not Using Docker?==&lt;br /&gt;
 &lt;br /&gt;
* [[User:Highermath|Cary Gordon]], The Cherry Hill Company, cgordon@chillco.com&lt;br /&gt;
&lt;br /&gt;
Boring part: Docker[1] is a container system that provides benefits similar to virtualization with only a fraction of the overhead. Scintillating part: Docker can host between four to six times the number of service instances than systems such as Xen or VMWare on a given piece of hardware. But thats not all! Docker also makes it simple(r) to create transportable instances, so you can spin up development servers on your laptop.&lt;br /&gt;
&lt;br /&gt;
*[1]https://www.docker.com/&lt;br /&gt;
&lt;br /&gt;
== Video Accessibility, WebVTT, and Timed Text Track Tricks ==&lt;br /&gt;
&lt;br /&gt;
* Jason Ronallo, jronallo@gmail.com, NCSU Libraries&lt;br /&gt;
&lt;br /&gt;
Video on the Web presents new challenges and opportunities. How do you make your video more accessible to those with various disabilities and needs? I'll show you how. This presentation will focus on how to write and deliver captions, subtitles, audio descriptions, and timed metadata tracks for Web video using the WebVTT W3C standard. Encoding timed text tracks in this way opens up opportunities for new functionality on your websites beyond accessibility. The presentation will show some examples of the potential for using timed text tracks in creative ways. I'll cover all the HTML and JavaScript you will need to know as well as some of the CSS and other bits you could probably do without but are too fun to pass up.&lt;br /&gt;
&lt;br /&gt;
== Categorizing Records with Random Forests ==&lt;br /&gt;
 &lt;br /&gt;
* Geoffrey Boushey, geoffrey.boushey@ucsf.edu, UCSF Library&lt;br /&gt;
Academic libraries are increasingly responsible for providing ingest, search, discovery, and analysis for data sets.  Emerging techniques from data science and machine learning can provide librarians and developers with an opportunity to generate new insights and services from these document collections.  This presentation will provide a brief overview of common machine learning classification techniques, then dive into a more detailed example using a random forest to assign keywords to research data sets.  The talk will emphasize the insight that can be gained from machine learning rather than the inner workings of the algorithms.  The overall goal of this presentation is to provide librarians and developers with the context to recognize an opportunity to apply machine learning categorization techniques at their home campuses and organizations.  &lt;br /&gt;
&lt;br /&gt;
== Data Science in Libraries ==&lt;br /&gt;
 &lt;br /&gt;
* Devon Smith, smithde@oclc.org, OCLC&lt;br /&gt;
&lt;br /&gt;
Data Science is increasing in buzz and hype. I'll go over what it is, what it isn't, and how it fits in libraries.&lt;br /&gt;
&lt;br /&gt;
== PDF metadata extraction for academic literature == &lt;br /&gt;
&lt;br /&gt;
* Kevin Savage, kevin.savage at mendeley.com, Mendeley&lt;br /&gt;
* Joyce Stack, joyce.stack at mendeley.com, Mendeley&lt;br /&gt;
&lt;br /&gt;
Mendeley recently added a, &amp;quot;document from file,&amp;quot; endpoint to its API which attempts to extract metadata such as title and authors directly from PDF files. This talk will describe at a high level the machine learning methods we used including how we measured and tuned our model. We will then delve more deeply into our stack, the tools we used, some of the things that didn't work and why PDFs are the worst thing ever to compute over.&lt;br /&gt;
&lt;br /&gt;
== Giving Users What They Want: Record Grouping in VuFind ==&lt;br /&gt;
 &lt;br /&gt;
* Mark Noble,  mark@marmot.org, [//www.marmot.org Marmot Library Network]&lt;br /&gt;
&lt;br /&gt;
In 2013, Marmot did extensive usability studies with patrons to determine what was difficult in the catalog.  Many patrons had problems sifting through all of the various formats and editions of a title.  In 2014 we developed a method for [//mercury.marmot.org/Union/Search?lookfor=divergent grouping records] so only a single work is shown in search results and all formats and editions are listed under that work.  We will discuss our definition of a 'work' based on FRBR principles; combining meta data from MARC records with metadata from other sources like OverDrive; the technical details of Record Grouping; the design decisions made during implementation; and the reaction from users and staff.&lt;br /&gt;
&lt;br /&gt;
== Topic Space: a mobile augmented reality recommendation app ==&lt;br /&gt;
&lt;br /&gt;
* Jim Hahn, jimhahn@illinois.edu, University of Illinois at Urbana-Champaign&lt;br /&gt;
&lt;br /&gt;
The Topic Space module (http://minrvaproject.org/modules_topicspace.php ) was developed with an IMLS Sparks! Grant to investigate augmented reality technologies for in-library recommendations. The funding allowed for sustained university community collaboration by the University Library, the Graduate School of Library and Information Science, as well as graduate student programmers sourced from the Department of Computer Science. Collaborators designed app functionality and identified relevant open source libraries that could power optical character recognition (OCR) functionality from within the mobile phone.&lt;br /&gt;
&lt;br /&gt;
Topic space allows a user to take a picture of an item's call number in the book stacks. The module will show the user other books that are relevant but that are not shelved nearby. It can also show users books that are normally shelved here but that are currently checked out. Recommendations are based on Library of Congress subject headings and ILS circulation data which indicate recommendation candidates based on total check-outs. &lt;br /&gt;
&lt;br /&gt;
Research questions included development of back end (server-side) pattern matching algorithms for recommendations, and a rapid formative evaluation of interface design that would provide optimal user experience for navigation of the book stacks as a context to recommendations.&lt;br /&gt;
&lt;br /&gt;
Along with the Topic Space native app, grant collaborators prototyped web based recommendations which could serve as a new way of providing readers advisory and “more like this” recommendations from discovery interfaces accessed through desktop browsers. Outcomes of the grant include the availability of the [https://play.google.com/store/apps/details?id=edu.illinois.ugl.minrva Topic Spaces module within Minrva app on the Android Play store] and an experimental [http://backbonejs.org/ Backbone.js] based [http://minrva-dev.library.illinois.edu Topic Space web app].&lt;br /&gt;
&lt;br /&gt;
== Leveling Up Your Git Workflow ==&lt;br /&gt;
&lt;br /&gt;
* Megan Kudzia, moneill@albion.edu, Albion College Library&lt;br /&gt;
* Kate Sears, eks11@albion.edu, Albion College Library&lt;br /&gt;
&lt;br /&gt;
Have you started experimenting with Git on your own, but now you need to include others in your projects? Learn from our mistakes! Transitioning from a one-person git workflow and repo structure, to a structure that includes multiple people (including student workers), is not for the faint of heart. We'll talk about why we decided to work this way, our path to developing a git culture amongst ourselves, conceptual and technical difficulties we've faced, what we learned, and where we are now. Also with pretty pictures (aka workflow drawings).&lt;br /&gt;
&lt;br /&gt;
== Drone Loaning Program: Because Laptops are so last century ==&lt;br /&gt;
&lt;br /&gt;
 * Uche Enwesi, uenwesi@umd.edu, University of Maryland Libraries&lt;br /&gt;
 * Francis Kayiwa, fkayiwa@umd.edu, University of Maryland Libraries&lt;br /&gt;
&lt;br /&gt;
At Univ. Maryland we are in the very early stages of looking into allowing our student body get their hands on a drone. Yes that's right we will let students take out a drone for n amount of hours to work on projects of their choosing. The talk will talk about the logistics of getting a program of this sort from concept to &amp;quot;Is the drone available?&amp;quot;. If people sign waivers we will also promise not to crash the drone into code4lib attendees.&lt;br /&gt;
&lt;br /&gt;
== Got Git? Getting More Out of Your GitHub Repositories ==&lt;br /&gt;
&lt;br /&gt;
 * Terry Brady, twb27@georgetown.edu, Georgetown University Library&lt;br /&gt;
&lt;br /&gt;
This presentation will discuss how librarians, developers, and system administrators at Georgetown University are maximizing their use of the public and private GitHub repositories. &lt;br /&gt;
&lt;br /&gt;
In additional to all of the great benefits of using Git for code management, the GitHub interface provides a powerful set of tools to showcase a project and to keep your users informed of developments to your project.  These tools can assist with marketing and outreach - turning your code repository into a focus of conversation!&lt;br /&gt;
&lt;br /&gt;
* [http://georgetown-university-libraries.github.io/File-Analyzer/ Style-able Project Pages]&lt;br /&gt;
* [https://github.com/Georgetown-University-Libraries/File-Analyzer/wiki Project Wikis]&lt;br /&gt;
* [https://github.com/Georgetown-University-Libraries/Georgetown-University-Libraries-Code/releases Project Release Notes/Portfolios]&lt;br /&gt;
* [https://rawgit.com/Georgetown-University-Libraries/Georgetown-University-Libraries-Code/master/samples/GoogleSpreadsheetFilter.html Web Resources That Can Be Directly Requested]&lt;br /&gt;
* Gists for code sharing&lt;br /&gt;
* Private Repositories and Organizational Groups&lt;br /&gt;
* Pull Request Conversation Tracking&lt;br /&gt;
* Customized Issue management&lt;br /&gt;
&lt;br /&gt;
== Quick Wins for Every Department in the Library - File Analyzer! ==&lt;br /&gt;
&lt;br /&gt;
 * Terry Brady, twb27@georgetown.edu, Georgetown University Library&lt;br /&gt;
&lt;br /&gt;
The Georgetown University Library has customized workflows for nearly every department in our library with a single code base.&lt;br /&gt;
* Analyzing Marc Records for the Cataloging department&lt;br /&gt;
* Transferring ILS invoices for the University Account System for the Acquisitions department &lt;br /&gt;
* Delivering patron fines to the Bursar’s office for the Access Service department&lt;br /&gt;
* Summarizing student worker timesheet data for the Finance department&lt;br /&gt;
* Validating COUNTER compliant reports for the Electronic Resources department&lt;br /&gt;
* Generating ingest packages for the Digital Services department&lt;br /&gt;
* Validating checksums for the Preservation department&lt;br /&gt;
&lt;br /&gt;
Learn how you can customize the [http://georgetown-university-libraries.github.io/File-Analyzer/ File Analyzer] to become a hero in your library!&lt;br /&gt;
&lt;br /&gt;
==The Geospatial World is Moving from Maps *on* the Web to Maps *of* the web. Libraries can too==&lt;br /&gt;
 &lt;br /&gt;
* [[User:Copystar|Mita Williams]], mita@uwindsor.ca, User Experience Librarian, University of Windsor&lt;br /&gt;
&lt;br /&gt;
The transition from paper maps to digital ones changed much more than the maps themselves; it changed the very foundation of how we work and how we find each other. Now maps are transforming again.  The Geospatial World is moving from GIS systems that are institutionally-focused, expensive, feature-burdened, and binds data into a complicated and demanding user-hostile interface. From this transition from digital to web-based digital geospatial tools has come growth and development in new forms of map-based investigative journalism, activism, scholarship, and business ventures. This talk will highlight the conditions and strategies that made these changes possible as a means to draw a path by which librarians through our own work may follow, dragons notwithstanding. &lt;br /&gt;
&lt;br /&gt;
== Building Your Own Federated Search ==&lt;br /&gt;
&lt;br /&gt;
* Rich Trott, Richard.Trott@ucsf.edu, UC San Francisco&lt;br /&gt;
&lt;br /&gt;
Advances in modern browsers have created some interesting possibilities for federated search. This presentation will cover common techniques and pitfalls in building a federated search. We will discuss what principles guided our decisions when implementing our own federated search. We will show tools we've built and our findings from building and using experimental prototypes.&lt;br /&gt;
&lt;br /&gt;
Your higher education institution likely offers dozens of online resources for educators, students, researchers, and the public. And each of these online resources likely has its own search tool. But users can't be expected to search in dozens of different interfaces to find what they're looking for. A typical solution for this issue is federated search. &lt;br /&gt;
&lt;br /&gt;
==  Indexing Linked Data with LDPath ==&lt;br /&gt;
&lt;br /&gt;
* Chris Beer, cabeer@stanford.edu, Stanford University Libraries&lt;br /&gt;
&lt;br /&gt;
LDPath [1] is a simple query language for indexing linked open data, with support for caching, content negotiation, and integration with non-RDF endpoints. This talk will demonstrate the features and potential of the language and framework to index a resource with links into id.loc.gov, viaf.org, geonames.org, etc to build an application-ready document.&lt;br /&gt;
&lt;br /&gt;
[1] http://marmotta.apache.org/ldpath/language.html&lt;br /&gt;
&lt;br /&gt;
== Show Me the Money: Integrating an LMS with Payment Providers ==&lt;br /&gt;
 &lt;br /&gt;
* Josh Weisman,  Josh.Weisman@exlibrisgroup.com, Development Director-Resources Management, Ex Libris Group&lt;br /&gt;
&lt;br /&gt;
In order to provide an easy and convenient way for patrons to pay fines, we are exploring ways to integrate the library management system with online payment providers such as PayPal. With many LMS systems being designed and developed for the cloud, we should be able to provide the frictionless user experience our patrons have come to expect from online transactions. In this session we'll discuss strategies for integration and review a sample application which uses REST APIs from a library management system to integrate with PayPal.&lt;br /&gt;
&lt;br /&gt;
== Shibboleth Federated Authentication for Library Applications: ==&lt;br /&gt;
&lt;br /&gt;
* Scott Fisher, scott.fisher@ucop.edu, California Digital Library&lt;br /&gt;
* Ken Weiss, ken.weiss@ucop.edu, California Digital Library&lt;br /&gt;
&lt;br /&gt;
Shibboleth is the most widely-used method to provide single-sign-on authentication to academic applications where users come from many different institutions. Shibboleth, the InCommon education and research trust framework, and the SAML protocol comprise a very powerful - but very complicated - solution to this very complicated problem. Scott and Ken have implemented Shibboleth for multiple library applications. They will share their understanding of the good, the bad, and the underlying spaghetti that makes it all work. Ken will discuss some of the technical aspects of the solution, touching on optimal and non-optimal use cases, administrative challenges, and authorization concerns. Scott will describe the implementation pattern for multi-institution single-sign-on that the California Digital Library has evolved, using the recently released Dash application (http://dash.cdlib.org) as an example.&lt;br /&gt;
&lt;br /&gt;
==Scientific Data: A Needs Assessment Journey==&lt;br /&gt;
 &lt;br /&gt;
*[[User:VickySteeves| Vicky Steeves]], vsteeves@amnh.org, American Museum of Natural History&lt;br /&gt;
&lt;br /&gt;
While surveying digital research and collections data in the research science divisions at the American Museum of Natural History in NYC (as a part of my [http://ndsr.nycdigital.org/ National Digital Stewardship Residency] project), I have come across the big data hogs (genome sequencing and CT scanning) and the little pieces of data (images, publications), all equally important to not only scientific discovery, but as nodes in the history of science. &lt;br /&gt;
&lt;br /&gt;
In this session, I will discuss the development of my needs assessment surveys for scientific datasets and the interview process with Museum curators and researchers as background, seguing into an explanation of the results. I will then combine my findings into preliminary selection criteria to choose tools for digital preservation and management unique to scientific datasets. This will brooke a discussion on emerging standards, tools, and technologies in big data, specific to research science. &lt;br /&gt;
&lt;br /&gt;
I will conclude with preliminary findings on emerging technology that can be used to answer concerns surrounding the management and digital preservation of these data. I am hoping the Q&amp;amp;A session can be used to both answer questions about my project, and function as a way for you (the larger tech-savy library community)  to discuss the tools I’ve touched on in this talk. &lt;br /&gt;
&lt;br /&gt;
== Feminist Human Computer Interaction (HCI) in Library Software ==&lt;br /&gt;
 &lt;br /&gt;
* Bess Sadler,  bess@stanford.edu, Stanford University Libraries&lt;br /&gt;
&lt;br /&gt;
Libraries are not neutral repositories of knowledge. Library classification systems and search technologies tend to reflect the inequalities, biases, ethnocentrism, and power imbalances of the societies in which they are built [1]. How might we better resist these tendencies in the library software we create? This talk will examine some qualities of feminist HCI (pluralism, self-disclosure, participation, ecology, advocacy, and embodiment) [2] through the lens of library software. &lt;br /&gt;
&lt;br /&gt;
[1] Olson, Hope A. (2002). The Power to Name: Locating the Limits of Subject Representation in Libraries. Dordrecht, The Netherlands: Kluwer Academic Publishers.&lt;br /&gt;
&lt;br /&gt;
[2] Bardzell, Shaowen. Feminist HCI: Taking Stock and Outlining an Agenda for Design. CHI 2010: HCI For All. http://dmrussell.net/CHI2010/docs/p1301.pdf&lt;br /&gt;
&lt;br /&gt;
== Heiðrún: DPLA's Metadata Harvesting, Mapping and Enhancement System ==&lt;br /&gt;
&lt;br /&gt;
* Audrey Altman, audrey at dp.la, Digital Public Library of America&lt;br /&gt;
* Gretchen Gueguen, gretchen at dp.la, Digital Public Library of America&lt;br /&gt;
* Mark Breedlove, mb at dp.la, Digital Public Library of America&lt;br /&gt;
&lt;br /&gt;
The Digital Public Library of America aggregates metadata for over 8 million objects from more than 24 direct partners, or Hubs, using its Metadata Application Profile (MAP), an RDF metadata application profile based on the Europeana Data Model. After working with the initial system for harvesting, mapping and enhancing our Hub’s metadata for a year, we realized that it was inadequate for working with data at this scale. There were architectural issues; it was opaque to non-developer and partner staff; there were inadequate tools for quality assurance and analysis; and the system was unaware that it was working with RDF data. As the network of Hubs expanded and we ingested more metadata, it became harder and harder to know when or why a harvest, a mapping task, or an enrichment went wrong because the tools for quality assurance were largely inadequate. &lt;br /&gt;
&lt;br /&gt;
The DPLA Content and Technology teams decided to develop a new system from the ground up to address those problems. Development of Heidrun, the internal version of the new system, started in October 2014. Heidrun’s goals are to make it easier for us to harvest and map metadata from various sources and in variety of schemas to the DPLA MAP, to better enrich that metadata using external data sources, and to actively involve our partners in the ingestion process through access to better QA tools. Heidrun and its componentry are built on Ruby on Rails, Blacklight, and ActiveTriples. Our presentation will give some background on our design principles and processes used during development, the architecture of the system, and its functionality. We plan to release a version of Heidrun and its components as a generalized metadata aggregation system for use by DPLA Hubs and others working to aggregate cultural heritage metadata.&lt;br /&gt;
&lt;br /&gt;
== OS or GTFO: Program or Perish ==&lt;br /&gt;
*Tessa Fallon, tessa.fallon@gmail.com&lt;br /&gt;
&lt;br /&gt;
Description TBD&lt;br /&gt;
&lt;br /&gt;
== Creating Dynamic— and Cheap!— Digital Displays with HTML 5 Authoring Software ==&lt;br /&gt;
* Chris Woodall, cmwoodall@salisbury.edu, Salisbury University Libraries&lt;br /&gt;
Would your library like to have large digital signage that displays dynamic information such as library hours, weather, room availability, and more? Have you looked into purchasing large digital signage, only to be turned off by the high price tag and lack of customization available with commercial solutions? Our library has developed a cheap and effective alternative to these systems using HTML 5 authoring software, a large TV, and freely-available APIs from Google, Springshare, and others. At this session, you’ll learn about the system that we have in place for displaying dynamic and easily-updatable information on our library’s large digital display, and how you can easily create something similar for your library.&lt;br /&gt;
&lt;br /&gt;
== REPOX: Metadata Blender ==&lt;br /&gt;
 &lt;br /&gt;
* John Mignault, jmignault@metro.org, Empire State Digital Network&lt;br /&gt;
&lt;br /&gt;
With the growth in the number of hubs providing metadata to the Digital Public Library of America, many of them are using REPOX, a tool originally created for the Europeana project, to aggregate disparate metadata feeds and transform them into formats suitable for ingest into DPLA. The Empire State Digital Network, the forthcoming DPLA service hub for NY state, is using it to prepare for our first ingest into DPLA in early 2015.  We'll take a look at REPOX and its capabilities and how it can be useful for ingesting and transforming metadata, and also discuss some things we've learned in massaging widely varied metadata feeds.&lt;br /&gt;
&lt;br /&gt;
== Beyond Open Source ==&lt;br /&gt;
&lt;br /&gt;
* Jason Casden, jmcasden@ncsu.edu, NCSU Libraries&lt;br /&gt;
* Bret Davidson, bddavids@ncsu.edu, NCSU Libraries&lt;br /&gt;
&lt;br /&gt;
The Code4Lib community has produced an increasingly impressive collection of open source software over the last decade, but much of this creative work remains out of reach for large portions of the library community. Do the relatively privileged institutions represented by a majority of Code4Lib participants have a professional responsibility to support the adoption of their innovations?&lt;br /&gt;
&lt;br /&gt;
Drawing from old and new software packaging and distribution approaches (from freeware to Docker), we will propose extending the open source software values of collaboration and transparency to include the wide and affordable distribution of software. We believe this will not only simplify the process of sharing our applications within the Code4Lib community, but also make it possible for less well resourced institutions to actually use our software. We will identify areas of need, present our experiences with the users of our own open source projects, discuss our attempts to go beyond open source, and make an argument for the internal value of supporting and encouraging a vibrant library ecosystem.&lt;br /&gt;
&lt;br /&gt;
[[Category:Code4Lib2015]] &lt;br /&gt;
[[Category:Talk Proposals]]&lt;br /&gt;
&lt;br /&gt;
== Making It Work: Problem Solving Using Open Source at a Small Academic Library ==&lt;br /&gt;
 &lt;br /&gt;
* Adam Strohm, astrohm@iit.edu, Illinois Institute of Technology&lt;br /&gt;
* Max King, mking9@iit.edu, Illinois Institute of Technology&lt;br /&gt;
&lt;br /&gt;
The Illinois Institute of Technology campus was added to the National Register of Historic Places in 2005, and contains a building, Mies van der Rohe's S.R. Crown Hall, that was named a National Historic Landmark in 2001. Creating a digital resource that can adequately showcase the campus and its architecture is challenge enough in and of itself, but doing so as a two-person team of relative newcomers, at a university library without dedicated programmers on staff, ups the ante considerably.&lt;br /&gt;
The challenges of technical know-how, staff time, and funding are nothing new to anyone working on digital projects at a university library, and are amplified when doing so at a smaller institution. This talk covers the conception, development, and design of the campus map site that was built, concentrating on the problem-solving strategies developed to cope with limited technical and financial resources.&lt;br /&gt;
We'll talk about our approach to development with Open Source software, including Omeka, along with the Neatline and Simile Timeline plugins. We'll also discuss the juggling act of designing for mobile mapping functionality without sacrificing desktop design, weighing the costs of increased functionality versus our ability to time-effectively include that functionality, and the challenge of building a site that could be developed iteratively, with an eye towards future enhancement and sustainability. Finally, we’ll provide recommendations for other librarians at smaller institutions for their own efforts at digital development.&lt;br /&gt;
&lt;br /&gt;
== Recording Digitization History: Metadata Options for the Process History of Audiovisual Materials ==&lt;br /&gt;
 &lt;br /&gt;
* Peggy Griesinger, peggy_griesinger@moma.org, Museum of Modern Art&lt;br /&gt;
&lt;br /&gt;
The Museum of Modern Art has amassed a large collection of audiovisual materials over its many decades of existence. In order to preserve these materials, much of the audiovisual collection has been digitized. This is a complex process involving numerous steps and devices, and the methods used for digitization can have an effect on the quality of the file that is preserved. Therefore, knowing exactly how something was digitized is critical for future stewards of these objects to be able to properly care for and preserve them. However, detailed technical information about the processes involved in the digitization of audiovisual materials is not defined explicitly in most metadata schemas used for audiovisual materials. In order to record process history using existing metadata standards, some level of creativity is required to allow existing standards to express this information.&lt;br /&gt;
&lt;br /&gt;
This talk will detail different metadata standards, including PBCore, PREMIS, and reVTMD, that can be implemented as methods of recording this information. Specifically, the talk will examine efforts to integrate this metadata into the Museum of Modern Art’s new digital repository, the DRMC. This talk will provide background on the DRMC as well as MoMA’s specific institutional needs for process history metadata, then discuss different metadata implementations we have considered to document process history.&lt;br /&gt;
&lt;br /&gt;
== Pig Kisses Elephant: Building Research Data Services for Web Archives ==&lt;br /&gt;
 &lt;br /&gt;
* Jefferson Bailey,  jefferson@archive.org, Internet Archive&lt;br /&gt;
* Vinay Goel, vinay@archive.org, Internet Archive&lt;br /&gt;
&lt;br /&gt;
More and more libraries and archives are creating web archiving programs.  For both new and established programs, these archives can consist of hundreds of thousands, if not millions, of born-digital resources within a single collection; as such, they are ideally suited for large-scale computational study and analysis. Yet current access methods for web archives consist largely of browsing the archived web in the same manner as browsing the live web and the size of these collections and complexity of the WARC format can make aggregate analysis difficult. This talk will describe a project to create new ways for users and researchers to access and study web archives by offering extracted and post-processed datasets derived from web collections. Working with the 325+ institutions and their 2600+ collections within the Archive-It service, the Internet Archive is building methods to deliver a variety of datasets culled from collections of web content, including extracted metadata packaged in JSON, longitudinal link graph data, named entities, and other types of data. The talk will cover the technical details of building dataset production pipelines with Apache Pig, Hadoop, and tools like Stanford NER, the programmatic aspects of building data services for archives and researchers, and ongoing work to create new ways to access and study web archives.&lt;br /&gt;
&lt;br /&gt;
== Awesome Pi, LOL! ==&lt;br /&gt;
&lt;br /&gt;
* Matt Connolly, mconnolly@cornell.edu, Cornell University Library&lt;br /&gt;
* Jennifer Colt, jrc88@cornell.edu, Cornell University Library&lt;br /&gt;
&lt;br /&gt;
Inspired by Harvard Library Lab’s “Awesome Box” project, Cornell’s Library Outside the Library (LOL) group is piloting a more automated approach to letting our users tell us which materials they find particularly stunning. Armed with a Raspberry Pi, a barcode scanner, and some bits of kit that flash and glow, we have ventured into the foreign world of hardware development. This talk will discuss what it’s like for software developers and designers to get their hands dirty, how patrons are reacting to the Awesomizer, and LOL’s not-afraid-to-fail philosophy of experimentation.&lt;br /&gt;
&lt;br /&gt;
== You Gotta Keep 'em Separated: The Case for &amp;quot;Bento Box&amp;quot; Discovery Interfaces ==&lt;br /&gt;
 &lt;br /&gt;
* Jason Thomale,  jason.thomale@unt.edu, University of North Texas Libraries&lt;br /&gt;
&lt;br /&gt;
I know, I know--proposing a talk about Resource Discovery is like, ''so'' 2010.&lt;br /&gt;
&lt;br /&gt;
The thing is, practically all of us--in academic libraries at least--have a similar set up for discovery, with just a few variations, and so talking about it still seems useful. Stop me if this sounds familiar. You've got a single search box on the library homepage as a starting point for discovery. And it's probably a tabbed affair, with an option for searching the catalog for books, an option for searching a discovery service for articles, an option for searching databases, and maybe a few others. Maybe you have an option to search everything at once--probably the default, if you have it. And, if you're a crazy hepcat, maybe you ''only'' have your one search that searches everything, with no tabs.&lt;br /&gt;
&lt;br /&gt;
Now, the question is, for your &amp;quot;everything&amp;quot; search, are you doing a combined list of results, or are you doing it bento-box style, with a short results list from each category displayed in its own compartment?&lt;br /&gt;
&lt;br /&gt;
At UNT, we've been holding off on implementing an &amp;quot;everything&amp;quot; search, for various reasons. One reason is that the evidence for either style hasn't been very clear. There's this persistent paradox that we just can't reconcile: users tell us, through word and action, that they prefer searching Google, yet, libraries aren't Google, and there are valid design reasons why we shouldn't try to oversimplify our discovery interfaces to be like Google. And there's user data that supports both sides.&lt;br /&gt;
&lt;br /&gt;
Holding off on making this decision has granted us 2 years of data on how people use our tabbed search interface that does ''not'' include an &amp;quot;everything&amp;quot; search. Recently I conducted a thorough analysis of this data--specifically the usage and query data for our catalog and discovery system (Summon). And I think it helps make the case for a bento box style discovery interface. To be clear, it isn't exactly the smoking gun that I was hoping for, but the picture it paints I think is telling. At the very least, it points away from a combined-results approach.&lt;br /&gt;
&lt;br /&gt;
I'm proposing a talk discussing the data we've collected, the trends we've seen, and what I think it all means--plus other reasons that we're jumping on the &amp;quot;bento box&amp;quot; discovery bandwagon and why I think &amp;quot;bento box&amp;quot; is at this point the path that least sells our souls.&lt;br /&gt;
&lt;br /&gt;
== Don’t know about you, but I’m feeling like SHA-2!: Checksumming with Taylor Swift ==&lt;br /&gt;
 &lt;br /&gt;
* Ashley Blewer!, ashley.blewer@gmail.com&lt;br /&gt;
&lt;br /&gt;
Checksum technology is used all over the place, from git commits to authenticating Linux packages. It is most commonly used in the digital preservation field to monitor materials in storage for changes that will occur over time or used in the transmission of files during duplication. But do you even checksum, bro? I want this talk to move checksums from a position of mysterious macho jargon to something everyone can understand and want to use. I think a lot of people have heard of checksum but don’t know where to begin when it comes to actually using it at their institution. And cryptography is hella intimidating! This talk will cover what checksums are, how they can be integrated into a library or archival workflow, protecting collections requiring additional levels of security, algorithms used to verify file fixity and how they are different, and other aspects of cryptographic technology. Oh, and please note that all points in this talk will be emphasized or lightly performed through Taylor Swift lyrics. Seriously, this talk will consist of at least 50% Taylor Swift. Can you, like, even?&lt;br /&gt;
&lt;br /&gt;
== Level Up Your Coding with Code Club (yes, you can talk about it) ==&lt;br /&gt;
&lt;br /&gt;
* Coral Sheldon-Hess, coral@sheldon-hess.org&lt;br /&gt;
&lt;br /&gt;
Reading code is a necessary part of becoming a better developer. It gives you more experience and more insight into How Things Are (or Aren't) Done; it builds your intuition about how to solve problems with code; and it increases your confidence that you, too, can tackle whatever technological problems you're facing.&lt;br /&gt;
&lt;br /&gt;
But you don't have to read code alone! (Which is good. It's really not fun to read code alone.) &lt;br /&gt;
&lt;br /&gt;
In late 2014, a group of librarians formed two Code Clubs, inspired by [http://bloggytoons.com/code-club/ this talk by Saron] (of Bloggytoons fame). I'd like to tell you about how we've structured our Code Clubs, what has gone well, what we've learned, and what you need to do to form your own Code Club. I'll share a list of the codebases we've looked at, too, to help you get your own Code Club off the ground! &lt;br /&gt;
&lt;br /&gt;
== The Growth of a Programmer ==&lt;br /&gt;
 &lt;br /&gt;
* [[User:jgo | Joshua Gomez]], Getty Research Institute, jgomez@getty.edu&lt;br /&gt;
&lt;br /&gt;
Just like other creative endeavors, software developers can experience periods of great productivity or find themselves in a rut. After contemplating the alternating periods in my own career I've noticed several factors that have effected my own professional growth and happiness, including: mentorship, structure, community, teamwork, environment, formal education, etc. Not all of the factors need to be present at all times; but some mixture of them is critical for continued growth. In this talk, I will articulate these factors, discuss how they can effect a developer's career, and how they can be sought out when missing. This talk is aimed at both new developers looking to strike their own path as well as the veterans that lead or mentor them.&lt;br /&gt;
&lt;br /&gt;
== Developing a Fedora 4.0 Content Model for Disk Images ==&lt;br /&gt;
&lt;br /&gt;
* Matthew Farrell, matthew.j.farrell@duke.edu, Duke University Libraries&lt;br /&gt;
* Alexandra Chassanoff, achass@email.unc.edu, BitCurator Access Project Manager&lt;br /&gt;
&lt;br /&gt;
As the acquisition of born-digital materials grows, institutions are seeking methods to facilitate easy ingest into their repositories and provide access to disk images and files derived or extracted from disk images. In this session, we describe our development of a Fedora 4.0 Content model for disk images, including acceptable image file formats and the rationale behind those choices.  We will also discuss efforts to integrate the disk image content model into the BitCurator Access environment. Unlike generalized, format-agnostic content models which might treat the disk image as a generic bitstream, a content model designed for disk images enables expression of relationships among associated content in the collection such as files extracted from images and other born-digital and digitized material associated with the same creator.  It also enables capture of file-system attributes such as file paths, timestamps, whether files are allocated/deleted, etc.  Further, a disk image content model suggests further steps repositories can take in order to transform and re-use associated metadata generated during the creation and forensic analysis of the disk image.&lt;br /&gt;
&lt;br /&gt;
== Data acquisition and publishing tools in R ==&lt;br /&gt;
&lt;br /&gt;
* Scott Chamberlain,  scott@ropensci.org, rOpenSci/UC Berkeley - first-time presenter&lt;br /&gt;
&lt;br /&gt;
R is an open source programming environment that is widely used among researchers in many fields. R is powerful because it's free, increasingly robust, and facilitates reproducible research, an increasingly sought after goal in academia. Although tools for data manipulation/visualization/analysis are well developed in R, data acquisition and publishing tools are not. rOpenSci is a collaborative effort to create the tools necessary to complete the reproducible research workflow. This presentation discusses the need for these tools, including examples, including interacting with the repositories Mendeley, Dryad, DataONE, and Figshare. In addition, we are building tools for searching scholarly metadata and acuiring full text of open access articles in a standarized way across metadata providers (e.g., Crossref, DataCite, DPLA) and publishers (e.g., PLOS, PeerJ, BMC, Pubmed). Last, we are building out tools for data reading and writing in Ecologial Metadata Language (EML).&lt;br /&gt;
&lt;br /&gt;
== SPLUNK: Log File Analysis ==&lt;br /&gt;
&lt;br /&gt;
* Jim LeFager, jlefager@depaul.edu, DePaul University Library&lt;br /&gt;
DePaul University Library recently took over monitoring and maintaining of the library EZproxy servers this past year and using Splunk, a machine data analysis tool, we are able to gather information and statistics on our electronic resource usage in addition to monitoring the servers. Splunk is a tool that can collect, analyze, and visualize log files and other machine data in real time and this has allowed for gathering realtime usage statistics for our electronic resources allowing us to filter by multiple facets including IP Range, Group Membership (student, faculty), so that we can see who is accessing our resources and from where. Splunk allows our library to query our data and create rich custom dashboards as well as create alerts that can be triggered when certain conditions are met, such as error codes, which can send an email alert to a group of users. We will be leveraging Splunk to monitor all library web applications going forward. This talk will review setting up Splunk and best practices in using the available features and customizations available including creating queries, alerts, and custom dashboards.  &lt;br /&gt;
&lt;br /&gt;
== Your code does not exist in a vacuum ==&lt;br /&gt;
* Becky Yoose, yoosebec at grinnell dot edu, Grinnell College (Done a lightning talk, MC duties, but have not presented a prepared talk)&lt;br /&gt;
&lt;br /&gt;
“If you have something to say, then say it in code…” - Sebastian Hammer, code4lib 2009&lt;br /&gt;
&lt;br /&gt;
In its 10 year run, code4lib has covered the spectrum of libtech development, from search to repositories to interfaces. However, during this time there has been little discussion about this one little fact about development - code does not exist in a vacuum. &lt;br /&gt;
&lt;br /&gt;
Like the comment above, code has something to say. A person’s or organization’s culture and beliefs influences code in all steps of the development cycle. What development method you use, tools, programming languages, licenses - everything is interconnected with and influenced by the philosophies, economics, social structures, and cultural beliefs of the developer and their organization/community.&lt;br /&gt;
&lt;br /&gt;
This talk will discuss these interconnections and influences when one develops code for libraries, focusing on several development practices (such as “Fail Fast, Fail Often” and Agile)   and licensing choices (such as open source) that libtech has either tried to model or incorporate into mainstream libtech practices. It’ll only scratch the surface of the many influences present in libtech development, but it will give folks a starting point to further investigate these connections at their own organizations and as a community as a whole.&lt;br /&gt;
&lt;br /&gt;
tl;dr - this will be a messy theoretical talk about technology and libraries. No shiny code slides, no live demos. You might come out of this talk feeling uncomfortable. Your code does not exist in a vacuum. Then again, you don’t exist in a vacuum either.&lt;br /&gt;
&lt;br /&gt;
== The Metadata Hopper: Mapping and Merging Metadata Standards for Simple, User-Friendly Access ==&lt;br /&gt;
&lt;br /&gt;
* Tracy Seneca, tjseneca@uic.edu, University of Illinois at Chicago&lt;br /&gt;
* Esther Verreau: verreau1@uic.edu, University of Illinois at Chicago&lt;br /&gt;
&lt;br /&gt;
The Chicago Collections Consortium: 15 institutions and growing!  8 distinct EAD standards! At least 3 permutations of MARC, and we lost count of the varieties of custom CONTENTdm image collections.  Not to mention the 14,730 unique subject terms, nearly all of which lead our poor end-users to exactly one organization's content. &lt;br /&gt;
&lt;br /&gt;
All large content aggregation projects have faced this challenge, and there are a few emerging tools to help us wrangle disparate metadata into new contexts.  The Metadata Hopper is one such tool. The Metadata Hopper enables archivists to map their local metadata standards to standardized deposit records, and tags those materials using a shared vocabulary, integrating them into a user-friendly portal without disrupting local practices. In last year's Code4Lib lightning talk we described the challenges that the Chicago Collections Consortium faces in creating shared, in-depth access to archival and digital collections about Chicago history and culture across CCC member organizations. This year, thanks to the Andrew W. Mellon Foundation, we have a working Django application to demonstrate.  In this talk we'll discuss the design that enables multiple layers of flexibility, from the ability to accept a variety of metadata standards to designing for an open source audience.&lt;br /&gt;
&lt;br /&gt;
http://chicagocollectionsconsortium.org&lt;br /&gt;
&lt;br /&gt;
== Programmers are not projects: lessons learned from managing humans ==&lt;br /&gt;
&lt;br /&gt;
* Erin White, erwhite@vcu.edu, Virginia Commonwealth University - first-time presenter&lt;br /&gt;
&lt;br /&gt;
Managing projects is one thing, but managing people is another. Whether we’re hired as managers or grow “organically” into management roles, sometimes technical people end up leading technical teams (gasp!). I’ll talk about lessons I’ve learned about hiring, retaining, and working long-term and day-to-day with highly tech-competent humans. I’ll also talk about navigating the politics of libraryland, juggling different types of projects, and working with constrained budgets to make good things and keep talented people engaged.&lt;br /&gt;
&lt;br /&gt;
== Practical Strategies for Picking Low-Hanging Fruits to Improve Your Library's Web Usability and UX ==&lt;br /&gt;
&lt;br /&gt;
* Bohyun Kim, bkim@hshsl.umaryland.edu, University of Maryland, Baltimore&lt;br /&gt;
&lt;br /&gt;
Have you ever tried to fix an obvious (to you at least!) problem in Web usability or UX (user experience) only to face strong resistance from the library staff? Are you a strong advocate for making library resources, systems, services, and space as usable as possible, but do you often find yourself struggling to get the point across and/or obtain the crucial buy-in from colleagues and administrators? &lt;br /&gt;
&lt;br /&gt;
There is no shortage of Web usability and UX guidelines. But applying them to a library and implementing desired changes often involve a long and slow process. To tackle this issue, this talk will focus on how to utilize the 'expert review' process (aka 'heuristic evaluation') as a preliminary or even preparatory step before embarking on more time-and-labor-intensive usability testing and user research. Several examples from  simple fixes to more nuanced usability and UX issues in libraries will be discussed to your heart's content. The goal of this talk is to provide practical strategies for picking as many low-hanging fruits as possible to make a real (albeit small) difference to your library's Web usability and UX effectively and efficiently.&lt;br /&gt;
&lt;br /&gt;
== A Semantic Makeover for CMS Data ==&lt;br /&gt;
&lt;br /&gt;
* Bill Levay, wjlevay@gmail.com, Linked Jazz Project&lt;br /&gt;
&lt;br /&gt;
How can we take semi-structured but messy metadata from a repository like CONTENTdm and transform it into rich linked data? Working with metadata from Tulane’s Hogan Jazz Archive Photography Collection, the Linked Jazz Project used Open Refine and Python scripts to tease out proper names, match them with name authority URIs, and specify FOAF relationships between musicians who appear together in photographs. Additional RDF triples were created for any dates associated with the photos, and for those images with place information we employed GeoNames URIs. Historical images and data that were siloed can now interact with other datasets, like Linked Jazz’s rich set of names and personal relationships, and can be visualized [link to come] or otherwise presented on the web in any number of ways. I have not previously presented at a Code4Lib conference.&lt;br /&gt;
&lt;br /&gt;
== Taking User Experience (UX) to new heights ==&lt;br /&gt;
 &lt;br /&gt;
* Kayne Richens, kayne.richens@deakin.edu.au, Deakin University&lt;br /&gt;
&lt;br /&gt;
User Experience, or &amp;quot;UX&amp;quot;, is for more than just websites. At Deakin University Library we're exploring ways to improve the user experience inside our campus library spaces, by putting new technologies front and centre in the overall experience for our students. How are we doing this? We’re collaborating with the University's IT department and exploring the following Library-changing opportunities:&lt;br /&gt;
&lt;br /&gt;
- Augmented Reality for Way-finding: We’re tackling that infamous thing that all Libraries can't get right – way-finding. We're enhancing library tour information and way-finding experiences by introducing augmented reality solutions.&lt;br /&gt;
 &lt;br /&gt;
- Heat mapping the library with wi-fi: We’re using our existing wi-fi infrastructure to present &amp;quot;heat maps&amp;quot; of library space utilisation, allowing our users to easily locate the space that best suits their needs, whether it be busy spaces to collaborate, or quiet spaces to study. And by overlaying computer usage and group study room bookings, users can quickly locate the space they need.&lt;br /&gt;
 &lt;br /&gt;
- Video chat library service: We’re piloting video-conferencing facilities in our group study rooms and spaces, connecting users and librarians and other professionals.&lt;br /&gt;
         &lt;br /&gt;
This talk will look at how these different technologies will be brought together to provide improved user experiences, as well some of the evidence and reasons that helped us to identify our needs, so you can too.&lt;br /&gt;
&lt;br /&gt;
==How to Hack it as a Working Parent: or, Should Your Face be Bathed in the Blue Glow of a Phone at 2 AM?==&lt;br /&gt;
&lt;br /&gt;
*Margaret Heller, Loyola University Chicago, mheller1@luc.edu&lt;br /&gt;
*Christina Salazar, California State University Channel Islands, christina.salazar@csuci.edu&lt;br /&gt;
*May Yan, Ryerson University, may.yan@ryerson.ca&lt;br /&gt;
&lt;br /&gt;
Modern technology has made it easier than ever for parents employed in technical environments to keep up with work at all hours and in all locations. This makes it possible to work a flexible schedule, but also may lead to problems with work/life balance and furthering unreasonable expectations about working hours. Add to that shifting gender roles and limited paid parental leave in the United States and you have potential for burnout and a certainty for anxiety. It raises the additioal question of whether the “always connected” mindset puts up a barrier to some populations who otherwise might be better represented in open source and library technology communities. &lt;br /&gt;
&lt;br /&gt;
This presentation will address tools that are useful for working parents in technical library positions, and share some lessons learned about using these tools while maintaining a reasonable work/life balance. We will consider a question that Karen Coyle raised back in 1996: &lt;br /&gt;
“What if the thousands of hours of graveyard shift amateur hacking wasn't really the best way to get the job done? That would be unthinkable.” &lt;br /&gt;
&lt;br /&gt;
For those who are able to take an extended parental leave, we will present strategies for minimizing the impact to your career and your employer. For those (particularly in the United States) who are only able to take a short leave will require different strategies. Despite different levels of preparation, all are useful exercises in succession planning and making a stronger workplace and future ability to work a flexible schedule through reviewing workloads, cross-training personnel, hiring contract replacements, and creative divisions of labor. Such preparation makes work better for everyone, kids or no kids or caretakers of any kind.&lt;br /&gt;
&lt;br /&gt;
==Making your digital objects embeddable around the web==&lt;br /&gt;
 &lt;br /&gt;
* Jessie Keck, jkeck@stanford.edu, Stanford University Libraries&lt;br /&gt;
* Jack Reed, pjreed@stanford.edu, Stanford University Libraries&lt;br /&gt;
&lt;br /&gt;
With more and more content from our digital repositories making their way into our discovery environments we quickly realize that we’re repeatedly re-inventing the wheel when it comes to creating “Viewers” for these digital objects.  With various different types of viewers necessary (books, images, audio, video, geospatial data, etc) the burden of getting these viewers into various environments (topic guides, blogs, catalogs, etc) becomes exponential.&lt;br /&gt;
&lt;br /&gt;
In this talk we’ll discuss how Stanford University Libraries implemented an oEmbed service to create an extensible viewer framework for all of its digital content. Using this service we’ve been able to easily integrate viewers into various discovery applications as well as make it easy for end users who discover our objects to easily embed customized versions into their own websites and blogs.&lt;br /&gt;
&lt;br /&gt;
==So you want to make your geospatial data discoverable==&lt;br /&gt;
 &lt;br /&gt;
* Jack Reed, pjreed@stanford.edu, Stanford University Libraries&lt;br /&gt;
&lt;br /&gt;
Finding data for research or coursework can be one of the most time intensive tasks for a scholar or student. We introduce GeoBlacklight, an open source, multi-institutional software project focused on solving these common challenges at institutions across the world. GeoBlacklight prioritizes user experience, integrates with many GIS tools, and streamlines the use and organization of geospatial data. This talk will provide an introduction to the software, demonstrate current functionality, and provide a road map for future work.&lt;br /&gt;
&lt;br /&gt;
== Clueless-Driven Development: How I learned to migrate to Fedora 4 ==&lt;br /&gt;
&lt;br /&gt;
* Adam Wead, awead@psu.edu, Penn State University&lt;br /&gt;
&lt;br /&gt;
Recently I was tasked with migrating the content from our Fedora3 repository to the new Fedora4 repository architecture.&lt;br /&gt;
Despite a wealth of community support, I had no idea how to approach, or even begin to solve this problem. I knew I&lt;br /&gt;
wanted to follow best practices and use test-driven  development to build my solution, but had no idea where to start.&lt;br /&gt;
Despite this initial setback, I was able to start writing tests with only a  vague understanding of the problem. As my&lt;br /&gt;
tests exposed where my understanding of the problem was flawed, my code evolved, and within a week I had arrived  at a&lt;br /&gt;
working solution that exhibited all the hallmarks of good testing and software design.&lt;br /&gt;
&lt;br /&gt;
This talk recounts the process I went through from starting with practically nothing, to arriving at a working solution.&lt;br /&gt;
You can follow the rules of  test-driven development, but you can write tests in an expressive way to describe the&lt;br /&gt;
problem instead of just describing what the code should do. It was also essential to begin testing from an integration&lt;br /&gt;
viewpoint as opposed to a unit one, because at the outset the units were unknown and were later realized through further&lt;br /&gt;
development. For the presentation, I will be demonstrating using RSpec and Ruby. All the code examples will be related&lt;br /&gt;
to the Hydra software stack; however, I hope to show  that the processes at work will be applicable in any context.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Designing and Leading a Kick A** Tech Team ==&lt;br /&gt;
 &lt;br /&gt;
* Sibyl Schaefer,  sschaefer@rockarch.org, Rockefeller Archive Center&lt;br /&gt;
&lt;br /&gt;
New managers are often promoted without receiving management training, yet management is not something you just figure out. The experience of being expected to know how to manage, yet not being trained to do so often results in new managers feeling isolated and unsure how to move from making to managing. In this talk I’ll focus on my own managerial experience of designing and leading an archival tech team in a small independent archives. Topics covered will include hiring, delegating, creating a team culture, and leading people whose specialized knowledge exceeds your own. The talk take-aways should be applicable to managers and employees at large and small institutions alike.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==American (Archives) Horror Story: LTO Failure and Data Loss ==&lt;br /&gt;
 &lt;br /&gt;
* Rebecca Fraimow, rebecca_fraimow@wgbh.org, NDSR Resident, WGBH&lt;br /&gt;
* Casey Davis, casey_davis@wgbh.org, Project Manager, American Archive of Public Broadcasting, WGBH&lt;br /&gt;
&lt;br /&gt;
Here’s a story to send shivers down archival spines: when transferring video files off LTO for the American Archive project, WGBH got an initial failure rate of 57%.   After repeat tries, the rates improved; still, an unnervingly large percentage of files were never able to be transferred successfully.   Even more unnerving, going public with our horror story got a big response from other archives using LTO -- it seems like many institutions are having similarly scary results.   What are the real risks with LTO tape?  Are there steps that archives should be taking to better circumvent those risks?  This presentation will share information about LTO storage failures across archives world and discuss the process of investigating the problem at WGBH by testing different methods of data retrieval from LTO (direct and networked downloads, individual file retrieval and bulk data dump, use of LTO 4 and LTO 6 decks) and using checksum comparisons and file analysis and characterization tools such as ffprobe, mediainfo and exiftool to analyze failed files.  We'll also present whatever results we’ve managed to turn up by the time of Code4Lib!&lt;br /&gt;
&lt;br /&gt;
== PBCore in Action: Three Words, Not Two! ==&lt;br /&gt;
 &lt;br /&gt;
* Casey E. Davis,  casey_davis@wgbh.org, Project Manager, American Archive of Public Broadcasting, WGBH&lt;br /&gt;
* Andrew (Drew) Myers, andrew_myers@wgbh.org, Supervising Developer, WGBH&lt;br /&gt;
&lt;br /&gt;
In 2001, public media representatives developed the PBCore XML schema to establish a common language for managing metadata about their analog and digital audio and video. Since then, PBCore has been adopted by a number of organizations and archivists in the moving image archival community. The schema has also undergone a few revisions, but on more than one occasion it was left orphaned and with little to no support.&lt;br /&gt;
 &lt;br /&gt;
Times have changed. You may have heard the news that PBCore is back in action as part of the American Archive of Public Broadcasting initiative and via the Association of Moving Image Archivists (AMIA) PBCore Advisory Subcommittee. A group of archivists, public media stakeholders, and engaged users have come together to provide necessary support for the standard and to see to its further development. &lt;br /&gt;
 &lt;br /&gt;
At this session, we'll discuss the scope and uses of PBCore in digital preservation and access, report on the progress and goals of the PBCore Advisory Subcommittee, and share how the group (by the time of the conference) will have transformed the XML schema into an RDF ontology, bringing PBCore into the second decade of the 21st century. #PBHardcore&lt;br /&gt;
&lt;br /&gt;
==Collaborating to Avert the Digital Graveyard==&lt;br /&gt;
&lt;br /&gt;
* Harish Nayak, hnayak@library.rochester.edu, University of Rochester Libraries &lt;br /&gt;
* Sean Morris, smorris@library.rochester.edu, University of Rochester Libraries &lt;br /&gt;
&lt;br /&gt;
In 1995, the Robbins Library at the University of Rochester created a digital collection of Arthurian texts, images, and bibliographies. Together with medieval scholars, we recently completed the redesign and development of an interface for this collection. Using FRBR concepts, we re-conceptualized organization and editing workflow from the ground up in a mobile-first Drupal-based project. &lt;br /&gt;
&lt;br /&gt;
In this talk we will describe the project as well as how we utilized the techniques of work practice study and user centered design to maintain engagement with reluctant stakeholders, nontechnical scholars, and VERY meticulous graduate students.  Neither of us have previously presented at a Code4Lib conference.&lt;br /&gt;
&lt;br /&gt;
==Docker? VMs? EC2? Yes! With Packer.io==&lt;br /&gt;
&lt;br /&gt;
* Kevin S. Clarke, ksclarke@gmail.com, Digital Library Programmer, UCLA&lt;br /&gt;
&lt;br /&gt;
There are a lot of exciting ways to deploy a software stack nowadays. Many of our library systems are fully virtualized. Docker is a compelling alternative, and there are also cloud options like Amazon's EC2. This talk will introduce Packer.io, a tool for creating identical machine images for multiple platforms (e.g., Docker, VMWare, VirtualBox, EC2, GCE, OpenStack, et al.) all from a single source configuration.  It works well with Ansible, Chef, Puppet, Salt, and plain old Bash scripts. And, it's designed to be scriptable so that builds can be automated. This presentation will show how easy it is to use Packer.io to bring up a set of related services like Fedora 4, Grinder (for stress testing), and Graphite (for charting metrics). As an added value, all the buzzwords in this proposal will be defined and explained!&lt;br /&gt;
&lt;br /&gt;
== Technology on your Wrist: Cross-platform Smartwatch Development for Libraries ==&lt;br /&gt;
 &lt;br /&gt;
* [[User:sanderson|Steven Carl Anderson]], sanderson@bpl.org, Boston Public Library (no previously accepted prepared talks but have done lightning talks in the past)&lt;br /&gt;
&lt;br /&gt;
I'll be the first to admit: smartwatches are unlikely to completely revolutionize how a library provides online services. But I believe they still represent an opportunity to further enhance existing library services and resources in a unique way.&lt;br /&gt;
&lt;br /&gt;
At the Boston Public Library (BPL), we're in the initial phases of designing a modest smartwatch app to provide notifications for circulation availability and checked-out-material due-date alerts by the end of current year. We're starting small, but we plan to evolve the concept over time as we see what (if any) traction such an application gets with potential users. For example, we plan to explore the possibility of adding &amp;quot;nearest branch to my current location&amp;quot; functionality to this app.&lt;br /&gt;
&lt;br /&gt;
Despite the &amp;quot;development phase&amp;quot; of this application as of this writing, this talk is not being given by a novice. As a technology enthusiast, I've released [http://www.phdgaming.com/smartwatch_projects/ five smartwatch applications] and have had two of those be finalists in a [http://www.phdgaming.com/samsung_challenge/ Samsung sponsored development challenge]. This experience not only will allow for the BPL to avoid many beginner mistakes in its smartwatch app development but also gives a much more complete understanding of the smartwatch development ecosystem.&lt;br /&gt;
&lt;br /&gt;
This talk will explore the following questions:&lt;br /&gt;
&lt;br /&gt;
* What kinds of online library services could potentially be transformed or translated into the smartwatch/wearable domain? What kinds of services are better left alone? These questions are currently being explored and I'll talk about our plans and experiences. Included will be any statistical information from our application launch along with statistics from my personal development.&lt;br /&gt;
&lt;br /&gt;
* How to support all the different operating systems these devices run without painful modifications to your codebase. (There's Tizen that is used by Samsung's Gear 2 and Gear S, Android Wear that is used by most other non-Apple manufacturers, then there is Apple's upcoming smartwatch itself, etc.)&lt;br /&gt;
&lt;br /&gt;
* How to support different screen resolutions on such a small device. From round to rectangular to perfectly square, smartwatches come in all different shapes these days.&lt;br /&gt;
&lt;br /&gt;
* What are the app stores like on these platforms? As I support multiple applications through different distribution networks, a guide to navigating how to distribute one's app is included and I'll reveal how these systems work “behind the curtain.”&lt;br /&gt;
&lt;br /&gt;
* What are common issues and pitfalls to avoid when doing development? Tips on broken APIs and how to cope or optimizing your code will be included.&lt;br /&gt;
&lt;br /&gt;
==Seeing the Forest From the Trees: The Art of Creating Workflows for Digital Projects ==&lt;br /&gt;
 &lt;br /&gt;
* Jen LaBarbera, j.labarbera@neu.edu, NDSR Resident, Northeastern University&lt;br /&gt;
* Joey Heinen, joseph_heinen@harvard.edu, NDSR Resident, Harvard University&lt;br /&gt;
* Rebecca Fraimow, rebecca_fraimow@wgbh.org, NDSR Resident, WGBH&lt;br /&gt;
* Tricia Patterson, triciap@mit.edu, NDSR Resident, MIT&lt;br /&gt;
&lt;br /&gt;
We have to &amp;quot;turn projects into programs&amp;quot; in order to create a solid and sustainable digital preservation initiative...but what the heck does that even mean? What does that look like?&lt;br /&gt;
&lt;br /&gt;
In this talk, members of the inaugural Boston cohort of the National Digital Stewardship Residency will discuss one piece of our digital preservation test kitchen: our stabs at creating digital workflows that will (hopefully) help our institutions turn digital preservation projects into programs. Specifically, we will talk about how difficult it is to create a general and overarching workflow for digital preservation tasks (e.g. ingest into repositories, format migrations, etc.) that incorporates various technical tools while also taking into account the myriad and unending list of possible exceptions or special scenarios. Turning these complicated, specific processes into a simplified and generalized workflow is an art. We haven't necessarily perfected that art yet, but in this talk, we'll share what has worked for us -- and what hasn't. We’ll also touch on the importance of documentation, and achieving that delicate balance of adequately thorough documentation that doesn’t pose the risk of information avalanche. These processes often create more questions than answers, but we'll share the answers that we (and our mentors) have found along the way!&lt;br /&gt;
&lt;br /&gt;
== Annotations as Linked Data with Fedora4 and Triannon (a Real Use Case for RDF!) ==&lt;br /&gt;
&lt;br /&gt;
* Rob Sanderson, azaroth@stanford.edu,  Stanford University Libraries&lt;br /&gt;
* Naomi Dushay, ndushay@stanford.edu,  Stanford University Libraries&lt;br /&gt;
&lt;br /&gt;
Annotations on content resources allow users to contribute knowledge within the digital repository space.  W3C Open Annotation provides a comprehensive model for web annotation on all types of content, using Linked Data as a fundamental framework.  Annotation clients generate instances of this model, typically using a JSON serialization, but need to store that data somewhere using a standard interaction pattern so that best of breed clients, servers, and data can be mixed and matched.&lt;br /&gt;
&lt;br /&gt;
Stanford is using Fedora4 for managing Open Annotations, via a middleware component called Triannon.  Triannon receives the JSON data from the annotation client, and uses the Linked Data Platform API implementation in Fedora4 to create, retrieve, update and delete the constituent resources.  Triannon could be easily modified to use other LDP implementations, or could be modified to work with linked data other than annotations.&lt;br /&gt;
&lt;br /&gt;
== Helping Google (and scholars, researchers, educators, &amp;amp; the public) find archival audio ==&lt;br /&gt;
&lt;br /&gt;
* Anne Wootton, anne@popuparchive.org, Pop Up Archive (www.popuparchive.org)&lt;br /&gt;
&lt;br /&gt;
Culturally significant digital audio collections are hard to discover on the web. There are major barriers keeping this valuable media from scholars, researchers, and the general public:&lt;br /&gt;
&lt;br /&gt;
Audio is opaque: you can’t picture sound, or skim the words in a recording. &lt;br /&gt;
Audio is hard to share: there’s no text to interact with. &lt;br /&gt;
Audio is not text: but since text is the medium of the web, there’s no path for audiences to find content-rich audio.&lt;br /&gt;
Audio metadata is inconsistent and incomplete.&lt;br /&gt;
&lt;br /&gt;
At Pop Up Archive, we're helping solve this problem making the spoken word searchable. We began as a UC-Berkeley School of Information Master's thesis to provide better access to recorded sound for audio producers, journalists, and historians. Today, Pop Up Archive processes thousands of hours of sound from all over the web to create automatic, timestamped transcripts and keywords, working with media companies and institutions like NPR, KQED, HuffPost Live, Princeton, and Stanford. We're building collections of sound from journalists, media organizations, and oral history archives from around the world. Pop Up Archive is supported by the John S. and James L. Knight Foundation, the National Endowment for the Humanities, and 500 Startups.&lt;br /&gt;
&lt;br /&gt;
== Digital Content Integrated with ILS Data for User Discovery:  Lessons Learned ==&lt;br /&gt;
&lt;br /&gt;
* Naomi Dushay, ndushay@stanford.edu,  Stanford University Libraries&lt;br /&gt;
* Laney McGlohon, laneymcg@stanford.edu,  Stanford University Libraries&lt;br /&gt;
&lt;br /&gt;
So you want to expose your digital content in your discovery interface, integrated with the data from your ILS?  How do you make the best information user searchable?  How do you present complete, up to date search results with a minimum of duplicate entries?&lt;br /&gt;
&lt;br /&gt;
At Stanford, we have these cases and more:&lt;br /&gt;
* digital content with no metadata in ILS&lt;br /&gt;
* digital content for metadata in ILS&lt;br /&gt;
* digital content with its own metadata derived from ILS metadata.&lt;br /&gt;
&lt;br /&gt;
We will describe our efforts to accommodate multiple updatable metadata sources for materials in the ILS and our Digital Object Repository while presenting users with reduced duplication in SearchWorks.  Included will be some failures, some successes, and an honest assessment of where we are now.&lt;br /&gt;
&lt;br /&gt;
== Show All the Things: Kanban for Libraries == &lt;br /&gt;
&lt;br /&gt;
* Mike Hagedon, mhagedon@email.arizona.edu, University of Arizona Libraries (first-time presenter)&lt;br /&gt;
&lt;br /&gt;
The web developers at the University of Arizona Libraries had a problem: we were working on a major website rebuild project with no clear way to prioritize it against our other work. We knew we wanted to follow Agile principles and initially chose Scrum to organize and communicate about our work. But we found that certain core pieces of Scrum did not work for our team. Then we discovered Kanban, an Agile meta-process for organizing work (team or individual) that treats the work more as a flow than as a series of fixed time boxes. I’ll be talking about our journey toward finding a process that works for our team and how we’ve applied the principles of Kanban to better get our work done. Specifically, we talk about principles like how to visualize all your work, how to limit how much you’re doing (to get more done!), and how to optimize the flow of your work.&lt;/div&gt;</summary>
		<author><name>Michaelhagedon</name></author>	</entry>

	<entry>
		<id>https://wiki.code4lib.org/index.php?title=2015_Prepared_Talk_Proposals&amp;diff=41992</id>
		<title>2015 Prepared Talk Proposals</title>
		<link rel="alternate" type="text/html" href="https://wiki.code4lib.org/index.php?title=2015_Prepared_Talk_Proposals&amp;diff=41992"/>
				<updated>2014-11-07T19:58:26Z</updated>
		
		<summary type="html">&lt;p&gt;Michaelhagedon: Add talk Show All the Things: Kanban for Libraries&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Code4lib 2015 is a loosely-structured conference that provides people working at the intersection of libraries/archives/museums/cultural heritage and technology with a chance to share ideas, be inspired, and forge collaborations. For more information about the Code4lib community, please visit http://code4lib.org/about/. &lt;br /&gt;
The conference will be held at the Portland Hilton &amp;amp; Executive Tower in Portland, Oregon, from February 9-12, 2015.&lt;br /&gt;
&lt;br /&gt;
'''Proposals for Prepared Talks:'''&lt;br /&gt;
&lt;br /&gt;
We encourage everyone to propose a talk.&lt;br /&gt;
 &lt;br /&gt;
Prepared talks are 20 minutes (including setup and questions), and should focus on one or more of the following areas:&lt;br /&gt;
* Projects you've worked on which incorporate innovative implementation of existing technologies and/or development of new software&lt;br /&gt;
* Tools and technologies – How to get the most out of existing tools, standards and protocols (and ideas on how to make them better)&lt;br /&gt;
* Technical issues - Big issues in library technology that should be addressed or better understood&lt;br /&gt;
* Relevant non-technical issues – Concerns of interest to the Code4Lib community which are not strictly technical in nature, e.g. collaboration, diversity, organizational challenges, etc.&lt;br /&gt;
&lt;br /&gt;
Proposals can be submitted through Friday, November 7, 2014 at 5pm PST (GMT−8). Voting will start on November 11, 2014 and continue through November 25, 2014. The URL to submit votes will be announced on the Code4Lib website and mailing list and will require an active code4lib.org account to participate. The final list of presentations will be announced in early- to mid-December.&lt;br /&gt;
&lt;br /&gt;
'''Proposals for Prepared Talks:'''&lt;br /&gt;
&lt;br /&gt;
Log in to the Code4lib wiki and edit this wiki page using the prescribed format. If you are not already registered, follow the instructions to do so.&lt;br /&gt;
Provide a title and brief (500 words or fewer) description of your proposed talk.&lt;br /&gt;
If you so choose, you may also indicate when, if ever, you have presented at a prior Code4Lib conference. This information is completely optional, but it may assist voters in opening the conference to new presenters.&lt;br /&gt;
&lt;br /&gt;
Please follow the formatting guidelines:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Talk Title: ==&lt;br /&gt;
 &lt;br /&gt;
* Speaker's name,  email address, and (optional) affiliation&lt;br /&gt;
* Second speaker's name, email address, and affiliation, if second speaker&lt;br /&gt;
&lt;br /&gt;
Abstract of no more than 500 words.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''Talk Proposals'''&lt;br /&gt;
== Zines + Gamification = Awesomest Metadata Literacy Outreach Event Ever! ==&lt;br /&gt;
 &lt;br /&gt;
* [http://www.JenniferHecker.info Jennifer Hecker], jenniferraehecker@gmail.com, [http://www.lib.utexas.edu/subject/zines University of Texas Libraries] &amp;amp; [http://www.AustinFanzineProject.org Austin Fanzine Project]&lt;br /&gt;
* [http://anomalily.net/ Lillian Karabaic], librarian@iprc.org, [http://www.iprc.org/ Independent Publishing Resource Center] (Portland)&lt;br /&gt;
 &lt;br /&gt;
In academic libraries, and elsewhere, the popularity of zine (a magazine produced for love, not profit) collections is on the rise. At the same time, metadata literacy is becoming an increasingly important skill, helping people navigate and understand digital environments and interactions. We have found a way to teach metadata literacy to the general public that isn’t super-boring – in fact, we’ve made it downright fun!&lt;br /&gt;
&lt;br /&gt;
First, volunteer zine librarian Lillian Karabaic of Portland’s Independent Publishing Resource Center facilitated the creation of a gamified cataloging interface for the IPRC’s annual Raiders of the Lost Archives backlog-busting 24-hour volunteer cataloging event.&lt;br /&gt;
&lt;br /&gt;
Then, archivist Jennifer Hecker facilitated the adaptation of the IPRC’s game for use in a similar, but also very different context – promoting UT Libraries newly-acquired zine collections. The main goal of the academic-library-based event was increasing excitement around the collections, but with the side goal of building metadata literacy, and introducing an understanding of library cataloging issues.&lt;br /&gt;
&lt;br /&gt;
The Texas modification also conforms to the xZINECOREx metadata schema developed by the national [http://zinelibraries.info/ Zine Librarians Interest Group], and triggered interesting conversations with the Libraries’s cataloging department about evolving metadata standards and how to incorporate the products of crowd-sourcing projects into existing workflows.&lt;br /&gt;
&lt;br /&gt;
Both games will be demoed.&lt;br /&gt;
&lt;br /&gt;
We have never presented at Code4lib.&lt;br /&gt;
&lt;br /&gt;
== Do the Semantic FRBRoo ==&lt;br /&gt;
* Rosie Le Faive, rlefaive@upei.ca, University of Prince Edward Island&lt;br /&gt;
&lt;br /&gt;
[http://www.islandora.ca Islandora] is great for creating repositories of any data type, but how can you model meaningful relationships between digital objects and use them to tell a story?&lt;br /&gt;
&lt;br /&gt;
At UPEI, I’m assembling an ethnography of Prince Edward Island’s traditional fiddle music that includes musical clips, video clips, oral histories, musical notation, images, and ethnographic commentaries. In order to present an exhibition-style site, I’m tying these digital objects together via the people, places, events, tunes and topics that they share or describe. &lt;br /&gt;
&lt;br /&gt;
To describe the relationships, I’m extending Islandora to use [http://www.cidoc-crm.org/frbr_inro.html FRBRoo], a vocabulary that combines the FRBR model with CIDOC-CRM, the the object-oriented museum documentation ontology. These modules being developed will allow other researchers to create a structured, navigable digital repository of diverse object types, that uses Islandora as an exhibition platform. &lt;br /&gt;
&lt;br /&gt;
== Our $50,000 Problem: Why Library School? ==&lt;br /&gt;
* Jennie Rose Halperin, jhalperin@mozilla.com, Mozilla Corporation&lt;br /&gt;
&lt;br /&gt;
57 library schools in the United States are churning out approximately 100 graduates per year, many with debt upwards of $50,000.  According to ONet, [http://www.inthelibrarywiththeleadpipe.org/2011/is-the-united-states-training-too-many-librarians-or-too-few-part-1/ 84% of library jobs in the US require an MLS.] The library profession is [http://dpeaflcio.org/programs-publications/issue-fact-sheets/library-workers-facts-figures/) 92% white and 82% female and entry-level librarians can expect to make $32,500 per year.]&lt;br /&gt;
&lt;br /&gt;
Contrasted with developers, who are almost [http://www.ncwit.org/blog/did-you-know-demographics-technical-women 90% male] and can expect to make [http://www.forbes.com/sites/jennagoudreau/2011/06/01/best-entry-level-jobs/ $70,000 in an entry-level position,] these numbers are dismal.&lt;br /&gt;
&lt;br /&gt;
According to a recent survey, the top skill that outgoing library students want to know is “programming” and yet many MLS programs still consider Microsoft Word an essential technology skill.&lt;br /&gt;
&lt;br /&gt;
What is going on here? Why do we accept this fate, where mostly female debt-burdened professionals continue to be thrown onto the work force without the education their expensive degrees promised?&lt;br /&gt;
&lt;br /&gt;
As a community we need to come together to stop this cycle. We need to provide better support and mentorship to diversify and keep the profession relevant and help librarianship move into the future it deserves.&lt;br /&gt;
&lt;br /&gt;
This talk will walk through the challenges of navigating a hostile employment environment as well as present models for better development and future state imagining.&lt;br /&gt;
&lt;br /&gt;
== No cataloging software? Need more than Dublin Core? No problem!: Experiences with CollectiveAccess ==&lt;br /&gt;
* [[User:SeanHendricks|Sean Q. Hendricks]], sqhendr@clemson.edu, Clemson University&lt;br /&gt;
* Rachel Wittmann, rwittma@clemson.edu, Clemson University&lt;br /&gt;
&lt;br /&gt;
Clemson University Libraries has implemented the open-source software CollectiveAccess for customized digital collection needs. CollectiveAccess is an open-source project with the goal of providing a flexible way to manage and publish museum and archival collections. There are several applications associated with the projects; most used are: Providence (for cataloging and entering metadata) and Pawtucket (for displaying objects in a collection for the public). It has many profiles readily available for installing with existing library standards, such as Dublin Core, and there is a robust syntax for creating your own profiles to fit custom tailored metadata schemas. Plus, the user interface allows you to modify the metadata profile quickly and easily.&lt;br /&gt;
&lt;br /&gt;
In this talk, we will discuss:&lt;br /&gt;
* Our experiences with installing Providence and creating an installation profile that satisfies the needs of many of the Clemson Libraries digital archiving processes. &lt;br /&gt;
* The stumbling blocks experienced in that process and how they were resolved.&lt;br /&gt;
* The available plugins sourcing widely used authorities, such as Library of Congress thesauri and GeoNames.org, and how they have been used by our projects. &lt;br /&gt;
* A brief overview of the export and import functions and also current workflow practices within Providence.&lt;br /&gt;
* Future plans &amp;amp; the role of CollectiveAccess at Clemson University Libraries&lt;br /&gt;
&lt;br /&gt;
== Getting ContentDM and Wordpress to Play Together ==&lt;br /&gt;
* [[User:SeanHendricks|Sean Q. Hendricks]], sqhendr@clemson.edu, Clemson University&lt;br /&gt;
&lt;br /&gt;
Clemson University Libraries has a very strong program for digitizing and archiving photographs, and the Digital Imaging team processes many hundreds of photographs every month. These images are managed using different methods, including ContentDM, a digital collection manager.&lt;br /&gt;
&lt;br /&gt;
ContentDM provides various methods for searching and displaying photographs, along with their metadata. However, recent initiatives have resulted in the need to leverage those collections into exhibits displayed on other library-related websites, such as our Special Collections unit. The Clemson Libraries has invested heavily in Wordpress as our content management system of choice, and it seemed most efficient not to have to export and import images into our Wordpress sites in order to provide exhibited images.&lt;br /&gt;
&lt;br /&gt;
Fortunately, ContentDM has provided an API to many of their functions, allowing the extraction of metadata and even rescaled images through URLs. This project has been developing a plugin for Wordpress that integrates with ContentDM through shortcodes that Wordpress editors can easily include in their content. These shortcodes allow editors to choose how many images, which images from which collections, thumbnail sizes, etc. to display in different gallery styles. Plans are for it to allow integration with different plugins such as Fancybox and Masonry.&lt;br /&gt;
&lt;br /&gt;
In this presentation, I will demonstrate the current state of the plugin and discuss future plans. &lt;br /&gt;
&lt;br /&gt;
==Refinery — An open source locally deployable web platform for the analysis of large document collections==&lt;br /&gt;
 &lt;br /&gt;
* [[User:DaeilKim|Daeil Kim]], The New York Times, daeil.kim@nytimes.com&lt;br /&gt;
&lt;br /&gt;
Refinery is an open source web platform for the analysis of large unstructured document collections. It extracts meaningful semantic themes within documents also known as &amp;quot;topics&amp;quot; which can be thought of as word clouds composed of terms that highly co-occur with one another. Once this semantic index is formed, one can extract relevant documents related to these topics and further refine their contents through a summarization process that allows users to search for phrases that are relevant to them within the corpus. The goal of Refinery is to make this whole process easier and to provide some of the latest scalable versions of these learning algorithms in an intuitive web-based interface. Refinery is also meant to be run locally, thus bypassing the need for securing document collections over the internet. The talk will go through some of the technologies involved and a demo of the app.&lt;br /&gt;
&lt;br /&gt;
For more info check out http://www.docrefinery.org.&lt;br /&gt;
&lt;br /&gt;
==Drupal 8 — Evolution &amp;amp; Revolution==&lt;br /&gt;
 &lt;br /&gt;
* [[User:Highermath|Cary Gordon]], The Cherry Hill Company, cgordon@chillco.com&lt;br /&gt;
&lt;br /&gt;
Drupal 8 is in beta and nearing release. Among its many features, it notably has become more developer friendly through its adoption of the Symfony PHP framework along with Symfony's outstanding set of libraries (like Guzzle) and tools (like Composer). And, in implementing the Twig theming system, it is can begin to escape PHPtemplate. These moves also make it easier to create headless systems that uses Angular.js and other systems for presentation, or even forgo presentation entirely.&lt;br /&gt;
&lt;br /&gt;
From the site-builder's perspective, Drupal 8 provides a much smother experience and makes it easier to build and implement site recipes.&lt;br /&gt;
&lt;br /&gt;
==Using GameSalad to Build a Gamified Information Literacy Mobile App for Higher Education==&lt;br /&gt;
 &lt;br /&gt;
* [[User:StanBogdanov|Stanislav 'Stan' Bogdanov]],  stan@stanrb.com, Adelphi University and [http://bogliollc.com Boglio LLC]&lt;br /&gt;
&lt;br /&gt;
GameSalad is a popular tool for developing mobile and desktop games with little actual programming. In this presentation, Stan Bogdanov breaks down the development process he followed while building [https://github.com/stanrb/mobiLit mobiLit], a mobile app with the goal of being the first open-source gamified information literacy app to be used as part of a college-level information literacy curriculum. He will go through the basics of using GameSalad to create an app that can be easily customized by non-programmers and the instructional principles used to teach the material in a mobile medium. Stan will also go through two qualitative design studies he did on the app and discuss their results and the lessons learned from building mobiLit. The session will conclude with an overview of the next steps for the [https://github.com/stanrb/mobiLit mobiLit project].&lt;br /&gt;
&lt;br /&gt;
==The Impossible Search: Pulling data from multiple unknown sources==&lt;br /&gt;
 &lt;br /&gt;
* Riley Childs, no official affiliation (currently a Senior in High School at Charlotte United Christian Academy), rchilds (AT) cucawarriors.com &lt;br /&gt;
&lt;br /&gt;
It's easy to search data you know the structure of, but what if you need to pull in data from sources that don't have a standard structure. The ability to search community events along with your standard catalog search results is an example, but often the only way to pull these events is through XML, JSON, (Insert structured format here), or even just raw html. But how do you get that structure? That simple question is what makes this impossible. The process to define and process this structure takes a lot of manual labor, especially if the data you are pulling is just HTML, and then every time you add data to the index you have to run all the data through a script to pull in data in a format Solr or an other index can use. This talk will focus on Solr, but the principles explained will apply to many other indexes.&lt;br /&gt;
&lt;br /&gt;
==What! You're Not Using Docker?==&lt;br /&gt;
 &lt;br /&gt;
* [[User:Highermath|Cary Gordon]], The Cherry Hill Company, cgordon@chillco.com&lt;br /&gt;
&lt;br /&gt;
Boring part: Docker[1] is a container system that provides benefits similar to virtualization with only a fraction of the overhead. Scintillating part: Docker can host between four to six times the number of service instances than systems such as Xen or VMWare on a given piece of hardware. But thats not all! Docker also makes it simple(r) to create transportable instances, so you can spin up development servers on your laptop.&lt;br /&gt;
&lt;br /&gt;
*[1]https://www.docker.com/&lt;br /&gt;
&lt;br /&gt;
== Video Accessibility, WebVTT, and Timed Text Track Tricks ==&lt;br /&gt;
&lt;br /&gt;
* Jason Ronallo, jronallo@gmail.com, NCSU Libraries&lt;br /&gt;
&lt;br /&gt;
Video on the Web presents new challenges and opportunities. How do you make your video more accessible to those with various disabilities and needs? I'll show you how. This presentation will focus on how to write and deliver captions, subtitles, audio descriptions, and timed metadata tracks for Web video using the WebVTT W3C standard. Encoding timed text tracks in this way opens up opportunities for new functionality on your websites beyond accessibility. The presentation will show some examples of the potential for using timed text tracks in creative ways. I'll cover all the HTML and JavaScript you will need to know as well as some of the CSS and other bits you could probably do without but are too fun to pass up.&lt;br /&gt;
&lt;br /&gt;
== Categorizing Records with Random Forests ==&lt;br /&gt;
 &lt;br /&gt;
* Geoffrey Boushey, geoffrey.boushey@ucsf.edu, UCSF Library&lt;br /&gt;
Academic libraries are increasingly responsible for providing ingest, search, discovery, and analysis for data sets.  Emerging techniques from data science and machine learning can provide librarians and developers with an opportunity to generate new insights and services from these document collections.  This presentation will provide a brief overview of common machine learning classification techniques, then dive into a more detailed example using a random forest to assign keywords to research data sets.  The talk will emphasize the insight that can be gained from machine learning rather than the inner workings of the algorithms.  The overall goal of this presentation is to provide librarians and developers with the context to recognize an opportunity to apply machine learning categorization techniques at their home campuses and organizations.  &lt;br /&gt;
&lt;br /&gt;
== Data Science in Libraries ==&lt;br /&gt;
 &lt;br /&gt;
* Devon Smith, smithde@oclc.org, OCLC&lt;br /&gt;
&lt;br /&gt;
Data Science is increasing in buzz and hype. I'll go over what it is, what it isn't, and how it fits in libraries.&lt;br /&gt;
&lt;br /&gt;
== PDF metadata extraction for academic literature == &lt;br /&gt;
&lt;br /&gt;
* Kevin Savage, kevin.savage at mendeley.com, Mendeley&lt;br /&gt;
* Joyce Stack, joyce.stack at mendeley.com, Mendeley&lt;br /&gt;
&lt;br /&gt;
Mendeley recently added a, &amp;quot;document from file,&amp;quot; endpoint to its API which attempts to extract metadata such as title and authors directly from PDF files. This talk will describe at a high level the machine learning methods we used including how we measured and tuned our model. We will then delve more deeply into our stack, the tools we used, some of the things that didn't work and why PDFs are the worst thing ever to compute over.&lt;br /&gt;
&lt;br /&gt;
== Giving Users What They Want: Record Grouping in VuFind ==&lt;br /&gt;
 &lt;br /&gt;
* Mark Noble,  mark@marmot.org, [//www.marmot.org Marmot Library Network]&lt;br /&gt;
&lt;br /&gt;
In 2013, Marmot did extensive usability studies with patrons to determine what was difficult in the catalog.  Many patrons had problems sifting through all of the various formats and editions of a title.  In 2014 we developed a method for [//mercury.marmot.org/Union/Search?lookfor=divergent grouping records] so only a single work is shown in search results and all formats and editions are listed under that work.  We will discuss our definition of a 'work' based on FRBR principles; combining meta data from MARC records with metadata from other sources like OverDrive; the technical details of Record Grouping; the design decisions made during implementation; and the reaction from users and staff.&lt;br /&gt;
&lt;br /&gt;
== Topic Space: a mobile augmented reality recommendation app ==&lt;br /&gt;
&lt;br /&gt;
* Jim Hahn, jimhahn@illinois.edu, University of Illinois at Urbana-Champaign&lt;br /&gt;
&lt;br /&gt;
The Topic Space module (http://minrvaproject.org/modules_topicspace.php ) was developed with an IMLS Sparks! Grant to investigate augmented reality technologies for in-library recommendations. The funding allowed for sustained university community collaboration by the University Library, the Graduate School of Library and Information Science, as well as graduate student programmers sourced from the Department of Computer Science. Collaborators designed app functionality and identified relevant open source libraries that could power optical character recognition (OCR) functionality from within the mobile phone.&lt;br /&gt;
&lt;br /&gt;
Topic space allows a user to take a picture of an item's call number in the book stacks. The module will show the user other books that are relevant but that are not shelved nearby. It can also show users books that are normally shelved here but that are currently checked out. Recommendations are based on Library of Congress subject headings and ILS circulation data which indicate recommendation candidates based on total check-outs. &lt;br /&gt;
&lt;br /&gt;
Research questions included development of back end (server-side) pattern matching algorithms for recommendations, and a rapid formative evaluation of interface design that would provide optimal user experience for navigation of the book stacks as a context to recommendations.&lt;br /&gt;
&lt;br /&gt;
Along with the Topic Space native app, grant collaborators prototyped web based recommendations which could serve as a new way of providing readers advisory and “more like this” recommendations from discovery interfaces accessed through desktop browsers. Outcomes of the grant include the availability of the [https://play.google.com/store/apps/details?id=edu.illinois.ugl.minrva Topic Spaces module within Minrva app on the Android Play store] and an experimental [http://backbonejs.org/ Backbone.js] based [http://minrva-dev.library.illinois.edu Topic Space web app].&lt;br /&gt;
&lt;br /&gt;
== Leveling Up Your Git Workflow ==&lt;br /&gt;
&lt;br /&gt;
* Megan Kudzia, moneill@albion.edu, Albion College Library&lt;br /&gt;
* Kate Sears, eks11@albion.edu, Albion College Library&lt;br /&gt;
&lt;br /&gt;
Have you started experimenting with Git on your own, but now you need to include others in your projects? Learn from our mistakes! Transitioning from a one-person git workflow and repo structure, to a structure that includes multiple people (including student workers), is not for the faint of heart. We'll talk about why we decided to work this way, our path to developing a git culture amongst ourselves, conceptual and technical difficulties we've faced, what we learned, and where we are now. Also with pretty pictures (aka workflow drawings).&lt;br /&gt;
&lt;br /&gt;
== Drone Loaning Program: Because Laptops are so last century ==&lt;br /&gt;
&lt;br /&gt;
 * Uche Enwesi, uenwesi@umd.edu, University of Maryland Libraries&lt;br /&gt;
 * Francis Kayiwa, fkayiwa@umd.edu, University of Maryland Libraries&lt;br /&gt;
&lt;br /&gt;
At Univ. Maryland we are in the very early stages of looking into allowing our student body get their hands on a drone. Yes that's right we will let students take out a drone for n amount of hours to work on projects of their choosing. The talk will talk about the logistics of getting a program of this sort from concept to &amp;quot;Is the drone available?&amp;quot;. If people sign waivers we will also promise not to crash the drone into code4lib attendees.&lt;br /&gt;
&lt;br /&gt;
== Got Git? Getting More Out of Your GitHub Repositories ==&lt;br /&gt;
&lt;br /&gt;
 * Terry Brady, twb27@georgetown.edu, Georgetown University Library&lt;br /&gt;
&lt;br /&gt;
This presentation will discuss how librarians, developers, and system administrators at Georgetown University are maximizing their use of the public and private GitHub repositories. &lt;br /&gt;
&lt;br /&gt;
In additional to all of the great benefits of using Git for code management, the GitHub interface provides a powerful set of tools to showcase a project and to keep your users informed of developments to your project.  These tools can assist with marketing and outreach - turning your code repository into a focus of conversation!&lt;br /&gt;
&lt;br /&gt;
* [http://georgetown-university-libraries.github.io/File-Analyzer/ Style-able Project Pages]&lt;br /&gt;
* [https://github.com/Georgetown-University-Libraries/File-Analyzer/wiki Project Wikis]&lt;br /&gt;
* [https://github.com/Georgetown-University-Libraries/Georgetown-University-Libraries-Code/releases Project Release Notes/Portfolios]&lt;br /&gt;
* [https://rawgit.com/Georgetown-University-Libraries/Georgetown-University-Libraries-Code/master/samples/GoogleSpreadsheetFilter.html Web Resources That Can Be Directly Requested]&lt;br /&gt;
* Gists for code sharing&lt;br /&gt;
* Private Repositories and Organizational Groups&lt;br /&gt;
* Pull Request Conversation Tracking&lt;br /&gt;
* Customized Issue management&lt;br /&gt;
&lt;br /&gt;
== Quick Wins for Every Department in the Library - File Analyzer! ==&lt;br /&gt;
&lt;br /&gt;
 * Terry Brady, twb27@georgetown.edu, Georgetown University Library&lt;br /&gt;
&lt;br /&gt;
The Georgetown University Library has customized workflows for nearly every department in our library with a single code base.&lt;br /&gt;
* Analyzing Marc Records for the Cataloging department&lt;br /&gt;
* Transferring ILS invoices for the University Account System for the Acquisitions department &lt;br /&gt;
* Delivering patron fines to the Bursar’s office for the Access Service department&lt;br /&gt;
* Summarizing student worker timesheet data for the Finance department&lt;br /&gt;
* Validating COUNTER compliant reports for the Electronic Resources department&lt;br /&gt;
* Generating ingest packages for the Digital Services department&lt;br /&gt;
* Validating checksums for the Preservation department&lt;br /&gt;
&lt;br /&gt;
Learn how you can customize the [http://georgetown-university-libraries.github.io/File-Analyzer/ File Analyzer] to become a hero in your library!&lt;br /&gt;
&lt;br /&gt;
==The Geospatial World is Moving from Maps *on* the Web to Maps *of* the web. Libraries can too==&lt;br /&gt;
 &lt;br /&gt;
* [[User:Copystar|Mita Williams]], mita@uwindsor.ca, User Experience Librarian, University of Windsor&lt;br /&gt;
&lt;br /&gt;
The transition from paper maps to digital ones changed much more than the maps themselves; it changed the very foundation of how we work and how we find each other. Now maps are transforming again.  The Geospatial World is moving from GIS systems that are institutionally-focused, expensive, feature-burdened, and binds data into a complicated and demanding user-hostile interface. From this transition from digital to web-based digital geospatial tools has come growth and development in new forms of map-based investigative journalism, activism, scholarship, and business ventures. This talk will highlight the conditions and strategies that made these changes possible as a means to draw a path by which librarians through our own work may follow, dragons notwithstanding. &lt;br /&gt;
&lt;br /&gt;
== Building Your Own Federated Search ==&lt;br /&gt;
&lt;br /&gt;
* Rich Trott, Richard.Trott@ucsf.edu, UC San Francisco&lt;br /&gt;
&lt;br /&gt;
Advances in modern browsers have created some interesting possibilities for federated search. This presentation will cover common techniques and pitfalls in building a federated search. We will discuss what principles guided our decisions when implementing our own federated search. We will show tools we've built and our findings from building and using experimental prototypes.&lt;br /&gt;
&lt;br /&gt;
Your higher education institution likely offers dozens of online resources for educators, students, researchers, and the public. And each of these online resources likely has its own search tool. But users can't be expected to search in dozens of different interfaces to find what they're looking for. A typical solution for this issue is federated search. &lt;br /&gt;
&lt;br /&gt;
==  Indexing Linked Data with LDPath ==&lt;br /&gt;
&lt;br /&gt;
* Chris Beer, cabeer@stanford.edu, Stanford University Libraries&lt;br /&gt;
&lt;br /&gt;
LDPath [1] is a simple query language for indexing linked open data, with support for caching, content negotiation, and integration with non-RDF endpoints. This talk will demonstrate the features and potential of the language and framework to index a resource with links into id.loc.gov, viaf.org, geonames.org, etc to build an application-ready document.&lt;br /&gt;
&lt;br /&gt;
[1] http://marmotta.apache.org/ldpath/language.html&lt;br /&gt;
&lt;br /&gt;
== Show Me the Money: Integrating an LMS with Payment Providers ==&lt;br /&gt;
 &lt;br /&gt;
* Josh Weisman,  Josh.Weisman@exlibrisgroup.com, Development Director-Resources Management, Ex Libris Group&lt;br /&gt;
&lt;br /&gt;
In order to provide an easy and convenient way for patrons to pay fines, we are exploring ways to integrate the library management system with online payment providers such as PayPal. With many LMS systems being designed and developed for the cloud, we should be able to provide the frictionless user experience our patrons have come to expect from online transactions. In this session we'll discuss strategies for integration and review a sample application which uses REST APIs from a library management system to integrate with PayPal.&lt;br /&gt;
&lt;br /&gt;
== Shibboleth Federated Authentication for Library Applications: ==&lt;br /&gt;
&lt;br /&gt;
* Scott Fisher, scott.fisher@ucop.edu, California Digital Library&lt;br /&gt;
* Ken Weiss, ken.weiss@ucop.edu, California Digital Library&lt;br /&gt;
&lt;br /&gt;
Shibboleth is the most widely-used method to provide single-sign-on authentication to academic applications where users come from many different institutions. Shibboleth, the InCommon education and research trust framework, and the SAML protocol comprise a very powerful - but very complicated - solution to this very complicated problem. Scott and Ken have implemented Shibboleth for multiple library applications. They will share their understanding of the good, the bad, and the underlying spaghetti that makes it all work. Ken will discuss some of the technical aspects of the solution, touching on optimal and non-optimal use cases, administrative challenges, and authorization concerns. Scott will describe the implementation pattern for multi-institution single-sign-on that the California Digital Library has evolved, using the recently released Dash application (http://dash.cdlib.org) as an example.&lt;br /&gt;
&lt;br /&gt;
==Scientific Data: A Needs Assessment Journey==&lt;br /&gt;
 &lt;br /&gt;
*[[User:VickySteeves| Vicky Steeves]], vsteeves@amnh.org, American Museum of Natural History&lt;br /&gt;
&lt;br /&gt;
While surveying digital research and collections data in the research science divisions at the American Museum of Natural History in NYC (as a part of my [http://ndsr.nycdigital.org/ National Digital Stewardship Residency] project), I have come across the big data hogs (genome sequencing and CT scanning) and the little pieces of data (images, publications), all equally important to not only scientific discovery, but as nodes in the history of science. &lt;br /&gt;
&lt;br /&gt;
In this session, I will discuss the development of my needs assessment surveys for scientific datasets and the interview process with Museum curators and researchers as background, seguing into an explanation of the results. I will then combine my findings into preliminary selection criteria to choose tools for digital preservation and management unique to scientific datasets. This will brooke a discussion on emerging standards, tools, and technologies in big data, specific to research science. &lt;br /&gt;
&lt;br /&gt;
I will conclude with preliminary findings on emerging technology that can be used to answer concerns surrounding the management and digital preservation of these data. I am hoping the Q&amp;amp;A session can be used to both answer questions about my project, and function as a way for you (the larger tech-savy library community)  to discuss the tools I’ve touched on in this talk. &lt;br /&gt;
&lt;br /&gt;
== Feminist Human Computer Interaction (HCI) in Library Software ==&lt;br /&gt;
 &lt;br /&gt;
* Bess Sadler,  bess@stanford.edu, Stanford University Libraries&lt;br /&gt;
&lt;br /&gt;
Libraries are not neutral repositories of knowledge. Library classification systems and search technologies tend to reflect the inequalities, biases, ethnocentrism, and power imbalances of the societies in which they are built [1]. How might we better resist these tendencies in the library software we create? This talk will examine some qualities of feminist HCI (pluralism, self-disclosure, participation, ecology, advocacy, and embodiment) [2] through the lens of library software. &lt;br /&gt;
&lt;br /&gt;
[1] Olson, Hope A. (2002). The Power to Name: Locating the Limits of Subject Representation in Libraries. Dordrecht, The Netherlands: Kluwer Academic Publishers.&lt;br /&gt;
&lt;br /&gt;
[2] Bardzell, Shaowen. Feminist HCI: Taking Stock and Outlining an Agenda for Design. CHI 2010: HCI For All. http://dmrussell.net/CHI2010/docs/p1301.pdf&lt;br /&gt;
&lt;br /&gt;
== Heiðrún: DPLA's Metadata Harvesting, Mapping and Enhancement System ==&lt;br /&gt;
&lt;br /&gt;
* Audrey Altman, audrey at dp.la, Digital Public Library of America&lt;br /&gt;
* Gretchen Gueguen, gretchen at dp.la, Digital Public Library of America&lt;br /&gt;
* Mark Breedlove, mb at dp.la, Digital Public Library of America&lt;br /&gt;
&lt;br /&gt;
The Digital Public Library of America aggregates metadata for over 8 million objects from more than 24 direct partners, or Hubs, using its Metadata Application Profile (MAP), an RDF metadata application profile based on the Europeana Data Model. After working with the initial system for harvesting, mapping and enhancing our Hub’s metadata for a year, we realized that it was inadequate for working with data at this scale. There were architectural issues; it was opaque to non-developer and partner staff; there were inadequate tools for quality assurance and analysis; and the system was unaware that it was working with RDF data. As the network of Hubs expanded and we ingested more metadata, it became harder and harder to know when or why a harvest, a mapping task, or an enrichment went wrong because the tools for quality assurance were largely inadequate. &lt;br /&gt;
&lt;br /&gt;
The DPLA Content and Technology teams decided to develop a new system from the ground up to address those problems. Development of Heidrun, the internal version of the new system, started in October 2014. Heidrun’s goals are to make it easier for us to harvest and map metadata from various sources and in variety of schemas to the DPLA MAP, to better enrich that metadata using external data sources, and to actively involve our partners in the ingestion process through access to better QA tools. Heidrun and its componentry are built on Ruby on Rails, Blacklight, and ActiveTriples. Our presentation will give some background on our design principles and processes used during development, the architecture of the system, and its functionality. We plan to release a version of Heidrun and its components as a generalized metadata aggregation system for use by DPLA Hubs and others working to aggregate cultural heritage metadata.&lt;br /&gt;
&lt;br /&gt;
== OS or GTFO: Program or Perish ==&lt;br /&gt;
*Tessa Fallon, tessa.fallon@gmail.com&lt;br /&gt;
&lt;br /&gt;
Description TBD&lt;br /&gt;
&lt;br /&gt;
== Creating Dynamic— and Cheap!— Digital Displays with HTML 5 Authoring Software ==&lt;br /&gt;
* Chris Woodall, cmwoodall@salisbury.edu, Salisbury University Libraries&lt;br /&gt;
Would your library like to have large digital signage that displays dynamic information such as library hours, weather, room availability, and more? Have you looked into purchasing large digital signage, only to be turned off by the high price tag and lack of customization available with commercial solutions? Our library has developed a cheap and effective alternative to these systems using HTML 5 authoring software, a large TV, and freely-available APIs from Google, Springshare, and others. At this session, you’ll learn about the system that we have in place for displaying dynamic and easily-updatable information on our library’s large digital display, and how you can easily create something similar for your library.&lt;br /&gt;
&lt;br /&gt;
== REPOX: Metadata Blender ==&lt;br /&gt;
 &lt;br /&gt;
* John Mignault, jmignault@metro.org, Empire State Digital Network&lt;br /&gt;
&lt;br /&gt;
With the growth in the number of hubs providing metadata to the Digital Public Library of America, many of them are using REPOX, a tool originally created for the Europeana project, to aggregate disparate metadata feeds and transform them into formats suitable for ingest into DPLA. The Empire State Digital Network, the forthcoming DPLA service hub for NY state, is using it to prepare for our first ingest into DPLA in early 2015.  We'll take a look at REPOX and its capabilities and how it can be useful for ingesting and transforming metadata, and also discuss some things we've learned in massaging widely varied metadata feeds.&lt;br /&gt;
&lt;br /&gt;
== Beyond Open Source ==&lt;br /&gt;
&lt;br /&gt;
* Jason Casden, jmcasden@ncsu.edu, NCSU Libraries&lt;br /&gt;
* Bret Davidson, bddavids@ncsu.edu, NCSU Libraries&lt;br /&gt;
&lt;br /&gt;
The Code4Lib community has produced an increasingly impressive collection of open source software over the last decade, but much of this creative work remains out of reach for large portions of the library community. Do the relatively privileged institutions represented by a majority of Code4Lib participants have a professional responsibility to support the adoption of their innovations?&lt;br /&gt;
&lt;br /&gt;
Drawing from old and new software packaging and distribution approaches (from freeware to Docker), we will propose extending the open source software values of collaboration and transparency to include the wide and affordable distribution of software. We believe this will not only simplify the process of sharing our applications within the Code4Lib community, but also make it possible for less well resourced institutions to actually use our software. We will identify areas of need, present our experiences with the users of our own open source projects, discuss our attempts to go beyond open source, and make an argument for the internal value of supporting and encouraging a vibrant library ecosystem.&lt;br /&gt;
&lt;br /&gt;
[[Category:Code4Lib2015]] &lt;br /&gt;
[[Category:Talk Proposals]]&lt;br /&gt;
&lt;br /&gt;
== Making It Work: Problem Solving Using Open Source at a Small Academic Library ==&lt;br /&gt;
 &lt;br /&gt;
* Adam Strohm, astrohm@iit.edu, Illinois Institute of Technology&lt;br /&gt;
* Max King, mking9@iit.edu, Illinois Institute of Technology&lt;br /&gt;
&lt;br /&gt;
The Illinois Institute of Technology campus was added to the National Register of Historic Places in 2005, and contains a building, Mies van der Rohe's S.R. Crown Hall, that was named a National Historic Landmark in 2001. Creating a digital resource that can adequately showcase the campus and its architecture is challenge enough in and of itself, but doing so as a two-person team of relative newcomers, at a university library without dedicated programmers on staff, ups the ante considerably.&lt;br /&gt;
The challenges of technical know-how, staff time, and funding are nothing new to anyone working on digital projects at a university library, and are amplified when doing so at a smaller institution. This talk covers the conception, development, and design of the campus map site that was built, concentrating on the problem-solving strategies developed to cope with limited technical and financial resources.&lt;br /&gt;
We'll talk about our approach to development with Open Source software, including Omeka, along with the Neatline and Simile Timeline plugins. We'll also discuss the juggling act of designing for mobile mapping functionality without sacrificing desktop design, weighing the costs of increased functionality versus our ability to time-effectively include that functionality, and the challenge of building a site that could be developed iteratively, with an eye towards future enhancement and sustainability. Finally, we’ll provide recommendations for other librarians at smaller institutions for their own efforts at digital development.&lt;br /&gt;
&lt;br /&gt;
== Recording Digitization History: Metadata Options for the Process History of Audiovisual Materials ==&lt;br /&gt;
 &lt;br /&gt;
* Peggy Griesinger, peggy_griesinger@moma.org, Museum of Modern Art&lt;br /&gt;
&lt;br /&gt;
The Museum of Modern Art has amassed a large collection of audiovisual materials over its many decades of existence. In order to preserve these materials, much of the audiovisual collection has been digitized. This is a complex process involving numerous steps and devices, and the methods used for digitization can have an effect on the quality of the file that is preserved. Therefore, knowing exactly how something was digitized is critical for future stewards of these objects to be able to properly care for and preserve them. However, detailed technical information about the processes involved in the digitization of audiovisual materials is not defined explicitly in most metadata schemas used for audiovisual materials. In order to record process history using existing metadata standards, some level of creativity is required to allow existing standards to express this information.&lt;br /&gt;
&lt;br /&gt;
This talk will detail different metadata standards, including PBCore, PREMIS, and reVTMD, that can be implemented as methods of recording this information. Specifically, the talk will examine efforts to integrate this metadata into the Museum of Modern Art’s new digital repository, the DRMC. This talk will provide background on the DRMC as well as MoMA’s specific institutional needs for process history metadata, then discuss different metadata implementations we have considered to document process history.&lt;br /&gt;
&lt;br /&gt;
== Pig Kisses Elephant: Building Research Data Services for Web Archives ==&lt;br /&gt;
 &lt;br /&gt;
* Jefferson Bailey,  jefferson@archive.org, Internet Archive&lt;br /&gt;
* Vinay Goel, vinay@archive.org, Internet Archive&lt;br /&gt;
&lt;br /&gt;
More and more libraries and archives are creating web archiving programs.  For both new and established programs, these archives can consist of hundreds of thousands, if not millions, of born-digital resources within a single collection; as such, they are ideally suited for large-scale computational study and analysis. Yet current access methods for web archives consist largely of browsing the archived web in the same manner as browsing the live web and the size of these collections and complexity of the WARC format can make aggregate analysis difficult. This talk will describe a project to create new ways for users and researchers to access and study web archives by offering extracted and post-processed datasets derived from web collections. Working with the 325+ institutions and their 2600+ collections within the Archive-It service, the Internet Archive is building methods to deliver a variety of datasets culled from collections of web content, including extracted metadata packaged in JSON, longitudinal link graph data, named entities, and other types of data. The talk will cover the technical details of building dataset production pipelines with Apache Pig, Hadoop, and tools like Stanford NER, the programmatic aspects of building data services for archives and researchers, and ongoing work to create new ways to access and study web archives.&lt;br /&gt;
&lt;br /&gt;
== Awesome Pi, LOL! ==&lt;br /&gt;
&lt;br /&gt;
* Matt Connolly, mconnolly@cornell.edu, Cornell University Library&lt;br /&gt;
* Jennifer Colt, jrc88@cornell.edu, Cornell University Library&lt;br /&gt;
&lt;br /&gt;
Inspired by Harvard Library Lab’s “Awesome Box” project, Cornell’s Library Outside the Library (LOL) group is piloting a more automated approach to letting our users tell us which materials they find particularly stunning. Armed with a Raspberry Pi, a barcode scanner, and some bits of kit that flash and glow, we have ventured into the foreign world of hardware development. This talk will discuss what it’s like for software developers and designers to get their hands dirty, how patrons are reacting to the Awesomizer, and LOL’s not-afraid-to-fail philosophy of experimentation.&lt;br /&gt;
&lt;br /&gt;
== You Gotta Keep 'em Separated: The Case for &amp;quot;Bento Box&amp;quot; Discovery Interfaces ==&lt;br /&gt;
 &lt;br /&gt;
* Jason Thomale,  jason.thomale@unt.edu, University of North Texas Libraries&lt;br /&gt;
&lt;br /&gt;
I know, I know--proposing a talk about Resource Discovery is like, ''so'' 2010.&lt;br /&gt;
&lt;br /&gt;
The thing is, practically all of us--in academic libraries at least--have a similar set up for discovery, with just a few variations, and so talking about it still seems useful. Stop me if this sounds familiar. You've got a single search box on the library homepage as a starting point for discovery. And it's probably a tabbed affair, with an option for searching the catalog for books, an option for searching a discovery service for articles, an option for searching databases, and maybe a few others. Maybe you have an option to search everything at once--probably the default, if you have it. And, if you're a crazy hepcat, maybe you ''only'' have your one search that searches everything, with no tabs.&lt;br /&gt;
&lt;br /&gt;
Now, the question is, for your &amp;quot;everything&amp;quot; search, are you doing a combined list of results, or are you doing it bento-box style, with a short results list from each category displayed in its own compartment?&lt;br /&gt;
&lt;br /&gt;
At UNT, we've been holding off on implementing an &amp;quot;everything&amp;quot; search, for various reasons. One reason is that the evidence for either style hasn't been very clear. There's this persistent paradox that we just can't reconcile: users tell us, through word and action, that they prefer searching Google, yet, libraries aren't Google, and there are valid design reasons why we shouldn't try to oversimplify our discovery interfaces to be like Google. And there's user data that supports both sides.&lt;br /&gt;
&lt;br /&gt;
Holding off on making this decision has granted us 2 years of data on how people use our tabbed search interface that does ''not'' include an &amp;quot;everything&amp;quot; search. Recently I conducted a thorough analysis of this data--specifically the usage and query data for our catalog and discovery system (Summon). And I think it helps make the case for a bento box style discovery interface. To be clear, it isn't exactly the smoking gun that I was hoping for, but the picture it paints I think is telling. At the very least, it points away from a combined-results approach.&lt;br /&gt;
&lt;br /&gt;
I'm proposing a talk discussing the data we've collected, the trends we've seen, and what I think it all means--plus other reasons that we're jumping on the &amp;quot;bento box&amp;quot; discovery bandwagon and why I think &amp;quot;bento box&amp;quot; is at this point the path that least sells our souls.&lt;br /&gt;
&lt;br /&gt;
== Don’t know about you, but I’m feeling like SHA-2!: Checksumming with Taylor Swift ==&lt;br /&gt;
 &lt;br /&gt;
* Ashley Blewer!, ashley.blewer@gmail.com&lt;br /&gt;
&lt;br /&gt;
Checksum technology is used all over the place, from git commits to authenticating Linux packages. It is most commonly used in the digital preservation field to monitor materials in storage for changes that will occur over time or used in the transmission of files during duplication. But do you even checksum, bro? I want this talk to move checksums from a position of mysterious macho jargon to something everyone can understand and want to use. I think a lot of people have heard of checksum but don’t know where to begin when it comes to actually using it at their institution. And cryptography is hella intimidating! This talk will cover what checksums are, how they can be integrated into a library or archival workflow, protecting collections requiring additional levels of security, algorithms used to verify file fixity and how they are different, and other aspects of cryptographic technology. Oh, and please note that all points in this talk will be emphasized or lightly performed through Taylor Swift lyrics. Seriously, this talk will consist of at least 50% Taylor Swift. Can you, like, even?&lt;br /&gt;
&lt;br /&gt;
== Level Up Your Coding with Code Club (yes, you can talk about it) ==&lt;br /&gt;
&lt;br /&gt;
* Coral Sheldon-Hess, coral@sheldon-hess.org&lt;br /&gt;
&lt;br /&gt;
Reading code is a necessary part of becoming a better developer. It gives you more experience and more insight into How Things Are (or Aren't) Done; it builds your intuition about how to solve problems with code; and it increases your confidence that you, too, can tackle whatever technological problems you're facing.&lt;br /&gt;
&lt;br /&gt;
But you don't have to read code alone! (Which is good. It's really not fun to read code alone.) &lt;br /&gt;
&lt;br /&gt;
In late 2014, a group of librarians formed two Code Clubs, inspired by [http://bloggytoons.com/code-club/ this talk by Saron] (of Bloggytoons fame). I'd like to tell you about how we've structured our Code Clubs, what has gone well, what we've learned, and what you need to do to form your own Code Club. I'll share a list of the codebases we've looked at, too, to help you get your own Code Club off the ground! &lt;br /&gt;
&lt;br /&gt;
== The Growth of a Programmer ==&lt;br /&gt;
 &lt;br /&gt;
* [[User:jgo | Joshua Gomez]], Getty Research Institute, jgomez@getty.edu&lt;br /&gt;
&lt;br /&gt;
Just like other creative endeavors, software developers can experience periods of great productivity or find themselves in a rut. After contemplating the alternating periods in my own career I've noticed several factors that have effected my own professional growth and happiness, including: mentorship, structure, community, teamwork, environment, formal education, etc. Not all of the factors need to be present at all times; but some mixture of them is critical for continued growth. In this talk, I will articulate these factors, discuss how they can effect a developer's career, and how they can be sought out when missing. This talk is aimed at both new developers looking to strike their own path as well as the veterans that lead or mentor them.&lt;br /&gt;
&lt;br /&gt;
== Developing a Fedora 4.0 Content Model for Disk Images ==&lt;br /&gt;
&lt;br /&gt;
* Matthew Farrell, matthew.j.farrell@duke.edu, Duke University Libraries&lt;br /&gt;
* Alexandra Chassanoff, achass@email.unc.edu, BitCurator Access Project Manager&lt;br /&gt;
&lt;br /&gt;
As the acquisition of born-digital materials grows, institutions are seeking methods to facilitate easy ingest into their repositories and provide access to disk images and files derived or extracted from disk images. In this session, we describe our development of a Fedora 4.0 Content model for disk images, including acceptable image file formats and the rationale behind those choices.  We will also discuss efforts to integrate the disk image content model into the BitCurator Access environment. Unlike generalized, format-agnostic content models which might treat the disk image as a generic bitstream, a content model designed for disk images enables expression of relationships among associated content in the collection such as files extracted from images and other born-digital and digitized material associated with the same creator.  It also enables capture of file-system attributes such as file paths, timestamps, whether files are allocated/deleted, etc.  Further, a disk image content model suggests further steps repositories can take in order to transform and re-use associated metadata generated during the creation and forensic analysis of the disk image.&lt;br /&gt;
&lt;br /&gt;
== Data acquisition and publishing tools in R ==&lt;br /&gt;
&lt;br /&gt;
* Scott Chamberlain,  scott@ropensci.org, rOpenSci/UC Berkeley - first-time presenter&lt;br /&gt;
&lt;br /&gt;
R is an open source programming environment that is widely used among researchers in many fields. R is powerful because it's free, increasingly robust, and facilitates reproducible research, an increasingly sought after goal in academia. Although tools for data manipulation/visualization/analysis are well developed in R, data acquisition and publishing tools are not. rOpenSci is a collaborative effort to create the tools necessary to complete the reproducible research workflow. This presentation discusses the need for these tools, including examples, including interacting with the repositories Mendeley, Dryad, DataONE, and Figshare. In addition, we are building tools for searching scholarly metadata and acuiring full text of open access articles in a standarized way across metadata providers (e.g., Crossref, DataCite, DPLA) and publishers (e.g., PLOS, PeerJ, BMC, Pubmed). Last, we are building out tools for data reading and writing in Ecologial Metadata Language (EML).&lt;br /&gt;
&lt;br /&gt;
== SPLUNK: Log File Analysis ==&lt;br /&gt;
&lt;br /&gt;
* Jim LeFager, jlefager@depaul.edu, DePaul University Library&lt;br /&gt;
DePaul University Library recently took over monitoring and maintaining of the library EZproxy servers this past year and using Splunk, a machine data analysis tool, we are able to gather information and statistics on our electronic resource usage in addition to monitoring the servers. Splunk is a tool that can collect, analyze, and visualize log files and other machine data in real time and this has allowed for gathering realtime usage statistics for our electronic resources allowing us to filter by multiple facets including IP Range, Group Membership (student, faculty), so that we can see who is accessing our resources and from where. Splunk allows our library to query our data and create rich custom dashboards as well as create alerts that can be triggered when certain conditions are met, such as error codes, which can send an email alert to a group of users. We will be leveraging Splunk to monitor all library web applications going forward. This talk will review setting up Splunk and best practices in using the available features and customizations available including creating queries, alerts, and custom dashboards.  &lt;br /&gt;
&lt;br /&gt;
== Your code does not exist in a vacuum ==&lt;br /&gt;
* Becky Yoose, yoosebec at grinnell dot edu, Grinnell College (Done a lightning talk, MC duties, but have not presented a prepared talk)&lt;br /&gt;
&lt;br /&gt;
“If you have something to say, then say it in code…” - Sebastian Hammer, code4lib 2009&lt;br /&gt;
&lt;br /&gt;
In its 10 year run, code4lib has covered the spectrum of libtech development, from search to repositories to interfaces. However, during this time there has been little discussion about this one little fact about development - code does not exist in a vacuum. &lt;br /&gt;
&lt;br /&gt;
Like the comment above, code has something to say. A person’s or organization’s culture and beliefs influences code in all steps of the development cycle. What development method you use, tools, programming languages, licenses - everything is interconnected with and influenced by the philosophies, economics, social structures, and cultural beliefs of the developer and their organization/community.&lt;br /&gt;
&lt;br /&gt;
This talk will discuss these interconnections and influences when one develops code for libraries, focusing on several development practices (such as “Fail Fast, Fail Often” and Agile)   and licensing choices (such as open source) that libtech has either tried to model or incorporate into mainstream libtech practices. It’ll only scratch the surface of the many influences present in libtech development, but it will give folks a starting point to further investigate these connections at their own organizations and as a community as a whole.&lt;br /&gt;
&lt;br /&gt;
tl;dr - this will be a messy theoretical talk about technology and libraries. No shiny code slides, no live demos. You might come out of this talk feeling uncomfortable. Your code does not exist in a vacuum. Then again, you don’t exist in a vacuum either.&lt;br /&gt;
&lt;br /&gt;
== The Metadata Hopper: Mapping and Merging Metadata Standards for Simple, User-Friendly Access ==&lt;br /&gt;
&lt;br /&gt;
* Tracy Seneca, tjseneca@uic.edu, University of Illinois at Chicago&lt;br /&gt;
* Esther Verreau: verreau1@uic.edu, University of Illinois at Chicago&lt;br /&gt;
&lt;br /&gt;
The Chicago Collections Consortium: 15 institutions and growing!  8 distinct EAD standards! At least 3 permutations of MARC, and we lost count of the varieties of custom CONTENTdm image collections.  Not to mention the 14,730 unique subject terms, nearly all of which lead our poor end-users to exactly one organization's content. &lt;br /&gt;
&lt;br /&gt;
All large content aggregation projects have faced this challenge, and there are a few emerging tools to help us wrangle disparate metadata into new contexts.  The Metadata Hopper is one such tool. The Metadata Hopper enables archivists to map their local metadata standards to standardized deposit records, and tags those materials using a shared vocabulary, integrating them into a user-friendly portal without disrupting local practices. In last year's Code4Lib lightning talk we described the challenges that the Chicago Collections Consortium faces in creating shared, in-depth access to archival and digital collections about Chicago history and culture across CCC member organizations. This year, thanks to the Andrew W. Mellon Foundation, we have a working Django application to demonstrate.  In this talk we'll discuss the design that enables multiple layers of flexibility, from the ability to accept a variety of metadata standards to designing for an open source audience.&lt;br /&gt;
&lt;br /&gt;
http://chicagocollectionsconsortium.org&lt;br /&gt;
&lt;br /&gt;
== Programmers are not projects: lessons learned from managing humans ==&lt;br /&gt;
&lt;br /&gt;
* Erin White, erwhite@vcu.edu, Virginia Commonwealth University - first-time presenter&lt;br /&gt;
&lt;br /&gt;
Managing projects is one thing, but managing people is another. Whether we’re hired as managers or grow “organically” into management roles, sometimes technical people end up leading technical teams (gasp!). I’ll talk about lessons I’ve learned about hiring, retaining, and working long-term and day-to-day with highly tech-competent humans. I’ll also talk about navigating the politics of libraryland, juggling different types of projects, and working with constrained budgets to make good things and keep talented people engaged.&lt;br /&gt;
&lt;br /&gt;
== Practical Strategies for Picking Low-Hanging Fruits to Improve Your Library's Web Usability and UX ==&lt;br /&gt;
&lt;br /&gt;
* Bohyun Kim, bkim@hshsl.umaryland.edu, University of Maryland, Baltimore&lt;br /&gt;
&lt;br /&gt;
Have you ever tried to fix an obvious (to you at least!) problem in Web usability or UX (user experience) only to face strong resistance from the library staff? Are you a strong advocate for making library resources, systems, services, and space as usable as possible, but do you often find yourself struggling to get the point across and/or obtain the crucial buy-in from colleagues and administrators? &lt;br /&gt;
&lt;br /&gt;
There is no shortage of Web usability and UX guidelines. But applying them to a library and implementing desired changes often involve a long and slow process. To tackle this issue, this talk will focus on how to utilize the 'expert review' process (aka 'heuristic evaluation') as a preliminary or even preparatory step before embarking on more time-and-labor-intensive usability testing and user research. Several examples from  simple fixes to more nuanced usability and UX issues in libraries will be discussed to your heart's content. The goal of this talk is to provide practical strategies for picking as many low-hanging fruits as possible to make a real (albeit small) difference to your library's Web usability and UX effectively and efficiently.&lt;br /&gt;
&lt;br /&gt;
== A Semantic Makeover for CMS Data ==&lt;br /&gt;
&lt;br /&gt;
* Bill Levay, wjlevay@gmail.com, Linked Jazz Project&lt;br /&gt;
&lt;br /&gt;
How can we take semi-structured but messy metadata from a repository like CONTENTdm and transform it into rich linked data? Working with metadata from Tulane’s Hogan Jazz Archive Photography Collection, the Linked Jazz Project used Open Refine and Python scripts to tease out proper names, match them with name authority URIs, and specify FOAF relationships between musicians who appear together in photographs. Additional RDF triples were created for any dates associated with the photos, and for those images with place information we employed GeoNames URIs. Historical images and data that were siloed can now interact with other datasets, like Linked Jazz’s rich set of names and personal relationships, and can be visualized [link to come] or otherwise presented on the web in any number of ways. I have not previously presented at a Code4Lib conference.&lt;br /&gt;
&lt;br /&gt;
== Taking User Experience (UX) to new heights ==&lt;br /&gt;
 &lt;br /&gt;
* Kayne Richens, kayne.richens@deakin.edu.au, Deakin University&lt;br /&gt;
&lt;br /&gt;
User Experience, or &amp;quot;UX&amp;quot;, is for more than just websites. At Deakin University Library we're exploring ways to improve the user experience inside our campus library spaces, by putting new technologies front and centre in the overall experience for our students. How are we doing this? We’re collaborating with the University's IT department and exploring the following Library-changing opportunities:&lt;br /&gt;
&lt;br /&gt;
- Augmented Reality for Way-finding: We’re tackling that infamous thing that all Libraries can't get right – way-finding. We're enhancing library tour information and way-finding experiences by introducing augmented reality solutions.&lt;br /&gt;
 &lt;br /&gt;
- Heat mapping the library with wi-fi: We’re using our existing wi-fi infrastructure to present &amp;quot;heat maps&amp;quot; of library space utilisation, allowing our users to easily locate the space that best suits their needs, whether it be busy spaces to collaborate, or quiet spaces to study. And by overlaying computer usage and group study room bookings, users can quickly locate the space they need.&lt;br /&gt;
 &lt;br /&gt;
- Video chat library service: We’re piloting video-conferencing facilities in our group study rooms and spaces, connecting users and librarians and other professionals.&lt;br /&gt;
         &lt;br /&gt;
This talk will look at how these different technologies will be brought together to provide improved user experiences, as well some of the evidence and reasons that helped us to identify our needs, so you can too.&lt;br /&gt;
&lt;br /&gt;
==How to Hack it as a Working Parent: or, Should Your Face be Bathed in the Blue Glow of a Phone at 2 AM?==&lt;br /&gt;
&lt;br /&gt;
*Margaret Heller, Loyola University Chicago, mheller1@luc.edu&lt;br /&gt;
*Christina Salazar, California State University Channel Islands, christina.salazar@csuci.edu&lt;br /&gt;
*May Yan, Ryerson University, may.yan@ryerson.ca&lt;br /&gt;
&lt;br /&gt;
Modern technology has made it easier than ever for parents employed in technical environments to keep up with work at all hours and in all locations. This makes it possible to work a flexible schedule, but also may lead to problems with work/life balance and furthering unreasonable expectations about working hours. Add to that shifting gender roles and limited paid parental leave in the United States and you have potential for burnout and a certainty for anxiety. It raises the additioal question of whether the “always connected” mindset puts up a barrier to some populations who otherwise might be better represented in open source and library technology communities. &lt;br /&gt;
&lt;br /&gt;
This presentation will address tools that are useful for working parents in technical library positions, and share some lessons learned about using these tools while maintaining a reasonable work/life balance. We will consider a question that Karen Coyle raised back in 1996: &lt;br /&gt;
“What if the thousands of hours of graveyard shift amateur hacking wasn't really the best way to get the job done? That would be unthinkable.” &lt;br /&gt;
&lt;br /&gt;
For those who are able to take an extended parental leave, we will present strategies for minimizing the impact to your career and your employer. For those (particularly in the United States) who are only able to take a short leave will require different strategies. Despite different levels of preparation, all are useful exercises in succession planning and making a stronger workplace and future ability to work a flexible schedule through reviewing workloads, cross-training personnel, hiring contract replacements, and creative divisions of labor. Such preparation makes work better for everyone, kids or no kids or caretakers of any kind.&lt;br /&gt;
&lt;br /&gt;
==Making your digital objects embeddable around the web==&lt;br /&gt;
 &lt;br /&gt;
* Jessie Keck, jkeck@stanford.edu, Stanford University Libraries&lt;br /&gt;
* Jack Reed, pjreed@stanford.edu, Stanford University Libraries&lt;br /&gt;
&lt;br /&gt;
With more and more content from our digital repositories making their way into our discovery environments we quickly realize that we’re repeatedly re-inventing the wheel when it comes to creating “Viewers” for these digital objects.  With various different types of viewers necessary (books, images, audio, video, geospatial data, etc) the burden of getting these viewers into various environments (topic guides, blogs, catalogs, etc) becomes exponential.&lt;br /&gt;
&lt;br /&gt;
In this talk we’ll discuss how Stanford University Libraries implemented an oEmbed service to create an extensible viewer framework for all of its digital content. Using this service we’ve been able to easily integrate viewers into various discovery applications as well as make it easy for end users who discover our objects to easily embed customized versions into their own websites and blogs.&lt;br /&gt;
&lt;br /&gt;
==So you want to make your geospatial data discoverable==&lt;br /&gt;
 &lt;br /&gt;
* Jack Reed, pjreed@stanford.edu, Stanford University Libraries&lt;br /&gt;
&lt;br /&gt;
Finding data for research or coursework can be one of the most time intensive tasks for a scholar or student. We introduce GeoBlacklight, an open source, multi-institutional software project focused on solving these common challenges at institutions across the world. GeoBlacklight prioritizes user experience, integrates with many GIS tools, and streamlines the use and organization of geospatial data. This talk will provide an introduction to the software, demonstrate current functionality, and provide a road map for future work.&lt;br /&gt;
&lt;br /&gt;
== Clueless-Driven Development: How I learned to migrate to Fedora 4 ==&lt;br /&gt;
&lt;br /&gt;
* Adam Wead, awead@psu.edu, Penn State University&lt;br /&gt;
&lt;br /&gt;
Recently I was tasked with migrating the content from our Fedora3 repository to the new Fedora4 repository architecture.&lt;br /&gt;
Despite a wealth of community support, I had no idea how to approach, or even begin to solve this problem. I knew I&lt;br /&gt;
wanted to follow best practices and use test-driven  development to build my solution, but had no idea where to start.&lt;br /&gt;
Despite this initial setback, I was able to start writing tests with only a  vague understanding of the problem. As my&lt;br /&gt;
tests exposed where my understanding of the problem was flawed, my code evolved, and within a week I had arrived  at a&lt;br /&gt;
working solution that exhibited all the hallmarks of good testing and software design.&lt;br /&gt;
&lt;br /&gt;
This talk recounts the process I went through from starting with practically nothing, to arriving at a working solution.&lt;br /&gt;
You can follow the rules of  test-driven development, but you can write tests in an expressive way to describe the&lt;br /&gt;
problem instead of just describing what the code should do. It was also essential to begin testing from an integration&lt;br /&gt;
viewpoint as opposed to a unit one, because at the outset the units were unknown and were later realized through further&lt;br /&gt;
development. For the presentation, I will be demonstrating using RSpec and Ruby. All the code examples will be related&lt;br /&gt;
to the Hydra software stack; however, I hope to show  that the processes at work will be applicable in any context.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Designing and Leading a Kick A** Tech Team ==&lt;br /&gt;
 &lt;br /&gt;
* Sibyl Schaefer,  sschaefer@rockarch.org, Rockefeller Archive Center&lt;br /&gt;
&lt;br /&gt;
New managers are often promoted without receiving management training, yet management is not something you just figure out. The experience of being expected to know how to manage, yet not being trained to do so often results in new managers feeling isolated and unsure how to move from making to managing. In this talk I’ll focus on my own managerial experience of designing and leading an archival tech team in a small independent archives. Topics covered will include hiring, delegating, creating a team culture, and leading people whose specialized knowledge exceeds your own. The talk take-aways should be applicable to managers and employees at large and small institutions alike.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==American (Archives) Horror Story: LTO Failure and Data Loss ==&lt;br /&gt;
 &lt;br /&gt;
* Rebecca Fraimow, rebecca_fraimow@wgbh.org, NDSR Resident, WGBH&lt;br /&gt;
* Casey Davis, casey_davis@wgbh.org, Project Manager, American Archive of Public Broadcasting, WGBH&lt;br /&gt;
&lt;br /&gt;
Here’s a story to send shivers down archival spines: when transferring video files off LTO for the American Archive project, WGBH got an initial failure rate of 57%.   After repeat tries, the rates improved; still, an unnervingly large percentage of files were never able to be transferred successfully.   Even more unnerving, going public with our horror story got a big response from other archives using LTO -- it seems like many institutions are having similarly scary results.   What are the real risks with LTO tape?  Are there steps that archives should be taking to better circumvent those risks?  This presentation will share information about LTO storage failures across archives world and discuss the process of investigating the problem at WGBH by testing different methods of data retrieval from LTO (direct and networked downloads, individual file retrieval and bulk data dump, use of LTO 4 and LTO 6 decks) and using checksum comparisons and file analysis and characterization tools such as ffprobe, mediainfo and exiftool to analyze failed files.  We'll also present whatever results we’ve managed to turn up by the time of Code4Lib!&lt;br /&gt;
&lt;br /&gt;
== PBCore in Action: Three Words, Not Two! ==&lt;br /&gt;
 &lt;br /&gt;
* Casey E. Davis,  casey_davis@wgbh.org, Project Manager, American Archive of Public Broadcasting, WGBH&lt;br /&gt;
* Andrew (Drew) Myers, andrew_myers@wgbh.org, Supervising Developer, WGBH&lt;br /&gt;
&lt;br /&gt;
In 2001, public media representatives developed the PBCore XML schema to establish a common language for managing metadata about their analog and digital audio and video. Since then, PBCore has been adopted by a number of organizations and archivists in the moving image archival community. The schema has also undergone a few revisions, but on more than one occasion it was left orphaned and with little to no support.&lt;br /&gt;
 &lt;br /&gt;
Times have changed. You may have heard the news that PBCore is back in action as part of the American Archive of Public Broadcasting initiative and via the Association of Moving Image Archivists (AMIA) PBCore Advisory Subcommittee. A group of archivists, public media stakeholders, and engaged users have come together to provide necessary support for the standard and to see to its further development.&lt;br /&gt;
 &lt;br /&gt;
At this session, we'll discuss the scope and uses of PBCore in digital preservation and access, report on the progress and goals of the PBCore Advisory Subcommittee, and share how the group (by the time of the conference) will have transformed the XML schema into an RDF ontology, bringing PBCore into the second decade of the 21st century. #PBHardcore&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Collaborating to Avert the Digital Graveyard==&lt;br /&gt;
&lt;br /&gt;
* Harish Nayak, hnayak@library.rochester.edu, University of Rochester Libraries &lt;br /&gt;
* Sean Morris, smorris@library.rochester.edu, University of Rochester Libraries &lt;br /&gt;
&lt;br /&gt;
In 1995, the Robbins Library at the University of Rochester created a digital collection of Arthurian texts, images, and bibliographies. Together with medieval scholars, we recently completed the redesign and development of an interface for this collection. Using FRBR concepts, we re-conceptualized organization and editing workflow from the ground up in a mobile-first Drupal-based project. &lt;br /&gt;
&lt;br /&gt;
In this talk we will describe the project as well as how we utilized the techniques of work practice study and user centered design to maintain engagement with reluctant stakeholders, nontechnical scholars, and VERY meticulous graduate students.  Neither of us have previously presented at a Code4Lib conference.&lt;br /&gt;
&lt;br /&gt;
==Docker? VMs? EC2? Yes! With Packer.io==&lt;br /&gt;
&lt;br /&gt;
* Kevin S. Clarke, ksclarke@gmail.com, Digital Library Programmer, UCLA&lt;br /&gt;
&lt;br /&gt;
There are a lot of exciting ways to deploy a software stack nowadays. Many of our library systems are fully virtualized. Docker is a compelling alternative, and there are also cloud options like Amazon's EC2. This talk will introduce Packer.io, a tool for creating identical machine images for multiple platforms (e.g., Docker, VMWare, VirtualBox, EC2, GCE, OpenStack, et al.) all from a single source configuration.  It works well with Ansible, Chef, Puppet, Salt, and plain old Bash scripts. And, it's designed to be scriptable so that builds can be automated. This presentation will show how easy it is to use Packer.io to bring up a set of related services like Fedora 4, Grinder (for stress testing), and Graphite (for charting metrics). As an added value, all the buzzwords in this proposal will be defined and explained!&lt;br /&gt;
&lt;br /&gt;
== Technology on your Wrist: Cross-platform Smartwatch Development for Libraries ==&lt;br /&gt;
 &lt;br /&gt;
* [[User:sanderson|Steven Carl Anderson]], sanderson@bpl.org, Boston Public Library (no previously accepted prepared talks but have done lightning talks in the past)&lt;br /&gt;
&lt;br /&gt;
I'll be the first to admit: smartwatches are unlikely to completely revolutionize how a library provides online services. But I believe they still represent an opportunity to further enhance existing library services and resources in a unique way.&lt;br /&gt;
&lt;br /&gt;
At the Boston Public Library (BPL), we're in the initial phases of designing a modest smartwatch app to provide notifications for circulation availability and checked-out-material due-date alerts by the end of current year. We're starting small, but we plan to evolve the concept over time as we see what (if any) traction such an application gets with potential users. For example, we plan to explore the possibility of adding &amp;quot;nearest branch to my current location&amp;quot; functionality to this app.&lt;br /&gt;
&lt;br /&gt;
Despite the &amp;quot;development phase&amp;quot; of this application as of this writing, this talk is not being given by a novice. As a technology enthusiast, I've released [http://www.phdgaming.com/smartwatch_projects/ five smartwatch applications] and have had two of those be finalists in a [http://www.phdgaming.com/samsung_challenge/ Samsung sponsored development challenge]. This experience not only will allow for the BPL to avoid many beginner mistakes in its smartwatch app development but also gives a much more complete understanding of the smartwatch development ecosystem.&lt;br /&gt;
&lt;br /&gt;
This talk will explore the following questions:&lt;br /&gt;
&lt;br /&gt;
* What kinds of online library services could potentially be transformed or translated into the smartwatch/wearable domain? What kinds of services are better left alone? These questions are currently being explored and I'll talk about our plans and experiences. Included will be any statistical information from our application launch along with statistics from my personal development.&lt;br /&gt;
&lt;br /&gt;
* How to support all the different operating systems these devices run without painful modifications to your codebase. (There's Tizen that is used by Samsung's Gear 2 and Gear S, Android Wear that is used by most other non-Apple manufacturers, then there is Apple's upcoming smartwatch itself, etc.)&lt;br /&gt;
&lt;br /&gt;
* How to support different screen resolutions on such a small device. From round to rectangular to perfectly square, smartwatches come in all different shapes these days.&lt;br /&gt;
&lt;br /&gt;
* What are the app stores like on these platforms? As I support multiple applications through different distribution networks, a guide to navigating how to distribute one's app is included and I'll reveal how these systems work “behind the curtain.”&lt;br /&gt;
&lt;br /&gt;
* What are common issues and pitfalls to avoid when doing development? Tips on broken APIs and how to cope or optimizing your code will be included.&lt;br /&gt;
&lt;br /&gt;
==Seeing the Forest From the Trees: The Art of Creating Workflows for Digital Projects ==&lt;br /&gt;
 &lt;br /&gt;
* Jen LaBarbera, j.labarbera@neu.edu, NDSR Resident, Northeastern University&lt;br /&gt;
* Joey Heinen, joseph_heinen@harvard.edu, NDSR Resident, Harvard University&lt;br /&gt;
* Rebecca Fraimow, rebecca_fraimow@wgbh.org, NDSR Resident, WGBH&lt;br /&gt;
* Tricia Patterson, triciap@mit.edu, NDSR Resident, MIT&lt;br /&gt;
&lt;br /&gt;
We have to &amp;quot;turn projects into programs&amp;quot; in order to create a solid and sustainable digital preservation initiative...but what the heck does that even mean? What does that look like?&lt;br /&gt;
&lt;br /&gt;
In this talk, members of the inaugural Boston cohort of the National Digital Stewardship Residency will discuss one piece of our digital preservation test kitchen: our stabs at creating digital workflows that will (hopefully) help our institutions turn digital preservation projects into programs. Specifically, we will talk about how difficult it is to create a general and overarching workflow for digital preservation tasks (e.g. ingest into repositories, format migrations, etc.) that incorporates various technical tools while also taking into account the myriad and unending list of possible exceptions or special scenarios. Turning these complicated, specific processes into a simplified and generalized workflow is an art. We haven't necessarily perfected that art yet, but in this talk, we'll share what has worked for us -- and what hasn't. We’ll also touch on the importance of documentation, and achieving that delicate balance of adequately thorough documentation that doesn’t pose the risk of information avalanche. These processes often create more questions than answers, but we'll share the answers that we (and our mentors) have found along the way!&lt;br /&gt;
&lt;br /&gt;
== Annotations as Linked Data with Fedora4 and Triannon (a Real Use Case for RDF!) ==&lt;br /&gt;
&lt;br /&gt;
* Rob Sanderson, azaroth@stanford.edu,  Stanford University Libraries&lt;br /&gt;
* Naomi Dushay, ndushay@stanford.edu,  Stanford University Libraries&lt;br /&gt;
&lt;br /&gt;
Annotations on content resources allow users to contribute knowledge within the digital repository space.  W3C Open Annotation provides a comprehensive model for web annotation on all types of content, using Linked Data as a fundamental framework.  Annotation clients generate instances of this model, typically using a JSON serialization, but need to store that data somewhere using a standard interaction pattern so that best of breed clients, servers, and data can be mixed and matched.&lt;br /&gt;
&lt;br /&gt;
Stanford is using Fedora4 for managing Open Annotations, via a middleware component called Triannon.  Triannon receives the JSON data from the annotation client, and uses the Linked Data Platform API implementation in Fedora4 to create, retrieve, update and delete the constituent resources.  Triannon could be easily modified to use other LDP implementations, or could be modified to work with linked data other than annotations.&lt;br /&gt;
&lt;br /&gt;
== Helping Google (and scholars, researchers, educators, &amp;amp; the public) find archival audio ==&lt;br /&gt;
&lt;br /&gt;
* Anne Wootton, anne@popuparchive.org, Pop Up Archive (www.popuparchive.org)&lt;br /&gt;
&lt;br /&gt;
Culturally significant digital audio collections are hard to discover on the web. There are major barriers keeping this valuable media from scholars, researchers, and the general public:&lt;br /&gt;
&lt;br /&gt;
Audio is opaque: you can’t picture sound, or skim the words in a recording. &lt;br /&gt;
Audio is hard to share: there’s no text to interact with. &lt;br /&gt;
Audio is not text: but since text is the medium of the web, there’s no path for audiences to find content-rich audio.&lt;br /&gt;
Audio metadata is inconsistent and incomplete.&lt;br /&gt;
&lt;br /&gt;
At Pop Up Archive, we're helping solve this problem making the spoken word searchable. We began as a UC-Berkeley School of Information Master's thesis to provide better access to recorded sound for audio producers, journalists, and historians. Today, Pop Up Archive processes thousands of hours of sound from all over the web to create automatic, timestamped transcripts and keywords, working with media companies and institutions like NPR, KQED, HuffPost Live, Princeton, and Stanford. We're building collections of sound from journalists, media organizations, and oral history archives from around the world. Pop Up Archive is supported by the John S. and James L. Knight Foundation, the National Endowment for the Humanities, and 500 Startups.&lt;br /&gt;
&lt;br /&gt;
== Digital Content Integrated with ILS Data for User Discovery:  Lessons Learned ==&lt;br /&gt;
&lt;br /&gt;
* Naomi Dushay, ndushay@stanford.edu,  Stanford University Libraries&lt;br /&gt;
* Laney McGlohon, laneymcg@stanford.edu,  Stanford University Libraries&lt;br /&gt;
&lt;br /&gt;
So you want to expose your digital content in your discovery interface, integrated with the data from your ILS?  How do you make the best information user searchable?  How do you present complete, up to date search results with a minimum of duplicate entries?&lt;br /&gt;
&lt;br /&gt;
At Stanford, we have these cases and more:&lt;br /&gt;
* digital content with no metadata in ILS&lt;br /&gt;
* digital content for metadata in ILS&lt;br /&gt;
* digital content with its own metadata derived from ILS metadata.&lt;br /&gt;
&lt;br /&gt;
We will describe our efforts to accommodate multiple updatable metadata sources for materials in the ILS and our Digital Object Repository while presenting users with reduced duplication in SearchWorks.  Included will be some failures, some successes, and an honest assessment of where we are now.&lt;br /&gt;
&lt;br /&gt;
== Show All the Things: Kanban for Libraries == &lt;br /&gt;
&lt;br /&gt;
* Mike Hagedon, mhagedon@email.arizona.edu, University of Arizona Libraries&lt;br /&gt;
&lt;br /&gt;
The web developers at the University of Arizona Libraries had a problem: we were working on a major website rebuild project with no clear way to prioritize it against our other work. We knew we wanted to follow Agile principles and initially chose Scrum to organize and communicate about our work. But we found that certain core pieces of Scrum did not work for our team. Then we discovered Kanban, an Agile meta-process for organizing work (team or individual) that treats the work more as a flow than as a series of fixed time boxes. I’ll be talking about our journey toward finding a process that works for our team and how we’ve applied the principles of Kanban to better get our work done. Specifically, we talk about principles like how to visualize all your work, how to limit how much you’re doing (to get more done!), and how to optimize the flow of your work.&lt;/div&gt;</summary>
		<author><name>Michaelhagedon</name></author>	</entry>

	<entry>
		<id>https://wiki.code4lib.org/index.php?title=2014_Breakout_I_(Tuesday)&amp;diff=40882</id>
		<title>2014 Breakout I (Tuesday)</title>
		<link rel="alternate" type="text/html" href="https://wiki.code4lib.org/index.php?title=2014_Breakout_I_(Tuesday)&amp;diff=40882"/>
				<updated>2014-03-25T19:52:15Z</updated>
		
		<summary type="html">&lt;p&gt;Michaelhagedon: /* Tools for Instruction */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Islandora==&lt;br /&gt;
&lt;br /&gt;
==Relevance Search &amp;amp; Ranking==&lt;br /&gt;
&lt;br /&gt;
==Metadata Harvesting Normalization &amp;amp; Enrichement @ Scale==&lt;br /&gt;
&lt;br /&gt;
* Seeing same tools that were brought forth and then not hearing anything else about them.&lt;br /&gt;
* Conversations with folk about normalizing and enriching metadata.&lt;br /&gt;
* Lets hear about tools and things they are working on.&lt;br /&gt;
* Particular processes.&lt;br /&gt;
&lt;br /&gt;
==VuFind Update==&lt;br /&gt;
&lt;br /&gt;
==Telecommunicating Support Group==&lt;br /&gt;
&lt;br /&gt;
@mjgiarlo&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==BIBFRAME==&lt;br /&gt;
&lt;br /&gt;
==Archivespace==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==User Experience==&lt;br /&gt;
&lt;br /&gt;
''Blame @erinrwhite for cruddy notes.''&lt;br /&gt;
&lt;br /&gt;
Coral facilitates. Her question for the group is, how can we overcome the focus on library experience and focus instead on user experience?&lt;br /&gt;
&lt;br /&gt;
=== what's your problem?===&lt;br /&gt;
&lt;br /&gt;
Introductions. Common problems:&lt;br /&gt;
&lt;br /&gt;
* &amp;quot;Just make a web page!&amp;quot;&lt;br /&gt;
* &amp;quot;I can figure this weird arcane and overly complicated thing out. Why can't anyone else?&amp;quot;&lt;br /&gt;
* Convincing stakeholders that design isn't print design, doesn't need to be the same for everyone and isn't static.&lt;br /&gt;
* Redesign by committee, have mercy!&lt;br /&gt;
* Getting ready for a redesign&lt;br /&gt;
* Publishing/learning about UX research. Who's going through IRB, who's publishing this stuff?&lt;br /&gt;
* Devaluation of UX work in library, funding or mandate&lt;br /&gt;
* UX not being built into organizational policies etc.&lt;br /&gt;
* How can we scale up UX above and beyond one-project research? Expanding to include more projects beyond the website?&lt;br /&gt;
* How can we convince our organizations to not recreate the org chart with the website?&lt;br /&gt;
* Trying to create a UX position that is beyond web librarian&lt;br /&gt;
* Not just testing sites with librarians (!)&lt;br /&gt;
* What about user experience for back-of-house software?&lt;br /&gt;
* I'm a web team of one. Help?&lt;br /&gt;
* &amp;quot;Put the MARC view back in the catalog!&amp;quot;&lt;br /&gt;
* Do we really need to default to advanced search? Battling the exceptions vs the average user?&lt;br /&gt;
* Taking a guerrilla approach to UX research&lt;br /&gt;
* Not a lot of staff in digital area&lt;br /&gt;
* Moving from a culture of complaint to a culture of...fixing&lt;br /&gt;
&lt;br /&gt;
A couple folks here working in organizations that have UX and assessment built into the culture. Thanks in advance for your knowledge, y'all!&lt;br /&gt;
&lt;br /&gt;
=== themes===&lt;br /&gt;
&lt;br /&gt;
We broke into sub-groups:&lt;br /&gt;
&lt;br /&gt;
* Making time&lt;br /&gt;
* Changing culture&lt;br /&gt;
* Beyond the website&lt;br /&gt;
&lt;br /&gt;
==Worldcat Search API==&lt;br /&gt;
&lt;br /&gt;
Help beta test early release of the new WorldCat Search API&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Spotlight: Exhibits, Curated Collections and Blacklight==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Tools for Instruction==&lt;br /&gt;
&lt;br /&gt;
LMS Integration, Guide on the side, LibGuides, etc.&lt;br /&gt;
&lt;br /&gt;
Participants:&lt;br /&gt;
* Mike Hagedon; interests: [http://code.library.arizona.edu/gots Guide on the Side], LTI, LMS integration, subject/course guides&lt;br /&gt;
*&lt;br /&gt;
*&lt;br /&gt;
*&lt;/div&gt;</summary>
		<author><name>Michaelhagedon</name></author>	</entry>

	<entry>
		<id>https://wiki.code4lib.org/index.php?title=2014_preconference_proposals&amp;diff=40588</id>
		<title>2014 preconference proposals</title>
		<link rel="alternate" type="text/html" href="https://wiki.code4lib.org/index.php?title=2014_preconference_proposals&amp;diff=40588"/>
				<updated>2014-03-09T21:51:43Z</updated>
		
		<summary type="html">&lt;p&gt;Michaelhagedon: /* Managing Projects: Or I'm in charge, now what? (aka PM4Lib) */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= PROPOSALS ARE CLOSED : PLEASE DO NOT ADD NEW PRECONFERENCES TO THIS PAGE =&lt;br /&gt;
&lt;br /&gt;
Proposals were accepted through December 6th, 2013.&lt;br /&gt;
&lt;br /&gt;
It would be really, super duper helpful if folks who think they might want to attend a pre-conference could indicate interest by adding your name to a session below. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Note===&lt;br /&gt;
Attendance at a pre-conference will require a small fee ''due at the time of conference registration&amp;quot;.&lt;br /&gt;
 &lt;br /&gt;
Although this was specified in the email announcements relating to pre-conferences, it was not added to this page until December 2nd.  I (Adam C.) apologize for the omission and I hope this will not cause any &amp;quot;sticker shock.&amp;quot;  Putting your name on this list does not incur any obligation on your part, but we'll be using it to gauge interest and work out room assignments.&lt;br /&gt;
&lt;br /&gt;
Please put your pre-conference on the list in the following format:&lt;br /&gt;
&lt;br /&gt;
=Code4Lib 2014 Pre-Conference Proposals=&lt;br /&gt;
&lt;br /&gt;
===Drupal4lib Sub-con Barcamp===&lt;br /&gt;
=====Full Day=====&lt;br /&gt;
&lt;br /&gt;
* Contact [[User:highermath|Cary Gordon]], cgordon@chillco.com&lt;br /&gt;
&lt;br /&gt;
This will be a full day of self-selected barcamp style sessions. Anyone who wants to present can write down the topic on an index card and, after the keynote, we will vote to choose what we want to see. Attendees can also pick a topic and attempt to talk someone else into presenting on it.&lt;br /&gt;
&lt;br /&gt;
This event is open to the library community. There will be a nominal fee (t/b/d) for non-Code4LibCon attendees (subject to organizer approval).&lt;br /&gt;
&lt;br /&gt;
[[resources to help you learn drupal]]&lt;br /&gt;
&lt;br /&gt;
====Interested in Attending:====&lt;br /&gt;
&lt;br /&gt;
=====All Day=====&lt;br /&gt;
&lt;br /&gt;
* Renna Tuten &lt;br /&gt;
&lt;br /&gt;
=====Morning=====&lt;br /&gt;
&lt;br /&gt;
* Kevin Reiss&lt;br /&gt;
* Charlie Morris (NCSU) - glad to see this again this year!&lt;br /&gt;
* Paula Gray-Overtoom&lt;br /&gt;
* Laurie Lee Moses&lt;br /&gt;
&lt;br /&gt;
=====Afternoon=====&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
===Open Refine Hackfest===&lt;br /&gt;
'''&amp;quot;Half-Day&amp;quot;'''&lt;br /&gt;
* Contact [[User:bibliotechy|Chad Nelson]], chadbnelson@gmail.com&lt;br /&gt;
&lt;br /&gt;
[http://openrefine.org/ Open Refine] is a powerful open source tool for wrangling messy data that can also be used to help in the creation of Linked Data via the [https://github.com/OpenRefine/OpenRefine/wiki/Reconciliation-Service-API Reconciliation API]. It is possible to write reconciliation services against API's, like the [http://iphylo.blogspot.com/2013/04/reconciling-author-names-using-open.html VIAF service] or, even just against local authority files for helping maintain authority control&lt;br /&gt;
&lt;br /&gt;
The session would first introduce Open Refine, then walk through building a reconciliation service, and the rest of the session would be a hackfest where we build new reconciliation services for public consumption or local use. &lt;br /&gt;
&lt;br /&gt;
''Interested in Attending''&lt;br /&gt;
&lt;br /&gt;
If you would be interested in attending, please indicate by adding your name (but not email address, etc.) here&lt;br /&gt;
&amp;lt;ol&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Adam Constabaris&lt;br /&gt;
&amp;lt;li&amp;gt;Jason Stirnaman&lt;br /&gt;
&amp;lt;li&amp;gt;Joshua Gomez&lt;br /&gt;
&amp;lt;li&amp;gt;Sam Kome&lt;br /&gt;
&amp;lt;li&amp;gt;Mike Beccaria&lt;br /&gt;
&amp;lt;li&amp;gt;Angela Zoss&lt;br /&gt;
&amp;lt;li&amp;gt;A. Soroka&lt;br /&gt;
&amp;lt;li&amp;gt; Matt Zumwalt&lt;br /&gt;
&amp;lt;/ol&amp;gt;&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
===Responsive Design Hackfest===&lt;br /&gt;
'''&amp;quot;Half-Day [Afternoon]&amp;quot;''' &lt;br /&gt;
* Contact Jim Hahn, University of Illinois, jimfhahn@gmail.com&lt;br /&gt;
* Contact David Ward, University of Illinois, dh-ward@illinois.edu&lt;br /&gt;
&lt;br /&gt;
This structured hackfest will give attendees an opportunity to explore methods to create responsive mobile apps using the Bootstrap framework [http://getbootstrap.com/]and a set of APIs for accessing library data. We will start with an API template for creating space-based mobile tools that draw from work coming out of the IMLS funded Student/Library Collaborative grant [http://www.library.illinois.edu/nlg_student_apps]. Available APIs will include a room reservation template and codebase for implementing at any campus and the set of Minrva catalog APIs generating JSONP [http://minrvaproject.org/services.php]. &lt;br /&gt;
&lt;br /&gt;
Hosts will give a brief report of a study on student hacking projects and interests in mobile library apps that are the basis for the templates utilized in this Hackathon. By the end of the pre-conference attendees will have a sample responsive mobile web app in Bootstrap 3 to bring back to their campus which can plug into their site-based content.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
''Interested in Attending''&lt;br /&gt;
&lt;br /&gt;
If you would be interested in attending, please indicate by adding your name (but not email address, etc.) here&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
=== Intro to Blacklight ===&lt;br /&gt;
'''&amp;quot;Half-Day [Morning]&amp;quot;''' &lt;br /&gt;
* Contact: Chris Beer, Stanford University, cabeer@stanford.edu&lt;br /&gt;
* TA: Bess Sadler, Stanford University, bess@stanford.edu&lt;br /&gt;
&lt;br /&gt;
This session will be walk-through of the architecture of Blacklight, the community, and an introduction to building a Blacklight-based application. Each participant will have the opportunity to build a simple Blacklight application, and make basic customizations, while using a test-driven approach.&lt;br /&gt;
&lt;br /&gt;
For more information about Blacklight see our wiki ( http://projectblacklight.org/ ) and our GitHub repo ( https://github.com/projectblacklight/blacklight ). We will also send out some brief instructions beforehand for those that would like to setup their environments to follow along and get Blacklight up and running on their local machines.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
''Interested in Attending''&lt;br /&gt;
&lt;br /&gt;
If you would be interested in attending, please indicate by adding your name (but not email address, etc.) here&lt;br /&gt;
&lt;br /&gt;
# Megan Kudzia&lt;br /&gt;
# Bret Davidson&lt;br /&gt;
# Coral Sheldon-Hess&lt;br /&gt;
# Cory Lown&lt;br /&gt;
# Emily Daly&lt;br /&gt;
# Angela Zoss&lt;br /&gt;
# Sean Aery&lt;br /&gt;
# Francis Kayiwa&lt;br /&gt;
# Heidi Frank&lt;br /&gt;
# Junior Tidal&lt;br /&gt;
# Ian Chan&lt;br /&gt;
# Ted Lawless&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
===Blacklight Hackfest===&lt;br /&gt;
'''&amp;quot;Half-Day [Afternoon]&amp;quot;''' &lt;br /&gt;
* Contact Chris Beer, Stanford University, cabeer@stanford.edu&lt;br /&gt;
&lt;br /&gt;
This afternoon hackfest is both a follow-on to the Intro to Blacklight morning session to continue building Blacklight-based applications, and also an opportunity for existing Blacklight contributors and members of the Blacklight community to exchange common patterns and approaches into reusable gems or incorporate customizations into Blacklight itself.&lt;br /&gt;
&lt;br /&gt;
For more information about Blacklight see our wiki ( http://projectblacklight.org/ ) and our GitHub repo ( https://github.com/projectblacklight/blacklight ).&lt;br /&gt;
&lt;br /&gt;
''Interested in Attending''&lt;br /&gt;
&lt;br /&gt;
If you would be interested in attending, please indicate by adding your name (but not email address, etc.) here&lt;br /&gt;
&lt;br /&gt;
# Shaun Ellis&lt;br /&gt;
# Kevin Reiss&lt;br /&gt;
# Megan Kudzia&lt;br /&gt;
# Erik Hatcher&lt;br /&gt;
# Emily Daly&lt;br /&gt;
# Laurie Lee Moses&lt;br /&gt;
# Francis Kayiwa&lt;br /&gt;
# Ted Lawless&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
===RailsBridge: Intro to programming in Ruby on Rails===&lt;br /&gt;
'''&amp;quot;Half-Day&amp;quot; [morning]'''&lt;br /&gt;
* Contact Justin Coyne, Data Curation Experts, justin@curationexperts.com&lt;br /&gt;
&lt;br /&gt;
Interested in learning how to program? Want to build your own web application? Never written a line of code before and are a little intimidated? There's no need to be! RailsBridge is a friendly place to get together and learn how to write some code.&lt;br /&gt;
&lt;br /&gt;
RailsBridge is a great workshop that opens the doors to projects like Blacklight and Hydra.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
''Interested in Attending''&lt;br /&gt;
&lt;br /&gt;
If you would be interested in attending, please indicate by adding your name (but not email address, etc.) here&lt;br /&gt;
&lt;br /&gt;
1. Ayla Stein&lt;br /&gt;
&lt;br /&gt;
2. Heidi Dowding&lt;br /&gt;
&lt;br /&gt;
3. Caitlin Christian-Lamb&lt;br /&gt;
&lt;br /&gt;
4. Scott Bacon&lt;br /&gt;
&lt;br /&gt;
5. [[User:RileyChilds | Riley Childs]]&lt;br /&gt;
&lt;br /&gt;
6. Carolina Garcia&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
===Managing Projects: Or I'm in charge, now what? (aka PM4Lib)===&lt;br /&gt;
'''Full-Day'''&lt;br /&gt;
&lt;br /&gt;
Contact: &lt;br /&gt;
* [[User:rosy1280|Rosalyn Metz]], rosalynmetz@gmail.com&lt;br /&gt;
* [[User:yoosebj|Becky Yoose]], yoosebec@grinnell.edu&lt;br /&gt;
&lt;br /&gt;
This will be a full day session on project management.  We'll cover&lt;br /&gt;
* '''Kicking off the Project''' -- project lifecycle, project constraints, scoping/goals, stakeholders, assessment&lt;br /&gt;
* '''Planning the Project''' -- project charters, work breakdown structures, responsibilities, estimating time, creating budgets&lt;br /&gt;
* '''Executing the Project''' -- status meeting, status reports, issue management&lt;br /&gt;
* '''Finishing the Project''' -- achieving the goal, post mortems, project v. product&lt;br /&gt;
This is a revival of rosy1280's LITA Forum Pre-Conference, but better (because iteration is good) and adapted to c4lib types.&lt;br /&gt;
&lt;br /&gt;
''Interested in Attending''&lt;br /&gt;
&lt;br /&gt;
If you would be interested in attending, please indicate by adding your name (but not email address, etc.) here&lt;br /&gt;
&lt;br /&gt;
# Robin Dean&lt;br /&gt;
# Erin White&lt;br /&gt;
# Andrew Darby&lt;br /&gt;
# Sam Kome&lt;br /&gt;
# Ryan Scherle&lt;br /&gt;
# Will Shaw&lt;br /&gt;
# Liz Milewicz&lt;br /&gt;
# Cynthia &amp;quot;Arty&amp;quot; Ng&lt;br /&gt;
# Laurie Lee Moses (if I don't do the Hackfest for Blacklight)&lt;br /&gt;
# Ranti Junus&lt;br /&gt;
# Bohyun Kim (Afternoon)&lt;br /&gt;
# Mike Hagedon&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
===Fail4Lib 2014===&lt;br /&gt;
'''Half Day [TBD, probably afternoon]'''&lt;br /&gt;
&lt;br /&gt;
Contacts: &lt;br /&gt;
* Andreas Orphanides, akorphan (at) ncsu.edu&lt;br /&gt;
* Jason Casden, jmcasden (at) ncsu.edu&lt;br /&gt;
&lt;br /&gt;
The task of design (and the work that we do as library coders) is intimately tied to failure. Failures, both big and small, motivate us to create and improve. Failures are also occasionally the result of our work. Understanding and embracing failure, encouraging enlightened risk-taking, and seeking out opportunities to fail and learn are essential to success in our field. At Fail4Lib, we'll talk about our own experiences with projects gone wrong, explore some famous design failures in the real world, and talk about how we can come to terms with the reality of failure, to make it part of our creative process -- rather than something to be feared.&lt;br /&gt;
&lt;br /&gt;
The schedule may include the following:&lt;br /&gt;
&lt;br /&gt;
* Case studies. We'll look at some classic failures from the literature: What can we learn from the mistakes of others?&lt;br /&gt;
* Confessionals, for those willing to share. Talk about your own experiences with rough starts, labor pains, and doomed projects in your own work: What can we learn from our own (and each others') failures?&lt;br /&gt;
* Group therapy. Let's talk about how to deal with risk management, failed projects, experimental endeavors, and more: How can we make ourselves, our colleagues, and our organizations more fault tolerant? How do we make sure we fail as productively as possible?&lt;br /&gt;
&lt;br /&gt;
''Interested in attending''&lt;br /&gt;
&lt;br /&gt;
If you would be interested in attending, please indicate by adding your name (but not email address, etc.) here&lt;br /&gt;
&lt;br /&gt;
#Bret Davidson&lt;br /&gt;
#Mike Graves&lt;br /&gt;
#Jason Stirnaman&lt;br /&gt;
#Julia Bauder&lt;br /&gt;
#Linda Ballinger&lt;br /&gt;
#Scott Hanrath&lt;br /&gt;
#Caitlin Christian-Lamb&lt;br /&gt;
#Ian Walls&lt;br /&gt;
#Scott Bacon &lt;br /&gt;
#mx matienzo&lt;br /&gt;
#Chris Sharp&lt;br /&gt;
#Junior Tidal&lt;br /&gt;
#Julie Rudder&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
===CLLAM @ code4lib===&lt;br /&gt;
'''(Computational Linguistics for Libraries, Archives and Museums)'''&lt;br /&gt;
&lt;br /&gt;
'''Full Day'''&lt;br /&gt;
&lt;br /&gt;
Contacts: &lt;br /&gt;
* Douglas W. Oard (primary), oard (at) umd.edu &lt;br /&gt;
* Corey Harper, corey (dot) harper (at) nyu.edu&lt;br /&gt;
* Robert Sanderson, azaroth42 (at) gmail.com &lt;br /&gt;
* Robert Warren, rwarren (at) math.carleton.ca&lt;br /&gt;
&lt;br /&gt;
We will hack at the intersection of diverse content from Libraries, Archives and Museums and bleeding edge tools from computational linguistics for slicing and dicing that content. Did you just acquire the email archives of a startup company? Maybe you can automatically build an org chart. Have you got metadata in a slew of languages? Perhaps you can search it all using one query. Is name authority control for e-resources getting too costly? Let’s see if entity linking techniques can help. These are just a few teasers. &lt;br /&gt;
&lt;br /&gt;
There’ll be plenty of content and tools supplied, but please bring your own [data] too -- you’ll hack with it in new ways throughout the day. We’ll get started with some lightning talks on what we’ve brought,then we’ll break up into groups to experiment and work on the ideas that appeal. Three guaranteed outcomes: you’ll walk away with new ideas, new tools, and new people you’ll have met.&lt;br /&gt;
&lt;br /&gt;
''Interested in attending''&lt;br /&gt;
&lt;br /&gt;
If you would be interested in attending, please indicate by adding your name (but not email address, etc.) here&lt;br /&gt;
&lt;br /&gt;
# Devon Smith&lt;br /&gt;
# Kevin S. Clarke&lt;br /&gt;
# Jason Stirnaman&lt;br /&gt;
# Joshua Gomez&lt;br /&gt;
# Carolina Garcia&lt;br /&gt;
# Tom Burton-West&lt;br /&gt;
# Dan Scott&lt;br /&gt;
# Devin Higgins&lt;br /&gt;
# Mark Breedlove&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
=== GeoHydra: Managing geospatial content ===&lt;br /&gt;
&lt;br /&gt;
'''Half-day [Afternoon]'''&lt;br /&gt;
&lt;br /&gt;
* Contact: Darren Hardy, Stanford University, drh@stanford.edu&lt;br /&gt;
* Moderator: Bess Sadler, Stanford University, bess@stanford.edu&lt;br /&gt;
&lt;br /&gt;
Do you have digitized maps, GIS datasets like Shapefiles, aerial photography,&lt;br /&gt;
etc., all of which you want to integrate into your digital repository? In this&lt;br /&gt;
workshop, we will discuss how Hydra can provide discovery, delivery, and&lt;br /&gt;
management services for geospatial assets, as well as solicit questions about&lt;br /&gt;
your own GIS projects. We aim to help answer the following questions you might have about putting geospatial data into your Hydra-based digital library:&lt;br /&gt;
&lt;br /&gt;
* What are the types of geospatial data?&lt;br /&gt;
* How to dive into Hydra?&lt;br /&gt;
* How to model geospatial holdings with Hydra?&lt;br /&gt;
* How to discover and view geospatial data?&lt;br /&gt;
* How to build a geospatial data infrastructure?&lt;br /&gt;
* What are common approaches and problems?&lt;br /&gt;
&lt;br /&gt;
''Interested in Attending''&lt;br /&gt;
&lt;br /&gt;
If you would be interested in attending, please indicate by adding your name (but not email address, etc.) here&lt;br /&gt;
&lt;br /&gt;
# Esmé Cowles&lt;br /&gt;
# David Drexler&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
===Technology, Librarianship, and Gender: Moving the conversation forward===&lt;br /&gt;
'''Full Day'''&lt;br /&gt;
&lt;br /&gt;
Contact: Lisa Rabey lisa @ biblyotheke dot net | [http://twitter.com/pnkrcklibrarian @pnkrcklibrarian]&lt;br /&gt;
&lt;br /&gt;
'''Description'''&lt;br /&gt;
&lt;br /&gt;
Librarianship is largely made up of women, yet women are significantly underrepresented in tech positions, on any level, within libraries themselves. Why? What are we doing to encourage women to become more involved in STEM within librarianship? What kind of message are we sending when library technology keynotes remain almost resolutely male? How are we changing the face of technology, not only within libraries, but with the field itself? How are we training our staff and colleagues in the areas of fairness and removal of bias? Our vendors?&lt;br /&gt;
&lt;br /&gt;
Lots of tough questions.&lt;br /&gt;
&lt;br /&gt;
While the conversation has been going on via various blogs and articles within the last few years, it was given a public face at [http://infotoday.com/il2013/day.asp?day=Monday#session_D105 Internet Librarian 2013] where a panel of 7 (four women, three men) gave personal experiences on the above and then opened up the conversation to the audience. As eye opening and enriching the conversation was, a 45 minute panel was not enough. One thing remains clear: We need to keep the conversation moving forward and start making some radical changes in the way we think, act, and how we need to harness this to start making real changes within librarianship itself.&lt;br /&gt;
&lt;br /&gt;
Topics to include:  Fairness, bias, impostor syndrome, code of conducts, sexual harassment, training opportunities, support systems,  mentoring, ally support, and more&lt;br /&gt;
&lt;br /&gt;
Those attending should expect: Begin with opening up the conversation of experiences and talking about what is most needed, spending remaining time putting together live, usable solutions to start implementing as well as pushing the conversation forward at local levels&lt;br /&gt;
&lt;br /&gt;
''Interested in Attending''&lt;br /&gt;
&lt;br /&gt;
If you would be interested in attending, please indicate by adding your name (but not email address, etc.) here&lt;br /&gt;
&lt;br /&gt;
=====All Day=====&lt;br /&gt;
1. Kate Kosturski&lt;br /&gt;
&lt;br /&gt;
2. Valerie Aurora&lt;br /&gt;
&lt;br /&gt;
3. Declan Fleming (I'd be good with a half day too)&lt;br /&gt;
&lt;br /&gt;
4. mx matienzo (likewise ok w/ half day)&lt;br /&gt;
&lt;br /&gt;
5. Ginny Boyer (I'd be good with a half day too)&lt;br /&gt;
&lt;br /&gt;
=====Morning=====&lt;br /&gt;
1. Shaun Ellis&lt;br /&gt;
&lt;br /&gt;
2. Jason Casden&lt;br /&gt;
&lt;br /&gt;
3. Bohyun Kim&lt;br /&gt;
&lt;br /&gt;
=====Afternoon=====&lt;br /&gt;
1. Ayla Stein&lt;br /&gt;
&lt;br /&gt;
2. Heidi Dowding&lt;br /&gt;
&lt;br /&gt;
3. Coral Sheldon-Hess&lt;br /&gt;
&lt;br /&gt;
4. Cory Lown&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
===FileAnalyzer: Rapid Development of File Manipulation Tasks===&lt;br /&gt;
'''&amp;quot;Half-Day&amp;quot; [morning]'''&lt;br /&gt;
* Contact Terry Brady, twb27@georgetown.edu&lt;br /&gt;
&lt;br /&gt;
The FileAnalyzer (http://georgetown-university-libraries.github.io/File-Analyzer/) is an application designed to solve a number of library automation challenges:&lt;br /&gt;
&lt;br /&gt;
* validating digitized and reformatted files&lt;br /&gt;
* validating vendor statistics for counter compliance&lt;br /&gt;
* preparing collections of digital files for archiving and ingest&lt;br /&gt;
* manipulating ILS import and export files&lt;br /&gt;
&lt;br /&gt;
The File Analyzer application was used by the US National Archives to validate 3.5 million digitized images from the 1940 Census. After implementing a customized ingest workflow within the File Analyzer, the Georgetown University Libraries was able to process an ingest backlog of over a thousand files of digital resources into DigitalGeorgetown, the Libraries’ Digital Collections and Institutional Repository platform. Georgetown is currently developing customized workflows that integrate Apache Tika, BagIt, and Marc conversion utilities.&lt;br /&gt;
&lt;br /&gt;
The File Analyzer is a desktop application with a powerful framework for implementing customized file validation and transformation rules. As new rules are deployed, they are presented to users within a user interface that is easy (and powerful) to use.&lt;br /&gt;
&lt;br /&gt;
The first half of this session will be targeted to potential users and developers.  The second half of the session will be targeted towards developers who are interested in developing custom rules for the application.&lt;br /&gt;
&lt;br /&gt;
''Session Overview''&lt;br /&gt;
* Overview of the application&lt;br /&gt;
* Running sample file tests/transformations through the application&lt;br /&gt;
* Compiling and building the application&lt;br /&gt;
* Coding a custom file processing task&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
''Interested in Attending''&lt;br /&gt;
&lt;br /&gt;
If you would be interested in attending, please indicate by adding your name (but not email address, etc.) here&lt;br /&gt;
&lt;br /&gt;
# Michael Doran&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
===Collecting social media data with Social Feed Manager===&lt;br /&gt;
'''Half-Day [Morning]'''&lt;br /&gt;
&lt;br /&gt;
Contacts: &lt;br /&gt;
* Dan Chudnov, GW Libraries, dchud (at) gwu.edu&lt;br /&gt;
* Dan Kerchner, GW Libraries, kerchner (at) gwu.edu&lt;br /&gt;
* Laura Wrubel, GW Libraries, lwrubel (at) gwu.edu&lt;br /&gt;
&lt;br /&gt;
Social media data is a popular material for research and a new format for building collections.  What does it take to collect meaningfully from Twitter, Tumblr, YouTube, Weibo, Facebook, and other sites?  We will:&lt;br /&gt;
* Introduce options for collections, including both high- and low-end commercial offerings. Discuss what it means to collect these resources, covering boundaries, policies, and workflows required to develop a social media collection program in your institution.&lt;br /&gt;
* Explore the Twitter API in depth, with hands-on opportunities for those w/laptops and others who want to team up w/them&lt;br /&gt;
* Help you get started using the free [http://gwu-libraries.github.io/social-feed-manager Social Feed Manager] (SFM) app we're developing at GW to create your first collections. We’ll demo its use and demo a clean install (those w/environments can follow along)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
''Interested in Attending''&lt;br /&gt;
&lt;br /&gt;
If you would be interested in attending, please indicate by adding your name (but not email address, etc.) here&lt;br /&gt;
&lt;br /&gt;
# Declan Fleming&lt;br /&gt;
# Esmé Cowles&lt;br /&gt;
# Jason Stirnaman&lt;br /&gt;
# Liz Milewicz&lt;br /&gt;
# Ranti Junus&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
=== Intro to Git ===&lt;br /&gt;
'''&amp;quot;Half-Day [tbd - probably afternoon]&amp;quot;''' &lt;br /&gt;
* Contact: Erin Fahy, Stanford University, efahy at stanford.edu&lt;br /&gt;
* TA: Michael Klein, Northwestern University, michael.klein at northwestern.edu&lt;br /&gt;
&lt;br /&gt;
This session will cover the fundamentals of git by discussing/going through (time allowing):&lt;br /&gt;
* what is a distributed version control system&lt;br /&gt;
* what is git and github&lt;br /&gt;
* initializing a repo on a remote server/github&lt;br /&gt;
* cloning an existing repo&lt;br /&gt;
* creating a branch&lt;br /&gt;
* contributing code to a repo&lt;br /&gt;
* how to handle merge conflicts&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
''Interested in Attending''&lt;br /&gt;
&lt;br /&gt;
If you would be interested in attending, please indicate by adding your name (but not email address, etc.) here&lt;br /&gt;
&lt;br /&gt;
# Sam Kome&lt;br /&gt;
# Paula Gray-Overtoom&lt;br /&gt;
# Liz Milewicz&lt;br /&gt;
# Michael Doran&lt;br /&gt;
# Caitlin Christian-Lamb&lt;br /&gt;
# [[User:RileyChilds|Riley Childs]]&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
=== Archival discovery and use ===&lt;br /&gt;
'''Full Day''' &lt;br /&gt;
&lt;br /&gt;
Contacts: &lt;br /&gt;
* Tim Shearer, UNC Chapel Hill, tshearer at email.unc.edu, &lt;br /&gt;
* Will Sexton, Duke, will.sexton at duke.edu&lt;br /&gt;
&lt;br /&gt;
This is a full day pre-conference about archival collections and will cover the intersections of archives, workflows, technologies, discovery, and use.&lt;br /&gt;
&lt;br /&gt;
Morning agenda: focused talks around (but not limited to) issues such as:&lt;br /&gt;
* Crowd-sourcing description to enhance collecitons&lt;br /&gt;
* Linked data and authority&lt;br /&gt;
* Mass digitization and sustainable workflows&lt;br /&gt;
* Digitized objects in context (images and other objects in finding aids)&lt;br /&gt;
* Too many cooks in the kitchen: versioning&lt;br /&gt;
* Global-, intra-, and inter- discovery of archival materials via finding aids &lt;br /&gt;
* and more...&lt;br /&gt;
&lt;br /&gt;
Afternoon agenda:  Focused talks around specific tools followed by general discussion, connections, opportunities, aspirations, and planning.&lt;br /&gt;
&lt;br /&gt;
Tool examples:&lt;br /&gt;
* Archivespace&lt;br /&gt;
* STEADy&lt;br /&gt;
* &amp;quot;RAMP&amp;quot; (Remixing Archival Metadata Project)&lt;br /&gt;
* OpenRefine&lt;br /&gt;
* Aeon&lt;br /&gt;
&lt;br /&gt;
''Interested in Attending''&lt;br /&gt;
&lt;br /&gt;
If you would be interested in attending, please indicate by adding your name (but not email address, etc.) here&lt;br /&gt;
&lt;br /&gt;
Morning:&lt;br /&gt;
* Julia Bauder&lt;br /&gt;
&lt;br /&gt;
Afternoon:&lt;br /&gt;
* your name&lt;br /&gt;
&lt;br /&gt;
All day:&lt;br /&gt;
&lt;br /&gt;
# Josh Wilson&lt;br /&gt;
# Sam Kome&lt;br /&gt;
# Linda Ballinger&lt;br /&gt;
# Caitlin Christian-Lamb&lt;br /&gt;
# Laurie Lee Moses (seriously hard to decide here!)&lt;br /&gt;
# David Bass&lt;br /&gt;
# John Rees&lt;br /&gt;
# Lynn Eaton&lt;br /&gt;
# Hillel Arnold&lt;br /&gt;
# Susan Ivey&lt;br /&gt;
# Kristen Merryman&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
===AV Content Slam===&lt;br /&gt;
'''Half-Day [morning]'''&lt;br /&gt;
Contacts:&lt;br /&gt;
* Kara Van Malssen, kara (at) avpreserve.com&lt;br /&gt;
* Lauren Sorenson, laurens (at) bavc.org&lt;br /&gt;
* Steven Villereal , villereal (at) gmail.com&lt;br /&gt;
A morning BarCamp/unconference for practitioners and coders who work with audiovisual content. The agenda will be attendee-driven, with a focus on sharing, synthesizing, and improving workflow strategies and documentation for software-based approaches to wrangling and providing access to audio and video content.&lt;br /&gt;
Possible topics of discussion might include:&lt;br /&gt;
* Use of format id and characterization/metadata extraction tools for AV&lt;br /&gt;
* Creating and using time-based metadata&lt;br /&gt;
* Managing (moving, fixity checking, etc) massive files (like uncompressed video)&lt;br /&gt;
For a better idea of the topics and concerns that have informed some past AV-themed events, check out the event wikis for [http://wiki.curatecamp.org/index.php/CURATEcamp_AVpres_2013 CURATEcamp AVpres 2013] as well as the [http://wiki.curatecamp.org/index.php/Association_of_Moving_Image_Archivists_%26_Digital_Library_Federation_Hack_Day_2013 AMIA/DLF 2013 Hack Day] .&lt;br /&gt;
&lt;br /&gt;
If you would be interested in attending, please indicate by adding your name (but not email address, etc.) here:&lt;br /&gt;
&lt;br /&gt;
# A. Soroka&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
===OCLC Web Services Hackfest===&lt;br /&gt;
&lt;br /&gt;
&amp;quot;Half-Day&amp;quot; [afternoon]&lt;br /&gt;
&lt;br /&gt;
Contact: Shelley Hostetler, Community Manager, Developer Network hostetls[at]oclc.org&lt;br /&gt;
&lt;br /&gt;
This half-day hackfest will explore some of the OCLC Developer Network web services. We will provide an overview of some of the common topics such as the general REST-based architecture for most services and how to use some new authentication clients. The group can then decide to take a deep dive into a particular API and/or write a client library for the community.&lt;br /&gt;
&lt;br /&gt;
If you would be interested in attending, please indicate by adding your name (but not email address, etc.) here:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Obey the Testing Goat!: Test Driven Web Development From The Ground Up===&lt;br /&gt;
'''Half-Day [tbd - probably afternoon]'''&lt;br /&gt;
* Contact [[User:Mredar|Mark Redar]], mredar[at]gmail.com&lt;br /&gt;
&lt;br /&gt;
Test driven development is a proven method for producing better quality code. But I've found it hard to follow a strict TDD methodology when starting new web projects. How do you write that first test when there is no code or web pages created yet.&lt;br /&gt;
&lt;br /&gt;
In this session, we will follow the excellent book [http://shop.oreilly.com/product/0636920029533.do &amp;quot;Test-Driven Web Development with Python&amp;quot;] to create a simple web site in Django following TDD from the first character typed. Come ready to code and test. No prior knowledge of python or Django required.&lt;br /&gt;
&lt;br /&gt;
By the end of this session, you should be able to  [http://www.obeythetestinggoat.com/ &amp;quot;Obey the Testing Goat&amp;quot;] from the start to finish for your next project.&lt;br /&gt;
&lt;br /&gt;
If you would be interested in attending, please indicate by adding your name (but not email address, etc.) here:&lt;br /&gt;
&lt;br /&gt;
# Charlie Morris (NCSU)&lt;br /&gt;
# Jason Stirnaman&lt;br /&gt;
# Joshua Gomez&lt;br /&gt;
# Liz Milewicz&lt;br /&gt;
# Scott Hanrath&lt;br /&gt;
# Mike Beccaria&lt;br /&gt;
# Sean Aery&lt;br /&gt;
# Carolina Garcia&lt;br /&gt;
# Heidi Frank&lt;br /&gt;
# Chung Kang&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
===Summon Hackfest ===&lt;br /&gt;
&lt;br /&gt;
Presenter: Eddie Newwirth and presenters from Summon libraries&lt;br /&gt;
Contact: Scott Schuetze (first DOT last @ serialssolutions. com)&lt;br /&gt;
&lt;br /&gt;
The Summon Hackfest (10:30am-12pm) will be a great opportunity for libraries using the Summon service to talk about improving discovery of resources, share their creative customizations and code, and exchange ideas about ways they can leverage the Summon API to better meet the needs of their users.&lt;br /&gt;
 &lt;br /&gt;
The Summon Hackfest is open to all libraries currently using ProQuest discovery and management services (Intota, Summon, Ulrich’s or the 360 suite of services), whether they are attending Code4Lib or are just in the area.&lt;br /&gt;
 &lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
[[:Category:Code4Lib2014]]&lt;/div&gt;</summary>
		<author><name>Michaelhagedon</name></author>	</entry>

	<entry>
		<id>https://wiki.code4lib.org/index.php?title=Libraries_Sharing_Code&amp;diff=36931</id>
		<title>Libraries Sharing Code</title>
		<link rel="alternate" type="text/html" href="https://wiki.code4lib.org/index.php?title=Libraries_Sharing_Code&amp;diff=36931"/>
				<updated>2013-02-16T19:59:58Z</updated>
		
		<summary type="html">&lt;p&gt;Michaelhagedon: added University of Arizona&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;A number of libraries have organizational repositories in GitHub.  These can be very valuable and we attempt to collect them here.&lt;br /&gt;
&lt;br /&gt;
* [https://github.com/gwu-libraries/ George Washington University Libraries]&lt;br /&gt;
* [https://github.com/gvsulib Grand Valley State University Libraries]&lt;br /&gt;
* [https://github.com/nypl/ The New York Public Library]&lt;br /&gt;
* [https://github.com/NYULibraries NYU Libraries]&lt;br /&gt;
* [https://github.com/psu-stewardship Penn State Digital Stewardship]&lt;br /&gt;
* [https://github.com/adsabs/ SAO/NASA Astrophysics Data System]&lt;br /&gt;
* [https://github.com/ucsdlib?tab=repositories UCSD Library]&lt;br /&gt;
* [https://github.com/ualibraries The University of Arizona Libraries]&lt;br /&gt;
* [https://github.com/ui-libraries University of Iowa Libraries]&lt;br /&gt;
* [https://github.com/ndlib University of Notre Dame] (And [https://github.com/ndlibersa the CORAL stuff])&lt;br /&gt;
* [https://github.com/yalemssa Manuscripts and Archives, Yale University Library]&lt;br /&gt;
* [https://github.com/yorkulibraries York University Libraries]&lt;br /&gt;
&lt;br /&gt;
Empty (but we hope they put code in it soon!)&lt;br /&gt;
* [https://github.com/chattlibrary Chattanooga Public Library]&lt;br /&gt;
* [https://github.com/DarienLibrary Darien (CT) Library]&lt;br /&gt;
&lt;br /&gt;
Non-github open source code sites.&lt;br /&gt;
* University of Florida: SobekCM software [http://sourceforge.net/directory/?q=sobekcm Sourceforge], [http://code.google.com/p/sobekcm/ Google code], [http://ufdc.ufl.edu/software UFDC institutional site]&lt;/div&gt;</summary>
		<author><name>Michaelhagedon</name></author>	</entry>

	<entry>
		<id>https://wiki.code4lib.org/index.php?title=2013_breakout_sessions_reports&amp;diff=36662</id>
		<title>2013 breakout sessions reports</title>
		<link rel="alternate" type="text/html" href="https://wiki.code4lib.org/index.php?title=2013_breakout_sessions_reports&amp;diff=36662"/>
				<updated>2013-02-12T22:36:14Z</updated>
		
		<summary type="html">&lt;p&gt;Michaelhagedon: Instruction minutes&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Tuesday, Feb.12, 2013 ==&lt;br /&gt;
* CodeCraft - Writing better code - '''location: Room D''' &lt;br /&gt;
* Code4Lib Journal discussion of editorial process (open to anyone) - '''Main Room, front right corner''' &lt;br /&gt;
* Tools for instruction: Guide on the Side, LMS integration, subject / course guides, etc. - '''Main Room, rear right corner''' Minutes: [[2013_instruction_breakout]]  &lt;br /&gt;
* Marc4J, SolrMarc, and MARC -&amp;gt; Solr in general -- Next steps - '''Room E''' Minutes: [[2013_marc_breakout]]&lt;br /&gt;
* Building / Keeping relevant skills - How do you access training, develop skills, and keep current while still doing your day job - '''Room F'''&lt;br /&gt;
* Cupcakes4Lib -- A Pilgrimage - '''Registration Table''' &lt;br /&gt;
* relevance ranking and testing - '''Main room, left rear corner'''&lt;br /&gt;
* Fedora4Lib: Developer Challenge! (http://fedora4lib.org/hack/) - '''Main room, left front corner'''&lt;br /&gt;
&lt;br /&gt;
== Wednesday, Feb.13, 2013 ==&lt;br /&gt;
* Group 42/Topic&lt;br /&gt;
* Group 43/Topic&lt;br /&gt;
*etc.&lt;br /&gt;
&lt;br /&gt;
[[Category:Code4Lib2013]]&lt;/div&gt;</summary>
		<author><name>Michaelhagedon</name></author>	</entry>

	<entry>
		<id>https://wiki.code4lib.org/index.php?title=2013_instruction_breakout&amp;diff=36661</id>
		<title>2013 instruction breakout</title>
		<link rel="alternate" type="text/html" href="https://wiki.code4lib.org/index.php?title=2013_instruction_breakout&amp;diff=36661"/>
				<updated>2013-02-12T22:35:05Z</updated>
		
		<summary type="html">&lt;p&gt;Michaelhagedon: Credit&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Instruction Tools Breakout Session February 12th, 2013 ==&lt;br /&gt;
&lt;br /&gt;
'''Bold stuff''' seemed especially important. &lt;br /&gt;
&lt;br /&gt;
=== Research/Subject/Course Guides ===&lt;br /&gt;
&lt;br /&gt;
What brought people to use Libguides?&lt;br /&gt;
&lt;br /&gt;
* Librarians don't have enough control over our website&lt;br /&gt;
* '''Librarians who are creating content wanted something easier'''&lt;br /&gt;
* Library a la carte is shut down - programming resources were too expensive versus buying Libguides&lt;br /&gt;
* '''Cost per year of Libguides is much cheaper than developing something ourselves. Everything is in XML which is great for exporting data.'''&lt;br /&gt;
&lt;br /&gt;
How are people getting stuff into Libguides?&lt;br /&gt;
&lt;br /&gt;
* There is a place for reusing links to databases in Libguides. You can use the Serials Solutions importer, or build your own reusable list.&lt;br /&gt;
* No tutorials management structure in Libguides.&lt;br /&gt;
&lt;br /&gt;
Overcoming staleness in guides&lt;br /&gt;
&lt;br /&gt;
* You can include RSS feeds, add any scripts via text editor&lt;br /&gt;
* No write access to Libguides databases&lt;br /&gt;
* Now able to use jQuery in Libguides&lt;br /&gt;
&lt;br /&gt;
=== Learning Management System Integration ===&lt;br /&gt;
&lt;br /&gt;
* At U of AZ Library has an opt-out tab called &amp;quot;Library Tools&amp;quot; that displays an iFrame that has a web application that looks like a Libguide. Librarians can create these tabs/portals and attach them to this. If there isn't a specific course portal, they receive the subject guide. At section level, instructors can create their own through drag and drop interface. Looking at moving off Library ala Carte and making this the main subject guides. Not sure how to share this with the community because uses an institution-specific API.&lt;br /&gt;
* Cost of keeping library-specific content up to date with certain LMS (Blackboard) can become problematic.&lt;br /&gt;
* '''[http://www.imsglobal.org/toolsinteroperability2.cfm LTI/ Basic LTI] passes contextual information about courses to another system. Blackboard supports this very well. Moodle has this option as a plug-in. Need to get data from Blackboard and feed into library application to easily setup iFrame for library content in their course menu. LTI isn't a magic bullet. It sends the user role and course ID. Need to depend on LMS administator to do the right thing. If the course ID isn't consistent you'd have to parse through it. Duke wrote PHP/MySQL that maps between LMS and guides.'''&lt;br /&gt;
* '''NCSU looking at course reserves link and library-specific content in the LMS. Spingshare has a tool, but sounds expensive. More info on reserves4lib list.'''&lt;br /&gt;
* '''Vendors don't seem to understand the need in libraries for integration with the LMS. Maybe there's an opportunity for community of librarians wanting to integrate with LMS.'''&lt;br /&gt;
&lt;br /&gt;
* How integrated with the LMS should libraries be? All looks like the LMS to students.&lt;br /&gt;
* Is there a way to get away from using iFrames and building directly in the LMS?&lt;br /&gt;
* '''Faculty outreach is key with integrating Library content with the LMS. Use marketing through Liaison Librarians and opt-in or opt-out block. If you can integrate with the LMS template faculty members seem to be okay with it.'''&lt;br /&gt;
* Integrated LMS content includes things like library seach engines, course/subject guides and reserves (this is with Moodle). Students get subject and course guides if they are available.&lt;br /&gt;
&lt;br /&gt;
=== Tutorials (like Guide on the Side, video tutorials, etc.) ===&lt;br /&gt;
* Instructional content at NCSU - trying to overcome outdated content. Developing three-tiered approach. There will be online content (videos or online tutorials), lesson plans for librarians, and turnkey library lesson plans librarians could send to faculty members. Putting more effort on things that go stale faster.&lt;br /&gt;
* Hard to maintain video tutorials made with systems like Camtasia. Make sure we're not replicating already available how-tos (like for Google).&lt;br /&gt;
* '''[http://code.library.arizona.edu/gots Guide on the Side] [https://github.com/ualibraries/Guide-on-the-Side github] is a tool that allows users to take a self-guided tour of library resources and services. Developed by U. Of AZ Every user can edit every other tutorial in the system. Some libraries planning on using Guide on the Side for training. It would be great to be able to share Guide on the Side tutorials among each other.'''&lt;br /&gt;
* '''If you have too few instruction librarians, take the approach of making generic tutorials and give the audience as much autonomy as possible.'''&lt;br /&gt;
&lt;br /&gt;
Thanks to Amy Deschenes for taking notes!&lt;/div&gt;</summary>
		<author><name>Michaelhagedon</name></author>	</entry>

	<entry>
		<id>https://wiki.code4lib.org/index.php?title=2013_instruction_breakout&amp;diff=36660</id>
		<title>2013 instruction breakout</title>
		<link rel="alternate" type="text/html" href="https://wiki.code4lib.org/index.php?title=2013_instruction_breakout&amp;diff=36660"/>
				<updated>2013-02-12T22:34:05Z</updated>
		
		<summary type="html">&lt;p&gt;Michaelhagedon: Created page with &amp;quot;== Instruction Tools Breakout Session February 12th, 2013 ==  '''Bold stuff''' seemed especially important.   === Research/Subject/Course Guides ===  What brought people to use L...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Instruction Tools Breakout Session February 12th, 2013 ==&lt;br /&gt;
&lt;br /&gt;
'''Bold stuff''' seemed especially important. &lt;br /&gt;
&lt;br /&gt;
=== Research/Subject/Course Guides ===&lt;br /&gt;
&lt;br /&gt;
What brought people to use Libguides?&lt;br /&gt;
&lt;br /&gt;
* Librarians don't have enough control over our website&lt;br /&gt;
* '''Librarians who are creating content wanted something easier'''&lt;br /&gt;
* Library a la carte is shut down - programming resources were too expensive versus buying Libguides&lt;br /&gt;
* '''Cost per year of Libguides is much cheaper than developing something ourselves. Everything is in XML which is great for exporting data.'''&lt;br /&gt;
&lt;br /&gt;
How are people getting stuff into Libguides?&lt;br /&gt;
&lt;br /&gt;
* There is a place for reusing links to databases in Libguides. You can use the Serials Solutions importer, or build your own reusable list.&lt;br /&gt;
* No tutorials management structure in Libguides.&lt;br /&gt;
&lt;br /&gt;
Overcoming staleness in guides&lt;br /&gt;
&lt;br /&gt;
* You can include RSS feeds, add any scripts via text editor&lt;br /&gt;
* No write access to Libguides databases&lt;br /&gt;
* Now able to use jQuery in Libguides&lt;br /&gt;
&lt;br /&gt;
=== Learning Management System Integration ===&lt;br /&gt;
&lt;br /&gt;
* At U of AZ Library has an opt-out tab called &amp;quot;Library Tools&amp;quot; that displays an iFrame that has a web application that looks like a Libguide. Librarians can create these tabs/portals and attach them to this. If there isn't a specific course portal, they receive the subject guide. At section level, instructors can create their own through drag and drop interface. Looking at moving off Library ala Carte and making this the main subject guides. Not sure how to share this with the community because uses an institution-specific API.&lt;br /&gt;
* Cost of keeping library-specific content up to date with certain LMS (Blackboard) can become problematic.&lt;br /&gt;
* '''[http://www.imsglobal.org/toolsinteroperability2.cfm LTI/ Basic LTI] passes contextual information about courses to another system. Blackboard supports this very well. Moodle has this option as a plug-in. Need to get data from Blackboard and feed into library application to easily setup iFrame for library content in their course menu. LTI isn't a magic bullet. It sends the user role and course ID. Need to depend on LMS administator to do the right thing. If the course ID isn't consistent you'd have to parse through it. Duke wrote PHP/MySQL that maps between LMS and guides.'''&lt;br /&gt;
* '''NCSU looking at course reserves link and library-specific content in the LMS. Spingshare has a tool, but sounds expensive. More info on reserves4lib list.'''&lt;br /&gt;
* '''Vendors don't seem to understand the need in libraries for integration with the LMS. Maybe there's an opportunity for community of librarians wanting to integrate with LMS.'''&lt;br /&gt;
&lt;br /&gt;
* How integrated with the LMS should libraries be? All looks like the LMS to students.&lt;br /&gt;
* Is there a way to get away from using iFrames and building directly in the LMS?&lt;br /&gt;
* '''Faculty outreach is key with integrating Library content with the LMS. Use marketing through Liaison Librarians and opt-in or opt-out block. If you can integrate with the LMS template faculty members seem to be okay with it.'''&lt;br /&gt;
* Integrated LMS content includes things like library seach engines, course/subject guides and reserves (this is with Moodle). Students get subject and course guides if they are available.&lt;br /&gt;
&lt;br /&gt;
=== Tutorials (like Guide on the Side, video tutorials, etc.) ===&lt;br /&gt;
* Instructional content at NCSU - trying to overcome outdated content. Developing three-tiered approach. There will be online content (videos or online tutorials), lesson plans for librarians, and turnkey library lesson plans librarians could send to faculty members. Putting more effort on things that go stale faster.&lt;br /&gt;
* Hard to maintain video tutorials made with systems like Camtasia. Make sure we're not replicating already available how-tos (like for Google).&lt;br /&gt;
* '''[http://code.library.arizona.edu/gots Guide on the Side] [https://github.com/ualibraries/Guide-on-the-Side github] is a tool that allows users to take a self-guided tour of library resources and services. Developed by U. Of AZ Every user can edit every other tutorial in the system. Some libraries planning on using Guide on the Side for training. It would be great to be able to share Guide on the Side tutorials among each other.'''&lt;br /&gt;
* '''If you have too few instruction librarians, take the approach of making generic tutorials and give the audience as much autonomy as possible.'''&lt;/div&gt;</summary>
		<author><name>Michaelhagedon</name></author>	</entry>

	<entry>
		<id>https://wiki.code4lib.org/index.php?title=2013_preconference_proposals&amp;diff=31647</id>
		<title>2013 preconference proposals</title>
		<link rel="alternate" type="text/html" href="https://wiki.code4lib.org/index.php?title=2013_preconference_proposals&amp;diff=31647"/>
				<updated>2013-01-14T17:09:33Z</updated>
		
		<summary type="html">&lt;p&gt;Michaelhagedon: Adding myself to sessions&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Please sign up to attend by January 15th. Doesn't mean you can't change your mind, but we will use the host committee will use these numbers to assign rooms.&lt;br /&gt;
&lt;br /&gt;
Proposals '''now closed'''.&lt;br /&gt;
&lt;br /&gt;
Spaces available: 4+ Rooms&lt;br /&gt;
&lt;br /&gt;
Please follow the formatting guidelines:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
=== Talk Title ===&lt;br /&gt;
 &lt;br /&gt;
* Presenter/Leader, affiliation (optional), and email address (mandatory!)&lt;br /&gt;
* Second Presenter/Leader, affiliation, email address, if applicable&lt;br /&gt;
&lt;br /&gt;
Description.&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Full Day==&lt;br /&gt;
&lt;br /&gt;
===Drupal4lib Sub-con Barcamp===&lt;br /&gt;
&lt;br /&gt;
* Contact [[User:highermath|Cary Gordon]], cgordon@chillco.com or &lt;br /&gt;
* [[User:cdmo|Charlie Morris]], NCSU Libraries, cdmorris@ncsu.edu&lt;br /&gt;
&lt;br /&gt;
This will be a full day of self-selected barcamp style sessions. Anyone who wants to present can write down the topic on an index card and, after the keynote, we will vote to choose what we want to see. Attendees can also pick a topic and attempt to talk someone else into presenting on it.&lt;br /&gt;
&lt;br /&gt;
If we run out of topics, we will pay homage to the project by testing patches for Drupal 8. It is easy, and we will show you how to do this invaluable task.&lt;br /&gt;
&lt;br /&gt;
This event is open to the library community. There is a nominal fee ($10) for non-Code4LibCon attendees.&lt;br /&gt;
&lt;br /&gt;
Local Drupal uber-ninja Larry Garfield will stop by to answer questions and give us some guidance.&lt;br /&gt;
&lt;br /&gt;
====I plan on attending:====&lt;br /&gt;
&lt;br /&gt;
=====All Day=====&lt;br /&gt;
*Margaret Heller&lt;br /&gt;
*Mahria Lebow, mahria at uw edu&lt;br /&gt;
*Paula Gray-Overtoom, pgrayove at gmail.com&lt;br /&gt;
*Dhanushka Samarakoon, dhanu80 at g mail com&lt;br /&gt;
&lt;br /&gt;
=====Morning=====&lt;br /&gt;
* [[User:Kevenj|Keven Jeffery]]&lt;br /&gt;
* Sean Chen&lt;br /&gt;
&lt;br /&gt;
=====Afternoon=====&lt;br /&gt;
* Kevin Reiss, Princeton University Library, kr2 at princeton.edu (afternoon only)&lt;br /&gt;
* Christina Salazar (afternoon only)&lt;br /&gt;
* Sarah Dooley (afternoon)&lt;br /&gt;
* Josh Wilson, joshwilsonnc at gmail (likely afternoon only)&lt;br /&gt;
* Ken Varnum, varnum at umich e-d-u&lt;br /&gt;
* Cody Hennesy, chennesy at library berkeley edu&lt;br /&gt;
&lt;br /&gt;
==Half Day Morning==&lt;br /&gt;
=== Open space session ===&lt;br /&gt;
&lt;br /&gt;
* Dan Chudnov, dchud at gwu edu&lt;br /&gt;
&lt;br /&gt;
The rest of code4libcon is pretty well structured these days; come in the morning for a few hours of old-school [http://en.wikipedia.org/wiki/Open-space_technology open space technology] unconference.  Bring a rough talk or idea you want to share or questions you have or something you want to learn about or discuss with other people, and be ready to tell us about it.  Use it as extra prep time for your upcoming prepared or lightning talk if you want.  We'll plan the morning out a little bit at the beginning, but not too much.  What we do will be up to the people there in the room.&lt;br /&gt;
&lt;br /&gt;
If there's interest, we could start with a &amp;quot;welcome to code4lib&amp;quot; introductory session for newcomers.&lt;br /&gt;
&lt;br /&gt;
'''I plan on attending:'''&lt;br /&gt;
* Devon Smith&lt;br /&gt;
* Esmé Cowles, escowles@ucsd.edu&lt;br /&gt;
* Jason Casden&lt;br /&gt;
* Ryan Eby&lt;br /&gt;
* mark matienzo&lt;br /&gt;
* Donald Mennerich&lt;br /&gt;
* Patrick Berry, pberry@csuchico.edu&lt;br /&gt;
* Kåre Fiedler Christiansen, kfc@statsbiblioteket.dk&lt;br /&gt;
&lt;br /&gt;
=== Delivery services ===&lt;br /&gt;
* Ted Lawless, Brown University Library, tlawless at brown edu.  &lt;br /&gt;
* Kevin Reiss, Princeton University Library, kr2 at princeton edu.&lt;br /&gt;
&lt;br /&gt;
Are you interested in making it easier for users to obtain copies of known items?  Do you feel your OpenURL and Interlibrary Loan software could be streamlined?  This pre-conference workshop will focus on providing services that deliver content to users.  Discovery systems are doing a better job of exposing library holdings but there's still a lot of work to do actually get the content in the users hands.  &lt;br /&gt;
&lt;br /&gt;
Possible topics/activities include:&lt;br /&gt;
* group discussion of what some libraries have done in this area&lt;br /&gt;
* comparisons of different approaches to addressing delivery &lt;br /&gt;
* overview of tools available &lt;br /&gt;
* sharing of strategies and experiences&lt;br /&gt;
* time to work with and review open source code in this area. Some possible tools to install and test out [https://github.com/team-umlaut/umlaut Umlaut], [https://github.com/lawlesst/heroku-360link Py360 Link]. &lt;br /&gt;
 &lt;br /&gt;
Resources and background information:&lt;br /&gt;
* [https://github.com/team-umlaut/umlaut/wiki/What-is-Umlaut-anyway What-is-Umlaut-anyway] &lt;br /&gt;
* [http://journal.code4lib.org/articles/7308 Hacking 360 Link: A hybrid approach]&lt;br /&gt;
* [http://journal.code4lib.org/articles/108 Auto-Populating an ILL form with the Serial Solutions Link Resolver API]&lt;br /&gt;
* [http://lawlesst.github.com/notebook/delivery.html Focusing on Delivery]&lt;br /&gt;
&lt;br /&gt;
'''I plan on attending:'''&lt;br /&gt;
* Ken Varnum, varnum at umich e-d-u&lt;br /&gt;
* Ayla Stein&lt;br /&gt;
* Curtis Thacker&lt;br /&gt;
* Rosalyn Metz rosalynmetz at gmail com&lt;br /&gt;
* James Van Mil - james.vanmil at gmail com&lt;br /&gt;
* Andrew Nagy&lt;br /&gt;
* Ranti Junus&lt;br /&gt;
* Aaron Collier - acollier at csufresno edu&lt;br /&gt;
* Demian Katz - demian dot katz at villanova dot edu&lt;br /&gt;
* Jacob Andresen - jacob at reindex dot dk&lt;br /&gt;
&lt;br /&gt;
=== Intro to Blacklight CANCELLED ===&lt;br /&gt;
&lt;br /&gt;
PLEASE NOTE: This pre-conference has been cancelled in favor of joining forces with the RailsBridge workshop. The afternoon Blacklight session will still be offered.&lt;br /&gt;
&lt;br /&gt;
=== RailsBridge Intro to Ruby on Rails ===&lt;br /&gt;
* Jason Ronallo, North Carolina State University Libraries, jnronall@ncsu.edu&lt;br /&gt;
* Mark Bussey, Data Curation Experts (mark at curationexperts.com)&lt;br /&gt;
* Shaun Ellis (helper), Princeton University Library, shaune@princeton.edu&lt;br /&gt;
* Ross Singer, Talis, rossfsinger@gmail.com&lt;br /&gt;
* Adam Wead (helper), Rock and Roll Hall of Fame, awead@rockhall.org&lt;br /&gt;
* Bess Sadler, Stanford University, bess@stanford.edu&lt;br /&gt;
* Anyone else want to come and help folks? Contact Jason.&lt;br /&gt;
&lt;br /&gt;
RailsBridge comes to code4lib! We'll follow the RailsBridge curriculum (http://railsbridge.org) to provide a gentle introduction to Ruby on Rails. Topics covered include an introduction to the Ruby language, the Rails framework, and version control with git. Participants will build a working Rails application. &lt;br /&gt;
&lt;br /&gt;
There will be some pre-preconference preparation needed so that we can effectively use our time. Details to come.&lt;br /&gt;
&lt;br /&gt;
* Note: Attendees can follow up with the Intro to Blacklight afternoon session, which will be tailored for folks new to Ruby&lt;br /&gt;
&lt;br /&gt;
Please add your name below and fill out the [https://docs.google.com/spreadsheet/viewform?formkey=dEpxd0tzU1ZscnU5QUUtd0JGUk9qQkE6MA#gid=0 experience survey].&lt;br /&gt;
&lt;br /&gt;
'''I plan on attending:'''&lt;br /&gt;
# First and last name and email address&lt;br /&gt;
# John MacGillivray&lt;br /&gt;
# Jon Stroop - jstroop at princeton&lt;br /&gt;
# Christina Salazar - christina{dot}salazar{at}csuci{dot}edu&lt;br /&gt;
# Karen Coombs - coombsk{at}oclc{dot}org&lt;br /&gt;
# Becky Yoose - b dot yoose at google overlord&lt;br /&gt;
# Jeremy Morse - jgmorse at umich&lt;br /&gt;
# Julia Bauder - julia{dot}bauder{at}gmail{dot}com &lt;br /&gt;
# Chung Kang&lt;br /&gt;
# Karen Miller - k-miller3{at}northwestern{dot}edu&lt;br /&gt;
# Betsy Coles - bcoles{at}caltech{dot}edu&lt;br /&gt;
# Jay Luker - jay{dot}luker{at}gmail{dot}com&lt;br /&gt;
# Santi Thompson&lt;br /&gt;
# Sarah Dooley - sarah{at}nclive{dot}org&lt;br /&gt;
# Brandon Dudley&lt;br /&gt;
# Ken Irwin&lt;br /&gt;
# Dennis Ogg - ogg{at}ucar{dot}edu&lt;br /&gt;
# Ian Walls - iwalls{at}library{dot}umass{dot}edu&lt;br /&gt;
# Steven Villereal – villereal{at}gmail{dot}com&lt;br /&gt;
# Hillel Arnold - hillel{dot}arnold{at}gmail{dot}com&lt;br /&gt;
# Josh Wilson - joshwilsonnc at gmail&lt;br /&gt;
# Cynthia Ng - cynthia [dot] s [dot] ng [at] gmail&lt;br /&gt;
# Ian Chan&lt;br /&gt;
# Heidi Frank - hf36{at}nyu{dot}edu&lt;br /&gt;
# Mark Mounts - mark{dot}mounts{at}dartmouth{dot}edu&lt;br /&gt;
# Bill McMillin - wmcmilli{at}pratt {dot}edu&lt;br /&gt;
# David Lacy - david dot lacy at villanova dot edu&lt;br /&gt;
# Courtney Greene - crgreene at indiana dot edu&lt;br /&gt;
# Laney McGlohon - lmcglohon@getty.edu&lt;br /&gt;
# Nancy Enneking - nenneking@getty.edu&lt;br /&gt;
# Jason Raitz - jcraitz at ncsu dot edu&lt;br /&gt;
# Nick Cappadona&lt;br /&gt;
# Steven Marsden - steven.marsden@ryerson.ca&lt;br /&gt;
# Linda Ballinger - ballingerl at newberry dot org&lt;br /&gt;
# Brendan Quinn - brendan-quinn at northwestern dot edu&lt;br /&gt;
# Michael Levy - mlevy {at}ushmm {dot}org&lt;br /&gt;
# Michael North   (m-north at northwestern dot edu)&lt;br /&gt;
# Shawn Averkamp - shawnaverkamp{at}gmail{dot}com&lt;br /&gt;
# Allan Berry - allan{dot}berry{at}gmail{dot}com&lt;br /&gt;
# Andrew Darby - agdarby at miami dot edu&lt;br /&gt;
# Cody Hennesy - chennesy at library dot berkeley dot edu&lt;br /&gt;
# Devin Higgins - higgi135 at msu dot edu&lt;br /&gt;
# Emily Zervas - emily{dot}zervas{at}gmail{dot}com&lt;br /&gt;
# Rob Dumas - rdumas {at} chipublib {dot} org&lt;br /&gt;
# Evan Boyd - eboyd /at/ ctschicago /period/ edu&lt;br /&gt;
# William Hicks - William{dot}hicks{at}unt{dot}edu&lt;br /&gt;
# Lauren Ajamie - lauren dot ajamie at nd dot edu&lt;br /&gt;
# David Anderson - david dot anderson3 at nih dot gov&lt;br /&gt;
# David Bucknum - dabu at loc dot gov&lt;br /&gt;
&lt;br /&gt;
===Intro to NoSQL Databases===&lt;br /&gt;
* Joshua Gomez, George Washington University, jngomez at gwu edu&lt;br /&gt;
&lt;br /&gt;
Since Google published its paper on BigTable in 2006, alternatives to the traditional relational database model have been growing in both variety and popularity. These new databases (often referred to as NoSQL databases) excel at handling problems faced by modern information systems that the traditional relational model cannot. They are particularly popular among organizations tackling the so-called &amp;quot;Big Data&amp;quot; problems. However, there are always tradeoffs involved when making such dramatic changes. Understanding how these different kinds of databases are designed and what they can offer is essential to the decision making process. In this precon I will discuss some of the various types of new databases (key-value, columnar, document, graph) and walk through examples or exercises using some of their open source implementations like Riak, HBase, CouchDB, and Neo4j.&lt;br /&gt;
&lt;br /&gt;
'''I plan on attending:'''&lt;br /&gt;
* Esha Datta&lt;br /&gt;
* Trevor Thornton&lt;br /&gt;
* Michael Doran&lt;br /&gt;
* Ray Schwartz - schwartzr2@wpunj.edu&lt;br /&gt;
* Kevin Clarke&lt;br /&gt;
* Andreas Orphanides&lt;br /&gt;
* Tommy Ingulfsen - tommying{at}caltech{dot}edu&lt;br /&gt;
* Harrison Dekker&lt;br /&gt;
* Eric James eric dot james at yale dot edu&lt;br /&gt;
* Sean Crowe - sean.crowe@uc.edu&lt;br /&gt;
* Scott Hanrath&lt;br /&gt;
* Erin Fahy - erin.fahy at mtholyoke edu&lt;br /&gt;
* Karen Coyle - kcoyle at kcoyle.net&lt;br /&gt;
* Charles Draper&lt;br /&gt;
* David Uspal&lt;br /&gt;
* Shawn Kiewel - smkiewel at uga dot edu&lt;br /&gt;
* Stephanie Collett - stephanie dot collett at ucop dot edu&lt;br /&gt;
* Declan Fleming - declan at declan dot net&lt;br /&gt;
* David Gonzalez - d.gonzalez26 at umiami dot edu&lt;br /&gt;
* Jeff Peterson - gpeterso at umn dot edu&lt;br /&gt;
* May Chan - msuicat at gmail dot com&lt;br /&gt;
* Kathryn Stine - kathryn dot stine at ucop dot edu&lt;br /&gt;
* Tim Thompson - t.thompson5{at}miami{dot}edu&lt;br /&gt;
* Eben English - eenglish [at] bpl dot org&lt;br /&gt;
* Marisa Strong - marisa dot strong at ucop dot edu&lt;br /&gt;
* Michael Lindsey - mackeral at gmail dot com&lt;br /&gt;
* Mike Hagedon - hagedonm at u dot library dot arizona dot edu&lt;br /&gt;
&lt;br /&gt;
==Half Day Afternoon==&lt;br /&gt;
=== Data Visualization Hackfest ===&lt;br /&gt;
* Chris Beer, cabeer at stanford.edu&lt;br /&gt;
* Dan Chudnov, dchud at gwu edu&lt;br /&gt;
&lt;br /&gt;
* Description: Want to hack/design/plan/document on a team of people who enjoy learning by creating?  Interested in data visualization?  Well, this hackfest is for you.  Not familiar with the concept of a hackfest?  See Roy Tennant's [http://www.libraryjournal.com/article/CA332564.html &amp;quot;Where Librarians Go To Hack&amp;quot;] and the page for the [http://access2010.lib.umanitoba.ca/node/3.html Access 2010 Hackfest].  We propose a half-day hackfest with a focus on visualization library data -- think stuff like library catalog data, access/circulation statistics, etc. Here's how it works, roughly: &lt;br /&gt;
 - we'll (you'll!) do lightning tutorials for some data visualization tools, toolkits (R? d3js? ?), datasets.&lt;br /&gt;
 - we'll separate into groups and hack on stuff.&lt;br /&gt;
 - at the end of the day, we'll present our progress.&lt;br /&gt;
&lt;br /&gt;
Not a code hacker?  No worries; all skill sets and backgrounds are valuable! &lt;br /&gt;
&lt;br /&gt;
'''I plan on attending:'''&lt;br /&gt;
* Devon Smith&lt;br /&gt;
* Esha Datta&lt;br /&gt;
* Ray Schwartz - schwartzr2@wpunj.edu&lt;br /&gt;
* Karen Coombs - coombsk{at}oclc{dot}org&lt;br /&gt;
* Julia Bauder - julia{dot}bauder{at}gmail{dot}com&lt;br /&gt;
* Jason Stirnaman (jstirnaman at kumc.edu)&lt;br /&gt;
* Joshua Gomez&lt;br /&gt;
* Ayla Stein&lt;br /&gt;
* Harrison Dekker&lt;br /&gt;
* Ian Walls - iwalls{at}library{dot}umass{dot}edu&lt;br /&gt;
* Scott Hanrath&lt;br /&gt;
* [[User:Kevenj|Keven Jeffery]]&lt;br /&gt;
* James Van Mil - james.vanmil at gmail com&lt;br /&gt;
* Sean Crowe - sean.crowe@uc.edu&lt;br /&gt;
* Karen coyle - kcoyle at kcoyle.net&lt;br /&gt;
* David Lacy - david dot lacy at villanova dot edu&lt;br /&gt;
* mark matienzo&lt;br /&gt;
* David Uspal&lt;br /&gt;
* Emily Lynema - ejlynema at ncsu dot edu&lt;br /&gt;
* Sean Chen&lt;br /&gt;
* Donald Mennerich&lt;br /&gt;
* Allan Berry - allan{dot}berry{at}gmail{dot}com&lt;br /&gt;
* Declan Fleming - declan at declan dot net&lt;br /&gt;
* Chick Markley -- chick at qrhino dot com&lt;br /&gt;
* Rosalyn Metz -- rosalynmetz at gmail com&lt;br /&gt;
* Devin Higgins - higgi135 at msu dot edu&lt;br /&gt;
* Emily Zervas emily{dot}zervas{at}gmail{dot}com&lt;br /&gt;
* May Chan -- msuicat at gmail dot com&lt;br /&gt;
* Kathryn Stine - kathryn dot stine at ucop dot edu&lt;br /&gt;
* Tim Thompson - t.thompson5{at}miami{dot}edu&lt;br /&gt;
&lt;br /&gt;
=== Intro to Hydra ===&lt;br /&gt;
* Adam Wead, Rock and Roll Hall of Fame (awead at rockhall.org)&lt;br /&gt;
* Justin Coyne, Data Curation Experts (justin.coyne at curationexperts.com)&lt;br /&gt;
* Mark Bussey, Data Curation Experts (mark at curationexperts.com)&lt;br /&gt;
&lt;br /&gt;
Hydra (http://projecthydra.org) is a free and open source repository solution that is being used by institutions on both sides of the North Atlantic to provide access to their digital content.  Hydra provides a versatile and feature rich environment for end-users and repository administrators alike. Leveraging Blacklight as its front end discovery interface, the hydra project provides a suite of software components, data models, and design patterns for building a robust and sustainable digital repository, as well as a community of support for ongoing development. This workshop will provide an introduction to the hydra project and its software components. Attendees will leave with enough knowledge to get started building their own local repository solutions. This workshop will be led by Adam Wead of the Rock and Roll Hall of Fame. &lt;br /&gt;
&lt;br /&gt;
'''I plan on attending:'''&lt;br /&gt;
* Jeremy Prevost&lt;br /&gt;
* Dennis Ogg - ogg{at}ucar{dot}edu&lt;br /&gt;
* Terry Brady&lt;br /&gt;
* Betsy Coles - bcoles{at}caltech{dot}edu&lt;br /&gt;
* Brendan Quinn - brendan-quinn at northwestern dot edu&lt;br /&gt;
* Shawn Kiewel - smkiewel at uga dot edu&lt;br /&gt;
* Steven Villereal – villereal{at}gmail{dot}com&lt;br /&gt;
* Ryan Eby&lt;br /&gt;
* Dean Farrell&lt;br /&gt;
* Ian Chan&lt;br /&gt;
* Mark Mounts - mark{dot}mounts{at}dartmouth{dot}edu&lt;br /&gt;
* Carl Jones&lt;br /&gt;
* Laney McGlohon - lmcglohon@getty.edu&lt;br /&gt;
* Nancy Enneking - nenneking@getty.edu&lt;br /&gt;
* Allan Berry - allan{dot}berry{at}gmail{dot}com&lt;br /&gt;
* Andrew Darby - agdarby at miami dot edu&lt;br /&gt;
* Kåre Fiedler Christiansen - kfc@statsbiblioteket.dk&lt;br /&gt;
&lt;br /&gt;
=== Intro to Blacklight ===&lt;br /&gt;
* Bess Sadler, Stanford University Library (bess at stanford.edu)&lt;br /&gt;
* Jason Ronallo, NC State (jronallo at gmail.com)&lt;br /&gt;
* Shaun Ellis (helper), Princeton University Library, (shaune@princeton.edu)&lt;br /&gt;
&lt;br /&gt;
Blacklight (http://projectblacklight.org) is a free and open source discovery interface built on solr and ruby on rails. It is used by institutions such as Stanford University, NC State, WGBH, Johns Hopkins University, the Rock and Roll Hall of Fame, and an ever expanding community of adopters and contributors. Blacklight can be used as a front-end discovery solution for an ILS, or the contents of a digital repository, or to provide a unified discovery solution for many siloed collections. In this workshop we will cover the basics of solr indexing and searching, setting up and customizing Blacklight, and leave time for Q&amp;amp;A around local issues people might encounter. &lt;br /&gt;
&lt;br /&gt;
Note: this workshop will be tailored as a follow-on to the morning's RailsBridge Intro to Ruby on Rails workshop, but everyone is welcome&lt;br /&gt;
&lt;br /&gt;
'''I plan on attending:'''&lt;br /&gt;
* John MacGillivray&lt;br /&gt;
* Jon Stroop&lt;br /&gt;
* Jeremy Morse - jgmorse at umich&lt;br /&gt;
* Karen Miller - k-miller3{at}northwestern{dot}edu&lt;br /&gt;
* Tommy Ingulfsen - tommying{at}caltech{dot}edu&lt;br /&gt;
* Chung Kang&lt;br /&gt;
* Santi Thompson&lt;br /&gt;
* Brandon Dudley&lt;br /&gt;
* Ken Irwin&lt;br /&gt;
* Hillel Arnold&lt;br /&gt;
* Heidi Frank - hf36{at}nyu{dot}com&lt;br /&gt;
* Chris Sharp - csharp{at}georgialibraries{dot}org&lt;br /&gt;
* Bill McMillin - wmcmilli{at} pratt{dot} edu&lt;br /&gt;
* Jason Raitz - jcraitz at ncsu dot edu&lt;br /&gt;
* Linda Ballinger - ballingerl at newberry dot org&lt;br /&gt;
* Tim Thompson - t.thompson5{at}miami{dot}edu&lt;br /&gt;
* David Gonzalez - d.gonzalez26 at umiami dot edu&lt;br /&gt;
* Courtney Greene - crgreene at indiana dot edu&lt;br /&gt;
* Evan Boyd - eboyd /at/ ctschicago /period/ edu&lt;br /&gt;
* William Hicks - William{dot}hicks{at}unt{dot}edu&lt;br /&gt;
* Lauren Ajamie - lauren dot ajamie at nd dot edu&lt;br /&gt;
* David Anderson - david dot anderson3 at nih dot gov&lt;br /&gt;
* Michael Lindsey - mackeral at gmail dot com&lt;br /&gt;
* David Bucknum - dabu at loc dot gov&lt;br /&gt;
&lt;br /&gt;
=== DPLA Intro/Hacking ===&lt;br /&gt;
 &lt;br /&gt;
* Presenter(s)/Leader(s): TBD&lt;br /&gt;
* Guy Who'd Be Interested in Helping: Jay Luker, Smithsonian Astrophysics Data System (jluker at cfa.harvard.edu)&lt;br /&gt;
&lt;br /&gt;
This is a stub proposal entered solely to beat the submission deadline. I think there's be sufficient interest in this session, but only thought of it yesterday and haven't had time to coordinate with actual DPLA'ers and confirm that any of them are definitely coming.&lt;br /&gt;
&lt;br /&gt;
'''I plan on attending:'''&lt;br /&gt;
* First and last name&lt;br /&gt;
&lt;br /&gt;
=== Fail4lib ===&lt;br /&gt;
* Jason Casden, NCSU Libraries (jmcasden at ncsu.edu)&lt;br /&gt;
* Andreas Orphanides, NCSU Libraries (akorphan at ncsu.edu)&lt;br /&gt;
&lt;br /&gt;
The Code4lib community is full of driven people who embrace the risks that are often associated with new projects. While these traits lead to the incredible projects that are presented at Code4lib, creative technical work also often leads to unexpected, vexing, or disappointing results even from eventually successful projects (however you define the term). Learning more about how our colleagues deal with failure in various contexts could lead to the development of better methods for communicating the value of productive failure, modifying project plans (&amp;quot;The Pivot&amp;quot;), and failing more cheaply.&lt;br /&gt;
&lt;br /&gt;
Hopefully we can define the format as a group, but a fairly high level of participation is crucial if this is to be a worthwhile preconference. Some possible agenda items that could be mixed and matched to fill the afternoon:&lt;br /&gt;
&lt;br /&gt;
# Given willing presenters, a series of 10-20 minute presentations that go into some depth about specific failures.&lt;br /&gt;
# Depending on the number of participants, either a multi- or single-track series of unconference-like themed discussions on various aspects of failure, possibly including themes like:&lt;br /&gt;
#* Technical failure&lt;br /&gt;
#* Failure to effectively address a real user need&lt;br /&gt;
#* Overinvestment&lt;br /&gt;
#* Outreach/Promotion failure&lt;br /&gt;
#* Design/UX failure&lt;br /&gt;
#* Project team communication failure&lt;br /&gt;
#* Missed opportunities (risk-averse failure)&lt;br /&gt;
#* Successes gleaned from failures&lt;br /&gt;
# A panel of participants who have prepared in advance to answer moderator and audience questions about their experience with failure.&lt;br /&gt;
# A prepared reading assignment that we could all forget to read, creating a shared fail in order to start the preconference on the right foot.&lt;br /&gt;
&lt;br /&gt;
I'll serve as a moderator (if needed) and participant and would welcome more organizers. I am happy to be outvoted by participants on any of these points--I just want to get us talking about our screw-ups, blind spots, and anvils dropping from the sky.&lt;br /&gt;
&lt;br /&gt;
'''I plan on attending:'''&lt;br /&gt;
* Becky Yoose&lt;br /&gt;
* Lisa Rabey&lt;br /&gt;
* Cynthia Ng (maybe) - cynthia [dot] s [dot] ng [at] gmail&lt;br /&gt;
* Patrick Berry, pberry@csuchico.edu&lt;br /&gt;
&lt;br /&gt;
=== Solr 4 In Depth ===&lt;br /&gt;
* Contact: Erik Hatcher (erik.hatcher at lucidworks.com)&lt;br /&gt;
&lt;br /&gt;
The long awaited and much anticipated Solr 4 has been released!   It's a really big deal.  There are so many improvements, it makes the head spin.  This session will cover the major feature improvements from Lucene's flexible indexing and scoring API up through SolrCloud in a digestable half-day format. Sounds like this is an evening thing that might happen at a bar somewhere?&lt;br /&gt;
&lt;br /&gt;
'''I plan on attending:'''&lt;br /&gt;
* First and last name&lt;br /&gt;
* Erin Fahy - erin.fahy at mtholyoke edu&lt;br /&gt;
* Esmé Cowles, escowles@ucsd.edu&lt;br /&gt;
* Jon Stroop&lt;br /&gt;
* Adam Constabars&lt;br /&gt;
* Kevin Clarke&lt;br /&gt;
* Jacob Andresen (jacob at reindex dot dk)&lt;br /&gt;
* Ted Lawless (tlawless at brown dot edu)&lt;br /&gt;
* Jay Luker&lt;br /&gt;
* Tom Burton-West&lt;br /&gt;
* Curtis Thacker&lt;br /&gt;
* Eric James eric dot james at yale dot edu&lt;br /&gt;
* Bess Sadler (bess at stanford dot edu)&lt;br /&gt;
* Michael North&lt;br /&gt;
* Charles Draper&lt;br /&gt;
* Nick Cappadona&lt;br /&gt;
* Stephanie Collett - stephanie dot collett at ucop dot edu&lt;br /&gt;
* Kalee Sprague - kalee dot sprague at yale dot edu&lt;br /&gt;
* Jeff Peterson - gpeterso at umn dot edu&lt;br /&gt;
* Erik Hetzner&lt;br /&gt;
* Demian Katz - demian dot katz at villanova dot edu&lt;br /&gt;
* Eben English - eenglish at bpl dot org&lt;br /&gt;
* Raman Chandrasekar &lt;br /&gt;
* Jason Ronallo - jnronall@ncsu.edu&lt;br /&gt;
* Eric Larson - elarson@library.wisc.edu&lt;br /&gt;
* Mike Hagedon - hagedonm at u dot library dot arizona dot edu&lt;br /&gt;
[[Category:Code4Lib2013]]&lt;/div&gt;</summary>
		<author><name>Michaelhagedon</name></author>	</entry>

	</feed>