<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
		<id>https://wiki.code4lib.org/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Tburtonw</id>
		<title>Code4Lib - User contributions [en]</title>
		<link rel="self" type="application/atom+xml" href="https://wiki.code4lib.org/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Tburtonw"/>
		<link rel="alternate" type="text/html" href="https://wiki.code4lib.org/Special:Contributions/Tburtonw"/>
		<updated>2026-04-09T06:30:29Z</updated>
		<subtitle>User contributions</subtitle>
		<generator>MediaWiki 1.26.2</generator>

	<entry>
		<id>https://wiki.code4lib.org/index.php?title=2014_preconference_proposals&amp;diff=40240</id>
		<title>2014 preconference proposals</title>
		<link rel="alternate" type="text/html" href="https://wiki.code4lib.org/index.php?title=2014_preconference_proposals&amp;diff=40240"/>
				<updated>2014-01-13T19:46:22Z</updated>
		
		<summary type="html">&lt;p&gt;Tburtonw: /* CLLAM @ code4lib */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= PROPOSALS ARE CLOSED : PLEASE DO NOT ADD NEW PRECONFERENCES TO THIS PAGE =&lt;br /&gt;
&lt;br /&gt;
Proposals were accepted through December 6th, 2013.&lt;br /&gt;
&lt;br /&gt;
It would be really, super duper helpful if folks who think they might want to attend a pre-conference could indicate interest by adding their name to a session below.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Note===&lt;br /&gt;
Attendance at a pre-conference will require a small fee ''due at the time of conference registration''.&lt;br /&gt;
 &lt;br /&gt;
Although this was specified in the email announcements relating to pre-conferences, it was not added to this page until December 2nd.  I (Adam C.) apologize for the omission and I hope this will not cause any &amp;quot;sticker shock.&amp;quot;  Putting your name on this list does not incur any obligation on your part, but we'll be using it to gauge interest and work out room assignments.&lt;br /&gt;
&lt;br /&gt;
Please put your pre-conference on the list in the following format:&lt;br /&gt;
&lt;br /&gt;
=Code4Lib 2014 Pre-Conference Proposals=&lt;br /&gt;
&lt;br /&gt;
===Drupal4lib Sub-con Barcamp===&lt;br /&gt;
=====Full Day=====&lt;br /&gt;
&lt;br /&gt;
* Contact [[User:highermath|Cary Gordon]], cgordon@chillco.com&lt;br /&gt;
&lt;br /&gt;
This will be a full day of self-selected barcamp style sessions. Anyone who wants to present can write down the topic on an index card and, after the keynote, we will vote to choose what we want to see. Attendees can also pick a topic and attempt to talk someone else into presenting on it.&lt;br /&gt;
&lt;br /&gt;
This event is open to the library community. There will be a nominal fee (TBD) for non-Code4LibCon attendees (subject to organizer approval).&lt;br /&gt;
&lt;br /&gt;
[[resources to help you learn drupal]]&lt;br /&gt;
&lt;br /&gt;
====Interested in Attending:====&lt;br /&gt;
&lt;br /&gt;
=====All Day=====&lt;br /&gt;
&lt;br /&gt;
* Renna Tuten &lt;br /&gt;
&lt;br /&gt;
=====Morning=====&lt;br /&gt;
&lt;br /&gt;
* Kevin Reiss&lt;br /&gt;
* Charlie Morris (NCSU) - glad to see this again this year!&lt;br /&gt;
* Paula Gray-Overtoom&lt;br /&gt;
* Laurie Lee Moses&lt;br /&gt;
&lt;br /&gt;
=====Afternoon=====&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
===Open Refine Hackfest===&lt;br /&gt;
'''&amp;quot;Half-Day&amp;quot;'''&lt;br /&gt;
* Contact [[User:bibliotechy|Chad Nelson]], chadbnelson@gmail.com&lt;br /&gt;
&lt;br /&gt;
[http://openrefine.org/ Open Refine] is a powerful open source tool for wrangling messy data that can also be used to help in the creation of Linked Data via the [https://github.com/OpenRefine/OpenRefine/wiki/Reconciliation-Service-API Reconciliation API]. It is possible to write reconciliation services against APIs, like the [http://iphylo.blogspot.com/2013/04/reconciling-author-names-using-open.html VIAF service], or even against local authority files to help maintain authority control.&lt;br /&gt;
&lt;br /&gt;
The session would first introduce Open Refine, then walk through building a reconciliation service, and the rest of the session would be a hackfest where we build new reconciliation services for public consumption or local use. &lt;br /&gt;
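To make the reconciliation-service idea concrete, here is a minimal sketch of the core matching step such a service performs: scoring a query string against a local authority file and returning candidates shaped like the Reconciliation API's per-query result list. This is not OpenRefine's code; the authority entries, IDs, and scoring threshold are all illustrative.&lt;br /&gt;

```python
import difflib

# Toy local authority file: id -> established heading.
# Entries and scoring here are illustrative, not from any real service.
AUTHORITY = {
    "n001": "Twain, Mark",
    "n002": "Dickens, Charles",
    "n003": "Austen, Jane",
}

def reconcile(query, limit=3):
    """Return candidate matches for one query string, shaped like the
    Reconciliation API's per-query "result" list."""
    scored = []
    for auth_id, name in AUTHORITY.items():
        score = difflib.SequenceMatcher(
            None, query.lower(), name.lower()).ratio()
        scored.append({
            "id": auth_id,
            "name": name,
            "score": round(score, 3),
            # "match" flags a confident, auto-acceptable hit.
            "match": score > 0.9,
        })
    scored.sort(key=lambda c: c["score"], reverse=True)
    return scored[:limit]

if __name__ == "__main__":
    for cand in reconcile("twain, mark"):
        print(cand["id"], cand["name"], cand["score"], cand["match"])
```

A real service would wrap this in an HTTP endpoint that also answers OpenRefine's service-metadata request; the hackfest portion of the session is where that wrapping would happen.&lt;br /&gt;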
&lt;br /&gt;
''Interested in Attending''&lt;br /&gt;
&lt;br /&gt;
If you would be interested in attending, please indicate by adding your name (but not email address, etc.) here&lt;br /&gt;
# Adam Constabaris&lt;br /&gt;
# Ray Schwartz&lt;br /&gt;
# Jason Stirnaman&lt;br /&gt;
# Joshua Gomez&lt;br /&gt;
# Sam Kome&lt;br /&gt;
# Mike Beccaria&lt;br /&gt;
# Angela Zoss&lt;br /&gt;
# A. Soroka&lt;br /&gt;
# Matt Zumwalt&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
===Responsive Design Hackfest===&lt;br /&gt;
'''&amp;quot;Half-Day [Afternoon]&amp;quot;''' &lt;br /&gt;
* Contact Jim Hahn, University of Illinois, jimfhahn@gmail.com&lt;br /&gt;
* Contact David Ward, University of Illinois, dh-ward@illinois.edu&lt;br /&gt;
&lt;br /&gt;
This structured hackfest will give attendees an opportunity to explore methods to create responsive mobile apps using the Bootstrap framework [http://getbootstrap.com/] and a set of APIs for accessing library data. We will start with an API template for creating space-based mobile tools that draw from work coming out of the IMLS-funded Student/Library Collaborative grant [http://www.library.illinois.edu/nlg_student_apps]. Available APIs will include a room reservation template and codebase for implementing at any campus and the set of Minrva catalog APIs generating JSONP [http://minrvaproject.org/services.php].&lt;br /&gt;
&lt;br /&gt;
Hosts will give a brief report of a study on student hacking projects and interests in mobile library apps that are the basis for the templates utilized in this Hackathon. By the end of the pre-conference, attendees will have a sample responsive mobile web app in Bootstrap 3 to bring back to their campus, which can plug into their site-based content.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
''Interested in Attending''&lt;br /&gt;
&lt;br /&gt;
If you would be interested in attending, please indicate by adding your name (but not email address, etc.) here&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
=== Intro to Blacklight ===&lt;br /&gt;
'''&amp;quot;Half-Day [Morning]&amp;quot;''' &lt;br /&gt;
* Contact: Chris Beer, Stanford University, cabeer@stanford.edu&lt;br /&gt;
* TA: Bess Sadler, Stanford University, bess@stanford.edu&lt;br /&gt;
&lt;br /&gt;
This session will be a walk-through of the architecture of Blacklight, the community, and an introduction to building a Blacklight-based application. Each participant will have the opportunity to build a simple Blacklight application and make basic customizations, while using a test-driven approach.&lt;br /&gt;
&lt;br /&gt;
For more information about Blacklight see our wiki ( http://projectblacklight.org/ ) and our GitHub repo ( https://github.com/projectblacklight/blacklight ). We will also send out some brief instructions beforehand for those who would like to set up their environments to follow along and get Blacklight up and running on their local machines.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
''Interested in Attending''&lt;br /&gt;
&lt;br /&gt;
If you would be interested in attending, please indicate by adding your name (but not email address, etc.) here&lt;br /&gt;
&lt;br /&gt;
# Megan Kudzia&lt;br /&gt;
# Bret Davidson&lt;br /&gt;
# Coral Sheldon-Hess&lt;br /&gt;
# Cory Lown&lt;br /&gt;
# Emily Daly&lt;br /&gt;
# Angela Zoss&lt;br /&gt;
# Sean Aery&lt;br /&gt;
# Francis Kayiwa&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
===Blacklight Hackfest===&lt;br /&gt;
'''&amp;quot;Half-Day [Afternoon]&amp;quot;''' &lt;br /&gt;
* Contact Chris Beer, Stanford University, cabeer@stanford.edu&lt;br /&gt;
&lt;br /&gt;
This afternoon hackfest is both a follow-on to the Intro to Blacklight morning session to continue building Blacklight-based applications, and also an opportunity for existing Blacklight contributors and members of the Blacklight community to exchange common patterns and approaches into reusable gems or incorporate customizations into Blacklight itself.&lt;br /&gt;
&lt;br /&gt;
For more information about Blacklight see our wiki ( http://projectblacklight.org/ ) and our GitHub repo ( https://github.com/projectblacklight/blacklight ).&lt;br /&gt;
&lt;br /&gt;
''Interested in Attending''&lt;br /&gt;
&lt;br /&gt;
If you would be interested in attending, please indicate by adding your name (but not email address, etc.) here&lt;br /&gt;
&lt;br /&gt;
# Shaun Ellis&lt;br /&gt;
# Kevin Reiss&lt;br /&gt;
# Megan Kudzia&lt;br /&gt;
# Erik Hatcher&lt;br /&gt;
# Emily Daly&lt;br /&gt;
# Laurie Lee Moses&lt;br /&gt;
# Francis Kayiwa&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
===RailsBridge: Intro to programming in Ruby on Rails===&lt;br /&gt;
'''&amp;quot;Half-Day&amp;quot; [morning]'''&lt;br /&gt;
* Contact Justin Coyne, Data Curation Experts, justin@curationexperts.com&lt;br /&gt;
&lt;br /&gt;
Interested in learning how to program? Want to build your own web application? Never written a line of code before and are a little intimidated? There's no need to be! RailsBridge is a friendly place to get together and learn how to write some code.&lt;br /&gt;
&lt;br /&gt;
RailsBridge is a great workshop that opens the doors to projects like Blacklight and Hydra.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
''Interested in Attending''&lt;br /&gt;
&lt;br /&gt;
If you would be interested in attending, please indicate by adding your name (but not email address, etc.) here&lt;br /&gt;
&lt;br /&gt;
1. Ayla Stein&lt;br /&gt;
&lt;br /&gt;
2. Heidi Dowding&lt;br /&gt;
&lt;br /&gt;
3. Caitlin Christian-Lamb&lt;br /&gt;
&lt;br /&gt;
4. Scott Bacon&lt;br /&gt;
&lt;br /&gt;
5. [[User:RileyChilds | Riley Childs]]&lt;br /&gt;
&lt;br /&gt;
6. Carolina Garcia&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
===Managing Projects: Or I'm in charge, now what? (aka PM4Lib)===&lt;br /&gt;
'''Full-Day'''&lt;br /&gt;
&lt;br /&gt;
Contact: &lt;br /&gt;
* [[User:rosy1280|Rosalyn Metz]], rosalynmetz@gmail.com&lt;br /&gt;
* [[User:yoosebj|Becky Yoose]], yoosebec@grinnell.edu&lt;br /&gt;
&lt;br /&gt;
This will be a full day session on project management.  We'll cover&lt;br /&gt;
* '''Kicking off the Project''' -- project lifecycle, project constraints, scoping/goals, stakeholders, assessment&lt;br /&gt;
* '''Planning the Project''' -- project charters, work breakdown structures, responsibilities, estimating time, creating budgets&lt;br /&gt;
* '''Executing the Project''' -- status meeting, status reports, issue management&lt;br /&gt;
* '''Finishing the Project''' -- achieving the goal, post mortems, project v. product&lt;br /&gt;
This is a revival of rosy1280's LITA Forum Pre-Conference, but better (because iteration is good) and adapted to c4lib types.&lt;br /&gt;
&lt;br /&gt;
''Interested in Attending''&lt;br /&gt;
&lt;br /&gt;
If you would be interested in attending, please indicate by adding your name (but not email address, etc.) here&lt;br /&gt;
&lt;br /&gt;
# Robin Dean&lt;br /&gt;
# Erin White&lt;br /&gt;
# Andrew Darby&lt;br /&gt;
# Sam Kome&lt;br /&gt;
# Ryan Scherle&lt;br /&gt;
# Will Shaw&lt;br /&gt;
# Liz Milewicz&lt;br /&gt;
# Cynthia &amp;quot;Arty&amp;quot; Ng&lt;br /&gt;
# Laurie Lee Moses (if I don't do the Hackfest for Blacklight)&lt;br /&gt;
# Ranti Junus&lt;br /&gt;
# Bohyun Kim (Afternoon)&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
===Fail4Lib 2014===&lt;br /&gt;
'''Half Day [TBD, probably afternoon]'''&lt;br /&gt;
&lt;br /&gt;
Contacts: &lt;br /&gt;
* Andreas Orphanides, akorphan (at) ncsu.edu&lt;br /&gt;
* Jason Casden, jmcasden (at) ncsu.edu&lt;br /&gt;
&lt;br /&gt;
The task of design (and the work that we do as library coders) is intimately tied to failure. Failures, both big and small, motivate us to create and improve. Failures are also occasionally the result of our work. Understanding and embracing failure, encouraging enlightened risk-taking, and seeking out opportunities to fail and learn are essential to success in our field. At Fail4Lib, we'll talk about our own experiences with projects gone wrong, explore some famous design failures in the real world, and talk about how we can come to terms with the reality of failure, to make it part of our creative process -- rather than something to be feared.&lt;br /&gt;
&lt;br /&gt;
The schedule may include the following:&lt;br /&gt;
&lt;br /&gt;
* Case studies. We'll look at some classic failures from the literature: What can we learn from the mistakes of others?&lt;br /&gt;
* Confessionals, for those willing to share. Talk about your own experiences with rough starts, labor pains, and doomed projects in your own work: What can we learn from our own (and each others') failures?&lt;br /&gt;
* Group therapy. Let's talk about how to deal with risk management, failed projects, experimental endeavors, and more: How can we make ourselves, our colleagues, and our organizations more fault tolerant? How do we make sure we fail as productively as possible?&lt;br /&gt;
&lt;br /&gt;
''Interested in attending''&lt;br /&gt;
&lt;br /&gt;
If you would be interested in attending, please indicate by adding your name (but not email address, etc.) here&lt;br /&gt;
&lt;br /&gt;
#Bret Davidson&lt;br /&gt;
#Mike Graves&lt;br /&gt;
#Ray Schwartz&lt;br /&gt;
#Jason Stirnaman&lt;br /&gt;
#Julia Bauder&lt;br /&gt;
#Linda Ballinger&lt;br /&gt;
#Scott Hanrath&lt;br /&gt;
#Caitlin Christian-Lamb&lt;br /&gt;
#Ian Walls&lt;br /&gt;
#Scott Bacon &lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
===CLLAM @ code4lib===&lt;br /&gt;
'''(Computational Linguistics for Libraries, Archives and Museums)'''&lt;br /&gt;
&lt;br /&gt;
'''Full Day'''&lt;br /&gt;
&lt;br /&gt;
Contacts: &lt;br /&gt;
* Douglas W. Oard (primary), oard (at) umd.edu &lt;br /&gt;
* Corey Harper, corey (dot) harper (at) nyu.edu&lt;br /&gt;
* Robert Sanderson, azaroth42 (at) gmail.com &lt;br /&gt;
* Robert Warren, rwarren (at) math.carleton.ca&lt;br /&gt;
&lt;br /&gt;
We will hack at the intersection of diverse content from Libraries, Archives and Museums and bleeding edge tools from computational linguistics for slicing and dicing that content. Did you just acquire the email archives of a startup company? Maybe you can automatically build an org chart. Have you got metadata in a slew of languages? Perhaps you can search it all using one query. Is name authority control for e-resources getting too costly? Let’s see if entity linking techniques can help. These are just a few teasers. &lt;br /&gt;
&lt;br /&gt;
There’ll be plenty of content and tools supplied, but please bring your own [data] too -- you’ll hack with it in new ways throughout the day. We’ll get started with some lightning talks on what we’ve brought, then we’ll break up into groups to experiment and work on the ideas that appeal. Three guaranteed outcomes: you’ll walk away with new ideas, new tools, and new people you’ll have met.&lt;br /&gt;
&lt;br /&gt;
''Interested in attending''&lt;br /&gt;
&lt;br /&gt;
If you would be interested in attending, please indicate by adding your name (but not email address, etc.) here&lt;br /&gt;
&lt;br /&gt;
# Devon Smith&lt;br /&gt;
# Kevin S. Clarke&lt;br /&gt;
# Jason Stirnaman&lt;br /&gt;
# Joshua Gomez&lt;br /&gt;
# Carolina Garcia&lt;br /&gt;
# Tom Burton-West&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
=== GeoHydra: Managing geospatial content ===&lt;br /&gt;
&lt;br /&gt;
'''Half-day [Afternoon]'''&lt;br /&gt;
&lt;br /&gt;
* Contact: Darren Hardy, Stanford University, drh@stanford.edu&lt;br /&gt;
* Moderator: Bess Sadler, Stanford University, bess@stanford.edu&lt;br /&gt;
&lt;br /&gt;
Do you have digitized maps, GIS datasets like Shapefiles, aerial photography,&lt;br /&gt;
etc., all of which you want to integrate into your digital repository? In this&lt;br /&gt;
workshop, we will discuss how Hydra can provide discovery, delivery, and&lt;br /&gt;
management services for geospatial assets, as well as solicit questions about&lt;br /&gt;
your own GIS projects. We aim to help answer the following questions you might have about putting geospatial data into your Hydra-based digital library:&lt;br /&gt;
&lt;br /&gt;
* What are the types of geospatial data?&lt;br /&gt;
* How to dive into Hydra?&lt;br /&gt;
* How to model geospatial holdings with Hydra?&lt;br /&gt;
* How to discover and view geospatial data?&lt;br /&gt;
* How to build a geospatial data infrastructure?&lt;br /&gt;
* What are common approaches and problems?&lt;br /&gt;
&lt;br /&gt;
''Interested in Attending''&lt;br /&gt;
&lt;br /&gt;
If you would be interested in attending, please indicate by adding your name (but not email address, etc.) here&lt;br /&gt;
&lt;br /&gt;
# Esmé Cowles&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
===Technology, Librarianship, and Gender: Moving the conversation forward===&lt;br /&gt;
'''Full Day'''&lt;br /&gt;
&lt;br /&gt;
Contact: Lisa Rabey lisa @ biblyotheke dot net | [http://twitter.com/pnkrcklibrarian @pnkrcklibrarian]&lt;br /&gt;
&lt;br /&gt;
'''Description'''&lt;br /&gt;
&lt;br /&gt;
Librarianship is largely made up of women, yet women are significantly underrepresented in tech positions, on any level, within libraries themselves. Why? What are we doing to encourage women to become more involved in STEM within librarianship? What kind of message are we sending when library technology keynotes remain almost resolutely male? How are we changing the face of technology, not only within libraries, but within the field itself? How are we training our staff and colleagues in the areas of fairness and removal of bias? Our vendors?&lt;br /&gt;
&lt;br /&gt;
Lots of tough questions.&lt;br /&gt;
&lt;br /&gt;
While the conversation has been going on via various blogs and articles within the last few years, it was given a public face at [http://infotoday.com/il2013/day.asp?day=Monday#session_D105 Internet Librarian 2013], where a panel of 7 (four women, three men) gave personal experiences on the above and then opened up the conversation to the audience. As eye-opening and enriching as the conversation was, a 45-minute panel was not enough. One thing remains clear: we need to keep the conversation moving forward, start making some radical changes in the way we think and act, and harness this to start making real changes within librarianship itself.&lt;br /&gt;
&lt;br /&gt;
Topics to include: fairness, bias, impostor syndrome, codes of conduct, sexual harassment, training opportunities, support systems, mentoring, ally support, and more.&lt;br /&gt;
&lt;br /&gt;
Those attending should expect to begin by opening up the conversation about experiences and what is most needed, then to spend the remaining time putting together live, usable solutions to start implementing, as well as pushing the conversation forward at local levels.&lt;br /&gt;
&lt;br /&gt;
''Interested in Attending''&lt;br /&gt;
&lt;br /&gt;
If you would be interested in attending, please indicate by adding your name (but not email address, etc.) here&lt;br /&gt;
&lt;br /&gt;
=====All Day=====&lt;br /&gt;
1. Kate Kosturski&lt;br /&gt;
&lt;br /&gt;
2. Valerie Aurora&lt;br /&gt;
&lt;br /&gt;
3. Declan Fleming (I'd be good with a half day too)&lt;br /&gt;
&lt;br /&gt;
=====Morning=====&lt;br /&gt;
1. Shaun Ellis&lt;br /&gt;
&lt;br /&gt;
2. Jason Casden&lt;br /&gt;
&lt;br /&gt;
3. Bohyun Kim&lt;br /&gt;
&lt;br /&gt;
=====Afternoon=====&lt;br /&gt;
1. Ayla Stein&lt;br /&gt;
&lt;br /&gt;
2. Heidi Dowding&lt;br /&gt;
&lt;br /&gt;
3. Coral Sheldon-Hess&lt;br /&gt;
&lt;br /&gt;
4. Cory Lown&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
===FileAnalyzer: Rapid Development of File Manipulation Tasks===&lt;br /&gt;
'''&amp;quot;Half-Day&amp;quot; [morning]'''&lt;br /&gt;
* Contact Terry Brady, twb27@georgetown.edu&lt;br /&gt;
&lt;br /&gt;
The FileAnalyzer (https://github.com/Georgetown-University-Libraries/File-Analyzer) is an application designed to solve a number of library automation challenges:&lt;br /&gt;
&lt;br /&gt;
* validating digitized and reformatted files&lt;br /&gt;
* validating vendor statistics for COUNTER compliance&lt;br /&gt;
* preparing collections of digital files for archiving and ingest&lt;br /&gt;
* manipulating ILS import and export files&lt;br /&gt;
&lt;br /&gt;
The File Analyzer application was used by the US National Archives to validate 3.5 million digitized images from the 1940 Census. After implementing a customized ingest workflow within the File Analyzer, the Georgetown University Libraries were able to process an ingest backlog of over a thousand files of digital resources into DigitalGeorgetown, the Libraries’ Digital Collections and Institutional Repository platform. Georgetown is currently developing customized workflows that integrate Apache Tika, BagIt, and MARC conversion utilities.&lt;br /&gt;
&lt;br /&gt;
The File Analyzer is a desktop application with a powerful framework for implementing customized file validation and transformation rules. As new rules are deployed, they are presented to users in an interface that is both easy and powerful to use.&lt;br /&gt;
&lt;br /&gt;
The first half of this session will be targeted to potential users and developers.  The second half of the session will be targeted towards developers who are interested in developing custom rules for the application.&lt;br /&gt;
&lt;br /&gt;
''Session Overview''&lt;br /&gt;
* Overview of the application&lt;br /&gt;
* Running sample file tests/transformations through the application&lt;br /&gt;
* Compiling and building the application&lt;br /&gt;
* Coding a custom file processing task&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
''Interested in Attending''&lt;br /&gt;
&lt;br /&gt;
If you would be interested in attending, please indicate by adding your name (but not email address, etc.) here&lt;br /&gt;
&lt;br /&gt;
#Ray Schwartz&lt;br /&gt;
# Michael Doran&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
===Collecting social media data with Social Feed Manager===&lt;br /&gt;
'''Half-Day [Morning]'''&lt;br /&gt;
&lt;br /&gt;
Contacts: &lt;br /&gt;
* Dan Chudnov, GW Libraries, dchud (at) gwu.edu&lt;br /&gt;
* Dan Kerchner, GW Libraries, kerchner (at) gwu.edu&lt;br /&gt;
* Laura Wrubel, GW Libraries, lwrubel (at) gwu.edu&lt;br /&gt;
&lt;br /&gt;
Social media data is a popular material for research and a new format for building collections.  What does it take to collect meaningfully from Twitter, Tumblr, YouTube, Weibo, Facebook, and other sites?  We will:&lt;br /&gt;
* Introduce options for collections, including both high- and low-end commercial offerings. Discuss what it means to collect these resources, covering boundaries, policies, and workflows required to develop a social media collection program in your institution.&lt;br /&gt;
* Explore the Twitter API in depth, with hands-on opportunities for those with laptops and others who want to team up with them&lt;br /&gt;
* Help you get started using the free [http://gwu-libraries.github.io/social-feed-manager Social Feed Manager] (SFM) app we're developing at GW to create your first collections. We’ll demo its use and demo a clean install (those with environments can follow along)&lt;br /&gt;
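As a sketch of the kind of collecting loop this involves, the Twitter REST API pages backwards through a timeline using a max_id cursor. The helper names below are hypothetical and the fake fetch function stands in for the real HTTP call (e.g. statuses/user_timeline); this is not SFM's actual code.&lt;br /&gt;

```python
def harvest_timeline(fetch_page, per_page=200):
    """Collect a full timeline by walking backwards with max_id,
    the paging pattern the Twitter REST API uses. fetch_page is a
    stand-in for the real authenticated HTTP call."""
    tweets, max_id = [], None
    while True:
        page = fetch_page(count=per_page, max_id=max_id)
        if not page:
            break
        tweets.extend(page)
        # Next request asks for tweets strictly older than the last one seen.
        max_id = page[-1]["id"] - 1
    return tweets

# A fake API for demonstration: ten tweets with ids 10..1, newest first.
def fake_fetch(count, max_id=None):
    ids = [i for i in range(10, 0, -1) if max_id is None or i <= max_id]
    return [{"id": i} for i in ids[:count]]

print(len(harvest_timeline(fake_fetch, per_page=3)))  # -> 10
```

A real harvester also has to respect rate limits and persist each page as it arrives, which is much of what an app like SFM handles for you.&lt;br /&gt;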
&lt;br /&gt;
&lt;br /&gt;
''Interested in Attending''&lt;br /&gt;
&lt;br /&gt;
If you would be interested in attending, please indicate by adding your name (but not email address, etc.) here&lt;br /&gt;
&lt;br /&gt;
# Declan Fleming&lt;br /&gt;
# Esmé Cowles&lt;br /&gt;
# Jason Stirnaman&lt;br /&gt;
# Ray Schwartz&lt;br /&gt;
# Liz Milewicz&lt;br /&gt;
# Ranti Junus&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
=== Intro to Git ===&lt;br /&gt;
'''&amp;quot;Half-Day [tbd - probably afternoon]&amp;quot;''' &lt;br /&gt;
* Contact: Erin Fahy, Stanford University, efahy at stanford.edu&lt;br /&gt;
* TA: Michael Klein, Northwestern University, michael.klein at northwestern.edu&lt;br /&gt;
&lt;br /&gt;
This session will cover the fundamentals of git by discussing/going through (time allowing):&lt;br /&gt;
* what a distributed version control system is&lt;br /&gt;
* what git and GitHub are&lt;br /&gt;
* initializing a repo on a remote server/GitHub&lt;br /&gt;
* cloning an existing repo&lt;br /&gt;
* creating a branch&lt;br /&gt;
* contributing code to a repo&lt;br /&gt;
* how to handle merge conflicts&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
''Interested in Attending''&lt;br /&gt;
&lt;br /&gt;
If you would be interested in attending, please indicate by adding your name (but not email address, etc.) here&lt;br /&gt;
&lt;br /&gt;
# Ray Schwartz&lt;br /&gt;
# Sam Kome&lt;br /&gt;
# Paula Gray-Overtoom&lt;br /&gt;
# Liz Milewicz&lt;br /&gt;
# Michael Doran&lt;br /&gt;
# Caitlin Christian-Lamb&lt;br /&gt;
# [[User:RileyChilds|Riley Childs]]&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
=== Archival discovery and use ===&lt;br /&gt;
'''Full Day''' &lt;br /&gt;
&lt;br /&gt;
Contacts: &lt;br /&gt;
* Tim Shearer, UNC Chapel Hill, tshearer at email.unc.edu, &lt;br /&gt;
* Will Sexton, Duke, will.sexton at duke.edu&lt;br /&gt;
&lt;br /&gt;
This is a full day pre-conference about archival collections and will cover the intersections of archives, workflows, technologies, discovery, and use.&lt;br /&gt;
&lt;br /&gt;
Morning agenda: focused talks around (but not limited to) issues such as:&lt;br /&gt;
* Crowd-sourcing description to enhance collections&lt;br /&gt;
* Linked data and authority&lt;br /&gt;
* Mass digitization and sustainable workflows&lt;br /&gt;
* Digitized objects in context (images and other objects in finding aids)&lt;br /&gt;
* Too many cooks in the kitchen: versioning&lt;br /&gt;
* Global-, intra-, and inter- discovery of archival materials via finding aids &lt;br /&gt;
* and more...&lt;br /&gt;
&lt;br /&gt;
Afternoon agenda:  Focused talks around specific tools followed by general discussion, connections, opportunities, aspirations, and planning.&lt;br /&gt;
&lt;br /&gt;
Tool examples:&lt;br /&gt;
* ArchivesSpace&lt;br /&gt;
* STEADy&lt;br /&gt;
* &amp;quot;RAMP&amp;quot; (Remixing Archival Metadata Project)&lt;br /&gt;
* OpenRefine&lt;br /&gt;
* Aeon&lt;br /&gt;
&lt;br /&gt;
''Interested in Attending''&lt;br /&gt;
&lt;br /&gt;
If you would be interested in attending, please indicate by adding your name (but not email address, etc.) here&lt;br /&gt;
&lt;br /&gt;
Morning:&lt;br /&gt;
* Julia Bauder&lt;br /&gt;
&lt;br /&gt;
Afternoon:&lt;br /&gt;
* your name&lt;br /&gt;
&lt;br /&gt;
All day:&lt;br /&gt;
&lt;br /&gt;
# Josh Wilson&lt;br /&gt;
# Sam Kome&lt;br /&gt;
# Linda Ballinger&lt;br /&gt;
# Caitlin Christian-Lamb&lt;br /&gt;
# Laurie Lee Moses (seriously hard to decide here!)&lt;br /&gt;
# David Bass&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
===AV Content Slam===&lt;br /&gt;
'''Half-Day [morning]'''&lt;br /&gt;
Contacts:&lt;br /&gt;
* Kara Van Malssen, kara (at) avpreserve.com&lt;br /&gt;
* Lauren Sorenson, laurens (at) bavc.org&lt;br /&gt;
* Steven Villereal, villereal (at) gmail.com&lt;br /&gt;
A morning BarCamp/unconference for practitioners and coders who work with audiovisual content. The agenda will be attendee-driven, with a focus on sharing, synthesizing, and improving workflow strategies and documentation for software-based approaches to wrangling and providing access to audio and video content.&lt;br /&gt;
Possible topics of discussion might include:&lt;br /&gt;
* Use of format identification and characterization/metadata extraction tools for AV&lt;br /&gt;
* Creating and using time-based metadata&lt;br /&gt;
* Managing (moving, fixity checking, etc.) massive files (like uncompressed video)&lt;br /&gt;
For a better idea of the topics and concerns that have informed some past AV-themed events, check out the event wikis for [http://wiki.curatecamp.org/index.php/CURATEcamp_AVpres_2013 CURATEcamp AVpres 2013] as well as the [http://wiki.curatecamp.org/index.php/Association_of_Moving_Image_Archivists_%26_Digital_Library_Federation_Hack_Day_2013 AMIA/DLF 2013 Hack Day].&lt;br /&gt;
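On the fixity-checking topic above, the core technique is streaming a checksum in chunks so that an uncompressed-video-sized file never has to fit in memory. A minimal Python sketch of the pattern (function names are illustrative, not taken from any particular tool):&lt;br /&gt;

```python
import hashlib

def file_checksum(path, algorithm="sha256", chunk_size=1 << 20):
    """Stream a file through a hash in 1 MiB chunks, so even very
    large AV files are checked with constant memory."""
    h = hashlib.new(algorithm)
    with open(path, "rb") as f:
        # iter() with a sentinel keeps reading until read() returns b"".
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_fixity(path, expected_digest, algorithm="sha256"):
    """Compare a file's current digest to the one recorded at ingest."""
    return file_checksum(path, algorithm) == expected_digest
```

In practice the digest recorded at ingest (for example in a BagIt manifest) is what a fixity audit re-checks on a schedule.&lt;br /&gt;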
&lt;br /&gt;
If you would be interested in attending, please indicate by adding your name (but not email address, etc.) here:&lt;br /&gt;
&lt;br /&gt;
# A. Soroka&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
===OCLC Web Services Hackfest===&lt;br /&gt;
&lt;br /&gt;
&amp;quot;Half-Day&amp;quot; [afternoon]&lt;br /&gt;
&lt;br /&gt;
Contact: Shelley Hostetler, Community Manager, Developer Network hostetls[at]oclc.org&lt;br /&gt;
&lt;br /&gt;
This half-day hackfest will explore some of the OCLC Developer Network web services. We will provide an overview of some of the common topics such as the general REST-based architecture for most services and how to use some new authentication clients. The group can then decide to take a deep dive into a particular API and/or write a client library for the community.&lt;br /&gt;
&lt;br /&gt;
If you would be interested in attending, please indicate by adding your name (but not email address, etc.) here:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Obey the Testing Goat!: Test Driven Web Development From The Ground Up===&lt;br /&gt;
'''Half-Day [tbd - probably afternoon]'''&lt;br /&gt;
* Contact [[User:Mredar|Mark Redar]], mredar[at]gmail.com&lt;br /&gt;
&lt;br /&gt;
Test-driven development is a proven method for producing better-quality code, but I've found it hard to follow a strict TDD methodology when starting new web projects. How do you write that first test when no code or web pages exist yet?&lt;br /&gt;
&lt;br /&gt;
In this session, we will follow the excellent book [http://shop.oreilly.com/product/0636920029533.do &amp;quot;Test-Driven Web Development with Python&amp;quot;] to create a simple web site in Django, following TDD from the first character typed. Come ready to code and test. No prior knowledge of Python or Django is required.&lt;br /&gt;
&lt;br /&gt;
By the end of this session, you should be able to [http://www.obeythetestinggoat.com/ &amp;quot;Obey the Testing Goat&amp;quot;] from start to finish on your next project.&lt;br /&gt;
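The book itself drives a Django site with Selenium functional tests; as a dependency-free illustration of the red/green rhythm it teaches (the page content and function name here are made up, not the book's code), one cycle looks like:&lt;br /&gt;

```python
import unittest

# Step 1 (red): write the test first, before any application code exists.
class HomePageTest(unittest.TestCase):
    def test_home_page_mentions_site_name(self):
        # render_home_page does not exist the first time this runs,
        # so the test fails -- and that failure drives the design.
        self.assertIn("To-Do lists", render_home_page())

# Step 2 (green): write the simplest code that makes the test pass.
def render_home_page():
    # A stand-in for a Django view; just enough to go green.
    return "<html><title>To-Do lists</title></html>"

if __name__ == "__main__":
    unittest.main()
```

Step 3 is refactoring with the test as a safety net, then repeating the loop for the next small behavior.&lt;br /&gt;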
&lt;br /&gt;
If you would be interested in attending, please indicate by adding your name (but not email address, etc.) here:&lt;br /&gt;
&lt;br /&gt;
# Charlie Morris (NCSU)&lt;br /&gt;
# Jason Stirnaman&lt;br /&gt;
# Joshua Gomez&lt;br /&gt;
# Liz Milewicz&lt;br /&gt;
# Scott Hanrath&lt;br /&gt;
# Mike Beccaria&lt;br /&gt;
# Sean Aery&lt;br /&gt;
# Carolina Garcia&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
===Summon Hackfest and ProQuest Discovery &amp;amp; Management Technologies Users Group ===&lt;br /&gt;
&lt;br /&gt;
Presenter: Eddie Newwirth and presenters from Summon libraries&lt;br /&gt;
Contact: Scott Schuetze (first DOT last @ serialssolutions. com)&lt;br /&gt;
&lt;br /&gt;
The morning hackfest (10:30am-12pm) will be a great opportunity for libraries using the Summon service to share their creative customizations and code and exchange ideas about ways they can leverage the Summon API to better meet the needs of their users.&lt;br /&gt;
 &lt;br /&gt;
The ProQuest Discovery &amp;amp; Management Technologies User Group (1pm-4pm) will feature updates from product managers, presentations by several libraries sharing different aspects of their experiences with ProQuest discovery and management services, and an interactive session designed to let you share your stories and discuss ideas.&lt;br /&gt;
 &lt;br /&gt;
The Summon Hackfest and User Group are open to all libraries currently using ProQuest discovery and management services (Intota, Summon, Ulrich’s or the 360 suite of services), whether they are attending Code4Lib or are just in the area.&lt;br /&gt;
 &lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
[[:Category:Code4Lib2014]]&lt;/div&gt;</summary>
		<author><name>Tburtonw</name></author>	</entry>

	<entry>
		<id>https://wiki.code4lib.org/index.php?title=2013_preconference_proposals&amp;diff=29237</id>
		<title>2013 preconference proposals</title>
		<link rel="alternate" type="text/html" href="https://wiki.code4lib.org/index.php?title=2013_preconference_proposals&amp;diff=29237"/>
				<updated>2012-12-07T18:48:11Z</updated>
		
		<summary type="html">&lt;p&gt;Tburtonw: /* Solr 4 In Depth */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Proposals '''now closed'''.&lt;br /&gt;
&lt;br /&gt;
Spaces available: 4+ Rooms&lt;br /&gt;
&lt;br /&gt;
Please follow the formatting guidelines:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
=== Talk Title ===&lt;br /&gt;
 &lt;br /&gt;
* Presenter/Leader, affiliation (optional), and email address (mandatory!)&lt;br /&gt;
* Second Presenter/Leader, affiliation, email address, if applicable&lt;br /&gt;
&lt;br /&gt;
Description.&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Full Day==&lt;br /&gt;
&lt;br /&gt;
===Drupal4lib Sub-con Barcamp===&lt;br /&gt;
&lt;br /&gt;
* Contact [[User:highermath|Cary Gordon]], cgordon@chillco.com or &lt;br /&gt;
* [[User:cdmo|Charlie Morris]], NCSU Libraries, cdmorris@ncsu.edu&lt;br /&gt;
&lt;br /&gt;
This will be a full day of self-selected barcamp style sessions. Anyone who wants to present can write down the topic on an index card and, after the keynote, we will vote to choose what we want to see. Attendees can also pick a topic and attempt to talk someone else into presenting on it.&lt;br /&gt;
&lt;br /&gt;
If we run out of topics, we will pay homage to the project by testing patches for Drupal 8. It is easy, and we will show you how to do this invaluable task.&lt;br /&gt;
&lt;br /&gt;
Local Drupal uber-ninja Larry Garfield will stop by to answer questions and give us some guidance.&lt;br /&gt;
&lt;br /&gt;
'''I plan on attending:'''&lt;br /&gt;
* Kevin Reiss, Princeton University Library, kr2 at princeton.edu (afternoon only)&lt;br /&gt;
* Christina Salazar (afternoon only)&lt;br /&gt;
* Sarah Dooley (afternoon)&lt;br /&gt;
&lt;br /&gt;
==Half Day Morning==&lt;br /&gt;
=== Open space session ===&lt;br /&gt;
&lt;br /&gt;
* Dan Chudnov, dchud at gwu edu&lt;br /&gt;
&lt;br /&gt;
The rest of code4libcon is pretty well structured these days; come in the morning for a few hours of old-school [http://en.wikipedia.org/wiki/Open-space_technology open space technology] unconference.  Bring a rough talk or idea you want to share or questions you have or something you want to learn about or discuss with other people, and be ready to tell us about it.  Use it as extra prep time for your upcoming prepared or lightning talk if you want.  We'll plan the morning out a little bit at the beginning, but not too much.  What we do will be up to the people there in the room.&lt;br /&gt;
&lt;br /&gt;
If there's interest, we could start with a &amp;quot;welcome to code4lib&amp;quot; introductory session for newcomers.&lt;br /&gt;
&lt;br /&gt;
'''I plan on attending:'''&lt;br /&gt;
* Devon Smith&lt;br /&gt;
* First and last name&lt;br /&gt;
* Esmé Cowles&lt;br /&gt;
* Jason Casden&lt;br /&gt;
&lt;br /&gt;
=== Delivery services ===&lt;br /&gt;
* Ted Lawless, Brown University Library, tlawless at brown edu.  &lt;br /&gt;
* Kevin Reiss, Princeton University Library, kr2 at princeton edu.&lt;br /&gt;
&lt;br /&gt;
Are you interested in making it easier for users to obtain copies of known items?  Do you feel your OpenURL and Interlibrary Loan software could be streamlined?  This pre-conference workshop will focus on providing services that deliver content to users.  Discovery systems are doing a better job of exposing library holdings, but there's still a lot of work to do to actually get the content into users' hands.  &lt;br /&gt;
&lt;br /&gt;
Possible topics/activities include:&lt;br /&gt;
* panel discussion of what some libraries have done in this area&lt;br /&gt;
* comparisons of different approaches to addressing delivery &lt;br /&gt;
* overview of tools available &lt;br /&gt;
* sharing of strategies and experiences&lt;br /&gt;
* time to work with and review open source code in this area. Some possible tools to install and test out: [https://github.com/team-umlaut/umlaut Umlaut], [https://github.com/lawlesst/py360link py360link]. &lt;br /&gt;
 &lt;br /&gt;
Resources and background information:&lt;br /&gt;
* [https://github.com/team-umlaut/umlaut/wiki/What-is-Umlaut-anyway What-is-Umlaut-anyway] &lt;br /&gt;
* [http://journal.code4lib.org/articles/7308 Hacking 360 Link: A hybrid approach]&lt;br /&gt;
* [http://journal.code4lib.org/articles/108 Auto-Populating an ILL form with the Serial Solutions Link Resolver API]&lt;br /&gt;
* [http://lawlesst.github.com/notebook/delivery.html Focusing on Delivery]&lt;br /&gt;
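As background for the hands-on portion: an OpenURL 1.0 (Z39.88-2004) link is just a base resolver URL plus key-encoded-value pairs. A minimal sketch (the resolver base URL below is invented; real ones are institution-specific):

```python
from urllib.parse import urlencode

# Hypothetical resolver base URL -- substitute your institution's.
BASE = "https://resolver.example.edu/openurl"

# A few common OpenURL 1.0 (Z39.88-2004) KEV keys for a journal article.
params = {
    "url_ver": "Z39.88-2004",
    "rft_val_fmt": "info:ofi/fmt:kev:mtx:journal",
    "rft.jtitle": "Code4Lib Journal",
    "rft.atitle": "Hacking 360 Link",
    "rft.issn": "1940-5758",
}

link = BASE + "?" + urlencode(params)
```

Tools like Umlaut sit on the other end of such a link, parsing the context object back out and deciding which delivery services to offer.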
&lt;br /&gt;
'''I plan on attending:'''&lt;br /&gt;
* Ken Varnum&lt;br /&gt;
* Ayla Stein&lt;br /&gt;
&lt;br /&gt;
=== Intro to Blacklight ===&lt;br /&gt;
* Bess Sadler, Stanford University Library (bess at stanford.edu)&lt;br /&gt;
* Justin Coyne, MediaShelf (justin.coyne at yourmediashelf.com)&lt;br /&gt;
&lt;br /&gt;
Blacklight (http://projectblacklight.org) is a free and open source discovery interface built on Solr and Ruby on Rails. It is used by institutions such as Stanford University, the University of Virginia, WGBH, Johns Hopkins University, the Rock and Roll Hall of Fame, and an ever-expanding community of adopters and contributors. Blacklight can be used as a front-end discovery solution for an ILS or the contents of a digital repository, or to provide a unified discovery solution for many siloed collections. In this workshop we will cover the basics of Solr indexing and searching, setting up and customizing Blacklight, and leave time for Q&amp;amp;A around local issues people might encounter. &lt;br /&gt;
&lt;br /&gt;
Note: this workshop can be a standalone intro, or attendees can follow up with the intro to hydra workshop in the afternoon.&lt;br /&gt;
&lt;br /&gt;
'''I plan on attending:'''&lt;br /&gt;
* Linda Ballinger&lt;br /&gt;
* Terry Brady&lt;br /&gt;
* First and last name&lt;br /&gt;
&lt;br /&gt;
=== RailsBridge Intro to Ruby on Rails ===&lt;br /&gt;
* Jason Ronallo, North Carolina State University Libraries, jnronall@ncsu.edu&lt;br /&gt;
* Mark Bussey, Data Curation Experts (mark at curationexperts.com)&lt;br /&gt;
* Shaun Ellis (helper), Princeton University Library, shaune@princeton.edu&lt;br /&gt;
* Ross Singer, Talis, rossfsinger@gmail.com&lt;br /&gt;
* Adam Wead (helper), Rock and Roll Hall of Fame, awead@rockhall.org&lt;br /&gt;
* Anyone else want to come and help folks? Contact Jason.&lt;br /&gt;
&lt;br /&gt;
RailsBridge comes to code4lib! We'll follow the RailsBridge curriculum (http://railsbridge.org) to provide a gentle introduction to Ruby on Rails. Topics covered include an introduction to the Ruby language, the Rails framework, and version control with git. Participants will build a working Rails application. &lt;br /&gt;
&lt;br /&gt;
There will be some pre-preconference preparation needed so that we can effectively use our time. Details to come.&lt;br /&gt;
&lt;br /&gt;
* Note: Attendees can follow up with the Intro to Blacklight afternoon session, which will be tailored for folks new to Ruby&lt;br /&gt;
&lt;br /&gt;
'''I plan on attending:'''&lt;br /&gt;
* First and last name&lt;br /&gt;
* Shawn Kiewel&lt;br /&gt;
* Jon Stroop&lt;br /&gt;
* Christina Salazar&lt;br /&gt;
* Karen Coombs - coombsk{at}oclc{dot}org&lt;br /&gt;
* Becky Yoose&lt;br /&gt;
* Jeremy Morse&lt;br /&gt;
* Julia Bauder&lt;br /&gt;
* Chung Kang&lt;br /&gt;
* Karen Miller&lt;br /&gt;
* Betsy Coles&lt;br /&gt;
* Jay Luker&lt;br /&gt;
* Santi Thompson&lt;br /&gt;
* Sarah Dooley&lt;br /&gt;
&lt;br /&gt;
===Intro to NoSQL Databases===&lt;br /&gt;
* Joshua Gomez, George Washington University, jngomez at gwu edu&lt;br /&gt;
&lt;br /&gt;
Since Google published its paper on BigTable in 2006, alternatives to the traditional relational database model have been growing in both variety and popularity. These new databases (often referred to as NoSQL databases) excel at handling problems faced by modern information systems that the traditional relational model cannot. They are particularly popular among organizations tackling so-called &amp;quot;Big Data&amp;quot; problems. However, there are always tradeoffs involved when making such dramatic changes. Understanding how these different kinds of databases are designed and what they can offer is essential to the decision-making process. In this precon I will discuss some of the various types of new databases (key-value, columnar, document, graph) and walk through examples or exercises using some of their open source implementations, like Riak, HBase, CouchDB, and Neo4j.&lt;br /&gt;
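The key-value/document distinction can be illustrated without any real database. A rough sketch using plain Python structures as stand-ins (not the Riak or CouchDB client APIs):

```python
# Illustrative only: plain dicts/lists standing in for real stores.

# Key-value model (e.g. Riak): values are opaque blobs, lookup by key only.
kv_store = {}
kv_store["user:42"] = b'{"name": "Ada", "tags": ["solr", "rails"]}'

# Document model (e.g. CouchDB): the store understands document structure,
# so queries can reach inside fields instead of only matching keys.
doc_store = [
    {"_id": "42", "name": "Ada", "tags": ["solr", "rails"]},
    {"_id": "43", "name": "Grace", "tags": ["python"]},
]

def find_by_tag(docs, tag):
    """A CouchDB view would express roughly this logic server-side."""
    return [d for d in docs if tag in d.get("tags", [])]

hits = find_by_tag(doc_store, "python")
```

Columnar and graph stores shift the tradeoff again: wide sparse rows for scan-heavy workloads (HBase), and first-class relationships for traversal queries (Neo4j).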
&lt;br /&gt;
'''I plan on attending:'''&lt;br /&gt;
* First and last name&lt;br /&gt;
* Esha Datta&lt;br /&gt;
* Trevor Thornton&lt;br /&gt;
* Michael Doran&lt;br /&gt;
* Ray Schwartz&lt;br /&gt;
* Kevin Clarke&lt;br /&gt;
* Andreas Orphanides&lt;br /&gt;
* Tommy Ingulfsen&lt;br /&gt;
&lt;br /&gt;
==Half Day Afternoon==&lt;br /&gt;
=== Data Visualization Hackfest ===&lt;br /&gt;
* Chris Beer, cabeer at stanford.edu&lt;br /&gt;
* Dan Chudnov, dchud at gwu edu&lt;br /&gt;
&lt;br /&gt;
* Description: Want to hack/design/plan/document on a team of people who enjoy learning by creating?  Interested in data visualization?  Well, this hackfest is for you.  Not familiar with the concept of a hackfest?  See Roy Tennant's [http://www.libraryjournal.com/article/CA332564.html &amp;quot;Where Librarians Go To Hack&amp;quot;] and the page for the [http://access2010.lib.umanitoba.ca/node/3.html Access 2010 Hackfest].  We propose a half-day hackfest with a focus on visualizing library data -- think stuff like library catalog data, access/circulation statistics, etc. Here's how it works, roughly: &lt;br /&gt;
 - we'll (you'll!) do lightning tutorials for some data visualization tools, toolkits (R? d3js? ?), datasets.&lt;br /&gt;
 - we'll separate into groups and hack on stuff.&lt;br /&gt;
 - at the end of the day, we'll present our progress.&lt;br /&gt;
&lt;br /&gt;
Not a code hacker?  No worries; all skill sets and backgrounds are valuable! &lt;br /&gt;
&lt;br /&gt;
'''I plan on attending:'''&lt;br /&gt;
* First and last name&lt;br /&gt;
* Devon Smith&lt;br /&gt;
* Esha Datta&lt;br /&gt;
* Ray Schwartz&lt;br /&gt;
* Karen Coombs - coombsk{at}oclc{dot}org&lt;br /&gt;
* Julia Bauder&lt;br /&gt;
* Jason Stirnaman (jstirnaman at kumc.edu)&lt;br /&gt;
* Joshua Gomez&lt;br /&gt;
* Ayla Stein&lt;br /&gt;
&lt;br /&gt;
=== Intro to Hydra ===&lt;br /&gt;
* Adam Wead, Rock and Roll Hall of Fame (awead at rockhall.org)&lt;br /&gt;
* Mike Giarlo, Penn State Information Technology Services (michael at psu.edu)&lt;br /&gt;
* Mark Bussey, Data Curation Experts (mark at curationexperts.com)&lt;br /&gt;
&lt;br /&gt;
Hydra (http://projecthydra.org) is a free and open source repository solution that is being used by institutions on both sides of the North Atlantic to provide access to their digital content.  Hydra provides a versatile and feature-rich environment for end-users and repository administrators alike. Leveraging Blacklight as its front-end discovery interface, the Hydra project provides a suite of software components, data models, and design patterns for building a robust and sustainable digital repository, as well as a community of support for ongoing development. This workshop will provide an introduction to the Hydra project and its software components. Attendees will leave with enough knowledge to get started building their own local repository solutions. This workshop will be led by Adam Wead of the Rock and Roll Hall of Fame. &lt;br /&gt;
&lt;br /&gt;
'''I plan on attending:'''&lt;br /&gt;
* Jeremy Prevost&lt;br /&gt;
* Dennis Ogg&lt;br /&gt;
* Linda Ballinger&lt;br /&gt;
* Terry Brady&lt;br /&gt;
* Betsy Coles&lt;br /&gt;
* First and last name&lt;br /&gt;
&lt;br /&gt;
=== Intro to Blacklight ===&lt;br /&gt;
* Bess Sadler, Stanford University Library (bess at stanford.edu)&lt;br /&gt;
* Justin Coyne, MediaShelf (justin.coyne at yourmediashelf.com)&lt;br /&gt;
* Jason Ronallo, NC State (jronallo at gmail.com)&lt;br /&gt;
* Shaun Ellis (helper), Princeton University Library, (shaune@princeton.edu)&lt;br /&gt;
&lt;br /&gt;
Blacklight (http://projectblacklight.org) is a free and open source discovery interface built on Solr and Ruby on Rails. It is used by institutions such as Stanford University, NC State, WGBH, Johns Hopkins University, the Rock and Roll Hall of Fame, and an ever-expanding community of adopters and contributors. Blacklight can be used as a front-end discovery solution for an ILS or the contents of a digital repository, or to provide a unified discovery solution for many siloed collections. In this workshop we will cover the basics of Solr indexing and searching, setting up and customizing Blacklight, and leave time for Q&amp;amp;A around local issues people might encounter. &lt;br /&gt;
&lt;br /&gt;
Note: this workshop will be tailored as a follow-on to the morning's RailsBridge Intro to Ruby on Rails workshop, but everyone is welcome&lt;br /&gt;
&lt;br /&gt;
'''I plan on attending:'''&lt;br /&gt;
* First and last name&lt;br /&gt;
* Shawn Kiewel&lt;br /&gt;
* Jon Stroop&lt;br /&gt;
* Jeremy Morse&lt;br /&gt;
* Karen Miller&lt;br /&gt;
* Tommy Ingulfsen&lt;br /&gt;
* Chung Kang&lt;br /&gt;
* Santi Thompson&lt;br /&gt;
&lt;br /&gt;
=== DPLA Intro/Hacking ===&lt;br /&gt;
 &lt;br /&gt;
* Presenter(s)/Leader(s): TBD&lt;br /&gt;
* Guy Who'd Be Interested in Helping: Jay Luker, Smithsonian Astrophysics Data System (jluker at cfa.harvard.edu)&lt;br /&gt;
&lt;br /&gt;
This is a stub proposal entered solely to beat the submission deadline. I think there would be sufficient interest in this session, but I only thought of it yesterday and haven't had time to coordinate with actual DPLA'ers and confirm that any of them are definitely coming.&lt;br /&gt;
&lt;br /&gt;
'''I plan on attending:'''&lt;br /&gt;
* First and last name&lt;br /&gt;
&lt;br /&gt;
=== Fail4lib ===&lt;br /&gt;
* Jason Casden, NCSU Libraries (jmcasden at ncsu.edu)&lt;br /&gt;
* Andreas Orphanides, NCSU Libraries (akorphan at ncsu.edu)&lt;br /&gt;
&lt;br /&gt;
The Code4lib community is full of driven people who embrace the risks that are often associated with new projects. While these traits lead to the incredible projects that are presented at Code4lib, creative technical work also often leads to unexpected, vexing, or disappointing results even from eventually successful projects (however you define the term). Learning more about how our colleagues deal with failure in various contexts could lead to the development of better methods for communicating the value of productive failure, modifying project plans (&amp;quot;The Pivot&amp;quot;), and failing more cheaply.&lt;br /&gt;
&lt;br /&gt;
Hopefully we can define the format as a group, but a fairly high level of participation is crucial if this is to be a worthwhile preconference. Some possible agenda items that could be mixed and matched to fill the afternoon:&lt;br /&gt;
&lt;br /&gt;
# Given willing presenters, a series of 10-20 minute presentations that go into some depth about specific failures.&lt;br /&gt;
# Depending on the number of participants, either a multi- or single-track series of unconference-like themed discussions on various aspects of failure, possibly including themes like:&lt;br /&gt;
#* Technical failure&lt;br /&gt;
#* Failure to effectively address a real user need&lt;br /&gt;
#* Overinvestment&lt;br /&gt;
#* Outreach/Promotion failure&lt;br /&gt;
#* Design/UX failure&lt;br /&gt;
#* Project team communication failure&lt;br /&gt;
#* Missed opportunities (risk-averse failure)&lt;br /&gt;
#* Successes gleaned from failures&lt;br /&gt;
# A panel of participants who have prepared in advance to answer moderator and audience questions about their experience with failure.&lt;br /&gt;
# A prepared reading assignment that we could all forget to read, creating a shared fail in order to start the preconference on the right foot.&lt;br /&gt;
&lt;br /&gt;
I'll serve as a moderator (if needed) and participant and would welcome more organizers. I am happy to be outvoted by participants on any of these points--I just want to get us talking about our screw-ups, blind spots, and anvils dropping from the sky.&lt;br /&gt;
&lt;br /&gt;
'''I plan on attending:'''&lt;br /&gt;
* First and last name&lt;br /&gt;
* Becky Yoose&lt;br /&gt;
&lt;br /&gt;
=== Solr 4 In Depth ===&lt;br /&gt;
* Contact: Erik Hatcher (erik.hatcher at lucidworks.com)&lt;br /&gt;
&lt;br /&gt;
The long-awaited and much-anticipated Solr 4 has been released!   It's a really big deal.  There are so many improvements, it makes the head spin.  This session will cover the major feature improvements, from Lucene's flexible indexing and scoring API up through SolrCloud, in a digestible half-day format.&lt;br /&gt;
&lt;br /&gt;
'''I plan on attending:'''&lt;br /&gt;
* First and last name&lt;br /&gt;
* Esmé Cowles&lt;br /&gt;
* Jon Stroop&lt;br /&gt;
* Adam Constabars&lt;br /&gt;
* Kevin Clarke&lt;br /&gt;
* Jacob Andresen&lt;br /&gt;
* Ted Lawless&lt;br /&gt;
* Jay Luker&lt;br /&gt;
* Tom Burton-West&lt;br /&gt;
&lt;br /&gt;
[[Category:Code4Lib2013]]&lt;/div&gt;</summary>
		<author><name>Tburtonw</name></author>	</entry>

	<entry>
		<id>https://wiki.code4lib.org/index.php?title=2013_talks_proposals&amp;diff=28279</id>
		<title>2013 talks proposals</title>
		<link rel="alternate" type="text/html" href="https://wiki.code4lib.org/index.php?title=2013_talks_proposals&amp;diff=28279"/>
				<updated>2012-11-08T21:39:45Z</updated>
		
		<summary type="html">&lt;p&gt;Tburtonw: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;'''Deadline has been extended by request due to the hurricane/storm.'''&lt;br /&gt;
&lt;br /&gt;
Deadline for talk submission is ''Friday, November 9'' at 11:59pm ET. We ask that no changes be made after this point, so that every voter reads the same thing. You can update your description again after voting closes.&lt;br /&gt;
&lt;br /&gt;
Prepared talks are 20 minutes (including setup and questions), and focus on one or more of the following areas:&lt;br /&gt;
* tools (some cool new software, software library or integration platform)&lt;br /&gt;
* specs (how to get the most out of some protocols, or proposals for new ones)&lt;br /&gt;
* challenges (one or more big problems we should collectively address)&lt;br /&gt;
&lt;br /&gt;
The community will vote on proposals using the criteria of:&lt;br /&gt;
* usefulness&lt;br /&gt;
* newness&lt;br /&gt;
* geekiness&lt;br /&gt;
* uniqueness&lt;br /&gt;
* awesomeness&lt;br /&gt;
&lt;br /&gt;
Please follow the formatting guidelines:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
== Talk Title ==&lt;br /&gt;
 &lt;br /&gt;
* Speaker's name, affiliation, and email address&lt;br /&gt;
* Second speaker's name, affiliation, email address, if applicable&lt;br /&gt;
&lt;br /&gt;
Abstract of no more than 500 words.&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== All Teh Metadatas Re-Revisited ==&lt;br /&gt;
 &lt;br /&gt;
* Esme Cowles, UC San Diego Library, escowles AT ucsd DOT edu&lt;br /&gt;
* Matt Critchlow, UC San Diego Library, mcritchlow AT ucsd DOT edu&lt;br /&gt;
* Bradley Westbrook, UC San Diego Library, bdwestbrook AT ucsd DOT edu&lt;br /&gt;
&lt;br /&gt;
Last year Declan Fleming presented ALL TEH METADATAS and reviewed our UC San Diego Library Digital Asset Management system and RDF data model. You may be shocked to hear that all that metadata wasn't quite enough to handle increasingly complex digital library and research data in an elegant way. Our ad-hoc, 8-year-old data model has also been added to in inconsistent ways, and our librarians and developers have not always been perfectly in sync in understanding how the data model has evolved over time.&lt;br /&gt;
&lt;br /&gt;
 &lt;br /&gt;
In this presentation we'll review our process of locking a team of librarians and developers in a room to figure out a new data model, from domain definition through building and testing an OWL ontology. We'll also cover the challenges we ran into, including the review of existing controlled vocabularies and ontologies, or lack thereof, and the decisions made to cover the gaps. Finally, we'll discuss how we engaged the digital library community for feedback and what we have to do next. We all know that Things Fall Apart; this is our attempt at Doing Better This Time.&lt;br /&gt;
&lt;br /&gt;
== Modernizing VuFind with Zend Framework 2 ==&lt;br /&gt;
&lt;br /&gt;
* Demian Katz, Villanova University, demian DOT katz AT villanova DOT edu&lt;br /&gt;
&lt;br /&gt;
When setting goals for a new major release of VuFind, use of an existing web framework was an important decision to encourage standardization and avoid reinvention of the wheel.  Zend Framework 2 was selected as providing the best balance between the cutting-edge (ZF2 was released in 2012) and stability (ZF1 has a long history and many adopters).  This talk will examine some of the architecture and features of the new framework and discuss how it has been used to improve the VuFind project.&lt;br /&gt;
&lt;br /&gt;
== Did You Really Say That Out Loud?  Tools and Techniques for Safe Public WiFi Computing  ==&lt;br /&gt;
 &lt;br /&gt;
* [[User:DataGazetteer|Peter Murray]], LYRASIS, Peter.Murray@lyrasis.org&lt;br /&gt;
&lt;br /&gt;
Public WiFi networks, even those that have passwords, are nothing more than an old-time [https://en.wikipedia.org/wiki/Party_line_(telephony) party line]: whatever you say can be easily heard by anyone nearby.  &lt;br /&gt;
Remember [https://en.wikipedia.org/wiki/Firesheep Firesheep]?  &lt;br /&gt;
It was an extension to Firefox that demonstrated how easy it was to snag session cookies and impersonate someone else.&lt;br /&gt;
So what are you sending out over the airwaves, and what techniques are available to prevent eavesdropping?&lt;br /&gt;
This talk will demonstrate tools and techniques for desktop and mobile operating systems that you should be using right now -- right here at Code4Lib -- to protect your data and your network activity.&lt;br /&gt;
&lt;br /&gt;
== Drupal 8 Preview — Symfony and Twig ==&lt;br /&gt;
 &lt;br /&gt;
* [[User:Highermath|Cary Gordon]], The Cherry Hill Company, cgordon@chillco.com&lt;br /&gt;
&lt;br /&gt;
Drupal is a great platform for building web applications. Last year, the core developers decided to adopt the Symfony PHP framework, because it would lay the groundwork for the modernization (and de-PHP4ification) of the Drupal codebase. As I write this, the Symfony ClassLoader and HttpFoundation libraries are committed to Drupal core, with more elements likely before Drupal 8 code freeze.&lt;br /&gt;
&lt;br /&gt;
It seems almost certain that the Twig templating engine will supplant PHPtemplate as the core Drupal template engine. Twig is a powerful, secure theme building tool that removes PHP from the templating system, the result being a very concise and powerful theme layer.&lt;br /&gt;
&lt;br /&gt;
Symfony and Twig have a common creator, Fabien Potencier, whose overall goal is to rid the world of the excesses of PHP 4.&lt;br /&gt;
&lt;br /&gt;
== Neat! But How Do We Do It? - The Real-world Problem of Digitizing Complex Corporate Digital Objects ==&lt;br /&gt;
&lt;br /&gt;
* Matthew Mariner, University of Colorado Denver, Auraria Library, matthew.mariner@ucdenver.edu&lt;br /&gt;
&lt;br /&gt;
Isn't it neat when you discover that you are the steward of dozens of Sanborn Fire Insurance Maps, hundreds of issues of a city directory, and thousands of photographs of persons in either aforementioned medium? And it's even cooler when you decide, &amp;quot;Let's digitize these together and make them one big awesome project to support public urban history&amp;quot;?  Unfortunately it's a far more difficult process than one imagines at inception and, sadly, doesn't always come to fruition.  My goal here is to discuss the technological (and philosophical) problems librarians and archivists face when trying to create ultra-rich complex corporate digital projects, or, rather, projects consisting of at least three facets interrelated by theme.  I intend to address these problems by suggesting management solutions, web workarounds, and, perhaps, a philosophy that might help in determining whether to even move forward or not.  Expect a few case studies of &amp;quot;grand ideas crushed by technological limitations&amp;quot; and &amp;quot;projects on the right track&amp;quot; to follow.   &lt;br /&gt;
 &lt;br /&gt;
== ResCarta Tools building a standard format for audio archiving, discovery and display ==&lt;br /&gt;
&lt;br /&gt;
* [[User:sarney|John Sarnowski]], The ResCarta Foundation, john.sarnowski@rescarta.org&lt;br /&gt;
&lt;br /&gt;
The free ResCarta Toolkit has been used by libraries and archives around the world to host city directories, newspapers, and historic photographs and by aerospace companies to search and find millions of engineering documents.  Now the ResCarta team has released audio additions to the toolkit. &lt;br /&gt;
&lt;br /&gt;
Create full-text searchable oral histories, news stories, and interviews, or build an archive of lectures, all done to Library of Congress standards.  The included transcription editor allows for accurate correction of the data conversion tool’s output.  Build true archives of text, photos, and audio.  A single audio file carries the embedded Axml metadata, transcription, and word-location information, and checks out in FADGI's BWF MetaEdit.&lt;br /&gt;
&lt;br /&gt;
ResCarta-Web presents your audio to IE, Chrome, Firefox, Safari, and Opera browsers with full playback and word-search capability. Display format is Ogg! &lt;br /&gt;
&lt;br /&gt;
You have to see this tool in action.  Twenty minutes from an audio file to transcribed, text-searchable website.  Be there or be L seven (Yeah, I’m that old)   &lt;br /&gt;
&lt;br /&gt;
== Format Designation in MARC Records: A Trip Down the Rabbit-Hole ==&lt;br /&gt;
 &lt;br /&gt;
* Michael Doran, University of Texas at Arlington, doran@uta.edu&lt;br /&gt;
&lt;br /&gt;
This presentation will use a seemingly simple data point, the &amp;quot;format&amp;quot; of the item being described, to illustrate some of the complexities and challenges inherent in the parsing of MARC records.  I will talk about abstract vs. concrete forms; format designation in the Leader, 006, 007, and 008 fixed fields as well as the 245 and 300 variable fields; pseudo-formats; what is mandatory vs. optional in respect to format designation in cataloging practice; and the differences between cataloging theory and practice as observed via format-related data mining of a mid-size academic library collection. &lt;br /&gt;
&lt;br /&gt;
I understand that most of us go to code4lib to hear about the latest sexy technologies.  While MARC isn't sexy, many of the new tools being discussed still need to be populated with data gleaned from MARC records.  MARC format designation has ramifications for search and retrieval, limits, and facets, both in the ILS and further downstream in next generation OPACs and web-scale discovery tools.  Even veteran library coders will learn something from this session. &lt;br /&gt;
&lt;br /&gt;
== Touch Kiosk 2: Piezoelectric Boogaloo ==&lt;br /&gt;
&lt;br /&gt;
* Andreas Orphanides, North Carolina State University Libraries, akorphan@ncsu.edu&lt;br /&gt;
&lt;br /&gt;
At the NCSU Libraries, we provide realtime access to information on library spaces and services through an interactive touchscreen kiosk in our Learning Commons. In the summer of 2012, two years after its initial deployment, I redeveloped the kiosk application from the ground up, with an entirely new codebase and a completely redesigned user interface. The changes I implemented were designed to remedy previously identified shortcomings in the code and the interface design [1], and to enhance overall stability and performance of the application.&lt;br /&gt;
&lt;br /&gt;
In this presentation I will outline my revision process, highlighting the lessons I learned and the practices I implemented in the course of redevelopment. I will highlight the key features of the HTML/Javascript codebase that allow for increased stability, flexibility, and ease of maintenance; and identify the changes to the user interface that resulted from the usability findings I uncovered in my previous research. Finally, I will compare the usage patterns of the new interface to the analysis of the previous implementation to examine the practical effect of the implemented changes.&lt;br /&gt;
&lt;br /&gt;
I will also provide access to a genericized version of the interface code for others to build their own implementations of similar kiosk applications.&lt;br /&gt;
&lt;br /&gt;
[1] http://journal.code4lib.org/articles/5832&lt;br /&gt;
&lt;br /&gt;
== Wayfinding in a Cloud: Location Service for libraries ==&lt;br /&gt;
 &lt;br /&gt;
* Petteri Kivimäki, The National Library of Finland, petteri.kivimaki@helsinki.fi&lt;br /&gt;
&lt;br /&gt;
Searching for books in large libraries can be a difficult task for a novice library user. This paper presents the Location Service, a software-as-a-service (SaaS) wayfinding application developed and managed by The National Library of Finland and targeted at all libraries. The service provides additional information and map-based guidance to books and collections by showing their location on a map, and it can be integrated with any library management system, as integration happens by adding a link to the service in the search interface. The service is being developed continuously based on feedback received from users.&lt;br /&gt;
&lt;br /&gt;
The service has two user interfaces: one for customers and one for library staff, who manage the information related to the locations. The customer UI is fully customizable by the libraries; customization is done via template files using HTML, CSS, and JavaScript/jQuery. The service supports multiple languages, and libraries have full control of the languages they want to support in their environment.&lt;br /&gt;
&lt;br /&gt;
The service is written in Java and uses the Spring and Hibernate frameworks. The data is stored in a PostgreSQL database shared by all the libraries. Libraries do not have direct access to the database, but the service offers an interface that makes it possible to retrieve XML data over HTTP. Modification of the data via the admin UI is restricted, however, and access to other libraries' data is blocked.&lt;br /&gt;
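Consuming such an XML-over-HTTP interface could look like the sketch below. This is hypothetical: the sample payload and element names are invented for illustration, not the service's documented schema (in practice you would fetch the XML with an HTTP GET first):

```python
import xml.etree.ElementTree as ET

# Invented sample of what a location response might contain; the real
# service's element names are not documented here.
SAMPLE = """
<locations>
  <location><collection>Main</collection><shelf>3B</shelf></location>
  <location><collection>Annex</collection><shelf>12</shelf></location>
</locations>
"""

def parse_locations(xml_text):
    """Return (collection, shelf) pairs from a location response."""
    root = ET.fromstring(xml_text)
    return [(loc.findtext("collection"), loc.findtext("shelf"))
            for loc in root.findall("location")]

pairs = parse_locations(SAMPLE)
```

A client integrating with its ILS would render these pairs next to each search result, alongside the map link the abstract describes.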
&lt;br /&gt;
== Empowering Collection Owners with Automated Bulk Ingest Tools for DSpace ==&lt;br /&gt;
&lt;br /&gt;
* Terry Brady, Georgetown University, twb27@georgetown.edu&lt;br /&gt;
&lt;br /&gt;
The Georgetown University Library has developed a number of applications to expedite the process of ingesting content into DSpace.&lt;br /&gt;
* Automatically inventory a collection of documents or images to be uploaded&lt;br /&gt;
* Generate a spreadsheet for metadata capture based on the inventory&lt;br /&gt;
* Generate item-level ingest folders, contents files and dublin core metadata for the items to be ingested&lt;br /&gt;
* Validate the contents of ingest folders prior to initiating the ingest to DSpace&lt;br /&gt;
* Present users with a simple, web-based form to initiate the batch ingest process&lt;br /&gt;
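The inventory-to-spreadsheet step in a workflow like the one listed above can be sketched in a few lines. This is an illustrative stand-in, not Georgetown's actual tool, and the column names are invented:

```python
import csv
from pathlib import Path

# Illustrative sketch of the "inventory -> metadata spreadsheet" step;
# not Georgetown's tool. Column names below are hypothetical.
def inventory_to_csv(folder, out_path):
    """List files in a folder and write a spreadsheet for metadata capture."""
    with open(out_path, "w", newline="") as fh:
        writer = csv.writer(fh)
        writer.writerow(["filename", "dc.title", "dc.date", "dc.description"])
        for item in sorted(Path(folder).iterdir()):
            if item.is_file():
                # Metadata columns start empty for content experts to fill in.
                writer.writerow([item.name, "", "", ""])
```

Validation before ingest then reduces to checking that every row in the completed spreadsheet matches a file on disk and that required columns are non-empty.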
&lt;br /&gt;
The applications have eliminated a number of error-prone steps from the ingest workflow and significantly reduced the number of tedious data-editing steps.  These applications have empowered content experts to take charge of their own collections. &lt;br /&gt;
&lt;br /&gt;
In this presentation, I will provide a demonstration of the tools that were built and discuss the development process that was followed.&lt;br /&gt;
&lt;br /&gt;
== Quality Assurance Reports for DSpace Collections ==&lt;br /&gt;
&lt;br /&gt;
* Terry Brady, Georgetown University, twb27@georgetown.edu&lt;br /&gt;
&lt;br /&gt;
The Georgetown University Library has developed a collection of quality assurance reports to improve the consistency of the metadata in our DSpace collections.  The report infrastructure permits the creation of query snippets to test for possible consistency errors within the repository, such as items missing thumbnails, items with multiple thumbnails, items missing a creation date, items containing improperly formatted dates, items with duplicated metadata fields, and items recently added across the repository, a community, or a collection.&lt;br /&gt;
&lt;br /&gt;
These reports have helped prioritize both programmatic and manual data cleanup tasks.  They have also served as a progress tracker for cleanup work and will provide ongoing monitoring of the metadata consistency of the repository.&lt;br /&gt;
&lt;br /&gt;
In this presentation, I will provide a demonstration of the tools that were built and discuss the development process that was followed.&lt;br /&gt;
&lt;br /&gt;
== A Hybrid Solution for Improving Single Sign-On to a Proxy Service with Squid and EZproxy through Shibboleth and ExLibris’ Aleph X-Server ==&lt;br /&gt;
&lt;br /&gt;
* Alexander Jerabek, UQAM - Université du Québec à Montréal, jerabek.alexander_j@uqam.ca&lt;br /&gt;
* Minh-Quang Nguyen, UQAM - Université du Québec à Montréal, nguyen.minh-quang@uqam.ca&lt;br /&gt;
&lt;br /&gt;
In this talk, we will describe how we developed and implemented a hybrid solution for improving single sign-on in conjunction with the library’s proxy service. This hybrid solution consists of integrating the disparate elements of EZproxy, the Squid workflow, Shibboleth, and the Aleph X-Server. We will report how this new integrated service improves the user experience. To our knowledge, this new service is unique and has not been implemented anywhere else. We will also present some statistics after approximately one year in production.&lt;br /&gt;
&lt;br /&gt;
See article: http://journal.code4lib.org/articles/7470&lt;br /&gt;
&lt;br /&gt;
== HTML5 Video Now! ==&lt;br /&gt;
&lt;br /&gt;
* Jason Ronallo, North Carolina State University Libraries, jnronall@ncsu.edu&lt;br /&gt;
&lt;br /&gt;
Can you use HTML5 video now? Yes.&lt;br /&gt;
&lt;br /&gt;
I'll show you how to get started using HTML5 video, including gotchas, tips, and tricks. Beyond the basics we'll see the power of having video integrated into HTML and the browser. Finally, we'll look at examples that push the limits and show the exciting future of video on the Web.&lt;br /&gt;
&lt;br /&gt;
My experience comes from technical development of an oral history video clips project. I developed the technical aspects of the project, including video processing, server configuration, development of a public site, creation of an administrative interface, and video engagement analytics. Major portions of this work have been open sourced under an MIT license.&lt;br /&gt;
&lt;br /&gt;
== Hybrid Archival Collections Using Blacklight and Hydra ==&lt;br /&gt;
&lt;br /&gt;
* Adam Wead, Rock and Roll Hall of Fame and Museum, awead@rockhall.org&lt;br /&gt;
&lt;br /&gt;
At the Library and Archives of the Rock and Roll Hall of Fame, we use available tools such as Archivists' Toolkit to create EAD finding aids of our collections.  However, managing digital content created from these materials and the born-digital content that is also part of these collections represents a significant challenge.  In my presentation, I will discuss how we solve the problem of our hybrid collections by using Hydra as a digital asset manager and Blacklight as a unified presentation and discovery interface for all our materials.&lt;br /&gt;
&lt;br /&gt;
Our strategy centers around indexing ead xml into Solr as multiple documents: one for each collection, and one for every series, sub-series and item contained within a collection.  For discovery, we use this strategy to leverage item-level searching of archival collections alongside our traditional library content.  For digital collections, we use this same technique to represent a finding aid in Hydra as a set of linked objects using RDF.  New digital items are then linked to these parent objects at the collection and series level.  Once this is done, the items can be exported back out to the Blacklight solr index and the digital content appears along with the rest of the items in the collection.&lt;br /&gt;
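The multiple-documents strategy can be sketched roughly as follows. This is an illustration under assumptions, not the Rock Hall's actual indexing code: the Solr field names and the 'collection' id are invented, and a real indexer would map many more EAD fields (dates, containers, scope notes, and so on):&lt;br /&gt;

```ruby
require 'rexml/document'

# Sketch: split an EAD finding aid into one Solr document for the
# collection plus one per component (series, sub-series, item).
def ead_to_solr_docs(xml)
  ead = REXML::Document.new(xml)
  docs = []
  title = REXML::XPath.first(ead, '//archdesc/did/unittitle')
  docs.push('id' => 'collection', 'title_t' => title.text)
  # Every component in the description of subordinate components (dsc)
  # gets its own document, keyed by the component's id attribute.
  REXML::XPath.each(ead, '//dsc//did/unittitle') do |ut|
    component = ut.parent.parent   # the enclosing c01/c02/... element
    docs.push('id' => component.attributes['id'], 'title_t' => ut.text)
  end
  docs
end
```

The resulting hashes could then be posted to the Blacklight Solr index, for example with the rsolr gem.&lt;br /&gt;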
&lt;br /&gt;
== Making the Web Accessible through Solid Design ==&lt;br /&gt;
 &lt;br /&gt;
* [[User:Cynthia|Cynthia Ng]] from Ryerson University Library &amp;amp; Archives&lt;br /&gt;
&lt;br /&gt;
In libraries, we are always trying our best to be accessible to everyone and we make every effort to do so physically, but what about our websites? Web designers are great at talking about the user experience and how to improve it, but what sometimes gets overlooked is how to make a site more accessible and meet accessibility guidelines. While guidelines are necessary to cover a minimum standard, web accessibility should come from good web design without ‘sacrificing’ features. While it's difficult to make a website fully accessible to everyone, there are easy, practical ways to make a site as accessible as possible.&lt;br /&gt;
&lt;br /&gt;
While the focus will be on websites and meeting the Web Content Accessibility Guidelines (WCAG), the presentation will also touch on how to make custom web interfaces accessible.&lt;br /&gt;
&lt;br /&gt;
== Getting People to What They Need Fast! A Wayfinding Tool to Locate Books &amp;amp; Much More ==&lt;br /&gt;
 &lt;br /&gt;
* Steven Marsden, Ryerson University Library &amp;amp; Archives, steven dot marsden at ryerson dot ca&lt;br /&gt;
* [[User:Cynthia|Cynthia Ng]], Ryerson University Library &amp;amp; Archives&lt;br /&gt;
&lt;br /&gt;
Having a bewildered, lost user in the building or stacks is a common occurrence, but we can help our users find their way through enhanced maps and floor plans.  While not a new concept, these maps are integrated into the user’s flow of information without requiring a special app. The map not only highlights the location, but also provides all the related information, with a link back to the detailed item view. During the first stage of the project it has only been implemented for books (and other physical items), but the 'RULA Finder' is built to help users find just about anything and everything in the library, including study rooms, computer labs, and staff. A simple-to-use admin interface makes it easy for everyone, staff and users alike. &lt;br /&gt;
&lt;br /&gt;
The application is written in PHP with data stored in a MySQL database. The end-user interface involves jQuery, JSON, and the library's discovery layer (Summon) API.&lt;br /&gt;
&lt;br /&gt;
The presentation will not only cover the technical aspects, but also the implementation and usability findings.&lt;br /&gt;
&lt;br /&gt;
== De-sucking the Library User Experience ==&lt;br /&gt;
 &lt;br /&gt;
* Jeremy Prevost, Northwestern University, j-prevost {AT} northwestern [DOT] edu&lt;br /&gt;
&lt;br /&gt;
Have you ever thought that library vendors purposely create the worst possible user experience they can imagine because they just hate users? Have you ever thought that your own library website feels like it was created by committee rather than for users because, well, it was? I’ll talk about how we used vendor supplied APIs to our ILS and Discovery tool to create an experience for our users that sucks at least a little bit less.&lt;br /&gt;
&lt;br /&gt;
The talk will provide specific examples of how inefficient or confusing vendor supplied solutions are from a user perspective along with our specific streamlined solutions to the same problems. Code examples will be minimal as the focus will be on improving user experience rather than any one code solution of doing that. Examples may include the seemingly simple tasks of renewing a book or requesting an item from another campus library.&lt;br /&gt;
&lt;br /&gt;
== Solr Testing Is Easy with Rspec-Solr Gem ==&lt;br /&gt;
&lt;br /&gt;
* Naomi Dushay, Stanford University, ndushay AT stanford DOT edu&lt;br /&gt;
&lt;br /&gt;
How do you know if &lt;br /&gt;
&lt;br /&gt;
* your idea for &amp;quot;left anchoring&amp;quot; searches actually works?&lt;br /&gt;
* your field analysis for LC call numbers accommodates a suffix between the first and second cutter without breaking the rest of LC call number parsing?&lt;br /&gt;
* tweaking Solr configs to improve, say, Chinese searching, won't break Turkish and Cyrillic?&lt;br /&gt;
* changes to your solrconfig file accomplish what you wanted without breaking anything else?&lt;br /&gt;
&lt;br /&gt;
Avoid the whole app stack when writing Solr acceptance/relevancy/regression tests!  Forget cucumber and capybara.  This gem lets you easily (only 4 short files needed!) write tests like this, passing arbitrary parameters to Solr:&lt;br /&gt;
&lt;br /&gt;
  it &amp;quot;unstemmed author name Zare should precede stemmed variants&amp;quot; do&lt;br /&gt;
    resp = solr_response(author_search_args('Zare').merge({'fl'=&amp;gt;'id,author_person_display', 'facet'=&amp;gt;false}))&lt;br /&gt;
    resp.should include(&amp;quot;author_person_display&amp;quot; =&amp;gt; /\bZare\W/).in_each_of_first(3).documents&lt;br /&gt;
    resp.should_not include(&amp;quot;author_person_display&amp;quot; =&amp;gt; /Zaring/).in_each_of_first(20).documents&lt;br /&gt;
  end&lt;br /&gt;
      &lt;br /&gt;
  it &amp;quot;Cyrillic searching should work:  Восемьсoт семьдесят один день&amp;quot; do&lt;br /&gt;
    resp = solr_resp_doc_ids_only({'q'=&amp;gt;'Восемьсoт семьдесят один день'})&lt;br /&gt;
    resp.should include(&amp;quot;9091779&amp;quot;)&lt;br /&gt;
  end&lt;br /&gt;
   &lt;br /&gt;
  it &amp;quot;q of 'String quartets Parts' and variants should be plausible &amp;quot; do&lt;br /&gt;
    resp = solr_resp_doc_ids_only({'q'=&amp;gt;'String quartets Parts'})&lt;br /&gt;
    resp.should have_at_least(2000).documents&lt;br /&gt;
    resp.should have_the_same_number_of_results_as(solr_resp_doc_ids_only({'q'=&amp;gt;'(String quartets Parts)'}))&lt;br /&gt;
    resp.should have_more_results_than(solr_resp_doc_ids_only({'q'=&amp;gt;'&amp;quot;String quartets Parts&amp;quot;'}))&lt;br /&gt;
  end&lt;br /&gt;
   &lt;br /&gt;
  it &amp;quot;Traditional Chinese chars 三國誌 should get the same results as simplified chars 三国志&amp;quot; do&lt;br /&gt;
    resp = solr_response({'q'=&amp;gt;'三國誌', 'fl'=&amp;gt;'id', 'facet'=&amp;gt;false}) &lt;br /&gt;
    resp.should have_at_least(240).documents&lt;br /&gt;
    resp.should have_the_same_number_of_results_as(solr_resp_doc_ids_only({'q'=&amp;gt;'三国志'})) &lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
See&lt;br /&gt;
   http://rubydoc.info/github/sul-dlss/rspec-solr/frames&lt;br /&gt;
   https://github.com/sul-dlss/rspec-solr&lt;br /&gt;
&lt;br /&gt;
and our production relevancy/acceptance/regression tests slowly migrating from cucumber to:&lt;br /&gt;
   https://github.com/sul-dlss/sw_index_tests&lt;br /&gt;
&lt;br /&gt;
== Northwestern's Digital Image Library ==&lt;br /&gt;
&lt;br /&gt;
*Mike Stroming, Northwestern University Library, m-stroming AT northwestern DOT edu&lt;br /&gt;
*Edgar Garcia, Northwestern University Library, edgar-garcia AT northwestern DOT edu&lt;br /&gt;
&lt;br /&gt;
At Northwestern University Library, we are about to release a beta version of our Digital Image Library (DIL).  DIL is an implementation of the Hydra technology that provides a Fedora repository solution for discovery of and access to over 100,000 images for staff, students, and scholars. Some important features are:&lt;br /&gt;
&lt;br /&gt;
*Build custom collections of images using drag-and-drop&lt;br /&gt;
*Re-order images within a collection using drag-and-drop&lt;br /&gt;
*Nest collections within other collections&lt;br /&gt;
*Create details/crops of images&lt;br /&gt;
*Zoom, rotate images&lt;br /&gt;
*Upload personal images&lt;br /&gt;
*Retrieve your own uploads and details from a collection&lt;br /&gt;
*Export a collection to a PowerPoint presentation&lt;br /&gt;
*Create a group of users and authorize access to your images&lt;br /&gt;
*Batch edit image metadata&lt;br /&gt;
&lt;br /&gt;
Our presentation will include a demo, explanation of the architecture, and a discussion of the benefits of being a part of the Hydra open-source community.&lt;br /&gt;
&lt;br /&gt;
== Two standards in a software (to say nothing of Normarc) ==&lt;br /&gt;
&lt;br /&gt;
*Zeno Tajoli, CINECA (Italy), z DOT tajoli AT cineca DOT it&lt;br /&gt;
&lt;br /&gt;
With this presentation I want to show how the Koha ILS handles support for three different MARC dialects:&lt;br /&gt;
MARC21, Unimarc, and Normarc. The main points of the presentation:&lt;br /&gt;
&lt;br /&gt;
*Three MARCs at the MySQL level&lt;br /&gt;
*Three MARCs at the API level&lt;br /&gt;
*Three MARCs at display time&lt;br /&gt;
*Can I add a new format?&lt;br /&gt;
&lt;br /&gt;
== Future Friendly Web Design for Libraries ==&lt;br /&gt;
&lt;br /&gt;
*[[User:michaelschofield|Michael Schofield]], Alvin Sherman Library, Research, and Information Technology Center, mschofied[dot]nova[dot]edu&lt;br /&gt;
&lt;br /&gt;
Libraries on the web are afterthoughts. Their design is often stymied on one hand by red tape imposed by the larger institution and on the other by an overload of too-democratic input from colleagues. Slashed budgets and staff stretched too thin foul up the R-word (that'd be &amp;quot;redesign&amp;quot;) - but things are getting pretty strange. Notions about the Web (and where it can be accessed) are changing. &lt;br /&gt;
&lt;br /&gt;
So libraries can only avoid refabbing their fixed-width desktop and jQuery Mobile m-dot websites for so long before desktop users evaporate and demand from patrons with web-ready refrigerators becomes deafening. Just when we have largely hopped on the bandwagon and gotten enthusiastic about being online, our users expect a library's site to look and perform great on everything. &lt;br /&gt;
&lt;br /&gt;
Our presence on the web should be built to weather ever-increasing device complexity. To meet users at their point of need, libraries must start thinking Future Friendly.&lt;br /&gt;
&lt;br /&gt;
This overview rehashes the approach and philosophy of library web design, re-orienting it for maximum accessibility and maximum efficiency of design. While just 20 minutes, we'll mull over techniques like mobile-first responsive web design, modular CSS, browser feature detection for progressive enhancement, and lots of nifty tricks.&lt;br /&gt;
&lt;br /&gt;
==BYU's discovery layer service aggregator==&lt;br /&gt;
&lt;br /&gt;
*Curtis Thacker, Brigham Young University, curtis.thacker AT byu DOT edu&lt;br /&gt;
&lt;br /&gt;
It is clear that libraries will continue to experience rapid change driven by the speed of technology. To acknowledge this new reality and to respond rapidly to shifting end-user paradigms, BYU has developed a custom service aggregator. At first our vendors looked at us a bit funny; however, in the last year they have been astonished by the fluid implementation of new services – here’s the short list:&lt;br /&gt;
&lt;br /&gt;
*filmfinder - a tool for browsing and searching films&lt;br /&gt;
*A custom book recommender service based on checkout data&lt;br /&gt;
*Integrated library services like personnel, library hours, study room scheduler, and database finder through a custom adwords system&lt;br /&gt;
*A very geeky and powerful utility for converting MARC XML into Primo-compliant XML&lt;br /&gt;
*Embedded floormaps&lt;br /&gt;
*A responsive web design&lt;br /&gt;
*Bing did-you-mean&lt;br /&gt;
*And many more.&lt;br /&gt;
&lt;br /&gt;
I will demo the system, review the architecture, and talk about future plans.&lt;br /&gt;
&lt;br /&gt;
==The Avalon Media System: A Next Generation Hydra Head For Audio and Video Delivery==&lt;br /&gt;
&lt;br /&gt;
* Michael Klein, Senior Software Developer, Northwestern University Library, michael.klein AT northwestern DOT edu&lt;br /&gt;
* Nathan Rogers, Programmer/Analyst, Indiana University, rogersna AT indiana DOT edu&lt;br /&gt;
&lt;br /&gt;
Based on the success of the [http://www.dml.indiana.edu/ Variations] digital music platform, Indiana University and Northwestern University have developed a next generation educational tool for delivering multimedia resources to the classroom. The Avalon Media System (formerly Variations on Video) supports the ingest, media processing, management, and access-controlled delivery of library-managed video and audio collections. To do so, the system draws on several existing, mature, open source technologies:&lt;br /&gt;
&lt;br /&gt;
* The ingest, search, and discovery functionality of the Hydra framework&lt;br /&gt;
* The powerful multimedia workflow management features of Opencast Matterhorn&lt;br /&gt;
* The flexible Engage audio/video player&lt;br /&gt;
* The streaming capabilities of both Red5 Media Server (open source) and Adobe Flash Media Server (proprietary)&lt;br /&gt;
&lt;br /&gt;
Extensive customization options are built into the framework for tailoring the application to the needs of a specific institution.&lt;br /&gt;
&lt;br /&gt;
Our goal is to create an open platform that can be used by other institutions to serve the needs of the academic community. Release 1 is planned for a late February launch, with future versions released every couple of months thereafter. For more information visit http://avalonmediasystem.org/ and https://github.com/variations-on-video/hydrant.&lt;br /&gt;
&lt;br /&gt;
== The DH Curation Guide: Building a Community Resource == &lt;br /&gt;
&lt;br /&gt;
*Robin Davis, John Jay College of Criminal Justice, robdavis AT jjay.cuny.edu &lt;br /&gt;
*James Little, University of Illinois Urbana-Champaign, little9 AT illinois.edu  &lt;br /&gt;
&lt;br /&gt;
Data curation for the digital humanities is an emerging area of research and practice. The DH Curation Guide, launched in July 2012, is an educational resource that addresses aspects of humanities data curation in a series of expert-written articles. Each provides a succinct introduction to a topic with annotated lists of useful tools, projects, standards, and good examples of data curation done right. The DH Curation Guide is intended to be a go-to resource for data curation practitioners and learners in libraries, archives, museums, and academic institutions.  &lt;br /&gt;
&lt;br /&gt;
Because it's a growing field, we designed the DH Curation Guide to be a community-driven, living document. We developed a granular commenting system that encourages data curation community members to contribute remarks on articles, article sections, and article paragraphs. Moreover, we built in a way for readers to contribute and annotate resources for other data curation practitioners.  &lt;br /&gt;
&lt;br /&gt;
This talk will address how the DH Curation Guide is currently used and will include a sneak peek at the articles that are in store for the Guide’s future. We will talk about the difficulties and successes of launching a site that encourages community. We are all builders here, so we will also walk through developing the granular commenting/annotation system and the XSLT-powered publication workflow. &lt;br /&gt;
&lt;br /&gt;
== Solr Update == &lt;br /&gt;
&lt;br /&gt;
*Erik Hatcher, LucidWorks, erik.hatcher AT lucidworks.com &lt;br /&gt;
&lt;br /&gt;
Solr is continually improving.  Solr 4 was recently released, bringing dramatic changes in the underlying Lucene library and Solr-level features.  It's tough for us all to keep up with the various versions and capabilities.&lt;br /&gt;
&lt;br /&gt;
This talk will blaze through the highlights of new features and improvements in Solr 4 (and up).  Topics will include: SolrCloud, direct spell checking, surround query parser, and many other features.  We will focus on the features library coders really need to know about.&lt;br /&gt;
&lt;br /&gt;
== Reports for the People == &lt;br /&gt;
&lt;br /&gt;
*Kara Young, Keene State College, NH, kyoung1 at keene.edu&lt;br /&gt;
*Dana Clark, Keene State College, NH, dclark5 at keene.edu&lt;br /&gt;
&lt;br /&gt;
Libraries are increasingly being called upon to provide information on how our programs and services are moving our institutional strategic goals forward.  In support of College and departmental Information Literacy learning outcomes, Mason Library Systems at Keene State College developed an assessment database to record and report assessment activities by Library faculty.  Frustrated by the lack of freely available options for intuitively recording, accounting for, and outputting useful reports on instructional activities, Librarians requested a tool to make capturing and reporting activities (and their lives) easier.  Library Systems was able to respond to this need by working with librarians to identify what information is necessary to capture, where other assessment tools had fallen short, and ultimately by developing an application that supports current reporting imperatives while providing flexibility for future changes.&lt;br /&gt;
&lt;br /&gt;
The result of our efforts was an in-house, browser-based Assessment Database to improve the process of data collection and analysis.  The application is written in PHP, with data stored in a MySQL database, and is presented via the browser, making extensive use of jQuery and jQuery plug-ins for data collection, manipulation, and presentation. &lt;br /&gt;
The presentation will outline the process undertaken to build a successful collaboration with Library faculty from conception to implementation, as well as the technical aspects of our trial-and-error approach. Plus: cool charts and graphs!&lt;br /&gt;
&lt;br /&gt;
==  Network Analyses of Library Catalog Data ==&lt;br /&gt;
 &lt;br /&gt;
* Kirk Hess, University of Illinois at Urbana-Champaign, kirkhess AT illinois.edu&lt;br /&gt;
* Harriett Green, University of Illinois at Urbana-Champaign, green19 AT illinois.edu &lt;br /&gt;
&lt;br /&gt;
Library collections are all too often like icebergs:  The amount exposed on the surface is only a fraction of the actual amount of content, and we’d like to recommend relevant items from deep within the catalog to users. With the assistance of an XSEDE Allocation grant (http://xsede.org), we’ve used R to reconstitute anonymous circulation data from the University of Illinois’s library catalog into separate user transactions. The transaction data is incorporated into subject analyses that use XSEDE supercomputing resources to generate predictive network analyses and visualizations of subject areas searched by library users using Gephi (https://gephi.org/). The test data set for developing the subject analyses consisted of approximately 38,000 items from the Literatures and Languages Library that contained 110,000 headings and 130,620 transactions. We’re currently working on developing a recommender system within VuFind to display the results of these analyses.&lt;br /&gt;
&lt;br /&gt;
== Pitfall! Working with Legacy Born Digital Materials in Special Collections ==&lt;br /&gt;
&lt;br /&gt;
* Donald Mennerich, The New York Public Library, don.mennerich AT gmail.com&lt;br /&gt;
* Mark A. Matienzo, Yale University Library, mark AT matienzo.org&lt;br /&gt;
&lt;br /&gt;
Archives and special collections are faced with a growing abundance of born-digital material, as well as many promising tools for managing it. However, one must consider the potential problems that can arise when approaching a collection containing legacy materials (from roughly the pre-internet era). Many of the tried and true, &amp;quot;best of breed&amp;quot; tools for digital preservation don't always work as they do for more recent materials, requiring a fair amount of ingenuity and use of &amp;quot;word of mouth tradecraft and knowledge exchanged through serendipitous contacts, backchannel conversations, and beer&amp;quot; (Kirschenbaum, &amp;quot;Breaking &amp;lt;code&amp;gt;badflag&amp;lt;/code&amp;gt;&amp;quot;).&lt;br /&gt;
&lt;br /&gt;
Our presentation will focus on some of the strange problems encountered and creative solutions devised by two digital archivists in the course of preserving, processing, and providing access to collections at their institutions. We'll place particular emphasis on the pitfalls and crocodiles we've learned to swing over safely, while collecting treasure in the process. We'll address working with CP/M disks in collections of authors' papers, reconstructing a multipart hard drive backup spread across floppy disks, and more. &lt;br /&gt;
&lt;br /&gt;
== Project &amp;lt;s&amp;gt;foobar&amp;lt;/s&amp;gt; FUBAR ==&lt;br /&gt;
&lt;br /&gt;
* Becky Yoose, Grinnell College, yoosebec AT grinnell DOT edu&lt;br /&gt;
&lt;br /&gt;
Be it mandated from Those In A Higher Pay Grade Than You or self-inflicted, many of us deal with managing major library-related technology projects [1]. It’s common nowadays to manage multiple technology projects, and generally external and internal issues can be planned for to minimize project timeline shifts and quality of deliverables. Life, however, has other plans for you, and all your major library technology infrastructure projects pile on top of each other at the same time. How do you and your staff survive a train wreck of technology projects and produce deliverables to project stakeholders without having to go into the library IT version of the United States Federal Witness Protection Program?&lt;br /&gt;
&lt;br /&gt;
This session covers my experience with the collision of three major library technology projects - including a new institutional repository and an integrated library system migration - and how we dealt with external and internal factors, implemented damage control, and lessened the overall damage from the epic crash. You might laugh, you might cry, you will probably have flashbacks from previous projects, but you will come out of this session with a set of tools to use when you’re dealing with managing mission-critical projects.&lt;br /&gt;
&lt;br /&gt;
[1] Past code4lib talks have covered specific project management strategies, such as Agile, for application development. I will instead focus on general project management practices in relation to various library technology projects, practices that many of these strategies incorporate into their own structures.&lt;br /&gt;
&lt;br /&gt;
== Implementing RFID in an Academic Library == &lt;br /&gt;
&lt;br /&gt;
* Scott Bacon, Coastal Carolina University, sbacon AT coastal DOT edu&lt;br /&gt;
&lt;br /&gt;
Coastal Carolina University’s Kimbel Library recently implemented RFID to increase security, provide better inventory control over library materials and enable do-it-yourself patron services such as self checkout. &lt;br /&gt;
&lt;br /&gt;
I’ll give a quick overview of RFID and the components involved, and then talk about how our library utilized the technology. It takes a lot of research, time, money, and no small amount of resourcefulness to make your library RFID-ready. I’ll show how we developed our project timeline, how we assessed and evaluated vendors, and how we navigated the bid process. I’ll also talk about hardware and software installation, configuration, and troubleshooting, and will discuss our book and media collection encoding process. &lt;br /&gt;
&lt;br /&gt;
We encountered myriad issues with our vendor, the hardware and the software. Would we do it all over again? Should your library consider RFID? Caveats abound...&lt;br /&gt;
&lt;br /&gt;
== Coding an Academic Library Intranet in Drupal: Now We're Getting Organizized... ==&lt;br /&gt;
&lt;br /&gt;
* Scott Bacon, Coastal Carolina University, sbacon AT coastal DOT edu&lt;br /&gt;
&lt;br /&gt;
The Kimbel Library Intranet is coded in Drupal 7, and was created to increase staff communication and store documentation. This presentation will contain an overview of our intranet project, including the modules we used, implementation issues, and possible directions in future development phases. I won’t forget to talk about the slew of tasty development issues we faced, including dealing with our university IT department, user buy-in, site navigation, user roles, project management, training and mobile modules (or the lack thereof). And some other fun (mostly) true anecdotes will surely be shared. &lt;br /&gt;
&lt;br /&gt;
The main functions of Phase I of this project were to increase communication across departments and committees, facilitate project management and revise the library's shared drive. Another important function of this first phase was to host mission-critical documentation such as strategic goals, policies and procedures. Phase II of this project will focus on porting employee tasks into the centralized intranet environment. This development phase, which aims to replicate and automate the bulk of staff workflows within a content management system, will be a huge undertaking. &lt;br /&gt;
&lt;br /&gt;
We chose Drupal as our intranet platform because of its extensibility, flexibility and community support. We are also moving our entire library web presence to Drupal in 2013 and will be soliciting any advice on which modules to use/avoid and which third-party services to wrangle into the Drupal environment. Should we use Drupal as the back-end to our entire Web presence? Why or why not?&lt;br /&gt;
&lt;br /&gt;
== Hands off! Best Practices and Top Ten Lists for Code Handoffs ==&lt;br /&gt;
 &lt;br /&gt;
* Naomi Dushay, Stanford University Library, ndushay@stanford.edu&lt;br /&gt;
* Bess Sadler, Stanford University Library, bess@stanford.edu&lt;br /&gt;
&lt;br /&gt;
Transitions in who is the primary developer on an actively developed code base can be a source of frustration for everyone involved. We've tried to minimize that pain as much as possible through agile methods like test-driven development, continuous integration, and modular design. Has optimizing for developer happiness brought us happiness? What's worked, what hasn't, and what's worth adopting? How do you keep your project in a state where you can easily hand it off? &lt;br /&gt;
&lt;br /&gt;
== How to be an effective evangelist for your open source project ==&lt;br /&gt;
 &lt;br /&gt;
* Bess Sadler, Stanford University Library, bess@stanford.edu&lt;br /&gt;
&lt;br /&gt;
Whether an open source software project gains new adopters and new contributing community members (which is to say, whether it goes on existing for any length of time) often isn't a question of superior design or technology. It's more often a question of whether the advocates for the project can convince institutional leaders AND front-line developers that the project is stable and trustworthy. What are successful strategies for attracting development partners? I'll try to answer that and talk about what we could do as a community to make collaboration easier.  &lt;br /&gt;
&lt;br /&gt;
== Thoughts from an open source vendor - What makes a &amp;quot;good&amp;quot; vendor in a meritocracy? ==&lt;br /&gt;
&lt;br /&gt;
* Matt Zumwalt, Data Curation Experts / MediaShelf / Hydra Project, matt@curationexperts.com&lt;br /&gt;
&lt;br /&gt;
What is the role of vendors in open source?  What should be the position of vendors in a meritocracy?  What are the avenues for encouraging great vendors who contribute to open source communities in valuable ways?  How you answer these questions has a huge impact on a community, and in order to formulate strong answers, you need to be well informed.  Let’s glimpse at the business practicalities of this situation, beginning with 1) an overview of the viable profit models for open-source software, 2) some of the realities of vendor involvement in open source, and 3) an account of the ins &amp;amp; outs of compensation &amp;amp; equity structures within for-profit corporations.&lt;br /&gt;
&lt;br /&gt;
The topics of power &amp;amp; influence, fairness, community participation, software quality, employment and personal profit are fair game, along with software licensing, support,  sponsorship, closed source software and the role of sales people.&lt;br /&gt;
&lt;br /&gt;
This presentation will draw on personal experience from the past seven years spent bootstrapping and running MediaShelf, a small but prolific for-profit consulting company that focuses entirely on open source digital repository software.  MediaShelf has played an active role in creating the Hydra Framework and continuously contributes to maintenance of Fedora and Blacklight. Those contributions have been funded through consulting contracts for authoring &amp;amp; implementing open source software on behalf of organizations around the world.&lt;br /&gt;
&lt;br /&gt;
==Occam’s Reader: A system that allows the sharing of eBooks via Interlibrary Loan==&lt;br /&gt;
&lt;br /&gt;
*Ryan Litsey, Texas Tech University, Ryan DOT Litsey AT ttu.edu&lt;br /&gt;
*Kenny Ketner, Texas Tech University, Kenny DOT Ketner AT ttu.edu&lt;br /&gt;
&lt;br /&gt;
Occam’s Reader is a software platform that allows the transfer and sharing of electronic books between libraries via existing interlibrary loan software. Occam’s Reader lets libraries meet the growing need to share electronic resources. In an ever-more-digital world, many of our collection development plans now include eBook platforms. The problem with eBooks, however, is that they are resources locked into the home library. With Occam’s Reader we can continue the centuries-old tradition of resource sharing and also keep up with the changing digital landscape. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Using Puppet for configuration management when no two servers look alike ==&lt;br /&gt;
* Eugene Vilensky, Senior Systems Administrator, Northwestern University Library, evilensky northwestern edu&lt;br /&gt;
&lt;br /&gt;
Configuration management is hot because it allows one to scale to thousands of machines, all of which look alike, and tightly manage changes across the nodes. Infrastructure as code, implement all changes programmatically, yadda yadda yadda.&lt;br /&gt;
&lt;br /&gt;
Unfortunately, servers which have gone unmanaged for a long time do not look very similar to each other.  Variables come in many forms, usually because of some or all of the following: Who installed the server, where it was installed, where the image was sourced from, when it was installed, where additional packages were sourced, and what kind of software was hosted on it.&lt;br /&gt;
&lt;br /&gt;
Bringing such machines into your configuration management platform is no harder and no easier than some or all of the following options: 1) blow such machines away, start from scratch, and migrate your data; 2) find the lowest common baseline between the current state and the ideal state and start the work there; 3) implement new features/services on the existing unmanaged machines, but manage those new features/services.&lt;br /&gt;
&lt;br /&gt;
I will describe our experiences at the library for all three options using the Puppet open-source tool on Enterprise Linux 5 and 6.&lt;br /&gt;
&lt;br /&gt;
== REST &amp;lt;b&amp;gt;IS&amp;lt;/b&amp;gt; Your Mobile Strategy ==&lt;br /&gt;
&lt;br /&gt;
* Richard Wolf, University of Illinois at Chicago, richwolf@uic.edu&lt;br /&gt;
&lt;br /&gt;
Mobile is the new hotness ... and you can't be one of the cool kids unless you've got your own mobile app ... but the road to mobility is daunting.  I'll argue that it's actually easier than it seems ... and that the simplest way to mobility is to bring your data to the party, create a REST API around the data, tell developers about your API, and then let the magic happen.  To make my argument concrete, I'll show (lord help me!) how to go from an interesting REST API to a fun iOS tool for librarians and the general public in twenty minutes.&lt;br /&gt;
&lt;br /&gt;
== ARCHITECTING ScholarSphere: How We Built a Repository App That Doesn't Feel Like Yet Another Janky Old Repository App ==&lt;br /&gt;
&lt;br /&gt;
* Dan Coughlin, Penn State University, danny@psu.edu&lt;br /&gt;
* Mike Giarlo, Penn State University, michael@psu.edu&lt;br /&gt;
&lt;br /&gt;
ScholarSphere is a web application that allows the Penn State research community to deposit, share, and manage its scholarly works.  It is also, as some of our users and our peers have observed, a repository app that feels much more like Google Docs or GitHub than earlier-generation repository applications.  ScholarSphere is built upon the Hydra framework (Fedora Commons, Solr, Blacklight, Ruby on Rails), MySQL, Redis, Resque, FITS, ImageMagick, jQuery, Bootstrap, and FontAwesome.  We'll talk about techniques we used to:&lt;br /&gt;
&lt;br /&gt;
* eliminate Fedora-isms in the application&lt;br /&gt;
* model and expose RDF metadata in ways that users find unobtrusive&lt;br /&gt;
* manage permissions via a UI widget that doesn't stab you in the face&lt;br /&gt;
* harvest and connect controlled vocabularies (such as LCSH) to forms&lt;br /&gt;
* make URIs cool&lt;br /&gt;
* keep the app snappy without venturing into the architectural labyrinth of YAGNI&lt;br /&gt;
* build and queue background jobs&lt;br /&gt;
* expose social features and populate activity streams&lt;br /&gt;
* tie checksum verification, characterization, and version control to the UI&lt;br /&gt;
* let users upload and edit multiple files at once&lt;br /&gt;
&lt;br /&gt;
The application will be demonstrated; code will be shown; and we solemnly commit to showing ABSOLUTELY NO XML.&lt;br /&gt;
&lt;br /&gt;
==Coding with Mittens==&lt;br /&gt;
&lt;br /&gt;
*Jim LeFager, DePaul University Library jlefager@depaul.edu&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Working in an environment where developers have restricted access to servers and development areas, or where you are primarily working in multiple hosted systems with limited access, can be a challenge when you are attempting to add new functionality or improve existing features.  Hosted web services have the benefit that staff time is not dedicated to server maintenance and development, but customization can be difficult and at times impossible.  In many cases, incorporating any current API functionality requires additional work beyond the original development, which can be frustrating and inefficient.  The result can be a Frankenstein monster of web services that is confusing to the user and difficult to navigate.  &lt;br /&gt;
&lt;br /&gt;
This talk will focus on some effective best practices, and some not-so-great but necessary practices, that we have adopted to develop and improve our users’ experience, using JavaScript/jQuery and CSS to manipulate our hosted environments.  This will include a review of available tools that allow collaborative development in the cloud, examples of jQuery methods that have let us take additional control of these hosted environments, and ways to track them using Google Analytics.  Included will be examples from Springshare Campus Guides, CONTENTdm and other hosted web spaces that have been ‘hacked’ to improve the UI.  &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Hacking the DPLA ==&lt;br /&gt;
* Nate Hill, Chattanooga Public Library,  nathanielhill AT gmail.com&lt;br /&gt;
* Sam Klein, Wikipedia, metasj AT gmail.com&lt;br /&gt;
&lt;br /&gt;
The Digital Public Library of America is a growing open-source platform to support digital libraries and archives of all kinds.  DPLA-alpha is available for testing, with data from six initial Hubs.  New APIs and data feeds are in development, with the next release scheduled for April.   &lt;br /&gt;
&lt;br /&gt;
Come learn what we are doing, how to contribute or hack the DPLA roadmap, and how you (or your favorite institution) can draw from and publish through it.  Larger institutions can join as a (content or service) hub, helping to aggregate and share metadata and services from across their {region, field, archive-type}.   We will discuss current challenges and possibilities (UI and API suggestions wanted!), apps being built on the platform, and related digitization efforts.&lt;br /&gt;
&lt;br /&gt;
DPLA has a transparent community and planning process; new participants are always welcome.  Half the time will be for suggestions and discussion.   Please bring proposals, problems, partnerships and possible paradoxes to discuss.&lt;br /&gt;
&lt;br /&gt;
== Introduction to SilverStripe 3.0 ==&lt;br /&gt;
 &lt;br /&gt;
* Ian Walls, University of Massachusetts Amherst, iwalls AT library DOT umass DOT edu&lt;br /&gt;
&lt;br /&gt;
SilverStripe is an open source Content Management System/development framework out of New Zealand, written in PHP, with a solid MVC structure.  This presentation will cover everything you need to know to get started with SilverStripe, including&lt;br /&gt;
* Features (and why you should consider SilverStripe)&lt;br /&gt;
* Requirements &amp;amp; Installation&lt;br /&gt;
* Model-View-Controller&lt;br /&gt;
* Key data types &amp;amp; configuration settings&lt;br /&gt;
* Modules&lt;br /&gt;
* Where to start with customization&lt;br /&gt;
* Community support and participation&lt;br /&gt;
&lt;br /&gt;
== Citation search in Solr and second-order operators ==&lt;br /&gt;
 &lt;br /&gt;
* Roman Chyla, Astrophysics Data System, roman.chyla AT (cfa.harvard.edu|gmail.com)&lt;br /&gt;
&lt;br /&gt;
Citation search is basically about connections (Is the paper read by a friend of mine more important than others? Get me papers read by somebody who cites, or is cited by, many papers), but the implementation of citation search turns out to be surprisingly useful in many other areas.&lt;br /&gt;
&lt;br /&gt;
I will show the 'guts' of the new citation search for astrophysics; it is generic and can be applied recursively to any Lucene query. Some people would call it a second-order operation because it works with the results of the previous (search) function. The talk will cover technical details of the special query class and its collectors, how to add a new search operator, and how to influence relevance scores. Then you can type along with me: friends_of(friends_of(cited_for(keyword:&amp;quot;black holes&amp;quot;) AND keyword:&amp;quot;red dwarf&amp;quot;))&lt;br /&gt;
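The shape of a second-order operation can be sketched in plain Python, outside Lucene entirely (the toy citation data and function names below are invented for illustration, not the ADS query classes):&lt;br /&gt;

```python
# Toy citation graph and index; names and data are invented to show
# the shape of the operation, not the ADS/Lucene implementation
CITATIONS = {
    'p1': ['p2', 'p3'],   # p1 cites p2 and p3
    'p2': ['p3'],
    'p4': ['p1'],
}

INDEX = {'p1': ['black holes'], 'p4': ['red dwarf']}

def search(keyword, index):
    # first-order search: documents matching a keyword
    return {doc for doc, words in index.items() if keyword in words}

def cites(docs):
    # second-order operator: runs on the RESULTS of a previous search
    return {cited for doc in docs for cited in CITATIONS.get(doc, [])}

# the operators compose, like the nested query in the abstract
hits = cites(search('black holes', INDEX))
```

Because each operator consumes a set of hits and produces another, operators nest to arbitrary depth, which is what makes the recursive application to any query possible.&lt;br /&gt;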
&lt;br /&gt;
&lt;br /&gt;
== Managing Segmented Images and Hierarchical Collections with Fedora-Commons and Solr ==&lt;br /&gt;
&lt;br /&gt;
* David Lacy, Villanova University, david DOT lacy AT villanova.edu&lt;br /&gt;
&lt;br /&gt;
Many of the resources within our digital library are split into parts -- newspapers, scrapbooks and journals being examples of collections of individual scanned pages.  In some cases, groups of pages within a collection, or segments within a particular page, may also represent chapters or articles.&lt;br /&gt;
&lt;br /&gt;
We recently devised a procedure to extract these &amp;quot;segmented resources&amp;quot; into their own objects within our repository, and index them individually in our Discovery Layer.&lt;br /&gt;
&lt;br /&gt;
In this talk I will explain how we dissected and organized these newly created resources with an extension to our Fedora Model, and how we make them discoverable through Solr configurations that facilitate browsable hierarchical relationships and field-collapsed results that group items within relevant resources.&lt;br /&gt;
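Solr's field collapsing is driven by its result-grouping query parameters; a minimal sketch of building such a request (the field name below is invented, not Villanova's actual schema):&lt;br /&gt;

```python
# Sketch of building a grouped ('field-collapsed') Solr query string;
# the field name parent_resource_id is invented for illustration
from urllib.parse import urlencode

def collapse_query(q, group_field, rows=10):
    # group=true collapses results into one group per field value,
    # so pages cluster under their parent resource in result lists
    return urlencode({
        'q': q,
        'group': 'true',
        'group.field': group_field,
        'rows': rows,
    })

params = collapse_query('title:newspaper', 'parent_resource_id')
```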
&lt;br /&gt;
== Google Analytics, Event Tracking and Discovery Tools==&lt;br /&gt;
 &lt;br /&gt;
* Emily Lynema, North Carolina State University Libraries. ejlynema AT ncsu DOT edu&lt;br /&gt;
* Adam Constabaris, North Carolina State University Libraries, ajconsta AT ncsu DOT edu&lt;br /&gt;
&lt;br /&gt;
The NCSU Libraries is using Google Analytics increasingly across its website as a replacement for usage tracking via Urchin. More recently, we have also begun to use the event tracking features in Google Analytics. This has allowed us to gather usage statistics for activities that don’t initiate new requests to the server, such as clicks that hide and show already-loaded content (as in many tabbed interfaces).  Aggregating these events together with pageview tracking in Google Analytics presents a more unified picture of patron activity and can help improve design of tools like the library catalog.  While assuming a basic understanding of the use of Google Analytics pageview tracking, this presentation will start with an introduction to the event tracking capabilities that may be less widely known. &lt;br /&gt;
&lt;br /&gt;
We’ll share library catalog usage data pulled from Google Analytics, including information about  features that are common across the newest wave of catalog interfaces, such as tabbed content, Google Preview, and shelf browse. We will also cover the approach taken for the technical implementation of this data-intensive JavaScript event tracking.&lt;br /&gt;
&lt;br /&gt;
As a counterpart, we can demonstrate how we have begun to use Google Analytics event tracking in a proprietary vendor discovery tool (Serials Solutions Summon). While the same technical ideas govern this implementation, we can highlight the differences (read: challenges) inherent in this type of event tracking in a vendor-owned application vs. a locally developed one.&lt;br /&gt;
&lt;br /&gt;
Along the way, hopefully you’ll learn a little about why you might (or might not) want to use Google Analytics event tracking yourself and see some interesting catalog usage stats.&lt;br /&gt;
&lt;br /&gt;
== Actions speak louder than words: Analyzing large-scale query logs to improve the research experience ==&lt;br /&gt;
&lt;br /&gt;
* Raman Chandrasekar, Serials Solutions, Raman DOT Chandrasekar AT serialssolutions DOT com&lt;br /&gt;
* Ted Diamond, Serials Solutions, Ted DOT Diamond AT serialssolutions DOT com&lt;br /&gt;
&lt;br /&gt;
Analyzing anonymized query and click-through logs leads to a better understanding of user behaviors and intentions and provides great opportunities to respond to users with an improved search experience. A large-scale provider of SaaS services, Serials Solutions is uniquely positioned to learn from the dataset of queries aggregated from the Summon service, generated by millions of users at hundreds of libraries around the world.&lt;br /&gt;
 &lt;br /&gt;
In this session, we will describe our Relevance Metrics Framework and provide examples of insights gained during its development and implementation. We will also cover recent product changes inspired by these insights. Chandra and Ted, from the Summon dev team, will share insights and outcomes from this ongoing process and highlight how analysis of large-scale query logs helps improve the academic research experience.&lt;br /&gt;
&lt;br /&gt;
== Supporting Gaming in the College Classroom == &lt;br /&gt;
&lt;br /&gt;
*Megan O'Neill, Albion College, moneill AT albion DOT edu&lt;br /&gt;
&lt;br /&gt;
Faculty are increasingly interested both in teaching with games and in gamifying their courses. Introducing digital games and game support for faculty through the library makes a lot of sense, but it comes with a thorny set of issues. This talk will discuss our library's initial steps toward creating a digital gamerspace and game support infrastructure in the library, including:&lt;br /&gt;
1) The scope and acquisitions decisions that make the most sense for us, and 2) some difficulties we've discovered in trying to get our collection, our physical-, digital- and head-space, and our infrastructure up and going.&lt;br /&gt;
There will also be an extremely brief overview of WHY we decided to teach with games and to support gamification, what (if anything) to do about mobile gaming, and where games in education might be going.&lt;br /&gt;
&lt;br /&gt;
== Codecraft ==&lt;br /&gt;
 &lt;br /&gt;
* Devon Smith, OCLC Research, smithde@oclc.org&lt;br /&gt;
&lt;br /&gt;
We can think of and talk about software development as science, engineering, and craft. In this presentation, I'll talk about the craft aspect of software. From Wikipedia[1]: &amp;quot;In English, to describe something as a craft is to describe it as lying somewhere between an art (which relies on talent and technique) and a science (which relies on knowledge). In this sense, the English word craft is roughly equivalent to the ancient Greek term techne.&amp;quot; Of the questions who, what, where, why, when, and how, I will focus on why and how, with a minor in where.&lt;br /&gt;
&lt;br /&gt;
'''N.B.''': This will be a NON-TECHNICAL talk.&lt;br /&gt;
&lt;br /&gt;
[1] https://en.wikipedia.org/wiki/Craft#Classification&lt;br /&gt;
&lt;br /&gt;
== KnowBot: A Tool to Manage Reference and Beyond == &lt;br /&gt;
&lt;br /&gt;
* Sarah Park, Northwest Missouri State University&lt;br /&gt;
* Hong Gyu Han, Northwest Missouri State University&lt;br /&gt;
* Lori Mardis, Northwest Missouri State University&lt;br /&gt;
&lt;br /&gt;
Northwest Missouri State University has developed and used RefPole for collecting and analyzing reference statistics since 2005. RefPole was built to meet librarians’ need to manage reference statistics and share knowledge among librarians, and served as an analysis tool for library leaders making decisions about library operations. RefPole was adequate for internal use; however, it was designed for local access, which kept the collective reference knowledge from being shared beyond the desktop and from being accessed by students and faculty. &lt;br /&gt;
&lt;br /&gt;
In 2011, responding to growing internal and external needs, the library developed a web-based knowledge base management system, KnowBot, in Ruby on Rails. KnowBot offers public search, rating, tag-cloud, librarian, and reporting interfaces. With the additional public interfaces, it also extends reference service to 24/7. Librarians can record responses to questions with graphics and multimedia. The reporting interface features not only simple transactional data but also a multi-dimensional analytic tool that works in real time.&lt;br /&gt;
&lt;br /&gt;
The presenters will demonstrate KnowBot, share the source code, and discuss the use of the knowledge base to meet organizational and public needs.&lt;br /&gt;
&lt;br /&gt;
== Creating a (mostly) integrated Patron Account with SirsiDynix Symphony and ILLiad ==&lt;br /&gt;
&lt;br /&gt;
* Emily Lynema, North Carolina State University Libraries, ejlynema AT ncsu DOT edu&lt;br /&gt;
* Jason Raitz, North Carolina State University Libraries, jcraitz AT ncsu DOT edu&lt;br /&gt;
&lt;br /&gt;
In 2012, the NCSU Libraries at long last replaced a vendor “my account” tool that had been running unsupported for years. With the opportunity to create something new, one of the initial goals was a user experience that more seamlessly combined ILS data from SirsiDynix Symphony with ILL data from ILLiad. As a Kuali OLE beta partner, the NCSU Libraries is looking at an ILS migration within the next few years, so another goal was to build the interface on top of a standard so it would not have to be re-written as part of the migration. And the icing on the cake was a transition from a local Perl-based authentication system to the newer campus-wide Shibboleth authentication.&lt;br /&gt;
&lt;br /&gt;
This presentation will start with our design goals for a new user interface, include a demonstration, and describe the simple techniques used to provide a more integrated view of Symphony and ILLiad patron data. The backbone of the actual application is built using Zend’s PHP Framework and integrates eXtensible Catalog’s NCIP Toolkit to reach out to Symphony for patron data. In addition, we can talk about our successes (and difficulties) using jQuery Mobile to create a mobile view using the same underlying code as the web version. As one of our first Shibboleth applications here in the Libraries, this experience also taught us first-hand about some of the challenges of this type of single sign-on.&lt;br /&gt;
&lt;br /&gt;
== SKOS Name Authority in a DSpace Institutional Repository ==&lt;br /&gt;
&lt;br /&gt;
* Tom Johnson, Oregon State University, thomas.johnson@oregonstate.edu&lt;br /&gt;
&lt;br /&gt;
Name ambiguity is widespread in institutional repositories. Searching by author, users are typically greeted by a variety of misspellings and permutations of initials, collision between contributors with similar names, and other problems inherent in uncontrolled (often user-submitted) data. While DSpace has the technical capacity to use controlled names, it relies on outside authority files (from LoC, for example) to do the heavy lifting. For institutional authors, this leaves a major coverage gap and creates namespace pollution on a vast scale (try searching [http://authorities.loc.gov authorities.loc.gov] for &amp;quot;Johnson, John&amp;quot;, sometime). &lt;br /&gt;
&lt;br /&gt;
OSU is solving this problem with an institutionally scoped, low-maintenance SKOS/FOAF &amp;quot;name authority file&amp;quot;. People in the IR are assigned URIs, and names are maintained as skos:prefLabel, altLabel, or hiddenLabel. We've developed a simple Python application allowing staff to update individual &amp;quot;records&amp;quot;, and code on the DSpace side to access the dataset over SPARQL. This presentation will walk you through where we are now, limitations we've run into, and possibilities for the future.&lt;br /&gt;
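The lookup idea can be sketched with a toy in-memory record (the URI and labels below are invented; the real system stores these as SKOS/FOAF triples and queries them over SPARQL):&lt;br /&gt;

```python
# Toy in-memory 'authority file': each person URI carries a preferred
# label plus alternate and hidden (known-misspelling) forms
AUTHORITY = {
    'http://example.org/person/1': {
        'prefLabel': 'Johnson, Thomas',
        'altLabel': ['Johnson, Tom', 'Johnson, T.'],
        'hiddenLabel': ['Jonson, Thomas'],
    },
}

def resolve(name):
    # any recorded variant resolves to the single person URI
    for uri, labels in AUTHORITY.items():
        if name == labels['prefLabel']:
            return uri
        if name in labels['altLabel'] or name in labels['hiddenLabel']:
            return uri
    return None
```

However a name was entered, the display layer can then show the one prefLabel, which is what collapses the variant spellings in author browse.&lt;br /&gt;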
&lt;br /&gt;
== Meta-Harvesting: Harvesting the Harvesters ==&lt;br /&gt;
&lt;br /&gt;
* Steven Anderson, Boston Public Library, sanderson AT bpl DOT org&lt;br /&gt;
* Eben English, Boston Public Library, eenglish AT bpl DOT org&lt;br /&gt;
&lt;br /&gt;
The emerging Digital Public Library of America (http://dp.la/) has proposed to aggregate digital content for search and discovery from several regional &amp;quot;service hubs&amp;quot; that will provide metadata via an as-yet-unspecified harvest process. As these service hubs are already harvesters of digital content from myriad sources themselves, the potential for &amp;quot;telephone game&amp;quot;-esque data loss and/or transmutation is a significant danger.&lt;br /&gt;
&lt;br /&gt;
This talk will discuss the experience of Digital Commonwealth (http://www.digitalcommonwealth.org/), a statewide digital repository currently in the process of being revamped, refactored, and redesigned by the Boston Public Library using the Hydra Framework. The repository, which aggregates data from over 20 institutions (some of which are themselves aggregators), is also undergoing a massive metadata cleanup effort as records are prepared to be ingested into the DPLA as one of the regional service hubs. Topics will include automated and manual processes for data crosswalking and cleanup, advanced OAI-PMH chops, and the implications of the (at this time still-emerging) metadata standards and APIs being created by the DPLA.&lt;br /&gt;
&lt;br /&gt;
Every crosswalk, transformation, migration, harvest, or export/ingest of metadata requires informed decision making and precise attention to detail. This talk will provide insight into key decision points and potential quagmires, as well as a discussion of the challenges of dealing with heterogeneous data from a wide variety of institutions.&lt;br /&gt;
&lt;br /&gt;
== Pay No More Than £3 // DIY Digital Curation ==&lt;br /&gt;
 &lt;br /&gt;
* Chris Fitzpatrick, World Maritime University, cf AT wmu DOT se&lt;br /&gt;
&lt;br /&gt;
Are you a small library or archive? &amp;lt;br&amp;gt;&lt;br /&gt;
Do you feel you are being held back by limited technical resources?&amp;lt;br&amp;gt;&lt;br /&gt;
Tired of waiting around for the Google Books Library people to reply to your emails? &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Join the club. Open-source software, hackerspaces, dirt cheap storage, cloud computing, and social media make it possible for any institution to start curating digitally. Today.&lt;br /&gt;
This talk will cover some of the guerrilla tactics being employed to drag a small university's large collection into the internet age. &lt;br /&gt;
&lt;br /&gt;
Topics will include: &lt;br /&gt;
*Cheap and effective document scanning methods.&lt;br /&gt;
*Valuable resources found at your local hackerspace / makerspace / fablab.&lt;br /&gt;
*Metadata enrichment for the not-so-rich and NLP for the people.&lt;br /&gt;
*Utilizing social media to crowdsource your collection building.&lt;br /&gt;
*How to post-process, OCR, PDF, and ePub your documents using Free software.&lt;br /&gt;
*Ways to build out a digital repository with no servers, code, or large 2-year grants required. (ok, maybe some code).&lt;br /&gt;
&lt;br /&gt;
== IIIF: One Image Delivery API to Rule Them All ==&lt;br /&gt;
&lt;br /&gt;
* Willy Mene, Stanford University Libraries, wmene AT stanford DOT edu&lt;br /&gt;
* Stuart Snydman, Stanford University Libraries, snydman AT stanford DOT edu&lt;br /&gt;
&lt;br /&gt;
The International Image Interoperability Framework was conceived by a group of research and national libraries determined to achieve the holy grail of seamless sharing and reuse of images in digital image repositories and applications.  By converging on common APIs for image delivery, metadata transmission and search, it is catalyzing the development of a new wave of interoperable image delivery software that will surpass the current crop of image viewers, page turners, and navigation systems, and in so doing give scholars an unprecedented level of consistent and rich access to image-based resources across participating repositories.&lt;br /&gt;
&lt;br /&gt;
The IIIF Image API (http://library.stanford.edu/iiif/image-api) specifies a web service that returns an image in response to a standard http or https request. The URL can specify the region, size, rotation, quality characteristics and format of the requested image. A URL can also be constructed to request basic technical information about the image to support client applications.  The API could be adopted by any image repository or service, and can be used to retrieve static images in response to a properly constructed URL.&lt;br /&gt;
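A request URL under this scheme follows the pattern {base}/{identifier}/{region}/{size}/{rotation}/{quality}.{format}; a minimal builder, with a made-up server name, might look like:&lt;br /&gt;

```python
# Minimal URL builder for the pattern the API defines (the server
# name is made up; 'native' was the default quality in version 1)
def iiif_url(base, identifier, region='full', size='full',
             rotation='0', quality='native', fmt='jpg'):
    # {base}/{identifier}/{region}/{size}/{rotation}/{quality}.{format}
    return '/'.join([base, identifier, region, size, rotation,
                     '{0}.{1}'.format(quality, fmt)])

# crop 600x400 at the origin, scale to 300px wide, rotate 90 degrees
url = iiif_url('https://iiif.example.edu', 'page1',
               region='0,0,600,400', size='300,', rotation='90')
```

Because every parameter lives in the path, any repository that can parse a URL and crop an image can serve the API, which is what makes client viewers portable across repositories.&lt;br /&gt;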
&lt;br /&gt;
In this presentation we will review version 1 of the IIIF image api and validator, demonstrate applications by daring early adopters, and encourage widespread adoption.&lt;br /&gt;
&lt;br /&gt;
== Data-Driven Documents: Visualizing library data with D3.js ==&lt;br /&gt;
&lt;br /&gt;
* Bret Davidson, North Carolina State University Libraries, bret_davidson@ncsu.edu&lt;br /&gt;
&lt;br /&gt;
Several JavaScript libraries have emerged over the past few years for creating rich, interactive visualizations using web standards. Few are as powerful and flexible as D3.js[1]. D3 stands apart by merging web standards with a rich API and a unique approach to binding data to DOM elements, allowing you to apply data-driven transformations to a document. This emphasis on data over presentation has made D3 very popular; D3 is used by several prominent organizations including the New York Times[2], GOV.UK[3], and Trulia[4].&lt;br /&gt;
&lt;br /&gt;
Power usually comes at a cost, and D3 makes you pay with a steeper learning curve than many alternatives. In this talk, I will get you over the hump by introducing the core construct of D3, the Data-Join. I will also discuss when you might want to use D3.js, share some examples, and explore some advanced utilities like scales and shapes. I will close with a brief overview of how we are successfully using D3 at NCSU[5] and why investing time in learning D3 might make sense for your library.&lt;br /&gt;
&lt;br /&gt;
*[1]http://d3js.org/&lt;br /&gt;
*[2]http://www.nytimes.com/interactive/2012/08/24/us/drought-crops.html&lt;br /&gt;
*[3]https://www.gov.uk/performance/dashboard&lt;br /&gt;
*[4]http://trends.truliablog.com/vis/pricerange-boston/&lt;br /&gt;
*[5]http://www.lib.ncsu.edu/dli/projects/spaceassesstool&lt;br /&gt;
&lt;br /&gt;
== ''n'' Characters in Search of an Author ==&lt;br /&gt;
&lt;br /&gt;
* Jay Luker, IT Specialist, Smithsonian Astrophysics Data System, jluker@cfa.harvard.edu&lt;br /&gt;
&lt;br /&gt;
When it comes to author names the disconnect between our metadata and what a user might enter into a search box presents challenges when trying to maximize both precision and recall [0]. When indexing a paper written by &amp;quot;Wäterwheels, A&amp;quot; a goal should be to preserve as much of the original information as possible. However, users searching by author name may frequently omit the diaeresis and search for simply, &amp;quot;Waterwheels&amp;quot;. The reverse of this scenario is also possible, i.e., your decrepit metadata contains only the ASCII, &amp;quot;Supybot, Zoia&amp;quot;, whereas the user enters, &amp;quot;Supybot, Zóia&amp;quot;. If recall is your highest priority the simple solution is to always downgrade to ASCII when indexing and querying. However this strategy sacrifices precision, as you will be unable to provide an &amp;quot;exact&amp;quot; search, necessary in cases where &amp;quot;Hacker, J&amp;quot; and &amp;quot;Häcker, J&amp;quot; really are two distinct authors.&lt;br /&gt;
&lt;br /&gt;
This talk will describe the strategy ADS[1] has devised for addressing common and edge-case problems faced when dealing with author name indexing and searching. I will cover the approach we devised to not only the transliteration issue described above, but also how we deal with author initials vs. full first and/or middle names, authors who have published under different forms of their name, authors who change their names (wha? people get married?!). Our implementation relies on Solr/Lucene[2], but my goal is an 80/20 mix of high- vs. low-level details to keep things both useful and stackgnostic [3].&lt;br /&gt;
&lt;br /&gt;
*[0] http://en.wikipedia.org/wiki/Precision_and_recall&lt;br /&gt;
*[1] http://www.adsabs.harvard.edu/&lt;br /&gt;
*[2] http://lucene.apache.org/solr/&lt;br /&gt;
*[3] http://en.wikipedia.org/wiki/Portmanteau&lt;br /&gt;
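The ASCII-folding trade-off described above can be sketched as indexing both an exact and a folded form of each name (a simplification of what a real analyzer chain would do):&lt;br /&gt;

```python
# Index both an exact form and an ASCII-folded form of each name:
# the folded field maximizes recall, the exact field keeps precision
import unicodedata

def ascii_fold(name):
    # canonical decomposition, then drop the combining marks
    decomposed = unicodedata.normalize('NFKD', name)
    return ''.join(c for c in decomposed if not unicodedata.combining(c))

def index_name(name):
    # loose searches hit 'folded'; exact searches hit 'exact'
    return {'exact': name, 'folded': ascii_fold(name)}
```

A query is then folded the same way before matching the 'folded' field, so 'Waterwheels' finds 'Wäterwheels' while an exact search can still tell 'Hacker, J' and 'Häcker, J' apart.&lt;br /&gt;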
&lt;br /&gt;
== But does it all still work? Testing Drupal with SimpleTest and CasperJS ==&lt;br /&gt;
&lt;br /&gt;
* David Kinzer - Lead Developer, Jenkins Law Library, dkinzer@jenkinslaw.org&lt;br /&gt;
* Chad Nelson  - Developer, Jenkins Law Library, cnelson@jenkinslaw.org&lt;br /&gt;
&lt;br /&gt;
Most developers know that they should be writing tests along with their code, but not every developer knows how or where to get started. This talk will walk through the nuts and bolts of testing a medium-sized Drupal site with many integrated moving parts. We’ll talk about unit testing of individual functions with [http://www.simpletest.org/en/overview.html SimpleTest] (and how that has changed how we write functions), and functional testing of the user interface with [http://casperjs.org/ casperjs]. We will also discuss automating deployment with [http://www.phing.info/ phing], [http://drupal.org/project/drush drush], [http://jenkins-ci.org/ jenkins-ci] &amp;amp; github, which, combined with our tests, removes the “hold-your-breath” feeling before updating our live site. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Category:Code4Lib2013]]&lt;br /&gt;
&lt;br /&gt;
== Relations, Recommendations and PostgreSQL ==&lt;br /&gt;
&lt;br /&gt;
* William Denton, Web Librarian, York University, wdenton@yorku.ca&lt;br /&gt;
* Dan Scott, Systems Librarian, Laurentian University, dscott@laurentian.ca&lt;br /&gt;
&lt;br /&gt;
In 2012, a ragtag group of library hackers from various Ontario &lt;br /&gt;
universities, funded with only train tickets and fueled with Tim Hortons &lt;br /&gt;
coffee, assembled under the Scholars Portal banner to build a common &lt;br /&gt;
circulation data repository and recommendation engine: the Scholars &lt;br /&gt;
Portal Library Usage-based Recommendation Engine (SPLURGE). PostgreSQL, &lt;br /&gt;
the emerging darling of the old-school relational database world, is the &lt;br /&gt;
heart of SPLURGE, and the circulation data for Ontario's 400,000 &lt;br /&gt;
university students is its blood. Two of the contributors to this effort explore the PostgreSQL features &lt;br /&gt;
that SPLURGE uses to ease administration efforts, simplify application &lt;br /&gt;
development, and deliver high performance results. If you don't use &lt;br /&gt;
PostgreSQL for your data, you might want to try it after this &lt;br /&gt;
presentation; if you already do, you'll pick up some new tips and tricks.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== A Cure for Romnesia: SiteStory Web Archiving ==&lt;br /&gt;
&lt;br /&gt;
* Harihar Shankar, Research Library, Los Alamos National Laboratory, harihar@lanl.gov&lt;br /&gt;
&lt;br /&gt;
The web changes constantly, erasing both inconvenient facts and&lt;br /&gt;
fictions.  At web-scale, preservation organizations cannot be expected&lt;br /&gt;
to keep up by using traditional crawling, and they already miss many&lt;br /&gt;
important versions.  The cure for this is to capture the interactions&lt;br /&gt;
between real browsers and the server, and push these into an archive&lt;br /&gt;
for safe keeping rather than trying to guess when pages change.&lt;br /&gt;
&lt;br /&gt;
Every time the Apache Web Server sends data to a browser, SiteStory’s&lt;br /&gt;
Apache Module also pushes this data to the SiteStory Web Archive. The&lt;br /&gt;
same version of a resource will not be archived more than once, no&lt;br /&gt;
matter how many times it has been requested.  The resulting archive is&lt;br /&gt;
effectively representative of a server's entire history, although&lt;br /&gt;
versions of resources that are never requested by a browser will also&lt;br /&gt;
never be archived.&lt;br /&gt;
&lt;br /&gt;
In this presentation I will give an overview of SiteStory, an&lt;br /&gt;
Open-Source project written in Java that runs as an application under&lt;br /&gt;
Tomcat 6 or greater. SiteStory’s Apache Module is written in C. I will&lt;br /&gt;
also demonstrate the TimeMap tool that visualizes versions of a&lt;br /&gt;
resource available in the SiteStory archive. The TimeMap tool is a&lt;br /&gt;
Firefox browser extension that plots versions of a resource on a&lt;br /&gt;
SIMILE timeline. Since the tool uses the Memento protocol, it can&lt;br /&gt;
also display versions of resources available in Memento compliant web&lt;br /&gt;
archives and content management systems.&lt;br /&gt;
&lt;br /&gt;
== Practical Relevance Ranking for 10 Million Books ==&lt;br /&gt;
 &lt;br /&gt;
* Tom Burton-West, University of Michigan Library, tburtonw@umich.edu&lt;br /&gt;
&lt;br /&gt;
[http://www.hathitrust.org/ HathiTrust Full-text search] indexes the full-text and metadata for over 10 million books.  There are many challenges in tuning relevance ranking for a collection of this size.  This talk will discuss some of the underlying issues, some of our experiments to improve relevance ranking, and our ongoing efforts to develop a principled framework for testing changes to relevance ranking.&lt;br /&gt;
&lt;br /&gt;
Some of the topics covered will include:&lt;br /&gt;
&lt;br /&gt;
* Length normalization for indexing the full-text of book-length documents&lt;br /&gt;
* Indexing granularity for books&lt;br /&gt;
&lt;br /&gt;
* Testing new features in Solr 4.0:&lt;br /&gt;
** New ranking formulas that should work better with book-length documents: BM25 and DFR.&lt;br /&gt;
** Grouping/Field Collapsing.  Can we index 3 billion pages and then use Solr's field collapsing feature to rank books according to the most relevant page(s)?&lt;br /&gt;
** Finite State Automata/block trees for storing the in-memory index to the term dictionary.  Will this let us support wildcards/truncation despite over 2 billion unique terms per index?&lt;br /&gt;
&lt;br /&gt;
* Relevance testing methodologies: query log analysis, click models, interleaving, A/B testing, and test-collection-based evaluation.&lt;br /&gt;
&lt;br /&gt;
* Testing of a new high-performance storage system to be installed in early 2013. We will report on any tests we are able to run prior to conference time.&lt;br /&gt;
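As a concrete illustration of the length-normalization and BM25 topics above, here is a toy Okapi BM25 scorer in Python. The formula is the standard textbook one; the collection numbers are hypothetical, and none of this reflects HathiTrust's actual configuration.&lt;br /&gt;

```python
import math

def bm25_score(tf, doc_len, avg_doc_len, n_docs, doc_freq, k1=1.2, b=0.75):
    """Okapi BM25 contribution of a single query term.

    The b parameter controls length normalization: at the usual
    b=0.75 a book-length document is penalized for its sheer size,
    while b=0 ignores document length entirely.
    """
    idf = math.log(1 + (n_docs - doc_freq + 0.5) / (doc_freq + 0.5))
    norm = tf + k1 * (1 - b + b * doc_len / avg_doc_len)
    return idf * tf * (k1 + 1) / norm

# Same raw term frequency in a whole book vs. a short document:
book = bm25_score(tf=20, doc_len=400_000, avg_doc_len=50_000,
                  n_docs=10_000_000, doc_freq=1_000)
page = bm25_score(tf=20, doc_len=5_000, avg_doc_len=50_000,
                  n_docs=10_000_000, doc_freq=1_000)
```

With default parameters the short document outscores the book for the same raw term frequency; tuning b (or indexing at page granularity) changes that trade-off, which is exactly the kind of knob the talk proposes testing.&lt;br /&gt;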
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Category:Code4Lib2013]]&lt;/div&gt;</summary>
		<author><name>Tburtonw</name></author>	</entry>

	<entry>
		<id>https://wiki.code4lib.org/index.php?title=2012_talks_proposals&amp;diff=9811</id>
		<title>2012 talks proposals</title>
		<link rel="alternate" type="text/html" href="https://wiki.code4lib.org/index.php?title=2012_talks_proposals&amp;diff=9811"/>
				<updated>2011-11-19T00:42:13Z</updated>
		
		<summary type="html">&lt;p&gt;Tburtonw: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Deadline for talk submission is ''Sunday, November 20''.&lt;br /&gt;
&lt;br /&gt;
Prepared talks are 20 minutes (including setup and questions), and focus on one or more of the following areas:&lt;br /&gt;
 * tools (some cool new software, software library or integration platform)&lt;br /&gt;
 * specs (how to get the most out of some protocols, or proposals for new ones)&lt;br /&gt;
 * challenges (one or more big problems we should collectively address)&lt;br /&gt;
&lt;br /&gt;
The community will vote on proposals using the criteria of:&lt;br /&gt;
 * usefulness&lt;br /&gt;
 * newness&lt;br /&gt;
 * geekiness&lt;br /&gt;
 * diversity of topics&lt;br /&gt;
&lt;br /&gt;
Please follow the formatting guidelines:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Talk Title: ==&lt;br /&gt;
 &lt;br /&gt;
* Speaker's name, affiliation, and email address&lt;br /&gt;
* Second speaker's name, affiliation, email address, if second speaker&lt;br /&gt;
&lt;br /&gt;
Abstract of no more than 500 words.&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== VuFind 2.0: Why and How? ==&lt;br /&gt;
&lt;br /&gt;
* Demian Katz, Villanova University, demian.katz@villanova.edu&lt;br /&gt;
&lt;br /&gt;
A major new version of the VuFind discovery software is currently in development.  While VuFind 1.x remains extremely popular, some of its components are beginning to show their age.  VuFind 2.0 aims to retain all the strengths of the previous version of the software while making the architecture cleaner, more modern and more standards-based.  This presentation will examine the motivation behind the update, preview some of the new features to look forward to, and discuss the challenges of creating a developer-friendly open source package in PHP.&lt;br /&gt;
&lt;br /&gt;
== Open Source Software Registry ==&lt;br /&gt;
&lt;br /&gt;
* [[User:DataGazetteer|Peter Murray]], LYRASIS, Peter.Murray@lyrasis.org&lt;br /&gt;
&lt;br /&gt;
LYRASIS is creating and shepherding a [[Registry_E-R_Diagram|registry of library open source software]] as part of its [http://www.lyrasis.org/News/Press-Releases/2011/LYRASIS-Receives-Grant-to-Support-Open-Source.aspx grant from the Mellon Foundation to support the adoption of open source software by libraries].  &lt;br /&gt;
The goal of the grant is to help libraries of all types determine if open source software is right for them, and what combination of software, hosting, training, and consulting works for their situation.  &lt;br /&gt;
The registry is intended to become a community exchange point and stimulant for growth of the library open source ecosystem by connecting libraries with projects, service providers, and events.&lt;br /&gt;
&lt;br /&gt;
The first half of this session will demonstrate the registry functions and describe how projects and providers can get involved.  &lt;br /&gt;
The second half of the session will be a brainstorming session on how to expand the functionality and usefulness of the registry.&lt;br /&gt;
&lt;br /&gt;
== Property Graphs And TinkerPop Applications in Digital Libraries ==&lt;br /&gt;
 &lt;br /&gt;
* Brian Tingle, California Digital Library, brian.tingle.cdlib.org@gmail.com&lt;br /&gt;
&lt;br /&gt;
[http://www.tinkerpop.com/ TinkerPop] is an open source software development group focusing on technologies in the [http://en.wikipedia.org/wiki/Graph_database graph database] space.   &lt;br /&gt;
This talk will provide a general introduction to the TinkerPop Graph Stack and the [https://github.com/tinkerpop/gremlin/wiki/Defining-a-Property-Graph property graph model] it uses.  The introduction will include code examples and explanations of the property graph models used by the [http://socialarchive.iath.virginia.edu/ Social Networks in Archival Context] project and show how the historical social graph is exposed as a JSON/REST API implemented by a TinkerPop [https://github.com/tinkerpop/rexster rexster] [https://github.com/tinkerpop/rexster-kibbles Kibble] that contains the application's graph theory logic.  Other graph database applications possible with TinkerPop, such as RDF support and citation analysis, will also be discussed.&lt;br /&gt;
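For readers new to the model: a property graph is nodes and edges where both can carry arbitrary key-value properties. The sketch below is a minimal, library-free Python illustration; the node IDs, edge labels, and archival example are invented, not SNAC's actual schema.&lt;br /&gt;

```python
class PropertyGraph:
    """Tiny in-memory property graph: nodes and edges both carry
    arbitrary key-value properties, as in the TinkerPop model."""

    def __init__(self):
        self.nodes = {}   # node id mapped to its properties dict
        self.edges = []   # (src, label, dst, properties) tuples

    def add_node(self, node_id, **props):
        self.nodes[node_id] = props

    def add_edge(self, src, label, dst, **props):
        self.edges.append((src, label, dst, props))

    def out_edges(self, src, label=None):
        """Follow outgoing edges, optionally filtered by label."""
        return [e for e in self.edges
                if e[0] == src and (label is None or e[1] == label)]

# Hypothetical archival-context example: a person linked to a record.
g = PropertyGraph()
g.add_node("person:1", name="Zora Neale Hurston", type="person")
g.add_node("record:7", title="Correspondence, 1930-1940", type="resource")
g.add_edge("person:1", "creatorOf", "record:7", role="author")
```

Gremlin traversals over a real TinkerPop graph generalize this kind of edge-following, with node and edge properties available for filtering at every step.&lt;br /&gt;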
&lt;br /&gt;
&lt;br /&gt;
== Security in Mind ==&lt;br /&gt;
 &lt;br /&gt;
* Erin Germ, United States Naval Academy, Nimitz Library, germ@usna.edu&lt;br /&gt;
&lt;br /&gt;
I would like to talk about the security of library software.&lt;br /&gt;
&lt;br /&gt;
Over the summer, I discovered a critical vulnerability in a vendor’s software that allowed me to assume any user’s identity on that site (verified), switch to any other user (verified), and assume the role of any user at any other library running this vendor's software (unverified, meaning I did not perform this, as I didn’t want to “hack” another library’s site).&lt;br /&gt;
&lt;br /&gt;
Within a three-hour period, I discovered two vulnerabilities: 1) a minor one allowing me to access backups from any library site, and 2) a critical one.  From start to finish, the examination, discovery of the vulnerability, and execution of a working exploit took less than two hours. The vulnerability was a result of poor cookie implementation. The exploit itself revolved around modifying the cookie and then altering the browser’s permissions by assuming the role of another user.&lt;br /&gt;
&lt;br /&gt;
I do not intend to state which vendor it was, but I will show how I was able to perform this. If needed, I can do further research and “investigation” into other vendors' software to see what I can “find”.&lt;br /&gt;
&lt;br /&gt;
''If selected, I will contact the vendor to inform them that I will present about this at C4L2012. I do not intend to release the name of the vendor.''&lt;br /&gt;
&lt;br /&gt;
== Search Engines and Libraries ==&lt;br /&gt;
 &lt;br /&gt;
* Greg Lindahl, blekko CTO, greg@blekko.com&lt;br /&gt;
&lt;br /&gt;
[https://blekko.com blekko] is a new web-scale search engine which enables end-users to create vertical search engines through a feature called [http://help.blekko.com/index.php/category/slashtags/ slashtags]. Slashtags can contain as few as one or as many as tens of thousands of websites relevant to a narrow or broad topic. We have an extensive set of slashtags curated by a combination of volunteers and an in-house librarian team, or end-users can create and share their own. This talk will cover examples of slashtag creation relevant to libraries, and show how to embed this search into a library website, either using JavaScript or via our API.&lt;br /&gt;
&lt;br /&gt;
''We have exhibited at a couple of library conferences, and have received a lot of interest. blekko is a free service.''&lt;br /&gt;
&lt;br /&gt;
== Beyond code: Versioning data with Git and Mercurial. ==&lt;br /&gt;
&lt;br /&gt;
* Stephanie Collett, California Digital Library, stephanie.collett@ucop.edu&lt;br /&gt;
* Martin Haye, California Digital Library, martin.haye@ucop.edu&lt;br /&gt;
&lt;br /&gt;
Within a relatively short time since their introduction, [http://en.wikipedia.org/wiki/Distributed_Version_Control_System distributed version control systems] (DVCS) like [http://git-scm.com/ Git] and [http://mercurial.selenic.com/ Mercurial] have enjoyed widespread adoption for versioning code. It didn’t take long for the library development community to start discussing the potential for using DVCS within our applications and repositories to version data. After all, many of the features that have made these systems popular for versioning code in the open source community (e.g. lightweight, file-based, compressed, reliable) also make them compelling options for versioning data.  And why write an entire versioning system from scratch if a DVCS can serve as a drop-in solution? At the [http://www.cdlib.org/ California Digital Library] (CDL) we’ve started using Git and Mercurial in some of our applications to version data. This has proven effective in some situations and unworkable in others. This presentation will be a practical case study of CDL’s experiences with using DVCS to version data. We will explain how we’re incorporating Git and Mercurial in our applications, describe our successes and failures and consider the issues involved in repurposing these systems for data versioning.&lt;br /&gt;
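One reason Git in particular suits data versioning is its storage model: every version of a file is stored once, as a zlib-compressed object named by the SHA-1 of a short header plus the content. The sketch below reproduces that blob format in pure Python for illustration; it is not how CDL wires Git into their applications.&lt;br /&gt;

```python
import hashlib
import zlib

def git_blob_id(data: bytes) -> str:
    """Compute the ID Git would assign to this file content.

    Git hashes a small header ('blob', the byte length, a NUL byte)
    followed by the raw content, so identical data always collapses
    to one stored object regardless of filename or history.
    """
    header = b"blob " + str(len(data)).encode() + b"\x00"
    return hashlib.sha1(header + data).hexdigest()

def store_blob(data: bytes, store: dict) -> str:
    """Deduplicated, compressed storage keyed by content hash."""
    oid = git_blob_id(data)
    if oid not in store:
        store[oid] = zlib.compress(data)
    return oid

# Two versions of a small data file share nothing but cost little:
store = {}
v1 = store_blob(b"id,title\n1,Moby Dick\n", store)
v2 = store_blob(b"id,title\n1,Moby Dick\n2,Walden\n", store)
```

Because the ID is derived from content alone, committing an unchanged data file adds nothing new to storage, which is much of what makes a DVCS a plausible drop-in versioning layer for data.&lt;br /&gt;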
&lt;br /&gt;
==Design for Developers==&lt;br /&gt;
&lt;br /&gt;
*Lisa Kurt, University of Nevada, Reno, lkurt@unr.edu&lt;br /&gt;
&lt;br /&gt;
Users expect good design. This talk will delve into what makes really great design, what to look for, and how to do it. Learn the principles of great design to take your applications, user interfaces, and projects to a higher level. With years of experience in graphic design and illustration, Lisa will discuss design principles, trends, process, tools, and development. Design examples will be from her own projects as well as a variety from industry. You’ll walk away with design knowledge that you can apply immediately to a variety of applications and a number of top notch go-to resources to get you up and running.&lt;br /&gt;
&lt;br /&gt;
==Building research applications with Mendeley==&lt;br /&gt;
&lt;br /&gt;
* William Gunn, Mendeley, william.gunn@mendeley.com (@mrgunn)&lt;br /&gt;
&lt;br /&gt;
This is partly a tool talk and partly a big idea one.&lt;br /&gt;
&lt;br /&gt;
Mendeley has built the world's largest open database of research and we've now begun to collect some interesting social metadata around the document metadata. I would like to share with the Code4Lib attendees information about using this resource to do things within your application that have previously been impossible for the library community, or in some cases impossible without expensive database subscriptions. One thing that's now possible is to augment catalog search by surfacing information about content usage, allowing people to not only find things matching a query, but popular things or things read by their colleagues. In addition to augmenting search, you can also use this information to augment discovery. Imagine an online exhibit of artifacts from a newly discovered dig not just linking to papers which discuss the artifact, but linking to really good, interesting papers about the place and the people who made the artifacts. So the big idea is: &amp;quot;How will looking at the literature from a broader perspective than simple citation analysis change how research is done and communicated? How can we build tools that make this process easier and faster?&amp;quot; I can show some examples of applications that have been built using the Mendeley and PLoS APIs to begin to address this question, and I can also present results from Mendeley's developer challenge, which show what kinds of applications researchers are looking for and what kinds of applications people are building, and illustrate some interesting places where the two don't overlap.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Your UI can make or break the application (to the user, anyway)==&lt;br /&gt;
&lt;br /&gt;
* Robin Schaaf, University of Notre Dame, schaaf.4@nd.edu&lt;br /&gt;
&lt;br /&gt;
UI development is hard and too often ends up as an afterthought for computer programmers - if you were a CS major in college I'll bet you didn't have many, if any, design courses.  I'll talk about how to involve the users upfront with design and some common pitfalls of this approach.  I'll also make a case for why you should do the screen design before a single line of code is written.  And I'll throw in some ideas for increasing usability and attractiveness of your web applications.  I'd like to make a case study of the UI development of our open source ERMS.&lt;br /&gt;
&lt;br /&gt;
==Why Nobody Knows How Big The Library Really Is - Perspective of a Library Outsider Turned Insider==&lt;br /&gt;
&lt;br /&gt;
* Patrick Berry, California State University, Chico, pberry@csuchico.edu&lt;br /&gt;
&lt;br /&gt;
In this talk I would like to bring the perspective of an &amp;quot;outsider&amp;quot; (although an avowed IT insider) to let you know that people don't understand the full scope of the library.  As we &amp;quot;rethink education&amp;quot;, it is incumbent upon us to help educate our institutions as to the scope of the library.  I will present some of the tactics I'm employing to help people outside, and in some cases inside, the library to understand our size and the value we bring to the institution.&lt;br /&gt;
&lt;br /&gt;
==Building a URL Management Module using the Concrete5 Package Architecture==&lt;br /&gt;
&lt;br /&gt;
* David Uspal, Villanova University, david.uspal@villanova.edu&lt;br /&gt;
&lt;br /&gt;
Keeping track of URLs utilized across a large website such as a university library, and keeping that content up to date for subject and course guides, can be a pain, and as an open source shop, we’d like an open source solution for this issue.  For this talk, I intend to detail our solution to this issue by walking step-by-step through the building process for our URL Management module -- including why a new solution was necessary; a quick rundown of our CMS ([http://www.concrete5.org Concrete5], a CMS that isn’t Drupal); utilizing the Concrete5 APIs to isolate our solution from core code (to avoid complications caused by core updates); how our solution was integrated into the CMS architecture for easy installation; and our future plans for the project.&lt;br /&gt;
&lt;br /&gt;
==Building an NCIP connector to OpenSRF to facilitate resource sharing==&lt;br /&gt;
&lt;br /&gt;
* Jon Scott, Lyrasis, jon_scott@wsu.edu and Kyle Banerjee, Orbis Cascade Alliance, banerjek@uoregon.edu &lt;br /&gt;
&lt;br /&gt;
How do you reverse engineer any protocol to provide a new service? Humans (and worse yet, committees) often design verbose protocols built around use cases that don't line up with current reality. To compound difficulties, the contents of protocol containers are not sufficiently defined/predictable, and the only assistance available is sketchy documentation and kind individuals on the internet willing to share what they learned via trial by fire.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
NCIP (NISO Circulation Interchange Protocol) is an open standard that defines a set of messages to support the exchange of circulation data between disparate circulation, interlibrary loan, and related applications -- widespread adoption of NCIP would eliminate huge amounts of duplicate processing in separate systems. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
This presentation discusses how we learned enough about NCIP and OpenSRF from scratch to build an NCIP responder for Evergreen to facilitate resource sharing in a large consortium that relies on over 20 different ILSes.&lt;br /&gt;
&lt;br /&gt;
==Practical Agile: What's Working for Stanford, Blacklight, and Hydra==&lt;br /&gt;
&lt;br /&gt;
* Naomi Dushay, Stanford University Libraries, ndushay@stanford.edu&lt;br /&gt;
&lt;br /&gt;
Agile development techniques can be difficult to adopt in the context of library software development.  Maybe your shop has only one or two developers, or you always have too many simultaneous projects.   Maybe your new projects can’t be started until 27 librarians reach consensus on the specifications.&lt;br /&gt;
&lt;br /&gt;
This talk will present successful Agile- and Silicon-Valley-inspired practices we’ve adopted at Stanford and/or in the Blacklight and Hydra projects.  We’ve targeted developer happiness as well as improved productivity with our recent changes.  User stories, dead week, sight lines … it’ll be a grab bag of goodies to bring back to your institution, including some ideas on how to adopt these practices without overt management buy in.&lt;br /&gt;
&lt;br /&gt;
==Quick and &amp;lt;strike&amp;gt;Dirty&amp;lt;/strike&amp;gt; Clean Usability: Rapid Prototyping with Bootstrap==&lt;br /&gt;
&lt;br /&gt;
* Shaun Ellis, Princeton University Libraries, shaune@princeton.edu &lt;br /&gt;
&lt;br /&gt;
''&amp;quot;The code itself is unimportant; a project is only as useful as people actually find it.&amp;quot;  - Linus Torvalds'' [http://bit.ly/p4uuyy]&lt;br /&gt;
&lt;br /&gt;
Usability has been a buzzword for some time now, but what is the process for making the transition toward a better user experience, and hence, better designed library sites?  I will discuss one facet of the process my team is using to redesign the Finding Aids site for Princeton University Libraries (still in development).  The approach involves the use of rapid prototyping, with Bootstrap [http://twitter.github.com/bootstrap/], to make sure we are on track with what users and stakeholders expect up front, and throughout the development process.&lt;br /&gt;
&lt;br /&gt;
Because Bootstrap allows for early and iterative user feedback, it is more effective than the historic Photoshop mockups/wireframe technique.  The Photoshop approach allows stakeholders to test the look, but not the feel -- and often leaves developers scratching their heads.  Being a CSS/HTML/Javascript grid-based framework, Bootstrap makes it easy for anyone with a bit of HTML/CSS chops to quickly build slick, interactive prototypes right in the browser -- tangible solutions which can be shared, evaluated, revised, and followed by all stakeholders (see Minimum Viable Products [http://en.wikipedia.org/wiki/Minimum_viable_product]).  Efficiency is multiplied because the customized prototypes can flow directly into production use, as is the goal with iterative development approaches, such as the Agile methodology.&lt;br /&gt;
&lt;br /&gt;
While Bootstrap is not the only framework that offers grid-based layout, development is expedited and usability is enhanced by Bootstrap's use of &amp;quot;prefabbed&amp;quot; conventional UI patterns, clean typography, and lean Javascript for interactivity.   Furthermore, out of the box Bootstrap comes in a fairly neutral palette, so focus remains on usability, and does not devolve into premature discussions of color or branding choices.  Finally, Less can be a powerful tool in conjunction with Bootstrap, but is not necessary.  I will discuss the pros and cons, and offer examples of how to get up and running with or without Less.&lt;br /&gt;
&lt;br /&gt;
==Search Engine Relevancy Tuning - A Static Rank Framework for Solr/Lucene==&lt;br /&gt;
&lt;br /&gt;
* Mike Schultz, Amazon.com (formerly Summon Search Architect) mike.schultz@gmail.com&lt;br /&gt;
&lt;br /&gt;
Solr/Lucene provides a lot of flexibility for adjusting relevancy scoring and improving search results.  Roughly speaking there are two areas of concern: Firstly, a 'dynamic rank' calculation that is a function of the user query and document text fields.  And secondly, a 'static rank' which is independent of the query and generally is a function of non-text document metadata.  In this talk I will outline an easily understood, hand-tunable static rank system with a minimal number of parameters.&lt;br /&gt;
&lt;br /&gt;
The obvious major feature of a search engine is to return results relevant to a user query.  Perhaps less obvious is the huge role query independent document features play in achieving that. Google's PageRank is an example of a static ranking of web pages based on links and other secret sauce.  In the Summon service, our 800 million documents have features like publication date, document type, citation count and Boolean features like the-article-is-peer-reviewed.  These fields aren't textual and remain 'static' from query to query, but need to influence a document's relevancy score.  In our search results, with all query related features being equal, we'd rather have more recent documents above older ones, Journals above Newspapers, and articles that are peer reviewed above those that are not. The static rank system I will describe achieves this and has the following features:&lt;br /&gt;
&lt;br /&gt;
* Query-time only calculation - nothing is baked into the index - with parameters adjustable at query time.&lt;br /&gt;
* The system is based on a signal metaphor where components are 'wired' together.  System components allow multiplexing, amplifying, summing, tunable band-pass filtering, string-to-value-mapping all with a bare minimum of parameters.&lt;br /&gt;
* An intuitive approach for mixing dynamic and static rank that is more effective than simply adding or multiplying.&lt;br /&gt;
* A way of equating disparate static metadata types that leads to understandable results ordering.&lt;br /&gt;
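To make the "signal" metaphor concrete, here is a hypothetical static-rank combiner in Python. The component names, weights, decay constant, and mixing rule are all invented for illustration; they are not Summon's actual parameters.&lt;br /&gt;

```python
import math

# Hypothetical per-type boosts: journals above newspapers, etc.
TYPE_BOOST = {"journal_article": 1.0, "book": 0.9, "newspaper": 0.6}

def recency_signal(pub_year, now=2013, half_life=10.0):
    """Smooth decay with age; a crude stand-in for a tunable filter."""
    age = max(0, now - pub_year)
    return math.exp(-age / half_life)

def static_rank(doc):
    """Sum weighted signals derived from non-text document metadata."""
    s = 2.0 * recency_signal(doc["year"])
    s += 1.0 * TYPE_BOOST.get(doc["type"], 0.5)
    s += 0.5 * (1.0 if doc["peer_reviewed"] else 0.0)
    s += 0.3 * math.log1p(doc["citations"])
    return s

def final_score(dynamic, static, alpha=0.8):
    """Blend query-dependent and query-independent scores.  A plain
    weighted sum, simpler than the mixing the talk advocates."""
    return alpha * dynamic + (1 - alpha) * static

# With equal dynamic scores, a recent peer-reviewed article wins:
article = static_rank({"year": 2012, "type": "journal_article",
                       "peer_reviewed": True, "citations": 10})
old_news = static_rank({"year": 1990, "type": "newspaper",
                        "peer_reviewed": False, "citations": 0})
```

Because everything is computed from metadata fields at query time, nothing is baked into the index and each weight stays hand-tunable, matching the first feature in the list above.&lt;br /&gt;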
&lt;br /&gt;
==Submitting Digitized Book-like things to the Internet Archive==&lt;br /&gt;
&lt;br /&gt;
* Joel Richard, Smithsonian Institution Libraries, richardjm@si.edu&lt;br /&gt;
&lt;br /&gt;
The Smithsonian Libraries has submitted thousands of out-of-copyright items to the Internet Archive over the years. Specifically in relation to the Biodiversity Heritage Library, we have developed an in-house boutique scanning and upload process that became a learning experience in automated uploading to the Archive. As part of the software development, we created a whitepaper that details the combined learning experiences of the Smithsonian Libraries and the Missouri Botanical Garden. We will discuss some of the contents of this whitepaper in the context of our scanning process and the manner in which we upload items to the Archive. &lt;br /&gt;
&lt;br /&gt;
Our talk will include a discussion of the types of files and their formats used by the Archive, processes that the Archive performs on uploaded items, ways of interacting with and affecting those processes, potential pitfalls and solutions that you may encounter when uploading, and tools that the Archive provides to help monitor and manage your uploaded documents. &lt;br /&gt;
&lt;br /&gt;
Finally, we'll wrap up with a brief summary of how to use things that are on the Internet Archive in your own websites.&lt;br /&gt;
&lt;br /&gt;
== So... you think you want to Host a Code4Lib National Conference, do you? ==&lt;br /&gt;
&lt;br /&gt;
* Elizabeth Duell, Orbis Cascade Alliance, eduell@uoregon.edu&lt;br /&gt;
&lt;br /&gt;
Are you interested in hosting your own Code4Lib Conference? Do you know what it would take? What does BEO stand for? What does F&amp;amp;B Minimum mean? Who would you talk to for support/mentoring? There are so many things to think about: internet support, venue size, rooming blocks, contracts, dietary restrictions and coffee (can't forget the coffee!), just to name a few. Putting together a conference of any size can look daunting, so let's take the scary out of it and replace it with a can-do attitude!&lt;br /&gt;
&lt;br /&gt;
Be a step ahead of the game by learning from the people behind the curtain. Ask questions and be given templates/cheat sheets! &lt;br /&gt;
&lt;br /&gt;
== HTML5 Microdata and Schema.org ==&lt;br /&gt;
 &lt;br /&gt;
* Jason Ronallo, North Carolina State University Libraries, jason_ronallo@ncsu.edu&lt;br /&gt;
&lt;br /&gt;
When the big search engines announced support for HTML5 microdata and the schema.org vocabularies, the balance of power for semantic markup in HTML shifted. &lt;br /&gt;
* What is microdata? &lt;br /&gt;
* Where does microdata fit with regards to other approaches like RDFa and microformats? &lt;br /&gt;
* Where do libraries stand in the worldview of Schema.org and what can they do about it? &lt;br /&gt;
* How can implementing microdata and schema.org optimize your sites for search engines?&lt;br /&gt;
* What tools are available?&lt;br /&gt;
&lt;br /&gt;
== Stack View: A Library Browsing Tool ==&lt;br /&gt;
 &lt;br /&gt;
* Annie Cain, Harvard Library Innovation Lab, acain@law.harvard.edu&lt;br /&gt;
&lt;br /&gt;
In an effort to recreate and build upon the traditional method of browsing a physical library, we used catalog data, including dimensions and page count, to create a [http://librarylab.law.harvard.edu/projects/stackview/ virtual shelf].&lt;br /&gt;
&lt;br /&gt;
This CSS and JavaScript backed visualization allows items to sit on any number of different shelves, really taking advantage of its digital nature.  See how we built Stack View on top of our data and learn how you can create shelves of your own using our open source code.&lt;br /&gt;
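The heart of the idea is translating catalog metadata into on-screen geometry. Here is a hypothetical version of that mapping in Python; the scale factors and clamping bounds are invented, not Stack View's actual values.&lt;br /&gt;

```python
def shelf_item(title, height_cm, page_count,
               px_per_cm=6, px_per_page=0.12,
               min_thickness=10, max_thickness=120):
    """Map physical dimensions to pixel geometry for a virtual shelf.

    Height comes from the cataloged size; spine thickness is
    approximated from page count, clamped so outliers stay legible.
    """
    thickness = page_count * px_per_page
    thickness = max(min_thickness, min(max_thickness, thickness))
    return {
        "title": title,
        "height_px": round(height_cm * px_per_cm),
        "width_px": round(thickness),
    }

# A 22 cm, 720-page book rendered as one spine on the shelf:
book = shelf_item("Moby Dick", height_cm=22, page_count=720)
```

Since each item is just a sized box, the same record can be placed on any number of shelves, which is the digital-nature advantage the abstract mentions.&lt;br /&gt;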
&lt;br /&gt;
== “Linked-Data-Ready” Software for Libraries ==&lt;br /&gt;
&lt;br /&gt;
* Jennifer Bowen, University of Rochester River Campus Libraries, jbowen@library.rochester.edu&lt;br /&gt;
&lt;br /&gt;
Linked data is poised to replace MARC as the basis for the new library bibliographic framework.  For libraries to benefit from linked data, they must learn about it, experiment with it, demonstrate its usefulness, and take a leadership role in its deployment. &lt;br /&gt;
&lt;br /&gt;
The eXtensible Catalog Organization (XCO) offers open-source software for libraries that is “linked-data-ready.” XC software prepares MARC and Dublin Core metadata for exposure to the semantic web, incorporating FRBR Group 1 entities and registered vocabularies for RDA elements and roles. This presentation will include a software demonstration, proposed software architecture for creation and management of linked data, a vision for how libraries can migrate from MARC to linked data, and an update on XCO progress toward linked data goals.&lt;br /&gt;
&lt;br /&gt;
== How people search the library from a single search box ==&lt;br /&gt;
&lt;br /&gt;
* Cory Lown, North Carolina State University Libraries, cory_lown@ncsu.edu&lt;br /&gt;
&lt;br /&gt;
Searching the library is complex. There's the catalog, article databases, journal title and database title look-ups, the library website, finding aids, knowledge bases, etc. How would users search if they could get to all of these resources from a single search box? I'll share what we've learned about single search at NCSU Libraries by tracking use of QuickSearch (http://www.lib.ncsu.edu/search/index.php?q=aerospace+engineering), our home-grown unified search application. As part of this talk I will suggest low-cost ways to collect real world use data that can be applied to improve search. I will try to convince you that data collection must be carefully planned and designed to be an effective tool to help you understand what your users are telling you through their behavior. I will talk about how the fragmented library resource environment challenges us to provide useful and understandable search environments. Finally, I will share findings from analyzing millions of user transactions about how people search the library from a production single search box at a large university library.&lt;br /&gt;
&lt;br /&gt;
== An Incremental Approach to Archival Description and Access ==&lt;br /&gt;
&lt;br /&gt;
* Chela Scott Weber, New York University Libraries, chelascott@gmail.com&lt;br /&gt;
* Mark A. Matienzo, Yale University Library, mark@matienzo.org&lt;br /&gt;
&lt;br /&gt;
''This is placeholder text; description coming shortly''&lt;br /&gt;
&lt;br /&gt;
== Making the Easy Things Easy: A Generic ILS API ==&lt;br /&gt;
&lt;br /&gt;
* Wayne Schneider, Hennepin County Library, wschneider@hclib.org&lt;br /&gt;
&lt;br /&gt;
Some stuff we try to do is complicated, because, let's face it, library data is hard. Some stuff, on the other hand, should be easy. Given an item identifier, I should be able to look at item availability. Given a title identifier, I should be able to place a request. And no, I shouldn't have to parse through the NCIP specification or write a SIP client to do it.&lt;br /&gt;
&lt;br /&gt;
This talk will present work we have done on a web services approach to an API for traditional library transactional data, including example applications.&lt;br /&gt;
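To make the "easy things" concrete, here is a minimal sketch (not the actual Hennepin County API) of how an availability lookup might answer in JSON, with an in-memory table standing in for the ILS backend:&lt;br /&gt;

```python
import json

# Hypothetical in-memory item table standing in for the ILS backend.
ITEMS = {
    "i1001": {"title": "The Name of the Rose", "status": "on_shelf", "location": "Central"},
    "i1002": {"title": "Foundation", "status": "checked_out", "location": "Branch 3"},
}

def item_availability(item_id):
    """Answer 'is this item available?' as a small JSON document,
    the way a generic ILS web service might."""
    item = ITEMS.get(item_id)
    if item is None:
        return json.dumps({"error": "unknown item", "id": item_id})
    return json.dumps({
        "id": item_id,
        "available": item["status"] == "on_shelf",
        "location": item["location"],
    })

availability = json.loads(item_availability("i1001"))
```

The point of the sketch is the shape of the contract, not the implementation: one identifier in, one small JSON document out, no NCIP parsing or SIP client required.&lt;br /&gt;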
&lt;br /&gt;
== Your Catalog in Linked Data==&lt;br /&gt;
&lt;br /&gt;
* Tom Johnson, Oregon State University Libraries, thomas.johnson@oregonstate.edu&lt;br /&gt;
&lt;br /&gt;
Linked Library Data activity over the last year has seen bibliographic data sets and vocabularies proliferating from traditional library sources. We've reached a point where regular libraries don't have to go it alone to be on the Semantic Web. There is a quickly growing pool of things we can actually ''link to'', and everyone's existing data can be immediately enriched by participating.&lt;br /&gt;
&lt;br /&gt;
This is a quick and dirty road to getting your catalog onto the Linked Data web. The talk will take you from start to finish, using Free Software tools to establish a namespace, put up a SPARQL endpoint, make a simple data model, convert MARC records to RDF, and link the results to major existing data sets (skipping conveniently over pesky processing time). A small amount of &amp;quot;why linked data?&amp;quot; content will be covered, but the primary goal is to leave you able to reproduce the process and start linking your catalog into the web of data. Appropriate documentation will be on the web.&lt;br /&gt;
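To give a flavor of the MARC-to-RDF step, here is a toy converter that emits triples as Python (subject, predicate, object) tuples; the field-to-Dublin Core mapping and base URI are illustrative assumptions, not a standard crosswalk:&lt;br /&gt;

```python
DC = "http://purl.org/dc/terms/"

def marc_to_triples(record, base_uri):
    """Map a few MARC fields to RDF triples (subject, predicate, object).
    The field-to-predicate mapping here is illustrative, not a standard crosswalk."""
    subject = base_uri + record["001"]
    mapping = {"245a": DC + "title", "100a": DC + "creator", "260b": DC + "publisher"}
    triples = []
    for field, predicate in mapping.items():
        value = record.get(field)
        if value:
            triples.append((subject, predicate, value))
    return triples

record = {"001": "ocm12345", "245a": "Linked Data basics", "100a": "Johnson, Tom"}
triples = marc_to_triples(record, "http://example.org/catalog/")
```

A real pipeline would serialize these tuples to N-Triples or Turtle and load them into the SPARQL endpoint; the tuple form keeps the example short.&lt;br /&gt;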
&lt;br /&gt;
== Getting the Library into the Learning Management System using Basic LTI == &lt;br /&gt;
&lt;br /&gt;
* David Walker, California State University, dwalker@calstate.edu&lt;br /&gt;
&lt;br /&gt;
The integration of library resources into learning management systems (LMS) has long been something of a holy grail for academic libraries.  The ability to deliver targeted library systems and services to students and faculty directly within their online course would greatly simplify access to library resources.  Yet, the technical barriers to achieving that goal have to date been formidable.  &lt;br /&gt;
&lt;br /&gt;
The recently released Learning Tool Interoperability (LTI) protocol, developed by IMS, now greatly simplifies this process by allowing libraries (and others) to develop and maintain “tools” that function like a native plugin or building block within the LMS, but ultimately live outside of it.  In this presentation, David will provide an overview of Basic LTI, a simplified subset (or profile) of the wider LTI protocol, showing how libraries can use this to easily integrate their external systems into any major LMS.  He’ll showcase the work Cal State has done to do just that.&lt;br /&gt;
&lt;br /&gt;
== Turn your Library Proxy Server into a Honeypot ==&lt;br /&gt;
 &lt;br /&gt;
* Calvin Mah, Simon Fraser University, calvinm@sfu.ca (@calvinmah)&lt;br /&gt;
&lt;br /&gt;
EZproxy has provided libraries with a useful tool for giving patrons offsite online access to licensed electronic resources.  This has not gone unnoticed by the unscrupulous users of the Internet who are either unwilling or unable to obtain legitimate access to these materials for themselves.  Instead, they buy or share hacked university computing accounts for unauthorized access.  When undetected, abuse of compromised university accounts can lead to abuse of vendor resources, which in turn can get the entire campus range of IP addresses blocked from accessing that resource.&lt;br /&gt;
&lt;br /&gt;
Simon Fraser University Library has been proactively detecting and thwarting unauthorized attempts through log analysis.  Since SFU began analyzing its EZproxy logs, the number of new SFU login credentials posted and shared in publicly accessible forums has been reduced to zero.  Since our log monitoring began in 2008, the annual average number of SFU login credentials that are compromised or hacked is 140.  Instead of being a single point of weakness in campus IT security, the library’s proxy server is a honeypot exposing weak passwords, keystroke-logging trojans installed on patron PCs, and campus network password sniffers.&lt;br /&gt;
&lt;br /&gt;
This talk will discuss techniques such as geomapping login attempts, strategies such as seeding phishing attempts and tools such as statistical log analysis used in detecting compromised login credentials.  &lt;br /&gt;
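As a toy illustration of the log-analysis idea (the detection logic here is invented for the example, not SFU's actual method), one crude signal is an account logging in from implausibly many countries:&lt;br /&gt;

```python
from collections import defaultdict

def flag_shared_accounts(log_entries, max_countries=2):
    """Flag accounts whose logins come from suspiciously many distinct
    countries, a crude stand-in for geomapping login attempts."""
    countries = defaultdict(set)
    for user, country in log_entries:
        countries[user].add(country)
    # An account seen from more than max_countries countries is suspect.
    return sorted(user for user, seen in countries.items() if len(seen) > max_countries)

logins = [("alice", "CA"), ("alice", "CA"),
          ("bob", "NG"), ("bob", "RU"), ("bob", "CN")]
suspects = flag_shared_accounts(logins, max_countries=2)
```

Real detection would combine several such signals (geography, timing, request volume) with statistical analysis of the full proxy log.&lt;br /&gt;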
&lt;br /&gt;
== Relevance Ranking in the Scholarly Domain ==&lt;br /&gt;
&lt;br /&gt;
* Tamar Sadeh, PhD, Ex Libris Group, tamar.sadeh@exlibrisgroup.com&lt;br /&gt;
&lt;br /&gt;
The greatest challenge for discovery systems is how to provide users with the most relevant search results, given the immense landscape of available content. In a manner that is similar to human interaction between two parties, in which each person adjusts to the other in tone, language, and subject matter, discovery systems would ideally be sophisticated and flexible enough to adjust their algorithms to individual users and each user’s information needs. &lt;br /&gt;
&lt;br /&gt;
When evaluating the relevance of an item to a specific user in a specific context, relevance-ranking algorithms need to take into account, in addition to the degree to which the item matches the query, information that is not embodied in the item itself. Such information, which includes the item’s scholarly value, the type of search that the user is conducting (e.g., an exploratory search or a known-item search), and other factors, enables a discovery system to fulfill user expectations that have been shaped by experience with Web search engines.  &lt;br /&gt;
&lt;br /&gt;
The session will focus on the challenges of developing and evaluating relevance-ranking algorithms for the scholarly domain. Examples will be drawn mainly from the relevance-ranking technology deployed by the Ex Libris Primo discovery solution. &lt;br /&gt;
&lt;br /&gt;
== Mobile Library Catalog using Z39.50 ==&lt;br /&gt;
 &lt;br /&gt;
* James Paul Muir, The Ohio State University, muir.29@osu.edu&lt;br /&gt;
&lt;br /&gt;
This talk puts a new spin on an age-old technology by creating a universal interface that exposes any Z39.50-capable library catalog as a simple, useful, and universal REST API for use in native mobile apps and on the mobile web.&lt;br /&gt;
&lt;br /&gt;
The talk includes the exploration and demonstration of the Ohio State University’s native app “OSU Mobile” for iOS and Android and shows how the library catalog search was integrated.&lt;br /&gt;
&lt;br /&gt;
The backbone of the project is a REST API, which was created in a weekend using a PHP framework that translates OPAC XML results from the Z39.50 interface into mobile-friendly JSON formatting.&lt;br /&gt;
&lt;br /&gt;
Raw Z39.50 search results contain all MARC information as well as local holdings.  &lt;br /&gt;
Configurable search fields and the ability to select which fields to include in the JSON output make this solution a perfect fit for any Z39.50-capable library catalog.&lt;br /&gt;
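As a rough illustration of that field-selection step (assuming records already parsed into Python dicts; the actual project uses a PHP framework against the OPAC's XML), one might keep only the configured MARC fields and emit compact, mobile-friendly JSON:&lt;br /&gt;

```python
import json

# Configurable list of MARC fields to expose, as described above.
SELECTED_FIELDS = ["245", "100", "260"]

def record_to_mobile_json(marc_fields):
    """Keep only the configured fields from a parsed Z39.50/MARC record
    and emit compact JSON for mobile clients."""
    slim = {tag: value for tag, value in marc_fields.items() if tag in SELECTED_FIELDS}
    return json.dumps(slim, separators=(",", ":"))

raw = {"245": "Mobile catalogs", "100": "Muir, James", "520": "A long summary..."}
slim_json = record_to_mobile_json(raw)
```

Dropping the unselected fields (here the hypothetical 520 summary) is what keeps payloads small enough for mobile use.&lt;br /&gt;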
  &lt;br /&gt;
Looking forward, possibilities for expansion include the use of Off Campus Sign-In for online resources so mobile patrons can directly access online resources from a smartphone (included in the Android version of OSU Mobile), as well as integration with library patron accounts.&lt;br /&gt;
&lt;br /&gt;
Enjoy this alternative to writing a custom OPAC adapter or using a 3rd party service for exposing library records and use the proven and universal Z39.50 interface directly against your library catalog. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== DMPTool: Guidance and Resources for your data management plan ==&lt;br /&gt;
 &lt;br /&gt;
* Marisa Strong, California Digital Library, marisa.strong@ucop.edu&lt;br /&gt;
&lt;br /&gt;
A number of U.S. funding agencies, such as the National Science Foundation, require researchers to supply detailed, cost-effective plans for managing research data, called Data Management Plans.  To help researchers meet this requirement, the California Digital Library (CDL), along with several partner organizations, developed the DMPTool. The goal of the DMPTool is to provide researchers with guidance, links to resources, and help with writing data management plans.&lt;br /&gt;
&lt;br /&gt;
The tool presents the requirements specific to a funding agency along with detailed help in a wizard-style interface.  Users can create, edit, preview, and export a plan into various formats. Institutions can also announce events, workshops, and data management information via the DMPTool blog available from within the tool.&lt;br /&gt;
&lt;br /&gt;
This open-source, Ruby on Rails software tool is hosted on a SLES VM by CDL.  The tool is integrated with Shibboleth, federated single sign-on software, which allows users to login via their home institutions.  We had a geographically distributed development team sharing code on Bitbucket.&lt;br /&gt;
&lt;br /&gt;
This talk will demo the features of the application as well as highlight the agile development practices and methods used to successfully design and build the application on an aggressive schedule.&lt;br /&gt;
&lt;br /&gt;
== Lies, Damned Lies, and Lines of Code Per Day ==&lt;br /&gt;
 &lt;br /&gt;
* James Stuart, Columbia University, james.stuart@columbia.edu&lt;br /&gt;
&lt;br /&gt;
We've all heard about that one study that showed that Pair Programming was 20% more efficient than working alone. Or maybe you saw on a blog that study that showed that programmers who write fewer lines of code per day are more efficient...or was it less efficient? And of course, we all know that programmers who work in (Ruby|Python|Java|C|Erlang) have been shown to be more efficient.&lt;br /&gt;
&lt;br /&gt;
A quick examination of some of the research surrounding programming efficiency and methodology, with a focus on personal productivity, and how to incorporate the more believable research into your own team's workflow.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==An Anatomy of a Book Viewer==&lt;br /&gt;
&lt;br /&gt;
*Mohammed Abuouda, Bibliotheca Alexandrina, mohammed.abuouda@bibalex.org&lt;br /&gt;
&lt;br /&gt;
Bibliotheca Alexandrina (BA) hosts 210,000 digital books in different languages, available at http://dar.bibalex.org. It includes the largest collection of digitized Arabic books. Using open source tools, BA has developed a modular book viewer that can be deployed in any environment to provide users with a great personalized reading experience. BA’s book viewer provides several services that make this possible: morphological search in different languages, localization, server load balancing, scalability, and image processing. Personalization features include different types of annotation, such as sticky notes, highlighting, and underlining. It also provides the ability to embed the viewer in any webpage and change its skin.&lt;br /&gt;
&lt;br /&gt;
In this talk we will describe the book viewer architecture, its modular design and how to incorporate it in your current environment.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Carrier: Digital Signage System ==&lt;br /&gt;
 &lt;br /&gt;
* [[User:jmspargu|Justin Spargur]], The University of Arizona, spargurj@u.library.arizona.edu&lt;br /&gt;
 &lt;br /&gt;
Carrier is a web-based digital signage application written in JavaScript, PHP, and MySQL that can be used on any device with an internet connection and a web browser. Used across the University of Arizona Libraries' campuses, Carrier can display any web-based content, allowing users to promote new library collections and services via images, web pages, or videos. Users can easily manage the order in which slides are delivered, control how long slides are displayed, set dates for when slides should be shown, and even specify the locations where slides should be presented. &lt;br /&gt;
 &lt;br /&gt;
In addition to marketing purposes, Carrier can be used to send both low and high priority alerts to patrons. Alerts can be sent through the administrative interface, via RSS feeds, and even through a Twitter feed, allowing for easy integration with existing campus emergency notification systems.&lt;br /&gt;
 &lt;br /&gt;
I will describe the technical underpinnings of Carrier, the challenges we’ve faced since its implementation, and the enhancements planned for the next release of the software, and I will discuss our plans for releasing this software for others to use '''for free'''.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== We Built It.  They Came.  Now What? ==&lt;br /&gt;
 &lt;br /&gt;
* [[User:evviva|Evviva Weinraub]], Oregon State University, evviva.weinraub@oregonstate.edu&lt;br /&gt;
 &lt;br /&gt;
You have a great idea for something new or useful.  You build it, put it out there on GitHub, do a couple of presentations, maybe a press release and BAM, suddenly you’ve created a successful Open Source tool that others are using.  Great!&lt;br /&gt;
&lt;br /&gt;
Fast-forward 3 years. &lt;br /&gt;
&lt;br /&gt;
You still believe in the product, but you can no longer be solely responsible for taking care of it.  Just putting it out there has made it a tool others use, but how do you find a community of folks who believe in the product as much as you do and are willing to commit the time and energy to building, sustaining, and moving the project forward?  Or how do you figure out whether you should bother trying at all?&lt;br /&gt;
&lt;br /&gt;
In 2006, OSU Libraries built an Interactive Course Assignment system called Library a la Carte – think LibGuides only Open Source.  We now find ourselves in just this predicament.  &lt;br /&gt;
&lt;br /&gt;
What can we do as a community to move beyond our build-first-ask-questions-later mentality and embed sustainability into our new and existing ideas and products without moving toward commercialization?  I fully expect we’ll end up with more questions than answers, but let’s spend some time talking about our predicament and yours and think about how we can come out the other side. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Contextually Rich Collections Without the Risk: Digital Forensics and Automated Data Triage for Digital Collections ==&lt;br /&gt;
&lt;br /&gt;
* [[User:kamwoods|Kam Woods]], University of North Carolina at Chapel Hill, kamwoods@email.unc.edu&lt;br /&gt;
* Cal Lee, University of North Carolina at Chapel Hill, callee -- at -- ils -- unc -- edu&lt;br /&gt;
* Matthew Kirschenbaum, University of Maryland, mkirschenbaum@gmail.com&lt;br /&gt;
&lt;br /&gt;
Digital libraries and archives are increasingly faced with a significant backlog of unprocessed data along with an accelerating stream of incoming material. These data often arrive from donor organizations, institutions, and individuals on hard drives, optical and magnetic disks, flash memory devices, and even complete hardware (traditional desktop computers and mobile systems). &lt;br /&gt;
&lt;br /&gt;
Information on these devices may be sensitive, obscured by operating system arcana, or require specialized tools and procedures to parse. Furthermore, the sheer volume of materials being handled means that even simple tasks such as providing useful content reports can be impractical (or impossible) in current workflows.&lt;br /&gt;
&lt;br /&gt;
Many of the tasks currently associated with data triage and analysis can be simplified and performed with improved coverage and accuracy through the use of open source digital forensics tools. In this talk we will discuss recent developments in providing digital librarians and archivists with simple, open source tools to accomplish these tasks.  We will discuss tools and methods being tested, developed, and packaged as part of the [http://bitcurator.net BitCurator] project.  These tools can be used to reduce or eliminate laborious, error-prone tasks in existing workflows and put valuable time back into the hands of digital librarians and archivists -- time better used to identify and tackle complex tasks that *cannot* be solved by software.&lt;br /&gt;
&lt;br /&gt;
== Finding Movies with FRBR and Facets ==&lt;br /&gt;
 &lt;br /&gt;
* Kelley McGrath, University of Oregon, kelleym@uoregon.edu&lt;br /&gt;
&lt;br /&gt;
How might the Functional Requirements for Bibliographic Records (FRBR) model and faceted navigation improve access to film and video in libraries? I will describe the design and implementation of a FRBR-inspired prototype discovery interface ([http://blazing-sunset-24.heroku.com/ http://blazing-sunset-24.heroku.com/]) using Solr and Blacklight. This approach demonstrates how FRBR can enable a work-centric view that is focused on the original movie or program while supporting users in selecting an appropriate version.&lt;br /&gt;
&lt;br /&gt;
The prototype features two sets of facets, which independently address two important information needs: (1) &amp;quot;What kind of movie or program do you want to watch?&amp;quot; (e.g., a 1970s TV sitcom, something directed by Kurosawa, or an early German horror film); (2) &amp;quot;How do you want to watch it? Where do you want to get it from?&amp;quot; (e.g., on Blu-ray, with Spanish subtitles, available at the local public library). This structure enables patrons to narrow, broaden and pivot across facet values instead of limiting them to the tree-structured hierarchy common with existing FRBR applications. &lt;br /&gt;
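The two independent facet sets can be sketched as filters applied at different levels of a FRBRized record; the data model below is a simplification invented for the example, not the prototype's actual schema:&lt;br /&gt;

```python
# Simplified FRBRized records: one "work" description, many physical copies.
movies = [
    {"work": {"decade": "1970s", "director": "Kurosawa"},
     "copies": [{"format": "DVD", "subtitles": "spa"},
                {"format": "Blu-ray", "subtitles": "eng"}]},
    {"work": {"decade": "1920s", "director": "Murnau"},
     "copies": [{"format": "DVD", "subtitles": "eng"}]},
]

def facet_filter(items, work_facets, copy_facets):
    """Apply 'what to watch' facets at the work level and 'how to watch it'
    facets at the copy level, independently of each other."""
    results = []
    for movie in items:
        if all(movie["work"].get(k) == v for k, v in work_facets.items()):
            copies = [c for c in movie["copies"]
                      if all(c.get(k) == v for k, v in copy_facets.items())]
            if copies:
                results.append({"work": movie["work"], "copies": copies})
    return results

kurosawa_bluray = facet_filter(movies, {"director": "Kurosawa"}, {"format": "Blu-ray"})
```

Because the two facet groups filter separately, a user can pivot on either axis without being forced down a single work-to-copy hierarchy.&lt;br /&gt;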
&lt;br /&gt;
This type of interface requires controlled data values mapped to FRBR group 1 entities, which in many cases are not available in existing MARC bibliographic records. I will discuss ongoing work using the XC Metadata Services Toolkit ([http://www.extensiblecatalog.org/ http://www.extensiblecatalog.org/]) to extract and normalize data from existing MARC records for videos in order to populate a FRBRized, faceted discovery interface.&lt;br /&gt;
&lt;br /&gt;
==Escaping the Black Box — Building a Platform to Foster Collaborative Innovation==&lt;br /&gt;
&lt;br /&gt;
* Karen Coombs, OCLC, coombsk@oclc.org&lt;br /&gt;
* Kathryn Harnish, OCLC harnishk@oclc.org&lt;br /&gt;
&lt;br /&gt;
Exposed Web services offer an unprecedented opportunity for collaborative innovation — that’s one of the hallmarks of Web-based services like Amazon, Google, and Facebook.  These environments are popular not only for their native feature sets, but also for the array of community-developed apps that can run in them.  The creativity of the development communities that work in these systems brings new value to all types of users.&lt;br /&gt;
&lt;br /&gt;
What if the library community could realize this same level of collaborative innovation around its systems?  What kinds of support would be necessary to transform library systems from “black boxes” to more open, accessible environments in which value is created and multiplied by the user community?&lt;br /&gt;
&lt;br /&gt;
In this session, we’ll discuss the challenges and opportunities OCLC faced in creating just that kind of environment.  The recently released OCLC “cooperative platform” provides improved access to a wide variety of OCLC’s data and services, allowing library developers and other interested partners to collaborate, innovate, and share new solutions with fellow libraries.  We’ll describe the open standards and technologies we’ve put in play as we:&lt;br /&gt;
* exposed robust Web services that provide access to both data and business logic; &lt;br /&gt;
* created an architecture for integrating community-built applications in OCLC (and other) products; and &lt;br /&gt;
* developed an infrastructure to support community development, collaboration, and app sharing&lt;br /&gt;
&lt;br /&gt;
Learn how OCLC is helping to open the “black box” -- and give libraries the freedom to become true partners in the evolution of their library systems.&lt;br /&gt;
&lt;br /&gt;
== Code inheritance; or, The Ghosts of Perls Past  ==&lt;br /&gt;
&lt;br /&gt;
* Jon Gorman, University of Illinois, jtgorman@illinois.edu&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Any organization has a history not found in its archives or museums. Mysteries exist whose origins are lost to collective institutional knowledge.  Despite what humans have forgotten, our servers and computers still keep running. Instructions crafted long ago execute like digital ghosts, following the orders of masters who have long since left.&lt;br /&gt;
&lt;br /&gt;
The University of Illinois has a fair amount of Perl code created by several different developers. This code includes software that handles our data feeds coming both in and out of campus, reports against our Voyager system, some web applications, and more.&lt;br /&gt;
&lt;br /&gt;
I'll touch a little on the historical legacy and why Perl is used. From there I'll share some tips, best practices, and some of the mistakes I've made in trying to maintain this code. Most of the advice will translate to any language, but the code and libraries discussed will be Perl. The presentation will also touch on some internal debate about whether or not to port parts of our Perl codebase.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Recorded Radio/TV broadcasts streamed for library users ==&lt;br /&gt;
&lt;br /&gt;
* Kåre Fiedler Christiansen, The State and University Library Denmark, kfc@statsbiblioteket.dk&lt;br /&gt;
* Mads Villadsen, The State and University Library Denmark, mv@statsbiblioteket.dk&lt;br /&gt;
&lt;br /&gt;
&amp;quot;Provide online access to the Radio/TV collection,&amp;quot; my boss said. About 500,000 hours of Danish broadcast radio and TV. Easy, right? Well, half a year later we'd done it, but it turned out to involve practically every IT employee in the library and quite a few non-technical people as well.&lt;br /&gt;
&lt;br /&gt;
Combining our Fedora-based DOMS repository system with our Lucene-based Summa search system, our WAYF-based single sign-on system, an upgrade of our SAN for enough speed to deliver the content, an ffmpeg-based transcoding workflow system, and a Wowza-based streaming server, and sprinkling it all with a nice user-friendly web frontend, turned out to be quite a challenge, but also one of the most engaging experiences in a long time.&lt;br /&gt;
&lt;br /&gt;
Of course we were immediately shut down, since the legal details weren't quite as clear as we thought they were, but take an exclusive preview at http://developer.statsbiblioteket.dk/kultur/ - username/password: code4lib.&lt;br /&gt;
&lt;br /&gt;
== NoSQL Bibliographic Records: Implementing a Native FRBR Datastore with Redis ==&lt;br /&gt;
* Jeremy Nelson, Colorado College, jeremy.nelson@coloradocollege.edu&lt;br /&gt;
&lt;br /&gt;
In October, the Library of Congress issued a news release, &amp;quot;A Bibliographic Framework for the Digital Age,&amp;quot; outlining a list of requirements for a new Bibliographic Framework Environment. Responding to this challenge, this talk will demonstrate a Redis (http://redis.io) FRBR datastore proof-of-concept that, with a lightweight Python-based interface, can meet these requirements. &lt;br /&gt;
&lt;br /&gt;
Because FRBR is an entity-relationship model, it is easily implemented as key-value data within the primitive data structures provided by Redis.  Redis' flexibility makes it easy to associate arbitrary metadata and vocabularies, like MARC, METS, VRA, or MODS, with FRBR entities and to interoperate with legacy and emerging standards and practices like RDA Vocabularies and Linked Data.&lt;br /&gt;
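A minimal sketch of such a key-value scheme, with a plain Python dict standing in for Redis (redis-py's HSET/HGETALL hash commands would fill the same role); the key naming is an assumption for the example, not necessarily the talk's actual design:&lt;br /&gt;

```python
# A plain dict stands in for Redis here; with redis-py the same scheme
# would use r.hset(key, mapping=fields) and r.hgetall(key).
store = {}

def add_entity(entity_type, entity_id, fields):
    """Store a FRBR entity as a hash under a 'frbr:type:id' key and
    return that key so other entities can reference it."""
    key = "frbr:%s:%s" % (entity_type, entity_id)
    store[key] = dict(fields)
    return key

# FRBR relationships become key references between hashes.
work = add_entity("work", 1, {"title": "Moby Dick"})
expression = add_entity("expression", 1, {"language": "eng", "realizes": work})
manifestation = add_entity("manifestation", 1, {"format": "print", "embodies": expression})
```

Because relationships are just stored keys, traversing work-to-expression-to-manifestation is a chain of constant-time lookups, which is what makes the entity-relationship model a natural fit for a key-value store.&lt;br /&gt;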
&lt;br /&gt;
&lt;br /&gt;
== Upgrading from Catalog to Discovery Environment: A Consortial Approach ==&lt;br /&gt;
 &lt;br /&gt;
* Spencer Lamm, Swarthmore College, slamm1@swarthmore.edu&lt;br /&gt;
* Chelsea Lobdell, Swarthmore College, clobdel1@swarthmore.edu&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Almost two years ago, the Tri-College Consortium of Haverford, Swarthmore, and Bryn Mawr Colleges embarked upon a journey to enhance the end-user experience and discoverability of our library applications. Our solution was to integrate Ex Libris's Primo Central into Villanova's VuFind for a dual-channel searching experience. We present a case study of the collaborative and technical aspects of our process.&lt;br /&gt;
&lt;br /&gt;
At a high level we will describe our approach to project management and decision making.  We used a multi-tiered structure of working groups with an iterative design-feedback implementation cycle.  We will relay lessons learned from our experience: successes, failures, and unexpected hurdles.&lt;br /&gt;
&lt;br /&gt;
At a lower, technical level we will discuss the VuFind search module architecture; the workflow of creating a new search channel; a Primo API parser; and the data structures of the Primo API response and the Primo SearchObject. Time permitting, we will also outline how we modified VuFind's Innovative driver to work with our ILS.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Improving geospatial data access for researchers and students ==&lt;br /&gt;
 &lt;br /&gt;
* Dileshni Jayasinghe, Scholars Portal, University of Toronto, d.jayasinghe@utoronto.ca&lt;br /&gt;
* Sepehr Mavedati, Scholars Portal, University of Toronto, sepehr.mavedati@utoronto.ca&lt;br /&gt;
&lt;br /&gt;
Scholars GeoPortal (http://geo.scholarsportal.info) was created as a platform for online delivery of geospatial data resources to the Ontario Council of University Libraries community. Prior to the start of this project, each institution was storing data locally, and had its own practice for distributing datasets to users. This ranged from home-grown online data delivery systems to burning data on to DVDs for each individual request. Most institutions had limited resources and expertise to create and maintain a sophisticated delivery system on their own. Led by OCUL map and GIS librarians and staff at Scholars Portal, in partnership with the Government of Ontario, the GeoPortal project began in 2009.&lt;br /&gt;
&lt;br /&gt;
Our talk will focus on the design and architecture of Scholars Portal's solution to support maps and geospatial data, and how we distribute these data collections to our users. &lt;br /&gt;
&lt;br /&gt;
The system consists of four main components: a metadata management system, a map server, a spatial database, and a web application.&lt;br /&gt;
&lt;br /&gt;
*Metadata Management: customized metadata editor with data hosted in MarkLogic, providing text and spatial queries&lt;br /&gt;
*Map Server: ArcGIS Server&lt;br /&gt;
*Spatial database: MS SQL Server with spatial extension&lt;br /&gt;
*Web application: JavaScript web application using Dojo and Esri’s JavaScript API&lt;br /&gt;
 &lt;br /&gt;
For other code4libbers who are interested in a similar system, we will also discuss the open source alternatives for each component (GeoNetwork, MapServer, etc.), and challenges and limitations we faced trying to use some of these tools. We'd also like to pick your brains on how we can make this application better. What can we do differently?&lt;br /&gt;
&lt;br /&gt;
== LibX 2.0 ==&lt;br /&gt;
 &lt;br /&gt;
* Godmar Back, Virginia Tech, godmar@gmail.com&lt;br /&gt;
&lt;br /&gt;
We would like to provide the Code4Lib community with an update on what we've accomplished with LibX (which we last presented in 2009) - where we've gone, what our users are thinking, and how both its technology and its adapter community can be included in the code4lib world.&lt;br /&gt;
&lt;br /&gt;
== Introducing the DuraSpace Incubator ==&lt;br /&gt;
&lt;br /&gt;
* Jonathan Markow, DuraSpace, jjmarkow@duraspace.org&lt;br /&gt;
&lt;br /&gt;
DuraSpace is planning to launch a new incubation program for the benefit of open source projects that wish to become part of our organization, in the interest of helping them to become sustainable, community-driven projects and supporting them afterwards with umbrella services that help them to thrive.  From time to time DuraSpace becomes aware of open source software projects in the preservation, archiving, or repository space that are in search of a community “home”.  The motivation might be that the project is simply trying to attract more developers, that it would like to develop a more robust community of users and service providers, that its current organizational sponsorship is in question, or that it would like to take advantage of an existing and compatible organization's best practices and administrative infrastructure rather than create a new one of its own. DuraSpace is now prepared to leverage its resources, experience, and reputation in the community to help these projects become, or continue to be, successful. Projects emerging from incubation will become officially recognized as DuraSpace projects.  This briefing presents highlights of the DuraSpace Incubator and invites questions and feedback from participants.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== In-browser data storage and me ==&lt;br /&gt;
&lt;br /&gt;
* Jason Casden, North Carolina State University Libraries, jason_casden@ncsu.edu&lt;br /&gt;
&lt;br /&gt;
When it comes to storing data in web browsers on a semi-persistent basis, there are several partially-adopted, semi-deprecated, product-specific, or even universally accepted options. These include models such as key-value stores, relational databases, and object stores. I will present some of these options and discuss possible applications of these technologies in library services. In addition to quoting heavily from Mark Pilgrim's excellent chapter on this topic, I will weave in my own experience utilizing in-browser data storage in an iPad-based data collection tool to successfully improve performance and data stability while reducing network dependence. See also: HTML5.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Coding for the past, archiving for the future … and the Salman Rushdie Papers ==&lt;br /&gt;
 &lt;br /&gt;
* Peter Hornsby, Emory University Libraries, phornsb@emory.edu&lt;br /&gt;
&lt;br /&gt;
Cultural heritage production is moving to the digital medium, and libraries' use of repository solutions such as Fedora Commons and DSpace is a solid response to this change. But how do we go from, for instance, a selection of 1990s computing technology to a collection of digital objects ready for ingest into your institution's local repository? Once you have ingested your digital objects, how are you going to provide access to these resources? The arrival of the Salman Rushdie Papers, which contain 10 years of Sir Salman Rushdie's digital life, gave Emory University Libraries the opportunity to explore these questions. I would like to talk about the approach Emory University Libraries adopted, what we learned, and the coding challenges that remain.&lt;br /&gt;
&lt;br /&gt;
==  Indexing big data with Tika, Solr &amp;amp; map-reduce ==&lt;br /&gt;
&lt;br /&gt;
* Scott Fisher, California Digital Library, scott.fisher AT ucop BORK edu&lt;br /&gt;
* Erik Hetzner, California Digital Library, erik.hetzner AT ucop BORK edu&lt;br /&gt;
&lt;br /&gt;
The Web Archiving Service at the California Digital Library has crawled a large amount of data, in every format found on the web: 30 TB, comprising about 600 million fetched URLs. In this talk we will discuss how we parsed this data using Tika and map-reduce, and how we indexed this data with Solr, tweaked the relevance ranking, and were able to provide our users with a better search experience.&lt;br /&gt;
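The map-reduce shape of that parsing step can be sketched as follows, with detected MIME types standing in for full Tika output; this is a simplification of the pattern, not CDL's actual pipeline:&lt;br /&gt;

```python
from collections import defaultdict

def map_phase(records):
    """Emit (mime_type, 1) pairs, the way a Tika-detection map step
    might emit one record per fetched URL."""
    for url, mime_type in records:
        yield (mime_type, 1)

def reduce_phase(pairs):
    """Sum counts per key, as the reduce step would across shards."""
    totals = defaultdict(int)
    for key, count in pairs:
        totals[key] += count
    return dict(totals)

crawl = [("http://a.example/x", "text/html"),
         ("http://a.example/y.pdf", "application/pdf"),
         ("http://b.example/z", "text/html")]
counts = reduce_phase(map_phase(crawl))
```

At 600 million URLs the same map and reduce functions would run distributed over many workers, but the per-record logic stays this simple.&lt;br /&gt;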
&lt;br /&gt;
== ALL TEH METADATAS! or How we use RDF to keep all of the digital object metadata formats thrown at us. ==&lt;br /&gt;
&lt;br /&gt;
* Declan Fleming, University of California, San Diego, dfleming AT ucsd DING edu&lt;br /&gt;
&lt;br /&gt;
What's the right metadata standard to use for a digital repository?  There isn't just one standard that fits documents, videos, newspapers, audio files, local data, etc.  And there is no standard to rule them all.  So what do you do?  At UC San Diego Libraries, we went down a conceptual level and attempted to hold every piece of metadata and give each holding place some context, hopefully in a common namespace.  RDF has proven to be the ideal solution, and allows us to work with MODS, PREMIS, MIX, and just about anything else we've tried.  It also opens up the potential for data re-use and authority control as other metadata owners start thinking about and expressing their data in the same way.  I'll talk about our workflow, in which our Metadata Specialists take metadata from a stew of sources (CSV dumps, spreadsheet data of varying richness, MARC data, and MODS data), normalize it into METS via an assembly plan, and then ingest it into our digital asset management system.  The result is a [http://dl.dropbox.com/u/6923768/Work/DAMS%20object%20rdf%20graph.png beautiful graph] of RDF triples with metadata poised to be expressed as [https://libraries.ucsd.edu/digital/ HTML], RSS, METS, and XML, and opens up linked data possibilities that we are just starting to explore.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== HathiTrust Large Scale Search: Scalability meets Usability ==&lt;br /&gt;
&lt;br /&gt;
* Tom Burton-West, DLPS, University of Michigan Library, tburtonw AT umich edu&lt;br /&gt;
&lt;br /&gt;
[http://www.hathitrust.org/ HathiTrust Large-Scale search] provides full-text search services over nearly 10 million books, using Solr as the back-end.  Our index is around 5-6 TB in size, and each shard contains over 3 billion unique terms due to content in over 400 languages and dirty OCR.&lt;br /&gt;
&lt;br /&gt;
Searching the full text of 10 million books often produces very large result sets.  By conference time, a number of [http://www.hathitrust.org/full-text-search-features-and-analysis features] designed to help users narrow down large result sets and do exploratory searching will either be in production or in preparation for release. In addition to the traditional search trade-off of precision versus recall, there are often trade-offs between implementing desirable user features and keeping response time reasonable.&lt;br /&gt;
&lt;br /&gt;
We will discuss various [http://www.hathitrust.org/blogs/large-scale-search scalability] and usability issues including:&lt;br /&gt;
* Trade-offs between desirable user features and keeping response time reasonable and scalable &lt;br /&gt;
* Our solution for searching across all 10 million books and also within each individual book&lt;br /&gt;
* Migrating the [http://babel.hathitrust.org/cgi/mb personal collection builder application] from a separate Solr instance to an app that uses the same back-end as full-text search&lt;br /&gt;
* Design of a scalable multilingual spelling suggester&lt;br /&gt;
* Providing advanced search features combining MARC metadata with OCR&lt;br /&gt;
** The dismax mm and tie parameters&lt;br /&gt;
** Weighting issues and tuning relevance ranking&lt;br /&gt;
* Displaying only the most &amp;quot;relevant&amp;quot; facets&lt;br /&gt;
* Dirty OCR issues&lt;br /&gt;
* CJK tokenizing and other multilingual issues&lt;br /&gt;
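As one concrete flavor of the dismax tuning mentioned above, here is a toy sketch of how the tie parameter blends per-field scores when a query matches both MARC metadata and OCR text; the field scores and tie values are illustrative, not HathiTrust's production settings.&lt;br /&gt;

```python
def dismax_score(field_scores, tie):
    # dismax scoring: take the best-matching field's score, then let the
    # other fields contribute a tie-controlled fraction of theirs.
    best = max(field_scores)
    return best + tie * (sum(field_scores) - best)

# A query term hits the MARC title field (score 6.0) and the OCR text (2.5):
scores = [6.0, 2.5]
print(dismax_score(scores, tie=0.0))  # 6.0: pure max, best field only
print(dismax_score(scores, tie=0.1))  # 6.25: other fields add 10% of their score
```

A tie of 0 keeps only the single best field, while larger values reward documents that match in both the metadata and the OCR, which is one lever for balancing MARC fields against noisy OCR text.&lt;br /&gt;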
&lt;br /&gt;
&lt;br /&gt;
[[Category: Code4Lib2012]]&lt;/div&gt;</summary>
		<author><name>Tburtonw</name></author>	</entry>

	</feed>