2013 talks proposals
Deadline for talk submission is Friday, November 2 at 5pm PT.
Prepared talks are 20 minutes (including setup and questions), and focus on one or more of the following areas:
- tools (some cool new software, software library or integration platform)
- specs (how to get the most out of some protocols, or proposals for new ones)
- challenges (one or more big problems we should collectively address)
The community will vote on proposals using the criteria of:
- usefulness
- newness
- geekiness
- uniqueness
- awesomeness
Please follow the formatting guidelines:
== Talk Title ==
* Speaker's name, affiliation, and email address
* Second speaker's name, affiliation, and email address, if applicable

Abstract of no more than 500 words.
Modernizing VuFind with Zend Framework 2
- Demian Katz, Villanova University, demian DOT katz AT villanova DOT edu
When setting goals for a new major release of VuFind, adopting an existing web framework was an important decision, made to encourage standardization and avoid reinventing the wheel. Zend Framework 2 was selected as the best balance between the cutting edge (ZF2 was released in 2012) and stability (ZF1 has a long history and many adopters). This talk will examine some of the architecture and features of the new framework and discuss how it has been used to improve the VuFind project.
Did You Really Say That Out Loud? Tools and Techniques for Safe Public WiFi Computing
- Peter Murray, LYRASIS, Peter.Murray@lyrasis.org
Public WiFi networks, even those that have passwords, are nothing more than an old-time party line: whatever you say can be easily heard by anyone nearby. Remember Firesheep? It was an extension to Firefox that demonstrated how easy it was to snag session cookies and impersonate someone else. So what are you sending out over the airwaves, and what techniques are available to prevent eavesdropping? This talk will demonstrate tools and techniques for desktop and mobile operating systems that you should be using right now -- right here at Code4Lib -- to protect your data and your network activity.
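As one concrete flavor of the kind of spot check such tools automate, here is a minimal Python sketch (not from the talk; the host is a placeholder) that opens a verified TLS connection and reports what was actually negotiated. Anything that cannot pass a check like this is traveling in the clear, party-line style.

```python
import socket
import ssl

def inspect_tls(host: str, port: int = 443) -> dict:
    """Open a verified TLS connection and report what was negotiated."""
    context = ssl.create_default_context()  # verifies cert chain and hostname
    with socket.create_connection((host, port), timeout=5) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
            return {
                "protocol": tls.version(),  # e.g. 'TLSv1.3'
                "cipher": tls.cipher()[0],
                "subject": dict(item[0] for item in cert["subject"]),
            }

if __name__ == "__main__":
    print(inspect_tls("example.org"))  # placeholder host
```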
Drupal 8 Preview — Symfony and Twig
- Cary Gordon, The Cherry Hill Company, cgordon@chillco.com
Drupal is a great platform for building web applications. Last year, the core developers decided to adopt the Symfony PHP framework, because it would lay the groundwork for the modernization (and de-PHP4ification) of the Drupal codebase. As I write this, the Symfony ClassLoader and HttpFoundation components are committed to Drupal core, with more likely to follow before the Drupal 8 code freeze.
It seems almost certain that the Twig templating engine will supplant PHPTemplate as the core Drupal template engine. Twig is a powerful, secure theme-building tool that removes PHP from the templating system, resulting in a concise and expressive theme layer.
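Twig's syntax descends from the Jinja templating family, so the "no raw PHP in templates" idea can be sketched with Python's Jinja2 standing in for Twig. This is a loose analogy, not Drupal code:

```python
from jinja2 import Environment  # third-party: pip install Jinja2

# autoescape=True is the "secure by default" behavior Twig brings to theming:
# variables are HTML-escaped unless a template explicitly opts out.
env = Environment(autoescape=True)

template = env.from_string(
    "<h2>{{ node.title }}</h2>\n"
    "<ul>{% for tag in node.tags %}<li>{{ tag }}</li>{% endfor %}</ul>"
)

print(template.render(node={"title": "Hello <script>", "tags": ["drupal", "twig"]}))
# The <script> in the title comes out escaped; no raw PHP (or Python)
# executes inside the template itself.
```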
Symfony and Twig have a common creator, Fabien Potencier, whose overall goal is to rid the world of the excesses of PHP 4.
Neat! But How Do We Do It? - The Real-world Problem of Digitizing Complex Corporate Digital Objects
- Matthew Mariner, University of Colorado Denver, Auraria Library, matthew.mariner@ucdenver.edu
Isn't it neat when you discover that you are the steward of dozens of Sanborn Fire Insurance Maps, hundreds of issues of a city directory, and thousands of photographs of persons in either aforementioned medium? And it's even cooler when you decide, "Let's digitize these together and make them one big awesome project to support public urban history"? Unfortunately, it's a far more difficult process than one imagines at inception and, sadly, it doesn't always come to fruition. My goal here is to discuss the technological (and philosophical) problems librarians and archivists face when trying to create ultra-rich, complex corporate digital projects; or, rather, projects consisting of at least three facets interrelated by theme. I intend to address these problems by suggesting management solutions, web workarounds, and, perhaps, a philosophy that might help in determining whether to move forward at all. Expect a few case studies of "grand ideas crushed by technological limitations" and "projects on the right track" to follow.
ResCarta Tools: Building a Standard Format for Audio Archiving, Discovery, and Display
- John Sarnowski, The ResCarta Foundation, john.sarnowski@rescarta.org
The free ResCarta Toolkit has been used by libraries and archives around the world to host city directories, newspapers, and historic photographs, and by aerospace companies to search and find millions of engineering documents. Now the ResCarta team has released audio additions to the toolkit.
Create full-text-searchable oral histories, news stories, and interviews, or build an archive of lectures, all done to Library of Congress standards. The included transcription editor allows for accurate correction of the data conversion tool's output. Build true archives of text, photos, and audio. A single audio file carries the embedded axml metadata, transcription, and word-location information, and it checks out with FADGI's BWF MetaEdit.
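To make the "single audio file carries everything" point concrete: BWF-style metadata such as an axml chunk lives inside the WAV container alongside the audio, so a generic RIFF walk in a few lines of Python (an illustration, not a ResCarta tool) can show whether a file carries it:

```python
import struct

def riff_chunks(path):
    """Yield (chunk_id, size) for the top-level chunks of a RIFF/WAVE file,
    e.g. fmt, data, bext, or an embedded XML metadata chunk such as axml."""
    with open(path, "rb") as f:
        riff, _size, form = struct.unpack("<4sI4s", f.read(12))
        if riff != b"RIFF" or form != b"WAVE":
            raise ValueError("not a RIFF/WAVE file")
        while True:
            header = f.read(8)
            if len(header) < 8:
                break
            chunk_id, chunk_size = struct.unpack("<4sI", header)
            yield chunk_id.decode("ascii", "replace").strip(), chunk_size
            f.seek(chunk_size + (chunk_size & 1), 1)  # chunks are word-aligned

print(dict(riff_chunks("oral_history.wav")))  # placeholder filename
```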
ResCarta-Web presents your audio to the IE, Chrome, Firefox, Safari, and Opera browsers with full playback and word-search capability. The delivery format is OGG!
You have to see this tool in action: twenty minutes from an audio file to a transcribed, text-searchable website. Be there or be L seven. (Yeah, I'm that old.)
Format Designation in MARC Records: A Trip Down the Rabbit-Hole
- Michael Doran, University of Texas at Arlington, doran@uta.edu
This presentation will use a seemingly simple data point, the "format" of the item being described, to illustrate some of the complexities and challenges inherent in the parsing of MARC records. I will talk about abstract vs. concrete forms; format designation in the Leader, 006, 007, and 008 fixed fields as well as the 245 and 300 variable fields; pseudo-formats; what is mandatory vs. optional with respect to format designation in cataloging practice; and the differences between cataloging theory and practice as observed via format-related data mining of a mid-size academic library collection.
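For a taste of why this is a rabbit hole, here is a deliberately naive Python sketch using pymarc. The mappings are heavily abridged and the shortcut it takes is exactly the kind that breaks down in practice, which is the point of the talk:

```python
from pymarc import MARCReader  # third-party: pip install pymarc

# Abridged Leader/06 "type of record" values; the full table is much longer.
TYPE_OF_RECORD = {
    "a": "language material",
    "e": "cartographic material",
    "g": "projected medium",
    "i": "nonmusical sound recording",
    "j": "musical sound recording",
    "m": "computer file",
}

def naive_format(record):
    """Guess a format from Leader/06-07 and the 245 $h GMD alone,
    ignoring the 006/007/008 and 300 evidence entirely."""
    rec_type = record.leader[6]   # Leader/06: type of record
    bib_level = record.leader[7]  # Leader/07: bibliographic level
    f245 = record["245"]
    gmd = f245["h"] if f245 else None  # pseudo-format hint, when present
    if rec_type == "a" and bib_level == "s":
        return "serial", gmd
    return TYPE_OF_RECORD.get(rec_type, "unknown"), gmd

with open("records.mrc", "rb") as fh:  # placeholder filename
    for record in MARCReader(fh):
        print(naive_format(record))
```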
I understand that most of us go to code4lib to hear about the latest sexy technologies. While MARC isn't sexy, many of the new tools being discussed still need to be populated with data gleaned from MARC records. MARC format designation has ramifications for search and retrieval, limits, and facets, both in the ILS and further downstream in next generation OPACs and web-scale discovery tools. Even veteran library coders will learn something from this session.
Touch Kiosk 2: Piezoelectric Boogaloo
- Andreas Orphanides, North Carolina State University Libraries, akorphan@ncsu.edu
At the NCSU Libraries, we provide realtime access to information on library spaces and services through an interactive touchscreen kiosk in our Learning Commons. In the summer of 2012, two years after its initial deployment, I redeveloped the kiosk application from the ground up, with an entirely new codebase and a completely redesigned user interface. The changes I implemented were designed to remedy previously identified shortcomings in the code and the interface design [1], and to enhance overall stability and performance of the application.
In this presentation I will outline my revision process, highlighting the lessons I learned and the practices I implemented in the course of redevelopment. I will highlight the key features of the HTML/Javascript codebase that allow for increased stability, flexibility, and ease of maintenance; and identify the changes to the user interface that resulted from the usability findings I uncovered in my previous research. Finally, I will compare the usage patterns of the new interface to the analysis of the previous implementation to examine the practical effect of the implemented changes.
I will also provide access to a genericized version of the interface code for others to build their own implementations of similar kiosk applications.
[1] http://journal.code4lib.org/articles/5832
Wayfinding in a Cloud: A Location Service for Libraries
- Petteri Kivimäki, The National Library of Finland, petteri.kivimaki@helsinki.fi
Searching for books in large libraries can be a difficult task for a novice library user. This talk presents the Location Service, a software-as-a-service (SaaS) wayfinding application developed and managed by The National Library of Finland and aimed at all libraries. The service provides additional information and map-based guidance to books and collections by showing their location on a map, and it can be integrated with any library management system, as integration requires only adding a link to the service in the search interface. The service is being developed continuously based on feedback received from users.
The service has two user interfaces: one for customers and one for library staff, who manage the information related to the locations. The customer UI is fully customizable by each library; customization is done via template files using HTML, CSS, and JavaScript/jQuery. The service supports multiple languages, and each library has full control over which languages it wants to support in its environment.
The service is written in Java and uses the Spring and Hibernate frameworks. The data is stored in a PostgreSQL database shared by all the libraries. Libraries do not have direct access to the database; instead, the service offers an interface that makes it possible to retrieve XML data over HTTP. Modification of the data, however, is restricted to the admin UI, and access to other libraries' data is blocked.
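Because the published integration points are just a link in the search interface and XML over HTTP, a client can stay very small. A hypothetical Python sketch follows; the endpoint, query parameters, and element names are invented for illustration, not taken from the service's actual API:

```python
import urllib.parse
import urllib.request
import xml.etree.ElementTree as ET

BASE_URL = "https://locationservice.example.org/lookup"  # hypothetical endpoint

def find_locations(call_number: str, library: str) -> list[dict]:
    """Fetch location records for a call number over the XML/HTTP interface."""
    query = urllib.parse.urlencode({"callno": call_number, "library": library})
    with urllib.request.urlopen(f"{BASE_URL}?{query}", timeout=10) as resp:
        tree = ET.parse(resp)
    return [
        {
            "collection": loc.findtext("collection"),
            "floor": loc.findtext("floor"),
        }
        for loc in tree.iter("location")  # invented element names
    ]

print(find_locations("QA76.9", "examplelib"))
```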
The Metadata Überset: Using a METS Document to Create and Update Fedora Data Streams
- Jennifer Eustis, University of Connecticut Libraries, jennifer.eustis@lib.uconn.edu
The University of Connecticut Libraries is currently building a Fedora digital repository. We investigated the viability of using Islandora to meet our need for an administration module on top of Fedora. As our analysis came to an end, we found that Islandora did not meet all of our needs. On the one hand, Islandora was convenient: it was an existing solution with a robust support community, and it created and updated the Dublin Core (DC) and Metadata Object Description Schema (MODS) data streams seamlessly in Fedora Digital Objects, which mattered because we had decided to use MODS as our normalized standard for descriptive metadata. On the other hand, our content model architecture was more atomistic than Islandora's: our content creators felt that content should be managed in separate Fedora Digital Objects, whereas Islandora managed content as data streams, and they wanted a more tailored management system than Islandora could deliver. Because of these concerns, we decided to build our own repository management system.

That decision entailed rethinking how to replicate Islandora's functionality of creating and updating the DC and MODS data streams. One of our programmer's tasks was to ingest METS (Metadata Encoding and Transmission Standard) documents from Archivematica into Fedora, and together we discussed using METS to create and update data streams. As the metadata librarian on the project, I also wanted a way to update types of metadata beyond description, such as important rights or preservation metadata, which would not live in the same data streams as the DC and/or MODS. Moreover, metadata would not be the same across all of our Fedora Digital Objects; each would need its data streams created and updated with the appropriate metadata. This led to the decision to give every Fedora Digital Object a METS document containing all the metadata needed to create and update its data streams. Because these METS documents covered such a large portion of our available metadata, we began to refer to them as the superset, and eventually the METS überset, document. As a result, I created a METS Profile for the METS überset document, which has just been approved and registered by the Library of Congress. In this presentation, I will describe how the METS überset document came to be and the specifications outlined in its METS Profile, and I will highlight the lessons learned from the process.
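The mechanics of "one METS document drives all the data streams" can be sketched briefly. The Python below is an illustration only: it assumes Fedora 3's REST API, uses a placeholder PID, file name, and base URL, and reuses each MDTYPE as a datastream ID, which the actual UConn implementation may not do:

```python
import urllib.request
import xml.etree.ElementTree as ET

METS_NS = "{http://www.loc.gov/METS/}"
FEDORA = "http://localhost:8080/fedora"  # placeholder Fedora 3 base URL

def wrapped_metadata(mets_path):
    """Yield (MDTYPE, serialized XML) for each mdWrap in the METS document."""
    tree = ET.parse(mets_path)
    for md_wrap in tree.iter(f"{METS_NS}mdWrap"):
        md_type = md_wrap.get("MDTYPE", "OTHER")
        xml_data = md_wrap.find(f"{METS_NS}xmlData")
        if xml_data is None:
            continue
        payload = b"".join(ET.tostring(child) for child in xml_data)
        yield md_type, payload

def push_datastream(pid, dsid, payload):
    """Create or update one datastream via Fedora 3's REST API (assumed here)."""
    url = f"{FEDORA}/objects/{pid}/datastreams/{dsid}?mimeType=text/xml"
    request = urllib.request.Request(url, data=payload, method="POST")
    request.add_header("Content-Type", "text/xml")
    urllib.request.urlopen(request)

for md_type, payload in wrapped_metadata("uberset_mets.xml"):  # placeholder file
    push_datastream("demo:1", md_type, payload)  # e.g. DSIDs "MODS", "DC"
```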
Empowering Collection Owners with Automated Bulk Ingest Tools for DSpace
- Terry Brady, Georgetown University, twb27@georgetown.edu
The Georgetown University Library has developed a number of applications to expedite the process of ingesting content into DSpace.
- Automatically inventory a collection of documents or images to be uploaded
- Generate a spreadsheet for metadata capture based on the inventory
- Generate item-level ingest folders, contents files, and Dublin Core metadata for the items to be ingested (see the sketch below)
- Validate the contents of ingest folders prior to initiating the ingest to DSpace
- Present users with a simple, web-based form to initiate the batch ingest process
The applications have eliminated a number of error-prone steps from the ingest workflow and have significantly reduced tedious data editing. They have empowered content experts to take charge of their own collections.
In this presentation, I will provide a demonstration of the tools that were built and discuss the development process that was followed.
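The folder, contents-file, and Dublin Core generation step maps onto DSpace's Simple Archive Format (SAF). Here is a compressed Python sketch of that idea; the file layout follows the SAF convention, but the helper itself is hypothetical, not Georgetown's code:

```python
from pathlib import Path
from xml.sax.saxutils import escape

def write_saf_item(batch_dir: Path, seq: int, title: str, files: list[Path]):
    """Write one Simple Archive Format item: item_NNN/contents,
    item_NNN/dublin_core.xml, and the bitstreams themselves."""
    item = batch_dir / f"item_{seq:03d}"
    item.mkdir(parents=True, exist_ok=True)
    # contents: one bitstream filename per line
    (item / "contents").write_text("".join(f.name + "\n" for f in files))
    # minimal descriptive metadata; a real batch carries a full spreadsheet row
    (item / "dublin_core.xml").write_text(
        '<dublin_core schema="dc">\n'
        f'  <dcvalue element="title" qualifier="none">{escape(title)}</dcvalue>\n'
        "</dublin_core>\n"
    )
    for f in files:
        (item / f.name).write_bytes(f.read_bytes())

write_saf_item(Path("batch"), 1, "Sample report", [Path("report.pdf")])  # demo
```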