Feel free to email Paul Joseph (code4libBC Chair) at paul.joseph@ubc.ca with questions or comments.
== '''Lightning Talk Proposals and Hackfest/Breakout Suggestions''' ==
Submit them [https://docs.google.com/forms/d/1NVEGsJZvqNLyqxATdYvNonGuPmlDAFOJn-R2vGpIvWg/viewform here]. View submissions [https://docs.google.com/forms/d/1NVEGsJZvqNLyqxATdYvNonGuPmlDAFOJn-R2vGpIvWg/viewanalytics here].

== '''Lightning Talk Proposals''' ==
John Durno, University of Victoria
* Filling up the Internet Archive using their S3-like API. UVic recently uploaded 750 GB of old newspapers and metadata (over 15,000 issues) to the IA via their API, which is based on Amazon's S3, by way of a simple Python script making use of the boto library and a wrapper supplied by one of the IA developers. The API proved surprisingly robust, and I'd like to spread the word.
Peter Tyrrell, Andornot
* Setting up Apache Solr to index and search over multiple source types: database and fielded data, Excel/CSV, scanned magazines and newspapers, PDFs, word processor documents, websites, geolocations, etc. Focus will be on schema and DataImportHandler considerations, plus amusing anecdotes as time allows.
* Another option would be: scripts that parse a PDF into a TIF, JPG, TXT, and positional XML per page via the djvulibre and ImageMagick libraries. Make 'em ready for indexing and flexible display.
* I could maybe go over how to (and how NOT to) represent and display hierarchical (cough, archival) data in an Apache Solr index. Mostly this would be a juicy rant about just how ruddy difficult I found it.
Stefan Khan-Kernahan, The University of British Columbia
* UBC is launching an in-house product for managing course reserves that helps streamline workflows between faculty and library, within library staff (e.g. copyright control), and between library and students. I'd like to present on the modules completed to date and the lessons learned for others.
Marcus Emmanuel Barnes, Simon Fraser University
* Normalizing existing digitized content into standardized packages for robust long-term management. A report on SFU Library's METS-Bagger tool, with a discussion of the benefits, the design principles used for the packaging specification, and potential next steps.
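The IA's S3-like interface mentioned in John Durno's proposal accepts ordinary HTTP PUTs with <code>x-archive-meta-*</code> headers for metadata. A minimal standard-library sketch (the bucket name, file name, and metadata below are invented for illustration, and a real upload needs an IA access/secret key pair; Durno's actual script used boto):

```python
# Hypothetical sketch of a single-file upload to the Internet Archive's
# S3-like API. Building the request is side-effect free; sending it is
# left commented out because it needs real credentials and network access.
import urllib.request

def build_ia_upload_request(bucket, key, data, access_key, secret_key, metadata):
    """Build a PUT request for the IA S3 endpoint.

    Descriptive metadata rides along as x-archive-meta-* headers, which is
    how the IA S3 interface attaches metadata at upload time.
    """
    url = "http://s3.us.archive.org/%s/%s" % (bucket, key)
    req = urllib.request.Request(url, data=data, method="PUT")
    req.add_header("authorization", "LOW %s:%s" % (access_key, secret_key))
    req.add_header("x-archive-auto-make-bucket", "1")  # create the item if needed
    for name, value in metadata.items():
        req.add_header("x-archive-meta-" + name, value)
    return req

req = build_ia_upload_request("uvic-newspapers-demo", "issue-001.pdf",
                              b"%PDF-", "AKEY", "SKEY",
                              {"title": "Demo Issue"})
# To actually send it: urllib.request.urlopen(req)
```

One request per file is all it takes, which is why a short script can move hundreds of gigabytes given enough patience.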
Colleen Bell, University of the Fraser Valley
* I've been using PHP, JSON, and LibGuides widgets to integrate LibGuides content into our ERM and ERM content into our LibGuides. This is particularly useful for libraries using SFU's reSearcher suite, but could provide ideas for anyone, since the code generated by the PHP can be displayed in any web page.
Mark Jordan, Simon Fraser University
* Libraries are realizing the potential of exposing their locally managed content as Linked Data. One type of local data that offers a lot of potential for leveraging Linked Data's capabilities is the controlled subject terms applied to local digital collections. I would like to demonstrate how I've enriched the descriptive metadata of SFU's Editorial Cartoons Collection with URIs from http://id.loc.gov, paying particular attention to those from the Thesaurus for Graphic Materials.
* Explanation and demo of docr/smd, a distributed Optical Character Recognition platform designed to use smartphones and tablets to do the OCR.
May Chan, Burnaby Public Library
* Hackfests for the Uninitiated. For all sorts of reasons, hackfests can be intimidating to first-timers, and especially to those who have little or no programming ability. To encourage those new to this form of collaborative learning, my LT will relate key a-ha! moments from my first hackfest experience, especially some difficult truths learned.
* The Code4Lib Conference Gender and Minority Scholarships. One of the ways Code4Lib supports gender and cultural diversity is to offer conference scholarships to women, transgendered persons, and persons of ethnic or aboriginal descent. As a way to encourage potential BC applicants, this LT will give some nuanced background on the scholarship program and application process.
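The kind of id.loc.gov enrichment Mark Jordan describes can be scripted against LOC's known-label lookup, which resolves an exact heading string to its authority URI. A hedged sketch (the heading is invented, and the exact endpoint behavior, including the <code>X-URI</code> response header and the path for the Thesaurus for Graphic Materials, should be confirmed against id.loc.gov's documentation):

```python
# Hypothetical sketch of building a known-label lookup URL for a subject
# heading on id.loc.gov. Only URL construction is exercised here; the
# network call is shown commented out.
import urllib.parse

LABEL_SERVICE = "http://id.loc.gov/authorities/subjects/label/"

def label_lookup_url(heading):
    """Return the known-label lookup URL for an exact heading string."""
    return LABEL_SERVICE + urllib.parse.quote(heading)

url = label_lookup_url("Editorial cartoons")
# Resolving it would look something like:
# resp = urllib.request.urlopen(url)
# uri = resp.headers.get("X-URI")  # the authority URI, per LOC's service
```

Because the service matches labels exactly, local terms usually need normalization before lookup, which is where most of the enrichment work actually lives.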
Calvin Mah, SFU Library
* SFU Library Hours Database
Sarah Sutherland, Canadian Legal Information Institute
* I would like to discuss the process involved in evaluating the responses to requests for proposals for technology projects. There are often several very good submissions once the basic requirements are met, and at that point it becomes more about the style of the vendor and what kind of project it is. We recently went through this process, so I will use some anonymized examples from our process to illustrate my talk.

== '''Hackfest/Breakout Suggestions''' ==
John Durno, University of Victoria
* Develop an Omeka module that uses the Internet Archive to host video and audio content, essentially using Omeka as the front-end user interface while taking advantage of the IA's media delivery/streaming capabilities. I envision two components: content and metadata would be uploaded via Omeka's admin interface, and the IA's media player would be embedded in the public interface for content delivery.
Stefan Khan-Kernahan, The University of British Columbia
* Building a more engaging digital asset viewer than what is provided by CONTENTdm and its competitors. Current digital asset presentation (e.g. CONTENTdm), whilst providing all the "necessary" information for the user (image + metadata, etc.), simply lacks user engagement. If universities expect to build interest in these collections among current and future students, they need to cater for a more involved experience. I am proposing an image viewer for digital assets that allows tagging/hotspots that trigger supplementary information beyond metadata (e.g. video explanations of areas on maps, how they came to be, etc.).
Karen J. Nelson, Capilano University Library
* Could we have a quickie: 1. FRBR explanation. 2. ditto data exchange. 3. ditto linked data. 4. BIBFRAME. 5. WEMI language.
Jonathan Jacobsen, Andornot
* I'm working on a virtual exhibit project using Omeka right now, so I second the idea of an Omeka breakout session. Would love to connect with some other Omeka users/developers, in particular to discuss the Solr plug-in.
May Chan, Burnaby Public Library, and Mark Jordan, Simon Fraser University
* New bibliographic standards and Linked Data. This breakout session will provide opportunities for participants to explore and experiment with new and emerging models for bibliographic data, such as FRBR, the DCMI Abstract Model, and BIBFRAME, within the context of the Resource Description Framework (RDF) and Linked Data. Practical outcomes of the session will include converting MARC21 data into MARCXML and Dublin Core XML, using the BIBFRAME tool (http://bibframe.org/tools/) to transform MARCXML into BIBFRAME resources, and linking data values used as access points in MARC21 records to URIs from the Library of Congress's Linked Data Service at http://id.loc.gov. Because this breakout will take the approach of supporting self-directed learning in a collaborative environment, participants should prepare for this session by reviewing the following:
** A Quick Intro to Linked Data / Michael Hausenblas
*** Slides: http://www.slideshare.net/mediasemanticweb/quick-linked-data-introduction
*** Video: http://www.youtube.com/watch?v=qMjkI4hJej0
** Linked Open Data: What is it? / Europeana
*** Video: http://vimeo.com/36752317
** 30 Minute Guide to RDF and Linked Data / Ian Davis
*** http://www.slideshare.net/iandavis/30-minute-guide-to-rdf-and-linked-data
** DCMI Abstract Model
*** http://dublincore.org/documents/abstract-model/
** RDA Relationships Overview
*** http://www.rdatoolkit.org/backgroundfiles/RelationshipsOverview_10_9_09.pdf
** Moving Library Metadata Towards Linked Data / Jennifer Bowen
*** http://www.slideshare.net/JenniferBowen/moving-library-metadata-toward-linked-data-opportunities-provided-by-the-extensible-catalog
** BIBFRAME Tutorial / Jeremy Nelson
*** http://tuttdemo.coloradocollege.edu/calcon-2013-session/
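For the MARC21-to-MARCXML step named as a practical outcome of the breakout, a minimal standard-library sketch of building one MARCXML datafield may help participants see what the target format looks like (the record content below is invented; in practice tools like MarcEdit or the pymarc library do this conversion from real MARC21 binary records):

```python
# Hypothetical sketch: construct a MARCXML <datafield> element in the
# MARC21 slim namespace using only the standard library.
import xml.etree.ElementTree as ET

MARCXML_NS = "http://www.loc.gov/MARC21/slim"

def datafield_to_marcxml(tag, ind1, ind2, subfields):
    """Build one MARCXML datafield element from (code, value) pairs."""
    df = ET.Element("{%s}datafield" % MARCXML_NS,
                    {"tag": tag, "ind1": ind1, "ind2": ind2})
    for code, value in subfields:
        sf = ET.SubElement(df, "{%s}subfield" % MARCXML_NS, {"code": code})
        sf.text = value
    return df

field = datafield_to_marcxml("245", "1", "0",
                             [("a", "Editorial cartoons :"),
                              ("b", "a demo record.")])
xml = ET.tostring(field, encoding="unicode")
```

A full record just wraps a leader, controlfields, and a sequence of these datafields in a <code>record</code> element, which is why MARCXML is the usual on-ramp to the BIBFRAME transformation tools.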