2012 talks proposals

The deadline for talk submission was ''Sunday, November 20''. (The deadline for 2012 talks proposals is now closed.)
Prepared talks are 20 minutes (including setup and questions), and focus on one or more of the following areas:
== Beyond code: Versioning data with Git and Mercurial. ==
* Charlie Collett, California Digital Library, charlie.collett@ucop.edu
* Martin Haye, California Digital Library, martin.haye@ucop.edu
Mendeley has built the world's largest open database of research, and we've now begun to collect some interesting social metadata around the document metadata. I would like to share with Code4Lib attendees information about using this resource to do things within your application that have previously been impossible for the library community, or in some cases impossible without expensive database subscriptions. One thing that's now possible is to augment catalog search by surfacing information about content usage, allowing people not only to find things matching a query, but also popular things or things read by their colleagues. In addition to augmenting search, you can also use this information to augment discovery. Imagine an online exhibit of artifacts from a newly discovered dig not just linking to papers which discuss the artifact, but linking to really good, interesting papers about the place and the people who made the artifacts. So the big idea is, "How will looking at the literature from a broader perspective than simple citation analysis change how research is done and communicated? How can we build tools that make this process easier and faster?" I can show some examples of applications that have been built using the Mendeley and PLoS APIs to begin to address this question, and I can also present results from Mendeley's developer challenge, which show what kinds of applications researchers are looking for and what kinds of applications people are building, and illustrate some interesting places where the two don't overlap.
Slides from my talk are here: http://db.tt/PMaqFoVw
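As a rough illustration of the kind of search augmentation described above, here is a small sketch that re-ranks catalog results by readership. The readership lookup is a stub with invented counts; a real version would be backed by the Mendeley or PLoS APIs, whose endpoints are not shown here.

<source lang="python">
# Sketch: blend a catalog's text-relevance score with usage data so that
# widely read items rise in the results. All identifiers and counts are
# invented for illustration; readership_count() is a stub standing in
# for a real readership API lookup.
import math
from dataclasses import dataclass

@dataclass
class CatalogRecord:
    identifier: str    # e.g. a DOI
    title: str
    text_score: float  # relevance score from the catalog's search engine

def readership_count(identifier):
    """Stub: how many readers have saved this document (invented data)."""
    fake_counts = {"10.1000/a": 420, "10.1000/b": 12, "10.1000/c": 97}
    return fake_counts.get(identifier, 0)

def rerank(results, usage_weight=0.5):
    """Blend text relevance with log-scaled readership."""
    def blended(record):
        usage = math.log1p(readership_count(record.identifier))
        return record.text_score + usage_weight * usage
    return sorted(results, key=blended, reverse=True)

if __name__ == "__main__":
    hits = [
        CatalogRecord("10.1000/a", "Popular paper", 1.0),
        CatalogRecord("10.1000/b", "Obscure paper", 1.2),
        CatalogRecord("10.1000/c", "Well-read paper", 1.1),
    ]
    for record in rerank(hits):
        print(record.title)
</source>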
== Your UI can make or break the application (to the user, anyway) ==
== Search Engine Relevancy Tuning - A Static Rank Framework for Solr/Lucene ==
* Mike Schultz, Amazon.com (formerly Summon Search Architect), mike.schultz@gmail.com
Solr/Lucene provides a lot of flexibility for adjusting relevancy scoring and improving search results. Roughly speaking, there are two areas of concern: first, a 'dynamic rank' calculation that is a function of the user query and the document's text fields; and second, a 'static rank' that is independent of the query and is generally a function of non-text document metadata. In this talk I will outline an easily understood, hand-tunable static rank system with a minimal number of parameters.
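For illustration, here is a minimal sketch of wiring a static rank into a Solr query alongside normal text relevance. It assumes a local Solr core with a precomputed numeric static_rank field and uses the edismax parser's multiplicative boost; all field and core names and weights are invented, not taken from the system described in the talk.

<source lang="python">
# Sketch: combine dynamic (query-dependent) and static (query-independent)
# ranking in a Solr request. Assumes a local core named "catalog" with
# text fields "title" and "description" and a numeric "static_rank" field
# precomputed from non-text metadata -- all illustrative names.
import requests

SOLR_URL = "http://localhost:8983/solr/catalog/select"

def search(user_query):
    params = {
        "q": user_query,
        "defType": "edismax",
        # Dynamic rank: text relevance of the query against document fields.
        "qf": "title^2.0 description^1.0",
        # Static rank: multiply the text score by a query-independent
        # factor derived from document metadata.
        "boost": "static_rank",
        "fl": "id,title,score",
        "rows": 10,
        "wt": "json",
    }
    response = requests.get(SOLR_URL, params=params)
    response.raise_for_status()
    return response.json()["response"]["docs"]

if __name__ == "__main__":
    for doc in search("open source discovery"):
        print(doc["score"], doc.get("title"))
</source>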
== DMPTool: Guidance and resources to build a data management plan==
* Marisa Strong, California Digital Library, marisa.strong@ucop.edu
* [[User:kamwoods|Kam Woods]], University of North Carolina at Chapel Hill, kamwoods@email.unc.edu
* Cal Lee, University of North Carolina at Chapel Hill, callee -- at -- ils -- unc -- edu
* Matthew Kirschenbaum, University of Maryland, mkirschenbaum@gmail.com
Digital libraries and archives are increasingly faced with a significant backlog of unprocessed data along with an accelerating stream of incoming material. These data often arrive from donor organizations, institutions, and individuals on hard drives, optical and magnetic disks, flash memory devices, and even complete hardware (traditional desktop computers and mobile systems).
== Building a Code4Lib 2012 Conference Mobile App with the Kuali Mobility Framework ==
* Michelle Suranofsky, Lehigh University, michelle dot suranofsky at lehigh dot edu
* Tod Olson, University of Chicago, tod at uchicago dot edu
Hot off the heels of the Kuali Days 2011 Conference, we thought it would be fun to take the newly released Kuali Mobility for Enterprise framework for a test drive by creating a Code4Lib Conference Mobile App.
[http://kuali.org/mobility Kuali Mobility for Enterprise (KME)] is an open source framework for developing and deploying applications that connect mobile devices to an institution's information resources. Applications may be deployed as mobile websites or as installable apps. The KME framework makes heavy use of HTML5, CSS, and JavaScript, and builds on other open source projects like PhoneGap and jQuery Mobile.
We will discuss the mechanics of the Kuali Mobility framework along with our experience using it to create a mobile app for the Code4Lib conference.
== The Golden Road (To Unlimited Devotion): Building a Socially Constructed Archive of Grateful Dead Artifacts ==
* Robin Chandler, University of California (Santa Cruz), chandler [at] ucsc [dot] edu
This talk will discuss the challenges of merging a traditional archive with a socially constructed one. We will also present the first round of development and explain how we're using tools like Omeka, ContentDM, UC3 Merritt, djatoka, Kaltura, Google Maps, and Solr to lay the foundation for a robust and engaging site. Future directions, like the integration/development of better curation tools and what we hope to learn from opening the archive to contributions from a large community of fans, will also be discussed.
 
== Library News - A gathering place for library and tech news, and more ==
 
* Matt Phillips, Harvard Library Innovation Lab, mphillips@law.harvard.edu
 
 
[http://news.librarycloud.org Library News] is a gathering place for people to share and discuss news from the technology and library worlds. Think [http://news.ycombinator.com Hacker News], but for library dorks instead of startup dorks.
 
Library News is more than a news and discussion site: it analyzes submitted links and shares its observations. One example is the exposure of popular blogs: Library News tracks submitted blog entries and tallies them up, creating a list of the most popular blogs in the community. This list is exposed as an HTML document and as an [http://en.wikipedia.org/wiki/OPML OPML] download (the OPML file can be loaded directly into an RSS reader and used as an always up-to-date "starter pack" of popular blogs in the library and tech spaces).
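As a rough sketch of the tallying and OPML export described above (the submitted links and feed URLs below are invented for illustration, and a real implementation would pull them from the Library News database):

<source lang="python">
# Sketch: tally submitted blog entries per blog and emit the most popular
# blogs as an OPML outline an RSS reader can import directly.
from collections import Counter
from urllib.parse import urlparse
from xml.etree import ElementTree as ET

# (link, feed URL) pairs -- invented examples standing in for real submissions.
submitted_links = [
    ("http://example-library-blog.org/post/1", "http://example-library-blog.org/feed"),
    ("http://example-library-blog.org/post/2", "http://example-library-blog.org/feed"),
    ("http://another-blog.net/entry/42", "http://another-blog.net/rss"),
]

# Tally submissions per blog, keyed by hostname.
tally = Counter(urlparse(url).netloc for url, _ in submitted_links)
feeds = {urlparse(url).netloc: feed for url, feed in submitted_links}

# Build the OPML document.
opml = ET.Element("opml", version="2.0")
head = ET.SubElement(opml, "head")
ET.SubElement(head, "title").text = "Popular library/tech blogs"
body = ET.SubElement(opml, "body")
for host, count in tally.most_common(25):
    ET.SubElement(body, "outline", type="rss", text=host,
                  title=host, xmlUrl=feeds[host])

print(ET.tostring(opml, encoding="unicode"))
</source>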
 
 
My rough talk outline:
* Demo Library News
* Present how Library News goes beyond normal discussion sites (the tools that let people explore community-submitted links)
* Discuss where Library News fits with the current library news ecosystem
 
 
Find more information about Library News at the [http://news.librarycloud.org/faq Library News FAQ].
 
== Data-Mining Repository Contents to Auto-populate Scholarly Research Repository Submission Metadata ==
* Mark Diggory, Head of U.S. Operations
 
The existing body of Open Access scholarly research is a well classified and described dataset. However, institutional repositories often have insufficient resources to invest in cataloging and maintaining rich metadata descriptions of contributed content. This is especially true when collections are populated and maintained by non-librarians. A great deal of classifiable detail already exists within the files submitted to scholarly repositories. Using existing open source technologies capable of extracting this information, submitters and repository maintainers can be offered suggested subject classifications and content types for descriptive metadata during submission and update of repository items. This talk will provide an overview of an approach that uses machine learning to auto-populate subject classifications and content types.
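A minimal sketch of this kind of suggestion pipeline, assuming plain text has already been extracted from the submitted files (for example with a tool like Apache Tika) and that existing, well-cataloged items are available as training data. scikit-learn is used purely for illustration and is not necessarily the stack the talk describes.

<source lang="python">
# Sketch: suggest subject classifications for an incoming submission by
# learning from already-cataloged repository items. The training texts
# and subjects are toy examples invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Extracted full text and existing subject headings for cataloged items.
train_texts = [
    "gene expression in drosophila larvae under heat stress",
    "monetary policy and inflation expectations in emerging markets",
    "metadata crosswalks between MARC and Dublin Core records",
]
train_subjects = ["Biology", "Economics", "Library science"]

model = make_pipeline(
    TfidfVectorizer(stop_words="english"),
    LogisticRegression(max_iter=1000),
)
model.fit(train_texts, train_subjects)

def suggest_subjects(extracted_text, top_n=3):
    """Return the top candidate subject classifications for one item."""
    probs = model.predict_proba([extracted_text])[0]
    ranked = sorted(zip(model.classes_, probs), key=lambda p: p[1], reverse=True)
    return ranked[:top_n]

if __name__ == "__main__":
    print(suggest_subjects("a study of RNA interference pathways"))
</source>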
 
== Mining Wikipedia for Book Articles ==
* Paul Deschner, Harvard Library Innovation Lab, deschner@law.harvard.edu
 
Suppose you were developing a browsing tool for library materials and wanted to include Wikipedia articles and categories whenever available -- how would you do it? There is no API or other data service which one can use to get a comprehensive listing of every page in Wikipedia devoted to the discussion of a book.
 
This talk will focus on the tools, workflows, and data sources we have used to approach this problem. Tools and workflows include the use of Infobox ISBNs and other standard identifiers, analysis of Wikipedia categories and category hierarchies, exploitation of article abstracts and titles, and Mechanical Turk resources. Data sources include DBpedia triple stores and Wikimedia XML/SQL dumps. So far, we have harvested around 60,000 book articles. This is an exploration in dealing with open, relatively unstructured Web content, and in aggregating answers to the same question using quite diverse techniques.
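As a rough sketch of one of the data sources mentioned above, here is a query against the public DBpedia SPARQL endpoint for book resources carrying an infobox ISBN. The ontology terms (dbo:Book, dbo:isbn) and the small result limit are assumptions made for illustration, not the project's actual queries.

<source lang="python">
# Sketch: pull book articles out of DBpedia by asking for resources typed
# as books that carry an ISBN derived from their Wikipedia infobox.
# dbo:Book and dbo:isbn are assumptions about DBpedia's modelling.
import requests

DBPEDIA_ENDPOINT = "http://dbpedia.org/sparql"

QUERY = """
PREFIX dbo: <http://dbpedia.org/ontology/>
SELECT ?book ?isbn WHERE {
  ?book a dbo:Book ;
        dbo:isbn ?isbn .
}
LIMIT 100
"""

response = requests.get(
    DBPEDIA_ENDPOINT,
    params={"query": QUERY, "format": "application/sparql-results+json"},
)
response.raise_for_status()

for binding in response.json()["results"]["bindings"]:
    print(binding["isbn"]["value"], binding["book"]["value"])
</source>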
[[Category:Code4Lib2012]]
 
[[Category:Talk Proposals]]