C4lMW14 - Code4Lib Journal as epub

Idea came from: https://blogs.aalto.fi/blog/epublishing-with-pandoc/
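For the assembly end of that idea, pandoc can stitch downloaded article HTML into an epub. A minimal sketch (the filenames and title here are placeholders, not part of the hack below):
<nowiki>
pandoc -f html -t epub -o issue1.epub \
  --metadata title="Code4Lib Journal Issue 1" \
  article1.html article2.html
</nowiki>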
 
 
Jon's quick & crazy hack...
get_links.xsl
<nowiki>
<?xml version="1.0"?>
<xsl:stylesheet version="1.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:output method="text" />
  <!-- print each fullTextUrl on its own line -->
  <xsl:template match="fullTextUrl">
    <xsl:value-of select="." />
    <xsl:text>&#10;</xsl:text>
  </xsl:template>
  <!-- suppress all other text nodes -->
  <xsl:template match="text()" />
</xsl:stylesheet>
</nowiki>
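The stylesheet simply prints each fullTextUrl in the feed on its own line and suppresses all other text. For orientation, the feed entries look roughly like this (a simplified, illustrative fragment, not the exact DOAJ markup):
<nowiki>
<record>
  <title>Some Article</title>
  <fullTextUrl format="html">http://journal.code4lib.org/articles/74</fullTextUrl>
</record>
</nowiki>
which yields a plain list of article URLs, one per line, ready for the xargs pipeline below.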
<nowiki>
# fetch the DOAJ feed for issue 1 and save it as toc.xml
wget http://journal.code4lib.org/issues/issue1/feed/doaj
mv doaj toc.xml
# mirror each article page one level deep, converting links for local viewing
xsltproc get_links.xsl toc.xml | xargs -n 1 -i{} wget -r -l 1 --no-parent -k {}
# second pass: grab the images, which live outside the article paths
xsltproc get_links.xsl toc.xml | xargs -n 1 -i{} wget -r -l 1 -A jpg,jpeg,png,gif -k {}
</nowiki>
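The feed URL above is hardcoded to issue 1. To cover every issue, one option is a wrapper loop along these lines (a sketch: the contiguous issue numbering and the upper bound of 25 are assumptions, and wget's -O replaces the mv step):
<nowiki>
for i in $(seq 1 25); do
  # hypothetical per-issue run of the same pipeline
  wget -O toc-$i.xml http://journal.code4lib.org/issues/issue$i/feed/doaj
  xsltproc get_links.xsl toc-$i.xml | xargs -n 1 -i{} wget -r -l 1 --no-parent -k {}
done
</nowiki>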
====== Summary ======

Unfortunately we didn't get a WordPress VM set up in time that would emulate the settings of the journal.code4lib.org site. We looked at a couple of plugins, but all of them looked like they would still require several manual steps (the goal is for every new issue to be released as an epub automatically). Downloading the pages via save-as and converting with Calibre did a decent job, but is awkward. XML-RPC seems to require a username and password, but might be feasible.

The problem with most scraping programs (wget was mainly used, though some sites advocate HTTrack) is:
* the list of links in the left-hand sidebar to other issues
* the images are stored at paths unrelated to the paths the posts are under

So if you scrape a page and restrict the crawl to just that level and lower, you don't get the images, but otherwise you pull in too much. And it's still largely clumsy and not automated.

- Summary added by Jon Gorman after the fact....
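One possible fix for the image problem, not tried at the event: wget's --page-requisites (-p) flag also fetches the images, stylesheets, and other files a page needs to render, even when they live under unrelated paths, which might let a single pass replace the separate image-only crawl above. A sketch:
<nowiki>
xsltproc get_links.xsl toc.xml | xargs -n 1 -i{} wget -r -l 1 --no-parent -p -k {}
</nowiki>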