adds abstract support to JSTOR and ProQuest translators
adds abstract support to RIS translator
uses dcterms "abstract" property in Zotero RDF for abstract export
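On the RIS side this is just a matter of mapping the standard "AB" tag; a minimal sketch (the processTag dispatch function and the abstractNote field name are guesses at the translator's internals):

    // minimal sketch, assuming a per-tag dispatch function like the one
    // in the RIS translator; "abstractNote" is a guess at the field name
    function processTag(item, tag, value) {
        switch (tag) {
            case "AB":  // RIS abstract tag
                item.abstractNote = value;
                break;
            // ... other tags unchanged ...
        }
    }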
- closes#351, scrapers with PDF downloads should use downloadAssociatedFiles instead of automaticSnapshots
there are some problems with snapshot titles. see bug #436.
- closes#430, Amazon translator causing utilities.js to throw exception
- officially deprecated Zotero.Utilities.getNodeString() (use doc.evaluate and nodeValue or textContent instead, or access attributes directly; these options take nearly the same amount of code, should be faster, and don't unnecessarily bloat our utilities; see the sketch after this list)
- updated word integration to the latest version
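A hypothetical before/after for the getNodeString() deprecation mentioned above (the XPath expression and variable names are made up for illustration):

    // old (deprecated):
    // var title = Zotero.Utilities.getNodeString(doc, row, './td[2]/a/text()', null);

    // new: call doc.evaluate directly and read textContent
    var node = doc.evaluate('./td[2]/a', row, null,
        XPathResult.FIRST_ORDERED_NODE_TYPE, null).singleNodeValue;
    var title = node ? node.textContent : null;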
His note:
1. adds the conference paper item type (currently only exported to BibTeX as inproceedings)
2. Fixes bug with editor names in BibTeX export
3. Provides more intelligent naming for entities in BibTeX exports. Previously, items would be named something like Wagstrom2006, Wagstrom2006-1, etc. However, I noticed that this ordering could get changed around pretty easily in the export process, resulting in bad references in articles. We can't really be having that now can we? The keys now take the first word of the title, stripping out a few common words. For example, if I had a paper called "Zotero's impact on time to author scholarly papers", it would have a key of "wagstrom_zotero_2006", which is much more stable.
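A rough sketch of the new key scheme (the stop-word list and the normalization here are illustrative, not the translator's actual ones):

    // build an author_firstword_year citekey; stop words are skipped
    var citeKeyStopWords = ["a", "an", "the", "on", "of", "in"];

    function buildCiteKey(item) {
        var author = item.creators[0] ? item.creators[0].lastName : "noauthor";
        var word = "";
        var words = item.title.toLowerCase().split(/\s+/);
        for (var i = 0; i < words.length; i++) {
            var w = words[i].replace(/[^a-z0-9]/g, "");
            if (w && citeKeyStopWords.indexOf(w) == -1) {
                word = w;
                break;
            }
        }
        var year = item.date ? item.date.substr(0, 4) : "";
        return author.toLowerCase() + "_" + word + "_" + year;
    }
    // => roughly "wagstrom_zoteros_2006" for the example above
    // (modulo normalization details in the real translator)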
There was still an editor field bug after Patrick's patch that I corrected, and author and editor fields seem to be handled properly now.
Also addresses #384, option to prevent escaping of curly brackets in BibTeX output
I believe this patch actually now prevents escaping of curly braces by default, however (according to Simon) it should still be based on a pref or option of some kind
Added minVersion and maxVersion values to existing scrapers, setting 1.0.0b3.r1 as the minVersion for any scrapers over 4096 characters; these could theoretically now be added back to the repository without problems, but there's not really much reason to test that theory at the moment
- added CiteBase OpenURL search translator (although CiteBase COinS still won't work, because you can't look most of them up with the CiteBase resolver; ugh)
- fixed Amazon translator type ID (12 -> 4)
(The current repo system is a bit flawed: translators need to be inserted with CURRENT_TIMESTAMP, but scrapers.sql can't be, so scrapers.sql needs to be updated with the repo timestamp after the fact to keep new installs from unnecessarily grabbing the changed scrapers (or the scrapers need to be post-dated to a timestamp after the UTC time of their repository insert, preferably by no more than 24 hours). Suffice it to say, we'll have a more automated solution for this in the future.)
closes#348, OpenURL should use only relevant parts of dates
closes#354, Error saving History Cooperative article
closes#356, Embedded Dublin Core scraper incorrectly saves web pages as item type "book"
closes#355, PubMed translator problem
closes#368, RIS/Endnote export hijack doesn't go into active collection
fixes an issue with quotation marks in bibliographies exported as RTF
fixes an issue with bibliographies and non-English locales
closes#165, verify import/export can carry all data for all fields and item types
closes#168, make sure MODS import works with files from external sources
Including in the DB, which it turns out isn't really all that bad (thanks, among other things, to SQLite's ability to DROP tables within transactions without autocommitting (which MySQL can't do))
closes#313, Blacklist known ad sites from scraper detection
closes#306, some New York Times ads prevent page from being recognized
closes#308, attachment import bug
currently, the ad site blacklist is located at the top of ingester/browser.js. at some point, we may want to switch this to a database table.
Changed "Scholar" to "Zotero", everywhere
Apologies to anyone with working copy changes, but there are probably fewer at this moment than there will ever be again.
Hopefully this won't break anything, though existing prefs will be lost. I avoided scholar.google.com--if you know any other legitimate "scholar"s in the code, be sure to fix them once I'm done here.
This is a multi-commit change--there's at least one more coming. *Do not update to this version! It won't work!*
(The problem with the current system is that any local translators or styles will be wiped out on upgrades (though not auto-updates), but the solution for that is probably to just offer an SQL file that the user can put custom SQL statements in to be run on upgrades (sorta the same idea as user.js in Firefox). Will deal with that at a later date, though.)
1b) However, I also did, in fact, break scraping completely, so my previous statement was actually correct. Fix for that coming right up.
2) Fixed problem with translators table getting wiped out completely whenever system.sql was updated (from r671, I believe). Right. Moved the DROP and CREATE statements for translators into translators.sql.
Closes#304, change references to "website" to "web page"
More changes as per discussions with Dan:
- Linked URLs have been given a second chance at life, though they still shouldn't be used for (most, if any) scrapers (which should use snapshots or the URL field instead)
- Renamed the "website" item type to "webpage"
- Removed "web page" from the New Item menu
- Added Save Link To Current Page toolbar button
- Added toolbar separator between New Item buttons and link/attachment/note to differentiate
- Added limited metadata (URL and accessDate) for attachments
- URL for attachments now stored in itemData (itemAttachments.originalPath is no longer used, but I'm probably not gonna worry about it and just wait for SQLite to support dropping columns with ALTER TABLE) -- getURL() removed in favor of getField('url')
- Snapshots now say "View Snapshot"
- Added Show File button to file attachments to show in filesystem
- Added timed note field to attachments for single notes and adjusted Item.updateNote(), etc. to work with attachments
- Fixed bug with manually bound params in fulltext indexer and Item.save() (execute() vs. executeStep()) -- any recently added items probably aren't in the fulltext index because of this
Known bugs/issues:
- Attachment metadata and notes probably aren't properly imported/exported now (and accessDate definitely isn't)
- Scrapers don't save metadata properly
- Attachment title should be editable
- File attachments could probably use some more metadata (#275, more or less, though they won't be getting tabs)
- closes#217, ability to exclude notes/attachments from select items window
- closes#244, ability to quick search from select items window
- fixes a bug with footnotes in Word integration
- fixes a bug in InnoPAC translator where items would sometimes appear twice
- import translators no longer fail when trying to import an item with no name
- the T2/BT field becomes the publication title when no JO/JF field is available (fixes newspaper issues; see the sketch after this list)
- Y2 is now treated as part of the date if and only if it is improperly formatted (seriously, why can't Thomson get their own specs straight?)
- work around EndNote's strange behavior of putting article titles into notes for no apparent reason
- RIS export gives dates as per specification
- fixed a bug that could have (potentially) caused problems formatting "January"
- allow translators to access strToDate function
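A minimal sketch of the T2/BT fallback mentioned above, assuming the record's tags have already been collected into an object:

    // prefer the journal-name tags; fall back to T2/BT
    // (newspapers generally lack JO/JF)
    function getPublicationTitle(tags) {
        if (tags["JO"]) return tags["JO"];
        if (tags["JF"]) return tags["JF"];
        if (tags["T2"]) return tags["T2"];
        if (tags["BT"]) return tags["BT"];
        return null;
    }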
MODS uses the encoding as specified in the <?xml tag, or else UTF-8
RIS uses IBM850, since the spec says "IBM Extended Character Set" and it's the only code page Mozilla supports. (should I do this? or just use unicode?)
MARC uses UTF-8, since I don't think there's any way to get full MARC-8 support, and UTF-8 is now the preferred encoding anyway
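For the MODS case, the detection amounts to something like this (a sketch; the real import code may read the declaration differently):

    // pull the encoding out of the XML declaration, defaulting to UTF-8
    function sniffXMLEncoding(firstLine) {
        var m = firstLine.match(/<\?xml[^>]+encoding=["']([^"']+)["']/);
        return m ? m[1] : "UTF-8";
    }
    // sniffXMLEncoding('<?xml version="1.0" encoding="ISO-8859-1"?>')
    //     => "ISO-8859-1"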
positions "saving item" window in a slightly better place on Windows
the UMich bug was actually bigger than I thought. as it turns out, the HiddenDOMWindow in Windows is not a chrome window, so i had to modify createHiddenBrowser() to attach the hidden browser object to an existing browser window. i don't believe this should have any adverse effects for snapshots, etc., but Dan, correct me if i'm wrong. it would be nice to be able to create a real chrome instance instead of a XUL element, but all of my attempts at doing so have failed.
i've fixed the Amazon.com bug (i think) and made the translator show a "Could Not Save Item" prompt rather than show an empty list, but if you see any other pages where this happens, let me know
- make EBSCO scraper work better through a proxy
- shorten Accession Number -> Accession No, Journal Abbreviation -> Journal Abbr, Publication Title -> Publication. it does look a bit stranger, but it also makes the interface more functional (especially for those of us without giant widescreen LCDs ;-)
modifies scrapers to use dates in the format that comes out of the page, rather than converting to SQL
adds Scholar.Date.formatDate() to provide a pretty representation of dates
- Scholar.strToDate() accepts a string date and returns an object containing year, month, day, and part (hypothetical usage sketched below)
- capture access date whenever URL is captured
- updated Zotero.dot to use new namespaces
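Hypothetical usage of Scholar.strToDate() and Scholar.Date.formatDate() from above (whether month is 0- or 1-based is an assumption):

    var date = Scholar.strToDate("August 8, 2006");
    // => something like {year: 2006, month: 7, day: 8, part: ""}
    var pretty = Scholar.Date.formatDate(date);  // e.g. "August 8, 2006"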
closes#160, cache regular expressions
closes#188, rewrite MARC handling functions
MARC-based translators should now produce item types besides "book." right now, artwork, film, and manuscript are available. MARC also has codes for various types of audio (speech, music, etc.) and maps.
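the type detection boils down to MARC leader position 06; a sketch covering the types mentioned above (how the remaining codes should map is still an open question):

    // map MARC leader/06 (record type) to an item type
    function itemTypeFromMARC(leader) {
        switch (leader.charAt(6)) {
            case "t": return "manuscript";  // manuscript language material
            case "k": return "artwork";     // 2D nonprojectable graphic
            case "g": return "film";        // projected medium
            default:  return "book";        // language material, etc.
        }
    }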
the EBSCO translator does not yet produce attachments. i sent them an email because their RIS export is invalid (the URLs come after the "end of record" field) and i'm waiting to see if they'll fix it before i try to fix it myself.
the EBSCO translator is unfortunately a bit slow, because it has to make 5 requests in order to get RIS export. the alternative (scraping individual item pages) would be even slower.
regular expression caching can be turned off by disabling extensions.scholar.cacheTranslatorData in about:config. if you leave it on, you'll have to restart Firefox after updating translators.
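the caching itself is conceptually simple; a minimal sketch (names are illustrative):

    // compile each translator's target regexp once and reuse it
    // across detections
    var _regexpCache = {};
    function getTargetRegexp(translatorID, target) {
        if (!_regexpCache[translatorID]) {
            _regexpCache[translatorID] = new RegExp(target);
        }
        return _regexpCache[translatorID];
    }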
import/export of file data should work for all file types _except_ snapshots (in this situation, export is working, but import is not yet complete; see #193)
also, fixes a potential security issue that could have allowed malicious web translators to post local data to remote sites (although, given we maintain the central repository and there's no easy way to install a translator, the risk would have been minimal to begin with).
closes#3, Overflow metadata dumps into "extra" field
add "extra" data where such data is useful and conveniently accessible (not available for XML-based export or MARC formats yet)
add links to permanent URLs
download associated files from full text sources (if the extensions.scholar.downloadAssociatedFiles preference is enabled; see the pref check sketched below)
fix WorldCat translator
improve InnoPAC translator (it now works on Georgetown search results pages, albeit slowly, because it must first realize the catalog is misconfigured)
tag items from SIRSI and WorldCat
return to putting the full lengths of books into "pages," because some citation styles require it
fix COinS (broken a few revisions ago)
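The downloadAssociatedFiles pref gate mentioned above looks something like this (standard Mozilla preferences API; exactly where the check lives is an assumption):

    // read the pref that gates associated-file downloads
    var prefs = Components.classes["@mozilla.org/preferences-service;1"]
        .getService(Components.interfaces.nsIPrefService)
        .getBranch("extensions.scholar.");
    if (prefs.getBoolPref("downloadAssociatedFiles")) {
        // go ahead and grab the full-text file
    }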
closes#186, stop translators from hanging
when a document loads inside a frameset, we now check whether we can scrape each individual frame.
all functions involving tabs have been vastly simplified, because in the process of figuring this out, i discovered Firefox 2's new tab events.
if a translator throws an exception inside loadDocument(), doGet(), doPost(), or processDocuments(), a translate error message will appear, and the translator will not hang
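the guard amounts to wrapping each translator entry point; a sketch (the error-reporting helper is hypothetical):

    // run a translator entry point (doGet, doPost, etc.) without
    // letting an exception hang the translate
    function runTranslatorFunction(func) {
        try {
            func();
        } catch (e) {
            _reportTranslateError(e);  // hypothetical: shows the error message
        }
    }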
closes#180, make all contextual menu export/create bibliography options work right
also:
- add Chicago Note style output
- unregister RDF data sources from cache after import
references #178, changes to various date fields
- updates CSL to work with the latest schema. we can now (almost) generate completely valid APA style. the only issue is that there's no syntax for specifying short forms for page and creator type labels.
- updates scrapers to use date field rather than year field.
- removes now-unnecessary translation engine code pertaining to year field.
Scholar should now attempt to process citation information from EndNote download links (MIME types application/x-endnote-refer and application/x-research-info-systems). in situations where Scholar cannot process the information, a standard helper app dialog will appear. this behavior is controlled by the preference extensions.scholar.parseEndNoteMIMETypes.
closes#76, implement extensible search/retrieval architecture for obtaining metadata
OpenURL COinS lookup is now implemented using a real search architecture system. at the moment, it works with Open WorldCat for books, CrossRef for journal articles (provided the COinS object contains a DOI or an ISSN), and PubMed when a PMID is available.
OpenURL lookup now works for books. this means that all that's necessary to add scrapable book metadata to a page is an ISBN, as shown below:
<span class="Z3988" title="ctx_ver=Z39.88-2004&rft_val_fmt=info:ofi/fmt:kev:mtx:book&rft.isbn=1579550088"></span>
also, we can now scrape Open WorldCat and Wikipedia Book Sources pages with no specialized code involved.
i'm still looking for a better way of looking up journal article metadata. it's currently implemented with CrossRef, but CrossRef simply will not work without a DOI, and is also incomplete (only holds the last name of the first author).
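for reference, unpacking a COinS title attribute like the span example above is just key-encoded-value (KEV) parsing; a sketch:

    // split a Z39.88 KEV string into key/value pairs
    function parseCOinS(title) {
        var fields = {};
        var pairs = title.split("&");
        for (var i = 0; i < pairs.length; i++) {
            var eq = pairs[i].indexOf("=");
            if (eq == -1) continue;
            var key = decodeURIComponent(pairs[i].substr(0, eq));
            var value = decodeURIComponent(
                pairs[i].substr(eq + 1).replace(/\+/g, " "));
            fields[key] = value;
        }
        return fields;
    }
    // parseCOinS("ctx_ver=Z39.88-2004&rft.isbn=1579550088")
    //     => {"ctx_ver": "Z39.88-2004", "rft.isbn": "1579550088"}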
closes#163, make translator API allow creator types besides author
import and export in the multi-ontology RDF format should now work properly. collections, notes, and see also are all preserved. more extensive testing will be necessary later.
closes#4, Make printable version
- moves functions for creating and deleting hidden browser objects to scholar.js (from ingester.js), since these are necessary for printing as well
- allows saving bibliography in HTML or printing bibliography. style support is not yet complete (pending finalization of 0.9 version of CSL specification).
closes#100, migrate ingester to Scholar.Translate
closes#88, migrate scrapers away from RDF
closes#9, pull out LC subject heading tags
references #87, add fromArray() and toArray() methods to item objects
API changes:
all translation (import/export/web) now goes through Scholar.Translate
all Scholar-specific functions in scrapers start with "Scholar." rather than the jumbled up piggy bank un-namespaced confusion
scrapers no longer specify items through RDF (the beginning of an item.fromArray()-like function exists in Scholar.Translate.prototype._itemDone())
scrapers can be any combination of import, export, and web (type is the sum of 1/2/4 respectively; see the sketch after this list)
scrapers now contain functions (doImport, doExport, doWeb) rather than loose code
scrapers can call functions in other scrapers or just call the function to translate itself
export accesses items item-by-item, rather than accepting a huge array of items
MARC functions are now in the MARC import translator, and accessed by the web translators
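The type bitmask from the list above works like this (constant names are illustrative):

    // import = 1, export = 2, web = 4;
    // an import+web translator has type 1 + 4 = 5
    var TRANSLATOR_TYPE_IMPORT = 1;
    var TRANSLATOR_TYPE_EXPORT = 2;
    var TRANSLATOR_TYPE_WEB = 4;
    function isWebTranslator(type) {
        return (type & TRANSLATOR_TYPE_WEB) != 0;  // isWebTranslator(5) => true
    }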
new features:
import now works
rudimentary RDF (unqualified dublin core only), RIS, and MARC import translators are implemented (although they are a little picky with respect to file extensions at the moment)
items appear as they are scraped
MARC import translator pulls out tags, although this seems to slow things down
no icon appears next to the URL when Scholar hasn't detected metadata, since this seemed somewhat confusing
apologies for the size of this diff. i figured if i was going to re-write the API, i might as well do it all at once and get everything working right.
caveats:
- it's not human readable. mozilla doesn't nest blank nodes, so everything's scattered throughout the file. it would be relatively easy to do post-processing with E4X or even regexps to correct this.
- there's no generic callNumber field, so all callNumbers are encoded as LCC.
adds container creation routines to dataMode rdf
changes Dublin Core export to Unqualified Dublin Core, and removes DC Terms qualifiers
adds export of seeAlso info and project hierarchy to RDF. for now, this is embedded in the modsCollection root element.
uses nodeIDs for Dublin Core RDF.
adjusts the Google Books translator to work with the latest revision of the site
renames the MODS translator to just MODS, because "Metadata Object Description Schema (MODS)" was too long for the export dialog
- fixes a bug that could result in the Scrape Progress chrome thingy sticking around forever
- makes chrome thingy disappear when URL changes or when tabs are switched
- changes scrapers table to translators table; all import/export/web translators now belong in this table
- adds Scholar.Translate to handle translation issues. eventually, Scholar.Ingester.Document will become part of this interface (hypothetical usage sketched after this list)
- adds Scholar_File_Interface (in fileInterface.js) to handle UI for export and eventually import. (David, when you have time, please connect Scholar_File_Interface.exportFile to a button.)
- adds an export translator for MODS. all of our metadata, but not our hierarchy (projects, etc.) translates directly and unambiguously into valid MODS. eventually, we can use RDF or another format to handle hierarchy.
- adds utilities.getVersion() and utilities.inArray() for simplified scraper coding
- fixes minor interface issues with the nifty chrome scraping status window
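Hypothetical usage of the new export path, going by the description above (the handler and method names are assumptions, not the confirmed API):

    var translation = new Scholar.Translate("export");
    translation.setTranslator(modsTranslatorID);  // the new MODS translator
    translation.setLocation(file);                // an nsILocalFile to write to
    translation.setHandler("done", function(obj, success) {
        Scholar.debug("export " + (success ? "succeeded" : "failed"));
    });
    translation.translate();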
- Move processDocuments, a function for loading a DOM representation of a document or set of documents, to Scholar.Utilities.HTTP
- Add Scholar.Ingester.ingestURL, a simplified function to scrape a URL (closes#33)