Motley Marginalia

A Digital Scratchpad by Gabriel J. Gardner


Notes from Ex Libris Users of North America 2018

Spokane, WA

5/1/2018

Reception at Gonzaga
Spent most of this time catching up with people from the UMN system.

5/2/2018

Opening keynote

Notable figures: DevDay was the largest ever and filled up in 4 days - clearly big demand for this.
Over 1k attendees this year, bigger than ever.

All the conference planning is done in-house by a volunteer committee of librarians - this is getting harder as the conference gets bigger.

Marshall Breeding

Very objective - does not endorse Ex Libris products

General trend across all academic libraries - more electronic than print collections. Discoverability is increasingly complex and more of it happens outside of the library as people seem to accept what Google Scholar or whatever serves them. Big trend is for libraries to get into data management/research data repositories as universities compete for grant funding and more mandates.

In his own career, he's seen a big separation between the tech needs of academic and public libraries - more specialization in vendor products, fewer products used by both public and academic libraries. He coined "library services platform" to mean a modern ILS with API support, standards support, and interoperability. Modern ILSs with API support are more complex and robust than any previous library software, so they take a big business investment to make. Thus it is no surprise that there aren't many competitors. You need a big up-front investment to program and then to maintain; an open source LSP would be difficult to get off the ground.

He has seen huge consolidation in the industry - bad for libraries when it comes to RFPs, but theoretically each firm has more development support. Seeing more vertical consolidation/integration, e.g. Ex Libris bought by ProQuest. Matti Shem Tov, former CEO of Ex Libris, is now CEO of ProQuest.

Resource management and discovery products are basically mature. New areas are where library software can help their universities support the research and funding agenda.

He is seeing that Alma/Primo seems to be capturing the "high end" of the market - bigger universities. OCLC WorldShare Management is getting the "bottom end," with their growth coming from libraries at smaller institutions.

Library Automation Perception Survey - responses for Alma show good overall satisfaction, but lower ratings for handling of PRINT materials, even below OCLC WorldShare. Discovery is getting harder than ever - publishers report a decline in access coming from link resolvers.

Q&A

  • Will open source have a role in the future? Yes. Koha and similar systems, and Folio, are holding their market share. But they are at a huge disadvantage when it comes to discovery, since there's a lot of economic value in the private closed metadata indexes like Summon/Primo Central Index.
  • Will artificial intelligence help with indexing? To a point. Right now it isn't taking off widely; we are early in the game. Likely more AI indexing in the future.
  • How will library software be used by campuses in the future? We don't know, but we are seeing this trend already. Of course it would be absurd to think that the ILS would be the primary campus software, but with APIs we are seeing more integration of systems across universities than ever.

Ex Libris Exec time

Eric, Ex Libris NA President

Noted that ExL basically never sunsets any product - not sure if he's happy about that or what. Quoting someone else: "in order to change, we must change". Encourages everyone to read the NMC 2017 Horizon Report and the EDUCAUSE 2018 Top IT Issues report.

Bar ExL President

He says they want libraries to succeed: if libraries fail, Ex Libris fails.
Company values: continuous innovation, openness, service excellence, community collaboration

Offering products to put "libraries at the heart of teaching and learning": Leganto, RefWorks, Pivot, Esploro, campusM. Moving towards being a "higher ed cloud platform". ExL doesn't want to do everything; they want to do things well and let their systems work with other vendors. In 2017 they passed the point where more than 50% of all traffic to Alma comes from APIs; now only 40% of Alma interaction is done by staff users.

They are complying with GDPR and have posted their data and privacy policies.

Dvir Hoffman

In 2018 they are planning to make separate “requestable for resource sharing” policies for physical and digital items.

Schlomi

www.exlibrisgroup.com/evaluating-impact-of-discovery-services
Accessibility and UI improvements to Primo are needed and ongoing. Launching Primo Studio to allow drag-and-drop updating of Primo CSS/HTML/JS customizations.
Committed to keeping both Summon and Primo as products. They are duplicating Summon's ability to do "deep search" of newspapers in Primo and PCI.

1:30 Discovery Panel

Consortial environments pose unique problems, mainly of governance and consensus building.
Discovery layers expose bad bib records. PNX norm rules can be used to impact discovery by prepending or appending information to the MARC.
Primo VE eliminates the need to publish from Alma to Primo.

Customization:

New School has customized 'Ares'. Guelph has added, via JS injection, instructions on how to use Boolean operators. Lots of Primo libraries are using StackMap.

2:30 Discovery Panel 2

How do they deal with local content?

Guelph does 'force blending' to boost local content. A Dutch library puts their local content as the default search; expanding to PCI takes more clicks.
Mannheim library: every 5 years they do a survey about library satisfaction. Overall people are satisfied, and they saw an increase in satisfaction as a result of moving to the new UI. Guelph uses GA in Primo to see which areas of the page are being clicked on, coupled with usability testing. People there didn't use TAGS very often.

There are a lot of blind spots in PCI subject availability. Guelph looks at PCI when examining new e-resource purchases; availability there tips the balance in a product's favor.

Primo Analytics:

Mannheim just uses the OOTB analytics. Guelph uses GA and is looking into Hotjar. Lots of Summon libraries use GA.

All these libraries are using the new Resource Recommender. The Primo user community was very upset with the initial move to cloud-hosted Primo, but they've all come around and collaborate more now. The NERS enhancement process has been working well.
All that is lacking for big Primo customizations is knowledge of AngularJS coding and time.

3:45 Alma Analytics Dashboards

Dashboards can be created for all librarians, giving them all the information they need without having to write analytics reports themselves. The problem with using dashboards designed by others is that we are all using Alma differently - OOTB dashboards will need revision.

Possibilities:

  • IPEDS Data
  • Weeding criteria - can export to Excel a list of all matching titles

General pointers:

  • columns are collapsible
  • Analyses need to be created before you make the dashboards
  • Inline prompts should not be used in Dashboards - there is a different ‘dashboard prompt’.
  • the ‘lifecycle’ filter will pull out deleted records - which Alma holds on to
  • To make filters respond to prompts set the operator to ‘is prompted’
  • prompts must be made in the same subject area - cross-subject analyses are not possible
  • just right-click on any column with numerical values and there's a 'show column total' option - you don't need to do that in Excel or write additional calculations in your reports
  • Need the analytics admin role to show dashboards to non-admin Alma users
  • dashboard display is not individualized by user ID, can only be set to display for particular ROLES
  • dashboards are inherently public, can’t control who looks at them - assuming users have permissions

There will be a PDF walkthrough in the conference repository

4:45 Resource discovery in the academic ecosystem

ExL does research regularly on how people use their products. They see wide variation between users and products - we should think about all of this in terms of system "flows" and how traffic goes from one system to another. Students, instructors, and "in-depth researchers" all use the tools differently.

User stories: the default OOTB search results are based on usability testing of the ranking and boosting algorithms. Single sign-on is essential - one that carries the login over across all the sites used.

Some power users have browser plugins, but most people don't. They see some traffic coming from LMSs - so students are working off of reading lists. This requires educating faculty.

Bento boxes seem bad based on what they've heard from users. If you have to teach people to use a system, you're already at a disadvantage compared with things that are much more commercially tested.

ExL takes privacy seriously, complying with GDPR and all that, making any personalization opt-in only and working at aggregate levels. Without making individual user profiles, they can still do content relations - this item is often viewed before/after/with these items. This is better than nothing.

5/3/2018

9:00 Primo Product Update

Yuval Kiselstein

The 'Collections' discovery feature was made with a few use cases in mind, but users have expanded on them.

Newspaper search coming: they've had a lot of requests over the years to separate newspapers from all other content. Newspaper search: a new landing page, featured specific newspapers, highlighted newspapers. Scoped search options. They are adding many more newspaper index coverage options. The normal Primo results will not have newspapers in them if the Newspaper Search scope is turned on; instead there is a link to the newspaper scope in the Primo filters area.

New Open access indicator icon will be rolled out soon.

The current big project is "exposing" Primo data to the web using the schema.org framework. This will allow more linked data possibilities.

General search improvements: adjusted algorithm to favor works by an author over works about an author. AND and & are now treated the same. Planned search improvements: get a handle on book reviews.

Really pushing the use of Syndetics to enrich catalog records. They have internal data showing that users spend more time on each record page when Syndetics is used.

Resource Recommender: a coming improvement will be no longer needing the list of exact-match triggers to get a recommendation to show in search results.

Highlight reference entries in search results - no change to records, just an emphasized display in Primo - at first.

They are planning on making more 'contextual relations' between books and book reviews - this may eventually show up as a link in the full record for a book. This feature is still in research mode, not even alpha development.

The developers are very grateful for the customer organizations and voting - the community decides development areas and coding priorities.

In development:

  • Primo-Zotero integration so that the PNX records can be parsed by Zotero - no need to use the RIS file
  • making new REST APIs.
  • Primo Open Discovery Framework - ExL is trying to work closely with the developer community; they are rolling developer publications into the new…
  • planned 'seamless workflow' between Primo and RefWorks
  • planned closer integration between Leganto and Primo, with a one-click push from Primo into Leganto

Primo Studio is a web tool that lets you customize your Primo and add development community add-ons. The right-hand side is your Primo in a sandbox; the left side is a "playground": themes with a color picker, changing images, uploading packages. Add and implement 'AddOns' from the developer community. No new abilities to customize - but they have made basic customization easier. They are leveraging development work done by libraries and making sharing easier by centralizing it, rather than lots of people writing emails to the listservs. When you move to Primo Studio, you can upload your current package and begin using Studio with your existing configurations.

10:00 How to conduct a usability study

Tennessee Tech wanted to know how students used Primo - so they did a big testing project.

Results:

  • big problem for them was truncated facet labels
  • change ‘scholarly/peer reviewed’ to just ‘peer reviewed’
  • alphabetize the content type facets - big improvement in people using that facet
  • CITATIONS vs. CITATION - in the full record this was confusing: changed CITATIONS to CITED BY or WORKS CITED depending on context
  • changed CITATION to CITE THIS
  • changed SEND TO to TOOLS
  • changed ‘full text available’ to ‘full text options’
  • report a problem - they had student users go through the steps to report a problem and found that their form was way too long; hardly anyone ever filled it out. So they wrote a script to identify and log the submitting URL, IP, OS, and browser information - it gets submitted to the library without the user having to enter it.
  • big confusion about the name of 'ejournals', which was their A-Z journal list.
  • they disabled the TAGS feature because that feature adds to the master metadata index for everyone to see.
  • finding about pinning: students didn't realize they needed to log in for their pins to be saved to their profile, and some lost records. Ex Libris says there is currently no way to force sign-in in order to pin, so they made an Idea Exchange development request and are asking for votes.

Why do usability studies? Because NO LIBRARY STAFF/FACULTY CAN APPROACH A SYSTEM WITH FRESH EYES. They recommended the book Usability Testing for Library Web Sites. Methods: used Morae usability software ($2,000), which records the user's face and eyes, matches up where they look on screen, and shows the mouse trail. Got IRB approval. Very scripted tests with a task list; tasks were scored as 'completed' or not, and time per task was recorded. To avoid collecting identifiable behavior data for the test, all participants used Primo through a dummy account. They tested 15 students, recruited for $5 each, advertised via message alerts on the campus LMS. A moderator worked with the students while an observer watched the data feed in another room.

The audience recommends Silverback and FastStone.

Yair from ProQuest 1:15

Many boring details and buzzwords about the ongoing merger of ExL and PQ. They are merging Ex Libris and ProQuest support to align processes between the two customer groups. There are still 2 Salesforce instances, but they're being brought into harmony for a consistent case-handling experience between ExL and PQ. They had a 40% year-over-year increase in Salesforce cases - mainly due to merging RefWorks into the same Salesforce queue.
About 35% of cases are related to CONTENT, not UI or any bugs.

They've rolled out the Ex Libris Academy: no charge, a video library and quizzes covering all recent Ex Libris products. NOTE: trust.exlibrisgroup.com is the trust center - look into this.
ProQuest is taking data center capacity seriously: they opened 2 brand new Summon instances in North America and are soon moving to 2 instances of PCI, so there will be redundancy - no more PCI outages. They are or will be complying fully with the EU's GDPR and also US FedRAMP regulations.

“Content is not king. Content is the kingdom.” ProQuest is committed to bringing new content to Ex Libris - “The ProQuest effect”: many summon databases now in PCI, many new resource collections in the Alma Community Zone. PQ continues to do ongoing content enrichment, right now focusing on Alexander Street Press.

Ex Libris Management Q&A

Q: why is certain functionality restricted to print and not integrated with electronic?
A: …

Q: why is the Alma knowledge base so far behind in having current content?
A: this is something they’re working on

Q: about GDPR, library would like to anonymize loan data but also allow for opt in
A: anonymization policies must be set at the institution level

Q: question about letting some customers piggyback on others' cases - linking them so that the scope of a problem becomes obvious
A: they are limited by the capabilities of Salesforce. They are tagging cases and linking them together within the confines of what Salesforce will allow

Q: Ex Libris and Serials Solutions continue to be separate in terms of e-resources; what is being done to integrate them?
A: R&D is merged, management is merged; they are getting around to merging systems at the application level

Q: what are the plans for metadata remediation in the community zone?
A: they can’t apply the same authority control to every record because ex libris is a global company - not all customers want to use LCSH.

Q: accessibility - increasingly important
A: the VPAT is posted on the website, and accessibility items can be seen on the development roadmap

Q: some products (Leganto, Esploro) require the use of Alma. Are there plans to bundle any other products into Alma?
A: no. The other products are standalone and have preexisting user communities. No value added from forcing Alma

JSTOR DDA in Alma 3:00PM

DDA plans in the past at UMN: DDA plans take work to set up, but once in place they are set-it-and-forget-it. In 2017 UMN started JSTOR DDA; JSTOR also offers an EBA plan ("evidence-based acquisitions").

JSTOR pros: good content, lots of plan flexibility, and JSTOR promised MARC records supplied via OCLC WorldShare
JSTOR cons: no EDI invoicing, no GOBI communication; this was a new program in 2017, so JSTOR was not very helpful with setup
Rather complicated to get JSTOR and GOBI to work together - there is no direct communication, just lots of autogenerated emails which require a human touch.

JSTOR generates monthly Excel reports. UMN sends a weekly holdings file to GOBI; all the deduplication is basically done by people on a regular basis. Lots of automation via WorldShare Collection Manager: titles drop out of the DDA collection in WorldShare and enter the 'all purchased' collection after the trigger fires. They use a Python script that matches on ISBN to handle invoicing (a sketch of the idea is below). All ISBN matches get output into an Alma import profile; items without matches are put into a separate file and invoiced manually.
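
To make the invoicing step concrete, here is a minimal sketch of that kind of ISBN-matching script (not UMN's actual code, which wasn't shared). It assumes the JSTOR report and an Alma PO line export are CSVs with "isbn" and "po_line" columns; all file and column names are hypothetical.

```python
import csv

# Hypothetical file names and column layouts.
JSTOR_REPORT = "jstor_monthly_report.csv"  # triggered/purchased titles from JSTOR
ALMA_EXPORT = "alma_po_lines.csv"          # PO lines exported from Alma

def normalize(isbn):
    """Strip hyphens and whitespace so ISBNs compare cleanly."""
    return isbn.replace("-", "").strip()

def load_alma_isbns(path):
    """Map normalized ISBNs to Alma PO line numbers."""
    with open(path, newline="", encoding="utf-8") as f:
        return {normalize(row["isbn"]): row["po_line"]
                for row in csv.DictReader(f) if row["isbn"].strip()}

def split_invoice_lines(path, alma_isbns):
    """Separate JSTOR invoice lines into ISBN-matched and unmatched sets."""
    matched, unmatched = [], []
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            po_line = alma_isbns.get(normalize(row["isbn"]))
            if po_line:
                row["po_line"] = po_line
                matched.append(row)
            else:
                unmatched.append(row)
    return matched, unmatched

def write_csv(path, rows):
    with open(path, "w", newline="", encoding="utf-8") as f:
        if rows:
            writer = csv.DictWriter(f, fieldnames=list(rows[0]))
            writer.writeheader()
            writer.writerows(rows)

if __name__ == "__main__":
    matched, unmatched = split_invoice_lines(JSTOR_REPORT, load_alma_isbns(ALMA_EXPORT))
    write_csv("matched_for_import.csv", matched)  # feeds the Alma import profile
    write_csv("invoice_manually.csv", unmatched)  # handled by staff
```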

Q&A
Why did they go with JSTOR? A: content is DRM-free, and the profiles were very configurable.
Have they figured out if it is worth all the manual labor? A: No. GOBI and JSTOR ‘may be working on this’, some beginning communication between the firms has taken place.

Perpetual Access “Problem” 4:00

Two types of perpetual access: “perpetual” i.e. ongoing, and post-cancellation access - access to content you paid for after cancelling a product. If you have a perpetual access clause in the license, how do you keep track of it, how do you enforce it? Also, why aren’t you negotiating for perpetual access in all your contracts?

Current practice for lots of libraries, anecdotally, seems to be assuming that everything will work out fine. There are problems with this:

  • perpetual access may/will cost money:
    • pay hosting fees
    • or one-time fees for them sending you the content on drives
    • or a high-quality download, which you then need to pay to host
  • costs typically increase as the library has to do more, rather than the publisher doing it

Do you really know your PA rights? Where is this information?
Knowing all your PA options helps with purchase decisions, cancellations, moves to storage, purging of print, etc. You can store PA license information in Alma under the Electronic Collections Editor -> general tab. Unfortunately, it doesn't go down to the title level, so more detailed information is needed.
Most libraries don't have PA information recorded in a central location; compiling it would require looking at past invoices, past purchase requests, and holdings in Alma.

UMN plan: look at every license, compile a database (homegrown coding), and update the records in Alma - a sketch of such a tracking table follows below. The other thing to keep in mind here is whether you have ILL rights for electronic content - you need the ability to loan.
There is a NERS request 5242 about developing this in Alma.
Bottom line is that perpetual access requires work at some point. You either do the work ahead of time, or do it at cancellation and possibly mess up because you're under pressure or not used to thinking about this.
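
The session didn't show UMN's homegrown database, but a title-level tracking table might look something like this sketch (SQLite; all column names and the sample row are hypothetical):

```python
import sqlite3

conn = sqlite3.connect("perpetual_access.db")
conn.execute("""
    CREATE TABLE IF NOT EXISTS pa_rights (
        title       TEXT NOT NULL,
        identifier  TEXT,     -- ISSN or ISBN
        platform    TEXT,
        license_ref TEXT,     -- pointer to the license record in Alma
        coverage    TEXT,     -- e.g. '1997-2018'
        pa_type     TEXT,     -- 'hosted', 'drive', 'download', 'none'
        ill_allowed INTEGER,  -- 1 if the license permits ILL of e-content
        notes       TEXT
    )
""")
conn.execute(
    "INSERT INTO pa_rights VALUES (?, ?, ?, ?, ?, ?, ?, ?)",
    ("Journal of Examples", "1234-5678", "ExamplePub", "LIC-042",
     "2001-2017", "hosted", 1, "Hosting fee applies after cancellation"),
)
conn.commit()
conn.close()
```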

Q&A
If we don’t have the old purchase records how can we tell what we’re entitled to? Don’t know - you’d have to work with the vendor for whatever copies they’ll give you.

5/4/2018

Springshare integration 9:00

LibInsight can be integrated with Alma.
The most common integration is piping LibGuides into Summon or Primo.
LibGuides into Alma/Primo 2 ways: OAI-PMH harvesting, or the Resource Recommender. There have been other ways to ingest LibGuides into Alma in the past; don't use those, use OAI-PMH. All LibGuides have Dublin Core metadata. Get the oai_dc URL from the LibGuides data export.
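
For illustration, a minimal OAI-PMH harvest of that oai_dc feed might look like this (the endpoint URL is a placeholder; resumptionToken paging is omitted for brevity):

```python
import urllib.request
import xml.etree.ElementTree as ET

# Placeholder endpoint: use the real oai_dc URL from your LibGuides data export.
URL = "https://example.libguides.com/oai.php?verb=ListRecords&metadataPrefix=oai_dc"

with urllib.request.urlopen(URL) as resp:
    tree = ET.parse(resp)

NS = {
    "oai": "http://www.openarchives.org/OAI/2.0/",
    "dc": "http://purl.org/dc/elements/1.1/",
}
# Each record carries Dublin Core fields like dc:title and dc:identifier.
for record in tree.iterfind(".//oai:record", NS):
    title = record.find(".//dc:title", NS)
    ident = record.find(".//dc:identifier", NS)
    if title is not None and ident is not None:
        print(title.text, "->", ident.text)
```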

Import:

  • PBO (Primo Back Office) with OAI harvester - 3 pipes
  • also need to add guides as a facet type
  • see slides for details

Also there is a Springy techblog post about getting LibChat into Primo - compare methods. Colorado School of Mines has the chat widget inserted into Primo.
Resource Recommender: can do most of the work in Excel and then just upload. Ex Libris is making improvements to Primo resource recommendations, so it might be better to just go this way.

E-reserves: can get an OAI link/data for e-reserves, all pushed manually; wouldn't need to make MARC records for the course links. If we get LibGuides CMS, we can add additional Dublin Core fields to the e-reserves records.

Can pull in basically any LibApps data for display somewhere else using the API or OAI-PMH.

  • LibAnswers has an API and so can be integrated into Alma; pipe the FAQs into Primo
  • LibCal has an API that can talk to Alma - can handle equipment checkouts via LibCal and update Alma patron records
  • Can add Primo data into LibGuides searches - under Admin -> Search Source.

ETSU is using APIs and LibGuides CMS to make a pre-search bento box landing page; they show their new books on the LibGuides page, updated regularly using the Alma Analytics API (a sketch below). They are also managing their LibGuides A-Z list in Alma, then exporting it, transforming it via a script, and manually uploading it into LibGuides. ETSU also reverse-engineered the way Primo gets book covers and has a script that uses that method for free to pull in images for their new-books display.
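
As a sketch of that last integration, the Alma Analytics REST API can return the rows of a saved analysis; something like the following could feed a new-books display (the API key and report path are placeholders, and your regional API host may differ):

```python
import urllib.parse
import urllib.request
import xml.etree.ElementTree as ET

API_KEY = "l7xx-placeholder"                  # hypothetical Alma API key
REPORT = "/shared/Your University/New Books"  # hypothetical analysis path

params = urllib.parse.urlencode({"path": REPORT, "limit": 25, "apikey": API_KEY})
url = "https://api-na.hosted.exlibrisgroup.com/almaws/v1/analytics/reports?" + params

with urllib.request.urlopen(url) as resp:
    tree = ET.parse(resp)

# Analytics rows come back in an OBIEE rowset namespace; Column0, Column1, ...
# map to the columns of the underlying analysis.
NS = {"r": "urn:schemas-microsoft-com:xml-analysis:rowset"}
for row in tree.iterfind(".//r:Row", NS):
    print([cell.text for cell in row])
```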

City College London is injecting FAQs onto certain screens of the Primo v.1 interface.

Northeastern is piping all their ‘report a problems’ into libanswers.

9:55 Intermediate analytics

Ex Libris vocabulary is often counterintuitive - you need to learn the vocabulary.
'Yellow boxes' are numbers that you can do math with; other boxes may require transformation before math can be done.
Prompt tips: search criteria are often super strict - exact match as typed, date and time down to the second, etc. So review the documentation if anything seems fishy. Filter options are what you can use to exclude things from a report. 'Edit formula' - create bins: this lets you organize values and will make a 'none of the above' box which can catch any errors or anomalies.

Can do all sorts of stuff in Analytics without coding or scripting; you just have to learn the system. Use the CAST function to do math with numbers in Analytics that are stored as text. Getting Analytics data into Alma has gotten much easier since the May Alma release - you just need to specify the file path of the analytics report that you want the data drawn from.

"How do I do X in OBIEE (version number)" - this is what you need to Google for extra help; you can't get results if you Google "how do I do X in Alma Analytics". There is a help button in Alma Analytics, but you need to have a very clear question in order to use it and understand the answer.

11:00 Facilitating Faceting

There are lots of OOTB facets in Primo and they work well, but they don't get very granular. Most librarians want to distinguish between books/ebooks, DVDs/streaming, audio/streaming. The "configuring resource types in Primo" case study from Ex Libris has clear directions - email Matthew Warmock (Ex Libris employee) for a copy.

Can make more norm rules that act on any MARC indicator and transform the PNX to use any additional facets you want to come up with, based on your mapping tables. It is very difficult to make a material type facet for ebooks, because there are multiple ways to express this in MARC. Suggestion: MARC 008 position 23. Fortunately Ex Libris has already filtered books out, so the norm rule that looks at 008/23 can build on that. (A sketch of the underlying logic follows.)
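
The actual change is made in Primo Back Office norm rule syntax, but the underlying logic is simple; here is a Python/pymarc sketch of the 008/23 test (file name hypothetical):

```python
from pymarc import MARCReader  # pip install pymarc

# In book records, 008 position 23 is "form of item"; 'o' (online) and
# 's' (electronic) indicate an e-resource. This only sketches the logic --
# it is not Primo norm rule syntax.
with open("records.mrc", "rb") as fh:
    for record in MARCReader(fh):
        if record is None:  # MARCReader yields None for unreadable records
            continue
        fields = record.get_fields("008")
        data = fields[0].data if fields else ""
        form = data[23] if len(data) > 23 else " "
        facet = "ebook" if form in ("o", "s") else "book"
        titles = record.get_fields("245")
        subfields = titles[0].get_subfields("a") if titles else []
        print(subfields[0] if subfields else "(no title)", "->", facet)
```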

They saw problems with their streaming content being grouped under the 'Other' material type; they had to roll back some changes. A lot of things are possible - it really depends on how standardized and high-quality the underlying MARC records are.

11:55 - Understanding User Behavior on Zero results searches

Their university switched from classic Primo to the new UI, doing usability testing along the way, and as they moved to the new Primo they saw a big drop in the number of zero-result searches in the new UI. Why? It wasn't because the total number of searches declined, so it was related to UI improvements or a change in user behavior.

They made a couple of ad hoc guesses about why, but nothing really made sense. So they ran all the queries through the Primo API and compared differences between new and classic Primo (a sketch of the replay approach is below). Caveats: there were PCI changes over the time period, so the comparison is not scientific, and collections not from PCI have changed too (weeding, new books, etc.). What did the data show? Basically 5 categories: user error, true zero, no full text, new records added, and Primo error.
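
A sketch of what that replay might look like against the Primo Search API - the host, view, tab, scope, and key here are placeholders for an institution's real values, and "info"/"total" is how the JSON response reports hit counts:

```python
import json
import urllib.parse
import urllib.request

API_KEY = "l7xx-placeholder"  # hypothetical Primo API key
BASE = "https://api-na.hosted.exlibrisgroup.com/primo/v1/search"

def hit_count(query):
    """Return the number of hits Primo reports for a saved query string."""
    params = urllib.parse.urlencode({
        "vid": "MYVIEW", "tab": "default_tab", "scope": "everything",
        "q": "any,contains," + query, "apikey": API_KEY,
    })
    with urllib.request.urlopen(BASE + "?" + params) as resp:
        return json.load(resp)["info"]["total"]

# Replay the logged zero-result queries and see which still return nothing.
with open("zero_result_queries.txt", encoding="utf-8") as f:
    for line in f:
        query = line.strip()
        if query:
            print(query, "->", hit_count(query), "hits now")
```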

They used the online-utility.org text analyzer to look at the query strings of the zero-results queries. Still no obvious reason jumped out, though the biggest category increase was "Primo error". They looked at other libraries' experiences via data sharing on zero results and saw similar patterns. They noted that any time they saw the "did you intend to search for" message, the query was not counted as zero results in Primo analytics.

Answer: they found a bug in Primo/Primo analytics - apparently many libraries will have had inaccurate zero-results counts until Ex Libris fixes this.

1:30 Greg Bem potential for equity in Primo

Lots of context about his institution and student population. We need to be clear who our stakeholders are and how they have a voice in our services.
This guy is pretty pretentious…

How do you collect feedback on the catalog? It should be clear and low-barrier - people should be invited to offer feedback on ANY/EVERY aspect of the catalog; the more we can hear from them, the more information we have about their needs. Absolutely must do an accessibility audit of Primo: OOTB it is passable, but there is still some room for improvement. He suggests reaching out to wider campus efforts, since it is hard to master this if it isn't your full-time job.

ExLibris VPAT - voluntary product accessibility template

Look at your institutional diversity policies. How can you as a library fit in? Be sure to use this angle in your evaluation reports/documentation.

Q&A

  • Has he changed Primo as a result of equity issues?
    Not yet. They are currently reviewing accessibility.

Lauren Pressley Keynote

Libraries are liminal spaces. They are thresholds where transformation takes place. This liminality is greater now than in the past because of the societal and financial pressures on higher education. Liminal in the field as well: debates over value/diversity/neutrality.

Frameworks for change:

  • Bolman & Deal’s four frames
  • StrengthsFinder and appreciative inquiry
  • liberating structures, focused on bringing in outside/marginalized voices
  • John Kotter's 8 Steps of Change
  • William Bridges’ transition model - in a responsive frame

The clear library application? “Culture eats strategy for breakfast.” Keep a marathon perspective - you almost never get to run a sprint.
Change is inherently risky, so we need managers who create a safe feeling among staff so they can change better. Managers need to create space for people to fail gracefully.

Look into using RACI matrix

She recommends reading primarily outside of the library literature LOL.

Closing business meeting

Some product user groups have declined in membership and will be merged.
Slides are on the Sched site and will eventually be in the ELUNA document repository in about 10 days; Ex Libris slides will be up in the Knowledge Center.

Got a lot of feedback about the Schaumburg experience - people didn't like the venue. Next ELUNA is in Atlanta.

GABRIEL’S NOTES FROM SSP 2017

General Points

From Wednesday, May 31 to Friday, June 2, I attended the 39th Annual Meeting of the Society for Scholarly Publishing in Boston, MA. I traveled there to present the findings of my co-authored research on the piracy of scholarly materials. I did so as part of a panel that also included the COO of Reprints Desk (a document delivery company) and the CEO of Atypon (which offers various SaaS platforms to publishers). Below are my session notes, which are of varying length depending upon how informative I found each session.

I found the whole affair fascinating. I was one of about 15 librarians at the whole event, which had almost 1000 attendees and exhibitors. Conversations were very business- and product-focused; panels mainly consisted of competitors talking about their internal research and how method or technology X might solve problem Y. You might think that would make for a competitive environment, but everyone was very cordial and professional. The entire place was filled with competent and intelligent people trying to deliver good products (so that they can make as much money as possible off us). I spent most of my time eavesdropping. Keep your eyes out for more uses of 'machine learning'/artificial intelligence in the future; there were lots of people thinking about how these tools can be used in publishing and libraries. The keynotes were very good, and both incredibly pessimistic. Of particular interest to you might be my notes from the plenary New and Noteworthy Products session, which made it clear that publishers and their B2B vendors want to collect as much information as they possibly can about users of their services in order to monetize it - a very worrisome development from our perspective. Also of note is the RA21 initiative, which is a plot by the major publishers to kill IP authentication and systems like EZproxy. (Elsevier is the initiator of the initiative; that was made absolutely clear in the verbal remarks of the session. It is how they plan to end mass piracy, since the large pirate sites all rely on compromised EZproxy servers. Yet the RA21 website shows a diverse group of actors; draw your own conclusions as to how much work those others are doing.)

If you are interested, find my (occasionally cryptic) notes below with links to the detailed session descriptions and let me know if you have any questions. To my knowledge, presenter slides and content will only be available to SSP members, of which I am not one, so my notes may be as good as it gets.

WEDNESDAY, MAY 31

PREDATORS, “PIRATES” AND PRIVACY: PUBLISHERS AND LIBRARIANS EXPLORE NEW CHALLENGES IN SCHOLARLY COMMUNICATIONS

8:30 – 10:00AM

https://www.sspnet.org/events/annual-meeting-2017/2017-schedule/seminar-2/

Intro remarks from Todd Carpenter, NISO

  • Trust is essential in scholarly publishing

Hillary Corbett

  • Not all predators are equally bad; there are minor and major infractions
  • But most blacklisting is binary, is it possible to capture the shades of gray? Should we for our faculty?
  • Low quality is not necessarily predatory
  • Predators are getting more sophisticated, there are “spoofed” journals
  • Librarians need to educate their faculty but particularly students about predatory/low quality journals because many of them are discoverable via Google and even Google Scholar

Kathleen Berryman, Cabell’s – they are taking up the work Beall did

  • Cabell's is taking the approach that "any misrepresentation" will de-whitelist a journal
  • How is the whitelist different from Beall's list?
  • They are working off all "objective" metrics. Any infraction moves a journal from the whitelist to the blacklist; the blacklist tallies the number of infractions
  • They don't think this will solve the problems; awareness needs to be raised, and perhaps penalizing methods developed

Rachel Burley, SpringerNature

  • Some spoofed journals are almost replicating known legitimate journals right down to the same HTML/CSS templates and published version formatting
  • Springer Nature is redesigning some of theirs to highlight all their professional and scholarly association affiliations, to bring out the trust factor
  • Promoting the "Think, Check, Submit" meme

David, OUP

  • We think that the authors are the “prey” in the predators metaphor
  • But maybe the authors are complicit? Then the prey are the readers or the tenure and promotion committees
  • A recent study showed that 75 percent of authors in predatory journals are from Asia or India.
  • Time is researchers’ most precious commodity, if their oversight committees do not care about predatory journals, why wouldn’t they publish there?
  • Conspiracy theorists love low-quality journals
  • Universities need to get on board. In India, the University Grants Commission put out a whitelist that everyone will need to use to get 'credit'.
  • Q&A Isn’t the price of APCs too high? Can’t blame it all on bad motives, many researchers don’t have funds.

Michael Clark and Jason Price

  • Looked at random sample of 2015 articles indexed in Scopus
  • Searched for them in several places, how many of them could you get to the full text in 3 clicks from where you started searching?
  • (Gold, green, “rogue”, pirated)
  • Rogue, i.e. on ResearchGate or academia.edu, or otherwise indexed via GS but not on the publisher's website
  • Pirated, i.e. Sci-Hub
  • Including rogue methods, 55 percent were available
  • They walked through various search workflows to see which one gets the most content the fastest: Sci-Hub was best, followed by Google Scholar; not worth it to start a search via one of the rogue OA sites
  • Next project: looking at availability of OA materials in library discovery systems
  • Q&A: an employee from Annual Reviews talked about how they implemented "subscriber links" in Google Scholar, in collaboration with Google, to make legal access easier. (Need to look into this.)

Craig, Silverchair

  • How to be a pirate: mass downloading, data breaches, uploading of copyright-protected content, password sharing
  • Why did we get Sci-Hub? Pricing, availability, convenience?
  • The Sci-Hub "problem" often falls below abuse monitoring thresholds, so it is impossible to stop through traditional methods
  • What else? Takedowns, but they should not be automated too much
  • Improving security of EZproxy, requiring HTTPS, OCLC software improvements

Daniel Ayala, ProQuest

  • Some data will always be collected, this doesn’t necessarily mean privacy violations
  • Lots of researchers are very suspicious about data collection, as are librarians
  • Security causes access friction, publishers may want to move beyond IP authentication, if they do so, that enables even more individual data collection
  • Publishers need to be transparent about what they’re doing with the data they have
  • NISO has a framework for privacy
  • Single Sign On is increasing, this is good for both convenience and enabling data collection
  • RA21
  • Maybe blockchain could be an authentication method

Peter Katz, Elsevier

  • Elsevier has constant bot monitoring
  • Now citation managers are allowing people to do bulk downloads for a corpus of citations; this gets flagged a lot as malicious. Elsevier wants people to download them individually… or use their API
  • Chinese ISPs, via VPNs, are the main source of Elsevier content theft now.
  • Some sites now sell login credentials, phished
  • Sharing SSO passwords is very dangerous (or getting phished) because it gives access to everything
  • The universities where Elsevier consistently sees bulk content theft have lax security. The ones with robust security see very little activity that turns out to be malicious.
  • Many publishers are developing bots/algorithms that can “hitch a ride on the search URL” and ID sci-hub traffic (which is below the bulk flagging threshold).
  • Q&A
    • Piracy is a symptom of a failing access or customer service or user experience.
    • 2 factor authentication, if you can force it on people, will solve almost all these “problems”
    • Does Elsevier think they’ll be able to take down Sci-hub? Rep says not within the next 3 years.

12:30 – 1:30PM

https://www.sspnet.org/events/annual-meeting-2017/2017-schedule/sponsored-session-yewno-unearthing-hidden-value/

  • MIT press is using Yewno
  • Yewno is based on complex systems and linguistic analysis. (Originally developed for "econophysics" purposes)
  • It is an inference engine
  • "Completely unbiased" - LOL, eyeroll
  • They formed in 2014 and use AI to provide a topical hierarchical map of any document
  • Can ingest any machine readable document, no additional metadata encoding required
  • Provides sub chapter topic breakdowns
  • The concept map can be set to include all the metadata Yewno knows about or only the stuff you have in your catalog
  • MIT press is using this for better discoverability and to determine which books to put in which packages.
  • They have developed many use cases and tested them
  • This tool would be great for collection development
  • Yewno contains comparison features and longitudinal information as well, if the corpus is historical
  • The AI does not use a dictionary!!!!!! Instead it looks at the context in the sentence and groups of sentences. Wow.
  • They have a German version, no plans to go further now.
  • Their main market is B2B, keywords for ecommerce, advertising, publishers. But some libraries are interested in using it in catalogs.

1:30 – 2:00PM

https://www.sspnet.org/events/annual-meeting-2017/2017-schedule/sponsored-session-sheridan-out-with-the-old-and-in-with-the-new/

  • PDF was a revolutionary file format for publishers
  • Historically, PDFs were much more downloaded and viewed than HTML
  • Sheridan says HTML is the future! … "the PDF is dead, long live the PDF!"
  • Now they want to do XML-first workflows: start the outline in XML, port to HTML, and all edits then take place in a WYSIWYG HTML editor that creates XML and can export to PDF once the final version is ready. Called ArticleExpress.

Camille

  • Talked about the Rockefeller press workflows. They used to use Word and InDesign; now ArticleExpress. Big time savings - on average 48 hours per article. The Word-to-InDesign conversions also allowed introduction of errors.
  • Lots of workflow details. Publishing is complicated!
  • Cons of ArticleExpress: old browsers and Safari don't work. Staff training time.

Kate, AAP

  • The American Academy of Pediatrics is using the ProofExpress feature (a sub-feature of ArticleExpress)
  • ProofExpress keeps all the edits in one place, so no collating of edits and no emailing a bunch of PDFs back and forth. Streamlined process. Cost savings of $23k per year.

Gary

  • Very complicated material about XML…
  • Sheridan also has an authoring product, Author Cafe, to cut Word out of the picture entirely. All work is done in the same cloud environment from start to finish. Huge time savings if users (submitting authors) use it.
  • Q&A: the ePub working group has been trying to kill the PDF for years. The next revision of the file format will allow for seamless HTML rendering, so this will tie together nicely.

2:30 – 3:30PM

https://www.sspnet.org/events/annual-meeting-2017/2017-schedule/sponsored-session-editage-helping-publishers-get-closer-to-authors/

  • Editage company spiel
  • They ran an industry survey, distributed in 5 languages, this is a sneak peek
  • For many geographic areas it is not just publish or perish, but English or perish.
  • These communities need help fitting into the western scholcomm ecosystem.
  • Presenting preliminary results
  • Biggest struggle is "manuscript preparation", then "responding to reviewer comments"
  • Most authors don't see ethics as a problem, but a lot of journal editors do. Big gap here, suggesting authors don't understand traditional scholcomm ethical norms.
  • When seeking help, most authors just use Google, not the help pages on journal websites… LOL
  • Top 3 priority for authors when selecting journal: high impact factor, have published similar papers, short time to publication
  • Most authors don’t think instructions for authors are “clear and complete”. Need to usability test these.
  • 34 percent of authors understand the benefits of OA (increased access, citation advantage)
  • When authors contact journals, the majority of the time they were not totally satisfied with the replies.
  • Asian cultures have big barriers to confronting authorities; journals need to reduce barriers to deal with this.
  • Questions about predatory publishers give results showing that authors are aware of the problem - education efforts are working.
  • However, lots of confusion about who should be listed as coauthors.
  • What do authors suggest for improvement in publishing? Time to publication, the peer review process, and reducing bias ("perceived anti-Asian bias" in the subjective comment answers)
  • Key takeaways: there is a lot that is broken. Publishers can't fix it all, but they can improve directions, improve communication, and reduce time to publication. Big pipeline problem: PhD programs are not preparing grads for the nuts and bolts of how papers get written and published.

KEYNOTE: PAULA STEPHAN

4:00 – 5:00PM

https://www.sspnet.org/events/annual-meeting-2017/2017-schedule/keynote/

  • Postdoctoral work is basically a jail: low pay, long hours, often not considered employees for fringe benefit purposes
  • Publishing is crucial - it is how people escape the postdoctoral jail
  • There is an arms race here: lots of people competing for 1st or 2nd author in a high-impact journal, and not many options in each field.
  • Age at first grant received has increased over the past 30 years.

Read: Zolas et al, 2015 in Science.

  • Most new PhDs will not go into tenure track jobs
  • What are implications for publishing?
  • The number of R&D firms in the US that are publishing has dropped dramatically - only about 10% now.
  • Many firms no longer do basic research; they work on applied research now.
  • Universities now operate as high end shopping malls, putting up space and branding while a lot of the actual revenue comes from without
  • For many, it isn’t publish or perish, but funding or perish
  • It is estimated that PIs spend 42 percent of their time on grant administration/writing
  • The PIs then fill their labs with temporary postdocs or grad students
  • This started in the US but is expanding to other countries rapidly
  • Massive risk aversion. Almost all funding is directed toward "sure bets".
  • NIH has an explicit requirement about continuity of "research direction" - a built-in risk and innovation reducer
  • Read "Rescuing US biomedical research from its systemic flaws" in PNAS
  • Bibliometrics reinforce risk aversion. Novel research is more likely to fail. Hardly anyone publishes negative results.

Read "Reviewers are blinkered by bibliometrics" in Nature.

  • Highly novel papers are almost always published in low-impact journals.
  • In short, things are disastrous, particularly as we anticipate federal funding declines. State funding of public universities continues to decline. Private foundation funding is increasing but is exclusively focused on narrow applied topics - the "cure industry".

Her book: How Economics Shapes Science.
She also recommends an NBER book, The Changing Frontier: Rethinking Science and Innovation Policy.

THURSDAY, JUNE 1

KEYNOTE: JEFFREY MERVIS: SCIENCE AND THE TRUMP ADMINISTRATION: WHAT'S NEXT?

9:00 – 10:00AM

https://www.sspnet.org/events/annual-meeting-2017/2017-schedule/keynote-2/

  • The "war on science" rhetoric (e.g. Chris Mooney) is overblown. Many Republicans support certain science funding, just not the funding Democrats always support.
  • Every government policy is a result of money, people, and ideas. What will Trump do?
  • Budget proposal has deep cuts to research funding. This comes not from a thorough analysis of the areas where science funding is wasted, but is an across the board cut.
  • At present, none of the cabinet appointments have any science training or significant attachment to public higher ed.
  • With the exception of NSF, all the agencies are being run by interim admins.
  • Less than 10% of the spots requiring confirmation have been filled. Both Obama and Bush were much farther along at this point. Trump admin seems indifferent to full staffing.
  • Census bureau is not fully staffed or funded, very worrisome since 2020 census is coming up.
  • OSTP funding is actually preserved at the Obama level, but Trump hasn't filled the leading spots. Unlikely that anyone respected by scientists will actually take the position, even if nominated.
  • DARPA may get more funding, as may NASA but civilian/domestic science funding will almost certainly fall.
  • The admin has put research funding in a category of programs that "haven't proven their value". We need to lobby to correct this.
  • Indirect cost recovery: a proposal we should be scared of. It would cap NIH funding on certain projects. The Trump proposal implies that universities are basically making a profit off federal NIH grants.
  • Good news is that the admin is working on a report on "evidence-based policy". We need to watch out for this. Could be good or bad.
  • Pew did a poll about the science march and found that support for it was partisan.
  • Republicans “aren’t buying what you’ve been selling” re: the value of most science funding.
  • Q&A
    • What can private citizens do at federal or state level? There are dozens of power centers in DC so your efforts need to be targeted. State research funding is minimal. Perhaps get involved in state redistricting, get more pro-science people elected and in the pipeline.
    • Climate change…
    • What would a recovery look like? We shouldn’t think about science as a unified thing. The funding isn’t unified, the support isn’t. The sky isn’t falling at the same rate for all areas.
    • Can private funding make up the difference? Doubtful. It is increasing but only marginally.
    • Trump admin hasn’t renewed the contracts of the heads of national academy of science. Will we lose a generation of science leaders? These will get filled eventually.

CONCURRENT 1D: RESEARCH DATA POLICIES: CRAFT EFFECTIVE POLICY AND IMPROVE YOUR IMPACT

10:30 – 11:30AM

https://www.sspnet.org/events/annual-meeting-2017/2017-schedule/concurrent-1d-adapting-to-the-changing-expectations-of-peer-review/

Rep from Springer-Nature

  • There’s now considerable evidence that making the underlying study data available and linking it increases citation rates
  • Funders are increasingly requiring open data, presently 40 major funders require it
  • Journal policies on data sharing are very confused. Very little direction on how the mandates should be fulfilled.
  • Springer Nature has looked at the policies of their journals; there are 4 basic categories: not encouraged, encouraged, required but with no directions or clear enforcement, and required with compliance monitored
  • Springer Nature made their policies available CC BY
  • They've estimated it only takes about 10 minutes for editors to get the data availability statements from the authors into the papers.
  • For long-term success, the policies need monitoring.
  • Implementing this is time consuming, need to engage with each journal individually.

Rep from PLoS

  • Open data has many benefits, replication, validation, new interpretation, inclusion in meta studies
  • PLOS has a strong data sharing policy
  • Less than 1 percent of authors are unwilling to share and abide by the policy
  • Authors need to get credit for the data they share; the data need to be citable, and other writers need to cite it. This is an evolving area.
  • Compliance is murky: does the data behind a figure need to be shared, or all the data that was collected? PLOS wants to err on the latter side.

Rep from Harvard Press

  • This is newish, lots of policies still being developed and adopted. Lots of discipline variation.
  • The biomedical community has led the policy front.
  • Data she’s collected show that authors comply with even the strongest policies.
  • In biomedical area, trend/preference is for the data to be cited with a persistent identifier in the reference list.
  • Harvard Dataverse is an open repository for any journal
  • Publishers and journals need to plug supported repositories, give authors more guidance
  • Ideally the journals tech systems can work with any APIs that repositories have. (Repositories need APIs…)

Rep from Dryad

  • All data in there is from something that has been published.
  • Every item is “light touch” human curated.
  • It is a non profit governed by a board of customers.
  • Open data should be thought of as preservation, not just for reproducibility
  • The best time to get the data into a repository is before the article is actually published.
  • End result is an article and a dataset that link to and cite each other, API allows embargoes
  • Curation involves making sure data is not sensitive or violating IP, that it is in appropriate file format and doesn’t have a virus.
  • Data download is free, the journals pay to have the data in Dryad.

CONCURRENT 2C: PREPRINTAPOOLOZA! WHAT’S HAPPENING WITH PREPRINTS?

2:00 – 3:00PM

https://www.sspnet.org/events/annual-meeting-2017/2017-schedule/concurrent-2c-preprintapooloza-whats-happening-with-preprints-2/

Jessica Polka, ASAPbio

  • Preprints are increasing!!!
  • Researchers are concerned: about getting scooped (not really a valid concern), about presenting un-peer-reviewed results (people do this at conferences all the time), and that a preprint will expose potentially misleading information to the general public
  • Most preprints in arXiv appear in a peer-reviewed journal eventually
  • Transparency about the peer review status of preprints is essential. No one benefits from conflating preprints with a finalized version.
  • Should preprints be cited? If not, we may be encouraging plagiarism.

Lisa

  • How does preprint usage differ across disciplines?
  • The Broad Institute (MIT & Harvard) looked into this
  • The Broad office of communication does not publicize preprints or any preliminary results
  • Any intellectual property claims need to be filed before a preprint is made publicly available
  • Big differences among Broad researchers about preprint use; it appears to depend on general journal policies for the field (i.e. whether preprints are allowed in the main journals or not)

Rep from Cold Spring Harbor laboratory

  • bioRxiv is modeled on arXiv
  • All the versions are available, when something is published, it gets linked to the publishers version of record
  • Then water came through the ceiling!!!!!!!!!!!!!! (We moved rooms)
  • bioRxiv is growing exponentially at the moment
  • Even though the papers haven't gone through peer review, everything that gets uploaded to bioRxiv gets viewed by a biologist to verify "fit".
  • bioRxiv is supported by the Chan Zuckerberg foundation
  • They are looking for ways to automate their human upload review and improve the conversion of documents to XML

Rep from PLoS

  • Both Wellcome Trust and CZI require OA deposit and accept preprints.
  • This, along with many preprint servers launching in the past 4 years, has big disruptive potential.
  • Preprints accelerate the discovery process and the publishing process.
  • PLOS will accept preprints from bioRxiv; they also comb over the preprint servers and solicit the authors to publish in PLOS. This is huge - it changes the dynamics between author and publisher.
  • Technology interoperability has improved dramatically in past 5 years, this allows rapid conversion of documents. Easy for authors to shift manuscripts around from journal to journal if it is on a preprint server that has an API (and the final publisher can interact with it).
  • Q&A
    • Licensing….. very complicated.
    • The folks at arXiv still get "anonymous" comments/reviews via email because the commenters don't want their criticism to be visible to the authors. Very interesting. Open peer review has a long way to go; not a proven model.

CONCURRENT 3A: WILL RICHER METADATA RESCUE RESEARCH?

4:00 – 5:00PM

https://www.sspnet.org/events/annual-meeting-2017/2017-schedule/concurrent-3a/

Some panelist

  • Metadata needs to be better, richer, more accurate.
  • More and more, people are trying to map interconnections.
  • Lots of publishers “set it and forget it” once they set up a metadata workflow. Metadata2020 is trying to improve this.
  • The entire scholcomm community needs to be on board with this.
  • Automation of this process is key, the volume is simply too large to use people.
  • Finer-grained is always better

Lady from ARL

  • Research libraries are reliant on good metadata. Institutions want to know all their research outputs. This is very hard without reliable quality metadata.
  • Researcher affiliation is surprisingly hard, requiring human curation, since use of a standard controlled vocabulary isn't widespread
  • Long discussions ensued. One obvious takeaway, everyone needs to use ORCID! Require one for all your PhD students!
  • What are the tradeoffs in having the researchers/depositors put in the metadata at time of submission? Is there a better way?
  • Maybe making metadata enrichment more collaborative can help.
  • What if we used machine learning/AI to generate the metadata? What accuracy level would be acceptable in order to accept the richer records? To have a general-use AI comb over so much content, the publishers would need to allow it in their licenses.

FRIDAY, JUNE 2

PLENARY: PREVIEWS SESSION – NEW AND NOTEWORTHY PRODUCT PRESENTATIONS

9:00 – 10:30AM

https://www.sspnet.org/events/annual-meeting-2017/2017-schedule/plenary-previews-session-new-and-noteworthy-products/

  • Lots of innovation and disruption in the field
  • Innovation is not a luxury, it is essential to keep customers now
  • Atypon. Seamless individual authentication. Atypon's shared WAYF cloud: log in once and the participating publishers share user authentication data, so access is seamless across sites and users can be tracked across sites. Will launch late 2017.
  • BenchSci. Trope is that scientists only engage with the literature sporadically in order to keep up to date. Actuality is that many consult it in small doses every week. People are often deluged in content though. Benchsci is an AI product that reads through papers to find biomedical information and allow filtering.
  • Code Ocean. Finding code or algorithms used in research is difficult. Not always available, many languages used, may need debugging. Code ocean hosts code in the cloud, supports all major languages, allows people to run code in the cloud and view in browser. It is a reproducibility platform. Cuts out the GitHub and downloading and other tweaking middlemen processes.
  • Crossref Event Data. Lots of places on the web refer to scientific information. Crossref and DataCite collaborated to develop a tracking mechanism for these references (tweets, Wikipedia, Facebook, blogs, etc.). Not a metrics product, and no user interface - all the info goes through the API, and interpretation of the data is done by the end user, not Crossref (see the sketch after this list).
  • Hypothes.is. The W3C approved an annotation standard this year, so annotation is coming to the web in a big way, fast. Hypothesis is a nonprofit. Annotation allows all documents to become living: the version of record is preserved but can be continually updated if necessary. Journal clubs are forming using the software; they are also trialing it for peer review.
  • LibLynx. Identity management, in a world where people use single sign on allows tracking and personalization. Presently, identifying patrons is siloed across many platforms, users can also have access through many IDs. Users have a disconnected access experience, often having to login 2x (library, then platform). By harmonizing ID management, liblynx allows cross site tracking and upselling.
  • NEJM knowledge +. A learning and assessment tool for physicians to pass their board exams. Uses AI to benchmark where people are and only show them relevant content to get them up to the next benchmark level. Originated after detailed consultations with doctors.
  • PSI. A firm that "keeps content safe". They offer subscription fraud detection services and have recovered over $50 million in damages for their clients in the past 10 years. They have a validated database of IP addresses (and an API) to ensure that no one is getting unintended access as universities change their IP addresses. They also offer bribery detectors, LOL.
  • RedLink. Has a new platform, Remarq. It is their strike back against academia.edu and ResearchGate. They embed in existing websites and allow comments and annotation, as well as circulation "sharing". All of this requires only the addition of JavaScript to existing sites. They also offer polling and user interaction detached from an article.
  • SCOPE. Publishers need better metadata to sell their books; to do this for the backlist catalog, they have an AI product, "conCise", that generates abstracts and keywords for each chapter, which then go to human curators to verify. Several publishers on JSTOR already use it for chapter abstracts and subject headings
  • Digital science. Dimensions for publishers. There aren’t many accurate leading indicators of use. Dimensions is a knowledge platform that allows tracking and aggregation of funding information, useful for soliciting reviews, IDing conflicts of interest, and scouting new topic areas for sales.
  • UC press, Editoria. Luminos. It is a web based open access workflow management and book development tool. Ingested Word files, then everything else can be done in the platform all the way to the final product. Based on the PubSweet framework. Partnership with Collaborative Knowledge Center.
  • U Michigan Press. Fulcrum. A platform that allows authors to present multimedia content into their work. Ebook don’t allow this (well). Fulcrum is a scalable platform that can allow this and has the player technology embedded in it so meets preservation requirements. Open source.
  • Zapnito. They offer white label “knowledge networks “ and “feeds”. Lots of buzzwords about knowledge frameworks, I’m not really sure what this actually does…. but both SpringerNature and Elsevier are customers.
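For the curious, here is roughly what pulling Crossref event data for a single DOI looks like. A minimal sketch, assuming the public Event Data API endpoint and response shape as I understand them; the DOI and contact address are placeholders, so check the Crossref docs before relying on it.

```python
import requests

# Query Crossref Event Data for everything pointing at one DOI.
resp = requests.get(
    "https://api.eventdata.crossref.org/v1/events",
    params={"obj-id": "10.5555/12345678", "mailto": "you@example.edu"},
    timeout=30,
)
resp.raise_for_status()
for event in resp.json()["message"]["events"]:
    # Each event names a source (twitter, wikipedia, ...), a relation type,
    # and the subject that referenced the DOI; interpretation is on you.
    print(event["source_id"], event["relation_type_id"], event["subj_id"])
```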

CONCURRENT 4E: BUT WHAT DO THE USERS REALLY THINK? USER EXPERIENCE AND USABILITY FROM ALL SIDES OF THE TABLE

11:00 – 12:00PM

https://www.sspnet.org/events/annual-meeting-2017/2017-schedule/concurrent-4e-but-what-do-the-users-really-think/

  • There is little solid research on information-seeking behavior beyond the STM world. Humanities publishers have a long way to go and need to do more usability testing.
  • At EBSCO they moved user research out of the quality assurance department and into product management. They are working continuously with the product owners of ebooks to improve service.
  • Everyone should read “Democratization of UX” on Medium. We need to go beyond white papers; they are fine, but key points need to break out of the paper container into talking points and queryable databases.
  • Instead of general user testing, to make the results widely understood, many are using “user personas”, then creating testing panels of users who come close to the personas.
  • To make research actionable, the research questions need to be actionable. UX testers also need to stay ahead of the user. Henry Ford: “If I had asked my customers what they wanted, they would have told me ‘a faster horse’.”
  • To make better products, publishers, middlemen vendors, and libraries must collaborate on user research.

CONCURRENT 5E: WHO’S FASTER, A PIRATE OR A LIBRARIAN? SCI-HUB, #ICANHAZPDF, ALTERNATIVE ACCESS METHODS, AND THE LESSONS FOR PUBLISHERS AND THEIR CUSTOMERS

1:30 – 2:30

https://www.sspnet.org/events/annual-meeting-2017/2017-schedule/concurrent-5e-whos-faster-a-pirate-or-a-librarian/

My panel. Slides not posted online.

CONCURRENT 6F: IMPROVING ACCESS TO SCHOLARLY RESOURCES FROM ANYWHERE ON ANY DEVICE

2:30 – 3:30

https://www.sspnet.org/events/annual-meeting-2017/2017-schedule/concurrent-6f-improving-access-to-scholarly-resources/

Ralph, ACS

  • RA21
  • Access methods have changed a lot in the past 30 years, but not nearly as much as they could have.
  • IP authentication was invented in the mid-90s and we’re still using it. Why? There are other ways.
  • Detailed walkthrough of how hard it is to get to an article from off campus.
  • RA21 is a joint initiative between NISO and STM to move away from IP authentication.
  • Researchers want seamless access from multiple devices.
  • IP authentication facilitates piracy (Sci-Hub leverages IP authentication). Publishers want to be able to shut out individual users they suspect of piracy, but shutting off an IP address affects hundreds or thousands of people.
  • There is a black market for university login credentials

R. Wenger from MIT

  • Read Roger Schonfeld’s Scholarly Kitchen post of 2015/11/13.
  • IP authentication is an anachronism and “not sustainable” in the long term.
  • Academic libraries and publishers have an opportunity to collaborate on a new system.
  • Libraries care about privacy, walk-ins, and the protection of usage data.
  • RA21 is going to use a SAML-based system, of which Shibboleth is one implementation.
  • There is a lot of inertia behind IP authentication in libraries that will need to be overcome.
  • Benefits for libraries: no need to maintain IP ranges, reduced use of proxy servers, identity management likely outsourced to campus IT, more granular user data.
  • rwenger@mit.edu

Chris from Elsevier

  • SAML is a federated identity management technology (Shibboleth is one implementation).
  • RA21 is running three pilot programs right now. They have a privacy-preserving WAYF (“where are you from”) option.
  • How it would work: sign in once using a university email address; it gets stored in the browser, and you are good to go on that device, in that browser, unless you delete the cookie. The cookie is privacy-preserving because it stores only the email’s domain, so no individual data is kept. (A sketch of the idea follows this list.)
  • www.ra21.org
  • OCLC has been involved in the discussions but is not a formal partner.
  • This is not “a cross-publisher login system”. Rather, users always log in to their institutional SSO and are then directed back to the content.
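To make the privacy claim concrete, here is a toy sketch of the domain-only WAYF hint described above. The federation mapping and the dict standing in for a browser cookie are my own illustrative stand-ins, not anything from the RA21 spec.

```python
# Hypothetical domain -> IdP mapping; in SAML-land this would come from
# federation metadata, not a hard-coded dict.
FEDERATION_IDPS = {
    "mit.edu": "https://idp.mit.edu/idp/profile/SAML2/Redirect/SSO",
}

def remember_wayf_hint(email: str, cookie_jar: dict) -> None:
    # Persist only the domain, never the address itself.
    cookie_jar["wayf_hint"] = email.rsplit("@", 1)[-1].lower()

def idp_for(cookie_jar: dict):
    # A returning user is routed straight to their IdP, skipping the WAYF page.
    return FEDERATION_IDPS.get(cookie_jar.get("wayf_hint", ""))

cookies = {}
remember_wayf_hint("j.doe@MIT.edu", cookies)  # sign in once...
assert cookies == {"wayf_hint": "mit.edu"}    # ...only the domain is kept
print(idp_for(cookies))
```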

CLOSING PLENARY: DESSERT COURSE: A DISCUSSION WITH THE SCHOLARLY KITCHEN CHEFS

3:30 – 4:30

https://www.sspnet.org/events/annual-meeting-2017/2017-schedule/closing-plenary/

Scholarly Kitchen discussions

David Crotty, OUP

  • Why hasn’t scholarly publishing been disrupted already?
  • The number of authors and articles increases every year, but subscriptions decline almost every year. Thus new growth strategies need to emerge.
  • Publishers are increasingly inserting themselves into every aspect of the research and dissemination process. (Vendor-publisher mergers.)

Todd Carpenter, NISO

  • Money is the big problem; states, universities, students, everyone is tapped out. The forecast is that resources will only stagnate or decline.
  • Has libraries’ devotion to privacy set them back? How will they deal with the personalization that users “want”?

Bob, American Mathematical Society

  • For most societies, membership is declining and revenue is stagnant or declining.
  • They need to demonstrate their value better.
  • To outsource or not?
  • There are big questions of scale; societies have trouble competing with the major publishers. Can they remain independent and in-house?

Kent, Redlink

  • Politics is now greatly affecting the scholcomm ecosystem
  • A new UK draft policy would require researchers to show their funded work to the government before publication.
  • AI may save us. Or it may destroy things.
  • Now that the intermediary institutions are Silicon Valley (not neutral), we need to continually build in skepticism about the algorithms that determine the results we see.

Notes from Ex Libris Users of North America 2017

Schaumburg, IL

Narrative

From Tuesday May 9 through Friday May 12, I attended the 2017 meeting and conference of Ex Libris Users of North America (ELUNA), which was held at the Renaissance Hotel and Convention Center in Schaumburg, Illinois. I did so at the request of the Dean and Associate Dean in order to ensure that I was armed with the most up-to-date information about the Primo discovery software before our library’s migration to the Alma/Primo system this coming June. Below are my session notes, which are of varying length depending upon how informative I found each session. ELUNA is organized in ‘tracks’ so that sessions pertaining to the same or similar products are spread out over the course of each day, and people interested in only one aspect of Ex Libris’ products, such as Primo, can absorb all the relevant content. The conference was a full-service affair with breakfast and lunch provided on premises, quite a welcome change compared to ACRL. I found it to be an educational experience, and the camaraderie was quicker to develop compared to other professional events I’ve attended. Not only were all the (non-vendor) attendees working in academic libraries, most of us used (or in our case, will use) the same software on a day-to-day basis; this allowed for conversations to get rather ‘meaty’ in short order.

If you are interested, find my (occasionally cryptic) notes below with links to the detailed (non-plenary) session descriptions and let me know if you have any questions. Presenter slides are supposedly forthcoming but I have not been able to find them yet.

TUESDAY, MAY 9

RECEPTION 6:00PM - 10:00PM

This was a nice opportunity to catch up with old friends from the University of Minnesota.

WEDNESDAY, MAY 10

ELUNA WELCOME AND OPENING KEYNOTE: BETTER TOGETHER: ENRICHING OUR COMMUNITY THROUGH COLLABORATION (PLENARY)

This is the best-attended ELUNA yet. There is a new ELUNA mailing list domain/website: http://exlibrisusers.org

Opening keynote Mary Case, from UI Chicago

  • Cross-institution collaboration is crucial; it will allow for efficiency and improvements in access to local holdings and non-held resources.
  • In the past, collaboration was state- or country-based. What if we thought of our community as the world? (ARL, SPARC, CRL are leading examples.) IARLA is a new global entity for library collaboration.
  • How will collection development change as more libraries work together? Distributed print repositories, with responsibility for preservation of their de-duped collections.
  • Data preservation is another area with big scale opportunities for collaboration. (HathiTrust, DataONE, DataRefuge, DPN)

GLOBAL & NORTH AMERICA COMPANY UPDATE (PLENARY)

Eric Hines, President of ExL North America

  • 2017 is ExL’s 30th anniversary as a company.
  • Nice pictures of ExL’s old offices in Chicago (the first in the USA); Notre Dame was the first customer.
  • Rattled off a bunch of inspirational quotes.
  • 58% of ExL NA employees have an MLS.
  • They are grateful for all the suggestions their users have provided over the years and want to continue to leverage these.

Matti Shem-Tov, CEO of ExL

  • Over 3,000 institutions in NA use either Primo or Summon.
  • They started with 55 developers for Alma; now there are over 100 working on it.
  • Customers in Korea and China have recently come on board with Alma/Primo.
  • APIs are big for them and will continue to be. They want a very open, customizable system.
  • There are challenges, but they are trying to integrate a lot of ExL and ProQuest products.
  • Working on shared Summon/PCI content.
  • Over 100 customers are using the new Primo UI.
  • They’ve opened new data centers in Canada and China to decrease cloud response times. Committed to exceeding the contractual obligations on speed and responsiveness.
  • On the horizon is product development/enhancement around research data services management and better course management interoperability.

NEXT-GENERATION LIBRARY SERVICES UPDATE (PLENARY)

Strategic plan highlights for 2017

  • What they hear from customers: they want efficiency, value, and new/improved services
  • Based on close work with 8 libraries, they’ve developed the new Alma UI. They are committed to continued enhancement of Alma and the UI
  • They want to do more community based testing, shared test plans, to reduce duplication in testing work across customers.
  • They pay attention to the Idea Exchange; it is where they get product enhancement ideas.
  • In December 2016 they crossed a milestone in APIs, with more transactions in Alma coming from APIs than from humans.
  • The Oracle database is the single most expensive aspect of Alma.
  • They will soon introduce a benchmark feature in Alma analytics to compare your institution to other Alma institutions
  • In 2016 Primo searches increased 30% over 2015.
  • By the end of 2017, the coverage in Summon and PCI should be the same, they’ve been working on this since the Proquest acquisition
  • They are working on rolling the Primo Back Office into Alma so that they can be managed within the same system.
  • Leganto looks pretty cool, they have added many features recently.
  • A Leganto experiment at Vanderbilt resulted in a 52% savings on course materials
  • CampusM is a new mobile platform that consolidates all campus activity updates.

PRIMO PRODUCT UPDATE, ROADMAP, Q&A

  • The Primo working group (of users) meets monthly. There was also a hackathon on the new UI.
  • Coming features/work in progress:
  • Linked data, RESTful APIs in Primo, improved recommendations
  • They are making promotion of OA materials a priority. More indexing, more linking, maybe an additional OA facet filter
  • Customers are encouraged to vote on PCI content additions on the Idea Exchange
  • Trying to get as much multimedia content in PCI as possible, their other main content priority other than OA
  • They do continuous adjustment of the proprietary relevance-ranking algorithm
  • Summer release of new UI will have brief record (snippets) views using PCI enriched metadata
  • They have started working with an external auditor to make sure that the new UI will meet all governmental accessibility requirements
  • August release will have a “resource recommender” feature when people sign in and view their patron record.
  • Later 2017 release will have additional linked record display fields
  • The unified PBO/Alma release is in beta testing. Production release will be 2018.
  • They are encouraging group hacks and code sharing. They want all users to be able to use locally developed hacks. This is called the “Primo Open Discovery Framework”.
  • The Cited By data all comes from CrossRef currently. But they may expand to include other data sources.
  • Q&A about the proprietary ranking algorithms and how they test them to make sure they are actually getting better

THE OPAC AND ‘REAL RESEARCH’ AT HARVARD

  • Harvard has had several OPACs. One was AquaBrowser, which people apparently hated due to too many Web 2.0 gimmicks like automatic word clouds. At one point they had 3 OPACs simultaneously.
  • Recommended reading: Katie Sherwin (2016) on university websites: top 10 guidelines.
  • Harvard is trialing Yewno; preliminary usability testing suggests it is just a gimmick.
  • They did a lot of usability testing and user surveys before picking Primo. Results indicated that people wanted a single results page with filters rather than “bento” approaches.
  • Lots of people wanted the old catalog.
  • Known Primo gaps: series title linking. Primo just does a keyword search and gets lots of irrelevant results; users are used to being taken to a ‘browse’ feature.
  • Harvard’s old OPAC also used a sorted list of uniform titles; there is no analog in Primo.
  • A big detriment of Primo is that the linked data is accomplished using keyword searches rather than taking users to a browse list. At present, no known fix for this.
  • They still have their classic catalog, which gets a fair amount of use, steady through the year, unlike Primo, which fluctuates with the academic calendar. Indications are that staff and “real researchers” still use the old catalog.
  • Read: Karen Markey, 1983. Users expected articles to be in OPACs even then!

A TALE OF TWO UIS: USABILITY STUDIES OF TWO PRIMO USER INTERFACES

At the University of Washington, they did 3 separate usability studies to gauge the effectiveness of the new UI. Methods: 12 participants, pre/post tests, 60-minute sessions, 7 tasks, and a talk-aloud protocol. Reward: $15 bookstore gift cards.

Tasks (on old Primo):

  • Known title search, one that was on loan
  • Article subject search, peer reviewed
  • Course reserved materials
  • Known DVD, available
  • Known article from a citation
  • Print journal
  • Open ended search

Big problems (low success):

  • Print journal volume, book on loan
  • Points of confusion: the e-shelf (no one knows what it is), the collection facet, and the ‘send to’ language
  • Old UI required too many clicks! (Between tabs, etc.)

Then they ran a usability test on the new UI, with basically the same methodology.

Problems (low task success):

  • Print journal, known DVD
  • Points of confusion: “tweak my results”, save query, a hard time finding advanced search
  • People were very upset that the page refreshed each time a facet was selected
  • Hyperlinks were largely ignored by users; they wanted “buttons”
  • A 3rd usability test brought back some old participants plus new volunteers and had them compare the two UIs
  • The majority preferred the new UI on all the aspects compared
  • Everyone is annoyed that book reviews come up in front of main items (relevancy-ranking problems)

Enhancements needed:

  • Sign in prompt for ‘My Favorites’
  • Sort option for favorites in the new UI
  • Enable libraries to add custom actions to the actions menu
  • Zotero support

OUR EXPERIENCE WITH PRIMO NEW UI

  • Tennessee Tech adopted new Primo in July 2016 and never used classic; previously they used Summon. Boston University adopted new Primo in 2017 after using old Primo since 2011.
  • Config: “new UI enabled” for the OpenURL services page, added a call number field to the search menu, hid sections on the full display that didn’t make sense to end users. Tennessee Tech had some big problems when they went live that required ExL development work; they were an early adopter and guinea pig. These were all resolved eventually. BU had done a bunch of customizing of the old UI, which ended up anticipating a lot of the changes in the new UI.
  • Overall, the usability testing BU did found people were satisfied with the new UI. A lot of the defects they encountered have been fixed in the May 2017 release or will be in the August 2017 one.
  • Main complaint: slower page load time. (They found a statistically significant difference in page load time from old to new UI using Google Analytics data.)
  • BU now collapses a lot of the facets by default, since testing showed lots of people didn’t use them when expanded by default.

THURSDAY, MAY 11

EX LIBRIS STRATEGY UPDATE (PLENARY)

  • Since 2008, ExL has been focused on developing products that let libraries focus on areas of real value. Their customers are academic and national libraries and will continue to be. Part of that focus means attending to researcher behavior.
  • One area that has changed since 2008 is that funding increasingly comes with strings attached and requirements for data openness and management. Simultaneously, people are expected to demonstrate impact: metrics of success.
  • Institutional goals are thus to improve visibility, impact, and compliance. ExL knows this and is working on these areas.
  • There is already a very complicated and robust ecosystem in these areas. The current model is not one that they can easily work in or influence, so they want to “disrupt” it, not with a new product but with Alma and PCI.
  • Alma is format-agnostic, so it can handle all sorts of metadata. They are continually enriching the PCI/Summon metadata. The disruption comes in the analytical area, where they are working on getting good metrics that can intersect with the rich Alma/PCI metadata. (Controlled vocab, unique ID enrichment, etc.)
  • They are already working with development partners on Alma/PCI, starting in July, to get these enhancements going. The timeline is that some of the pilot enrichment will be done by the end of 2018, with a general rollout of whatever features are ready in 2019.

CUSTOMER LIFE CYCLE AT EX LIBRIS (PLENARY)

Lots of clichés, suits making bad puns. Very boring.

PRIMO ANALYTICS USE CASES: BREAKING DOWN THE SILOS BETWEEN DISCOVERY SERVICES, INSTRUCTION AND USER EXPERIENCE

  • The University of Oregon has an Evaluation, Assessment, and User Experience team, and an Online User Interface team.
  • These teams are trying to be informed by the ACRL infolit Framework.
  • These teams are new, and historically there was not a lot of collaboration between these areas.
  • How can Primo analytics answer questions about library instruction?
  • They interviewed 6 instruction librarians about how they might use the Primo dashboard and came up with tasks to test with undergraduates.
  • Undergrads mainly had questions about search functionality, whereas grad students focus more on content.
  • A problem identified by both the librarians and the undergraduates is the prevalence of book reviews.
  • Students were recruited guerrilla-style with bookstore gift cards as a reward (5 students).
  • Tasks: find a peer-reviewed article about the US Civil War, request a book (paging service), find a sociology-related database, get a known-title book not held by the library but indexed in PCI, and do a course reserves search.
  • No one could find the article; everyone was able to find a database (LibGuides 2.0 database list); other tasks had mixed success, mostly 2/5 or 3/5.
  • General findings: students didn’t prefilter (advanced search) or postfilter, which is very discouraging. They did Google-style long-string searching.
  • They are giving regular Primo analytics dashboard updates to the reference librarians.
  • Primo dashboard limitations: you can’t publish dashboard results to Alma, though reports can be emailed out. At UO they are exporting the raw data and creating a dashboard for the librarians in a 3rd-party application.
  • Supposedly Alma and Primo analytics will merge; it is on the ExL product development roadmap.
  • The presenters didn’t deliver on the promise of handouts or scripts…

MANAGEMENT Q&A (PLENARY)

Uninteresting. After 15 minutes, I went to the bar.

HACKING THE GENERAL ELECTRONIC SERVICE

  • Sawyer Library had a popular reserve-request feature that they wanted to keep going as they moved to Alma/Primo.
  • What does it take to make this work? Set up a norm rule to copy the MMS ID into an empty field that gets passed through the OpenURL (not all fields get passed, so be careful); the OpenURL then gets processed through the Alma API. (A sketch of the idea follows this list.)
  • He also wrote a text-a-call-number feature that uses the same norm rule.
  • Emery-Williams on GitHub
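A rough sketch of the flow as I understood it: the norm rule stashes the MMS ID in an OpenURL field, and a small script pulls it back out and hits the Alma bibs API. The `rft.mms_id` parameter name and the API key are placeholders of my own; the endpoint follows Ex Libris’s documented REST pattern, but verify it against your own region and key.

```python
from urllib.parse import urlparse, parse_qs
import requests

ALMA_BIB_API = "https://api-na.hosted.exlibrisgroup.com/almaws/v1/bibs/{mms_id}"

def bib_from_openurl(openurl: str, api_key: str) -> dict:
    # The norm rule copied the MMS ID into this (placeholder) OpenURL field.
    qs = parse_qs(urlparse(openurl).query)
    mms_id = qs["rft.mms_id"][0]
    resp = requests.get(
        ALMA_BIB_API.format(mms_id=mms_id),
        params={"apikey": api_key, "format": "json"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()

record = bib_from_openurl(
    "https://example.edu/reserve?rft.mms_id=991234567890123", "MY_API_KEY"
)
print(record.get("title"))
```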

UX DESIGN & OPEN DISCOVERY PLATFORM: NEW WAYS TO CUSTOMIZE AND DESIGN YOUR LIBRARY DISCOVERY EXPERIENCE

  • People might think that the new UI is just customized CSS and HTML on top of the old one, but that’s wrong. The entire thing has been rebuilt using AngularJS and separated from Alma and old Primo; all Alma data comes from API calls.
  • The new May release feature is session history in the menu bar.
  • Right now, Salesforce cases will have to be opened for problematic user tags, but this will eventually be controllable via the new Alma/Primo BO.
  • Coming feature: resource recommendations (login required) that appear right before search results.
  • The August release will have multiple facet selection without page reloads? Unclear, but they’re working on it.
  • New white paper about the Primo Open Discovery Framework. They are trying to make the new UI as customizable as possible.
  • The Royal Danish Library has released a “search tips” button and an Altmetric API integration.
  • The main developer of the new UI talked about accessibility issues they’ve received pushback on from the accessibility working group. Infinite scroll was one problem, so they introduced pagination.
  • The new UI also loads slower than the old UI, but they are continually working on this: compressing things, reducing the number of API calls, expanding server capacity.
  • The cited-by feature is much more prominent in the new UI and shows a trail, which it didn’t previously. They plan to introduce a save-citation-trail feature in 2018.

JUDGING AN E-BOOK BY ITS COVER: VIRTUAL BROWSE FOR ELECTRONIC RESOURCES

  • As more of the collection is electronic, why can’t we virtually browse it?
  • We can, in the old UI, but it requires some work.
  • You need to add a PNX norm rule that deals with the AVA tag. (The AVA tag is an Alma tag, not MARC; a parsing sketch follows this list.)
  • See the slides in the ELUNA document repository for details.
  • $$I: add our institution code
  • “Int” means entity type
  • Mostly this works, but there are problems.
  • Some vendors don’t include call numbers; others include non-call-number data in call number MARC fields.
  • Ebook cutter numbers vary: some are “e”, others “eb”.
  • There is duplication if an ebook is available in multiple ebook collections.
  • You can’t hover in virtual browse to learn more; you can only determine whether an item is physical or electronic by clicking on it.
  • New UI: will it work? Yes.
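A sketch of the parsing half of this, for anyone who wants to see the AVA tag in the wild: pull call numbers out of Alma MARC-XML and normalize the ebook cutter variants mentioned above. The record is faked and I am assuming $$d carries the call number; confirm the subfield layout against your own PNX/MARC output.

```python
import xml.etree.ElementTree as ET

# A faked Alma record containing the (non-MARC) AVA availability field.
MARCXML = """<record xmlns="http://www.loc.gov/MARC21/slim">
  <datafield tag="AVA">
    <subfield code="a">01MY_INST</subfield>
    <subfield code="d">QA76.9 .D3 e</subfield>
  </datafield>
</record>"""

NS = {"m": "http://www.loc.gov/MARC21/slim"}
root = ET.fromstring(MARCXML)
for field in root.findall("m:datafield[@tag='AVA']", NS):
    call_no = field.findtext("m:subfield[@code='d']", namespaces=NS)
    if call_no:
        # Strip the inconsistent ebook cutter suffixes ("e" vs "eb").
        cleaned = call_no.strip()
        for suffix in (" eb", " e"):
            if cleaned.endswith(suffix):
                cleaned = cleaned[: -len(suffix)]
        print(cleaned)
```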

SOUTHWEST USER GROUP (ESWUG) REGIONAL USER GROUP MEETING

  • Everyone is new! Except for a couple of California community colleges using Aleph.
  • UNLV will be coming onto Alma/Primo in 2018; they left III’s Link+.
  • Several Arizona institutions will be moving to Alma/Primo in 2017 and 2018.

FRIDAY, MAY 12

NEW UI CUSTOMIZATION FOR LIBRARIANS WHO AREN’T DEVELOPERS

Whew… a bit in over my head here. Hopefully someone in the system will learn AngularJS. Or David Walker will save us all? /-:

ACRL Narrative

Baltimore, MD

At the end of March, I attended ACRL 2017 to present some of my scholarship in a contributed paper format and learn about a variety of topics relevant to my professional interests and job responsibilities. ACRL is always a worthwhile learning experience and simultaneously provided me with a chance to reunite with old work colleagues and school friends.

While in Baltimore for ACRL, I attended many sessions/events. They are listed below in order of occurrence as are my sporadic and cryptic notes. Feel free to read them if you are interested.

The session I enjoyed the most was the panel Resilience, Grit, and Other Lies. The panel Re-skilling for a Digital Future got me thinking about how to develop my own abilities and exposed me to some new digital humanities methods; to my knowledge, none of my faculty are working at this level. The two sessions about data gathering on students, Closing the “Data Gap” between Libraries and Learning and Data in the Library Is Safe, but That’s Not What Data Is Meant For, are on a topic I’ve been following for years with interest and trepidation. They didn’t present any new empirical findings that aren’t in these two articles:

Wednesday, March 22

Exhibits Opening Reception 5:45 - 7:30PM
Dinner with friends, we won the trivia competition at Water Street Tavern!

Thursday, March 23

Changing Tack: A Future-Focused ACRL Research Agenda 8:00 - 9:00AM

ACRL is doing a gap analysis of the areas not covered by the VAL report of a couple of years ago. William Harvey from OCLC has developed an info-viz tool on the VAL study and its gaps. The goal is to promote the impact of academic libraries to their parent institutions. They are looking at all types of institutions: small and large, public and private, regional.

They did interviews with the provosts of all universities in the sample. Provosts were very concerned about educating students for “lifelong learning”. The data tool they built will provide context for the literature: we know space is important, but how about for your type of institution? We can easily pinpoint studies that apply to specific types of institutions. A developing research question: university administration is much less concerned about privacy than librarians are.

OCLC research has put together a dashboard that draws on the literature and will allow people to visualize their own institutional data

Resilience, Grit, and Other Lies: Academic Libraries and the Myth of Resiliency 9:40 - 10:40AM

Big think-pair-share activity: when have you been asked to do more with less and cope at your job? E.g., it is coded into the job, doing more with less as a point of pride, lots of unpaid labor going on; this is a long trend.

What is resilience? It has widespread use in ecology and psychology, and has now been picked up by business. What does it do? It individualizes, shifting responsibility from the collective to individuals. Applied to society, it naturalizes what may not be natural. “We needed a revolution. We got resilience.” - Alf Hornborg

Grit: typically applied externally to populations “needing” grit, people needing to pull themselves up by their bootstraps. People naturally want to persevere, to a certain extent. Grit, resilience, etc. offer promises and aspirations but never resources. How can we recover if we’re not given tools to do so?

Almost all our marketing is defensive; because of the grit/resilience mindset, we take on the shape of our oppression. Absence of power or agency becomes a moral failing under the grit/resilience framework. In libraries, the end game of this logic is Things > People: cutting staff if automation allows or faculty demand certain items.

How do we combat this?

  • Give people space to fail. If you are a manager, allow people to try and fail without punishment.
  • Buy books that critique the grit/resilience models.
  • Take a “melancholy” approach: the system wants you to recover, to adapt. Instead, take time to mourn, to complain.
  • The people who we often ask to be gritty are already doing it. Stop asking them to do it more.

Keynote: Roxane Gay 10:55 - 12:15PM

Very ‘woke’, in the parlance of our times. Didn’t strike me as relevant to much of our day to day work.

Closing the “Data Gap” between Libraries and Learning: The Future of Academic Library Value Creation 2:00 - 3:00PM

Libraries and institutions have cared about assessment for a long time. Since measuring “learning” is actually pretty difficult, most institutions measure proxy variables that are typically grouped under “success”. Since 2010, ACRL has been producing research in this area. Most research has been correlational; it is hard to ID causation. In polls by EDUCAUSE, a majority of students are resigned to/OK with learning data and library data being used to help them.

Problems:

  • silos of data
  • granularity
  • too little data

The audience is your wider university; we need to be able to link library data to institutional priorities. The purpose of this work/research is to improve instruction and services; it isn’t done just for itself.

Problems:

  • organizational culture opposed to data driven decisions
  • data that is incomplete, inaccurate, or proprietary
  • privacy concerns
  • need consent

What’s the future of the LMS/CMS? The next-generation digital learning environment, ‘NGDLE’. It needs to be interoperable and use open standards. There is a lot of unrest in the LMS world; no one wants an uber-application but rather one that will talk to many different apps. Cal Berkeley is doing interesting work in this area with their “learning record system”.

Scott Walter gave some thoughts about how the library has been integrated into the LMS at DePaul. They ID struggling students in research-intensive courses and get them in contact with a librarian if they don’t perform well on early metrics. All this happens with a lot of automation; the library did the setup and now just answers student questions.

The director of product management for IMS Global talked about how to get stakeholders to the table. Make sure there are privacy standards and interoperability standards, ‘chunk’ things into small manageable issues, and always agree on general specs before deciding on specific technologies.

You Say You Want a Revolution? The Ethical Imperative of Open Access 4:00 - 5:00PM

There is a tension between the articles in the ALA Code of Ethics covering copyright and access.

Amy Buckland, librarian: we need to live OA. LIS literature in particular is mostly not OA, by a large margin. Pressure your colleagues. Get OA policies enacted at your institutions, at your colleges, at your departments.

Heather Joseph, SPARC: the need for open access hasn’t gone away; people still need access. We have made good progress in the past 15 years, which we shouldn’t lose sight of. Part of the OA movement should include moving away from the emphasis on the journal article: there are other products of intellectual labor and other scholarship that get used and influence the conversation.

Brad Fenwick, Elsevier: there is a dichotomy between ethics and morals. Ethics is typically imposed from without, via professional associations, law, or church; morals typically come from within or are localized/individualized. Good publishers want the work they produce to be read; they just want to get paid for it. Elsevier supposedly supports sharing, e.g., their purchases of SSRN and Mendeley; they also just launched BioRN. He thinks we need to fall back on the principle of “do no harm”. We need evolution of the scholcomm system, not a revolution.

Moderation, Q&A

  • How can we bridge the gap between OA publishers and OA advocates?
  • Elsevier: never draw a line in the sand, mixed models are the future
  • HJ, the future needs to have scholars in control of the works they produce. The internet provides us with an opportunity to cut out the middlemen, we need to do that.
  • Elsevier: we need to beware unintended consequences. We need to better think through the implications of the distribution models. Startup costs and ongoing publishing costs are not trivial. They can be disintermediated and spread out but they are not going away.
  • HJ, some good articles in The Economist recently. Funders need to put in mandates.
  • E: mandates will be pushed back against. That is overplaying the OA hand.
  • Mod: how can we convince researchers to publish OA without funder mandates?
    • AB: appeal to self interest and how OA work will be more read and influential.
    • HJ: OA needs to tie itself to university missions
    • E: Elsevier is actually agnostic about OA, they support tenure committees taking a broader approach and moving beyond citation counts.
    • E: because it is now easier than ever to publish, we need to shift from quantity to quality.
  • Mod: what would be the tipping point for OA?
    • HJ: if 51% of all articles are OA but they’re using an APC model, that is not success.
    • AB: the default should be openness, we’ll have won when people assume that articles are open and that subscription required is a curiosity.
    • E: small publishers stand to be greatly harmed by OA

Friday, March 24

ACRL Town Hall: Academic Libraries and New Federal Regulations 10:30 - 11:30AM

No Notes

Using Altmetrics to Transform Collection Development Decisions 1:00 - 2:00PM

What are altmetrics? NISO takes a broad definition: basically any information about use, views, or influence. People at the Altmetric company have a narrower definition: less about use, more about impact.

They ran a big survey of librarians; STEM librarians had more familiarity and knowledge (self-rated) than others. About half of respondents who do collection development “never use” altmetrics in their collection development decisions. There was no significant relationship between being on the tenure track and knowing about altmetrics; the strongest relationship was with discipline. Most usage of altmetrics by librarians comes from Scopus and Altmetric.

Why use AM for the humanities? Slow obsolescence, so there is a good possibility of rich AM.

The recommended tool is Altmetric Explorer for Librarians.

Case study details about using AM at Mississippi State University: they found it good for IDing books to buy.

Data in the Library Is Safe, but That’s Not What Data Is Meant For: Exploring the Longitudinal, Responsible Use of Library and Institutional Data to Understand and Increase Student Success 3:00 - 4:00PM

(UMN and Megan Oakleaf)

The wave of data-driven decisions is hitting higher ed; if libraries don’t participate, we are going to get washed over. Management wants to see correlational data about the library and how it works for the school. Libraries need to cultivate partnerships with their offices of institutional research. The reality is that most millennials are eager to share if you ask them (obtain informed consent).

At Lewis & Clark they track checkouts, reference questions, and in-person instruction. L&C data showed that students who asked reference questions got better grades and were much less likely to drop out (statistically significant).

Minnesota reports out of their research: they are now using “propensity score matching” to find effects, rather than logistic regression (creating pseudo-controls; a sketch follows). Use of books and databases had the biggest effects on academic performance. Ideally all the data is made available in ‘real time’ so that interventions and predictive analytics can take place. Having the library at the learning analytics table allows us to influence conversations on privacy. They are experimenting with a notification in Moodle that shows students how their library usage compares to peers, using peer pressure to drive interactions.
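For anyone unfamiliar with the technique, here is a toy sketch of propensity score matching: model each student’s propensity to use the library from covariates, then pair each user with the nearest non-user to build a pseudo-control group. The data and variable names are invented; this shows the general method, not UMN’s actual pipeline.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))                    # covariates (GPA, credits, ...)
used_library = rng.integers(0, 2, size=200).astype(bool)

# 1. Propensity scores: P(used library | covariates).
scores = LogisticRegression().fit(X, used_library).predict_proba(X)[:, 1]

# 2. Match each library user to the non-user with the closest score.
nn = NearestNeighbors(n_neighbors=1).fit(scores[~used_library].reshape(-1, 1))
_, idx = nn.kneighbors(scores[used_library].reshape(-1, 1))
pseudo_controls = np.flatnonzero(~used_library)[idx.ravel()]

# Outcomes (GPA, retention) would now be compared across the matched pairs.
print(f"{used_library.sum()} users matched to {len(pseudo_controls)} controls")
```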

The guy from JISC didn’t say much new. All this should be viewed as a big data problem that will require big data tools, servers, and staff. Don’t try it without these resources.

Re-skilling for a Digital Future: Developing Training and Instruction in Digital Scholarship for Academic Librarians 4:15 - 5:15PM

Are libraries ready for the new methods being used by our faculty? Can we use them for our own LIS research? An ARL SPEC Kit suggests there’s a lot of work to be done. There is a new ACRL digital scholarship section.

At the HathiTrust Research Center, they are trying to support text mining of the collections. IMLS funded. They are teaching librarians how to text mine, how to teach it themselves, and how to support other faculty working on it. There are preinstalled algorithms in the platform they’ve developed. They are also teaching people basic Python scripts for more in-depth analysis (sketch below). Libraries need to think about what infrastructure they need to support this type of research, as it is growing considerably.
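In the spirit of those “basic Python scripts”, a bare-bones term-frequency count is about the level such workshops start at. A minimal sketch; the file name is a placeholder.

```python
import re
from collections import Counter

# Read one plain-text volume (placeholder path) and count terms.
with open("volume.txt", encoding="utf-8") as fh:
    tokens = re.findall(r"[a-z]+", fh.read().lower())   # crude tokenizer

stopwords = {"the", "of", "and", "a", "to", "in", "is", "it", "that"}
freq = Counter(t for t in tokens if t not in stopwords)
for word, n in freq.most_common(10):
    print(f"{word}\t{n}")
```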

NCSU lady: a lot of the existing training material went too deep too fast and took too much time to learn, so they partnered with other faculty to learn it in an in-house short course, then developed their other training materials. This resulted in DVIL, the Data Visualization Institute for Librarians.

IU lady: IU did a cross-training program, developed by the librarians and endorsed by management. No plans to do it again.

Saturday, March 25

No notes taken this day. I presented with my co-authors. Dr. Hayden’s speech was very moving.

Addicted to the Brand? Brand Loyalty Theory as a Means of Understanding Academics’ Scholarly Communication Practices 8:30 - 8:50AM

Academic Libraries, Filtering, and the “Tyranny of Choice” 8:50 - 9:10AM

Shadow Libraries and You: Sci-Hub Usage and the Future of ILL 9:45 - 10:05AM

In a World Where… Librarians Can Access Final Research Projects via the LMS 10:25 - 10:45AM

Keynote: Dr. Carla Hayden 11:00 - 12:15PM

Narrative about ALA

Orlando, FL

From Friday, June 24 through Monday, June 27, 2016, I attended the annual conference of the American Library Association. My primary purpose for traveling to this conference was to participate in a panel to which I was invited. My presentation was on June 25th as part of the RUSA/RUSA-STARS panel: Resource Sharing in Tomorrowland.

I spent my remaining time at the conference attending education programming, discussion groups, and speaking with vendors and other librarians. In the evenings I was fortunate enough to be able to reconnect with past colleagues from the University of Minnesota and alumni from Indiana University. Below are the educational and discussion sessions I attended each day, often accompanied by my brief (hopefully not too cryptic) notes. If anyone has questions about the sessions I attended or what I learned, please contact me directly.

SATURDAY

8:30 – 10:00 AM Expanding Your Assessment Toolbox

http://www.eventscribe.com/2016/ala-annual/fsPopup.asp?Mode=presInfo&PresentationID=138560

A lot of us may be doing assessment (formative) without realizing it:

  • Observing student behavior
  • asking for and dealing with questions on the fly

Summative assessment in one-shots is just a difficult problem; a semester provides many more opportunities. At a bare minimum you need to establish some kind of feedback loop.

LibCal as an instruction calendar has some limitations: the reporting functions don’t really get the job done if you add qualitative info to the calendar events.

At a macro level, it is very important that all librarians use the same instruments and metrics. People can go above and beyond, but there has to be a baseline that captures data valid for all librarians. Moving from the Standards to the Framework is a shift away from behavioral indicators toward cognitive indicators.

Activity

Question from the audience: “do we even need to assess the Framework (i.e., itself, as compared with others)?”
Panel response: not really? But the theories there might make your info lit program better.

10:30 – 11:30 AM How Local Libraries Can Protect Global internet Freedom

http://www.eventscribe.com/2016/ala-annual/fsPopup.asp?Mode=presInfo&PresentationID=138016

Alison, the Library Freedom Project founder, was not available; she was at another conference.

A Cypherpunk’s Manifesto is similar to the ALA Freedom to Read and intellectual freedom statements.

Long discussion about web advertising and 3rd-party tracking. Tor (The Onion Router) prevents tracking as long as a user doesn’t log into an account; connections are encrypted, though, to protect you if you do log in. E.g., the Facebook Like button can read Facebook cookies, so Facebook will know whatever you look at on pages with the Like button, even if you aren’t currently logged in, as long as you have ever logged into Facebook in that browser (and haven’t deleted the cookie).

The Tor network needs (donated) relays to improve speed. In particular, exit relays are needed; there aren’t many of them because individuals can’t run this type of relay from their homes, as ISPs often threaten to cut off access. However, larger institutions with free bandwidth, like libraries and universities, can run exit relays.

Kilton Public Library in Lebanon, NH is the first public library to run a Tor relay, and got a lot of press coverage for it.
LFP does workshops about privacy. IT staff at Kilton got the library director’s consent to run a Tor relay, as long as the trustees consented. The Feds, DHS in particular, got mad and pressed the board of trustees to shut it down pending review. They did, and collected public comments: they received hundreds of comments from around the country in support, with only one negative. So they turned their middle relay back on.
The middle relay was a successful pilot. In November 2015 they moved their relay to exit status. This required a lot of work, including getting a separate IP just for the relay so that the normal users of the library wouldn’t be impacted by anything the exit relay did or if it were attacked.

Arguments for libraries to support Tor:

  • Intellectual freedom
  • Privacy
  • Chilling effect of surveillance
  • A Tor relay lets your library have a global impact
  • A Pew survey found that a majority of respondents wanted their libraries to educate about privacy and help people with privacy {incomplete citation provided}
  • A lot of bandwidth is wasted when the library isn’t open; the relay can be set to take advantage of this.
  • Running a middle relay is basically free: it can run on almost any computer that runs Linux and just requires setup time and occasional software-update maintenance. (A minimal config sketch follows this list.)
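For reference, a middle relay boils down to a few lines of torrc. A minimal sketch with example values; the option names are standard Tor options, but check the current Tor manual before deploying.

```
Nickname            ExampleLibraryRelay
ContactInfo         systems@library.example.edu
ORPort              9001
ExitRelay           0             # middle relay only; never exit traffic
SocksPort           0             # pure relay, no local client use
RelayBandwidthRate  5 MBytes      # steady-state cap
RelayBandwidthBurst 10 MBytes
```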

It is very important to have admin support when this is undertaken. A middle relay is the lowest-risk option. Libraries should also install Tor Browser on public access computers and host privacy classes.

1:00 – 2:30 PM Resource Sharing in Tomorrowland: A Panel Discussion About the Future of Interlibrary Loan

http://www.eventscribe.com/2016/ala-annual/fsPopup.asp?Mode=presInfo&PresentationID=139596

Carolyn and I presented very brief highlights from our research. The other panelists were marvelous.

3:00 – 4:00 PM Micro-assessments of Public Services Usability

http://www.eventscribe.com/2016/ala-annual/fsPopup.asp?Mode=presInfo&PresentationID=143200

Closing the loop is important.

UMD College Park had a lot of unclaimed holds. Why? They ran a Qualtrics survey of users who had placed a hold but didn’t pick it up. Responses indicated a significant communication issue: people either didn’t read the emails about their holds or, if they did, typically didn’t understand them. They need a better subject line in the email.

UC Denver: using student workers in an academic library. They looked at the distribution of where students worked and when, against gate counts and transactions, to get the most efficient use of staff. This resulted in a more equitable distribution of workload among student workers and improved morale.

Exit surveys: DePaul University. Given to students exiting the library. Got info about “why they came” and also satisfaction figures. A survey with radio buttons and checkboxes keeps it quick. Got good responses by giving the survey only a couple of times per semester, to avoid survey fatigue. Sell the survey as short (“literally only takes 30 seconds!”) and bribe with candy.

SUNDAY

8:30 – 10:00 AM Intellectual Freedom Committee – Privacy Subcommittee meeting

http://www.eventscribe.com/2016/ala-annual/fsPopup.asp?Mode=presInfo&PresentationID=143643

Need to get all ALA web presences using HTTPS. Also need to move to responsive web design, as many ALA websites are not mobile friendly.

Various resolutions were discussed, along with their progress in drafting and whether the IFC will endorse them.

The digital content working group will be shifting focus to the user experience for ebooks. Discussion about having model language for publishers about protecting ebook user privacy. Some mockups have been done by NISO on getting user consent in a friendly, clear way.

Note: need to get ahold of the newly approved guidelines. They’ll be up on the web by the end of July, along with press releases.

Discussion about how, because privacy impacts many areas of ALA, notably LITA, privacy may need its own committee/group instead of being handled by the IFC subcommittee.

10:30 – 11:30 AM 22nd Annual Reference Research Forum

http://www.eventscribe.com/2016/ala-annual/fsPopup.asp?Mode=presInfo&PresentationID=139602

1st study: Dweck’s theories of intelligence

There are 2 general schools of intelligence: incremental vs. entity theory. The study looked at students’ working theories of intelligence: did they impact students’ use of the library and interactions with librarians? Mixed results.

NOTE TO SELF: see recent work on Dweck’s theories in info lit and Howard Gardner’s multiple intelligences. There may be a paper here about g factor/IQ.

2nd study: container collapse

Print containers of information have broken down; everything looks alike online. The study uses mixed methods to see how science students (of many ages and grades) pick “credible” results: presenting students with a bunch of resources, asking them to pick ones to cite, then having them explain their reasons for not picking all the other sources. Built in Articulate Storyline. Study not complete yet.

3rd study: interviews with Brandeis students about their research process

Lots of Google and Wikipedia use, but all participants did use the library’s discovery layer. None of them used specific subject databases, such as the MLAIB. Saw a lot of classic Dunning-Kruger behavior in how students evaluated sources: they think they can critically evaluate sources, but they’re actually bad at it.

11:30 – 11:45 AM Your Social Media Policy Checklist

http://www.eventscribe.com/2016/ala-annual/fsPopup.asp?Mode=presInfo&PresentationID=160019

Every policy needs to answer these questions. What is social media? Define the scope; don’t write a policy for just 1 platform.

  • Who has access & oversight? I.e. Who decides what?
  • Why are you there?
  • What can/can’t we post about?
  • What images can/can’t we use? (CC-licensed content, public domain images, etc.)
  • What happens in a crisis? (e.g. hacking, emergencies, criticism)
  • Are there legal things we need to document? See rules promulgated by parent organization.

http://www.eventscribe.com/2016/ala-annual/fsPopup.asp?Mode=presInfo&PresentationID=137999

What is the next top tech trend?

  • Concepts
  • Real-time
  • Virtual reality
  • Balance
  • Super easy application development (easy tools for working with APIs)

What is the most useless trend?

  • Teaching patrons to code. Libraries should provide people access to learning books/videos and communities. But be real: most people aren’t going to learn to program.
  • Internet of things
  • Shodan.io, an Internet of Things search engine.

What are you sick of hearing about?

  • 3D printers (applause)
  • Smart watches
  • Don’t buy a MakerBot if you aren’t prepared for a lot of maintenance

General comments of interest:

IT security in general is still hard; there have been lots of ransomware attacks recently. Fortunately, Flash is becoming less used, so that is one less vulnerability on library computers. Common logins are just a really bad idea: every staff person needs a unique account and password.
Increasingly, libraries are licensing things that are cloud-hosted; the contracts for these must be read to ensure that the library is protected and that patron data is protected.

This isn’t a trend, but it should be: equitable distribution of makerspaces. Makerspaces require lots of up-front work by IT staff. Spec-ing these out is work, and 3D printers can be a lot of work to maintain. In general, 3D printing is still way oversold.

How much should we cater to the technology needs of patrons? It is a waste to cater to either tail of the bell curve. Focus on the middle 80%.

3:00 – 4:00 PM RUSA-ETS Discussion: Communicating with Users Through Social Media Networks

http://www.eventscribe.com/2016/ala-annual/fsPopup.asp?Mode=presInfo&PresentationID=143587

Discussion Format

Topics:

  • Meetup
  • Periscope
  • Yik yak

General consensus:

  • Moderate all comments, if possible.
  • Yik Yak is cancer
  • Look at the LoC’s comments and posting policies
  • Acknowledge patron complaints, politely. Important to respond to comments as the institution
  • Periscope seems like it could be useful for special event programming

4:00 – 4:45 PM How to get beyond the ‘Agree’ button in Privacy Policies

http://www.eventscribe.com/2016/ala-annual/fsPopup.asp?Mode=presInfo&PresentationID=160034

NYPL is in the midst of revising its privacy policy; they know from their analytics that hardly anyone ever looks at it.
Brooklyn Public Library had a 3-paragraph policy. NYPL is working with the Berkman Center at Harvard to create a shorter, more readable policy.

Brooklyn Public got an IMLS grant to develop training materials and fund the training of staff. Training of frontline staff is very important.
This privacy training came about as a result of a New America Foundation study which revealed that staff needed much better training.

Dataprivacyproject.org

MONDAY

8:30 – 10:00 AM Data to Discourse: Subject Liaisons as Leaders in the Data Landscape

http://www.eventscribe.com/2016/ala-annual/fsPopup.asp?Mode=presInfo&PresentationID=138564

YODA project, doing great things. (Yale)

Daniel, Biomed Central

The reproducibility problem: can we trust the literature? There is increasing evidence of very real, significant problems.
We need to go back to the methods: document them well and share the procedures and results. Intervention at the write-up stage, where publishers come in, is TOO LATE. Publishers are becoming friendlier to publishing negative results, which previously they were relatively hostile to; this is part of the reproducibility crisis. Trial registration numbers are key here: they give a unique identifier that allows the procedures and results to be definitively associated with each other. At BioMed Central they’re giving each individual document a DOI and then publishing it to Crossref (only for clinical trials).

Holly

As more science becomes interdisciplinary, open data is increasingly important just to DO the type of science that needs to get done. This data needs to be both human- and machine-readable. Why should the library be involved? Who else is going to do it? IT? The institutional research office? You must take the leap to do this. Start where you are; learn.

What can the library do:

  • Management
  • hosting
  • visualizations
  • GIS analysis
  • grant writing help

If you have a digital humanities lab, you must partner with them and teach or co-teach workshops or credit courses on data management. If you take the long view, data services are something that will cement your library’s role on campus.

Q&A
Topics discussed:

  • ACRL research data management roadshows
  • Data Cube
  • Purdue’s data curation profile toolkit

10:30 – 11:30 AM Student Privacy: The Big Picture on Big Data

http://www.eventscribe.com/2016/ala-annual/fsPopup.asp?Mode=presInfo&PresentationID=142924

Big data means big money to be made, either by identifying areas to economize or by selling profiles or insights. As a society, we have still not adjusted our legislation and expectations to the impact that digital technology has on privacy.

FERPA: one prime concern was with the “permanent record” and how it could be misused. The goal was to limit access to the info to minimize potential harm. Many exceptions have been added for practical reasons.

In the present age, LMSs and email providers are vulnerable and increase the scope of potential human error. Big data operations, LMSs, and Khan Academy-like sites are able to A/B test many things and make judgments. This has great potential benefits: colleges can pinpoint the precise times when students give up or mentally check out of their courses (if they’re using the LMS). Student IDs that are RFID-enabled allow almost total surveillance. Unfortunately, schools are normalizing surveillance. The problem is that vendors are not covered by FERPA the way schools are; they can share for “legitimate” business reasons. For-profit educational websites (as long as they are not degree-granting) have NO FERPA obligations.

The context of data sharing is really important, and this makes legislation and regulations difficult to craft. People speaking out appears to be the main way to get change from ed-tech providers. Inform and educate patrons about what is happening; even if they can’t do anything, this can create social movements and get them to speak up. When possible, go with privacy-protecting alternatives; these may cost more, so maybe use FOSS. Something that is free and isn’t a FOSS project is almost certainly selling the data it collects.

Q&A
California has good legislation in this area, which targets the operators of services, not the schools. This is a much better strategy.

1:00 – 2:30 PM Taking the Cake: A Generational Talkback

http://www.eventscribe.com/2016/ala-annual/fsPopup.asp?Mode=presInfo&PresentationID=142933

James LaRue Intro

Is it censorship if a publisher pulls and pulps a book on its own?
Not so long ago, intellectual freedom was an absolutist position, at least officially for ALA. Now we are experiencing pushback for a variety of reasons.

Judith

The 1st Amendment is the father of all the others. Free speech is the first right, which allows all the others and is the only guarantee that coincides with social progress.

Recommended book: From the Palmer Raids to the Patriot Act.

The ALA Freedom to Read statement was largely inspired by McCarthy and red-baiting. College campuses have recently become hotbeds of discussion about subordinating free speech to racial, religious, or other concerns, often under the guise of alleviating “micro-aggressions”. “Niceness is the new face of censorship in this country” (her quoting someone else).

Katie

Despite the efforts of ALA, book challenges still happen frequently. The growing emphasis on social justice is pressing against the traditional notion of freedom of speech. Pulling books out of circulation entirely is not the correct answer to books that make people uncomfortable. Innocuous speech needs no defenders.

The panel was anticlimactic; both panelists agreed that the decision to pull the book was wrong, so very little actual debate about the deep issues occurred. Good Q&A about lots of things.

ACRL Notes

Portland, OR

My co-presenter and I flew to Portland from Long Beach; I snapped a #PDXcarpet picture as is tradition. See http://99percentinvisible.org/episode/pdx-carpet/ for details.

Wednesday, March 25

4:00 – 5:45 Opening keynote: G. Willow Wilson

http://gwillowwilson.com/

This is the largest ACRL ever by attendance. Anyone involved with media, creators and curators alike, is part of the same conversation about “the new culture wars”: a rethinking of the American narrative. The Ms. Marvel series was only supposed to be 7 issues long; there are many comic book fans who are resistant to change, and she did not anticipate the positive response. “History is a series of palimpsests. We all live on the same territory but we do not make and use the same maps.”

Later that evening I attended the opening of the exhibit hall where I saw many of my library school classmates; it was nice to reunite.

Thursday, March 26

8:00 – 9:00

I presented my co-authored paper, Bypassing interlibrary loan via Twitter: An exploration of #icanhazpdf requests, in the 3rd timeslot of this session. The 1st paper was about libraries learning from food trucks in their use of social media. The 2nd was about a crowd-sourced reference platform being used at Purdue University.

10:30 – 11:30 Partners in Design: Consortial Collaboration for Digital Learning Object Creation

The term “digital learning object” (DLO) is not well understood by librarians and students; it is instructional-designer jargon. A DLO should ideally be interactive: passive consumption of content is the lowest bar. DLOs can be used for blended learning, not as a replacement for people but as preparation for, or follow-up to, human contact.

This consortium’s members are geographically dispersed, all small liberal arts schools with average enrollment around 2,000. Creating DLOs required a lot of collaboration. Funding to purchase the software was obtained from grants, and different consortium members used different packages; Articulate Storyline and Adobe Captivate were both used. Collaboration was done over Google Drive and Hangouts. Working with others held people accountable. Collaborative work can require bringing in extra people or losing people, so be prepared for that. Having duplicate skills among team members is good: it helps to bounce ideas off others, and skill redundancy protects you if someone leaves. Don’t make a lot of deadline commitments; these are hard to meet.

Stakeholders are both intra- and extra-library. Everything from catalog design to website interface affects student learning, and all these people need to be involved with DLO creation. If you get good at making DLOs, a lot of people at your university will want your help or advice.

Assessment: DLOs must be assessed. “Does it work?” is the most obvious question: links are live, people click where they should. “Does it do what you want it to do?” is more difficult to assess and may require pre- and post-tests. User testing (for usability or learning outcomes) is essential.

Having some type of “proof of completion” is important. Self-grading DLOs are easier on the back end but harder on the front end to create.

1:00 – 2:00 Assess within your means: Two methods, three outcomes, a world of possibilities

Many university goals align with library information literacy (IL) goals, a good opportunity for collaborative assessment. Assessment typically takes way longer than you think it will, and it is never really over; the findings should just inform your next set of questions.

Research question: does “contact” with librarians impact students’ information literacy skills? Method: scoring student work using an IL rubric. Three populations were studied: transfers, underclassmen, and upperclassmen. The populations were surveyed about their library use and contact with librarians, papers were graded according to the rubric, and the scores were compared with the survey responses to see whether they aligned.
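
To make the method concrete for future me, here’s a toy JavaScript sketch of that comparison. The data, field names, and scores are entirely hypothetical, just an illustration of the approach:

```javascript
// Hypothetical rubric scores; "contact" records whether the student
// reported contact with a librarian on the survey.
const papers = [
  { contact: true, score: 3.5 },
  { contact: true, score: 4.0 },
  { contact: false, score: 2.5 },
  { contact: false, score: 3.0 },
];

// Mean rubric score for one group of papers.
const mean = (group) =>
  group.reduce((sum, p) => sum + p.score, 0) / group.length;

console.log(mean(papers.filter((p) => p.contact)));  // 3.75
console.log(mean(papers.filter((p) => !p.contact))); // 2.75
```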

4 tips

  • Start small
  • Set limits on the time you will invest
  • Be flexible, make changes as needed
  • Work with others on campus; many others are making similar efforts

3:00 – 4:00 UX for the people: Empowering patrons and front-line staff through a user-centered culture

UX (user experience) is holistic; it encompasses all the interactions people have with your organization. Everybody is a UX librarian, whether they want to be or not. “Aim for delight.”

UX is not dumbing down. When people have too many choices, they can become paralyzed, so reduce unnecessary complication. Make noticing a habit: do the things your users do, not the way you know how, but the way your directions tell them to do it. Pay attention to when you have to say “no”. Avoid catastrophic failures by making it easy for people to recover if they’ve made a mistake.

Content really is king. You can have nice images, CSS, etc., but if your content is bad, your web presence is bad. Put the most important thing first. Be concise. Have a consistent voice.

4:15 – 5:15 Keynote: Jad Abumrad

http://www.radiolab.org/people/jad-abumrad/

Friday, March 27

8:30 – 9:30 Promoting sustainable research practices through effective data management curricula

Students need to reflect on how the generic data management best practices will apply to their area and research.

Don’t reinvent the wheel: lots of good content from:

  • New England Collaborative Data Management Curriculum
  • DataONE education modules
  • NSF provides good guidance too

Teaching a discipline-agnostic workshop is a challenge, but letting students and researchers from different disciplines converse in the workshop means they teach each other a lot. If you have students create and turn in data management plans at the end of the workshop series, these are a great opportunity for summative assessment. The presenters noted that a lot of students wanted detailed training on the specific tools of their discipline.

11:00 – 12:00 Invited paper: Searching for girls: Identity for sale in the age of Google

Safiya Noble

Search engine bias affects women and girls of color. Women’s labor in IT is racialized and undervalued.

Her work is grounded in 3 frameworks:

  • Social construction of technology
  • Black feminism
  • Critical information studies

Corporate control of information can be dangerous for democracy and collective conversations. Most adults (66% in a Pew study) said “internet search engines are a fair and unbiased source of information.” Various studies show that manipulating search engine results can have very powerful effects, including swaying political voting. The commodification of women on the Internet is wholly normalized: totally non-sexual keywords bring back sexually charged results. Concepts that people use in everyday language get decoupled from keywords once they go into the algorithm.

Google autocomplete is a terrifying window into humanity.

Note: Most of this depends on the definition of ‘bias’. What is bias? She did not define it well here. Under different definitions, her results would vary.

1:30 – 2:30 Promoting data literacy at the grassroots: Teaching & learning with data in the undergraduate curriculum

Data reference presumes an understanding of what to do with the data once discovered; data management presumes an understanding of the research process or data analysis. For undergraduates, it is probably best to start with analysis of data as it already exists: have them play with a data set.

3 lesson plans:

  1. discovering data through the literature - modeling and parroting the experts
  2. evaluating data sets - exploring data and asking questions based on it
  3. research design - operationalizing a research question

Why bother with this at all? Finding data sets is not as easy as finding journals, books, etc. A bigger cognitive leap is needed to use data in research than to use traditionally published sources. And don’t forget about ICPSR, a great resource.

4:00 – 5:00 Invited paper: Open expansion: Connecting the open access, open data and OER dots

SPARC has been compelled by various forces to expand into the open data and OER movements. The digital environment is full of opportunities to leverage low-cost distribution mechanisms; despite this, the types of things most scholars and students want operate under restrictive distribution policies. All three movements believe “openness” is a solution to many problems in the current system.

$1,207 was the average budget for textbooks in the 2013–2014 school year, according to the College Board. The textbook market and the scholarly research market both have intermediary problems.

The goals and strategies of these 3 movements are aligning around efforts to:

  • Create infrastructures
  • Create/adjust legal framework
  • Create sustainable business models
  • Create policy frameworks
  • Create collaborations

Things aren’t perfect, though; there are conflicts about:

  • Business models
  • Privacy and security
  • Not wanting to be scooped
  • Value - potential for monetization

Saturday, March 28

8:30 – 9:30 A tree in the forest: Using tried-and-true assessment methods from other industries

Context is very important: longitudinal data is essential to measure change, and one-off assessment can never capture the full picture. Don’t worry about forecasting, just track; this will allow you to answer the question “are we performing better than we have before?”

Targeted marketing of resources can affect use if done correctly.

Methods:

  • NPS net promoter score
    • Loyalty is a great predictor of satisfaction
    • “How likely are you to recommend X to a friend or colleague?” with a Likert-scale response
    • Promoters: 8, 9, 10; passives: 5, 6, 7; detractors: everything below
    • NPS = %promoters - %detractors (see the sketch after this list)
    • npsbenchmarks.com
  • Net Easy Score
    • Looks at effort
    • “How easy was it to get the help you wanted today?”
    • NES = %easy - %difficult
    • Would be good for assessing reference
  • Customer Effort Score
    • “How much effort did you need to put forth to (X)?”
    • CES = sum of ratings / n
  • SUS System Usability Scale
    • Looks at perceptions of usability
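
Since the formulas above are easy to garble in notes, here’s a minimal JavaScript sketch of the three scores. The function names and sample data are mine, and I’m using the cut-offs as presented in the session (the canonical NPS bands are slightly different), so treat it as illustration rather than gospel:

```javascript
// Net Promoter Score: %promoters - %detractors, using the session's
// bands (promoters 8-10, passives 5-7, detractors below 5).
function nps(ratings) {
  const promoters = ratings.filter((r) => r >= 8).length;
  const detractors = ratings.filter((r) => r < 5).length;
  return (100 * (promoters - detractors)) / ratings.length;
}

// Net Easy Score: %easy - %difficult from categorical responses.
function nes(responses) {
  const easy = responses.filter((r) => r === 'easy').length;
  const difficult = responses.filter((r) => r === 'difficult').length;
  return (100 * (easy - difficult)) / responses.length;
}

// Customer Effort Score: the plain mean of the effort ratings.
function ces(ratings) {
  return ratings.reduce((sum, r) => sum + r, 0) / ratings.length;
}

console.log(nps([10, 9, 8, 7, 5, 3, 2]));                   // ≈ 14.3
console.log(nes(['easy', 'easy', 'difficult', 'neutral'])); // 25
console.log(ces([2, 3, 4]));                                // 3
```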

Gathering these metrics across a consortium is a great opportunity to compare what works at different institutions.

9:45 – 10:45 Learning analytics and privacy: Multi-institutional perspectives on collecting and using library data

Most administrators assume the library works; what they need is more granular and flashy data and metrics. Getting hooked into academic advising services is great: have them be partners in pushing students to the library.

FERPA is important: use the data internally, since publishing adds a whole layer of complication. Lots of data is being collected that we may not be aware of. Who at the university can access it? How? What can it be used for? Offices of institutional research are crucial for answering these questions about how the library affects students’ performance.

Depending on your state’s legislation, this type of research may be difficult or impossible.

11:00 – 12:00 Closing keynote: Lawrence Lessig

http://www.lessig.org/

Why ‘blog’ at all?

Isn’t blogging dead? Relatively speaking, yes. And yet the winner-take-all attention economy that those of us in the internet-connected world occupy has given certain blogs tremendous power; Wired, for example, ran a fine profile of Italy’s Five Star Movement and the crucial role that the Beppe Grillo blog played in its electoral success. I regularly read blogs and have learned a great deal from some of them. I have no illusions that my blogging will make a substantial impact on the world; my aims are much more modest.

  1. Share works in progress or ones that may never be published formally
  2. Post my mandated travel reports for the professional development events that I attend
  3. Keep my Markdown and JavaScript chops somewhat up-to-date

Site Generation With HexoJS

This site is made using HexoJS, a fast & powerful static site generator framework powered by NodeJS. I’ve made sites by hand in the past and have also used WordPress; when I realized I wanted to get some draft scratchpad content out publicly, neither option sounded appealing. Coding by hand was too much maintenance coupled with decision fatigue, something I’m particularly prone to. WordPress is serviceable, but I wanted to host the site using freely available, low-overhead web hosting options and to be able to migrate easily if necessary. A quick web search on these criteria turned up HexoJS.

HexoJS Advantages
Hexo blogging is done in Markdown, which is easy to write and super portable in the event that a blog post needs to be repurposed into a webpage elsewhere or another format such as PDF (via simple steps with Pandoc or Python). All Hexo pages are static HTML/CSS/JS, which is super fast (particularly once minified) and can be served by any low-overhead, cheap or free webserver. My employer offers a bare-bones webserver which still delivers the content beautifully and quickly. If I ever need to migrate to a different server I could do so quickly, and should development on HexoJS ever cease, all my content in Markdown is transferable with minimal headache to any other Markdown-based blogging platform, such as Jekyll.
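
As a sketch of that repurposing step, here’s roughly what I have in mind. It assumes Pandoc is installed and on the PATH, and the script name and loop are my own invention rather than anything Hexo ships with:

```javascript
// pandoc-export.js — convert every Markdown post in Hexo's default
// source/_posts directory into a standalone HTML page.
const { execSync } = require('child_process');
const fs = require('fs');
const path = require('path');

const postsDir = path.join('source', '_posts');
for (const file of fs.readdirSync(postsDir)) {
  if (!file.endsWith('.md')) continue;
  const input = path.join(postsDir, file);
  const output = file.replace(/\.md$/, '.html');
  // --standalone produces a complete HTML document; changing the output
  // extension to .pdf yields PDF instead (which requires a LaTeX install).
  execSync(`pandoc --standalone "${input}" -o "${output}"`);
}
```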

My HexoJS Setup

A note here to myself and anyone interested in HexoJS