Motley Marginalia

A Digital Scratchpad by Gabriel J. Gardner


Notes from ELUNA California User Group 2024 Conference

Long Beach, CA

Narrative

On Thursday 10/24 and Friday 10/25 I attended the 2024 meeting and conference of the Ex Libris Users of North America (ELUNA) California User Group, which was held at the California State University Office of the Chancellor in Long Beach. This was quite pleasant because the last time many of us from the CalState libraries gathered at that location, it was during our collective migration to Alma and Primo; it brought back fond and traumatic memories. Below are the detailed session descriptions with my session notes, which are of varying length depending upon how informative I found each session.

THURSDAY, OCTOBER 24 Real intelligence only! (No A.I. Day)

9:15 - 9:45am Opening / Welcome

Presenters: eCaug Steering Committee

Thank you to Ex Libris and the CalState University System for providing funding and in-kind contributions that allowed the conference to not charge a registration fee.

9:45-10:15am Ex Libris Roadmap

Presenter: Katy Aronoff (Ex Libris)

Now more than 2,655 institutions/libraries worldwide use Alma.
Development priorities are: enhancing existing products and implementing AI solutions that actually make things better, not just AI for AI's sake as a flashy thing.
Customers are now, supposedly, expressing increased interest in personalized services; this is in noted contrast to messages about privacy that Ex Libris has received in the past.

Recent developments:

  • Consortia central configuration dashboard UI (important for us in the CSU)
  • Streamlined open access publishing based on transformative agreements allowing libraries to process POLs for APC waivers. I am not sure if this would help us currently but it may in the future depending on the types of agreements that get signed.
  • Specto Special Collections is a new DAMS product based on Alma Digital.
    • Features include AI-powered metadata extraction, linking to other materials that provide context, and a digital exhibition tool that places content into context.
    • Pricing will be based on additional services.
    • Alma Digital customers can be converted to the base Specto package (no add ons) at no additional cost.
    • Lots of new thinking about the discovery process, less search based and more context tools built in.
  • ‘Collecto’ is a new unified collection development workflow product. They are looking for libraries who want to be development partners.
    • Features include: cross-institution analytics (for benchmarking),
    • AI-based workflow suggestions,
    • AI-based collection development functions (e.g. buy-borrow comparisons, cost-per-use comparisons, linear feet analysis).
  • ‘Library Open Workflows’ new product for “no code” workflow development and workflow automation. They are trying to get Office 365 and Google Suite integration so those products can easily talk with Alma. General availability set for late 2025.
  • AI metadata generator for Alma Community Zone records being used to enrich MARC CZ records.
  • AI metadata assistant for Alma MD editor. Cataloger-mediated for quality control checks.
  • Linked open data: Sinopia integration available.
  • Rapido community is growing, currently 57 pods.

The ‘Next Discovery Experience’ has 4 components:

  1. Primo Research Assistant
  2. New UI, General availability set for summer 2025.
  3. Focus on Linked Open Data, leaning heavily on Wikipedia.
  4. User Engagement Platform, user analytics powered by MixPanel, not Oracle Business Intelligence.

10:30-11:15am Concurrent Sessions 1 - Partnering with providers for metadata excellence

Presenter: Tamar Ganor (Ex Libris)

Description:

This session will present the latest projects Ex Libris has launched to increase cooperation with content providers. Topic: ERM / Resource management

The content triangle consists of Ex Libris, content providers, and institutions. Only two corners of the triangle ever talk to each other at once; this is the inherent difficulty.
They have developed a “Provider Platform” to improve their communications. This is basically giving the vendors access to an Alma UI that only shows things pertaining to the specific vendor’s products. It helps reduce troubleshooting time because the content providers can see how things are displaying and behaving in Alma. It will make part of the process visible and transparent. It also allows collaboration. The goal is faster resolution of customer issues and a reduction in the time library staff spend as middlemen between Ex Libris and the content providers.
Partners already using it: Gale, Adam Matthew, Bloomsbury, Oxford University Press, De Gruyter, Sage.
They’ve introduced a master Manifest, maintained by providers, which holds a comprehensive record of all collections the content provider offers. Initial setup of the manifest is done manually, but it is then automatically updated when content provider lists change.
Ex Libris is aware of upcoming regulations about content accessibility in the USA. They have established an Ex Libris-Provider Advisory Group where vendors are collaborating on meeting accessibility requirements.

Q: The Provider Platform is great, how much of that conversation between Ex Libris and content providers will be visible to library workers?

A: None, currently. They are happy to talk about this issue but don’t want to give full transparency because it can create frictions between the 3 parties in the triangle.

11:30 -12:15pm Concurrent Sessions 2 - Alma Letters Workshop

Presenter: Christopher Lee (CSU)

Description:

Come with Letters you want help tweaking, and get them fixed up in real time. Christopher Lee created the Letters used by Rapido in the CSU, and his code has been copied at various institutions outside of the CSU. After this experience, Chris is comfortable editing most Letters. Other Letters experts are also encouraged to help improve Letters in this interactive session. Topic: Resource Sharing / Administration / Fulfillment

Ex Libris has improved the interface for editing letters since we migrated to Alma. It is considerably better, although it still leaves much to be desired. I think the best workflow is to copy and paste the XML into VS Code, appropriately configured for XML, then paste it back into Alma once the edits have been made. I noticed a few of our staff in this session as well and shared with them my thoughts on using VS Code.
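One failure mode of the copy/paste workflow is pasting back XML that is no longer well-formed. As a quick sanity check, a minimal sketch using only the Python standard library (the sample markup below is invented for illustration, not an actual Alma Letter structure):

```python
import xml.etree.ElementTree as ET

def is_well_formed(xml_text: str) -> bool:
    """Return True if xml_text parses as well-formed XML."""
    try:
        ET.fromstring(xml_text)
        return True
    except ET.ParseError:
        return False

# Hypothetical letter fragment edited in VS Code before pasting back into Alma.
edited = "<letter><greeting>Dear patron,</greeting></letter>"
broken = "<letter><greeting>Dear patron,</letter>"  # missing </greeting>

print(is_well_formed(edited))  # True
print(is_well_formed(broken))  # False
```

Running this on the edited file before pasting it back catches mismatched tags that Alma's editor may report less helpfully.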

1:30-2:15pm Concurrent Sessions 3 - Let Data Guide Us into the Next Discovery Experience

Presenters: Can Li, Heather L. Cribbs, Christian Ward, Gabriel Gardner (CSU)

Description:

Our presentation explores a collaborative effort among CSU campuses, led by members of the Data Issues Task Force, to optimize Primo VE search configurations with the goal of enhancing search results and user satisfaction across the system. Each campus employs unique settings for search and relevancy ranking, resulting in varied user experiences. Through a comprehensive analysis of these configurations using a shared spreadsheet, we identified patterns and inconsistencies that impact search outcomes. Our aim is to determine the optimal configurations and establish baseline analytics, ensuring that end users can efficiently complete their top tasks, especially with the new discovery UI set to launch in 2025.
A key component of our analysis is the use of the Primo Search API, which enabled us to automate the collection and comparison of data on search queries, results, and relevancy rankings across campuses. The API provided valuable insights into common search challenges and effective configurations, supporting our data-driven approach to optimizing search functionality.
As we refine these settings in preparation for the upcoming Next Discovery Experience, we invite further collaboration from other CSU campuses. By broadening our efforts under the guidance of the Data Issues Task Force, we can develop a unified approach that ensures consistent, high-quality user experiences across the CSU system. Topic: Discovery
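The kind of automated cross-campus comparison described above can be sketched with the Primo Search API: build a request URL per campus for a benchmark query, then compare hit counts and top-ranked titles. The gateway host, view ID, tab, scope, and the mocked response below are assumptions for illustration; treat the response field paths as approximations of the documented shape, and a real run would fetch each URL over HTTPS.

```python
from urllib.parse import urlencode

# Hypothetical regional gateway; substitute your institution's values.
BASE = "https://api-na.hosted.exlibrisgroup.com/primo/v1/search"

def build_search_url(query: str, vid: str, tab: str, scope: str, api_key: str) -> str:
    """Construct a Primo Search API request URL for one benchmark query."""
    params = {
        "vid": vid,
        "tab": tab,
        "scope": scope,
        "q": f"any,contains,{query}",  # field,operator,value triple
        "apikey": api_key,
    }
    return f"{BASE}?{urlencode(params)}"

def summarize(response: dict) -> tuple[int, list[str]]:
    """Pull the hit count and ranked titles out of a search response."""
    total = response["info"]["total"]
    titles = [d["pnx"]["display"]["title"][0] for d in response["docs"]]
    return total, titles

url = build_search_url("climate change", "01EXAMPLE:VIEW", "Everything", "MyInst_and_CI", "MY_KEY")

# Trimmed mock response for demonstration (invented records).
mock = {"info": {"total": 2}, "docs": [
    {"pnx": {"display": {"title": ["Climate Change and Society"]}}},
    {"pnx": {"display": {"title": ["Global Warming Primer"]}}},
]}
total, titles = summarize(mock)
print(total, titles[0])  # 2 Climate Change and Society
```

Repeating `build_search_url` per campus `vid` and diffing the `summarize` output is one way to surface the configuration-driven ranking differences the task force measured.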

2:30-3:15pm Concurrent Sessions 4 - What are we doing about Gov Docs?

Facilitator: Jill Strykowski (San Jose State)

Description:

With the recent announcement that both MARCIVE and CRDP are dissolving next year, a lot of us are looking for alternative solutions. Maintaining e-resource records for U.S. Federal Government Documents has never been easy, and maintaining e-resource records for California Government Documents has proven impossible. In this roundtable, I’m looking to brainstorm ways we can use the Ex Libris community and network zone management capabilities, together with OCLC metadata management tools, to help each other provide reliable discovery of Federal and State electronic Government Documents. Topic: Government documents / ERM / Resource management

MARCIVE is going out of business, all operations will officially cease by the end of December 2024. We need to test loads from OCLC using WMS Record Manager.
OCLC is providing webinars and forums on this topic.
Example from a library: USC has a work in progress; it took a while to get everything set up with OCLC. You need to send your GPO code to OCLC, and they then seem to set things up with GPO.
Institutions with special arrangements, such as NACO authorization or CONSER, can update the PURLs.
The UC Shared Cataloging Project is actively cataloging California gov docs.

Q. Can Ex Libris help with managing gov docs in the Community Zone?

A. They tried this but found that the collection was not up to their quality control standards. Too many problems were encountered.

3:30-4:15pm Concurrent Sessions 5 - Lightning Talks

Cut out the Copy/Paste: Introducing the Excel Alma Lookup Tool

Presenter: Evelyn Goessling

Good for:

  • Preliminary title overlap analysis
  • Identify MARC fields for cleanup

This may be helpful.

Excel Tips for Overlap Analyses

Presenter: Ellen Augustiniak

Keep your data tidy: do not leave any cells empty, and put only one variable/datum in a cell (avoid delimiters when possible).
Learn to use and love XLOOKUP function.
Pivot Tables are incredibly useful.
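The XLOOKUP pattern (find a key in one column, return the value from another) maps directly onto a dictionary lookup, which is handy when an overlap analysis outgrows Excel. A minimal sketch with invented sample data:

```python
# Invented overlap-analysis data: ISBN -> title from two candidate lists.
list_a = {"9780131103627": "The C Programming Language",
          "9780262033848": "Introduction to Algorithms"}
list_b = {"9780262033848": "Introduction to Algorithms",
          "9781491946008": "Fluent Python"}

def xlookup(key, table, if_not_found=None):
    """Like Excel's XLOOKUP: return the matched value, or a default when absent."""
    return table.get(key, if_not_found)

# ISBNs held in both lists (the overlap):
overlap = sorted(isbn for isbn in list_a if xlookup(isbn, list_b) is not None)
print(overlap)  # ['9780262033848']
```

The `if_not_found` default plays the same role as XLOOKUP's fourth argument, avoiding the #N/A errors that break downstream formulas.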

Going all in? Testing the Activation of Open Access Collections in the Community Zone

Presenters: Kevin Balster, Erica Zhang

There are still many problems with open access linking. If you just turn all the collections on, be prepared for a significant investment of time responding to minuscule records showing in Primo and broken links not maintained by Ex Libris.

Our Workday Finance ‘integration’ - how we intermediate our payments process in order to pay our invoices and update Alma

Presenter: Joshua Hutchinson

NOTE: We need a ‘no invoice, received item’ Alma analytics report. I think we are relatively automated but we might be able to streamline trading of invoices by using OneDrive rather than email.

FRIDAY, OCTOBER 25 All kinds of intelligence welcome (A.I. Day)

9:15-9:45am eCaug business

Presenters: eCaug Steering Committee

9:45-10:30am Let’s talk about AI

Presenters: Katy Aronoff & Tamar Ganor (Ex Libris)

Clarivate is trying to use AI thoughtfully and based on reliable data sources.
Their strategy:

  1. Embed AI into current solutions, Web of Science, ProQuest platform, Ex Libris Discovery
  2. Innovate with new solutions, academic coaching, research integrity bots
  3. Improve efficiency for customers

NOTE: Three Cal State campuses are piloting Alethea academic coach.

They contrast their AI work with general-purpose large language models. Clarivate is doing “academic AI,” which grounds all output in specific trusted content sources.
Clarivate is not doing any ‘training’ of models; they are taking pre-trained LLMs, currently GPT-4, but forcing them to constrict their responses to specific data sources. RAG, retrieval-augmented generation, is the term of art.
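The RAG pattern they describe can be sketched in a few lines: retrieve passages from a trusted corpus, then constrain the model's prompt to those passages. This is a toy illustration only; the corpus is invented, the word-overlap retriever stands in for a real vector index, and the final prompt would go to an actual LLM rather than being printed.

```python
# Toy "trusted" corpus (invented example passages).
corpus = [
    "Alma is a cloud-based library services platform from Ex Libris.",
    "Primo is a discovery interface that searches the Central Discovery Index.",
    "RAG grounds a language model's answers in retrieved source passages.",
]

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Rank passages by word overlap with the query (stand-in for vector search)."""
    q = set(query.lower().split())
    scored = sorted(docs, key=lambda d: len(q & set(d.lower().split())), reverse=True)
    return scored[:k]

def build_prompt(query: str, passages: list[str]) -> str:
    """Constrain the (hypothetical) LLM to answer only from retrieved passages."""
    context = "\n".join(f"- {p}" for p in passages)
    return f"Answer ONLY from these sources:\n{context}\n\nQuestion: {query}"

passages = retrieve("What is Primo", corpus)
print(build_prompt("What is Primo", passages))
```

The grounding step is the whole point: the model never sees (or answers from) anything outside the retrieved passages, which is how hallucinations are constrained.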

Clarivate is committed to “the human in the loop” approach. They are mindful that the technology is ahead of the regulatory environment. However, because Clarivate is a global company, they are bound by the strictest possible regulatory overlay; they do not geofence particular features in their products or develop for different localities.

Focus areas: research assistants, analytics assistants, metadata assistance/enrichment, academic coaching.

Q. Does Primo Research Assistant query local records?

A. No; it is currently based on CDI only. They are considering local collections possibilities but no timeline.

Q. Given the different Research Assistants on different platforms, can we know what overlap is in the underlying data sources?

A. The RAs have slightly different use cases. So they are programed to respond slightly differently as well as being constricted to different data sources. They do NOT plan, currently, to have a ‘general’ RA that can be pointed to a data source of user choice.

Q. Why do they require users to sign in and ID themselves? Many librarians have privacy concerns.

A. The sign in is because the RA works with the full-text of copyrighted materials. The IDs of the users submitting the queries to the RA are not transmitted to 3rd parties. User IDs kept inside the RA just for the purpose of conversation continuity. They also want to remind USA users that all their products abide by the strictest EU AI and privacy regulations. Follow up on this later: RAs will not be available via IP-based authentication.
Clarivate developers are analyzing all the query logs for trends and trying to understand how people use the product.

Q. Does Clarivate plan to train their own LLMs?

A. No. They want to use the most well-trained LLMs available and artificially control their behavior to ensure quality control and a lack of “hallucinations”, the aforementioned RAG approach.

Q. Costs and pass-through of costs. We know that every query to these LLMs costs money. How are they handling that as a business?

A. Some products will be included in base subscriptions, others will be available as add-on costs. They are also debating internally about the pros/cons of having tiered models and how that will affect relationships with customers. Follow up later: what about the environmental impact of AI? Clarivate has targets for carbon reduction and is adopting various ESG practices.

Q. What is Clarivate thinking about academic integrity, and how will their RA products affect student work outputs? What about opting out?

A. On the opt-out question, they want to leave it up to each person every time, if they don’t want to use it, then don’t use it. They aren’t going to force people to interact with the RA.
People should not be concerned about any Clarivate RAs being used to generate student product whole-cloth. They aren’t designed for that, and there are better products available for students who want to cheat.

10:45-11:30am Concurrent Sessions 6 - Don’t Panic! Document it! Tips for Documentation and Succession Planning

Presenter: Evelyn Goessling (UC Irvine)

Description:

If you won the lottery and moved to Fiji tomorrow, how long would it take for panic to set in at the office? If you and your colleagues document workflows, decisions, and discussions, there should be no reason to panic. A healthy documentation habit encourages communication and continuity as staff and service needs change. In this presentation we will discuss our team’s approach to documentation and succession planning. We will talk about best practices, challenges, successes, and problems not yet solved. We will address how documentation fits into succession planning and reflect on how it has worked—and not worked—in our library. With creative thinking and open-mindedness, you, too, can set your team up for success when you hit the jackpot. Topic: Project management

  • Background and context
    1. Documentation is important! Creating a record of what people do is helpful for themselves and others. Through documentation, you establish and build institutional memory. There is a lot of literature about succession planning; they have no new innovations, just things that have worked for them. Explicit technical knowledge needs to be documented, but we also need to document the tacit knowledge that explains the ‘why’ behind the technical steps.
  • How they do it and best practices
    1. They are a Microsoft shop. Use: Teams, SharePoint, OneDrive, and OneNote. The documentation lives in OneNote. They make writing documentation everyone’s responsibility.
    2. Know what you’ll document, know who will do the documentation, think about what knowledge is needed to understand the documentation i.e. is it for student workers?
    3. Build documentation time into work, schedule it.
    4. Formatting of documentation should be relatively simple and standardized. Keep the fact that it needs to be updated in mind. Do you really want to re-do a bunch of screenshots or can something be described in text? Test the documentation: have someone with a similar knowledge base work through it and see if they can complete a workflow.
  • Reflections and takeaways
    1. Documentation is super helpful in Technical Services in particular, it helps with professional development. It is also useful for reporting purposes.
    2. Challenges: OneNote search is limited. OneNote document revision is not tracked very well compared to other products like Word or Google Docs.
    3. Tips: always put the documentation and files in a shared environment, not someone’s personal OneDrive/SharePoint. The perfect is the enemy of the good; just get it done.

Q. When a document is updated how do they communicate that out?

A. They use the chat feature in Teams.

11:45-12:30pm Concurrent Sessions 7 - MarcGenie to the Rescue: Automating Post-Migration Cleanup of Serial Holdings

Presenters: Minyoung Chung, Rika Rudra (USC)

Description:

The Technical Services department at the University of Southern California (USC) undertook post-migration cleanup of irregular serials holdings data, specifically focusing on MARC tags 853, 863, and 866. This presentation details the methodologies employed to enhance these records using Pymarc, regular expressions, and the Alma API within MarcEdit. In the initial phase, we utilized Pymarc and regular expressions to parse and identify patterns within the existing holdings data. The process generated new 853 fields for captions and patterns, along with multiple 863 fields for coded enumeration and chronology data, all derived from the data present in the 866 fields. By integrating the Alma API with MarcEdit, we efficiently restructured and updated approximately 5,000 serials holding records. However, we encountered a challenge: Alma did not automatically generate the 866 fields through any holding-related job unless the record was manually saved in the Metadata Editor. This issue necessitated a second phase of cleanup. The project also introduces MarcGenie, a Python-based tool that automates the creation of the crucial 866 fields, further streamlining the process and improving data quality. This tool significantly reduces manual labor, allowing librarians to focus on higher-level tasks and enhancing the overall discoverability of library resources. Topic: Resource management

This was quite impressive. Hopefully, we can use this software at CSULB to save a non-trivial amount of staff hours.
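The first phase of a project like this (parsing patterns out of free-text 866 holdings statements) can be illustrated with a small regex sketch. The holdings string and capture groups below are invented examples, not USC's actual rules, and a real implementation would go on to emit paired 853/863 fields via Pymarc and push the records through the Alma API.

```python
import re

# Invented 866 textual holdings pattern: "v.1(1990)-v.10(1999)"
HOLDING = re.compile(r"v\.(?P<vol>\d+)\((?P<year>\d{4})\)")

def parse_866(text: str) -> list[dict]:
    """Extract enumeration (volume) and chronology (year) pairs from an 866 string."""
    return [m.groupdict() for m in HOLDING.finditer(text)]

spans = parse_866("v.1(1990)-v.10(1999)")
print(spans)  # [{'vol': '1', 'year': '1990'}, {'vol': '10', 'year': '1999'}]
# From pairs like these, one would generate an 853 caption/pattern field and
# matching 863 enumeration/chronology fields, then update the holdings record.
```

Named capture groups keep the enumeration and chronology data separable, which is exactly what the coded 863 subfields need.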

1:45-2:30pm Concurrent Sessions 8 - Lightning Talks

Getting Started with Resource Recommender

Selena Chau, Jess Spencer Waggoner

Possibly helpful for other libraries but we are already using this feature at a sufficient level.

Crucial Conversations: Communication Strategies Employed during an Alma-KFS Payment Feed Project

Presenter: Ashley Newton

Crucial Conversations are ones where stakes are high, opinions vary, and emotions are strong. Lessons from the book of the same title that helped with setting up the Alma KFS feed:

  1. Lesson: Be a good listener. Paraphrase back what your interlocutor says to ensure mutual comprehension.
  2. Lesson: Create an environment where everyone is comfortable sharing. Get people involved to get them committed to the decision-making process and final decision.
  3. Lesson: Simplify your concern/question. Better to have multiple focused conversations than fewer rambling ones.
  4. Lesson: Turn conversation into action. Document action items and decisions, have good documentation in general.

Leganto implementation

Presenter: Chris Acosta

Not applicable for us due to our Springshare e-reserve integration in Canvas.

Access broker browser extensions: A comparative analysis?

Presenter: Win Shih

Browser extensions are one aspect of discovery experience and can be an important tool for end users. Some libraries lean into this approach to discovery, at CSULB we have never beaten this drum although perhaps we should.

Extensions examined:

  • EBSCOhost Passport
  • EndNote Click
  • GetFTR
  • Google Scholar Button
  • Lean Library
  • LibKey Nomad

Of these 6, Google Scholar Button performed the best in full text retrieval. It actually performed better than LibKey Nomad, however Google Scholar does not necessarily take users to the final publisher version of record, whereas LibKey Nomad will go to the version of record. An inherent problem in these products is the communication of a library’s holdings or subscriptions to the vendor or developer making the extension. GetFTR in theory holds the most promise because it is publisher driven and relies on publisher records, however it did not perform well compared to the SAAS products of Lean Library and LibKey Nomad.

2:45-3:15pm Closing - Call to action

Presenters: eCaug Steering Committee

This conference almost didn’t happen due to penny-pinchers at the CSU Chancellor’s Office being worried about the public perception of holding a conference, with its necessary expenditure of staff time and resources, during a period when all the Cal State campuses are experiencing budget difficulties. Fortunately, reason prevailed in the end. This conference, and other conferences like it which bring staff together to compare notes and methods, will ultimately pay dividends via local improvements in each campus library. Staff professional development has a positive return on investment.

Notes from Ex Libris Users of North America 2023

Los Angeles, CA

Narrative

From Tuesday through Friday I attended the 2023 meeting and conference of Ex Libris Users of North America (ELUNA), which was held at the Westin Bonaventure Hotel in Los Angeles. This happened to be a filming location for some movies that I like, such as This Is Spinal Tap and True Lies, so that was quite fun for me. Also, the building in the next block was used as the bank for the climax of the movie Heat, which is absolutely fantastic and worth everyone’s viewing time. (I would call it the best movie of 1995 except for the fact that 1995 saw the release of an incredible number of excellent movies, including 12 Monkeys, Braveheart, Apollo 13, Mallrats, and Mr. Holland’s Opus. Anyway, Heat is great.) Below are the detailed session descriptions with my session notes, which are of varying length depending upon how informative I found each session.

TUESDAY, MAY 9

https://mtgsked.com/p/35060/s

https://mtgsked.com/p/34300/s

No sessions this afternoon as part of the conference, just registration and then the opening reception. I caught up with some CSU colleagues and a friend from Fullerton College before the reception.

WEDNESDAY, MAY 10

2:30 - 3:15 - PRIMO WORKING GROUP UPDATE

https://mtgsked.com/p/35052/s

Abstract:

Learn what the ELUNA Primo Working Group has been doing over the past year, and share your feedback for future priorities for the group. We will also help community members learn how they get involved with the group as well as other community advocacy opportunities.

This was an overview of the duties and roles of the working group. One important thing they do is manage the PRIMO-L listserv; Ex Libris does not have anything to do with this. They also work with Ex Libris on the NERS enhancement system and process. Advocacy for user needs takes place in monthly meetings with Ex Libris.

Q&A

Q: What do we know about the ‘new Primo experience’ announced by ExL?

A: A representative from Ex Libris said they are only in very early stages, still doing the mockup process. What they are particularly focused on is linked data and leveraging AI to enhance the discovery process. This is at least a 3-year process that they are starting.

Q: Would people like to see more curation of the Idea Exchange? Many items with zero votes, also it is not clear from ExL why things on Idea Exchange get developed over others.

A: Discussion: people want to be able to export Idea Exchange data to mine it for NERS requests. Also, people are unhappy about how long it takes for Ex Libris to close Idea Exchange entries and return the votes back to the users. Some people cannot vote again because all their votes are tied up in open Idea Exchange entries for features that have already been developed.

3:45 - 4:30 - STRATEGIC VISION, PART 1: ALMA, RAPIDO & HIGHER EDUCATION PLATFORM

https://mtgsked.com/p/35373/s

Abstract:

Our long-term vision for Ex Libris products and services is greater connections for everyone involved. Greater connections across the Ex Libris and Clarivate ecosystem; greater connections across campus and institutions for shared services, shared collections, shared collection development, and greater opportunities for resource sharing. Bringing efficiencies and cost savings through technology and connections.

The general theme is increasing product integration across ExL and Clarivate products.
They’ve moved to a daily update schedule of over 10 million records.
Rialto and Leganto connections between products are being improved.
Rialto and Rapido are going to be connected to display Rapido request information in Rialto to aid purchasing decisions.
Now that ExL is a Clarivate product, they are integrating Web of Science data into Primo via linked data, even for customers who are not WoS subscribers.
They say they are committed to ‘supporting’ a transition to BIBFRAME, though they will continue to support MARC.
They want to pilot a trusted AI framework with AI21 Labs, a leading AI platform; the AI model is trained on scholarly resources.


Alma: they are working on workflow simplification and a consistent user experience across Alma modules.
Looking at creating efficiencies and reducing the number of clicks.
Analytics: they are working on OOTB dashboards, appropriately normalized, to allow collection-level benchmarking. These would be available at multiple levels (NZ, CZ, and across non-Alma libraries)
~1000 libraries are using Rapido or RapidILL and fulfilled over 2 million requests in 2022.
They are planning to bring Rosetta, the preservation software, onto the same cloud platform.

4:45 - 5:30 - THE GOOD, BAD AND UGLY OF BOOKING: HOW TO IMPROVE FUNCTIONALITY AND DISCOVERY

https://mtgsked.com/p/34737/s

Abstract:

Fondren Library at Rice University makes two types of resources available for advanced reservation: study rooms and equipment. We use two very different approaches for discovery and booking for each. For equipment, we enriched our catalog records to improve discovery so materials could be easily found through Primo VE. For our study rooms, we use an open-source application that integrates with Alma through extensive use of APIs and webhooks. Each of these approaches is specific to how the materials are discovered and used by patrons. We will share cataloging strategies and technological solutions.

Rice migrated to Alma in 2019.
Rice is an urban university open to the public. They offer study rooms. Rooms are locked and contain valuable technology. The keys are checked out.
Before the migration to Alma, they did booking by two systems in parallel, Evanced for study rooms and PHPScheduleIT (now called Booked) for equipment, then ‘check-in/out’ in their ILS.
Why not use Primo for rooms? The calendar is awful, too hard to determine availability.
Also, the reservations are passed via webhooks; for whatever reason, these are processed much more slowly than other API calls, which created “ghost” reservations.
Their open-source solution has been in production for 9 years, processed over 300,000 reservations, and it works well.
They have a ‘Digital Media Commons’ which offers free equipment checkout among other services.
Their old system used a web form to collect reservation data and then relied on manual student labor to enter the details in Alma. The new system (2021) is self-booking in Primo at the title level.
Why not use the Room Reservation system for booking? RR only supports copy-level booking but the equipment was mainly ‘multicopy’.
The current system has all DMC content cataloged in Alma with rich records at the title level. HTML tags are actually acceptable syntax in Alma MARC records! Who knew? They create MARC records for each equipment item and then enrich them with HTML, including images of the equipment and HTML lists of its features; when cataloged correctly, these render properly in the Primo description section of the bib record.
They had to create a local resource type and run normalization rules to get them to be indexed and displayed correctly.
So that people can see all the available equipment in one list (rather than having to search for it in normal Primo) they point users to the Browse search and thanks to a normalization rule all the records are returned via Browse.
The good of using Primo for booking like this: no coding!
The bad of using Primo for booking like this: the calendar. The calendar can’t display dates out to the future limit, and it does not generate a confirmation email. Reservations are cumbersome, requiring entry of day, hour, and minute for the form to be submitted.

Q&A

Q: How do people enter Primo and find these things?

A: Just searching for something like ‘laptop’ or ‘camera’ in Primo returns too many irrelevant results. So the Digital Media Center website points to a Browse search with pre-filters for only equipment applied.

5:45 - 6:30 - GOING DIGITAL WITHOUT GOING CRAZY! CDL & DIGITIZATION TIPS & TRICKS IN ALMA & LEGANTO

https://mtgsked.com/p/35386/s

Abstract:

Come and learn about tips, tricks, and workarounds our team will share with you to successfully handle digitizing the many faculty requests that arrive each semester; how to leverage the features in Alma and Leganto to assist with Controlled Digital Lending (CDL) work; and some processes the team adopted to handle various CDL issues including moving items in Alma to a secondary location while materials are being digitized outside of the library. We’ll share our best practices for distributing digitization work equitably, and what we’ve learned about storing files and digitizing large and fragile materials. Learn how to use Alma and Leganto to your advantage to transition materials from in-person physical only, to anywhere, anytime virtual.

COVID closure forced Rice to do rapid innovation/pivot to CDL for e-reserves. The workflow: each class or reading list is assigned to an individual staff member. They maintain a spreadsheet of which staffer is working on which class or reading list.
CDL is based on a 1-to-1 relationship of course.
As items are pulled from the stacks and scanned, they then are checked out and held behind in a staff area so that the digital copies can go out.
They use a Bookeye scanner.
Files are all held in a shared folder in the cloud with standardized file naming conventions (camel case, alphabetical). They only use 150 DPI for text to control file sizes. Photos at 200 DPI. The bookeye they have does not support 300 DPI. Partial scans (if someone can’t get a complete scan done during their shift) are held in subfolders of the parent CDL folder. Adobe Acrobat Pro is used to combine any partial scans.
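Their naming convention works because alphabetical order reproduces scan order, but only if part numbers are zero-padded. A small sketch (file names invented) showing why that matters when combining partial scans:

```python
# Invented partial-scan file names inside a CDL subfolder. Zero-padding keeps
# alphabetical order identical to scan order; "Part10" would otherwise sort
# before "Part2" and scramble the merge in Acrobat.
partials = ["bookTitlePart03.pdf", "bookTitlePart01.pdf", "bookTitlePart02.pdf"]

merge_order = sorted(partials)
print(merge_order)
# ['bookTitlePart01.pdf', 'bookTitlePart02.pdf', 'bookTitlePart03.pdf']
```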

THURSDAY, MAY 11

9:00 - 9:45 - SUMMER OF ’23: TIME TO CLEAN UP ALMA LETTERS!

https://mtgsked.com/p/34671/s

Abstract:

When summertime comes to North America you might be looking for projects for you and your library staff. How about giving your Alma letters a review? This presentation will cover why you should take on such a project, how it can improve communication with your patrons and the happiness of your staff, the structure of Alma letters, small and big changes you can make to letters, facilitating testing and review of letters by your staff, and keeping up with new letters and features as Ex Libris adds them in new releases. Letters have changed a lot since the last ELUNA in-person conference and likely since your institution migrated to Alma, so even seasoned letter users will find some tips here.

Why should we do a comprehensive review of all letters?

  • for some users, they are the only/primary form of communication from the library
  • review for any inaccurate pandemic change information
  • the default language is perceived as “too strong” anecdotally; softening up may improve user experience

Implementing Rapido greatly increases the number of letters to use and configure.

The User record's Attachments tab holds previous communications with users for varying retention periods (default: the last 365 days).

Only the ‘Letter Administrator’ and ‘General System Administrator’ Alma roles can configure letters. Permissions are all or none, no granularity of types of letters.

Letter editing is best done in a text editor that supports the .xsl and .xml file types, such as Notepad++ or Visual Studio Code.

Letters for features you don't use are on by default. This isn't harmful, but it can clutter your lists considerably.

Remember to hit 'Save' frequently, because navigating away without saving discards unsaved edits.

You can store historical examples of letters in Alma by downloading the XML of a sent letter from the Letter Preview, then uploading it back into Alma (the file needs to be renamed to note that it is historical).

Some labels to check:

  • department
  • sincerely
  • addressFrom
  • hello
  • header_line
  • due_back

There are many scenarios where it is a good idea to review these labels. The presenter suggests usability testing of the language with the general student population.
Can administer surveys related to recent borrowing (or ILL or whatever) by adding links to them in the letters. Great way to get targeted feedback.

Need to set a calendar reminder to review Alma letters after each new software release.

Q&A

Q: Is there a way to sort letters by when they were sent, or by type, in other words by frequency or total volume?

A: No. You can only see what has been sent in the last 10 days. Or what has been sent to a particular patron. If a patron does not have an email on file, no history of letters is generated.

Q-A: Comment: the commenter's institution has multiple libraries and multiple people editing letters. They coordinate this by keeping all the XSL and XML files in a GitHub repository as the master copy, which allows collaboration.

10:00 - 10:45 - HOW AI LINKING TECHNOLOGY WORKS TO MEET USER EXPECTATIONS

https://mtgsked.com/p/34743/s

Abstract:

Connecting users to full-text articles used to be a straightforward process of creating an OpenURL query that linked users from A&I resources to publisher sites. Today, it’s not so simple. Several factors now need to be considered in creating a full-text link and deciding which one is used, including which publisher and aggregator sources have the article, the formats available to the user (PDF or HTML), whether the article is Open Access, if the version of record and alternative versions are available, if an article has been retracted or carries an expression of concern, and if the article is from a predatory journal. This presentation describes how the LibKey linking technology uses article-level intelligence and AI-based source selection to navigate the new terrain of linking to provide the fastest, most reliable, and most informed link to full text, every time, wherever the user starts their research journey.

Third Iron has a product, LibKey, focused on linking that blends in Open Access sources to improve user experience. Unpaywall and other OA aggregators contain inaccuracies. Hybrid OA is an absolute mess because link resolvers don't differentiate OA from paywalled content at the article level; they work at the journal level. So you can get false positives and false negatives for individual-article access in hybrid journals.

LibKey solves the hybrid-journal problem by starting with a database of curated OA articles: requests are routed to a good OA version if one is available, and only passed on to the link resolver if no workable OA version exists.
LibKey's curation prioritizes the OA version of record, then postprints/accepted versions. It will not pass users on to preprints/submitted versions.
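That routing order amounts to a small priority cascade; a sketch of the idea only (the version labels and function are illustrative, not Third Iron's actual code):

```python
def pick_fulltext_link(oa_versions: dict, link_resolver_url: str) -> str:
    """Route a request to the best full-text link.

    Prefer the OA version of record, then an accepted manuscript
    (postprint); never route to a preprint.  If no acceptable OA
    version exists, fall back to the library's link resolver.
    """
    for version in ("version_of_record", "accepted_manuscript"):
        if version in oa_versions:
            return oa_versions[version]
    return link_resolver_url  # preprints/submitted versions are skipped
```

For example, if only a submitted version exists, the function returns the link resolver URL rather than the preprint.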

LibKey partners with RetractionWatch to flag retracted items (as well as ‘expressions of concern’) on the curated list. They also push this out into Primo which will display retraction information in the results list.
LibKey also has Cabells integration for Cabells subscribers. This will provide links to Cabells information on that platform.

LibKey integrates with PQ and EBSCO databases as well as Primo, providing a very consistent experience across all platforms.

There is a coming ILLiad-LibKey integration.

The Nomad product, a browser extension, has a LibChat integration. If you get all your users to install it, it basically provides Library chat services on any web page.

Q&A

Q: Is there a good Rapido integration?

A: ILLiad is the only ILL provider they have a software integration with.

11:15 - 12:00 - PUBLISHING WITHOUT PERISHING

https://mtgsked.com/p/34962/s

Abstract:

Alma publishing profiles are a powerful and flexible way to export metadata from Alma. In this presentation, we will demonstrate how publishing profiles play a key role in customizing and normalizing data to meet project needs, and discuss when they are a better choice than Alma Analytics or the Export Bibliographic Records job. Publishing profiles can easily incorporate physical or electronic inventory information into MARC bibliographic records for further analysis and transformation through various MARC editing and processing tools. Publishing also allows powerful filtering rules and normalization routines to customize data outputs and can work incrementally via OAI-PMH for data harvest and processing by external programs. Applications covered in the presentation include: using published data for analysis to support enrichment and remediation projects, exporting transformed metadata for ingest into a local digital repository, incremental OAI publishing for external partners, and using general publishing for OCLC Datasync.

This was a very thorough review of multiple use cases for Alma Publishing. A lot of this requires using other applications to transform the published data for optimal ingestion into the endpoint system.
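One piece mentioned above, incremental OAI-PMH harvesting, works by adding a `from` timestamp to a ListRecords request so only records changed since the last harvest are returned; a sketch of building such a request (the base URL and set name in the usage example are placeholders):

```python
from urllib.parse import urlencode

def oai_list_records_url(base_url: str, set_spec: str, since: str,
                         metadata_prefix: str = "marc21") -> str:
    """Build an OAI-PMH ListRecords URL that harvests only records
    changed since `since` (YYYY-MM-DD), i.e. an incremental harvest."""
    params = {"verb": "ListRecords",
              "metadataPrefix": metadata_prefix,
              "set": set_spec,
              "from": since}
    return f"{base_url}?{urlencode(params)}"
```

An external harvester would call this on a schedule, advancing `since` after each successful run.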

Q&A

Q: Have there been issues with the new Alma scheduling feature affecting jobs? This questioner from IL has seen inconsistent results between older publishing profiles and new ones remade with the same set list queries.

A: Did you try ‘rebuild entire index’? That option might help.

Comments: make sure your normalization rules have names clearly different from your publishing rules. If you run the norm rules as 'jobs' on your entire dataset, it will destroy important information that Ex Libris will need to recover for you. Be careful.

12:15 - 1:00 - IMPROVING DISCOVERY WITH MORE INCLUSIVE (AND IMMEDIATE) SEARCH RESULTS

https://mtgsked.com/p/34950/s

Abstract:

For years, the University of Tennessee-Knoxville only displayed results we owned and directed users to “Expand My Results” if they wanted to see other possibilities. However, this year we made a significant philosophical change by disabling “Expand My Results.” Now, everything displays up front in search results, regardless of whether we own it or not. We made this decision in an effort to remove barriers that prevented inclusive discovery. To accomplish this work we needed buy in from several departments, changes to the GES links for resource sharing, and updates to certain labels. As a result, we believe we now have a more inclusive and improved discovery tool for students, faculty, and researchers. What do users think of the change? Were there any unexpected surprises uncovered as a result of this change? Come join me as I discuss this topic.

How is discovery changing? There is a push to make search results more inclusive and/or non-biased while simultaneously expanding the number of sources we make available.

In 2022 UTK completed a MISO (Measuring Information Service Outcomes) survey and some of the feedback they got back was about Primo ‘unjustly privileging non-diverse sources’. There’s quite a bit to unpack there and UTK was not sure how to answer it well because the Ex Libris anti-bias group methodology is not clear and transparent.

UTK staff met with Judith Fraenkel, Ex Libris Director of Product Management, who strongly suggested disabling 'expand my results', which queries CDI for items not clearly in holdings. Disabling it was not a complete answer to the bias complaint, but their data showed that less than 1% of searches even used 'expand my results' in the first place, so they decided to just turn it off. This required a bit of label rewording. It also required them to rewrite many GES links.

In Primo VE configuration, the 'filter by availability' checkbox on a search profile is the on/off toggle for 'expand my results'.

Why does Calculated Availability not always get the correct answer? Because there are actually two indexing processes at work: (1) CDI for brief results, and (2) Alma for full results.
Alma e-holdings are published to CDI. Collections not yet published (via the publishing job) will not show as full-text available, nor will records with incorrect metadata in Alma (incorrect as in not matching what is on file with CDI).
For troubleshooting, use the CDI Activation Analysis tool; the URL is available from Ex Libris support.

2:00 - 3:30 - STRATEGIC VISION PART 2: THE NEXT DISCOVERY EXPERIENCE, LEGANTO & GETTING IT ALL TOGETHER

https://mtgsked.com/p/34822/s

Abstract:

Create the proactive library that engages the entire community and doesn’t just react to the users that walk in the door. From discovery to action, anticipate users’ needs and respond when needed, not when asked, with personalized content and information. In this session, we’ll share our vision for how we think a proactive and personalized experience for all users from students to library staff will make a difference for the future of libraries.

https://mtgsked.com/p/34677/s

Abstract:

Automated storage retrieval systems (ASRS) don’t always run as we would like. They often break down, require maintenance, or generally stop working as expected which render stored items inaccessible, causing confusion and frustration for library staff and users. At San Francisco State University, we’ve developed creative workflows using Alma to reduce the amount of time staff devote to canceling requests for inaccessible materials, and communicate with our users the availability of those materials. This presentation will detail the steps we took to develop and apply our solutions using Alma configurations and workflows, and will give attendees the tools to look creatively at solving similar problems within their library.

Access Services and Circulation Services supervisors from San Francisco State reviewed their setup and how to improve workflows.

Problems: individual bins are not available or entire aisles are not available. How to let users know?
Alma work orders: they created a new work order type and a new department just for ASRS items. Fulfillment unit rules need to be updated.
Barcodes then need to be gathered or exported from Dematic.
Bins are then marked as unavailable by running a bulk 'Change Physical Items' job. If only a few items need to be changed, this can be done by searching individual barcodes and editing the holdings.
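Gathering item records by barcode is also scriptable against the Alma REST API's item-by-barcode lookup; a sketch that only builds the request URL (the host region and API key are placeholders, and bulk status updates would still go through an Alma job):

```python
from urllib.parse import urlencode

ALMA_HOST = "https://api-na.hosted.exlibrisgroup.com"  # placeholder: pick your region

def item_lookup_url(barcode: str, api_key: str) -> str:
    """Build an Alma REST request that fetches an item record by its
    barcode, the lookup step before marking bin contents unavailable."""
    query = urlencode({"item_barcode": barcode, "apikey": api_key})
    return f"{ALMA_HOST}/almaws/v1/items?{query}"
```

A script could loop over barcodes exported from Dematic, GET each URL, and collect the item records for review.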

For when entire aisles are unavailable: use a temporary location. Make sure to set the new temporary location as a Remote Storage location.

Irene mentioned to me that we may have a problem with some data in Alma not matching what is on file in Dematic.

Q&A

Q: (Irene) Do the presenters have any advice for import/export errors?

A: SFSU has some open support tickets about this.

Q: At UNLV they store special collections in their ASRS, any advice?

A: Keep them in restricted boxes; Dematic and Alma permissions need to be adjusted.

Q: how to handle special collections items that are uncatalogued?

A: Put items in barcoded bankers' boxes or preservation boxes. The Special Collections unit maintains finding aids showing what is in the boxes; the boxes are then cataloged individually for discoverability and requestability in Primo.

FRIDAY, MAY 12

9:55 - 10:40 - STAT CAT WHAT’S THAT? WORKING WITH ALMA USER STATISTICAL CATEGORIES

https://mtgsked.com/p/35147/s

Abstract:

Libraries are trying to determine new methods of assessment as we return to in-person service post-pandemic. You may already have user statistical categories (“statcats”) in your Alma user records and not even realize it! This presentation will cover how to see if you already have some statcats in your system, how to configure, add and improve them, how to use statcats in Analytics, statcat ideas you might want to try, and what statcats can tell you about your library users to help with desk staffing and outreach plans.

A newish feature expansion is in statistical user categories. This presentation reviewed how and why these should be used.
Data imported from the university can be mapped to statistical categories.
Only 10 categories can be tracked and reported out through Analytics.

Common errors - the ‘add row’ area will let you add things twice.
Alma sets and jobs can be used to adjust and assign statcats in bulk.

Q&A

Q: Do you anonymize user transactions in Alma?

A: No. When the anonymization is enabled it changes how and when the statcats are recorded.
Discussion ensued about anonymization and privacy requirements.

11:00 - 11:45 - STRATEGIC VISION PART 3

https://mtgsked.com/p/35302/s

Abstract:

In a data-driven world, our goal is to help libraries have the data they need to make smart choices and reshape/automate workflows. In this session, we’ll talk about our work to create tools that measure library success across platforms, initiatives, and goals, instead of simply counting staff activity. These tools can help predict and plan the future, ensuring efficient use of limited resources and mission success for the library.

11:55 - 12:40 - REDIRECT SEARCHES TO RESOURCE SHARING REQUESTS AND BEYOND

https://mtgsked.com/p/34726/s

Abstract:

Due to a mold infestation, more than half of the physical items (more than 300,000+) were inaccessible at SUNY New Paltz library. Working with the SUNY Libraries Support team, we can fill those physical material requests via our Alma Network Zone and Resource Sharing within the SUNY Library Consortium network and other participating libraries. In this presentation, we will show the step-by-step instructions on the configuration changes and setup in Alma and Primo VE, with Analytics reports. The presenters will also cover interlibrary loans and resource sharing via Alma with other libraries outside of the SUNY consortium network, expanding Alma resource sharing beyond New York state to other state-wide consortia and institution partners; The presentation will also mention the ebook ILL pilot.

At SUNY New Paltz they had a mold outbreak, but only certain areas of the stacks were affected. How to address this from a discovery and resource-sharing perspective? They used Alma sets. "Missing" is an existing status, so they made a new temporary location, "Still Missing Not Found". They used the 'Change Physical Item Information' job to update the moldy items' status.

SUNY has a small centrally managed staff who support their Network Zone, they have 60 Institution Zones.
Because there is so much sharing between SUNY libraries, they could not just pull themselves out of the rota, because that would not allow their users to place requests.

1:30 - 2:15 - RELATIONSHIPS MATTER: CREATING CONNECTIONS BETWEEN ACQUISITIONS AND CATALOGING

https://mtgsked.com/p/35294/s

Abstract:

Cataloging and Acquisition departments use the same library system but often interact with it in very different ways, from how they search to what information they focus on, often leading to silos of processes and knowledge. In next-generation systems like Alma, the different types of electronic resource records rely on an interconnected system that requires more communication and collaboration between departments. For instance, if an ebook collection is purchased as a single collection, but cataloged separately as individual titles, the mismatch between the acquisitions records and the bibliographic records will impact the ability to collect data from Alma in any useful way. This presentation will demonstrate how Woodruff Library deals with these issues, by creating a governance structure that relies on the underlying record relationships to create a cohesive process for dealing with both the acquisitions and cataloging of electronic resources.

Detailed overview of Emory libraries setups.

Alma is designed for multiple connections between records but from the search interfaces the connections are not always obvious or even visible. You need to look at the data from multiple searches and perspectives to see how they are connected.
The PO line is the only record type that can be tied to any other record category (MARC, invoice, vendor, fund, license, inventory).

All the record relationships need to be correct so that staff can do their jobs efficiently. This also has the potential to improve the end-user experience by clearing up holdings access issues.
Good communication between cataloging and acquisitions is necessary.
Getting the records linked correctly also improves the reporting capabilities in Alma Analytics. Recall that the Subject Areas in Alma Analytics do not combine well; exporting Analytics reports and importing the conflicting subject areas into the Visualization feature does work. By looking for missing data in Visualization, you can see missing connections between records.
Not documenting connections and workflows correctly makes the work done invisible.

Detailed examples of their workflow were given. They maintain a DDA-EBA ‘Technical Considerations’ shared spreadsheet.
Clear workflow documentation - that is available across departments - solves problems and reduces siloing.

Q&A

Q: How did staff react when told that their workflows needed to be adjusted?

A: It was quite political because the people involved had different supervisors, but it was resolved eventually.

2:30 - 3:30 - CLOSING SESSION

https://mtgsked.com/p/35292/s

Abstract:

Please join us for the ELUNA 2023 Annual Meeting Closing Session! Get exciting updates on: ELUNA 2023 Wrap-Up, ELUNA Strategic Goals and Activities, ELUNA Financial and Membership Update, ELUNA Education Update.

Marshall Breeding gave the closing talk. Poor Marshall wrote his entire librarytechnology.org website in Perl and maintains the whole operation himself. He also produces the annual Library Perceptions and Library Systems reports.

What was ELUNA about this year, according to Marshall? Community, AI, linked data. Networking is very important.

Allen Jones noted that trust is crucial for AI adoption, and that libraries as customers need transparency and explainability in any AI products before they should be willing to buy them.
Note that CDL is NOT dead, despite the Hachette v. Internet Archive case. There are still ways to do this legally: for materials with permission, and for materials with no ebook market. Interlibrary lending is the most risky; we need experimentation at the local library level.

Very interesting details about voting patterns in the NERS process were covered. ELUNA will be reviewing voting procedures.

ELUNA finances are looking solid for the next year. Organizational and conference expenditures are separated, with the intention of having each aspect of the work be self-sustaining, i.e. conference fees are set to cover all costs.

Next ELUNA [2024] will be in Minneapolis at the downtown Hilton: May 13-14 Developers Day, May 14-17 Annual Meeting.

Notes from Computers in Libraries 2022

Online

Narrative

From Tuesday through Thursday I attended Computers in Libraries 2022, virtually. This conference allowed me to present on co-authored research about how Primo has been used in the CSU Libraries. We had previously presented our findings internally to the ULMS Discovery Functional Committee where it was well-received and we used feedback from that to improve our presentation for the national stage. Since almost everything in libraries now has something to do with ‘computers’ the content at CIL 2022 was varied and I sampled from the various content tracks that were organized based on professional interests and job responsibilities. Below are first the session descriptions and then my session notes, which are of varying length and intelligibility depending upon how informative I found each session.

Tuesday March 29th

Keynote: Libraries, Climate & the Crowd: A New Concept of Digitality

Michael Peter Edson, Digital Strategist, Writer, Independent Consultant

How should your library respond to the climate emergency and natural disasters? Where does librarianship end and citizenship begin? And what digital tools and mindsets can libraries use to help create a society that is joyous, sustainable, and just? In this provocative and inspiring keynote, digital pioneer and former Smithsonian tech leader Edson argues that the library sector is operating with an outdated concept of digitality that is unable to answer today’s most important questions about technology, society, and change. An updated concept of what “digital” means in the 2020s — new tools, new skills, and a new understanding of the digital public sphere — can provide a new direction for digital librarianship and unlock new capabilities within the sector and in the communities we serve. Be challenged by this popular speaker with new ideas to be even more innovative, creative, and community focused in your organization and environment!

Our understanding of digitality is outdated. This is a wild time to be alive, and libraries and cultural institutions have a responsibility to the public. Digital infrastructure requires a significant expenditure of resources and affects the climate.

'The big fricking wall', a concept from NAME: on one side is where we are, where we plant gardens and make it nice; on the other side is where we need to be, solving large societal problems.

Technological foundations allow for further acceleration of technological development. We need to think about climate change. Winning the battle against climate change slowly is functionally the same as losing.

If you/we are keeping our powder dry for the fight against climate change and political catastrophe, then you are missing the opportunity. The time is now.

An EU initiative is promising but Edson does not feel it leverages digitality sufficiently. People often find themselves on the comfortable side of the ‘big fricking wall’. How can cultural institutions get over or under the wall? At a workshop of cultural/library workers they discussed digitality and he realized that many people were focused on the dark side of the technology. We are missing an opportunity to move on climate using digital technology.

The aspects of digitality that could work to fight against climate change are areas that are not our strengths in the cultural institutions: working at speed, conservative-sustaining mindset, lack of history in ‘selling’ ideas to ‘customers’, and lack of clarity about the threat of climate change.

What happened after Trump pulled out of the Paris Climate Accords? Edson went around to cultural institutions' websites afterward and noticed that no one was highlighting it. These institutions are government funded, but we have a lot of leeway to speak. Workers in these places spoke out on social media and made statements, but the institutional voices were absent.

The Pew Research Center's reports on the internet and how connectivity is changing the world are a great resource. They show both the light and dark sides of connectivity. On the positive, empowering side, a consistent aspect of digitality is that people report it to be important in finding information that is useful for their lives.

  1. digital culture is inseparable from ‘normal’ civic life
  2. the gains of digitality are still there; people are just focusing on the dark side now
  3. basic Web 2.0 elements still hold true and can be applied to cultural institutions
  4. most of digital’s force lies in the open areas and aspects of the web “the dark matter”, not on our protected cultural walled gardens
  5. the ‘big fricking wall’ is made of: fear. We have hugged the idea of openness and crowd scaling on the web so tightly that it can’t move freely.
  6. think big, start small, move fast - these remain an excellent recipe and we must use it to generate awareness around climate change

Lessons Learned From the Brands You Love

Alyx Park & Meg Slingerland

Why is your library’s website brand so important? Your website lets the world know what to expect of your library’s services and programs. It helps communicate everything your library has to offer and emphasizes that your library is much more than a place patrons use to borrow collections. We have moved into a new era of marketing—one where the creation of value through content-driven experiences is the focus. Today, content is king. This session takes a deep dive into corporate brands which are changing the game and explores how you can apply the lessons learned to your library’s marketing strategy. Learn how to be agile enough to quickly adapt to the ever-changing web trends and take the best practices of the web from other industries and apply them to your library!

Central Rappahannock Regional Library serves several communities, including Fredericksburg, VA. The website staff are part of the larger PR/marketing division in the library.

2022 Marketing Trends source: Adobe 2022 Marketing Trends

  • Pandemic prompted companies to focus on customer experience, need to think about customer holistically
  • Customers expect a personalized/customizable experience
  • “integrated marketing platforms” have reduced total cost of ownership by 30% in numerous case studies
  • Marketing and IT must work together to achieve overall goals of parent organization

Fixer Upper brand
Lessons:
content is king
find your style, refine it
identify a niche, make customers feel at home with your business
rethinking “place” as a character
highlighting the community resonates with the community
marketing is about more than buying things

CRRL does 'Guest Picks', where community members share their favorite books. They pick community leaders and families that use the library frequently. Guests suggest 5 to 10 items held in the catalog, and their picks are put up on the website along with a brief biography. The feature is invitation-only.

Whole Foods brand
Lessons:
design with local flair
location-based marketing goes beyond differentiation (Whole Foods uses geofenced ads when people are grocery shopping at their competitors)
focus on self-service
strong location-based tools help on multiple levels

CRRL has 10 branches (and a makerspace location), and the website associates authenticated users with their selected branch. Every branch has its own location webpage with detailed information about the amenities offered there, and users can browse new arrivals at that particular branch.

Disneyland brand
Lessons:
use landmarks to keep moving and direct traffic
Color creates a worthwhile customer journey

  • use color to evoke emotions
  • use color to create depth
  • use color to distract (go-away green and blending blue)

CRRL has 3 distinct colors in their branding. On long pages they use all 3 to differentiate important areas of interest. Gradients are used within the palette to make certain areas pop out.

Q&A:
Q. What CMS do they use?
A. WordPress

Keeping Search Configuration Options Open

w/ Heather L. Cribbs

Modern discovery layers allow for a wide variety of design configurations. One essential consideration when creating a search environment is the complexity of search options available to the end user. This multi-campus study took a big data approach and examined 4 years of data collected from the California State University (CSU) libraries and compared user search query behavior across all libraries. There is a lot of research that suggests reducing cognitive overload results in a better user experience. Understanding how users search is key to designing a library which is more accessible to all. When configuring your library catalog, certain questions must be answered, such as whether to cater to advanced librarian query behavior or adopt a simpler, commercial-style approach. Designing for all users involves considering a diverse userbase with a wide range of abilities, keeping edge cases in mind, and building from proven web design principles. Get tips and learnings from this talk.

My co-presented session. 126 attendees.

Chat Box: Small Change, Big Difference

Colleen Quinn, John DeLooper, Michelle Ehrenpreis, Ryan Shepard

Our first presentation provides learnings from our speakers’ experience implementing a chatbot developed by Ivy.ai on its website. Discover what a chatbot is, how chatbots work, and how the library and IT department worked together to prepare for the bot’s implementation and hone the chatbot’s search algorithms. Learn how they leveraged the chatbot to improve its website for better user experience and better findability for all users, with a focus on improvements related to patrons asking questions via the chatbot. In addition, leave with some lessons learned from the implementation process and future plans for assessing the chatbot’s use and effectiveness so your implementation of a chatbot will also be successful. The second presentation features how a floating chat box was added to all the library pages. With a student body of approximately 90,000 global, asynchronous learners, librarians typically answer 12,000-15,000 questions per year via email, phone, zoom, and chat. Questions increased substantially within minutes of adding the box. The first week, the library saw a 25% increase in the number of questions. Along with the implementation of the floating chat box, other changes were made to the website, Hear how this strategy has changed library statistics and staffing models, get tips to help you navigate the waters of adding a floating chatbox, and understand how staffing and service changes can impact customer satisfaction.

University of Maryland Global Campus.
UMGC rolled out Springshare’s proactive chat, floating option. There were some web accessibility problems with the floating widget so they left static embedded widgets in many places.
They saw an 83% increase in chats comparing Fall 2020 to 2021. 70% of their chat traffic was coming from the proactive widget, despite presence of other widgets in different places. They had to double up their staffing in many shifts to accommodate the increase in traffic.
They went through many iterations of adjusting the timer: 25, 20, then 15 seconds. They now have a 60 second delay when the widget is placed in EBSCO databases.

Despite the double staffing for busy shifts, there still is a struggle to deal with the traffic. They have a SOS model where they ask others to login and help. They also recognize that there will be missed chats, they put these into the LibAnswers queue.

Canned messages help a great deal.

They noticed a big decline in LibAnswers FAQ “knowledge base” traffic after introduction of proactive chat. Chat is where people are going instead of using the FAQs.

Lehman College CUNY.
Chatbots are AI programs that use natural language to let users converse with the system for various purposes.
At Lehman they are using the Ivy Chatbot. Ivy runs off of web pages that are crawled and used to form a knowledge base for the chatbot. The AI forms 'intents' based on the crawled content.

Ivy has a human intervention feature, Inbox Zero. When it cannot answer a question it goes to a human. The librarian can intervene and the AI then learns the correct answer.

This has been a big success, seemingly. By reviewing interactions between users and Ivy, they realized aspects of their website needed to be redesigned in light of what the chatbot was sending people. This resulted in better UX for everyone.

Their license limits them to 5 pages that will be crawled by the AI to form answers. They have to be very judicious about what pages are pointed to and what content is there.
The IT department has handled most of the implementation, otherwise it would have been unmanageable.

Q&A
Q. Have they seen this impacting the reference transactions?
A. They make it clear in the initial interaction that it is a computer and not a librarian and then give a link to how to contact librarians. Have not seen a decline in reference transactions.

Q. How many chatbots did they look at before picking Ivy?
A. Zero, the campus IT picked it.

Search Innovation Sandbox

Greg Notess

Our popular web search expert shares the latest search innovations and speculates where search of the future is headed.

Reviewed various developments and changes in Google interface and behavior. Reviewed alternatives such as Neeva.

Alternatives to 3rd party cookies

  • FLEDGE
  • FLoC
  • Topics API to replace FLoC

Discussed various regulatory and intra-business developments such as Google payment to Apple to keep Google as default search on iOS.

Web3: unclear how this will impact the commercial search engine market. Many NFT search engines are being developed. Some firms are attempting decentralized knowledge graphs. Web3 might bring to fruition the promises of the Semantic Web.

Online Learning Modules & ACRL Framework

Anthony Paganelli

Paganelli discusses how online learning modules were created for university experience students to support their learning outcomes to provide quality library information instruction. The modules were designed to introduce library and information services to first-year students based on the Association of College & Research Libraries' (ACRL) information literacy framework. The five modules that students were able to complete online served two purposes. First, the modules were embedded in a course management system that was self-graded and included in the course's overall grade in accordance with the faculty's request. This allowed the faculty to provide students with library information without extra work or time. Secondly, students completed the modules prior to an in-person instruction workshop, which introduced the information and terminology regarding library services and resources. With prior knowledge of the content, students are familiar and have a stronger opportunity to retain the information. Paganelli demonstrates the importance the online library learning modules have had on reinforcing library and information literacy.

WKU has a 'university experience' course. It is not offered to all students, only to those in need based on ACT/SAT scores and Kentucky state standards for college readiness. The library has designed a module embedded in the LMS.
Modules in the LMS are replacing ‘library assignments’ at the university. Faculty bought into modules because they were self-graded.

They cover each of the frames in the ACRL Framework over the semester. Each subsection of the module, centered around a frame, has content for review, a discussion, and a quiz. Quizzes are limited to 2 attempts and students can use the higher score.

They did a case study where they then brought in students who took the modules for in-person library instruction. The students retook certain questions from the quizzes in person. The results were not promising: students uniformly scored below their averages from the modules. It is not clear why this was the case. The working hypothesis is that students gamed the quizzes online and took the higher score rather than actually retaining any of the information from the content portion of the modules.

They found getting data out of the LMS (Blackboard) to be a challenge.

In the future they will be using a pre-test knowledge check so that they can establish a baseline from which to measure the changes in knowledge.

Wednesday March 30th

Keynote: Community Internet Strategies & Partnerships for Better Digital Visibility

Nicol Turner Lee

More than one-half of the world’s 7.7 billion people still do not have access to the Internet, including millions of people in the United States, which has led the digital revolution. Most of these non-adopters—whether by choice or circumstance—are poor, less educated, people of color, older, or living in rural communities. As the digital revolution is quickly carving out this other America, it’s likely that these people on the margins of the information-based economy will fall deeper into abject poverty and social and physical isolation.

Based on fieldwork across the United States, Turner-Lee explores the consequences of digital exclusion through the real-life narratives of individuals, communities, and businesses that lack sufficient online access. The inability of these segments of society to exploit the opportunities provided by the Internet is rapidly creating a new type of underclass: the people on the wrong side of a digital divide. Turner-Lee offers fresh ideas for providing equitable access to existing and emerging technologies. Her ideas potentially can offset the unintended outcomes of increasing automation, the use of big data, and the burgeoning app economy. In the end, she makes the case that remedying digital disparities is in the best interest of U.S. competitiveness in the technology-driven world of today and tomorrow. Learn about real-life consequences of the digital divide, and what can be done to close it.

The COVID-19 pandemic laid the digital divide bare. It is absolutely undeniable now.

ALA reported that many public libraries were cutting staff during the pandemic but that was not an indication of less demand for library services but rather a mismatch between the workforce designed for physically open facilities and the surge in demand for digital access and content.

Homework Gap - mentioned, look that up.
https://www.nsba.org/Advocacy/Federal-Legislative-Priorities/Homework-Gap

At Brookings they are now thinking of libraries as part of infrastructure. Libraries lead the way in getting people online or with hotspots ahead of schools in many parts of the country. Libraries are part of the public square and government tools that keep society going.

Her book is Digitally Invisible: How the Internet is Creating the New Underclass (forthcoming 2022, Brookings Institution Press)

Broadband is just as much a part of essential infrastructure as rural telephone lines and rural electrification. Just as Roosevelt signed the Rural Electrification Act in 1936, the digital divide revealed by COVID-19 calls for an urban and rural broadband act of similar scope.

We, libraries, can’t close the divide if we also perpetuate it. We need more digital sharing of resources. We need better communication and instruction around misinformation.

  1. we must bring resources directly to patrons, no library can be offline
  2. we, as a country, must prioritize libraries as digital infrastructure
  3. we need more funding and cannot let the politicians off the hook
  4. we need to reimagine our individual roles in this ecosystem, the digital train left the station, librarians must adapt

Impact of Industry Consolidation

Marshall Breeding

The library technology industry has become highly consolidated via ongoing rounds of mergers and acquisitions. The last 2 years have seen some of the most aggressive changes. A smaller number of companies are now responsible for the strategic technology produced upon which libraries depend. Breeding discusses the impact that consolidation has had on the number and types of products available to libraries. He draws on data collected from a variety of sources to help answer these questions.

Quite a bit of consolidation in the LIS software and content industries recently.
Marshall Breeding's next Library Technology Industry Report will appear in May in American Libraries magazine.

Virtually all of the change in the business landscape during past 2 years has been consolidation. The one exception was OCLC selling QuestionPoint to Springshare.

Libraries hold on to software systems for decades, typically. Big ‘lock in’ effects and/or lack of staff flexibility (or will or funding) in libraries for quick software pivots.

Major 2021 events

  • FTC reviewed ProQuest’s acquisition of Innovative. (Approved)
  • FOLIO moved into production grade
  • Alma/Primo grew dramatically in the academic library market

Major 2022 events

  • Clarivate purchased ProQuest (incl. Innovative and Ex Libris)
    • Innovative continues to offer Vega for discovery
  • Axiell purchased Infor
  • Follett breakup under family ownership; it is selling off many assets, most to private equity firms.

There is vertical and horizontal consolidation; important to differentiate between the two. Despite this, the distinct products they offer seem to continue to live. A question: does consolidation prolong the life of struggling software platforms? They now have more resources with which to develop sometimes stagnant products.

Vertical consolidation raises many questions about content/software distinctions and favoritism or preferential product treatment.

The fact that many important supplier companies are publicly traded is relatively new. Hard to say how that will affect things.

  • Clarivate
  • Constellation Software, Inc.

He notes that there isn't a single issue of his Technology Industry Report that hasn't covered some consolidation; this is a long-running trend.

Worrisome charts showed Clarivate (née ProQuest) dominance in the academic library software market at ~65%, varying among small-, mid-, and large-sized academic libraries. But Clarivate is clearly not a monopoly.

Adoption of Alma has been “meteoric”, unprecedented adoption compared with his historical data on library ILSs.

Mining Library Data for Decision Making & Pivoting in a Pandemic

Diana Plunkett, Garrett Mason, Kasia Kowalska, Sarah Rankin

A panel of library data folks discusses how data was used to pivot during the pandemic. Each library has a different approach and discusses changing what and how they reported, using the data to determine where services were needed, what they measured and how they took action as a result (including how both staff and patron needs were measured), what they learned, and more.

Indianapolis Public Library has been reviewing statistics on the effects that COVID-19 had on their services (closures - partial/full, and cleanings). Their board also wanted information on testing rates of staff to know how pandemic was affecting employees.

They put together dashboards of service statistics (gate counts, reference question volume, circulation, public computer usage rates, etc.) that were then made available to their board.

To determine staffing needs (they were short-staffed given medical leaves), they analyzed gate counts to limit operational hours to peak times and minimize the effects of partial/full closures.

Brooklyn Public Library used Zip code data from the NYC Health Dept. to guide their reopening. At a certain threshold in a Zip code they brought in grab-and-go, and at a lower rate they fully reopened (with the exception of children's activities, which are on hold because there is no under-5 vaccine).
There were walk-throughs where people role-played how things were supposed to work under COVID-19 protocols. They also moved checkout machines at various locations and shared this information widely. After each phase of re-opening, they did a rose/bud/thorn evaluation on how it went: what worked, what didn’t work, and what has potential. These evaluations generated a lot of free-text which was analyzed and shared.

Their BklynSTAT dashboard shows declines in almost all metrics per open hour; only services available through the website were unaffected by behavior changes introduced by the pandemic.

NYPL tracked the composition of their borrowers (print only, ebook only, print and ebook; combined with boroughs/geography). The pandemic changed their borrower composition: many new cards were issued, and some print-only borrowers were lost. Ebook usage surged. Physical checkouts were available but grab-and-go only; the print-only borrowers who dropped off apparently did not find that met their needs.
There was a more than 50% decrease in borrowers living in zip code areas with median household incomes of $0-35k.

Higher income areas had higher ebook usage rates before pandemic and this pattern continued after. This was also true for holds usage.
Clear example of digital divide.

The usage patterns across income areas got them to re-evaluate their floating collections data. Float-driven collection change is/was resulting in lower-income areas having depleting collections while higher-income areas were having growing/pooling collections. Because of this data analysis, they have turned floating off for at least 1 year. They are in the process of re-evaluating their floating collection rules in light of equity concerns.

Checkouts per-capita for children’s books spiked in higher income areas during pandemic. Basically wealthier patrons familiar with ebooks got lots of ebooks for their children while lower income patrons did not explore ebook usage and were reliant on grab-and-go to get children’s books.

Q&A
Q. What software do you use for your dashboards?
A. Indianapolis: Excel; Brooklyn: Tableau; NYPL: Google Sheets, Illustrator, ggplot

The Future of Authentication: Landscape & Challenges

Jason Griffey

Access to online resources is a lot more complex than it used to be, with IP-based access starting to give way to federated authentication. Changing expectations about data privacy are leading to greater scrutiny of authentication and authorization data, and the evolving regulatory landscape is increasing responsibilities for those collecting personally identifiable information. Griffey looks at the existing landscape of federated authentication and its associated technologies such as SeamlessAccess, as well as upcoming challenges to authentication and authorization in general.

Authentication: individual (username/password, ORCiD), organizational (IP address, referrer URLs, passcodes), and overlapping (username/password from your org, federated access)
Every method has trade-offs.

Web browser vendors are expected to make many changes soon/now that will affect almost all the authentication mechanisms. There is a coming authentication apocalypse. This is a result of browser developing firms trying to reduce the amount of tracking that happens across the web.

WebKit & Blink are really the only 2 browser engines with any uptake in the market. (There is also Gecko.)
All iOS browsers are built on WebKit (even Chrome). On Windows/Linux, Chrome and Edge are on Blink.

Federated authentication, even IP authentication at times, uses the same underlying features in the browser that ad-tracking cookies and scripts do. It looks the same to the browser developers as 3rd party tracking.

Because of the crackdown on 3rd party cookies, we’re seeing a rise in ‘bounce tracking’ (aka redirect tracking) where Website A sends you to Website B incredibly quickly and then back to Website A. At Website B a 1st party cookie can be set.
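The bounce-tracking flow above can be sketched in a few lines. This is a minimal illustration, not any real tracker's code; the hostnames (tracker.example, site-a.example) and the cookie name are hypothetical.

```python
# Minimal sketch of bounce (redirect) tracking. Website A links to the
# tracker (Website B); B sets a cookie and immediately redirects back.
# Because the user actually navigated to B, the cookie is first-party
# from B's perspective and survives third-party-cookie blocking.
from urllib.parse import parse_qs, urlparse

def bounce_response(request_url: str) -> dict:
    """Build the HTTP response the tracker (Website B) would send."""
    params = parse_qs(urlparse(request_url).query)
    # Where to send the user back to on Website A.
    return_url = params.get("return", ["https://site-a.example/"])[0]
    return {
        "status": 302,  # redirect back, often within milliseconds
        "headers": {
            "Set-Cookie": "tracker_id=abc123; Max-Age=31536000",
            "Location": return_url,
        },
    }
```

A user clicking a link on site-a.example would be sent through a URL like `https://tracker.example/bounce?return=https://site-a.example/article` and land back on the article, with the tracker's first-party cookie now set.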

Authentication that uses SAML will work for the next couple of years. But because of the ongoing work at browser vendors cracking down on 3rd party cookies, SAML federated access will eventually break unless it is adjusted.
SeamlessAccess anticipates it will break under certain circumstances until sufficient new development happens.

What will break if 3rd party cookies are totally gone?

  • SAML, depending on implementation
  • IdP
    To test, use Safari which will emulate the worst case scenario as far as death of 3rd party cookies.

Apple's iCloud Private Relay (subscription product) breaks IP authentication.

What can we do to prepare?

  1. just realize that this is a problem, some things are beginning to break now, if you are troubleshooting
  2. anyone involved with tech support needs to educate themselves
  3. evaluate your access methods and gameplan out how changes will affect the user experience
  4. over the next 5 years we can all expect changes for basically every authentication method
  5. learning how to troubleshoot these new problems will be very difficult (will depend on Blink/WebKit/Gecko and combined with authentication mechanism and local configuration)

Internally we need to advocate for privacy in what data is being shared through an authentication mechanism and authentication work/methods also affect contracts for electronic resources. Externally we need to pay attention to groups that are monitoring this. No library will solve this themselves.

Q&A
Q. Will this break EZproxy?
A. It seems so, there are a couple “vulnerabilities” with how EZproxy works that will be affected by losing 3rd party cookies.

  • Proxy URL (initial pickup during login) relies on the referrer URL

Q. Should we just force everyone onto VPN?
A. Probably not, just from a PR and level-of-difficulty standpoint.

Q. Is anything NOT going to break?
A. Username/password will still work. But that is not privacy-protecting.

Maria Markovic, Marko Perovic

Publishers are in perpetual competition mode to outperform each other’s platform functionalities. With Generation Z fast approaching the workforce, the glaring challenge of explaining copyright guidelines to born-digital employees becomes more apparent. Examine the future outlook and expectations for the librarians in the roles of copyright advisors, educators, and influencers: how do new technologies impact librarians’ role in the education of generation Z employees, how to develop efficient copyright education programs that demystify copyright guidelines and direct generations of restriction-free content users into the copyright compliant content use mode.

Common information pathways used for research:

  • commercial public web search, e.g. Google
  • PubMed
  • a library

Risks:

  • accuracy
  • validity of access

The librarian’s constant dilemma: to be or not to be the copyright cop

Gen Z is entering the workforce. Digital natives. She ran a survey of 20-27 year olds; the sample size was only 22.
Results: 100% said they repost/reshare/forward content. Free text answers indicate that respondents were familiar with Copyright as a concept. One question on survey was “What would be the best way to alert users of copyright content?” Most popular answer was some kind of intrusive pop-up message.

Gen Z (small sample) does appear to know about copyright protection for consumed content, but not in detail or for their own work. Awareness of ownership came with knowledge of revenue potential, as well as an understanding of why school/library-subscribed databases were no longer free after they graduated.
Examples that educators should use are ones that will resonate.

There appears to be a dislike of fee-based content. Will this affect Gen Z behavior when they research out in the workforce or their daily lives?

Library Engagement Platforms

David Lee King

In the last few years, there have been quite a few library engagement platforms that have appeared. Each of them does slightly different things, all with the goal of connecting with and engaging your library patrons. King gives an overview of library engagement platforms and shares the different ways these tools help to connect library patrons to the library.

Public libraries are using a variety of software to generate and sustain online and in-person engagement.
These might take place on social media, via email, or on the website.

Does direct marketing email work? Research shows there is about a 28% open rate on marketing emails, with certain sectors getting more opens and reads than others. If you think about the library as a business, there is a clear case for direct "marketing" of library services and events via email.

Gave overview of many reasons why libraries might use targeted emails.

Customer management systems are widely used in the private sector and have wide applicability to libraries.
E.g. LibConnect from Springshare and Vega from Innovative; these cover all sorts of use cases. LibCal handles booking of all sorts of things.

A dedicated 'app' for the library website can help with library engagement. Others: Libby, OverDrive, Hoopla, etc. Things the library subscribes to should work through the vendors' mobile apps; testing is required.

Examples: OrangeBoy

More about this topic in the author’s ALA Library Technology Report Library Engagement Platforms

Keynote: Opportunities for the Future

Lee Rainie, Pew Research Center

Based on Pew research, our popular and fun speaker shares results and insights for libraries looking at maintaining their strong community relevance in an uncertain and highly digital world.

Pew dropped their plans to ask questions about COVID-19 and pandemic things.

Very interesting findings about usage of technology in the pandemic as well as the effects on people. (Rather depressing.)

Thursday March 31st

Keynote: Building Intelligent Communities & Smarter Cities

Rick Huijbregts

Throughout his whole career our speaker has been interested in understanding, building, and improving the DNA of intelligent communities. From Cisco Systems working with many communities including the Toronto Public Library, to George Brown College, an intelligent community itself with 90,000 learners, 4,000 colleagues, and an eco-system of 10,000’s of partners/employers across three campuses and more than twenty buildings, to Stantec where he recently joined as Global Head of Smart Cities. Communities (big and small) are complex ‘networks’ of systems, services, and infrastructure. With the emergence of the “digital economy”, these communities are looking for solutions to optimize, enhance, and transform the experiences throughout the complex network. Today, our citizens, consumers, and community stakeholders increasingly expect - and demand - new experiences and services that are unique, differentiated, connected, and personalized. Our communities, and the next “smart cities” thrive at the intersection of people (and culture), processes (and workload), places (both virtual and in the physical world), exponential technologies (including, but not limited to, connectivity, IoT, and Artificial Intelligence), and governance. Be sparked with insights and ideas as our speaker shares tips for building future communities and launching exciting new places that straddle physical and virtual worlds.

There is a lot to learn from indigenous communities, and from designing with an inclusive eye, at least for smart city design.

Trains during the 1st industrial revolution were the first technology to dramatically reshape cities. Factories with assembly lines were the second. We are living through another revolution in how computers can and will reshape cities.

Internet of Things and metaverse will allow libraries and cities to offer services they currently don’t.
Every company is now a digital company to some degree. And every government must have some digital services.

The trains, factories, and roads for cars required a lot of physical infrastructure. The transformation of cities using computers will likewise require new physical infrastructure to enable connected IoT cities.

New regulation is also required; many people have already been victims of digital crimes (e.g. identity theft).

There are so many concerns implicated in smart cities (climate change, sustainable development, population growth/change, business/economic development, digital divide) that the process of development must be collaborative.

Libraries must involve themselves in their cities' 'smart city' plans. Many cities around the world are currently having these discussions.

Libraries and Learning Tools Interoperability (LTI)

Natasa Hogue, Tammy Ivins

“Go to the library website” is a common phrase in classrooms and curricula across the country, reflecting the siloed nature of library resources. There often remains a fundamental barrier between the users and the library experience. Maximizing that user experience requires easing access to library resources. One way to do so is to integrate the library closely into an institution’s learning management system (LMS) through a learning tools interoperability (LTI) feature. This presentation discusses the value of using LTI to integrate library resources directly into an institution’s curricula and LMS, as well as the communication and collaboration approaches that librarians must utilize to encourage teaching faculty participation in that integration. When this process is successful, it not only eases user access to library resources, but also supports the use of the library resources as textbook alternatives in the classroom. Therefore, by leveraging LTI to integrate the library into an LMS, we can not only maximize the user experience but also maximize the library’s value to its institution.

Removing barriers between the library content and the user. SSO and LTI both work. Without SSO (true, cross-site) and LTI, many abandonment points are possible between a student's need for content and their access to it.

LTI integration: LIRNProxy is the only certified LTI-compliant proxy. https://www.lirn.net/products-and-services/lirn-services/lirn-proxy/

When SSO is activated, vendor <embed> features will work to display content within the LMS. Otherwise, proxied, stable URLs are the second-best alternative, but these are not always easy for faculty to find.
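As an aside, the "proxied, stable URL" pattern is easy to generate programmatically. The sketch below assumes an EZproxy-style proxy whose starting-point URLs use the common login?url= prefix; the proxy hostname is hypothetical.

```python
# Build a stable, proxy-prefixed link of the kind faculty can paste into
# an LMS. The hostname is hypothetical; "login?url=" is the conventional
# EZproxy starting-point URL pattern.
from urllib.parse import quote

PROXY_PREFIX = "https://ezproxy.example.edu/login?url="

def proxied_url(target: str) -> str:
    # Percent-encode the target's query characters so any "?" or "&" in
    # it survives being embedded in the proxy's own query string.
    return PROXY_PREFIX + quote(target, safe=":/")
```

For example, `proxied_url("https://www.jstor.org/stable/123456")` simply prepends the prefix, while a target that carries its own query string gets its `?` and `&` percent-encoded.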

Low-stakes assignments work to build familiarity with the library. Good to get some of them in before students have high-stakes papers.

Enhancing Library Technology Competencies: Continuous Learning Program

Jennifer Browning

Through her development of a unique multi-step continuous learning program at Carleton, LSP Team Experts, Browning has been working with library staff to inspire continuous learning philosophies to improve confidence and expertise in their use of library technologies. She highlights the strategies taken to develop and support a local continuous learning program and offers insight into the impact of the use of technology on local library culture.

Learning organizations

  • where people are continually learning how to learn together
  • goals, and the level they apply to, must be articulated

Professional development

  • are we tapping people's commitment and capacity to learn at all levels in an organization?

Library Technology Competencies

  • are we gatekeeping who learns and who teaches library skills? (e.g. is information about metadata siloed in one department?)

When Carleton University Library migrated to a new LSP/LMS in 2020, their past practices for staff training came under justifiable scrutiny. They needed cross-departmental information sharing and learning; there were many repeated questions that could have been answered by vendor documentation and better communication between departments.

They made an LSP Experts Program:

  • cross departmental membership
  • members are supposed to share information with their departments
  • content of the program was in 5 steps: background about the system; skill building in various segments of the LSP software; understanding LSP analytics; troubleshooting and support (internal and external documentation and ticketing workflows); continuous learning (each member of the cohort commits to information sharing and keeping up to date)
  • many synchronous presentations & discussion, some async readings. All presentations recorded and stored in the LMS.
  • Periodic check-in meetings after cohort completion ("What's 1 new thing you've learned since the last check-in?")
  • Requires commitment from management and sense of ownership by staff

See work of Clare Thorpe for further examples applied to libraries.
https://doi.org/10.1108/LM-01-2021-0003
https://blogs.ifla.org/cpdwl/2021/12/17/what-they-said-clare-thorpe-webinar/

Q&A
Q. How did you determine need to know v. good to know?
A. She sent out a questionnaire ahead of time about what people wanted to know. She also, because she was seeing support tickets, had identified areas where people were having trouble with using the LSP software.

Learning From Your Knowledge Base

Robert Heaton

The “knowledge base” is an application of structured metadata principles that has revolutionized library resource management, allowing librarians to manage acquired items in packages rather than individually. The NISO recommended practice “Knowledge Bases and Related Tools,” or KBART, has been central to this shift since its release in 2010, ensuring that patrons can connect dependably and accurately to library-provided materials. Heaton explains ways librarians can expand the use of knowledge bases in local discovery and access systems, encourage content providers to adopt KBART when negotiating license agreements, and contribute to NISO’s maintenance of the recommended practice.

NISO recommended practice KBART. https://www.niso.org/standards-committees/kbart
Knowledge bases work at the package level. The company supplying the knowledge base keeps it up to date.

There is great opportunity for automation relying on APIs. Not every publisher has done this yet but it is picking up steam.

KBART is still developing. New content types and fields are becoming important, and KBART intends to evolve with them in time.
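Because KBART title lists are plain tab-separated files with standardized column names, they are easy to work with in scripts. A minimal sketch follows; the sample row is invented, and only a few of the standard KBART columns are shown.

```python
# Parse a KBART title list (tab-separated, standardized headers) into
# one dict per title. Only a subset of KBART columns appears here;
# the sample data is invented.
import csv
import io

SAMPLE_KBART = (
    "publication_title\tprint_identifier\tonline_identifier\t"
    "date_first_issue_online\tdate_last_issue_online\ttitle_url\n"
    "Journal of Examples\t1234-5678\t8765-4321\t1995\t2021\t"
    "https://publisher.example/joe\n"
)

def load_kbart(text: str) -> list[dict]:
    """Read KBART TSV text into a list of per-title dicts."""
    return list(csv.DictReader(io.StringIO(text), delimiter="\t"))
```

Once loaded this way, package-level holdings can be filtered or diffed against a previous month's file with ordinary list and dict operations.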

What librarians can do:

  • ask content providers to support it, get clauses in contracts
  • if vendors already support KBART, ask them about automation
  • look for other KBART applications, can work for things other than e-journals lists
  • offer public comment on Phase III of KBART recommendations

Using Internal Communications to Create Collaboration: So Long Silos

Meghan Kowalski

Libraries often spend money and time marketing and communicating to their users and patrons. How much time do you spend communicating internally? Kowalski shares the benefits of internal communications in breaking down traditional silos to foster collaboration and create a more cohesive team. She discusses activities that support all forms of work: in-person, online, and hybrid. She demonstrates the value of communicating in all directions, be it to a team of reports or to management and administration.

Internal communications help staff see where they fit in the grand scheme of the organization.
Silos smother collaborations. The more staff know, through internal communication, the better ‘brand advocates’ they can be with your user base.

Benefits

  • forces you to write things down, codifying them and building up knowledge within the organization
  • shared updates bring everyone along towards projects on a similar timeline
  • allows everyone to hear from everyone else
  • good 2-way communication uncovers roadblocks and workflow problems

Types of internal communication

  • administrative: updates, policies
  • team-building: input / feedback / “What did you…”
  • water cooler: fun, personal updates, content recommendations
  • kudos: jobs well done

Meetings remain important, physicality is real and some ideas are much better explained verbally. Email is good for forcing things to be written and searchable. Chat (Slack/Teams/Other) useful for fun water cooler communication.

Own wins and losses. Owning wins is a no-brainer. Owning losses shows people that innovation and occasional failure are valued. Encourage critical reflection. Provide an opportunity for anonymous feedback.

Avoid spamming; make written communication clear and purposeful. It is a good idea to have one “no communication” day per week when everyone knows there will be no administrative (or other) communication.

Q&A
Q. Recommended platforms?
A. Nothing in particular but be wary of having too many platforms. Communication is good but fatigue about which information is where is real. (E.g. having multiple chat locations is bad)

Q. Are there policies on what NOT to say in chats or water cooler platforms?
A. If your organization has an HR office those should be sufficient. General rule “Don’t say it if you aren’t willing to be fired for it.”

Increasing Faculty Engagement With Makerspace Technologies

Chris Woodall

Have you had trouble getting faculty to make use of the technology in your makerspace? Libraries often build spaces that offer all kinds of exciting technologies, but sometimes faculty do not make use of the equipment. Due to budgets that are already stretched thin, faculty do not always have the resources or expertise to try out new technologies in their courses. Woodall highlights how they turned this around when Salisbury University Libraries partnered with the campus’s Faculty Learning Communities (FLC) program to create an opportunity for faculty interested in technologies like 3D printing and AR/VR to share their knowledge and explore how these technologies can enhance their courses. Through this program, they were able to develop a core group of evangelists on campus who could spread the word about how these technologies can be helpful for teaching in a wide variety of disciplines. Faculty were much more engaged and willing to work with the makerspace.

At their institution they don’t have a big engineering program and consequently did not see much coursework-driven engagement with the makerspace. They require students to pay for their supplies, though supplies are sold to students at cost. This limits the number of students they can work with, and some faculty are reluctant to require using the makerspace because of the added costs.

How could they increase usage?

  • improved awareness via marketing
  • relocated to a larger, more visible space
  • set a goal of getting faculty to increase in-class learning activities; the faculty who did use the space were largely siloed

Developed a Faculty Learning Community around the makerspace

  • interdisciplinary
  • at their university all FLCs must be approved and when approved get funding from Provost
  • they have 18 faculty who are currently involved, perhaps even a little bit larger than ideal
  • goal is simply to increase awareness of and usage of the makerspace
  • individual FLC members give talks about how they’ve used the technology
  • Provost’s Office funded a mini-grant to support faculty who were going to use makerspace tech in their courses

Review of projects funded by the mini-grant. Speculated that the model might work even without a financial incentive; could offer recognition in newsletters and promotional materials or perhaps course release.

The new technology permits us to do exciting things with tracking software. Wave of the future, Dude. 100% electronic

Have you heard? There’s a war on. The target? IP-based, proxy-enabled, authenticated access to the commercially-owned scholarly literature. The combatants? The largest and most powerful scholarly publishers vs. the librarians and our user base.

As reported by Times Higher Education and Coda, The Scholarly Networks Security Initiative (SNSI) sponsored a webinar in which Corey Roach, CISO for University of Utah, floated the idea of installing a plug-in to library proxy servers, or a subsidized low-cost proxy, for additional data collection. (To be clear he did not advocate for sharing this information with publishers, only that it be collected and retained by libraries for user behavior analysis.) Examples of the data collected in library logs (as distinguished from publisher logs) via the proposal are:

  • timestamps
  • extensive browser information
  • username
  • account information
  • customer IP
  • URLs requested
  • 2-factor device information
  • geographic location
  • user behavior
  • biometric data
  • threat correlation

I question whether such rich personally identifiable information (PII) is required to prevent illicit account access. If it is collected at all, there are more than enough data points here (obviously excluding username and account information) to deanonymize individuals and reveal exactly what they looked at and when, so it should not be kept on hand too long for later analysis.
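To make the deanonymization risk concrete, consider how few fields it takes. A sketch using a hypothetical proxy log line in combined-log style (not any vendor's actual format), showing that a username, a timestamp, and a requested URL alone are enough to reconstruct who read what, and when:

```python
import re
from datetime import datetime

# Hypothetical proxy log line; the field layout is illustrative,
# not any specific EZproxy configuration.
LOG_LINE = ('10.0.0.42 - jdoe [16/Nov/2020:14:32:07 -0800] '
            '"GET https://platform.example.com/article/10.1000/xyz123 HTTP/1.1" 200')

PATTERN = re.compile(
    r'(?P<ip>\S+) - (?P<user>\S+) \[(?P<ts>[^\]]+)\] "GET (?P<url>\S+) HTTP')

def who_read_what(line):
    """Extract the (user, timestamp, URL) triple that deanonymizes a reader."""
    m = PATTERN.match(line)
    ts = datetime.strptime(m.group("ts"), "%d/%b/%Y:%H:%M:%S %z")
    return m.group("user"), ts.isoformat(), m.group("url")

print(who_read_what(LOG_LINE))
```

Three fields, one regex, and the reading history is no longer anonymous; retention policy, not collection capability, is the only real protection here.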

Another related, though separate, endeavor is GetFTR, which aims to bypass proxies (and thereby potential library oversight of use) entirely. There is so much that could be written about both these efforts, and this post only scratches the surface of the complex issues and relationships affected by them.

The first thing I was curious about was: who is bankrolling these efforts? They list the backers on their websites, but I always find it interesting who is willing to fund the coders and infrastructure. I looked up both GetFTR and SNSI in the IRS Tax Exempt database as well as the EU Find a Company portal and did not find any results. So I decided to do a little more digging, matching WHOIS data in the hope that something might pop out; nothing interesting came of it, so I put it at the very bottom.

They’re gonna kill that poor proxy server

A simple breakdown by membership can help visualize the main players behind the efforts to ‘improve’ authentication and security.

Part of both GetFTR and SNSI:

  • American Chemical Society Publications (ACS)
  • Elsevier
  • Springer Nature
  • Taylor & Francis
  • Wiley

Part of GetFTR only:

  • Digital Science
  • Karger
  • Mendeley
  • Researcher (Blenheim Chalcot)
  • Silverchair
  • Third Iron

Part of SNSI only:

  • American Institute of Physics (AIP)
  • American Medical Association (AMA)
  • American Physical Society (APS)
  • American Society of Mechanical Engineers (ASME)
  • Brill
  • Cambridge University Press (CUP)
  • Institute of Electrical and Electronics Engineers (IEEE)
  • Institute of Physics (IOP)
  • International Association of Scientific, Technical and Medical Publishers (STM)
  • International Water Association Publishing (IWA)
  • Macmillan Learning
  • The Optical Society (OSA)
  • Thieme

Part of neither: any/all other publishers and supporting firms.

Accurate as of 2020-11-16.

It should come as no surprise that Elsevier, Springer Nature, ACS, and Wiley - which previous research has shown are the publishers producing the most research downloaded in the USA from Sci-Hub - are supporting both efforts. Taylor & Francis presumably feels sufficiently threatened such that they are along for the ride.

And a lotta strands of GetFTR to keep in my head, man

I think it is important to conceptually separate GetFTR from the obviously problematic snooping proposed by SNSI. It would be theoretically possible for GetFTR to dramatically improve the user experience while not resulting in additional data collection about users.

But… as Philipp Zumstein pointed out on Twitter, there are already some ways to improve the linking “problems” and user experience that GetFTR is working on. The fact that they are instead building something new gives them opportunities to control and collect data on usage patterns and users.

Given the corporate players involved here, I am not optimistic. However, I can also see large gains in usability if GetFTR works as advertised. In an ideal world, the usability/privacy tradeoff would be minimal; but as we are reminded on a daily basis, Dr. Pangloss was not a reliable guide.
For now, I have “registered my interest” with the group and am waiting to see how things are fleshed out.

If the plan gets too complex something always goes wrong: O’Reilly / Safari

O’Reilly a few years ago introduced a new type of authentication based on user email and they tried to default to it as part of a platform migration. We informed them that we wanted to continue using EZproxy, which they continue to support but have made it very cumbersome. As it currently stands, our users are presented with a different login experience depending upon how they enter the platform. While O’Reilly representatives have not been unresponsive, they clearly want users to authenticate with their “patron validation” method which collects user emails, rather than the shared-secret/proxy which is technically supported but only triggered when users enter from our alphabetical database list.

If the plan gets too complex something always goes wrong: Fitch Solutions

This provider ended support for proxy access. However, we achieved a slight simplification of the login experience for users that still satisfied our policy obligations through a back and forth conversation about the user variables we as IdP would release to the vendor. It was not a pleasant experience but a tolerable one. If vendors recognize and work with university SSO systems but do not require PII, improvements to user workflow and access are possible. To be clear, what O’Reilly and Fitch have done by moving away from IP access is not GetFTR, which is still in pilot phase.

But fighting alone is very different from fighting together

How might librarians push back against (likely) excessive data collection by SNSI or GetFTR-using platforms? I can think of three tools at our disposal, though the discussion below is not meant to be exhaustive. I cover them in order of their possible strength/severity; in actuality, the textual support they provide for pushback against vendors will vary.

I want a fucking lawyer, man

Might state laws have any clauses that could resist a publisher data grab? In California there are two relevant sections covering library privacy: GOV § 6267 and GOV § 6254(j). GOV § 6254(j) is no help, as it specifically refers to “records kept for the purpose of identifying the borrower of items”, which is not the case with authentication data, since in most cases it is not being used to ‘borrow’ anything. GOV § 6267, however, could be interpreted in interesting ways. I reproduce the relevant clauses here with my own emphasis.

All patron use records of any library which is in whole or in part supported by public funds shall remain confidential and shall not be disclosed by a public agency, or private actor that maintains or stores patron use records on behalf of a public agency, to any person, local agency, or state agency except as follows:

(a) By a person acting within the scope of his or her duties within the administration of the library.

(b) By a person authorized, in writing, by the individual to whom the records pertain, to inspect the records.

(c) By order of the appropriate superior court.

As used in this section, the term “* * * patron use records” includes the following:

(1) Any written or electronic record, that is used to identify the patron, including, but not limited to, a patron’s name, address, telephone number, or e-mail address, that a library patron provides in order to become eligible to borrow or use books and other materials.

(2) Any written record or electronic transaction that identifies a patron’s borrowing information or use of library information resources, including, but not limited to, database search records, borrowing records, class records, and any other personally identifiable uses of library resources information requests, or inquiries.

This section shall not apply to statistical reports of patron use nor to records of fines collected by the library.

I am not well versed in the law, nor am I aware of any litigation involving § 6267 but it seems to me that a straightforward interpretation of this is that any personally identifiable information collected by vendors is subject to this law and thus must remain confidential. But confidential does not mean that vendors can’t use that PII for their own internal purposes, which is what some in the library community are worried about.

Ultimately, I did not invoke either of these sections in negotiations with O’Reilly or Fitch as there was a more clear and less legalistic option, university policy, detailed below.

The ALA has put together a page of the various state library legislative codes. http://www.ala.org/advocacy/privacy/statelaws Since these don’t change often, I assume it is up to date (unlike some other ALA pages) should any readers want to check how things might shake out in a non-California jurisdiction.

Also, I have yet to take a deep dive into the newly effective California Consumer Privacy Act of 2018, but perhaps that will be useful going forward. Unfortunately, most jurisdictions are not as proactive about privacy as California so they will have to avail themselves of the other tactics listed here.

But your university– they run stuff, they–

CalState Long Beach, like virtually all universities - I should hope - has internal policies governing the release of information. Presently, the Information Classification Standard delineates three types of data: confidential, internal, and public.

Confidential

There is a subheading in this section for Library Patron Information. I include it here in full.

Library database for faculty, staff, students and community borrowers which may contain:

  • Home address
  • Home phone
  • Social Security Numbers

Note the word may. That might lead us to think that this would be a clause that could be liberally interpreted in negotiations with vendors but unfortunately the items explicitly listed as Public (below) make it clear that this section is about shielding employees’ personal and home information, not any data they might generate in the course of their remunerated activities as employees.

Because the university is a creature of the state, a lot of institutional information (which vendors no doubt would like to have and incorporate into their models of user behavior) is, and should be, public, such as:

  • Name
  • Email
  • Title
  • Department
  • Etc.

Internal information is where things get interesting.

There are a number of demographic characteristics/variables in this category which firms would love to hoover up and feed into whatever models they run on data about their users. Users might voluntarily disclose this information, e.g. by uploading a photograph of themselves to a profile on a vendor platform site. But the policy says this is information which must be protected. The implication being that this information is not of the Public category and that the University (thus library) should not routinely disclose it. Importantly, there is a subheading in this section for Library Circulation Information.

Information which links a library patron with a specific subject the patron has accessed or requested

That was (in my opinion) the crucial piece of documentation that I provided to the Fitch Solutions staff, which helped us carry the day and minimize data exchange and harvesting.

Say what you want about local library policies dude, at least they’re an ethos

At present, we don’t have any in-house policies specific to authentication. Though I am open to such a move, my feeling is that the stronger play here for us is to continue to use the University policies (and applicable CA laws) in order to push back against overcollection of user data by publishers. A local library-specific policy is surely better than nothing in the absence of such a university policy, but when faculty in need of a specific resource such as a future SNSI-GetFTR-enabled ACS Digital Library come knocking, my suspicion is that some libraries will yield to the demands and implement the publisher’s preferred authentication mechanism. We can’t all be Jenica Rogers.

Or can we? A coordinated effort on the part of libraries around the world to draft and enact clear in-house policies that reject SNSI-supported spyware (or anything similar) might just work. The ACS-RELX-Springer Nature-T&F-Wiley leviathan does not conceal its views and aims; neither should we. They want to collect “information about them as a student or an employee” and change contract language in order to “ensure attribute release compliance.” They tremble at the threat that piracy poses and, as pointed out by Sam Popowich, are working to convince everyone that “security and vendor profits should trump user privacy.” The stakes are high.

Correction

This post originally listed American Society of Clinical Oncology (ASCO) as a supporter of GetFTR. Angela Cochran, ASCO’s VP of Publishing, who served on the GetFTR advisory board has clarified this via correspondence with me. The ASCO is not a participating partner in GetFTR. I regret the error.


WHOIS Signatures

Here are the relevant WHOIS data for each site. Both use the privacy options their hosting providers offer to not reveal important information. In the end, comparison of WHOIS data did not reveal anything interesting.
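For anyone who wants to repeat the comparison, WHOIS responses are mostly `Key: Value` text, so pulling out the registrant fields for a side-by-side look takes only a few lines. A sketch using an invented record (the live responses are reproduced below):

```python
def parse_whois(raw):
    """Parse the 'Key: Value' lines of a WHOIS response into a dict."""
    record = {}
    for line in raw.splitlines():
        if ":" in line:
            key, _, value = line.partition(":")
            record[key.strip()] = value.strip()
    return record

# Made-up record for illustration; real responses come from the
# registrar's WHOIS server (e.g. via the `whois` command-line tool).
RAW = """\
Domain Name: EXAMPLE.INFO
Registrar: Example Registrar SPA
Registrant Organization: Domain Proxy Service
Registrant Country: GB
"""

record = parse_whois(RAW)
print(record["Registrant Organization"], record["Registrant Country"])
```

Running this over each domain's raw record and comparing the resulting dicts is essentially the manual comparison performed below.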

SNSI

Domain Profile
Registrant Org Domain Proxy Service. LCN.com Limited
Registrant Country gb
Registrar Register SPA
IANA ID: 168
URL: http://we.register.it/
Whois Server: whois.register.it
(p) 395520021555
Registrar Status clientDeleteProhibited, clientTransferProhibited, clientUpdateProhibited
Dates 245 days old
Created on 2020-03-16
Expires on 2021-03-16
Updated on 2020-09-23
Name Servers BRAD.NS.CLOUDFLARE.COM (has 18,590,729 domains)
PAM.NS.CLOUDFLARE.COM (has 18,590,729 domains)

Tech Contact —
IP Address 104.26.12.10 - 73 other sites hosted on this server

IP Location United States Of America - California - San Francisco - Cloudflare Inc.
ASN United States Of America AS13335 CLOUDFLARENET, US (registered Jul 14, 2010)
Domain Status Registered And Active Website
IP History 14 changes on 14 unique IP addresses over 15 years
Hosting History 9 changes on 6 unique name servers over 7 years

Website
Website Title 500 SSL negotiation failed:
Response Code 500

Whois Record ( last updated on 2020-11-16 )

Domain Name: SNSI.INFO Registry Domain ID: D503300001183540550-LRMS Registrar WHOIS Server: whois.register.it Registrar URL: http://we.register.it/ Updated Date: 2020-09-23T07:14:11Z Creation Date: 2020-03-16T12:23:58Z Registry Expiry Date: 2021-03-16T12:23:58Z Registrar Registration Expiration Date: Registrar: Register SPA Registrar IANA ID: 168 Registrar Abuse Contact Email: Registrar Abuse Contact Phone: +39.5520021555 Reseller: Domain Status: clientDeleteProhibited https://icann.org/epp#clientDeleteProhibited Domain Status: clientTransferProhibited https://icann.org/epp#clientTransferProhibited Domain Status: clientUpdateProhibited https://icann.org/epp#clientUpdateProhibited Registrant Organization: Domain Proxy Service. LCN.com Limited Registrant State/Province: Worcestershire Registrant Country: GB Name Server: BRAD.NS.CLOUDFLARE.COM Name Server: PAM.NS.CLOUDFLARE.COM DNSSEC: unsigned URL of the ICANN Whois Inaccuracy Complaint Form: https://www.icann.org/wicf/

The Registrar of Record identified in this output may have an RDDS service that can be queried
for additional information on how to contact the Registrant, Admin, or Tech contact of the
queried domain name.

GetFTR

Domain Profile
Registrant On behalf of getfulltextresearch.com owner
Registrant Org Whois Privacy Service
Registrant Country us
Registrar Amazon Registrar, Inc.
IANA ID: 468
URL: https://registrar.amazon.com,http://registrar.amazon.com
Whois Server: whois.registrar.amazon.com
(p) 12067406200
Registrar Status ok, renewPeriod
Dates 446 days old
Created on 2019-08-28
Expires on 2021-08-28
Updated on 2020-07-24
Name Servers NS-1001.AWSDNS-61.NET (has 39,473 domains)
NS-1171.AWSDNS-18.ORG (has 35,686 domains)
NS-143.AWSDNS-17.COM (has 17,174 domains)
NS-1760.AWSDNS-28.CO.UK (has 304 domains)

Tech Contact On behalf of getfulltextresearch.com technical contact

Whois Privacy Service
P.O. Box 81226,
Seattle, WA, 98108-1226, us
(p) 12065771368 IP Address 35.197.254.76 - 13 other sites hosted on this server

IP Location United Kingdom Of Great Britain And Northern Ireland - England - London - Google Llc
ASN United Kingdom Of Great Britain And Northern Ireland AS15169 GOOGLE, US (registered Mar 30, 2000)
Domain Status Registered And Active Website
IP History 1 change on 1 unique IP addresses over 1 years
Registrar History 1 registrar
Hosting History 2 changes on 3 unique name servers over 1 year

Website
Website Title 500 SSL negotiation failed:
Response Code 500

Whois Record ( last updated on 2020-11-16 )

Domain Name: getfulltextresearch.com Registry Domain ID: 2427634873_DOMAIN_COM-VRSN Registrar WHOIS Server: whois.registrar.amazon.com Registrar URL: https://registrar.amazon.com Updated Date: 2020-07-24T22:01:02.974Z Creation Date: 2019-08-28T12:53:20Z Registrar Registration Expiration Date: 2021-08-28T12:53:20Z Registrar: Amazon Registrar, Inc. Registrar IANA ID: 468 Registrar Abuse Contact Email: Registrar Abuse Contact Phone: +1.2067406200 Reseller: Domain Status: renewPeriod https://icann.org/epp#renewPeriod Domain Status: ok https://icann.org/epp#ok Registry Registrant ID: Registrant Name: On behalf of getfulltextresearch.com owner Registrant Organization: Whois Privacy Service Registrant Street: P.O. Box 81226 Registrant City: Seattle Registrant State/Province: WA Registrant Postal Code: 98108-1226 Registrant Country: US Registrant Phone: +1.2065771368 Registrant Phone Ext: Registrant Fax: Registrant Fax Ext: Registrant Email: .whoisprivacyservice.org Registry Admin ID: Admin Name: On behalf of getfulltextresearch.com administrative contact Admin Organization: Whois Privacy Service Admin Street: P.O. Box 81226 Admin City: Seattle Admin State/Province: WA Admin Postal Code: 98108-1226 Admin Country: US Admin Phone: +1.2065771368 Admin Phone Ext: Admin Fax: Admin Fax Ext: Admin Email: .whoisprivacyservice.org Registry Tech ID: Tech Name: On behalf of getfulltextresearch.com technical contact Tech Organization: Whois Privacy Service Tech Street: P.O. Box 81226 Tech City: Seattle Tech State/Province: WA Tech Postal Code: 98108-1226 Tech Country: US Tech Phone: +1.2065771368 Tech Phone Ext: Tech Fax: Tech Fax Ext: Tech Email: .whoisprivacyservice.org Name Server: ns-1001.awsdns-61.net Name Server: ns-1171.awsdns-18.org Name Server: ns-143.awsdns-17.com Name Server: ns-1760.awsdns-28.co.uk DNSSEC: unsigned URL of the ICANN WHOIS Data Problem Reporting System: http://wdprs.internic.net/

For more information on Whois status codes, please visit
https://www.icann.org/resources/pages/epp

Other WHOIS Lookups

Notably, the main corporate firm players themselves do not use privacy services for their domains.

Wiley
WHOIS via DomainTools.com, Whois.com
Registrar: CSC CORPORATE DOMAINS, INC.
Registrant Organization: John Wiley & Sons, Inc

Taylor & Francis
WHOIS via DomainTools.com, Whois.com
Registrar: Safenames Ltd
Registrant Organisation: Taylor and Francis

Springer Nature
WHOIS via DomainTools.com, Whois.com
Registrar: Eurodns S.A.
Registrant Organization: Springer Nature B.V.

Elsevier
WHOIS via DomainTools.com, Whois.com
Registrar: Safenames Ltd
Registrant Organisation: Elsevier B.V.

ACS
WHOIS via DomainTools.com, Whois.com
Registrar: Network Solutions, LLC
Registrant Organization: American Chemical Society

Notes from SCIL Works 2020

Long Beach, CA

January 17, 2020

#SCILworks2020

Narrative

This was the first SCIL event I have attended; previous events were at inconvenient locations or times. The title of the event was Disaster Planning: Bouncing Back from Instructional Fails, and all the presentations were along the lines of adversity overcome, with tips for how others might avoid or adapt to the highlighted ‘fail’ scenarios. Things proceeded quickly and all of the presenters were engaging; I learned a fair amount given that the whole thing was only 3 hours long. My notes are below.

For more information about SCILworks in general, see: http://www.carl-acrl.org/ig/scil/scilworks/index.html

Opening remarks from Judith Opdahl, SCIL Chair & Kelly Janousek, CARL President

CARL is a great deal financially compared with other professional associations and the professional development opportunities offered.

Blackout: Surviving an instructional apocalypse

Kelli Hines & Ruth Harris

They had a 6-hr long workshop (lecture and interaction) scheduled and the power went out 1 hour into the session. The power was out long enough that the professor cancelled the rest of the class.

Fortunately, the session was already ‘flipped’ somewhat, and the students were extrinsically motivated to learn since they needed this information to pass an important quiz for a grade. But the power outage meant they never got to the hands-on experience. What to do? Somehow condense hours of practice and Q&A into a short instructional video that would go out to students after the cancelled class.

How they solved for the missing experience:

  • Made screencasts, using Jing (now SnagIt), for the databases that were difficult to search.
  • Was information available via the flipped slides? If so, no need to make a video.
  • Promoted consultations and contact information
  • Graded the assignment more leniently (pass/fail)
  • Emailed assignment feedback to all students so everyone understood why the wrong answers were wrong

General framework for how to approach prioritizing content given time or delivery constraints: need, nice, nuts.

  • Need to know
    • Won’t pass class if they don’t know it
    • Will it cause patients harm if they don’t know it
    • Was information foundational for another class?
  • Nice to know
    • Will save them time
    • Will make student searches more robust
    • more…
  • Nuts to know
    • Too complicated to explain via the information
    • Librarian-level knowledge e.g. controlled vocabulary
    • Skill will be seldom used
    • Specific resources not frequently used
Questions:

Could this class be turned into a video series instead of such a long workshop?

Possibly. But they were offered a 6hr block so they took it.

How much engagement did you have with the content after the fact?

They could tell there were some students who did not watch the videos.

Did they get feedback from the professor about whether the students were at a perceived deficit?

A later professor did note a deficient understanding of PICO concepts. Student feedback included many negative comments.

Please, I can assure you that The Onion is not a trustworthy source: What to do when active learning backfires

Faith Bradham & Laura Luiz

Fail 1: Too many questions during self-guided activities
Not enough time to get to all the content.

Solution 1: let students use each other as resources before they come to you. Pair or group them to evaluate resources, then have them share; peer learning.

Fail 2: Bias instruction gone wrong
A student appeared to actually believe that The Onion was not satire. Student became very frustrated.

Solution 2: Frame discussion around scholarly research.
Be clear that opinion and bias are different things. Stress that we all have opinions and are human, don’t attack students. Use yourself as an example, but keep your political views out of it.
Context is important, explain how The Onion etc. is perfectly fine to read, as long as you are reading it for entertainment, not to inform yourself.

Fail 3: students love choosing sensitive topics for student-led activities
Drawback of student input is that you lose some control. For a mind map activity, they’ve seen students pick very obscure or controversial examples.

Solution 3: Politely reject topics that can derail the activity. To make the classroom inclusive, some topics are better than others. Ask students to define their topic prior to starting the mind map activity.
Have a back-up topic just in case!

Fail 4: Activities that require prior knowledge from students

Solution 4: carefully consider what prior knowledge is needed
Budget time to explain concepts, don’t overschedule. One-shots make activities with prior knowledge difficult, but these activities can be very useful in scaffolded scenarios.

Questions:

What is the most important thing you’ve learned?

Bias is incredibly important and also difficult to teach. E.g. loaded language is something that their students need to know but may not get outside of their library instruction.

Did the examples here come from the for-credit course of the one-shots?

The one-shots.

If the students learned The Onion isn’t real, isn’t that not a fail but a success?

Feedback from students is generally positive. The framing of discussions and making sure students don’t feel attacked is important.

What are good resources for thinking about bias and loaded language?

  • Discussion of bias is framed around: “What is your favorite sports team?” If you had to write a paper about how bad they are, would you be tempted to pull some punches?
  • Show headline examples of bias.
  • Snopes.
  • Media Bias Fact Check.

Remix the worksheet: Creative ideas for analog instruction

Carolyn Caffrey Gardner

Why use analog activities? At CSUDH they try to resist database demos; they also had very slow computers, which campus IT had made into virtual machines that took a long time to start up. Analog activities lend themselves to andragogy: problem-centered and self-directed learning. (Analog also lets you deal with classroom situations where computers are limited or malfunctioning.)

Analog Examples:

  • Whiteboard walks
    • Supplies needed: big pieces of paper or whiteboard, markers, prompts
    • Tip: make sure you have enough prompts, make the questions open-ended
  • Conceptual mapping
    • Supplies needed: paper maps or items to sort or items to map
    • Tip: some concepts that are obvious to librarians may require more time to map for novices
    • Tip: count your supplies because students walk off with them
  • source analysis
    • Supplies needed: physical sources to analyze, post-its, highlighters
    • Tip: this definitely cannot be done well with less than 20 minutes. Some students really go to town and may need to be shepherded along.
Questions:

Various people shared prompts that they use.

☇ Round: Grading déjà vu

Kelli Hines & Ruth Harris

Presenters accidentally loaded assignments in the LMS from the wrong year, graded them, and emailed them to the wrong students. This was not only a big fail, but also a FERPA violation.

What they learned:

  • Don’t use email! To comply with FERPA, grades must be returned to students in a secure manner.
  • Get a copy of the class roster and check against it.
  • Change the cases (the ordering) of questions from semester to semester - helps prevent cheating and can help you avoid mistakes as well.

☇ Round: Serious fail: How a fail led to a Title IX Talk

Michelle DeMars

Presenter had what started out as a normal one-shot; 170 students. In the course of doing a “what barriers are there to you using the library?” activity things got out of hand. In small groups she uses post-it notes, but to adapt to 170 students she did this activity on Padlet. Students posted harassing language and clearly violated Title IX but submissions were anonymous.

She had a conversation with the professor about the class behavior, he lectured class about Title IX.

Now, when she uses Padlet, she places a lot of restrictions on student input to avoid repeats of this behavior. Now the professor regularly advises students on Title IX at the beginning of the semester.

Tessa Withorn

CSUDH uses LibWizard for general tutorials and course-integrated tutorials.
Looking at statistics, she saw that there were some tutorials that had a high failure rate (students didn’t get good grades on the embedded quizzes) so she revisited them.

Problem:
In LibWizard, <iframe> elements have to load via HTTPS, but their university had expired SSL certificates, breaking the tutorials. Tip: contact Springshare support; they are often understanding.
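Expired certificates can be caught before students hit broken tutorials. A sketch using Python's standard `ssl` module to test an OpenSSL-style `notAfter` timestamp (the date strings below are invented; in practice the value comes from `getpeercert()["notAfter"]` on a live connection):

```python
import ssl
import time

def is_expired(not_after, now=None):
    """True if a certificate's notAfter timestamp is in the past."""
    expiry = ssl.cert_time_to_seconds(not_after)
    return expiry < (now if now is not None else time.time())

# OpenSSL-style notAfter string, as found in certificate metadata
print(is_expired("Jan 15 09:34:43 2019 GMT"))  # True (long past)
```

A periodic check like this over the domains embedded in your tutorials would flag an expiring certificate before the iframes go dark.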

Problem:
One tutorial question was simply too difficult and resulted in a lot of students using the chat-with-a-librarian feature. Student feedback was used to revise the tutorial and develop new content.

LibWizard does not have robust spell or grammar checking, so write in another editor and paste into LibWizard. This also backs up your content on a platform that is probably more robust.

Questions:

How is ADA compliance for LibWizard?

Meets most of their needs, but you must caption any video content you use.

☇ Round: We don’t have that…

Lucy Bellamy

Specialized programs, like those at Gnomon, require very niche collections. The school was also trying to get a BA offering accredited, so she faced both expanding the collection and new students needing items not yet in it.

Solution:
The Library Extension (LE) browser tool. She worked with IT to have it installed on all the computers and offered classes on it - basically outsourcing a lot of users to the public library system.

Closing remarks by Judith Opdahl & Mary Michelle Moore

Big thanks to Michelle DeMars for handling logistics of this event.

SCIL is looking to add people to the Board.

Next SCIL event is at the CARL conference in Costa Mesa.

Where are all the UC Libraries ILL requests post-cancellation?

Another Open Access Week has come and gone. Here at Long Beach, it triggered some interesting in-house conversations about the CSU system’s potential cancellation of its Elsevier ScienceDirect subscription. Much of that was speculative and, so far, not particularly data-driven as to the impacts on our campus. We will work out some figures eventually, of course. But it got me thinking about a recent presentation by Günter Waibel from the California Digital Library at FORCE11 2019 - specifically, about tweets from the presentation, as I was not able to attend.

So what gives? An estimated demand of 20,785 Elsevier ILL requests based on pre-cancellation usage versus a realized demand of 4,238 requests for Elsevier content is a large discrepancy. There are several possibilities, which aren’t mutually exclusive.

  • Faculty aren’t reading Elsevier content post-cancellation
    • No interest, or solidarity motives
    • They get by with legally available summaries and abstracts (I’d like to believe this does not happen, but given that Letrud & Hernes (2019) showed in an analysis of ‘affirmative citation bias’ that many cited works are not read in their entirety, I find it plausible.)
  • They somehow substitute other articles for the desired Elsevier content (Thinking about this is difficult because we typically model each article as a unique good, monopolized by a single publisher.)
    • See the summaries-and-abstracts point immediately above
    • They engage in satisficing behavior and therefore read/cite more non-Elsevier content as substitution sources
  • Faculty are reading Elsevier content at similar rates post-cancellation as they were pre-cancellation, via non-library extra-legal means
    • Email
    • Social media
    • Sci-Hub

These are all empirical hypotheses, with varying degrees of difficulty involved in determining what actually is happening. I’m willing to be persuaded about any one of these possibilities, but they got me thinking about the survey that John Travis from Science ran in 2016 about Sci-Hub.

On the social media theory, I note that Reddit’s r/Scholar board did experience a large spike in subscriber growth post-cancellation, but the date of the spike, 2019-06-12 (502), didn’t correspond to any UC system press releases posted on the web; perhaps it was a day an email circulated? I don’t know. It also appears that the metrics site I use for this count had problems with its daily count data for much of 2018, so the spike may be an artifact of that stagnation. The total subscriber count did show an increase in the trend line during 2018, so I am hesitant to rule this out entirely, but if the growth came from UC faculty I would expect the big increase in 2019. (Or perhaps those who moved to r/Scholar are all very proactive and wanted to get acquainted with the norms of the board prior to losing library access?)

[Figure: Reddit Metrics data for r/Scholar - daily subscriber growth]

[Figure: Reddit Metrics data for r/Scholar - total subscribers]

Data for r/Scholar pulled from Reddit Metrics
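The daily-growth series is just the first difference of the cumulative subscriber total, so spotting a spike like the one on 2019-06-12 is mechanical. A minimal pandas sketch - the numbers here are illustrative stand-ins, not the actual Reddit Metrics export:

```python
import pandas as pd

# Hypothetical cumulative subscriber counts around the spike date
# (illustrative values only; the real export comes from Reddit Metrics).
totals = pd.Series(
    {"2019-06-10": 96_100, "2019-06-11": 96_150,
     "2019-06-12": 96_652, "2019-06-13": 96_700}
)
totals.index = pd.to_datetime(totals.index)

# Daily growth is the first difference of the cumulative total;
# the day with the largest jump is the candidate "spike".
daily_growth = totals.diff().dropna()
print(daily_growth.idxmax().date(), int(daily_growth.max()))  # 2019-06-12 502
```

The same diff-and-rank approach also makes the 2018 data problem visible: a stretch of zero daily growth followed by one large catch-up jump looks like a spike but is really a reporting artifact.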

As for Twitter and #IcanhazPDF, there are a scant number of tweets in 2019 mentioning the UC system and using the hashtag. I do not find that to be indicative of a groundswell of attention to that informal network from people at the UCs.

So, assuming the folks in the UC system are still reading Elsevier content at all, this leaves us with email and Sci-Hub. The email hypothesis is pretty difficult to investigate, and Sci-Hub is extremely easy to use, so my bet is that folks are using it. (Also worth noting: Sci-Hub was explicitly mentioned in the UC’s Alternative Access to Elsevier Articles documentation.) Along those lines, I think the data from John Travis’ survey in Science can illuminate this a bit, at least until someone does either traffic analysis (presumably with IP data supplied by Alexandra Elbakyan) or a citation analysis of publications by UC scholars comparing pre- and post-cancellation rates of citing paywalled Elsevier articles. John kindly provided me with the raw data in 2016, which I used a bit of in my own research. Below I present my number crunching and note the highlights in the wrap-up section.

Correlation

Are there any noteworthy correlations between the variables in the data? This was the first question I asked myself upon seeing that Science had run the survey, and the reason I asked John for the raw data.

The data were a mix of nominal and ordinal variables (plus one string variable), which I imported into SPSS for analysis. Checking for normality in SPSS revealed that a majority of the variables failed the test, so running the usual parametric statistics on the data set was not going to work. After spending some time on Wikipedia and the Stats section of Stack Overflow, I determined that Kendall’s tau was a suitably conservative measure of correlation to proceed with.
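For anyone wanting to replicate this outside SPSS, here is a minimal sketch of Kendall's tau in Python, using toy data coded the same way as the survey variables described below; the responses themselves are invented for illustration:

```python
import numpy as np
from scipy.stats import kendalltau

# Toy data mimicking the survey coding (hypothetical, NOT real responses):
#   usage:   0 = Never, 1 = A few times, 2 = Daily or weekly (ordinal)
#   pirated: 0 = No, 1 = Yes (nominal/binary)
rng = np.random.default_rng(0)
usage = rng.integers(0, 3, size=500)
# Make "pirated despite access" loosely increase with usage frequency.
pirated = (usage + rng.integers(0, 2, size=500) >= 2).astype(int)

# kendalltau computes tau-b, which adjusts for ties - appropriate for
# heavily tied ordinal survey items like these.
tau, p_value = kendalltau(usage, pirated)
print(f"Kendall's tau-b = {tau:.3f}, p = {p_value:.3g}")
```

Because both variables take only a handful of values, almost every pair of respondents is tied on something, which is exactly why a tie-corrected rank statistic is the conservative choice here.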

Below, we can see the answer to my initial question: not really. The only thing that jumps out is a weak positive correlation of .314 between “Have you used Sci-Hub, and if so, how often?” (ordinal: Never = 0, A few times = 1, Daily or weekly = 2) and “Have you obtained a pirated journal article, through Sci-Hub or other means, despite having access to it in some manner via a university library or institutional subscription?” (nominal: No = 0, Yes = 1). That rather lackluster finding prompted me to dig deeper and look at the crosstabs, which are farther below. Perhaps the most interesting finding, other than the headline ones pointed out by Science, was that there was not even a weak correlation between age and any of the other variables. A priori I didn’t expect any strong relationships here, but the absence of even a weak correlation between age and “Do you think it is wrong to download pirated papers?” or between age and “Do you think SciHub will disrupt the traditional science publishing industry?” surprised me.

Crosstabs

Below I present all the cross-tabulations that I ran. Most did not strike me as particularly noteworthy, which is why I presented only a few of them in the presentation I gave about guerrilla access at UC Riverside in 2016. But in case others want to dig in or build on this, I have included tables for all of the variables crossed against “Have you used Sci-Hub, and if so, how often?” and “What’s the primary reason you use Sci-Hub or other pirated article repositories?”.
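For those who prefer to reproduce SPSS-style crosstabs programmatically, pandas' crosstab produces the same counts and "% within" percentages. A sketch on toy data - the column names and responses here are hypothetical:

```python
import pandas as pd

# Toy responses using the survey's value labels (hypothetical data).
df = pd.DataFrame({
    "wrong_to_pirate": ["No", "No", "Yes", "No", "Yes", "No"],
    "scihub_use": ["Never", "A few times", "Never",
                   "Daily or weekly", "A few times", "Daily or weekly"],
})

# Counts with row/column totals (SPSS "Total" row and column).
counts = pd.crosstab(df["wrong_to_pirate"], df["scihub_use"], margins=True)
# "% within" the row variable: each row sums to 100.
row_pct = pd.crosstab(df["wrong_to_pirate"], df["scihub_use"],
                      normalize="index") * 100
# "% within" the column variable: each column sums to 100.
col_pct = pd.crosstab(df["wrong_to_pirate"], df["scihub_use"],
                      normalize="columns") * 100
print(counts, row_pct, col_pct, sep="\n\n")
```

The `normalize="index"` and `normalize="columns"` options correspond directly to the two "% within" rows in the SPSS output reproduced below.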

Do you think it is wrong to download pirated papers? * Have you used Sci-Hub, and if so, how often? Crosstabulation

  Have you used Sci-Hub, and if so, how often? Total
Never A few times Daily or weekly
Do you think it is wrong to download pirated papers? No Count 3742 3210 2538 9490
% within Do you think it is wrong to download pirated papers? 39.4% 33.8% 26.7% 100.0%
% within Have you used Sci-Hub, and if so, how often? 84.5% 89.3% 91.4% 87.9%
% of Total 34.6% 29.7% 23.5% 87.9%
Yes Count 689 383 238 1310
% within Do you think it is wrong to download pirated papers? 52.6% 29.2% 18.2% 100.0%
% within Have you used Sci-Hub, and if so, how often? 15.5% 10.7% 8.6% 12.1%
% of Total 6.4% 3.5% 2.2% 12.1%
Total Count 4431 3593 2776 10800
% within Do you think it is wrong to download pirated papers? 41.0% 33.3% 25.7% 100.0%
% within Have you used Sci-Hub, and if so, how often? 100.0% 100.0% 100.0% 100.0%
% of Total 41.0% 33.3% 25.7% 100.0%

Have you used other repositories of pirated journal articles, or used the twitter hashtag #IcanhazPDF to obtain a paper. * Have you used Sci-Hub, and if so, how often? Crosstabulation

  Have you used Sci-Hub, and if so, how often? Total
Never A few times Daily or weekly
Have you used other repositories of pirated journal articles, or used the twitter hashtag #IcanhazPDF to obtain a paper. No Count 3301 2438 1816 7555
% within Have you used other repositories of pirated journal articles, or used the twitter hashtag #IcanhazPDF to obtain a paper. 43.7% 32.3% 24.0% 100.0%
% within Have you used Sci-Hub, and if so, how often? 74.4% 67.9% 65.9% 70.1%
% of Total 30.6% 22.6% 16.8% 70.1%
Yes Count 763 902 766 2431
% within Have you used other repositories of pirated journal articles, or used the twitter hashtag #IcanhazPDF to obtain a paper. 31.4% 37.1% 31.5% 100.0%
% within Have you used Sci-Hub, and if so, how often? 17.2% 25.1% 27.8% 22.5%
% of Total 7.1% 8.4% 7.1% 22.5%
Not applicable Count 373 250 174 797
% within Have you used other repositories of pirated journal articles, or used the twitter hashtag #IcanhazPDF to obtain a paper. 46.8% 31.4% 21.8% 100.0%
% within Have you used Sci-Hub, and if so, how often? 8.4% 7.0% 6.3% 7.4%
% of Total 3.5% 2.3% 1.6% 7.4%
Total Count 4437 3590 2756 10783
% within Have you used other repositories of pirated journal articles, or used the twitter hashtag #IcanhazPDF to obtain a paper. 41.1% 33.3% 25.6% 100.0%
% within Have you used Sci-Hub, and if so, how often? 100.0% 100.0% 100.0% 100.0%
% of Total 41.1% 33.3% 25.6% 100.0%

Have you obtained a pirated journal article, through Sci-Hub or other means, despite having access to it in some manner via a university library or institutional subscription? * Have you used Sci-Hub, and if so, how often? Crosstabulation

  Have you used Sci-Hub, and if so, how often? Total
Never A few times Daily or weekly
Have you obtained a pirated journal article, through Sci-Hub or other means, despite having access to it in some manner via a university library or institutional subscription? No Count 3605 1972 1205 6782
% within Have you obtained a pirated journal article, through Sci-Hub or other means, despite having access to it in some manner via a university library or institutional subscription? 53.2% 29.1% 17.8% 100.0%
% within Have you used Sci-Hub, and if so, how often? 81.9% 55.1% 43.7% 63.1%
% of Total 33.6% 18.4% 11.2% 63.1%
Yes Count 798 1608 1554 3960
% within Have you obtained a pirated journal article, through Sci-Hub or other means, despite having access to it in some manner via a university library or institutional subscription? 20.2% 40.6% 39.2% 100.0%
% within Have you used Sci-Hub, and if so, how often? 18.1% 44.9% 56.3% 36.9%
% of Total 7.4% 15.0% 14.5% 36.9%
Total Count 4403 3580 2759 10742
% within Have you obtained a pirated journal article, through Sci-Hub or other means, despite having access to it in some manner via a university library or institutional subscription? 41.0% 33.3% 25.7% 100.0%
% within Have you used Sci-Hub, and if so, how often? 100.0% 100.0% 100.0% 100.0%
% of Total 41.0% 33.3% 25.7% 100.0%

What’s the primary reason you use Sci-Hub or other pirated article repositories? * Have you used Sci-Hub, and if so, how often? Crosstabulation

  Have you used Sci-Hub, and if so, how often? Total
Never A few times Daily or weekly
What’s the primary reason you use Sci-Hub or other pirated article repositories? Other (please specify) Count 528 198 155 881
% within What’s the primary reason you use Sci-Hub or other pirated article repositories? 59.9% 22.5% 17.6% 100.0%
% within Have you used Sci-Hub, and if so, how often? 15.0% 5.5% 5.6% 8.9%
% of Total 5.4% 2.0% 1.6% 8.9%
I don’t have any access to the papers Count 1348 2043 1640 5031
% within What’s the primary reason you use Sci-Hub or other pirated article repositories? 26.8% 40.6% 32.6% 100.0%
% within Have you used Sci-Hub, and if so, how often? 38.3% 57.1% 59.2% 51.0%
% of Total 13.7% 20.7% 16.6% 51.0%
Convenience--It’s easier to use than the authentication systems provided by the publishers or my libraries Count 488 640 540 1668
% within What’s the primary reason you use Sci-Hub or other pirated article repositories? 29.3% 38.4% 32.4% 100.0%
% within Have you used Sci-Hub, and if so, how often? 13.9% 17.9% 19.5% 16.9%
% of Total 4.9% 6.5% 5.5% 16.9%
I object to the profits publishers make off academics Count 1151 700 435 2286
% within What’s the primary reason you use Sci-Hub or other pirated article repositories? 50.3% 30.6% 19.0% 100.0%
% within Have you used Sci-Hub, and if so, how often? 32.7% 19.5% 15.7% 23.2%
% of Total 11.7% 7.1% 4.4% 23.2%
Total Count 3515 3581 2770 9866
% within What’s the primary reason you use Sci-Hub or other pirated article repositories? 35.6% 36.3% 28.1% 100.0%
% within Have you used Sci-Hub, and if so, how often? 100.0% 100.0% 100.0% 100.0%
% of Total 35.6% 36.3% 28.1% 100.0%

Do you think SciHub will disrupt the traditional science publishing industry? * Have you used Sci-Hub, and if so, how often? Crosstabulation

  Have you used Sci-Hub, and if so, how often? Total
Never A few times Daily or weekly
Do you think SciHub will disrupt the traditional science publishing industry? No Count 1484 1397 1166 4047
% within Do you think SciHub will disrupt the traditional science publishing industry? 36.7% 34.5% 28.8% 100.0%
% within Have you used Sci-Hub, and if so, how often? 34.0% 39.0% 42.4% 37.8%
% of Total 13.9% 13.1% 10.9% 37.8%
Yes Count 2879 2182 1586 6647
% within Do you think SciHub will disrupt the traditional science publishing industry? 43.3% 32.8% 23.9% 100.0%
% within Have you used Sci-Hub, and if so, how often? 66.0% 61.0% 57.6% 62.2%
% of Total 26.9% 20.4% 14.8% 62.2%
Total Count 4363 3579 2752 10694
% within Do you think SciHub will disrupt the traditional science publishing industry? 40.8% 33.5% 25.7% 100.0%
% within Have you used Sci-Hub, and if so, how often? 100.0% 100.0% 100.0% 100.0%
% of Total 40.8% 33.5% 25.7% 100.0%

How old are you? * Have you used Sci-Hub, and if so, how often? Crosstabulation

  Have you used Sci-Hub, and if so, how often? Total
Never A few times Daily or weekly
How old are you? 25 and under Count 1096 957 762 2815
% within How old are you? 38.9% 34.0% 27.1% 100.0%
% within Have you used Sci-Hub, and if so, how often? 24.9% 26.6% 27.5% 26.1%
% of Total 10.2% 8.9% 7.1% 26.1%
26-35 Count 1797 1732 1471 5000
% within How old are you? 35.9% 34.6% 29.4% 100.0%
% within Have you used Sci-Hub, and if so, how often? 40.8% 48.2% 53.1% 46.4%
% of Total 16.7% 16.1% 13.7% 46.4%
36-50 Count 1007 680 418 2105
% within How old are you? 47.8% 32.3% 19.9% 100.0%
% within Have you used Sci-Hub, and if so, how often? 22.9% 18.9% 15.1% 19.5%
% of Total 9.4% 6.3% 3.9% 19.5%
51 or older Count 502 226 120 848
% within How old are you? 59.2% 26.7% 14.2% 100.0%
% within Have you used Sci-Hub, and if so, how often? 11.4% 6.3% 4.3% 7.9%
% of Total 4.7% 2.1% 1.1% 7.9%
Total Count 4402 3595 2771 10768
% within How old are you? 40.9% 33.4% 25.7% 100.0%
% within Have you used Sci-Hub, and if so, how often? 100.0% 100.0% 100.0% 100.0%
% of Total 40.9% 33.4% 25.7% 100.0%

Do you think it is wrong to download pirated papers? * What’s the primary reason you use Sci-Hub or other pirated article repositories? Crosstabulation

  What’s the primary reason you use Sci-Hub or other pirated article repositories? Total
Other (please specify) I don’t have any access to the papers Convenience--It’s easier to use than the authentication systems provided by the publishers or my libraries I object to the profits publishers make off academics
Do you think it is wrong to download pirated papers? No Count 680 4454 1485 2177 8796
% within Do you think it is wrong to download pirated papers? 7.7% 50.6% 16.9% 24.7% 100.0%
% within What’s the primary reason you use Sci-Hub or other pirated article repositories? 77.7% 88.8% 89.3% 95.4% 89.4%
% of Total 6.9% 45.3% 15.1% 22.1% 89.4%
Yes Count 195 564 178 105 1042
% within Do you think it is wrong to download pirated papers? 18.7% 54.1% 17.1% 10.1% 100.0%
% within What’s the primary reason you use Sci-Hub or other pirated article repositories? 22.3% 11.2% 10.7% 4.6% 10.6%
% of Total 2.0% 5.7% 1.8% 1.1% 10.6%
Total Count 875 5018 1663 2282 9838
% within Do you think it is wrong to download pirated papers? 8.9% 51.0% 16.9% 23.2% 100.0%
% within What’s the primary reason you use Sci-Hub or other pirated article repositories? 100.0% 100.0% 100.0% 100.0% 100.0%
% of Total 8.9% 51.0% 16.9% 23.2% 100.0%

Have you used Sci-Hub, and if so, how often? * What’s the primary reason you use Sci-Hub or other pirated article repositories? Crosstabulation

  What’s the primary reason you use Sci-Hub or other pirated article repositories? Total
Other (please specify) I don’t have any access to the papers Convenience--It’s easier to use than the authentication systems provided by the publishers or my libraries I object to the profits publishers make off academics
Have you used Sci-Hub, and if so, how often? Never Count 528 1348 488 1151 3515
% within Have you used Sci-Hub, and if so, how often? 15.0% 38.3% 13.9% 32.7% 100.0%
% within What’s the primary reason you use Sci-Hub or other pirated article repositories? 59.9% 26.8% 29.3% 50.3% 35.6%
% of Total 5.4% 13.7% 4.9% 11.7% 35.6%
A few times Count 198 2043 640 700 3581
% within Have you used Sci-Hub, and if so, how often? 5.5% 57.1% 17.9% 19.5% 100.0%
% within What’s the primary reason you use Sci-Hub or other pirated article repositories? 22.5% 40.6% 38.4% 30.6% 36.3%
% of Total 2.0% 20.7% 6.5% 7.1% 36.3%
Daily or weekly Count 155 1640 540 435 2770
% within Have you used Sci-Hub, and if so, how often? 5.6% 59.2% 19.5% 15.7% 100.0%
% within What’s the primary reason you use Sci-Hub or other pirated article repositories? 17.6% 32.6% 32.4% 19.0% 28.1%
% of Total 1.6% 16.6% 5.5% 4.4% 28.1%
Total Count 881 5031 1668 2286 9866
% within Have you used Sci-Hub, and if so, how often? 8.9% 51.0% 16.9% 23.2% 100.0%
% within What’s the primary reason you use Sci-Hub or other pirated article repositories? 100.0% 100.0% 100.0% 100.0% 100.0%
% of Total 8.9% 51.0% 16.9% 23.2% 100.0%

Have you used other repositories of pirated journal articles, or used the twitter hashtag #IcanhazPDF to obtain a paper. * What’s the primary reason you use Sci-Hub or other pirated article repositories? Crosstabulation

  What’s the primary reason you use Sci-Hub or other pirated article repositories? Total
Other (please specify) I don’t have any access to the papers Convenience--It’s easier to use than the authentication systems provided by the publishers or my libraries I object to the profits publishers make off academics
Have you used other repositories of pirated journal articles, or used the twitter hashtag #IcanhazPDF to obtain a paper. No Count 669 3372 1055 1575 6671
% within Have you used other repositories of pirated journal articles, or used the twitter hashtag #IcanhazPDF to obtain a paper. 10.0% 50.5% 15.8% 23.6% 100.0%
% within What’s the primary reason you use Sci-Hub or other pirated article repositories? 76.5% 67.3% 63.4% 69.0% 67.9%
% of Total 6.8% 34.3% 10.7% 16.0% 67.9%
Yes Count 128 1302 475 502 2407
% within Have you used other repositories of pirated journal articles, or used the twitter hashtag #IcanhazPDF to obtain a paper. 5.3% 54.1% 19.7% 20.9% 100.0%
% within What’s the primary reason you use Sci-Hub or other pirated article repositories? 14.6% 26.0% 28.6% 22.0% 24.5%
% of Total 1.3% 13.2% 4.8% 5.1% 24.5%
Not applicable Count 77 337 133 205 752
% within Have you used other repositories of pirated journal articles, or used the twitter hashtag #IcanhazPDF to obtain a paper. 10.2% 44.8% 17.7% 27.3% 100.0%
% within What’s the primary reason you use Sci-Hub or other pirated article repositories? 8.8% 6.7% 8.0% 9.0% 7.7%
% of Total 0.8% 3.4% 1.4% 2.1% 7.7%
Total Count 874 5011 1663 2282 9830
% within Have you used other repositories of pirated journal articles, or used the twitter hashtag #IcanhazPDF to obtain a paper. 8.9% 51.0% 16.9% 23.2% 100.0%
% within What’s the primary reason you use Sci-Hub or other pirated article repositories? 100.0% 100.0% 100.0% 100.0% 100.0%
% of Total 8.9% 51.0% 16.9% 23.2% 100.0%

Have you obtained a pirated journal article, through Sci-Hub or other means, despite having access to it in some manner via a university library or institutional subscription? * What’s the primary reason you use Sci-Hub or other pirated article repositories? Crosstabulation

  What’s the primary reason you use Sci-Hub or other pirated article repositories? Total
Other (please specify) I don’t have any access to the papers Convenience--It’s easier to use than the authentication systems provided by the publishers or my libraries I object to the profits publishers make off academics
Have you obtained a pirated journal article, through Sci-Hub or other means, despite having access to it in some manner via a university library or institutional subscription? No Count 656 3602 315 1312 5885
% within Have you obtained a pirated journal article, through Sci-Hub or other means, despite having access to it in some manner via a university library or institutional subscription? 11.1% 61.2% 5.4% 22.3% 100.0%
% within What’s the primary reason you use Sci-Hub or other pirated article repositories? 74.7% 71.9% 19.0% 57.9% 60.0%
% of Total 6.7% 36.7% 3.2% 13.4% 60.0%
Yes Count 222 1408 1342 954 3926
% within Have you obtained a pirated journal article, through Sci-Hub or other means, despite having access to it in some manner via a university library or institutional subscription? 5.7% 35.9% 34.2% 24.3% 100.0%
% within What’s the primary reason you use Sci-Hub or other pirated article repositories? 25.3% 28.1% 81.0% 42.1% 40.0%
% of Total 2.3% 14.4% 13.7% 9.7% 40.0%
Total Count 878 5010 1657 2266 9811
% within Have you obtained a pirated journal article, through Sci-Hub or other means, despite having access to it in some manner via a university library or institutional subscription? 8.9% 51.1% 16.9% 23.1% 100.0%
% within What’s the primary reason you use Sci-Hub or other pirated article repositories? 100.0% 100.0% 100.0% 100.0% 100.0%
% of Total 8.9% 51.1% 16.9% 23.1% 100.0%

Do you think SciHub will disrupt the traditional science publishing industry? * What’s the primary reason you use Sci-Hub or other pirated article repositories? Crosstabulation

  What’s the primary reason you use Sci-Hub or other pirated article repositories? Total
Other (please specify) I don’t have any access to the papers Convenience--It’s easier to use than the authentication systems provided by the publishers or my libraries I object to the profits publishers make off academics
Do you think SciHub will disrupt the traditional science publishing industry? No Count 301 2164 639 649 3753
% within Do you think SciHub will disrupt the traditional science publishing industry? 8.0% 57.7% 17.0% 17.3% 100.0%
% within What’s the primary reason you use Sci-Hub or other pirated article repositories? 34.7% 43.3% 38.7% 28.5% 38.3%
% of Total 3.1% 22.1% 6.5% 6.6% 38.3%
Yes Count 566 2838 1014 1627 6045
% within Do you think SciHub will disrupt the traditional science publishing industry? 9.4% 46.9% 16.8% 26.9% 100.0%
% within What’s the primary reason you use Sci-Hub or other pirated article repositories? 65.3% 56.7% 61.3% 71.5% 61.7%
% of Total 5.8% 29.0% 10.3% 16.6% 61.7%
Total Count 867 5002 1653 2276 9798
% within Do you think SciHub will disrupt the traditional science publishing industry? 8.8% 51.1% 16.9% 23.2% 100.0%
% within What’s the primary reason you use Sci-Hub or other pirated article repositories? 100.0% 100.0% 100.0% 100.0% 100.0%
% of Total 8.8% 51.1% 16.9% 23.2% 100.0%

How old are you? * What’s the primary reason you use Sci-Hub or other pirated article repositories? Crosstabulation

  What’s the primary reason you use Sci-Hub or other pirated article repositories? Total
Other (please specify) I don’t have any access to the papers Convenience--It’s easier to use than the authentication systems provided by the publishers or my libraries I object to the profits publishers make off academics
How old are you? 25 and under Count 181 1324 535 575 2615
% within How old are you? 6.9% 50.6% 20.5% 22.0% 100.0%
% within What’s the primary reason you use Sci-Hub or other pirated article repositories? 20.8% 26.3% 32.3% 25.2% 26.6%
% of Total 1.8% 13.5% 5.4% 5.8% 26.6%
26-35 Count 336 2517 762 1044 4659
% within How old are you? 7.2% 54.0% 16.4% 22.4% 100.0%
% within What’s the primary reason you use Sci-Hub or other pirated article repositories? 38.5% 50.1% 46.0% 45.8% 47.4%
% of Total 3.4% 25.6% 7.7% 10.6% 47.4%
36-50 Count 217 892 287 480 1876
% within How old are you? 11.6% 47.5% 15.3% 25.6% 100.0%
% within What’s the primary reason you use Sci-Hub or other pirated article repositories? 24.9% 17.7% 17.3% 21.0% 19.1%
% of Total 2.2% 9.1% 2.9% 4.9% 19.1%
51 or older Count 138 293 73 182 686
% within How old are you? 20.1% 42.7% 10.6% 26.5% 100.0%
% within What’s the primary reason you use Sci-Hub or other pirated article repositories? 15.8% 5.8% 4.4% 8.0% 7.0%
% of Total 1.4% 3.0% 0.7% 1.9% 7.0%
Total Count 872 5026 1657 2281 9836
% within How old are you? 8.9% 51.1% 16.8% 23.2% 100.0%
% within What’s the primary reason you use Sci-Hub or other pirated article repositories? 100.0% 100.0% 100.0% 100.0% 100.0%
% of Total 8.9% 51.1% 16.8% 23.2% 100.0%

Wrap up

The first thing to say is that the Science survey respondents and UC faculty are obviously two different populations, so we shouldn’t extrapolate findings from one to the other. With that said, my instinct is to assume that the dedicated Sci-Hub users motivated by distaste for commercial publishers were probably already using it prior to the Elsevier cancellation (and these people are real, though small in number, as Carolyn Gardner and I demonstrated here). So, knowing that there weren’t any strong correlates of Sci-Hub use within the survey, and assuming that the dedicated scholcommies were already users, the place to look in the crosstabs is the ‘I don’t have any access to the papers’ and ‘Convenience--It’s easier to use than the authentication systems provided by the publishers or my libraries’ responses. With cancellation, UC faculty (and possibly CSU faculty soon) are de facto dropped into the ‘I don’t have any access to the papers’ category - the overwhelming reason people use Sci-Hub. The ‘Convenience’ question did not address ILL, so we should not use it to inform speculation about the discrepancy in the UC ILL figures. (Though in unpublished raw data from the paper with Carolyn Gardner linked above, it was very clear that some scholars are unhappy with the ILL systems their libraries provide.)

Since we can apparently rule out a big surge in social media providing access post-cancellation, and since the UC ILL figures are much lower than anticipated, if UC faculty are continuing to cite Elsevier publications in their work (a fact which remains to be demonstrated), there seem to be only two possibilities:

  • They somehow “substitute” for the desired Elsevier content
  • They are using Sci-Hub

I’m hoping that someone will do empirical work on the citation patterns of UC faculty post-cancellation. I think we could learn a lot about both the role libraries can play in a world where Sci-Hub exists and the information-seeking behavior of scholars under a known constraint: no easy access to content from the world’s largest scientific publisher.

Finally, I’d be remiss if I did not compliment the many people from the UC system and CDL who produced the Negotiating with scholarly journal publishers toolkit. I know that several people in the CSU Libraries are poring over it right now. Kudos!

Gabriel Gardner’s notes from LibIT/STIM Day at the CSU ULMS Summer Meeting

8/7/2019

Christina Hennesey welcome

9:30
172 people attending CSU Palooza at some point over the 3 days.

CSUN Dean Opening Remarks

10:00
Mark Stover so proud that CSUN hosting first Palooza that is not at the CO. Hopes that this will become a regular thing where we can all share our practices together.
He has long recognized the importance of technology to library work and believes it will continue to be essential and ubiquitous. Librarians simply cannot afford to let their technological literacy deteriorate or stagnate.
Noted that the ‘serials crisis’ has been going on for 30 years and is still a crisis, with no end in sight except through more widespread adoption of open access and more sustainable infrastructure for publishing.

Deans’ Panel

10:15
Karen Schneider, Patrick Newel, Emily Bonney, Rod Rodriguez
Where does money come from? Multiple sources, every campus does it differently
Money solves almost all problems, but funding is political and all politics is local. Very important for the library to play politics.
Every librarian is an ambassador for the library on campus - how you behave and are perceived by peers on campus can directly impact funding.
Partnerships - on campus, out of silos if necessary - are crucial. Stamping your foot and pouting does not work, as some Deans know from experience. Bring solutions to campus, not demands. Every entity on campus ‘demands’; not everyone brings solutions and partnerships.
At Fullerton library budget has been cut or stayed flat every year for 15 years.
Communication is essential; the library and our budget should not be a black box. Making operational costs transparent can help convince others on campus of the library’s problems and situation.
Cultivating relationships with the Academic Senate and ASI is important - show our relevance.
Frame around the students - they are the #1 priority.

Q&A

  • At Chico, student lending fees are separate from tuition; some campuses don’t get student success fee money
  • How does centralized vs. in-house library IT affect how you allocate time and money? At Fullerton they have a ‘library IT team’ that is part of central IT and reports to IT but consults with the library dean - this means library resources aren’t devoted to IT personnel. Stanislaus has centralized IT plus 2 dedicated local people, which they need for survival since service from central IT is not very fast. At Chico the library reported to IT, which made it difficult to do NEW projects since IT don’t have a library mindset and aren’t interested in learning on their own what the library does and needs. When PeopleSoft came to Fresno many years ago, the PROVOST and IT wanted PeopleSoft to run the catalog - IT people think because they know computers they can do almost all library stuff, very hubristic.
  • Comment from East Bay that they have long had a hostile relationship with IT which keeps trying to take over work done by their library.
  • With new EO 1071, many new programs are coming down the waterfall but not necessarily funded appropriately. It is important for libraries to insert themselves into new-program committees so that they can raise concerns. What we in the CSU need is a study of which programs/majors are offered at the majority of CSU campuses so that ECC funding can align with the programs offered.
  • Do any libraries get funding from extended Ed to offset costs of dealing with their patrons? At Fullerton and Sonoma: Yes.
  • Any conversations going on about system-wide support for OA? UC faculty don’t have to worry about funding to make their articles OA; we in the CSU need a centralized system like theirs. But there is a HUGE tension between asking for more funding for electronic resources while simultaneously asking for money to work on a different publishing model. Nascent discussion at COLD.

Building a dynamic website events display using LibCal RSS/XML and the Drupal 8 Aggregator

11:15 AM
Christian Ward
Known issues: many but they are documented online
In Drupal 7 there was a great ‘feeds’ module that worked great. In Drupal 8 a lot of those features are native, but not as robust. You have an RSS/Atom feed somewhere that you want to bring in to Drupal.
LibCal RSS, SacState recently started using LibCal to manage events. One single point of entry for public events that gets pushed out to external systems - signs, website, etc.
Drupal 8 issues: doesn’t delete items before updating, only looks for new ones, which results in phantom items that can’t be managed; deletion is based on pubDate, and if pubDate is null the epoch date is returned; no way to configure namespace fields
LibCal RSS issues: does not provide either feed-level or item-level pubDate; all the good, important metadata is put into namespace fields
Community suggestions to fix these issues:
import XML feed via the migrate module (very complicated setup)
Final solution: create an intermediate custom script that reformats the LibCal RSS feed
Drupal points to intermediate custom script, script harvests LibCal RSS
Data is refreshed every time the Aggregator cron runs

Step 1: install and configure feed PHP script to a directory that Drupal can access, modify libcal-feed.php to point to LibCal RSS
Step 2: point the Drupal Aggregator to the intermediate libcal-feed.php script URL,
Step 3: create and configure view

View block Aggregator feed items; add feed item fields Title and Description; add filter (posted >= 12 hrs); add sort (posted on asc); make sure there is a null results display so that when there are no events in the RSS feed it doesn’t break display of Drupal page
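The intermediate-script approach above can be sketched roughly as follows. The presenter’s actual libcal-feed.php is PHP and shared only on request; this Python stand-in just illustrates the idea, and the namespace URI and field name are hypothetical:

```python
import xml.etree.ElementTree as ET
from datetime import datetime
from email.utils import format_datetime

# Hypothetical namespace URI and field name; LibCal's real ones differ.
CAL_NS = "{http://example.org/libcal}"

def normalize_feed(raw_rss):
    """Copy each item's namespaced start date into a standard RFC 822
    <pubDate> so the Aggregator's deletion logic has something to use,
    then re-emit the feed for Drupal to harvest."""
    root = ET.fromstring(raw_rss)
    for item in root.iter("item"):
        if item.find("pubDate") is not None:
            continue  # already well-formed
        start = item.find(CAL_NS + "startdate")
        if start is not None and start.text:
            pub = ET.SubElement(item, "pubDate")
            pub.text = format_datetime(datetime.fromisoformat(start.text))
    return ET.tostring(root, encoding="unicode")
```

Drupal’s Aggregator is then pointed at the script’s URL instead of the raw LibCal feed.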
Next Steps

  • Not great that the intermediate script has to use an intermediate application server
  • Ideally would be a Drupal module, living with the Drupal environment
    Need to address web accessibility standards to add Aria-label for duplicate event titles (so that screen reader understands they are unique items)
    Christian will share libcal-feed.php code on request.

Amazon Web Services (AWS) & Digital Initiatives

1:00 PM
Elizabeth Altman
CSUN used to have file and print server with extra drive hosted locally
They hosted special collections (SC/A) digital objects (and other things)
It was so full that backup was impossible, various problems with versioning and workflows
AWS cloud is hosted, resilient, backed up
Challenge: needed to move to AWS and preserve SC/A processing existing workflows as much as possible and work with existing campus tools
New hosting system needed to be: inexpensive, secure, automated, sustainable, distributed control
AWS components used: Amazon Glacier, CloudBerry Backup (a 3rd-party product), S3 storage buckets, Storage Gateway, Lambda, CloudWatch Events, and a local Windows server for automated backup jobs
New preservation copy workflow: saved to a particular drive, Cloudberry backup running on local Windows server automatically pushes things saved to AWS Glacier
Lambda rules are needed to clear cache and reinstate components of S3 buckets
Advantages:
Now paying ~$30/month to store 7 terabytes, much cheaper than doing it locally.
Much more automation than previous local systems
S3 buckets and Amazon Glacier have a 5 TB file size limit - no problem for them
Unexpected cost: a private cloud, needed to maintain connections between the various components, is $74/month
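The ~$30/month figure above checks out as a back-of-the-envelope calculation, assuming S3 Glacier’s published rate of roughly $0.004 per GB-month at the time (the actual rate varies by region and has changed since):

```python
# Assumed rate: S3 Glacier storage at roughly $0.004 per GB-month.
TB_STORED = 7
RATE_PER_GB_MONTH = 0.004  # USD, assumed

gb_stored = TB_STORED * 1024
monthly_cost = gb_stored * RATE_PER_GB_MONTH
print(f"${monthly_cost:.2f}/month")  # $28.67/month, consistent with ~$30
```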
New challenges:
Workflows must be rigorously defined and followed
Nontrivial learning curve to AWS and other software
Items preserved are the high-quality files, not the descriptive metadata associated with those files. Development would be required to preserve files and metadata simultaneously using this system
Other problem: the library made a deal and contract with AWS first; then the CSUN campus inked its own deal and forced the library onto the campus contract and systems, which required setting everything up all over again.

Innovative Use of Assistive Technology to serve users with/without disabilities

2:00 PM
Wei Ma and Cristina Springfield
https://bit.ly/CSUDHAT
What things are covered by AT? We should not account for just the ‘traditional’ disabilities but also hidden ones like cognitive and learning disabilities
“Access is Love” a recommended project (social media hashtag #accessislove)
How should we think about disabilities: previously a medical definition, we should adopt a cultural definition
Goal #1: compliance, that which is legally obligated
Traditional service model: a specialized office takes the lead that has the equipment, faculty not widely trained, focus is on compliance
Goal #2: expand access so accessibility is possible anytime anywhere (as much as possible)
Barriers: administrative burden, stigma, costs borne by students to obtain formal disabled status, and the fact that disability work has historically been divorced from other ‘diversity’ work on campuses
Goal #3: serve those not typically targeted by traditional disability service model
Many things originally implemented as so called ‘accommodations’ actually benefit everyone e.g. automatic door openers, wheelchair ramps, gender neutral bathrooms
Work on this at DH: the library didn’t have the specialized knowledge or funding for this, so they collaborated with university IT and the campus student disability services center and got a grant to purchase special hardware and software. The SDRC didn’t have space for the hardware, so the library housed it, with the SDRC paying for license fees and providing training on the software; campus IT installs the software and hardware
Now the DH library has a wonderful room that can meet almost any student’s needs, but students need to physically go visit it. Not ideal - they wanted to get it out to everyone
Project 1: text-to-speech - they licensed web-based software for this and are promoting it heavily
Project 2: speech-to-text - Windows 10 and macOS have built-in speech recognition features, so DH put up a LibAnswers FAQ on how students can use those; Google also has cloud Speech-to-Text. So no purchase by the library was necessary, but they promote helpful ubiquitous features students might not know about.
Made a LibGuide for all disability services on campus with very detailed instructions on how to use each product
They are also doing a big social media push for all their accessibility offerings
Further ideas: integrate into infolit instruction? insert a text-to-speech button on ereserves
Q&A Reflection/Share Out
What about ATI? The whole CSU is supposed to move to that. Every campus will have a steering committee or an individual ‘champion’; every library needs to work on this. ATI requirements are stricter than 508 rules.

NOTE: LibCal event registration form contains a WebAIM error, empty button, need to report to Springshare https://csulb.libcal.com/event/5638031
LibAnswers homepage (and individual FAQ entries) has 4 empty link errors, need to investigate: https://csulb.libanswers.com/
4 errors on the Primo Advanced Search page

Data Visualizations: How to Communicate your Library’s Data

3:15 PM
Jenny Wong-Welch
What is it: an umbrella term including graphs, charts, mindmaps, etc.
Uses: instruction - mindmaps; digital humanities - metadata tags; marketing the library, our spaces, our resources, our services; assessment - heatmaps, charts to understand usage patterns
Use data viz in your own research: collection analysis, citation analysis, space usage,
It can take a lot of work to make a pretty picture: very messy process:
Acquire: we have mountains of data though it is often in various areas and hard to get and aggregate together

NOTE: could add a ‘data/statistics’ type tag to the LibGuides A-Z filter list

Examine: understand the data you have, qualitative v quantitative
Parse and Filter: almost all data will require some filtering to produce a visualization that makes sense, document all steps
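As a trivial illustration of the parse-and-filter step, here is a minimal pass over hypothetical gate-count data using only the Python standard library, with each filtering decision documented so it can be reproduced:

```python
import csv, io

# Hypothetical gate-count export; real data would come from a file.
raw = """date,gate_count
2019-08-01,1200
2019-08-02,
2019-08-03,-5
2019-08-04,1450
"""

rows = list(csv.DictReader(io.StringIO(raw)))
# Document every step: drop missing counts, then drop impossible negatives.
clean = [r for r in rows if r["gate_count"] and int(r["gate_count"]) >= 0]
counts = [int(r["gate_count"]) for r in clean]
print(len(clean), sum(counts))  # 2 2650
```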
Basic Traditional Tools:

  • Excel
  • OpenRefine
  • Python/R (‘bibliomining’)

Data mining models:

  • association
  • clustering
  • classification
  • regression

Format/process: in SDSU’s makerspace they put all documentation in GitHub
3 stages of understanding: Perceiving, interpreting, comprehending
Don’t lie with data: many things can’t be compared truthfully in graphs
Context behind data is crucial
Certain types of data lend themselves to certain types of visualization

Fancier Tools:

  • Tableau
  • Simply Analytics
  • Bubblus
  • Noun Project
  • Freepik
  • Piktochart
  • Adobe Illustrator
  • D3.js
  • matplotlib in Python
  • app.rawgraphs.io

https://tinyurl.com/csudataviz

Meeting recap

4:15 PM

  • STIM: Ron Rodriguez from Stanislaus is incoming Chair; there are 2 openings
  • Schol Comm: a new committee, Patrick Newell Chair, Mark Stover vice chair. Goals: make a copyright first responders curriculum, make a guide centralizing scholcomm information for the CSU, run various surveys to gauge interest in possible initiatives, and discuss researcher profile systems in conjunction with STIM and various campus ORSP offices
  • ScholarWorks: Kevin Cloud - move from a multiple-instance approach on DSpace to a single-instance approach for all 23 campuses on Hyrax/Samvera; there are now SW interest groups of volunteers working on issues identified by a COLD report about repository needs

ULMS Summer Meeting

8/8/2019

Welcome from Mark Stover

9:45 AM
Today is the biggest crowd yet. We have such a wealth of knowledge in the system and events like this let us share it. Hopefully everyone will come away with important information today.
The behind the scenes work that people here do is essential, though we often do not get the praise and attention that others in the library do. Mark wants everyone to know they are valued! Various housekeeping notes.

John Wenzler CSUEB, chair of the COLD ULMS steering committee; thanks to the steering committee and Brandon Dudley who organized this event and made COLD aware of the importance of these events. We need to use this opportunity to build connections across campuses.

Discovery Committee meeting

10:00 AM
We need to find a new chair - Ryan Schwab is stepping down to move to UCSC. Andrew Carlos fell on his sword.

NOTE: need to email out the ELUNA presentation about Primo Analytics problems to the list because this was a Disco task force item to be done

Task force discussion - DW says we probably don’t need a discovery person on the norm rules task force; discussion
At Sac State Christian has set up Google site search to track every individual Primo search

NOTE: I am Vice Chair (by default since the other members are cycling off after this year)

We need CSU discovery drop in sessions where campuses that need help can come
Christian suggests we need a way for programmers to work together on technical issues that doesn’t place more burden on Dave and Zach. Need more of an open sharing environment - have a feature on Confluence where anyone can suggest features? Have a disco forum about ‘the lay of the land’ and how people can contribute (github pull requests, etc.)

If You Build it, Will They Come? Natural Experiments with Springshare’s Proactive Chat Reference

11:30 AM
Me, Joanna Kimmitt, Mike DeMars
My presentation about our and Dominguez Hills’ experience with pop-up chat.
Unfortunately (fortunately?) I forgot to hit the record button so it is lost to the sands of time.

Primo Usability With Diverse Populations in Mind

1:45 PM
Andrew Carlos & Lee Adams
When you look up ‘EDI’ in LISTA you don’t get things about diversity. A lot of the conversation about equity in libraries does not permeate the literature, it takes place in blogs, live presentations, etc.
Word choice is very important when thinking about this population and how they will relate to library systems, go for lowest common denominator universal understanding as much as possible
Lots of Primo UX literature is focused on the Classic UI; more studies are needed on the NUI
At East Bay they did 2 focus groups, got IRB approval, 7 students each group, incentivized with pizza and a battery pack, recorded the focus groups, then had them professionally transcribed - only after trying automated transcription, which failed
Expect students to flake out and not attend testing sessions they signed up for…
Zoom sessions had a lot of background noise - making transcription expensive
A poll of librarians shows that we think students find it more intimidating than they actually do. They seem to just use it!
Q&A
Need to test Primo with JAWS; supposedly it is hard to stop the reader at the search box
English-as-second-language speakers report that a lot of the jargon in Primo is not easily understandable

CSU GetIt Menu Update

3:15 PM
David Walker & Zach Williams
David & Zach’s plans to fix default GetIt and ViewIt menus
Some central package changes coming before the beginning of Fall semester:

  • moving the sendTo menu
  • removing the full record sidebar jump links

NOTE: I need to set up authentication in the sandbox:
change owner profile to your campus,
then set up login profiles - a local Alma internal profile will be easy to set up and will allow testing.

Wrap-up speech cancelled so we could get on the road

Notes from Ex Libris Users of North America 2019

Atlanta, GA

Narrative

From Tuesday April 30 through Saturday May 4, I attended the 2019 meeting and conference of Ex Libris Users of North America (ELUNA), which was held at the Hilton Hotel in Atlanta, Georgia. I did so in order to present on some co-authored scholarship that I have been directing with some other colleagues in the CSU and to ensure that I was aware of the most up-to-date information about the Primo discovery software. Below are my session notes, which are of varying length depending upon how informative I found each session. ELUNA is (theoretically) organized in ‘tracks’ so that sessions pertaining to the same or similar products are spread out over the course of each day and that people interested in only one aspect of Ex Libris’ products, such as Primo, can absorb all the relevant content; however the scheduling of the Primo track was definitely sub-optimal this year so I was forced to make some hard choices about which sessions to attend. Fortunately, I was able to download all the relevant slideshow files for later review. They are located at: X:\Gabriel\ELUNA2019\ELUNA_Primo_Content\

If you are interested, find my (often cryptic) notes below with the detailed session descriptions. Let me know if you have any questions.

TUESDAY, APRIL 30

6:00PM - 9:00PM OPENING RECEPTION – GEORGIA AQUARIUM

Nice aquarium, larger than ours in Long Beach, at least prior to the recent renovations there. My co-presenters and I had not had the opportunity to practice our presentation prior to the conference so after hitting the open bar we went over our slides.

WEDNESDAY, MAY 1

9:00AM - 9:20AM - ELUNA 2019 OPENING SESSION

Opening session focused on big data and AI and how it is spreading rapidly and being adopted by some libraries already. Nothing groundbreaking but everyone should be paying attention to this trend.

9:20AM - 10:20AM – EX LIBRIS COMPANY UPDATE AND STRATEGY

They think Alma has collected enough data that they can now use it to improve the product based on user behavior. To that end they have introduced an artificial intelligence assistant called “Dara”. More on that later.

Lots of hype about 4th industrial revolution, shout-outs to campuses that are full stack Ex Libris product users. Announced new “provider zone”, similar to Community Zone, where publishers can improve their metadata in the Central Index and for basic bib records.

Pledges to increase transparency in their R&D process. They have hired 20 new people in their “customer success” department - handling library reported problems. On average 30% of every new software release of Alma and Primo are improvements that come from the Ex Libris Idea Exchange. It really is important for people to vote on there. https://ideas.exlibrisgroup.com/

Coming soon: “App Center”, a collection of extensions and on-platform apps that can be easily added to Alma/Primo - easier implementation than the existing Developers Network. Introduced yet another new product in their never-ending quest to monopolize all things related to libraries: Rialto, which basically does the same things as GOBI.

10:45AM - 11:30AM - INSIGHT OR APOPHENIA? THE PITFALLS OF DATA FOR DATA’S SAKE

Google has deep-learning image searching that can basically find pictures of anything in anything now - neat for art, but a bad way to go about data analysis.

Big data is actually a giant mess; lots of implicit bias in data collection that can be very hard to detect if you didn’t collect the data, which is the situation libraries are in almost all the time. So many algorithms have high false positives and high false negative rates but we often focus on the accuracy and have a false sense of how well things based on big data work.

Garbage in, garbage out: you can’t mine gold out of crappy data - e.g. Primo Analytics sign-in data is a binary variable when actually people will do things in a session before they sign in, so calculations based on this will be inaccurate.

Data collection should be intentional - we need this for X purpose, don’t try to hoover up everything because you will probably do it poorly and won’t be able to get the insights that you want.

We should apply GDPR to everyone. Personally identifiable information is a hot potato, risky in case of hack, we should collect with DATA MINIMIZATION in mind. In line with GDPR, we must be transparent - need a privacy policy that can be easily found which lists all the data collected. As libraries we should be better than Silicon Valley et al.

No amount of data will get you out of talking to people - data never speak for themselves. Self-reported data is well known to be somewhat or very inaccurate, so you can’t rely on that just alone. You MUST use mixed methods to accurately understand the world. A la Marie Kondo, ask: does this data spark insight? If it doesn’t and contains possible PII then get rid of it.

Q&A
Talking to people can be hard - what does she recommend? Guerrilla usability testing, mini-ethnographic observation, pop-up focus groups.

11:45AM - 12:30PM - FINDING YOUR WAY THROUGH DATA WITH VALUE STREAMS

What is a value stream? A term from lean methodology: a focus on creating more “value” with the same or fewer resources. The value stream represents all the things we do to create value for our users.

At Loyola Chicago they have 1 instance of Alma/Primo but 4 libraries /campuses, requires a lot of coordination and training.
They did a user roles (i.e. permissions) project to help staff understand their roles in serving end users. They determined the types of profiles they needed to create based on position descriptions, then deleted all current roles and reassigned them based on their analysis of workflows and position descriptions. This project has streamlined onboarding of new staff, which happened recently for them, and they also discovered a lot of role/permission misalignment in existing staff capabilities.

Note: Role Area and Role Parameter is not queryable in Alma Analytics

The dilemma: have lots of very specific profiles, or fewer but more inclusive and capable ones? They went with the latter option. Is it actually necessary to tailor permissions so granularly and securely? They think the risk of giving a couple of people additional capabilities as part of a role template is outweighed by avoiding granular adjustment of permissions for each person. Pretty disappointed in this session since it was billed as having relevance to Primo but didn’t.

Q&A

  • Did they get any pushback from staff who now had more permissions and capabilities about how since they had more abilities now they might be asked to do more work, possibly outside their job description? Not yet. They have very collaborative environment and staff are not afraid to talk to management.

1:30PM - 2:15PM - IF YOU BUILD IT, WILL THEY COME?: NATURAL EXPERIMENTS WITH SPRINGSHARE’S PROACTIVE CHAT REFERENCE

My session. It was well-attended.

2:30PM - 3:15PM - BEST PRACTICES FOR SUCCESS WITH DISCOVERY

ExL advice for maintaining eresources. Need to monitor listservs, check knowledge center when there is an issue, review configuration options, and share information widely internally. Use the proper escalation path (found in the slides) to bump salesforce cases up if not getting support needed.
Noted that the resource recommender has many open fields for customization - you can recommend basically anything, we don’t use this at LB nearly as much as we could. Noted that the bX recommendation system uses global data from all Primo users, no data source customization.

All workflows should be documented in case someone is hit by a bus or leaves - note: I have not done this at CSULB.

3:45PM - 4:30PM – EMPOWERING THE LIBRARY

Dara is an AI designed to help libraries find better workflows that might not be obvious to them.
Yet another new product is on the way! - unnamed but is about resource sharing and apparently will compete with ILLiad/Tipasa.

Big deal: the Summon Index and Primo Central Index will be merging. This will affect the amount of content available to us and the material types. Details here: https://knowledge.exlibrisgroup.com/Primo/Knowledge_Articles/The_Ex_Libris_Central_Discovery_Index_(CDI)_%E2%80%93_An_Overview We will be getting moved to the new metadata stream in the first half of 2020.

4:45PM - 5:30PM - SPRINGSHARE INTEGRATION WITH EX LIBRIS TOOLS

All public facing LibApps products can integrate with Alma/Primo in some fashion.

How to get LibGuides into Primo - DO NOT use the method recommended by Northwestern librarians on the Ex Libris Developers Network; instead go LibGuides > Tools > Data Export > use the OAI link. With the LG OAI link, go to PBO and use the OAI splitter, then use pipes to pull the content - three pipes: full harvest, delete, and update. Use the scheduler to run them automatically. Some normalization may be required.
Recommended to hide display of publication date of all LA content in Primo since it grabs the originally published date, not the most recent updated date.

If you assign different dc.type values to the LG content e.g. course guide or research guide, then that is what displays as “material type” in Primo.

The other method to get LG content into Primo is the resource recommender. Either approach works for e-reserves, and also for the databases A-Z records.

LibAnswers doesn’t have OAI support but does have an API. Lots of libraries are using a dedicated LibAnswers queue to handle Primo problem reports; contact Ryan McNally at Northeastern University for details.

LibInsight is COUNTER 5 and SUSHI compliant and can ingest Alma circ and e-resource usage data.

THURSDAY, MAY 2

9:00AM - 9:45AM - SEAMLESS REQUESTING FROM PRIMO TO ILLIAD: A 5-SECOND DEMO

With APIs there is actually no reason to use webforms to talk to ILLiad anymore. They are using the TAGS section in Primo to build their customization; TAGS is a Primo directive that appears on every type and view of Primo pages.

Design principles: if you make requesting easier, people will do it more; all the data needed to place requests is in Primo already; it needs to meet accessibility requirements; keeping users in the same application reduces cognitive load and improves UX. They knew from Google Analytics that after people placed requests on the ILLiad form, they usually didn’t go back to Primo - but they want people to stay in Primo and continue to discover things. Impressive demo.

From UX studies we know that motion on a page equates to work happening for end users so they wanted something to refresh.
They have a script that populates a database table in between Primo and ILLiad so that requests from Primo go to the intermediate table then later get sent to ILLiad at 5 minute intervals. This allows for requests to be placed from Primo even if ILLiad is down/offline. The end user always sees a message that the request is placed (the request will actually be placed a bit later) unless the intermediate database goes down which it hasn’t.

Since implementing this new system, their ILLiad requests have increased 130%. They now have a little problem with being overloaded with returns but that is a good problem to have.

The code cannot accommodate multi-volume items, they have not figured a way to deal with that.
https://github.com/tnslibraries/primo-explore-illiadTabs
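The intermediate-table pattern described above can be sketched roughly like this (their actual stack, schema, and API calls are not specified in the talk; all names here are illustrative, and sqlite stands in for their real intermediate database):

```python
import sqlite3

# In-memory DB for the sketch; theirs is a real intermediate database.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE pending_requests (
    id INTEGER PRIMARY KEY, user_id TEXT, title TEXT, sent INTEGER DEFAULT 0)""")

def queue_request(user_id, title):
    """Primo-side handler: always succeeds locally, so the user sees an
    immediate confirmation even if ILLiad is down."""
    conn.execute("INSERT INTO pending_requests (user_id, title) VALUES (?, ?)",
                 (user_id, title))
    conn.commit()

def flush_to_illiad(send):
    """Scheduled job (theirs runs every 5 minutes): push pending rows to
    ILLiad via `send`, which may raise if ILLiad is unreachable."""
    rows = conn.execute(
        "SELECT id, user_id, title FROM pending_requests WHERE sent = 0").fetchall()
    flushed = 0
    for row_id, user_id, title in rows:
        try:
            send(user_id, title)
        except Exception:
            break  # ILLiad down; rows stay pending for the next run
        conn.execute("UPDATE pending_requests SET sent = 1 WHERE id = ?", (row_id,))
        flushed += 1
    conn.commit()
    return flushed

queue_request("u1", "Some article")
queue_request("u2", "Another article")
flushed = flush_to_illiad(lambda user, title: None)  # stand-in for the ILLiad API call
print(flushed)  # 2
```

Decoupling the user-facing insert from the ILLiad delivery is what lets requests succeed even during ILLiad downtime.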

Q&A

  • Do all requests go into ILLiad? Yes.
  • What about new users getting ILLiad accounts? They have a patron load into ILLiad
  • Is their ILLiad locally hosted or cloud and does it matter? It doesn’t matter as long as the ILLiad version has the API.
  • Can this system accommodate users who aren’t in the LDAP/SSO system? NO, to use this, everyone must be in one centralized authentication system.
  • How long do they keep the data in that middle level database table? They clear it out at the beginning of every semester.
  • Where do patrons go to manage the requests? Right now they have to go into ILLiad to cancel requests. But there is a plugin developed by Orbis Cascade (MyILL) that lets the MyAccount area in Primo talk to ILLiad and that is the next step in this project.

10:00AM - 10:45AM - INCREASING SEARCH RESULTS: USING ZERO SEARCH RESULTS & JAVASCRIPT TO REFRAME SEARCHES

At ELUNA 2018, people at WSU noted there were some errors in the zero search results data from Primo Analytics. These presenters pressed on, did a literature review, and dove into the data to categorize zero search results.

They found that they were getting a lot of database name searches in primo so they turned on the resource recommender.

There are searches that show up in the PA that look good - was PCI down? Was Alma offline? No way to know given the PA data, because there aren’t full timestamps, only a date indicator.

Many libraries have reported that when they moved from old primo to NUI the numbers (raw number) of zero search results counts decline dramatically - no one has been able to really explain this, Ex Libris has been asked and don’t have an explanation. At UCO they saw big drop moving to NUI and even further decline in number of zero search results queries after turning on the resource recommender.

Categories of zero search hits:

  • Boolean errors
  • spelling/typographical errors
  • nonsense queries
  • library services queries

Came up with the idea of using JavaScript to reformat queries so that they would get results, e.g. if someone searched a DOI URL, strip out everything but the actual DOI. The code used is up online. With their new JS implementation - which is on their homepage, not inside Primo - they did see a further decline in the number of zero search results. Future plans: parse ISBNs, handle punctuation, implement within Primo proper.
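Their implementation is JavaScript on their homepage; the DOI-stripping idea looks roughly like this when sketched in Python (the regex is a common DOI pattern, not necessarily the one they used):

```python
import re

# A common DOI pattern; not necessarily the exact regex they used.
DOI_PATTERN = re.compile(r"10\.\d{4,9}/\S+")

def reframe_query(query):
    """If the query embeds a DOI (e.g. a pasted doi.org URL), return just
    the DOI; otherwise leave the query alone."""
    match = DOI_PATTERN.search(query)
    return match.group(0) if match else query

print(reframe_query("https://doi.org/10.1002/asi.24156"))  # 10.1002/asi.24156
```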

11:15AM - 12:00PM - ARTIFICIAL INTELLIGENCE: IS IT A REAL THING FOR LIBRARIES?

We are definitely in a hype cycle for AI now. What even is AI? Machine learning at present. How can machine learning be brought into libraries? The standards for content delivery are set by silicon valley/amazon etc. now. We might not like it but that is just the world we live in now and libraries need to keep up.

ExL ran a survey about tech adoption and found customers are already thinking very forwardly, though only a minority thought machine learning would be implemented in their library in the next 10 years. Big data: one of the big benefits of moving to the cloud is that ExL can aggregate libraries’ data and mine it for insights across all their customers - data that was previously siloed and stored locally. ExL anonymizes all data, but even so there are clear trends that can be seen; they are already using this to power the bX recommender - see the ExL whitepaper: http://pages.exlibrisgroup.com/alma-ai-whitepaper-download

The new AI tool is Dara (Data Analysis Recommendation Assistant), meant to speed up work in Alma and reduce repetitive tasks. DARA is not trying to do anything that couldn’t already be done manually, but it lowers the bar - it brings superuser knowledge to anyone, with far fewer clicks. Through machine-learning deduping, DARA can tell when certain libraries are not doing things as efficiently as other libraries.

NOTE: make sure we are doing ‘real time acquisitions’

DARAs recommendations are available in Alma in a task list format, they will only display to users who have the permissions/roles levels high enough to actually implement the recommendation change. Coming DARA recommendations: if no item available - prompt for a resource sharing request, cataloging - locate “high quality” records for copy cataloging, generate high demand lists automatically.
ExL admits it still has a long way to go. Nothing that they are doing is “AI” yet, just machine learning for deduplication purposes and basic statistics and logic to determine applicability of recommendations.

Q&A

  • Will they be charging more for Alma AI? No, it will all be bundled in.
  • Will they do AI stuff in Primo? DARA stuff just applies to Alma, Esploro, Rialto and the behind the scenes products, there are machine learning improvements planned for Primo once the new Central Discovery Index goes into production to create relationships between records.

12:15PM - 1:00PM - WHAT ANALYTICS TELL US ABOUT FACET USE

At CU Boulder they took away the ‘discipline’ facets in Summon since UX testing showed people confused them with subject headings; now they just use LCSH. Comparing OBI (default ExL Analytics) with Google Analytics, there are pretty big discrepancies… which to trust? As percentages, there aren’t big differences in facet usage on campus vs. off campus, indicating the librarians aren’t really skewing the data at CU. ‘More’ is the 3rd most used facet. They can see a 3-step UX problem where people select a facet but then don’t click Apply.

At UMich they just use Summon for article discovery. They changed the label from "Facets" to "Filters," supported by much anecdotal evidence and some studies. They use Google Analytics too for tracking. Order of filter groups: publication date, format, subject, language - based on frequency of usage. Not counting advanced search Boolean and pre-filtering, only 3.4% of searches in the article discovery area used filters of any kind - reportedly very low compared to other places. Philosophical question: is filter use good or bad? If relevancy ranking were amazing, then filters would be unnecessary except for the broadest searches.

At Yale they use Blacklight powered by the Summon API. They have an intermediate bento result for the initial query; people then need to make an additional click to see more than the select results that appear in the bento. They also use Google Analytics. They implemented a pre-filter applying "scholarly" to all article searches (users need to actively remove the filter after searching in order to see non-peer-reviewed content). Did this change behavior and how people used facets? Since they use the API, they can't tell from the OBI stats, and there was no data in GA to support the idea that this pre-filter change affected facet usage. It appears that people will basically use whatever.

2:00PM - 2:30PM - LEARNING, RESEARCH, AND CAMPUS SOLUTIONS

Naked sales pitch for various products. Barf.

2:30PM - 3:30PM – CUSTOMER SUCCESS

Did not attend.

4:00PM - 4:45PM - ADD INSTITUTIONAL REPOSITORY CONTENT TO PRIMO: A HOW-TO GUIDE

First, determine how the IR can export metadata. Create a scope - it should be of type "collection". Use the PBO views wizard to add the scope to the appropriate view.
Create normalization rules - definitely use a template, e.g. Dublin Core. Create the pipe, then deploy all.
Check the results: look at the status and view the output even if it didn't get flagged for errors. After you get the data harvested and normalized correctly, schedule the pipe to pull regularly from the IR.

Various crosswalks between metadata standards may be required - these can be handled with norm rules. Making norm rules is an iterative process; just keep tweaking - harvesting, publishing, etc. - until you get it right. See the presentation slides for the nitty-gritty details.

Pro tip - normalize things into the appropriate case (upper/lower). The error messages are notoriously unhelpful... good luck! Getting IR data into Primo usually exposes problems with the IR data - take this as an opportunity to improve things in the IR!
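To make the crosswalk idea concrete, here is a minimal Python sketch of what a normalization step conceptually does to a harvested Dublin Core record. In production this logic lives in Primo's normalization rules, not Python, and the sample record and type mapping below are invented for illustration:

```python
import xml.etree.ElementTree as ET

# Hypothetical fragment of a Dublin Core record harvested over OAI-PMH.
SAMPLE = """<oai_dc:dc xmlns:oai_dc="http://www.openarchives.org/OAI/2.0/oai_dc/"
                       xmlns:dc="http://purl.org/dc/elements/1.1/">
  <dc:title>a study of FACET usage</dc:title>
  <dc:type>text</dc:type>
</oai_dc:dc>"""

DC = "{http://purl.org/dc/elements/1.1/}"

# Illustrative crosswalk from dc:type to a Primo-style resource type;
# the real mapping is defined in your norm rules, not here.
TYPE_MAP = {"text": "articles", "image": "images", "dataset": "research_datasets"}

def normalize(record_xml: str) -> dict:
    """Extract and normalize a couple of fields from one DC record."""
    root = ET.fromstring(record_xml)
    title = root.findtext(f"{DC}title", default="").strip()
    dc_type = root.findtext(f"{DC}type", default="").strip().lower()
    return {
        "title": title.capitalize(),  # case normalization, per the "pro tip"
        "resource_type": TYPE_MAP.get(dc_type, "other"),
    }

print(normalize(SAMPLE))
# {'title': 'A study of facet usage', 'resource_type': 'articles'}
```

Running a quick script like this against a few harvested records is one way to sanity-check your mapping logic before encoding it in norm rules and re-running the pipe.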

Q&A

  • Where are the norm rule templates? There are some included in the OOTB PBO
  • When should we reharvest? Only if you need to pull everything in again; the regularly scheduled pipe will do the work most of the time.

5:00PM - 5:45PM – CALIFORNIA USER GROUP MEETING (ECAUG)

California is the state with the most Ex Libris customers, and we also serve the most students - both of these "mosts" by quite a lot compared with other states. This is a very recent development; it was not the case two years ago. If we can get our act together, we would have massive voting clout in the NERS and Idea Exchange processes on issues that affect all of us. (It is not obvious that there are CA-specific issues around which to rally, though...)

FRIDAY, MAY 3

9:00AM - 9:45AM - UNDERGRADUATES AND DISCOVERY

Consider how librarians think vs. how undergrads think about various scenarios - we are very different and it can be hard to “unlearn/unknow” things.

Lots of literature and experience supports the assertion that undergrads' use of online search tools is almost entirely driven by class assignments. More and more the expectation is one of efficiency: undergrads don't think they have the time to take deep dives, and when you add library anxiety to the mix, there are reasons for a pessimistic outlook. The good news is that there is research demonstrating that when students can find things in the library that meet their needs, they do prefer those sources over others.

Recommended strategies:

  • teach them when to use discovery and when to use specialized databases
  • in instructional sessions have students compare search tools
  • have students compare Google/Bing to discovery - internal ExL research and other studies show that basically no one uses facets (filters) unless they are taught to do so, after which they use them a lot
  • activity: have students imagine they have to make a search engine then brainstorm how relevancy ranking works
  • activity: evaluation jigsaw to discuss results found in discovery
  • explore critical information literacy
  • do citation trail activities to have students understand citation and intellectual debts

9:55AM - 10:40AM – DIY STACK MAPS IN PRIMO

Always good to seek more input - physical wayfinding signage is incredibly important, not all fixes are catalog-based.

Problems with the OOTB RTA display: the library name is superfluous if you aren't in a consortium, and in Primo Classic there is no physical location information in the brief display. Lawrence has a very idiosyncratic physical layout of collections. Just displaying the floor number (which our Long Beach NUI Primo locations do) is a huge improvement.

They had nice existing floor plan maps posted physically around the library, and PDFs of them online. The maps, based on blueprints, were already sufficiently detailed.

The library name is a required parameter in the PNX and RTA data, so if you don't want it shown to end users you need to hide it with simple display:none CSS.

In the PBO, the “map” parameter is designed to accept a URL (though this is not obvious from the directions and documentation) and what displays to the end users in Primo is ‘Locate’. At Lawrence they had various location cleanup things they needed to do as part of this project - not applicable to Long Beach. Configuration in Alma: https://knowledge.exlibrisgroup.com/Alma/Product_Documentation/010Alma_Online_Help_(English)/060Alma-Primo_Integration/060Configuring_Alma_Delivery_System/205Configuring_the_Template_for_the_Location_Map_Link

11:00AM - 11:45AM - EX LIBRIS MANAGEMENT Q&A

Nothing of consequence was asked nor were any answers more than vague corporate-speak.

11:55AM - 12:40PM - IMPROVING THE USER EXPERIENCE IN PRIMO BY OPTIMIZING DATA IN ALMA

UT Dallas had many problems with their migration. They discovered a lot of problems with their MARC bib records after migration - many small and detailed fixes were required.
Basically, the message here is that MARC record quality still matters a lot; only so much can be fixed by changing normalization rules if the underlying data is inaccurate or incomplete. We ignore and underfund cataloging at our own peril.

1:30PM - 2:15PM - STUCK IN THE MIDDLE WITH YOU (YOUR OPEN URLS)

Their fulfillment unit wanted a "one-button" request solution; they tried to get ExL to build this, to no avail.
To approximate a one-stop shop, they hid all request options in Primo via CSS and instead force all request traffic through ILLiad, putting a request link on every physical resource (some links show up where they don't want them, but this is the only way to get the one-stop shop they want).
There were various metadata problems that they "solved" using a man-in-the-middle script that lives on their servers, corrects known problems, and cross-references for more metadata to enrich the request before it gets sent to ILLiad.

Various changes in Alma and Primo, coupled with the move to the NUI, meant that they needed to revisit their script and talk to all staff involved to see what people still wanted. They ended up rewriting the man-in-the-middle script; see the slides for details.
https://github.com/vculibraries/
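As a rough illustration of the man-in-the-middle approach (not their actual script; the specific fix-up rules below are hypothetical), the server-side logic boils down to parsing the incoming OpenURL, correcting known metadata problems, and re-serializing the request for ILLiad:

```python
from urllib.parse import parse_qs, urlencode

# Hypothetical fix-ups for known metadata problems; a real script would
# handle many more cases and could call out to external services
# (e.g. a DOI lookup) to enrich the request.
GENRE_FIXES = {"unknown": "article"}

def rewrite_openurl(query: str) -> str:
    """Rewrite an OpenURL query string before forwarding it to ILLiad."""
    # parse_qs drops blank values by default, which also clears out
    # empty fields that would otherwise confuse the ILLiad form.
    params = {k: v[0] for k, v in parse_qs(query).items()}
    genre = params.get("rft.genre", "").lower()
    params["rft.genre"] = GENRE_FIXES.get(genre, genre or "article")
    return urlencode(params)

print(rewrite_openurl("rft.genre=unknown&rft.jtitle=Serials+Review&rft.volume="))
# rft.genre=article&rft.jtitle=Serials+Review
```

The appeal of sitting in the middle like this is that no Alma or Primo configuration has to change; the downside, as their rewrite shows, is that the script silently accumulates assumptions about both systems and has to be revisited whenever either one changes.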

Unfortunately, we can’t do this at LB because it would cut out CSU+ and Link+.

2:30PM - 3:15PM - MAKING PRACTICAL DECISIONS WITH PRIMO ANALYTICS

Most usability studies on Primo are about the Classic UI, and of the very few NUI studies, none mention Primo Analytics. NDSU and Loyola Chicago have just focused on basic counts and usage, since that is what is easiest to get out of PA.

Note: single letter searches that show up in the PA queries list come from the journal A-Z list - very few know this.

They recommend updating resource recommender tags to reflect the most popular queries that might not be showing the desired results. At Loyola they have triggers for the most-used newspapers and journals that point users to the databases with the most recent or most comprehensive content. Maintenance of the resource recommender is a group effort: the experience of R&I librarians who work with the students explains a lot of the queries, and the triggers need to be revisited periodically - they meet quarterly at Loyola.

Loyola had Classic UI analytics data from before the move to new Primo, so they compared Classic and NUI post-switch to see how the NUI affects behavior. No big changes...

Usability testing supplements PA data and helps you understand it better but it sometimes doesn’t tell the same “story” as the data.

They use Newspapers Search, and in their usability tests no one could search it correctly. Newspapers Search analytics will be available at the end of May.

Problems with PA:

  • inflated popular-search totals (SF case 00552481)
  • inconsistent user group categorization (SF case 00628836)
  • inconsistent action totals depending on where the date column is included in the query/report

Side note: what the fuck, how much are we paying for this?

  • University of New South Wales in AUS has over 20 open cases with ExL about problems in PA.

Supposedly ExL is going to make PA a focus area of development - bottom line is that it needs to be better.

3:30PM - 4:00PM – ELUNA 2019 CLOSING SESSION

Business report: ELUNA has grown tremendously in the past five years, and this has created unique challenges for the organization and steering committee. ELUNA has revised and clarified its mission statement slightly. 2018 financial summary: ended the year with $315k in the bank. The majority of income comes from the conference, and the majority of expenditures are on the conference.

Added over 200 libraries from 2018 to 2019.

Next year: Los Angeles! (Looks like U$C has their greased palms all over this because no one from the CalState LA basin campuses knew about it.) May 6 - 8

CARLDIG-S 2018 Fall Program, Travel Report

Loyola Marymount University, Los Angeles

Narrative

On Friday December 7, 2018 I traveled to the William H. Hannon Library on the campus of Loyola Marymount University to attend the fall program of CARLDIG-S: Recapturing Reference: Making Research Relevant for Today’s Student. Michelle DeMars and George Martinez traveled with me.

The purpose of this travel was to present about our pop-up chat widget experience at CSULB. There has been some exploration of this type of proactive chat in the LIS literature, and we thought our experience with the flood of traffic it generated would be helpful for other librarians in the Southern California area to hear about. The full schedule is below. Some of the presentations were lackluster, but I did enjoy the presentation about chat transcript analysis by Alexander Justice.

Meals were included with registration and the food, catered by LMU, was satisfactory. This outing was beneficial on the professional networking front, as I was able to meet in person a couple of librarians whom I had previously known only over email, and to forge stronger connections with some whom I had met previously at ACRL or via personal pathways my spouse traveled when she worked at USC.

Order of Events

Recapturing Reference: Making Research Relevant for Today’s Student

A Professional Development Opportunity Hosted by CARLDIG-South

News Media Literacy Gallery Walk
Suzanne Maguire, Mt. San Antonio College

Reimagining Reference Services Using Student Reference Staff

Annie Pho, Antonia Osuna-Gardia, Wynn Tranfield, Diana King, & Miki Goral, UCLA

Data Literacy and Reference Services: Core Competencies for Supporting Undergraduate Students

Nicole Helregel, University of California, Irvine

Meet Me at the Main Services Desk: How The Claremont Colleges Library moved from drop-in reference hours to scheduled research appointments

Charlotte Brun and Kirsten Hansen, Claremont Colleges Library

Using Text Analysis of Online Reference Transcripts to Recapture Reference

Alexander Justice, Loyola Marymount University

In Your Face: Our Experience with Proactive Chat Reference

Michelle DeMars, George Martinez, Joseph Aubele, & Gabriel Gardner, California State University, Long Beach

Shamelessly Integrating Google into Reference Practice (Poster)

Maggie Clarke, California State University, Dominguez Hills

Discovery: Using Star Trek to Teach Students How Libraries Structure Access to Scholarly Journal Articles (Poster)

Laura Wimberley, California State University, Northridge

When and Where
Friday, December 7, 2018
Loyola Marymount University
William H. Hannon Library, Von der Ahe Suite
1 LMU Drive
Los Angeles, CA 90045

Parking
Parking is $12.50 and attendees are encouraged to park in Drollinger Parking Plaza. Directions and parking instructions can be located at https://library.lmu.edu/directions&parking/

Program Schedule

  • 8:45am-9:30am: Registration and continental breakfast
  • 9:30am-12:30pm: Presentations
  • 12:30pm-1:00pm: Lunch
  • 1:00pm-1:45pm: Panel Discussion

CSU ULMS Summer Meeting 2018

LOCATION: CHANCELLOR’S OFFICE, LONG BEACH

AGENDA: https://calstate.atlassian.net/wiki/spaces/ULMS/pages/610238465/

MONDAY, AUGUST 13

This day was six hours of analytics training led by Megan Drake from Ex Libris. There were many breaks and opportunities to socialize with staff and faculty from our other system campuses. We covered introductory topics for our Institution Zone, advanced topics (sadly there was no “intermediate” between introductory and advanced), and Network Zone analytics.
I learned quite a bit, as I’m sure the other people from our campus did. We are all better informed and able to do more reporting and structuring than we were prior to this day of training. The software still has some quirks and mysteries which elude me but we will now make much more progress on transparent data for librarians to use in their collection analyses.

TUESDAY, AUGUST 14

This day offered five different tracks depending upon functional area: fulfillment, resource sharing, resource management, acquisition/electronic resources, and discovery. I spent all day in the discovery track. The following individuals presented or emceed: David Walker [CO], David Palmquist [Fullerton], Nikki Demoville [SLO], Christian Ward [SacState], Megan Drake [ExL], Mike DeMars [Fullerton], and Zachary Williams [Pomona].
Some content was familiar to me, but I learned some new things. Perhaps most importantly, this meeting provided a gathering where people could meet and share their problems with Primo in a low-pressure atmosphere. All the campuses have the same basic setup and problems, while our differences provide inspiration to (possibly) improve, and food for thought.
Highlights:

  • There’s no technical impediment to bringing in e-reserves information to Primo via an automated method. This would reduce the work time that staff do creating dummy records in Alma for the links to e-reserves. The drawback is that the metadata using this automated method is very sparse. You can see the XML here: http://csulb.libguides.com/oai.php?verb=ListRecords&metadataPrefix=oai_dc&set=er_courses
  • I got some ideas from Christian and Zachary about enhancements to our No Results page. Specifically pushing people who land on that page out to WorldCat or Google Scholar.
  • The results of a new round of usability testing at 5 campuses were presented. We are already following the best practices they identified with a couple exceptions, which I’ll be fixing shortly.
  • “Library Search” the first element in the Primo header links will be changed to “New Search” to more accurately reflect what happens when you click it.
  • Testing showed that people really had problems finding the 'Expand my results' checkbox to toggle on the Primo Central Index. We already have PCI turned on for the EVERYTHING search, but I left it off by default for the ARTICLES search. After seeing the test results, I will turn PCI on by default in ARTICLES.