Society for Scholarly Publishing 2017 Meeting
GABRIEL’S NOTES FROM SSP 2017
General Points
From Wednesday, May 31 to Friday, June 2, I attended the 39th Annual Meeting of the Society for Scholarly Publishing in Boston, MA. I traveled there to present the findings of my co-authored research on the piracy of scholarly materials. I did so as part of a panel that also included the COO of Reprints Desk (a document delivery company) and the CEO of Atypon (which offers various SaaS platforms to publishers). Below are my session notes, which vary in length depending upon how informative I found each session.
I found the whole affair fascinating. I was one of only about 15 librarians at an event with almost 1,000 attendees and exhibitors. Conversations were very business- and product-focused; panels mainly consisted of competitors talking about their internal research and how method or technology X might solve problem Y. You might think that would make for a competitive environment, but everyone was very cordial and professional. The entire place was filled with competent and intelligent people trying to deliver good products (so that they can make as much money as possible off us). I spent most of my time eavesdropping. Keep your eyes out for more uses of ‘machine learning’/artificial intelligence in the future; lots of people are thinking about how these tools can be used in publishing and libraries. The keynotes were very good, and both incredibly pessimistic. Of particular interest to you might be my notes from the plenary New and Noteworthy Products session, which made it clear that publishers and their B2B vendors want to collect as much information as they possibly can about users of their services in order to monetize it, a very worrisome development from our perspective. Also of note is the RA21 initiative, which is a plot by the major publishers to kill IP authentication and systems like EZproxy. (Elsevier is the initiator of the initiative; that was made absolutely clear in the verbal remarks of the session. It is how they plan to end mass piracy, since the large pirate sites all rely on compromised EZproxy servers. Yet the RA21 website shows a diverse group of actors; draw your own conclusions as to how much work those others are doing.)
If you are interested, find my (occasionally cryptic) notes below with links to the detailed session descriptions and let me know if you have any questions. To my knowledge, presenter slides and content will only be available to SSP members, of which I am not one, so my notes may be as good as it gets.
WEDNESDAY, MAY 31
PREDATORS, “PIRATES” AND PRIVACY: PUBLISHERS AND LIBRARIANS EXPLORE NEW CHALLENGES IN SCHOLARLY COMMUNICATIONS
8:30 – 10:00AM
https://www.sspnet.org/events/annual-meeting-2017/2017-schedule/seminar-2/
Intro remarks from Todd Carpenter, NISO
- Trust is essential in scholarly publishing
Hillary Corbett
- Not all predators are equally bad; there are minor and major infractions
- But most blacklisting is binary; is it possible to capture the shades of gray? Should we, for our faculty?
- Low quality does not necessarily mean predatory
- Predators are getting more sophisticated, there are “spoofed” journals
- Librarians need to educate their faculty but particularly students about predatory/low quality journals because many of them are discoverable via Google and even Google Scholar
Kathleen Berryman, Cabell’s – they are taking up the work Beall did
- Cabell’s is taking the approach that “any misrepresentations” will de-whitelist a journal
- How is the whitelist different from Beall’s list?
- They are working off all “objective” metrics. Any infraction moves a journal from the whitelist to the blacklist. The blacklist tallies the number of infractions
- They don’t think this will solve the problems, awareness needs to be raised, perhaps penalizing methods developed
Rachel Burley, SpringerNature
- Some spoofed journals are almost replicating known legitimate journals right down to the same HTML/CSS templates and published version formatting
- Springer Nature is redesigning some of their journal sites to highlight affiliations with professional and scholarly associations, to bring out the trust factor
- Promoting the “Think. Check. Submit.” meme
David, OUP
- We think that the authors are the “prey” in the predator metaphor
- But maybe the authors are complicit? Then the prey are the readers or the tenure and promotion committees
- A recent study showed that 75 percent of authors in predatory journals are from Asia, particularly India.
- Time is researchers’ most precious commodity, if their oversight committees do not care about predatory journals, why wouldn’t they publish there?
- Conspiracy theorists love low quality journals
- Universities need to get on board. In India, the University Grants Commission put out a whitelist that everyone will need to use to get ‘credit’.
- Q&A Isn’t the price of APCs too high? Can’t blame it all on bad motives, many researchers don’t have funds.
Michael Clark and Jason Price
- Looked at random sample of 2015 articles indexed in Scopus
- Searched for them in several places, how many of them could you get to the full text in 3 clicks from where you started searching?
- (Gold, green, “rogue”, pirated)
- Rogue, i.e., on ResearchGate, Academia.edu, or otherwise indexed via GS but not on the publisher’s website
- Pirated, i.e., Sci-Hub
- Including rogue methods, 55 percent were available
- They walked through various search workflows to see which one gets the most content the fastest: Sci-Hub best, followed by Google Scholar; not worth it to start a search via one of the rogue OA sites
- Next project: looking at availability of OA materials in library discovery systems
- Q&A: An employee from Annual Reviews talked about how they implemented “subscriber links” in Google Scholar, in collaboration with Google, to make legal access easier. (Need to look into this.)
Craig, Silverchair
- How to be a pirate: mass downloading, data breaches, uploading of copyright-protected content, password sharing
- Why did we get sci-hub? Pricing, availability, convenience?
- The sci-hub “problem” often falls below the abuse monitoring thresholds, so it is impossible to stop through traditional methods
- What else? Takedowns, but they should not be automated too much
- Improving security of EZproxy, required https, OCLC software improvements
Daniel Ayala, ProQuest
- Some data will always be collected, this doesn’t necessarily mean privacy violations
- Lots of researchers are very suspicious about data collection, as are librarians
- Security causes access friction, publishers may want to move beyond IP authentication, if they do so, that enables even more individual data collection
- Publishers need to be transparent about what they’re doing with the data they have
- NISO has a framework for privacy
- Single Sign On is increasing, this is good for both convenience and enabling data collection
- RA21
- Maybe blockchain could be an authentication method
Peter Katz, Elsevier
- Elsevier has constant bot monitoring
- Now citation managers are allowing people to get bulk downloads for a corpus of citations; this gets flagged a lot as malicious. Elsevier wants people to download them individually…. or use their API
- Chinese ISPs, via VPNs, are the main source of Elsevier content theft now.
- Some sites now sell login credentials, phished
- Sharing SSO passwords is very dangerous (or getting phished) because it gives access to everything
- The universities where Elsevier consistently sees bulk content theft have lax security. The ones with robust security see very little activity that turns out to be malicious.
- Many publishers are developing bots/algorithms that can “hitch a ride on the search URL” and ID sci-hub traffic (which is below the bulk flagging threshold).
- Q&A
- Piracy is a symptom of failing access, customer service, or user experience.
- 2 factor authentication, if you can force it on people, will solve almost all these “problems”
- Does Elsevier think they’ll be able to take down Sci-hub? Rep says not within the next 3 years.
SPONSORED SESSION: YEWNO: UNEARTHING HIDDEN VALUE
12:30 – 1:30PM
- MIT Press is using Yewno
- Yewno is based on complex systems and linguistics analysis. (Originally developed for “econophysics” purposes)
- It is an inference engine
- “Completely unbiased” LOL, eyeroll
- They formed in 2014 and use AI to provide a topical hierarchical map of any document
- Can ingest any machine readable document, no additional metadata encoding required
- Provides sub chapter topic breakdowns
- The concept map can be set to include all the metadata Yewno knows about or only the stuff you have in your catalog
- MIT Press is using this for better discoverability and to determine which books to put in which packages.
- They have developed many use cases and tested them
- This tool would be great for collection development
- Yewno contains comparison features and longitudinal information as well, if the corpus is historical
- The AI does not use a dictionary!!!!!! Instead it looks at the context in the sentence and groups of sentences. Wow.
- They have a German version, no plans to go further now.
- Their main market is B2B, keywords for ecommerce, advertising, publishers. But some libraries are interested in using it in catalogs.
SPONSORED SESSION: SHERIDAN: THE TRANSITION FROM PDF TO HTML: OUT WITH THE OLD AND IN WITH THE NEW
1:30 – 2:00PM
- PDF was a revolutionary file format for publishers
- Historically, PDFs were much more downloaded and viewed than HTML
- Sheridan says HTML is the future! … “the PDF is dead, long live the PDF!”
- Now they want to do XML-first workflows. Start outline in XML, port to HTML, all edits then take place in WYSIWYG HTML editor that creates XML and can export to PDF once final version is ready. Called ArticleExpress.
Camille
- Talked about the Rockefeller University Press workflows. They used to use Word and InDesign; now ArticleExpress. Big time savings, on average 48 hours per article. The Word-to-InDesign conversions also allowed errors to be introduced.
- Lots of workflow details. Publishing is complicated!
- Cons of ArticleExpress: it doesn’t work in old browsers or Safari. Staff training time.
Kate, AAP
- The American Academy of Pediatrics is using the ProofExpress feature (a sub-feature of ArticleExpress)
- ProofExpress keeps all the edits in one place, so no collating of edits, no emailing a bunch of PDFs back and forth. Streamlined process. Cost savings of $23k per year.
Gary
- Very complicated material about XML…
- Sheridan also has an authoring product, AuthorCafe, to cut Word out of the picture entirely. All work done in the same cloud environment from start to finish. Huge time savings if users (submitting authors) use it.
- Q&A: the EPUB working group has been trying to kill the PDF for years. The next revision of the file format will allow for seamless HTML rendering, so this will tie together nicely.
SPONSORED SESSION: EDITAGE: HELPING PUBLISHERS GET CLOSER TO AUTHORS: PERSPECTIVES FROM A GLOBAL SURVEY OF ACADEMIC AUTHORS
2:30 – 3:30PM
- Editage company spiel
- They ran an industry survey, distributed in 5 languages, this is a sneak peek
- For many geographic areas it is not just publish or perish, but English or perish.
- These communities need help fitting into the western scholcomm ecosystem.
- Presenting preliminary results
- Biggest struggle is “manuscript preparation”, then “responding to reviewer comments”
- Most authors don’t see ethics as a problem, but a lot of journal editors do. Big gap here, suggesting authors don’t understand traditional scholcomm ethical norms.
- When seeking help, most authors just use Google, not the help pages on journal websites… LOL
- Top 3 priority for authors when selecting journal: high impact factor, have published similar papers, short time to publication
- Most authors don’t think instructions for authors are “clear and complete”. Need to usability test these.
- 34 percent of authors understand the benefits of OA. (Increased access, citation advantage)
- When authors contact journals, majority of the time they were not totally satisfied with replies.
- Asian culture has big barriers to confronting authorities, journals need to reduce barriers to deal with this.
- Questions about predatory publishers give results that show authors are aware of the problem, education efforts are working.
- However, lots of confusion about who should be listed as coauthors.
- What do authors suggest for improvement in publishing? Time to publication, peer review process, reducing bias (“perceived anti-Asian bias” in the subjective comments answers)
- Key takeaways: there is a lot that is broken. Publishers can’t fix it all. But they can improve directions, improve communication, reduce time to publication. Big pipeline problem, PhD programs are not preparing grads for nuts and bolts of how papers get written and published.
KEYNOTE: PAULA STEPHAN
4:00 – 5:00PM
https://www.sspnet.org/events/annual-meeting-2017/2017-schedule/keynote/
- Postdoctoral work is basically a jail, low pay, long hours, often not considered employees for fringe benefits purposes
- Publishing is crucial; it is how people escape the postdoctoral jail
- There is an arms race here, lots of people competing for 1st or 2nd author in a high impact journal, not many options in each field.
- Age at first grant received has increased over the past 30 years.
Read: Zolas et al., 2015, in Science.
- Most new PhDs will not go into tenure track jobs
- What are implications for publishing?
- The number of R&D firms in the US that publish has dropped dramatically; only about 10% do now.
- Many firms no longer do basic research; they are working on applied research now.
- Universities now operate as high-end shopping malls, putting up space and branding while a lot of the actual revenue comes from outside
- For many, it isn’t publish or perish, but funding or perish
- Estimated that PIs spend 42 percent of their time in grant administration/writing
- The PIs then fill their labs with temporary postdocs or grad students
- This started in the US but is expanding to other countries rapidly
- Massive risk aversion. Almost all funding is directed toward “sure bets”.
- NIH has an explicit requirement about continuity of “research direction”. A built-in reducer of risk-taking and innovation
- Read “Rescuing US biomedical research from its systemic flaws” in PNAS
- Bibliometrics reinforce risk aversion. Novel research is more likely to fail. Hardly anyone publishes negative results.
Read “Reviewers are blinkered by bibliometrics” in Nature.
- Highly novel papers are almost always published in low-impact journals.
- In short, things are disastrous, particularly as we anticipate federal funding declines. State funding of public universities continues to decline. Private foundation funding is increasing but exclusively focused on narrow applied topics - “cure industry”.
Her book: How Economics Shapes Science.
She also recommends an NBER book, The Changing Frontier: Rethinking Science and Innovation Policy.
THURSDAY, JUNE 1
KEYNOTE: JEFFREY MERVIS: SCIENCE AND THE TRUMP ADMINISTRATION: WHAT’S NEXT?
9:00 – 10:00AM
https://www.sspnet.org/events/annual-meeting-2017/2017-schedule/keynote-2/
- The “war on science” rhetoric (e.g., Chris Mooney) is overblown. Many Republicans support certain science funding, just not the funding Democrats always support.
- Every government policy is a result of money, people, and ideas. What will Trump do?
- Budget proposal has deep cuts to research funding. This comes not from a thorough analysis of the areas where science funding is wasted, but is an across the board cut.
- At present, none of the cabinet appointments have any science training or significant attachment to public higher ed.
- With the exception of NSF, all the agencies are being run by interim admins.
- Less than 10% of the spots requiring confirmation have been filled. Both Obama and Bush were much farther along at this point. Trump admin seems indifferent to full staffing.
- The Census Bureau is not fully staffed or funded, which is very worrisome since the 2020 census is coming up.
- OSTP funding is actually preserved at the Obama level, but Trump hasn’t filled leading spots. Unlikely that anyone respected by scientists will actually take the position, even if nominated.
- DARPA may get more funding, as may NASA but civilian/domestic science funding will almost certainly fall.
- The admin has put research funding in a category of programs that “haven’t proven their value”. We need to lobby to correct this.
- Indirect cost recovery: a proposal we should be scared of. Would cap NIH funding on certain projects. Trump proposal implies that universities are basically making a profit off federal NIH grants.
- Good news is that the admin is working on a report on “evidence-based policy”. We need to watch out for this. Could be good or bad.
- Pew did a poll about the science march and found support for it was partisan.
- Republicans “aren’t buying what you’ve been selling” re: the value of most science funding.
- Q&A
- What can private citizens do at federal or state level? There are dozens of power centers in DC so your efforts need to be targeted. State research funding is minimal. Perhaps get involved in state redistricting, get more pro-science people elected and in the pipeline.
- Climate change…
- What would a recovery look like? We shouldn’t think about science as a unified thing. The funding isn’t unified, the support isn’t. The sky isn’t falling at the same rate for all areas.
- Can private funding make up the difference? Doubtful. It is increasing but only marginally.
- The Trump admin hasn’t renewed the contracts of the heads of the National Academy of Sciences. Will we lose a generation of science leaders? These will get filled eventually.
CONCURRENT 1D: RESEARCH DATA POLICIES: CRAFT EFFECTIVE POLICY AND IMPROVE YOUR IMPACT
10:30 – 11:30AM
Rep from Springer-Nature
- There’s now considerable evidence that making the underlying study data available and linking it increases citation rates
- Funders are increasingly requiring open data, presently 40 major funders require it
- Journal policies on data sharing are very confused. Very little direction on how the mandates should be fulfilled.
- Springer Nature has looked at the policies of their journals; there are 4 basic categories: not encouraged, encouraged, required but no directions or clear enforcement, required and compliance monitored
- Springer Nature made their policies available CC BY
- They’ve estimated it only takes about 10 minutes for editors to get the data availability statements from the authors into the papers.
- For long term success, the policies need monitoring.
- Implementing this is time consuming, need to engage with each journal individually.
Rep from PLoS
- Open data has many benefits, replication, validation, new interpretation, inclusion in meta studies
- PLOS has a strong data sharing policy
- Less than 1 percent of authors are unwilling to share and abide by the policy
- Authors need to get credit for the data they share, the data need to be citable, and other writers need to cite it. This is an evolving area.
- Compliance is murky: does the data behind a figure need to be shared, or all the data that was collected? PLOS wants to err on the latter side.
Rep from Harvard Press
- This is newish, lots of policies still being developed and adopted. Lots of discipline variation.
- The biomedical community has led on the policy front.
- Data she’s collected show that authors comply with even the strongest policies.
- In biomedical area, trend/preference is for the data to be cited with a persistent identifier in the reference list.
- Harvard Dataverse is an open repository for any journal
- Publishers and journals need to plug supported repositories, give authors more guidance
- Ideally the journals tech systems can work with any APIs that repositories have. (Repositories need APIs…)
Rep from Dryad
- All data in there is from something that has been published.
- Every item is “light touch” human curated.
- It is a non profit governed by a board of customers.
- Open data should be thought of as preservation, not just for reproducibility
- The best time to get the data into a repository is before the article is actually published.
- End result is an article and a dataset that link to and cite each other, API allows embargoes
- Curation involves making sure data is not sensitive or violating IP, that it is in appropriate file format and doesn’t have a virus.
- Data download is free, the journals pay to have the data in Dryad.
CONCURRENT 2C: PREPRINTAPOOLOZA! WHAT’S HAPPENING WITH PREPRINTS?
2:00 – 3:00PM
Jessica Polka, ASAPbio
- Preprints are increasing!!!
- Researchers are concerned: about getting scooped (not really a valid concern), about presenting non-peer-reviewed results (people do this at conferences all the time), and that preprints will expose potentially misleading information to the general public
- Most preprints in arXiv appear in a peer reviewed journal eventually
- Transparency about the peer review status of preprints is essential. No one benefits from conflating preprints with a finalized version.
- Should preprints be cited? If not, we may be encouraging plagiarism.
Lisa
- How does preprint usage differ across disciplines?
- The Broad Institute (MIT & Harvard) looked into this
- The Broad office of communication does not publicize preprints or any preliminary results
- Any intellectual property claims need to be filed before a preprint is made publicly available
- Big differences among Broad researchers in preprint use; appears to depend on general journal policies for the field (i.e., whether preprints are allowed in main journals or not)
Rep from Cold Spring Harbor laboratory
- bioRxiv is modeled on arXiv
- All the versions are available; when something is published, it gets linked to the publisher’s version of record
- Then water came through the ceiling!!!!!!!!!!!!!! (We moved rooms)
- bioRxiv is growing exponentially at the moment
- Even though the papers haven’t gone through peer review, everything uploaded to bioRxiv gets viewed by a biologist to verify “fit”.
- bioRxiv is supported by the Chan Zuckerberg Initiative
- They are looking for ways to automate their human upload review and improve the conversion of documents to XML
Rep from PLoS
- Both Wellcome Trust and CZI are requiring OA deposit and accept preprints.
- This, along with the many preprint servers launched in the past 4 years, has big disruptive potential.
- Preprints accelerate the discovery process and publishing process.
- PLOS will accept preprints from bioRxiv; they also comb the preprint servers and solicit the authors to publish in PLOS. This is huge; it changes the dynamics between authors and publishers
- Technology interoperability has improved dramatically in past 5 years, this allows rapid conversion of documents. Easy for authors to shift manuscripts around from journal to journal if it is on a preprint server that has an API (and the final publisher can interact with it).
- Q&A
- Licensing….. very complicated.
- The folks at arXiv still get “anonymous” comments/reviews via email because reviewers don’t want their criticism to be visible to the authors. Very interesting. Open peer review has a long way to go; not a proven model.
CONCURRENT 3A: WILL RICHER METADATA RESCUE RESEARCH?
4:00 – 5:00PM
https://www.sspnet.org/events/annual-meeting-2017/2017-schedule/concurrent-3a/
Some panelist
- Metadata needs to be better, richer, more accurate.
- More and more, people are trying to map interconnections.
- Lots of publishers “set it and forget it” once they set up a metadata workflow. Metadata2020 is trying to improve this.
- The entire scholcomm community needs to be on board with this.
- Automation of this process is key, the volume is simply too large to use people.
- Finer grained is always better
Rep from ARL
- Research libraries are reliant on good metadata. Institutions want to know all their research outputs. This is very hard without reliable quality metadata.
- Researcher affiliation is surprisingly hard, requiring human curation since use of standard controlled vocabulary isn’t widespread
- Long discussions ensued. One obvious takeaway, everyone needs to use ORCID! Require one for all your PhD students!
- What are the tradeoffs in having the researchers/depositors put in the metadata at time of submission? Is there a better way?
- Maybe making metadata enrichment more collaborative can help.
- What if we used machine learning/AI to generate the metadata? What accuracy level would be acceptable in order to accept the richer records? To have a general use AI comb over so much content, the publishers would need to allow it in their licenses.
FRIDAY, JUNE 2
PLENARY: PREVIEWS SESSION – NEW AND NOTEWORTHY PRODUCT PRESENTATIONS
9:00 – 10:30AM
- Lots of innovation and disruption in the field
- Innovation is not a luxury, it is essential to keep customers now
- Atypon. Seamless individual authentication. Atypon’s shared WAYF cloud: log in once, the participating publishers share user authentication data, so access is seamless across sites and users can be tracked across sites. Will launch late 2017.
- BenchSci. The trope is that scientists engage with the literature only sporadically to keep up to date; in actuality, many consult it in small doses every week. People are often deluged in content, though. BenchSci is an AI product that reads through papers to find biomedical information and allow filtering.
- Code Ocean. Finding code or algorithms used in research is difficult. Not always available, many languages used, may need debugging. Code ocean hosts code in the cloud, supports all major languages, allows people to run code in the cloud and view in browser. It is a reproducibility platform. Cuts out the GitHub and downloading and other tweaking middlemen processes.
- Crossref Event Data. Lots of places on the web refer to scientific information. Crossref and DataCite collaborated to develop a tracking mechanism for these references (tweets, Wikipedia, Facebook, blogs, etc.). Not a metric; no user interface. All the info goes through the API, and interpretation of the data is done by the end user, not Crossref.
- Hypothes.is. W3C approved an annotation standard this year, so annotation is coming to the web in a big way, fast. Hypothes.is is a nonprofit. Annotation allows all documents to become living: the version of record is preserved but able to be continually updated if necessary. Journal clubs are forming using the software; they are also trialing it for peer review.
- LibLynx. Identity management, which in a world of single sign on allows tracking and personalization. Presently, identifying patrons is siloed across many platforms, and users can have access through many IDs. Users have a disconnected access experience, often having to log in 2x (library, then platform). By harmonizing ID management, LibLynx allows cross-site tracking and upselling.
- NEJM knowledge +. A learning and assessment tool for physicians to pass their board exams. Uses AI to benchmark where people are and only show them relevant content to get them up to the next benchmark level. Originated after detailed consultations with doctors.
- PSI. Firm that “keeps content safe”. They offer subscription fraud detection services and have recovered over $50 million in damages for their clients in the past 10 years. They have a validated database of IP addresses (and an API) to ensure that no one is getting unintended access as universities change their IP addresses. They also offer bribery detectors, LOL.
- RedLink. Has a new platform, Remarq. It is their strike back against Academia.edu and ResearchGate. It embeds in existing websites and allows comments and annotation, as well as circulation “sharing”. All of this requires only the addition of JavaScript to existing sites. They also offer polling and user interaction detached from an article.
- SCOPE. Publishers need better metadata to sell their books; to do this for the backlist catalog, they have an AI product, “conCise”, that generates abstracts and keywords for each chapter, which can then go to human curators to verify. Several publishers on JSTOR already use it for chapter abstracts and subject headings
- Digital Science. Dimensions for publishers. There aren’t many accurate leading indicators of use. Dimensions is a knowledge platform that allows tracking and aggregation of funding information, useful for soliciting reviews, IDing conflicts of interest, and scouting new topic areas for sales.
- UC Press. Editoria, used for their OA imprint Luminos. It is a web-based open access workflow management and book development tool. It ingests Word files; everything else can be done in the platform all the way to the final product. Based on the PubSweet framework. Partnership with the Collaborative Knowledge Foundation.
- U Michigan Press. Fulcrum. A platform that allows authors to incorporate multimedia content into their work. Ebooks don’t allow this (well). Fulcrum is a scalable platform that allows this and has the player technology embedded in it, so it meets preservation requirements. Open source.
- Zapnito. They offer white-label “knowledge networks” and “feeds”. Lots of buzzwords about knowledge frameworks; I’m not really sure what this actually does… but both SpringerNature and Elsevier are customers.
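The Crossref/DataCite event tracking item above is API-only: no UI, with interpretation left entirely to the end user. As a rough sketch of what querying such a service might look like, here is a snippet that just builds a query URL; the endpoint and parameter names are my assumptions based on public documentation, not anything stated in the session.

```python
# Hedged sketch of an Event Data-style API query (URL construction only).
# Endpoint and parameter names are assumptions, not verified against the
# live service.
from urllib.parse import urlencode

BASE = "https://api.eventdata.crossref.org/v1/events"  # assumed endpoint

def events_query(doi: str, source: str = "twitter", rows: int = 100) -> str:
    """Build a query URL for events that reference a given DOI."""
    params = {
        "obj-id": f"https://doi.org/{doi}",  # the work being referenced
        "source": source,                    # e.g. twitter, wikipedia, reddit
        "rows": rows,                        # page size
    }
    return f"{BASE}?{urlencode(params)}"

print(events_query("10.1000/xyz123"))
```

The point of the design, as presented, is that Crossref hands over raw events and stays out of the metrics business; anything like an "altmetric score" would be computed by whoever consumes this feed.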
CONCURRENT 4E: BUT WHAT DO THE USERS REALLY THINK? USER EXPERIENCE AND USABILITY FROM ALL SIDES OF THE TABLE
11:00 – 12:00PM
- Little solid research on information seeking behavior beyond the STM world. Humanities publishers have a long way to go, need to do more usability testing
- At EBSCO they moved user research into the product management department out of the quality assurance department. They are working continuously with product owners of ebooks to improve service.
- Everyone should read “Democratization of UX” on Medium. We need to go beyond white papers. They are fine, but key points need to break out of the paper container into talking points and query-able databases.
- Instead of general user testing, to make the results widely understood, many are using ‘user personas’. Then they create testing panels of users that come close to the personas.
- In order to make research actionable, the research questions need to be actionable. UX testers also need to stay ahead of the user. Henry Ford: “If I had asked my customers what they wanted, they would have told me ‘a faster horse’.”
- To make better products, publishers, middlemen vendors, and libraries must collaborate on user research.
CONCURRENT 5E: WHO’S FASTER, A PIRATE OR A LIBRARIAN? SCI-HUB, #ICANHAZPDF, ALTERNATIVE ACCESS METHODS, AND THE LESSONS FOR PUBLISHERS AND THEIR CUSTOMERS
1:30 – 2:30
My panel. Slides not posted online.
CONCURRENT 6F: IMPROVING ACCESS TO SCHOLARLY RESOURCES FROM ANYWHERE ON ANY DEVICE
2:30 – 3:30
Ralph, ACS
- RA21
- Access methods have changed a lot in the past 30 years but not nearly as much as they could have changed.
- IP authentication was invented in the mid 90s and we’re still using it. Why? There are other ways.
- Detailed walkthrough of how hard it is to get to an article from off campus.
- RA21 is joint initiative between NISO and STM to move away from IP authentication
- Researchers want seamless access from multiple devices
- IP authentication facilitates piracy (Sci-Hub leverages IP authentication). Publishers want to be able to shut out individual users they suspect of piracy, but shutting off an IP address affects hundreds or thousands of people.
- There is a black market for university login credentials
R. Wenger from MIT
- Read Roger Schonfeld’s post on Scholarly Kitchen, 2015/11/13
- IP authentication is an anachronism and “not sustainable” in the long term.
- Academic libraries and publishers have an opportunity to collaborate on a new system
- Libraries care about privacy and walk ins, protection of usage data
- RA21 is going to use a SAML-based system (Shibboleth is one implementation).
- There is a lot of inertia behind IP authentication in libraries that will need to be overcome
- Benefits for libraries: no need to maintain IP ranges, reduced use of proxy servers, identity management likely outsourced to campus IT, granular user data.
- rwenger@mit.edu
Chris from Elsevier
- SAML is a federated identity management technology (Shibboleth is one implementation)
- RA21 is running three pilot programs right now. They have a privacy-preserving WAYF (where are you from) option.
- How it would work: sign in once using a university email; it gets stored in the browser and you are good to go forever on that device in that browser (unless you delete the cookie). The cookie preserves privacy by storing only the email’s domain, so no individual data is stored.
- www.ra21.org
- OCLC has been involved in the discussions but is not a formal partner.
- This is not “a cross-publisher login system”. Rather, users always log in to their institutional SSO and are then directed back to the content.
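The privacy-preserving WAYF mechanism described above can be sketched in a few lines: the browser keeps only the email domain, and the domain is then used to find the institution's identity provider. Everything here (the registry, function names, URLs) is an illustrative assumption, not RA21's actual design.

```python
# Minimal sketch of a domain-only WAYF hint, assuming a hypothetical
# registry mapping email domains to SAML IdP endpoints. Not RA21 code.
from typing import Optional
from urllib.parse import quote

IDP_REGISTRY = {  # hypothetical domain -> IdP mapping
    "mit.edu": "https://idp.mit.edu/shibboleth",
    "example.edu": "https://sso.example.edu/idp",
}

def wayf_hint_from_email(email: str) -> str:
    """Return only the domain part of the email -- the piece a browser
    cookie would store, so no individual identity is retained."""
    return email.strip().lower().rsplit("@", 1)[-1]

def redirect_url_for(domain: str, target: str) -> Optional[str]:
    """Look up the IdP for a stored domain hint and build a login redirect
    back to the requested content (roughly SP-initiated SSO)."""
    idp = IDP_REGISTRY.get(domain)
    if idp is None:
        return None  # unknown institution: fall back to the full WAYF picker
    return f"{idp}?target={quote(target, safe='')}"

hint = wayf_hint_from_email("j.doe@MIT.edu")
print(hint)  # mit.edu -- note: no username is kept
print(redirect_url_for(hint, "https://publisher.example/article/123"))
```

This is why the publishers can call it privacy-preserving: the cookie can route you to your institution's login but cannot identify you, since identification happens only at the institutional SSO.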
CLOSING PLENARY: DESSERT COURSE: A DISCUSSION WITH THE SCHOLARLY KITCHEN CHEFS
3:30 – 4:30
https://www.sspnet.org/events/annual-meeting-2017/2017-schedule/closing-plenary/
Scholarly Kitchen discussions
David Crotty, OUP
- Why hasn’t scholarly publishing been disrupted already?
- The number of authors and articles increases every year. But the subscriptions decline almost every year. Thus new growth strategies need to emerge.
- Publishers are increasingly inserting themselves in every aspect of the research and dissemination process. (Vendor - publisher mergers)
Todd Carpenter, NISO
- Money is the big problem: states, universities, students, everyone is tapped out. The forecast is that resources will only stagnate or decline
- Has libraries’ devotion to privacy set them back? How will they deal with the personalization that users “want”?
Bob, American Mathematical Society
- For most societies, membership is declining, revenue is stagnant or declining
- They need to demonstrate their value better
- To outsource or not?
- There are big questions of scale; societies have trouble competing with the major publishers. Can they remain independent and in-house?
Kent, Redlink
- Politics is now greatly affecting the scholcomm ecosystem
- New UK draft policy would require researchers to show their funded work to government before publication
- AI may save us. Or it may destroy things.
- Now that the intermediary institutions are Silicon Valley (not neutral), we need to continually build in skepticism about the algorithms that are determining the results we see.