As the world experiments, innovates, and legislates around AI, we have gathered a reading list from the last month, part of a new monthly series. (Read the last issue here.)

Scholarly Publishing

  • Silverchair’s AI Lab Launches Prototypes to Transform Science Communication: We’re biased, but we think this announcement is pretty exciting. The AI Lab is a flexible space where we can bring prototypes to our clients quickly, testing their effectiveness and ensuring they are helping publishers achieve their goals, from improving user experience to protecting the value of content in an evolving knowledge ecosystem. The first outputs of the AI Lab include a content discovery and recommendation tool that creates new ways for users to interact with content using RAG frameworks; SilverChat, which acts as a personal Silverchair Platform expert for clients; and AI-generated summaries that make research findings more accessible by generating plain-language explanations of scholarly journal articles. (Silverchair, January 29, 2024)
  • Researchers plan to release guidelines for the use of AI in publishing: In 2023, journals and publishers scrambled to release guidelines for how AI could and could not be used in publishing workflows. Now, to standardize (and simplify) those guidelines, a group of researchers aims to find consensus around a shared set of rules for the use of AI. (Chemical & Engineering News, January 19, 2024)
  • There is More to Reliable Chatbots than Providing Scientific References: The Case of ScopusAI: A researcher shares her experience with beta testing Scopus AI, detailing challenges like plagiarism, fact-checking false information/hallucinations, and a lack of transparency. (The Scholarly Kitchen, February 21, 2024)
  • Scientific Journal Publishes AI-Generated Rat with Gigantic Penis In Worrying Incident: And of course, no roundup is complete without mentioning the now infamous Rat Penis Debacle. (Vice, February 15, 2024)


  • Accelerating AI Skills: Preparing the Workforce for Jobs of the Future: Amazon Web Services released a report that contains insightful takeaways for those working in technology: “Surveyed employees anticipate AI will have some positive impact on their career (84%). Moreover, nearly eight in 10 workers (79%) are interested in developing AI skills to advance their careers. The top three reasons employees cited a desire to learn AI skills are: improved job efficiency (51%), higher salary (44%), and faster career progression (42%). Employers indicate they would pay a salary premium for workers with AI skills. This wage premium could be at least 30% and varies by department.” (Amazon Web Services, November 2023)

Legal & Ethical

  • Anthropic researchers find that AI models can be trained to deceive: Anthropic researchers have found that the most commonly used AI safety techniques have little to no effect on models' deceptive behaviors. One technique, adversarial training, even taught the models to conceal their deception. It's not currently clear whether deceptive behavior can emerge in the wild (i.e., without explicit training on deception). (TechCrunch, January 13, 2024)
  • IFI Insights: Opening the Patent Picture on Generative AI: This report from IFI Claims offers a snapshot of how many gen AI patents have been filed in the last few years, and by whom. (IFI Claims, February 6, 2024)
  • What was Sora trained on? Creatives demand answers: OpenAI hasn't said where the training data for Sora came from, leading many to speculate about whether the training data included copyrighted content (likely) and whether artists and creatives have any rights or recourse: "...publicly-available doesn't always translate to public domain." (Mashable, February 16, 2024)
  • Why The New York Times might win its copyright lawsuit against OpenAI: This article, written by a journalist and a lawyer, warns AI companies about the potential perils of copyright infringement by comparing OpenAI's fight against the NYT's copyright case to an earlier fight against the recording industry back in the early 2000s. They argue that fair use isn't designed to scale, diving into detail on what companies have to consider in potential fair use cases and outlining the specific hurdles these companies will have to clear. (Ars Technica, February 20, 2024)


