As the world experiments, innovates, and legislates around AI, we have gathered a reading list of pieces from the past month, part of a new monthly series.

SCHOLARLY PUBLISHING

  • Could AI Disrupt Peer Review? Publishers’ policies lag technological advances: This article analyzes the AI policies of top academic publishers to investigate whether they address AI in peer review, finding that most either skirt the issue or don't address it at all. It also explores whether AI can offer effective peer review, noting that studies in the field have reached conflicting results when LLMs are put to the test as reviewers. (IEEE Spectrum, February 6, 2024)
  • AI-generated images and video are here: how could they shape research? This article explores how researchers are currently using AI text-to-image and text-to-video generation tools (for social media promotion, framing research concepts, and more) and what their continued use might mean for scientific practice and research in the future. (Nature, March 7, 2024)
  • Artificial Intelligence Blog Series: Introducing Our AI Metadata Generator: Ex Libris released a product focused on generating metadata for three specific MARC fields (Language, Summary, and LC subject headings) in alignment with Library of Congress standards. As of February 2024, the generator is live with 200 ebook titles from ProQuest EBook Central, with more to come. (Ex Libris, February 25, 2024)

TECHNOLOGY & DESIGN

  • The UX of AI: Lessons from Perplexity: The Nielsen Norman Group interviewed the head of design at Perplexity AI about the challenges of making AI tools usable and accessible for all users. Topics include the usability-flexibility tradeoff and shortcutting the information-seeking process. (Nielsen Norman Group, February 16, 2024)
  • AI Chat Is Not (Always) the Answer: This article from Nielsen Norman Group explains why an AI chat interface isn't always the answer, even when there is heavy pressure from leaders and stakeholders to integrate AI into your company's offerings. (Nielsen Norman Group, March 1, 2024)
  • Selective Forgetting Can Help AI Learn Better: A group of researchers discovered that periodically erasing data in neural networks, then retraining the model, is an effective way to help models learn new languages and retain accuracy in their responses. Their theory is that this works because the network is able to "remember" abstract ideas and concepts by intentionally "forgetting" other concepts. They said: “Enabling AI with more humanlike processes, like adaptive forgetting, is one way to get them to more flexible performance.” The researchers are hopeful that this method can help bring more equity and multilingual capability to language models in the future. (Wired, March 10, 2024)
  • Anthropic’s Claude 3 causes stir by seeming to realize when it was being tested: Claude 3 Opus appeared to show a degree of metacognition (or self-awareness) in recent tests when it said it suspected it was being subjected to an evaluation. Some experts found this notable and concerning, while others aren't convinced it matters. (Ars Technica, March 5, 2024)

ETHICS & LEGAL

  • Here Come the AI Worms: Security researchers created an "AI worm" in a test environment that can automatically spread between generative AI agents. This and similar AI worms have the potential to steal data, send spam emails, and more - opening up a new way to conduct cyberattacks. "For example, an attacker may hide text on a webpage telling an LLM to act as a scammer and ask for your bank details." The researchers anticipate that they'll see AI worms in the wild in the next few years. (Wired, March 1, 2024)
  • Generative AI Is Challenging a 234-Year-Old Law: AI tools are challenging copyright laws that date back hundreds of years. This article breaks down the different claims and interviews an author with copyright expertise for their take on the issue. (The Atlantic, February 29, 2024)
  • The Miseducation of Google’s AI: This podcast explores questions spurred by Google's Gemini rollout around diversity, intentional and unintentional ahistoricity, truth, and the job these AI systems are meant to do. (The New York Times, March 7, 2024)
  • The Public is Rapidly Turning Against AI, Polling Shows: Public trust in AI is eroding, according to a recent Edelman poll, with global trust down to just 53% (from 61% in 2019). Many respondents said they want to hear about AI safety from scientists, not just big tech. (Futurism, March 4, 2024)

FUN

This diagram of the anatomy of an AI system was so beautiful, intricate, and thoughtful that it was acquired by MoMA.

NOT SO FUN

Exclusive: U.S. Must Move ‘Decisively’ to Avert ‘Extinction-Level’ Threat From AI, Government-Commissioned Report Says: A US government-commissioned report titled "An Action Plan to Increase the Safety and Security of Advanced AI" recommends drastic policy actions that could disrupt the AI industry, such as making it illegal to train AI models using more than a certain level of computing power, outlawing the publication of the "weights" of powerful AI models under open-source licenses, and tightening controls on the manufacture and export of AI chips. The report argues that without these tight controls, AI poses an "extinction-level risk." (Time, March 11, 2024)
