As the world experiments, innovates, and legislates around AI, we have gathered a reading list from the last month, the fifth in a new monthly series. (Read the last issue here.)

Scholarly Publishing

  • Large Language Publishing: This long-but-worthwhile thought piece explores:
    • The idea of "surveillance publishing," the concept that big publishers can extract data from customer and scholar behavior to feed predictive models that, in turn, get refined and sold to their customers at a high cost
    • The potential for the scholarly record to serve as a "hallucination-slayer" (or not)
    • The AI acquisition binge in the scholarly publishing space
    • What AI could mean for the future of Open Access (Force11, January 2, 2024)
  • The Truth Is in There: The Library of Babel and Generative AI: In a guest post, Isaac Wink, Research Data Librarian at the University of Kentucky, compares generative AI to the Library of Babel, noting that information produced by LLMs is not inherently trustworthy and must be fact-checked before being shared. (The Scholarly Kitchen, December 20, 2023)
  • The Future of Data in Research Publishing: From Nice to Have to Need to Have? This just-accepted article explores the necessity of clean, open data to ensure the validity of content produced and analyzed by generative AI: "The current state of open data in scholarly publishing is in transition from 'nice to have' to 'need to have.'" (Harvard Data Science Review, December 21, 2023)
  • Is AI leading to a reproducibility crisis in science? This article explores a worry expressed by some scientists: that ill-informed use of artificial intelligence could generate a flood of unreliable or useless research, leading to a reproducibility crisis in science. (Nature, December 5, 2023)
  • Fortune Brainstorm AI Conference: Themes and Ideas: Chef Ann Michael shares her takeaways from the Fortune Brainstorm AI conference. All are useful and pragmatic, but our favorite was: "Proceed with caution, but proceed!" (The Scholarly Kitchen, December 20, 2023)

Hot Takes

  • Pluralistic: What kind of bubble is AI? Cory Doctorow, known for his hot takes, explores the AI bubble, the risk tolerance of high-value AI applications, and what might happen if and when the AI bubble bursts. "The profit-generating pitch for high-value AI applications lies in creating 'reverse centaurs': humans who serve as appendages for automation that operates at a speed and scale that is unrelated to the capacity or needs of the worker." (Pluralistic, December 19, 2023)
  • Facebook Is Being Overrun With Stolen, AI-Generated Images That People Think Are Real: A sentence you probably never thought you'd read: AI-generated images of wooden dog sculptures have gone viral on Facebook. As ridiculous as the headline sounds, the phenomenon raises questions about how to ascribe credit to otherwise stolen artwork, as well as how easy it is to spread misinformation using AI-generated content. (404 Media, December 18, 2023)
  • AI's colossal puppet show: "Stop saying, 'AI did this' or 'AI made that.'" Why? Because in doing so, we attribute a certain amount of agency to AI tools, which can distance their designers and creators from assuming responsibility for their outputs. Language is powerful: by framing LLMs and AI products as tools created by individuals and organizations, we help to hold those same creators accountable for what they release. (Axios AI, December 20, 2023)

General

  • AI, and everything else: Delivered as a keynote at the Slush conference in Helsinki, this presentation from tech analyst Benedict Evans explores AI as a macro trend in 2023 and asks questions about what the future looks like with gen AI. Thought-provoking and rich with data to support the ideas. (Benedict Evans, December 2023)
  • NSF launches EducateAI initiative: The National Science Foundation announced a new EducateAI initiative, which aims to enable educators to make high-quality, audience-appropriate artificial intelligence educational experiences available nationwide to K-12, community college, four-year college, and graduate students, as well as adults interested in formal training in AI. (National Science Foundation, December 5, 2023)
  • Liquid AI, a new MIT spinoff, wants to build an entirely new type of AI: And that type is liquid neural networks. TechCrunch explains what a liquid neural network is, where the idea originated, how it differs from the well-known GPT model, and where it has advantages over other forms of generative AI. (TechCrunch, December 6, 2023)
  • Classifying Source Code using LLMs — What and How: This piece is chock-full of great advice on using LLMs. The featured case study centers on determining whether a given piece of code is malicious. One key piece of advice: running an LLM is expensive, so first check whether another method might work for you. (Towards Data Science, December 28, 2023)
Thanks for reading along this year! Have an article we should include in a future issue? Email info@silverchair.com.
