As the world experiments, innovates, and legislates around AI, we have gathered a reading list from the last month, the fourth in a new monthly series. (Read the last issue here.)


  • Unlocking insights in scientific literature: As part of the launch of the new Gemini model, Google released a video exploring a research use case: reviewing hundreds of thousands of papers, extracting relevant data to build or update a dataset, then scanning an existing graph and updating it with the new data. (Google, December 6, 2023)
  • Ask The Chefs: The US Executive Order on Artificial Intelligence: This article offers a wide range of perspectives and hot takes on the US Executive Order on Artificial Intelligence, such as: "This is the worst time to be making formative decisions — and most particularly regulation — about the governance, shape and impact of a technology, since expectations are so out of step with reality." (The Scholarly Kitchen, December 4, 2023)
  • Food for Thought: What Are We Feeding LLMs, and How Will this Impact Humanity? In a recap of his Platform Strategies opening keynote, Silverchair CTO Stuart Leitch explores the question, "What happens when we have models growing ferociously in capability, but we decline to train them on the very best sources of human wisdom and instead have them learn on the longer tail of less rigorously curated information, or information that is out of date?" (The Scholarly Kitchen, December 11, 2023)


  • Big AI Companies Find a Way to Identify AI Data They Can Trust: The Data & Trust Alliance has developed standards for describing the origin, history and legal rights to data. The standards are essentially a labeling system for where, when, and how data was collected and generated, as well as its intended use and restrictions. The Alliance is a nonprofit group made up of large companies and organizations, including American Express, Humana, IBM, Pfizer, UPS and Walmart, as well as a few start-ups. The alliance members believe the data-labeling system will be similar to the fundamental standards for food safety that require basic information like where food came from, who produced and grew it and who handled the food on its way to a grocery shelf. (The New York Times, paywall, November 30, 2023)
  • Meet the Lawyer Leading the Human Resistance Against AI: “I’m just one piece of this—I don’t want to call it a campaign against AI, I want to call it the human resistance.” Programmer-turned-attorney Matthew Butterick is the lawyer behind several of the big IP cases facing AI companies right now, including cases against OpenAI and Meta. (Wired, November 22, 2023)
  • ChatGPT's training data can be exposed via a "divergence attack": LLMs like ChatGPT are trained on vast amounts of text data from books, websites, and other sources. While their training data is typically a secret, a recent study by Google DeepMind, the University of Washington, UC Berkeley, and others found that LLMs can sometimes remember and regurgitate specific pieces of the data they were trained on with the right prompts. This phenomenon is known as "memorization." One of the most concerning findings was that the memorized data could include personally identifiable information (PII), like email addresses and phone numbers. (Stack Diary, November 29, 2023)


  • Welcome to the Gemini Era: After a year of AI offerings hastily rushed to market, Google (no surprise) delivered the sleekest rollout of an AI product we've seen to date. With impressive stats, demos, use cases, and more, Google's Gemini launch clearly demonstrated the power of the new tool in an accessible way. (Google, December 7, 2023)
  • Perplexity Introduces Online LLMs With Real-Time Information: AI startup Perplexity's new LLMs can leverage real-time data from the internet to provide responses. The pplx-7b-online and pplx-70b-online models are publicly accessible via the Perplexity API and Labs web interface. (Search Engine Journal, November 29, 2023)
  • Paid Leave AI: Spearheaded by Moms First (a nonprofit started by the founder of Girls Who Code), Paid Leave AI helps New York moms and caregivers navigate paid family leave. The system feeds users’ situations into OpenAI's GPT-4 to help determine whether they're eligible for paid leave, which forms they need to complete, and what information they’ll need to gather. (Axios AI, December 5, 2023)
  • Amazon finally releases its own AI-powered image generator at AWS re:Invent 2023: Amazon's Titan Image Generator is now available in preview for AWS customers on Bedrock (Amazon's AI development platform). Amazon says that Titan Image Generator was trained on a “diverse set of datasets” across a “broad range of domains,” can be optionally fine-tuned on custom datasets, and includes built-in mitigations for toxicity and bias. The source of those datasets and the terms on which they were ingested, however, remain undisclosed. (TechCrunch, November 29, 2023)


  • When AI Unplugs, All Bets Are Off: This article imagines a future of AI assistants in a world where edge computing is the primary option for ensuring user privacy and chatbot speed. Since edge computing allows for a lot of individualization, it makes sense that personalized AI assistants would get weird fast. (IEEE Spectrum, December 1, 2023)
  • Amazon’s AI Reportedly Suffering “Severe Hallucinations”: Amazon's new AI chatbot, Q, reportedly suffers from severe hallucinations and is prone to leak confidential data. Fun! (The Byte, December 4, 2023)
  • Number of websites blocking Google-Extended jumps 180%: The number of websites blocking the Google-Extended standalone product token has jumped 180% since Google introduced the control in September. More than 250 websites now block Google-Extended, including The New York Times, Yelp, and 22 Condé Nast properties. The token allows sites to block Bard, Vertex AI generative APIs, and future generations of Google models from accessing their content. (Search Engine Land, November 27, 2023)
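For readers curious about the mechanics: Google-Extended is honored as a user-agent token in a site's robots.txt file, so opting out takes just a couple of lines. A minimal sketch (the blanket Disallow is illustrative — sites can scope it to specific paths instead):

```text
# robots.txt — opt this site's content out of Bard / Vertex AI training
User-agent: Google-Extended
Disallow: /
```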


  • Dress Rehearsal: In the best write-up we've seen on the matter, Clarke & Esposito's The Brief offered a deep dive into the goings-on at OpenAI, with insights on the governance structure, social movements, and key players contributing to recent events. "Perhaps the most substantive point to take away from this affair is that OpenAI had a dress rehearsal for the development of AGI or another potentially harmful technology and it failed. It is clear there is no putting the genie back in the bottle." (Clarke & Esposito’s The Brief, November 30, 2023)
  • Looking back at a transformative year for AI: “It has been a year since OpenAI quietly launched ChatGPT as a ‘research preview.’” This article takes a look back on a year of AI developments since that time. (Venture Beat, December 3, 2023)
  • OpenAI COO thinks AI for business is overhyped: The COO of OpenAI said companies shouldn't expect AI to transform their businesses overnight, saying, “there’s never one thing you can do with AI that solves that problem in full.” He added that the tech is still young and experimental, and isn't necessarily prepared to be fully entrenched in critical tools and applications. (The Verge, December 4, 2023)
  • Good old-fashioned AI remains viable in spite of the rise of LLMs: This article reminds us that despite the LLM hype, "good old-fashioned" task-based AI is still useful and, in many cases, more appropriate for solving certain kinds of problems. “There is clearly still a role for task-specific models because they can be smaller, they can be faster, they can be cheaper and they can in some cases even be more performant because they’re designed for a specific task.” (TechCrunch, December 1, 2023)
Thanks for reading along this year! Have an article we should include in a future issue? Email
