AI, LLMs, and machine learning are set to disrupt every aspect of our lives, from the way we live to the way we consume information, and from the way we communicate to the way we evaluate truth and research. As the world experiments, innovates, and legislates in this area, we have gathered a reading list from the last month, the third in a new monthly series. (Read the last issue here.)

SCHOLARLY PUBLISHING & RESEARCH

  • AI In Scholarly Publishing: Last month's opening keynote at our Platform Strategies event discussed the big-picture evolution of AI and its applications to our industry. Watch the recording & read the transcript in our archive. (Silverchair, September 27, 2023)
  • This Scholarly Kitchen article makes a case for ending human peer review in favor of AI-only peer review, citing reviewer burden, lack of sustainability, and insufficient value exchange between journal and reviewer. (Scholarly Kitchen, September 29, 2023)
  • Meanwhile, this Scholarly Kitchen article calls for a balance between AI and human peer review, arguing that the right balance can improve early detection of faulty papers, reduce reviewer burden, and increase the sustainability of the peer review process. (Scholarly Kitchen, September 28, 2023)
  • This Nature article explores the various ways that AI could disrupt (or is already disrupting) scholarly publishing. (Springer Nature, October 10, 2023)

LEGAL & POLITICAL

  • OpenAI, Microsoft, Google, and Anthropic have banded together to form the Frontier Model Forum. The group intends to jointly develop technical benchmarks and standards for "frontier AI" systems, promote best practices for responsible development and use of AI, collaborate with policymakers to build trust and mitigate the risks of AI, and apply AI knowledge to global issues such as climate change and cancer. It's a smart move on their part: by asking to be regulated and cooperating with the US government, they improve the odds of success for their companies and infrastructures. (Tech Crunch, July 26, 2023)

ETHICS

  • AI uses a lot of power, but it is also very good at certain kinds of problem-solving. That means it could contribute significantly to climate change through the carbon emissions of its energy consumption, or help us find solutions to problems created by climate change. As of fall 2023, though, AI-driven carbon emissions represent only a tiny fraction of global emissions. More data and (ethical) experimentation are needed to answer these questions. (Politico, September 27, 2023)
  • Silverchair Universe partner DCL recently held a lunch and learn called "Hallucinate, Confabulate, Obfuscate: The Perils of Generative AI Going Rogue." In it, the panelists cover how hallucinations happen, model drift and decay, whether you can make an LLM forget or unlearn something, whether document structure matters in an LLM-ruled world, and more. (Data Conversion Laboratory, October 12, 2023)
  • Anthropic, pioneers of the constitutional AI concept, gathered input from the American public to help augment, temper, and upgrade their AI constitution. There was only about a 50% overlap between Anthropic's original constitution and the one suggested by the public. This suggests that while technical SMEs may lack the context needed to create policies that encourage trust in AI, public-input processes can supply that context and support more robust, safe approaches to AI technology. (Anthropic, October 23, 2023)

GENERAL

  • Nielsen Norman Group addressed the differences among generative AI bots, including ChatGPT, Bard, and Bing Chat. They found that Bing Chat was rated less helpful and trustworthy than ChatGPT and Bard, mainly due to poor information foraging and user-interface issues. (Nielsen Norman Group, October 1, 2023)
  • An easy-to-understand, visualized explainer from the Financial Times covering how LLMs work: tokens, vectors, embeddings, transformers, and probability scores. This explanation is helpful even if you already know what all of these terms mean and how the technology works, and it's extra helpful if you're still learning or need a refresher. Key quote: "LLMs are not search engines looking up facts; they are pattern-spotting engines that guess the next best option in a sequence." (A toy sketch of that next-token guessing appears after this list.) (Financial Times, September 12, 2023)
  • Marc Andreessen's "techno-optimist" manifesto on AI is exactly what it sounds like: an optimistic (if dramatic) perspective on the abundant and successful future that humans and computers can build using AI. He goes on to suggest that this bright future is possible only without AI regulation, avoiding any discussion of the negative consequences of misusing the technology. (The Techno Optimist Manifesto, October 16, 2023) (Suggested pairing: Yoshua Bengio's "humanity defense organization" piece.)
  • Generative AI is an "anything tool" that may not need special training in the future. Plus, a human-friendly explanation of how we got to where we are now with LLMs: how they are trained (via gradient descent), how they use probability to generate outputs, steerability, RLHF, embeddings, and new frameworks that will shift us away from chatting and toward processing data. (ARS Technica, August 23, 2023)
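To make the FT's "pattern-spotting engine" quote concrete, here is a minimal, hand-waving sketch of next-token prediction. The vocabulary and probability scores below are invented purely for illustration; a real LLM derives them from billions of learned parameters rather than a hard-coded table.

```python
# Toy illustration of next-token prediction: a "pattern-spotting engine"
# guessing the next best option in a sequence.
import random

# Hypothetical probability scores for the token that follows "The cat sat on the"
next_token_probs = {
    "mat": 0.62,
    "sofa": 0.21,
    "roof": 0.12,
    "moon": 0.05,
}

def sample_next_token(probs):
    """Pick the next token at random, weighted by its probability score."""
    tokens = list(probs.keys())
    weights = list(probs.values())
    return random.choices(tokens, weights=weights, k=1)[0]

prompt = "The cat sat on the"
print(prompt, sample_next_token(next_token_probs))
# Most runs print "mat", but not always, which is why the same prompt
# does not always produce the same answer.
```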

TECHNOLOGY

  • ChatGPT now accepts voice and image inputs. It can now talk back to you too, using five different voices that sound much more human than Siri, Alexa, or Google Home. These features dramatically expand the use cases ChatGPT might be able to satisfy, going beyond a chatbot and becoming more of an assistant. (OpenAI, September 25, 2023)
  • Google now respects a new Google-Extended flag in a site's robots.txt, which tells its crawlers to include a site in Search without using it to train new AI models like the ones powering Bard (see the snippet below). Previously, the only way to keep content out of Google's AI models was to block Google entirely, which also removed the site from search results (not desirable, for obvious reasons). (The Verge, September 28, 2023)
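For reference, a site owner who wants to stay in Google Search while opting out of AI training can address the Google-Extended token in robots.txt. The blanket Disallow shown here is just one illustrative configuration; narrower paths work the same way, and the regular Googlebot search crawler is unaffected:

```
# Opt this site out of training Google's AI models (e.g. Bard),
# without affecting normal Google Search crawling.
User-agent: Google-Extended
Disallow: /
```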

CONTENT CREATION

  • In a recent preprint, researchers found that AI "watermarks" are easy to fake and break. This poses numerous questions for information providers and consumers: How can we create more robust provenance and authenticity markers? Can we make them pervasive and indelible? Will consumers even use these watermarks to confirm the veracity of information before they share it or believe it? (Wired, October 3, 2023)
  • A Loki season 2 poster has been linked to a stock image on Shutterstock that seemingly breaks the platform’s licensing rules regarding AI-generated content. The incident is emblematic of the creative community's longtime concerns about AI image generators being trained on their work without consent or replacing human artists, and of the tech community's worries about preventing or regulating the proliferation of AI-generated (and therefore potentially false) content. (The Verge, October 9, 2023)
To keep up with future reading lists and other community & industry updates, subscribe to our monthly email newsletter.
