At the 2024 Society for Scholarly Publishing annual meeting in Boston, I moderated a session titled, “The Other AI: The Role of Actual Intelligence In the Future of Scholarly Publishing,” which explored the aspects of our work in scholarly publishing that are uniquely human.


These range from the art of curation to the power of community building and support for DEI needs. As we prepared for the session, my fellow panelists (Abhigyan Arun, CEO, TNQ Technologies; Nikesh Gosalia, President, Global Academic & Publisher Relations, Cactus Communications; Penelope Lewis, Chief Publishing Officer, AIP; and Damita Snow, Director, Accessibility and Diversity, Equity & Inclusion Strategy, Publications and Standards, ASCE) assembled a list of further reading for those wishing to continue exploring this topic.

  • Coded Bias (film): Available on Netflix, this documentary investigates bias in algorithms after MIT Media Lab researcher Joy Buolamwini uncovered flaws in facial recognition technology.
  • The Human Element: Overcoming the Resistance That Awaits New Ideas (book): Designed for “business leaders, product managers, educators, and anyone else who seeks to bring new and exciting ideas to life,” The Human Element is described as “an indispensable resource to help people overcome the powerful forces of human nature that instinctively resist change.”
  • How to Implement AI–Responsibly (Harvard Business Review): “Researchers engaged with organizations across a variety of industries, each at a different stage of implementing responsible AI. They determined that, although data engineers and data scientists typically take on most responsibility from conception to production of AI development lifecycles, non-technical leaders can play a key role in ensuring the integration of responsible AI. They identified four key moves — translate, integrate, calibrate, and proliferate — that leaders can make to ensure that responsible AI practices are fully integrated into broader operational standards.”
  • What Medium’s Tony Stubblebine Has Learned About Tech & Journalism (Semafor): In an interview, Medium CEO Tony Stubblebine spoke about the human intervention needed to identify AI-generated and poorly written content:
    • Q: Are you seeing AI-generated content on the platform?
    • A: That’s the worst part of it. It’s not just that there’s no exchange of value. It’s a huge cost now. Spam got cheaper. Every couple of weeks, we test the latest AI detection tools. None of them are good enough for platform use. But the good news is that humans spot this stuff pretty easily. But even if they’re misidentifying sh!&%y writing with AI writing, it doesn’t matter. The whole point of having humans look at it is to find the stuff worth recommending. And this is one of the side benefits of putting humans in the loop of recommendations again. This is not why we did it — we did it because we think human expertise and human curation is really valuable. But as soon as the AI-generated content started showing up, it was the humans that spotted it immediately. So we have a lot of it on the platform, but for the most part, it stays out of the recommendations because it’s all trash.
  • The Most In-Demand Skills for 2024 (via Axios): Despite (or perhaps because of) AI-related developments, communication remains the most valued job skill:
    • Communication is the most in-demand job skill for the second year in a row, according to data gathered by LinkedIn, reports Axios' Eleanor Hawkins.
    • Why it matters: Communication skills trump AI skills — for now. Customer service and leadership ranked second and third, respectively, followed by project management, management and analytics.
    • What they're saying: "Human skills — like communication, leadership and teamwork — are particularly critical in this moment," says LinkedIn Learning global head of content strategy Dan Brodnitz.
    • "With a rise in remote and hybrid work, and now AI, the need for human connection and people skills have become even more important."
  • AI Challenges for Librarians (Research Information): In this article, Darrell Gunter takes a critical look at racial bias and ethics in artificial intelligence. “Language matters. Words matter. Words grouped together create a context. Remember this statement: ‘content in context.’ Algorithms are made up of words and directions to achieve an information hypothesis. This is one example of the challenges we as a community face with AI.”
  • “When Your Technical Skills are Eclipsed, Your Humanity Will Matter More than Ever” (New York Times)
    • As one example, the release of ChatGPT is challenging everything within education, giving us the opportunity to rethink the role of teachers, the role of students, and what education is even for. “We’ve been teaching students how to write like machines for a long time,” says Stephen Marche, “and now we’re going to have to teach them how to write like human beings.”
    • “Human skills are more critical in an AI environment as robots and algorithms rely on human inputs and do not have the ability to process emotions. If we pivot too far towards technical skills, we miss out on the benefits of human skills in the workplace.”
  • The Unique Value of Our Human Skills in an AI Powered Future (Forbes): “Certain non-routine aspects of work, namely emotion and context, are fundamental components of human skills that are challenging to automate. We should not underestimate the importance of these skills in critical thinking, problem-solving, effective communication and judgment, emphasizing the need for educational systems to prioritize their development.”
 

For more AI-themed reading, be sure you’re subscribed to Silverchair’s AI Lab Reports newsletter.
