The final event in our spring 2026 Platform Strategies Webinar Series brought together a publisher, a librarian, and a technologist to work through a question that's moved from speculative to urgent in the span of about eighteen months: as AI agents begin to act as intermediaries between researchers and scholarly content, what does that mean for the publishers, platforms, and libraries that have long served as the infrastructure of knowledge discovery?

Moderating the conversation was Stephanie Lovegrove Hansen, VP of Marketing at Silverchair. Joining her were Jane Jiang, Director of Libraries at Union College of Union County, New Jersey; Andrew Smeall, VP of Product Innovation at Sage; and Jeremy Little, VP of AI at Silverchair. Watch the recording or read the recap below.

What Are We Actually Talking About?

Jeremy opened with a brief orientation on the technology, since the terminology around MCP (the Model Context Protocol), data connectors, and AI agents is still used inconsistently across the industry. The core concept is that a data connector gives an AI agent direct, structured access to a content source, so that the agent itself, not the user, makes the calls to retrieve and surface research. That distinction matters because it fundamentally changes the nature of the interaction. An agent equipped with a publisher connector can traverse a content corpus, retrieve relevant material, and fold it into a response without the user ever navigating to a publisher site.
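To make that distinction concrete, the connector can be thought of as a tool the agent invokes on the user's behalf. The sketch below is a minimal, hypothetical model of that loop in plain Python; the corpus, the tool name, and the agent function are all invented for illustration and do not reflect any real MCP SDK or publisher API:

```python
# Minimal sketch of the connector idea: the agent, not the user,
# calls a structured retrieval tool over a content source.
# All names and data here are hypothetical.

CORPUS = [
    {"doi": "10.1000/a1", "title": "Gene editing ethics",
     "type": "research-article", "peer_reviewed": True},
    {"doi": "10.1000/b2", "title": "Gene editing preprint",
     "type": "preprint", "peer_reviewed": False},
]

def search_tool(query: str) -> list[dict]:
    """The 'connector': structured retrieval the agent can call directly."""
    q = query.lower()
    return [a for a in CORPUS if q in a["title"].lower()]

def agent_answer(question: str) -> str:
    """The agent folds retrieved, attributed material into its response."""
    hits = search_tool(question)
    if not hits:
        return "No grounded sources found."
    cites = "; ".join(f"{a['title']} ({a['doi']}, {a['type']})" for a in hits)
    return f"Answer grounded in {len(hits)} source(s): {cites}"
```

The point of the sketch is the control flow: the user asks a question, and the retrieval step happens inside the agent, with source identifiers preserved, rather than through the user visiting a publisher site.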

The scale of adoption on the developer side is striking. MCP, originally released by Anthropic, went from 100,000 downloads in its first month to 97 million monthly downloads today. Claude's connector directory now lists nearly 400 integrations. "What we're really seeing," Jeremy said, "is that developers are sprinting towards a world where they're building tools for AI agents to use." (Learn more about MCPs by exploring our MCP 101 Reading List.)

Where Adoption Actually Stands

If the developer momentum is clear, the picture on the end-user side is considerably murkier, and the panel was candid about that gap. Despite efforts to build and offer MCP-type solutions, actual usage remains uneven and the integration landscape is fragmented. The underlying problem is that AI chat tools don't yet understand the scholarly record the way publishers do. They can't reliably distinguish a peer-reviewed article from a preprint, or account for retractions and versioning. "The AI chat experience right now is still a little bit of a false friend," Andrew said, "where it gives such plausible-sounding answers, but the answers are not well-grounded in the scholarly record."

Jeremy framed the adoption challenge in terms of three readiness layers: the technology itself, provider readiness (are publishers and platforms actually offering connectors to experiment with?), and user readiness. Interestingly, he argued that the third barrier may be the least severe. "Ironically, it seems like users are ready for this. But because of how fast this has moved, the other two sections are catching up." Researchers are already conducting much of their work through ChatGPT and similar tools; what they're missing is grounded access to peer-reviewed content from within those environments. Connectors are the structural answer to exactly that problem.

The View from the Library

Jane offered a perspective that grounded the conversation in the day-to-day reality of students, researchers, and the librarians who are absorbing much of the impact of AI adoption in higher education.

For many students, AI tools have become the default starting point for research. "For a lot of them, tools like ChatGPT have become the new starting point for brainstorming, understanding a topic, or even figuring out how to begin an assignment." AI lowers the barrier to entry, which Jane sees as genuinely valuable for users like first-generation college students navigating the research process for the first time. But staying in that environment too long is a pattern librarians are seeing with increasing frequency. "We're seeing students come to the reference desk with very polished AI-generated overviews, but without a real understanding of the underlying articles or where the information came from."

Jane described the library's role as having shifted accordingly: from helping people locate information to teaching them how to evaluate it, verify it, and move from an AI-generated summary to actual scholarly sources. That's a meaningful change in what librarians are being asked to do, and one that institutions are still working out how to resource and support.

On the question of whether publisher brand still carries weight with students, Jane's experience is that it does, but increasingly indirectly. Students aren't starting their research by choosing a publisher — they're asking whether a source is credible, whether their professor will accept it, whether it will pass muster academically. That trust is mediated through libraries, databases, and faculty expectations. "With AI tools, that layer of visibility can disappear if students are getting answers without clear attribution. They may not even realize whether the source is reliable or high-quality or not."

The Brand and Standards Challenge

Next, the group discussed what happens to publisher identity — authorship, provenance, peer review status — as content gets chunked, retrieved, and folded into AI-generated responses. Andrew drew an analogy to JATS and the effort it took to establish structured, semantically coherent tagging standards for journal content across the internet. "That standard then helped this content be reusable in all kinds of different places. And that standard doesn't exist yet for a technology like MCP."

The problem is two-layered. First, publishers need to agree on how to communicate structured content to AI agents — what metadata gets surfaced, how article type and version and provenance get preserved through a connector interaction. Second, the AI companies themselves need to surface that provenance in their interfaces, so that a user receiving an AI-synthesized answer can see where the content came from and what kind of source it is. "When a hundred article snippets get munged together into a response, all that stuff gets lost again." Getting the major AI platforms to prioritize scholarly publishing standards in their attribution and display decisions is, as Andrew acknowledged, a genuine challenge: "I don't think ChatGPT and Anthropic are that focused on scholarly publishing. So it's hard to get them to pick up the phone."
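To make the two layers concrete, a provenance-preserving connector response might carry structured metadata alongside each content chunk, and an AI interface would then need to display it. The shape below is purely illustrative; the field names are an assumption, not a proposed standard, since, as the panel noted, no such standard exists yet:

```python
# Hypothetical shape of a provenance-preserving connector payload.
# Field names are illustrative only; no such standard exists yet.
chunk = {
    "text": "retrieved article snippet goes here",
    "provenance": {
        "publisher": "Example University Press",  # who published it
        "doi": "10.1000/example.123",             # persistent identifier
        "article_type": "research-article",       # vs. preprint, editorial, etc.
        "version": "version-of-record",           # versioning status
        "retracted": False,                       # retraction flag
        "peer_reviewed": True,
    },
}

def attribution_line(c: dict) -> str:
    """What an AI interface could surface so provenance isn't lost
    when snippets are synthesized into an answer."""
    p = c["provenance"]
    status = "RETRACTED" if p["retracted"] else p["version"]
    return f"{p['publisher']} | {p['doi']} | {p['article_type']} | {status}"
```

The first layer of the problem is agreeing on the fields in `provenance`; the second is getting AI platforms to render something like `attribution_line` instead of discarding it when snippets are merged.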

Jeremy was somewhat more optimistic about the underlying compatibility between existing publisher metadata infrastructure and what AI systems need, noting that long context windows and structured metadata may turn out to be reasonably well-suited to each other.

Looking Ahead

We closed by asking each panelist what worries them and what they're most excited about. Jane pointed to the genuine difficulty of detecting AI-generated work and the erosion of basic citation literacy among students who've grown accustomed to automated tools doing that scaffolding for them. Andrew worries about a consolidation dynamic where the technical complexity of AI adds yet another barrier to sustainability for smaller publishers and societies. And Jeremy raised the risk of a deteriorating scholarly record if non-grounded AI tools become the primary interface through which research is found and synthesized.

The optimism was equally genuine, however: Jane described AI as a meaningful leveler for students who might otherwise find the research process too intimidating to enter. Andrew pointed to the potential for AI to make vast archives — in languages and formats that were previously difficult to search — accessible to researchers who couldn't reach them before. Jeremy returned to connectors as the structural mitigation for the risks he'd identified: when AI tools are equipped with grounded, curated, publisher-vetted content, the research they support is better for it.

The through line was something Andrew articulated toward the end of the session, and that resonated across all three panelists: even as AI takes on more of the underlying work of research, the human relationships between researchers, editors, librarians, and students remain genuinely valuable. "I do still have faith that there's something fundamentally human and important about these research communities."


That's a wrap on the spring 2026 Platform Strategies Webinar Series! Recordings of all three events are available here. We'll be continuing this conversation in person this September — the Platform Strategies event takes place September 23 in Washington, DC, and this year's theme is Signal and Noise. Learn more and register here. 
