What happens when researchers stop visiting your website — and start asking an AI instead? 

NISO Plus 2026 in Baltimore was one of those conferences where you could feel the industry working through a real inflection point in real time. AI-driven access to scholarly content — web scraping, agentic tools, chatbot-based research — was everywhere in the program, and the conversations around it were substantive and urgent. 

Emily Hazzard and I had the opportunity to co-lead a workshop titled "Headless Scholarly Infrastructure: The Hard Questions About Access, Identity, and Standards Nobody's Answering Yet." It was well attended and the discussion was excellent, which I think says something about where the community's head is right now. Publishers and librarians are actively trying to figure this out.

The Shift in Tone 

If you've attended scholarly publishing conferences over the past few years, you've heard the anxiety. Bot traffic was the enemy. AI was a threat to be blocked. The mood at NISO Plus 2026 was markedly different. 

The conversation has moved from avoidance to engagement. Publishers, librarians, and standards bodies are no longer asking whether to deal with AI-driven access — they're asking how. That shift matters. It means the industry is ready to build solutions rather than just build walls. 

What We Presented 

Our workshop broke down the three main ways AI systems interact with scholarly content today, each with different implications for publishers: 

User agents — tools like ChatGPT's web search, Deep Research, and Perplexity — represent completely anonymous traffic. They're expensive to serve, nearly impossible to attribute, and frequently blocked by bot detection. Yet increasingly, they represent real researchers doing real work. As Zhao and Berman found in their December 2025 study, blocking generative AI bots can reduce total website traffic by 23% and real consumer traffic by 14%. Blocking bots may mean blocking your audience. 
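
To make the attribution problem concrete, here is a toy sketch (in Python) of the kind of blunt user-agent check a bot-detection layer might apply. The tokens are a small sample of publicly documented AI agent user-agent strings and go stale quickly; everything here is illustrative, not a recommendation.

    # Toy sketch: a blunt user-agent check of the kind bot-detection layers
    # apply. The token list is a small, quickly-stale sample of publicly
    # documented AI agent user agents; illustrative only.
    AI_AGENT_TOKENS = ("GPTBot", "OAI-SearchBot", "ChatGPT-User",
                       "PerplexityBot", "ClaudeBot")

    def is_declared_ai_agent(user_agent: str) -> bool:
        """True when the User-Agent header self-identifies as an AI agent."""
        return any(token in user_agent for token in AI_AGENT_TOKENS)

A block keyed on a check like this turns away the request and, with it, the researcher whose question prompted it.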

Data connectors — including MCP (Model Context Protocol) servers and API-based access — offer a more structured path. They support user authentication and basic usage statistics, and they can handle scale in ways that web scraping simply cannot. This is the category where much of the industry's attention is focused right now; a minimal sketch of such a connector appears just below.

Publisher-provided AI tools — platforms that embed intelligence directly into the content experience — offer the most insight and control, but at a significant cost in development and maintenance. 
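
Returning to the data-connector category: here is a minimal sketch of an MCP server exposing a single search tool, using the official MCP Python SDK (pip install mcp). The server name, the tool, and its result fields are hypothetical stand-ins, not Silverchair's implementation.

    # Minimal MCP server sketch using the official MCP Python SDK.
    # The tool and its fields are hypothetical.
    from mcp.server.fastmcp import FastMCP

    mcp = FastMCP("scholarly-search")  # server name advertised to clients

    @mcp.tool()
    def search_articles(query: str, limit: int = 10) -> list[dict]:
        """Search publisher content and return structured metadata."""
        # A real connector would query the publisher's search index here,
        # with the requesting user/institution attached for authentication
        # and usage reporting.
        return [{"doi": "10.0000/example.1", "title": "Example result"}][:limit]

    if __name__ == "__main__":
        mcp.run()  # defaults to stdio transport for local AI clients

Because every request arrives through a declared tool call rather than a scraped page, the publisher can authenticate it, count it, and serve it cheaply, which is precisely what distinguishes this category from anonymous user-agent traffic.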

We then ran the room through a "nightmare scenario" exercise, asking tables to build worst-case futures around five themes: usage collapse, attribution breakdown, access control failure, metadata apocalypse, and infrastructure fragmentation. The specificity of what people came up with — and the urgency behind the proposed solutions — reinforced that this isn't theoretical anymore. 

What We Heard from the Community 

Researchers are already using AI tools for literature discovery and review, but they aren't always satisfied with the results. Many default to whatever web search is built into their AI assistant, which sometimes leads them to good sources — but through pathways that are invisible to publishers and librarians. The discovery layer is shifting underneath us, and the existing infrastructure hasn't caught up. 

MCP was a major topic across the conference. The protocol itself is still very new, but it is already well understood and top-of-mind for both publishers and librarians. That said, the surrounding ecosystem remains young: 

  • COUNTER is actively exploring how to account for AI-mediated access but hasn't landed on a definitive approach yet. They're receptive to input and want to get it right; one possible shape for that accounting is sketched just after this list. 
  • Entitlements and authentication remain unsolved at scale. Publishers don't yet have clear answers for how to handle advanced entitlements, institutional anonymous access, or consumption-based licensing through AI channels. 
  • Centralized AI platforms like ChatGPT Edu are gaining some adoption in academic libraries, but deployment is still early and uneven. It's not clear how widespread usage actually is or how these tools will interact with institutional access systems. 
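
On the COUNTER question, the following is a thought experiment only: COUNTER has not defined how AI-mediated access should be counted, so the event shape and channel labels below are invented. The idea is simply that tagging each access with the channel it arrived through keeps AI-mediated usage visible for whatever reporting model eventually emerges.

    # Thought experiment only: COUNTER has not settled this. Tagging each
    # usage event with its access channel keeps AI-mediated usage countable.
    # All names and values here are invented for illustration.
    from dataclasses import dataclass
    from datetime import datetime, timezone

    @dataclass
    class UsageEvent:
        doi: str
        institution_id: str | None  # None for anonymous user-agent traffic
        access_channel: str         # e.g. "web", "user_agent", "mcp", "api"
        timestamp: datetime

    def record_usage(doi: str, institution_id: str | None,
                     channel: str) -> UsageEvent:
        """Capture one access with enough context to report it later."""
        return UsageEvent(doi, institution_id, channel,
                          datetime.now(timezone.utc))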

Why We've Been Experimenting with MCP 

The concerns raised across the conference — around uncontrolled scraping, invisible usage, broken entitlements — are exactly the reasons Silverchair has been experimenting with MCP internally for over six months. We saw early on that the core problem publishers are going to face is the need for dedicated data pipes for bot and AI traffic, separate from the traditional web experience. Right now, MCP is the protocol the community is rallying around, and we've been investing in the AI primitives that make it work: entitlement-aware content retrieval, structured search designed for machine consumption, and access controls that translate across contexts. These are the building blocks that any dedicated AI access channel will need, and we think the industry needs to be developing them now rather than waiting for the dust to settle. 
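
To make "entitlement-aware content retrieval" concrete, here is a hedged sketch of the shape that primitive might take. The names and in-memory stores are invented stand-ins for real metadata, full-text, and entitlement services, not Silverchair's actual API.

    # Hypothetical sketch of an entitlement-aware retrieval primitive. The
    # in-memory stores stand in for real backend services; every name here
    # is illustrative.
    METADATA = {"10.0000/example.1": {"title": "Example", "open_access": False}}
    FULL_TEXT = {"10.0000/example.1": "...full text..."}
    ENTITLEMENTS = {("inst-42", "10.0000/example.1")}  # (institution, doi)

    def retrieve_article(doi: str, institution_id: str | None) -> dict:
        """Return full text when entitled; degrade to metadata otherwise."""
        record = dict(METADATA[doi])
        entitled = record["open_access"] or (institution_id, doi) in ENTITLEMENTS
        record["full_text"] = FULL_TEXT[doi] if entitled else None
        return record

The same call degrades gracefully: retrieve_article("10.0000/example.1", "inst-42") returns full text, while an anonymous call returns metadata only. That translate-across-contexts behavior is what any dedicated AI access channel needs from its access controls.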

That thinking led us to launch the Discovery Bridge earlier this year — our MCP server that connects scholarly content to AI-powered research workflows. It lets researchers search and access publisher content through AI assistants like Claude, ChatGPT, and Perplexity — while maintaining institutional access controls, respecting paywalls, and automatically filtering retracted articles. We're excited about it, and our publisher partners have been engaged in shaping it from the start. 
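
As an illustration of the retraction-filtering behavior described above (written for this post, not taken from Discovery Bridge), a search layer can drop retracted items before results ever reach the assistant. The retraction set below is a stub standing in for a real retraction data source.

    # Illustrative only, not Discovery Bridge code. The retraction set is a
    # stub for a real retraction data source consulted at search time.
    RETRACTED_DOIS = {"10.0000/retracted.1"}

    def filter_retractions(results: list[dict]) -> list[dict]:
        """Drop retracted articles from a result set before returning it."""
        return [r for r in results if r["doi"] not in RETRACTED_DOIS]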

But we're also clear-eyed. The broader ecosystem — the standards, the usage metrics, the authentication frameworks — is still taking shape. The timing for when MCP servers are broadly released to the public is still a conversation the industry is having together. We're at the table for that conversation, and NISO Plus reinforced that the table is getting bigger. 

What Comes Next 

The numbers are hard to ignore. Over 51% of all web traffic in 2024 came from automated bots. More than 90% of researchers now use AI tools in their workflows. Open repositories report aggressive bot activity causing weekly service disruptions. These trends aren't going to reverse — the question is how we respond to them. 

What gave me optimism at NISO Plus was the quality of that response. The industry isn't panicking or pretending this isn't happening. People are actively problem-solving, and the conversations are grounded in real technical and business constraints rather than abstract fears. 

At Silverchair, we're committed to being part of that work — building the infrastructure, testing the protocols, and staying engaged with the standards bodies and communities that will shape how this all plays out. If you're a publisher thinking through how AI access fits into your strategy, we'd love to continue the conversation. 

 

To learn more, email ai.lab@silverchair.com 
