Our first event, "Understanding AI Traffic: Bots, Crawlers, and What They Mean for Your Platform," was an ideal place to start that conversation — grounding it in real traffic data, honest uncertainty, and strategic thinking from leaders working through these questions in real time.
I moderated a discussion with Paul Gee, VP of Digital Product Management and Development at the JAMA Network; Lou Peck, CEO of The International Bunch; and Robb Burgess, Silverchair's VP of Technical and Security Operations. The conversation was part data briefing, part strategic workshop, and part honest reflection on how fast the ground is shifting. Watch the recording or read the recap below.
Setting the Stage
Robb Burgess kicked off the session by sharing traffic data from the first ten weeks of 2026 across the Silverchair Platform. Of the traffic Silverchair's infrastructure is seeing, a substantial portion — roughly comparable in volume to human traffic — is either served to known bots or turned away entirely. Most of what gets turned away isn't malicious; it's automated systems that can't pass a basic challenge check. But the scale is significant, and it's prompting real questions about how publishers are thinking about access and dissemination.

Robb drew a useful distinction between bot types that often gets collapsed in industry conversation. Search bots — Googlebot, Bingbot, and newer AI-powered equivalents from OpenAI and others — function essentially like traditional indexing tools, building the search indexes that power discovery. Training bots, which attempt to scrape content to feed large language model development, are a different matter entirely. "We block those outright for all client sites and all traffic," Robb explained, citing content accuracy as a core reason. If a correction is issued after an LLM has already ingested an earlier version, there's no guarantee the updated information gets picked up. User bots — the agents that retrieve content on behalf of a person actively using a tool like ChatGPT — represent the third and most strategically interesting category, and the one where clients are actively experimenting.
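In practice, this kind of taxonomy is usually operationalized by matching the user-agent tokens that crawlers publish. Here is a minimal sketch: the tokens are real, publicly documented crawler names, but the category assignments and allow/block policy are illustrative, not Silverchair's actual ruleset.

```python
# Illustrative classifier mapping published user-agent tokens to the
# three bot categories discussed above. The policy mapping is a
# hypothetical example, not any platform's real configuration.

BOT_CATEGORIES = {
    # Search/index bots: build the indexes that power discovery
    "Googlebot": "search",
    "Bingbot": "search",
    "OAI-SearchBot": "search",
    # Training bots: scrape content to feed LLM training corpora
    "GPTBot": "training",
    "CCBot": "training",
    # User bots: fetch content on behalf of a live user session
    "ChatGPT-User": "user",
}

POLICY = {"search": "allow", "training": "block", "user": "allow"}


def classify(user_agent: str) -> str:
    """Return the bot category for a user-agent string, or 'unknown'."""
    ua = user_agent.lower()
    for token, category in BOT_CATEGORIES.items():
        if token.lower() in ua:
            return category
    return "unknown"


def decide(user_agent: str) -> str:
    """Map a request's user-agent to an allow/block/challenge decision."""
    category = classify(user_agent)
    # Unknown automated traffic gets a challenge check rather than a block,
    # mirroring the "can't pass a basic challenge" filtering described above.
    return POLICY.get(category, "challenge")
```

A real deployment would layer in IP-range verification and robots.txt directives, since user-agent strings alone are easy to spoof.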
That third category is also where referral patterns are shifting in noticeable ways. "Traditionally, Google Scholar has been at the top of the referral charts," Robb noted. "And we're seeing ChatGPT referral creeping up to be comparable. In some cases, it's now the fourth highest referral we're seeing."
From Paralysis to Experimentation
For JAMA, the shift from defensive posture to active experimentation has been deliberate and data-driven. Paul Gee described how JAMA moved from a broad block on all AI crawlers in 2024 to selectively admitting a small number of user agents — and then monitoring the results closely. The traffic profile that came through looked like what they hoped to see: institutional sources, healthy time-on-site, and meaningful propensity to engage. "If what these tools do is qualify your traffic for you so that it's set up to deliver an experience that's better for the user — and they're more engaged, they're more likely to cite — then it feels like a win-win."

Lou Peck brought the consultant's perspective, working across organizations of varying sizes, missions, and resources. Her framing of the core tension was apt: "We're finding that some publishers are really paralyzed by the scraper's dilemma." The anxiety is understandable — concerns about brand dilution, content devaluation, and AI hallucinations mixing with high-quality research are real. But Lou pushed back on the instinct to default to restriction, arguing that blocking too aggressively has its own costs. "Stop treating AI traffic as noise and something to be blocked," she said. "You're only actually going to end up making your research invisible."
Infrastructure Is Critical
Robb offered a look at what happens on the infrastructure side when publishers decide to open the door to AI bots. For some, the traffic increase has been modest — 5% to 10%, easily absorbed within existing capacity. For others, the jump has been between 25% and 50%, which requires a different conversation. Rate limiting is one tool; proactive infrastructure planning is another.

Robb also described something that's still in early exploration: serving AI traffic in a format optimized for machine consumption. Most publisher sites are built to be readable and visually navigable for humans, but not necessarily structured in the way machines prefer to ingest content. Stripping the CSS and presenting clean, XML-structured content to AI agents is a natural next step — one that reduces server load and improves the quality of what these systems receive. It's a small but telling example of how the work of serving both human and machine users is beginning to diverge technically, even when the underlying content is the same.
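Serving different representations of the same article to different consumers is essentially content negotiation. A minimal sketch of the idea: every name here (`AI_AGENTS`, `render_for_machine`, the article fields) is hypothetical, chosen only to show the shape of the approach, not any platform's actual API.

```python
# Hypothetical content-negotiation sketch: known AI agents receive a
# lean XML view (structure and text, no CSS or page chrome), while
# everyone else gets the full human-facing HTML page.

AI_AGENTS = ("GPTBot", "OAI-SearchBot", "ChatGPT-User")


def is_ai_agent(user_agent: str) -> bool:
    ua = user_agent.lower()
    return any(token.lower() in ua for token in AI_AGENTS)


def render_for_machine(article: dict) -> str:
    """Emit a minimal XML view of the article for machine ingestion."""
    return (
        "<article>"
        f"<title>{article['title']}</title>"
        f"<doi>{article['doi']}</doi>"
        f"<body>{article['body']}</body>"
        "</article>"
    )


def render_full_page(article: dict) -> str:
    """Stand-in for the usual human-facing HTML rendering pipeline."""
    return f"<html><head><title>{article['title']}</title></head>...</html>"


def respond(article: dict, user_agent: str) -> tuple[str, str]:
    """Return (content_type, payload) based on the requesting agent."""
    if is_ai_agent(user_agent):
        return ("application/xml", render_for_machine(article))
    return ("text/html", render_full_page(article))
```

The payoff Robb describes falls out naturally: the machine view is smaller and cheaper to serve, and the agent receives cleanly structured content instead of scraping it out of presentation markup.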
Rethinking What We Measure
Lou observed that the analytics frameworks publishers have relied on for years are increasingly poor fits for a world where AI intermediaries are doing significant work on behalf of end users. A bot that ingests an article in half a second registers as a zero-click session. Referral traffic from AI tools doesn't always look like traditional search traffic. "Search referrals, we've seen them plummet by about 33% to 55% in the last year."

The emerging concept of "inference presence" — ensuring your content is the preferred and trusted source for AI systems — may matter as much going forward as traditional SEO metrics did in the past decade. Lou pointed to ongoing work at NISO and COUNTER as meaningful steps toward giving publishers better visibility into how their content is being accessed, by whom, and through what kind of intermediary.
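A first practical step toward that visibility is simply segmenting referrers so AI-intermediated traffic stops hiding inside "other." A sketch of the idea, with the caveat that the domain lists are small examples I've chosen for illustration, not an exhaustive or authoritative registry:

```python
# Illustrative referrer segmentation: tag sessions as coming from an AI
# tool, a classic search/discovery source, another site, or no referrer.
# The domain sets are examples only and would need ongoing curation.
from urllib.parse import urlparse

AI_REFERRERS = {"chatgpt.com", "chat.openai.com", "perplexity.ai"}
SEARCH_REFERRERS = {"scholar.google.com", "www.google.com", "www.bing.com"}


def referral_segment(referrer_url: str) -> str:
    """Classify a session's referrer URL into a reporting segment."""
    host = urlparse(referrer_url).netloc.lower()
    if host in AI_REFERRERS:
        return "ai"
    if host in SEARCH_REFERRERS:
        return "search"
    return "other" if host else "direct"
```

Even this crude cut makes the trend Robb described visible in a dashboard: ChatGPT referrals climbing toward parity with Google Scholar rather than being lumped into an undifferentiated bucket.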
Paul drew a useful parallel to the early days of SEO: "If we end up making special types of metadata around our articles that people don't see but the bots consume, it stresses me out a little bit that we're making better content for the bots than we are for the humans." His hope is that optimizing for machine legibility and optimizing for human clarity converge more than they diverge. Lou made a similar point: "Optimizing for a bot is actually optimizing for clarity and accessibility." If a machine can't navigate your research hierarchy or metadata, there's a good chance someone using assistive technology can't either.
Notes to Our Past Selves
We closed by asking each panelist what they wish they had understood sooner. Robb said he wished he had gone deeper earlier into the underlying infrastructure of large language models and RAG retrieval systems. Paul pointed to the work of ensuring that AI traffic is being routed to the right content — and the opportunity to use that work to make the overall experience better for human users too. Lou wished she had known that clicks were going to become a vanity metric, and that the community needed to start rethinking its measurement frameworks before the disconnect became more disorienting than it already is.

The closing message across all three panelists was consistent: understand your organizational mission, stay open to experimentation, and don't let the legitimate complexity of these decisions become an excuse for inaction. As Lou put it: "The future's here. The future's now." That's a fair summary of where this conversation is going — and a good reason to keep having it.
You can watch the full recording of our March session here.
The series continues with two more free events this spring. On April 14, we turn our attention to on-platform AI discovery — semantic search, recommendation engines, and how AI is reshaping the way researchers find and engage with content. On May 12, we'll examine what happens when research moves entirely outside the platform, exploring AI agents, chat tools, and the MCP ecosystem.
Register for both events here. Both sessions are free and open to the community — we'd love to see you there.