Yet, that enthusiasm at some point peters out. Why?
The potential toll of AI on human beings is what a study recently published in Harvard Business Review termed “brain fry,” or “cognitive exhaustion from intensive oversight of AI agents.” Importantly, the study found that brain fry results from intensive AI oversight coupled with the management of multiple AI tools at once, and that managing more than three AI tools is cognitively too many.
Beyond the moral imperative of maintaining ethical and human-centered workplaces, this matters because high AI adoption is positively correlated with an employee’s overall engagement with and value to an organization, yet brain fry is correlated with a 39% increase in active intent to quit. In other words, it is our most valuable people who will be driven to leave because of their experiences with the brain fry brought on by using AI tools.
As I reflected upon how the study’s findings show up in my own workplace, two important observations became clear to me: the need for coherent AI management training and the need for people to protect the most meaningful aspects of their work.
We Need to Cultivate Skilled AI Managers
Managing AI agents is not dissimilar from managing people. We've become fascinated with the idea that AI tools provide us with the analytical horsepower and productivity of a human being while shedding the need for rest and the propensity to become opinionated about what they are asked to do. In its raw expression, an LLM will take minutes, if not seconds, to grind through a research project that would have taken an entry-level employee days to complete.

Had you assigned this project to a human worker, you would have agreed upon a deadline and perhaps even set aside time on the due date to review the draft and provide substantive edits. Instead, you submit a query and immediately receive the result. Naturally, you need to review the work just as you would have reviewed the work of a human being. But here is the key: you feel pressured to review that work immediately, and as quickly as possible, so that you don't become the factor that compromises the speed and efficiency of the AI-driven production process. As managers of AI workers, we cede control over the pacing of work and instead become captive to the idea that speed above all is the chimera to be chased.
The skill missing in our emergent AI-first workplaces is that of human orchestration, or, put more accurately, human management of AI talent. If I were to analyze the behavior of an LLM just as I would the behavior of any employee at Silverchair, I would assess that it produces generally high-quality work with impressive speed, but that it often lacks the requisite depth and can be derivative and repetitive in its responses. With appropriate mentoring and coaching, this worker has the potential to become incredibly valuable to the company; however, they require a great deal of investment and oversight to be truly effective.
If you are a manager of people and you had someone like this on your team, would you feel compelled to review their assignment immediately upon submission? Or, with the understanding that they can make egregious mistakes while at the same time sounding authoritative and persuasive, wouldn't you instead take a pause and set aside additional time to review their work product?
The trouble is we have transformed every single person in our companies, regardless of managerial skill level, overnight into the manager of multiple such staff members. We expect individual contributors who have never been taught how to orchestrate the work of other human beings to effectively lead small teams of machines with a superhuman capacity to consistently produce solid B work and the naive eagerness of a high school honors student.
What we need is training for everyone on how to manage this new class of problematic, yet promising, worker so that it does not inflict irreparable cognitive harm on our most valuable human colleagues. A lot has been written about the need for employers to build in breaks and downtime to alleviate the strain of AI orchestration. I believe these well-intentioned instructions miss the root cause, which is a lack of managerial skill. People whose jobs already demand a high degree of sophisticated AI orchestration (and we have many such talented people at Silverchair) need to be empowered with proven tools for ensuring that work is done at a pace that is sustainable for people and produces quality long-term results for the enterprise. We need to balance the craving for superhuman efficiency against the imperative of human-level reasoning, creativity, and discernment.
In practical terms, this can look like strong project leadership, mapping out the intended outcome and working backward to establish key milestones. To the extent those milestones involve AI-generated deliverables, it means building in appropriate time for human review. It also means explicitly removing any implicit pressure people might feel to work at the pace of machines.
We Need to Preserve a Sense of Meaning
In my own work, Claude has become a valuable thought partner. Always poised with a pen and notepad at my beck and call, I can summon it at will throughout the day to explore any idea, however fringe or whimsical, that might bubble up to my consciousness. Because I don't fear that I am wasting the time of a human being, I am libertine with my requests. The feeling this indulgence generates, however, is more akin to the feeling I get when I've allowed myself to doomscroll on Instagram for an hour than to the satisfaction of actual research. This is what poorly managed AI usage produces: a state of exhausted ennui that leaves us feeling inadequate as humans without achieving the countervailing promise of cogent and speedy work.

This leads to my second observation: the urgent need for human beings to protect the meaning-making aspects of their work. When I read the aforementioned HBR study, I was struck by the corresponding finding that, where workers use AI to outsource repetitive and lower-value tasks, the impact is a reduction in mental fatigue. This finding leaves something unsaid: Is there something about the nature of the tasks that people are outsourcing to AI that contributes to a sense of mental fatigue? In other words, are we outsourcing things to AI that might ordinarily have enabled a sense of well-being and satisfaction in our work?
As chronicled in the Big Think article, “The hidden cost of letting AI make your life easier,” the philosopher Sven Nyholm cites political theorist Rob Goodman’s distinction between “process goods” and “outcome goods,” where outcome goods are the finished results of an activity and process goods arise from the doing itself. Inasmuch as the experience of struggling through a problem to reach a resolution is what makes achievement meaningful, this is what AI threatens most.
Nyholm warns against outsourcing to AI the experiences that, by challenging our intelligence, fortitude, and creativity, generate a sense of meaning in what we do. What Nyholm terms the “meaning gap” emerges whenever we, deliberately or not, assign AI to complete these meaning-generating activities.
Take, for example, the authoring of this piece. Not a single word you are reading was generated by AI. I derive an immense sense of satisfaction from diving into a topic that interests me, reading everything I can get my hands on, and then allowing myself time for reflection so that original ideas can emerge. I was feeling some time pressure to write this, so I did seriously contemplate getting my initial ideas down on a page and asking Claude to generate a proposed outline, or at least to reflect back a critique of what I was saying.
I resisted the urge to do so, and I am glad that I did. The pleasant mental fatigue I feel from having struggled through the concepts, the research, and the prose myself is a good part of what makes my work meaningful. Had I outsourced even a portion of this project to my AI assistant, the output may have been produced more quickly (and, who's to say, with higher quality), but the impact on me would have been a sense of hollowness.
It is hard not to see how this invokes Jean Baudrillard's concept of hyperreality, which he advanced in his 1981 book, Simulacra and Simulation. The idea is that we live in a society dominated by simulacra: copies without originals. Simulacra generate their own reality rather than representing something real. According to Baudrillard, the saturation of society with simulacra threatens to render everything infinitely mutable, and therefore meaningless. The more the line between fiction and reality is blurred, the more difficult it becomes to generate meaning.
I believe that this is the aspect of AI management where human beings have the most agency: understanding themselves enough to know how they generate a sense of meaning in the world, and then becoming vigilant about preserving those experiences for themselves rather than capitulating to the simulacra of AI in the frenzied quest for efficiency.