A critical look at how generative AI rewires workplace authority, erodes authorship, and installs a new priesthood of coders and corporate systems.
Prologue: From the Stack to the Pulpit
This piece began with a Monday newsletter, "Morning Tap 2026: Blue Screens, Cigar Smoke, and the Death of the Author." The idea was simple. Microsoft retired the last recognizable trace of Windows. AI replaced the taskbar. A field study at P&G showed that a solo worker with GPT-4 matched the output of a full team. And in the background, a larger shift: authorship, agency, and accountability began dissolving into syntax.
One entry in particular set the fuse: The Last Prophet Is a Prompt. It framed the transformation not as a technical upgrade, but as a theological one. Command lines became liturgy. Prompt engineers stood in for priests. The user wasn’t just a worker anymore. They became a believer—one whose faith was measured in tokens and completions.
This article takes that thread and pulls. The question is no longer what AI can do. The question is who controls the system that decides what’s worth doing. If the interface is sacred, the code is law, and your output is judged by a machine trained on everyone else’s best attempts, where does that leave you?
The answer, like most uncomfortable truths, isn’t in the data. It’s in the structure.
This is that structure, examined.
Introduction
Generative AI is no longer a futuristic idea; it is already embedded in the daily workflows of major companies. Recent evidence from Procter & Gamble (P&G) shows how a "cybernetic teammate" can alter team performance and experience.
In a 2025 field experiment, 776 P&G professionals worked on innovation challenges either with or without access to a generative AI assistant. The results were striking: a single individual equipped with AI matched the output of a human pair working without AI. AI assistance also sped up the process: participants with access to GPT-4 finished their tasks around 12% faster than their peers, even while producing more detailed solutions. This implies a fundamental shift in the division of labor. Knowledge work, once the domain of specialized human collaboration, is being restructured by AI in ways that compel organizations to reconsider who holds expertise and how power operates in the workplace.
AI as Collaborator: Performance and Emotional Shifts
In P&G’s trial, generative AI functioned less like a traditional tool and more like a collaborator. Individuals using AI achieved outcomes comparable to two-person teams: one human plus an AI could do the work of two. This reveals a reconfiguration of labor: tasks and ideas that used to require coordination between specialized colleagues can now be handled by a single worker amplified by a machine. As a result, the boundaries of roles and expertise began to blur. Engineers and marketers in the study, for example, no longer produced siloed technical or commercial ideas. With AI’s help, they all generated well-rounded proposals that integrated both domains. The AI’s presence dissolved old divisions, distributing knowledge more evenly across participants.
Surprisingly, working with an AI teammate also improved how people felt about their work. Participants with access to the AI reported higher enthusiasm and lower frustration than those working without it. In effect, the chatbot’s friendly, always-available guidance provided some of the social and motivational support that a human colleague normally would. Rather than introducing stress or alienation, the technology in this case gave a positive emotional boost. This counters the usual narrative that automation makes work dehumanizing; here the generative AI acted as a kind of social companion, hinting at a future where emotional labor in teams is partly offloaded to machines.
Knowledge, Expertise, and Power Dynamics
The integration of AI into knowledge work raises profound questions about expertise and power. Philosopher Michel Foucault famously argued that knowledge and power are entwined; those who control what is known also wield power, and power in turn shapes the production of knowledge. In a corporate setting, generative AI becomes a new apparatus of knowledge. By drawing on vast datasets and organizational information, AI can supply insights that were once the guarded domain of human experts. In the P&G experiment, even junior or less-experienced employees could produce expert-level ideas when guided by AI. In other words, the AI flattened the hierarchy of expertise: an “amateur” with a good prompt could rival a veteran’s output. This democratization of expertise disrupts traditional gatekeeping. The power once held by specialists, the sole arbiters of certain knowledge or skills, is partially transferred to those who control or have access to the AI.
At the same time, power shifts upward to those who design and deploy the AI systems. Foucault’s insight that power “creates and recreates its own fields of exercise through knowledge” resonates here. Management can embed institutional knowledge into an AI model and shape its outputs according to corporate priorities. The knowledge the AI offers is not neutral; it is curated and limited by the data and rules provided by those in power. We already see this with search engines: Google’s algorithms prioritize certain information and reflect commercial and cultural biases, effectively acting as a gatekeeper of knowledge. Likewise, an enterprise AI could become the new gatekeeper within a company, channeling employees’ problem-solving along approved paths. Employees might find that “knowing how to ask the AI” becomes as important as knowing the answer, altering what it means to be skilled. The old adage “knowledge is power” now cuts both ways: workers gain new knowledge via AI, but they also relinquish some control over knowledge creation to the algorithm and its owners.
Surveillance Capitalism and Data Colonization
The rise of generative AI in workplaces also extends the logic of surveillance capitalism. Scholar Shoshana Zuboff, who coined that term, observes that the tech industry accumulates immense power by harvesting data and turning it into predictive products. Generative AI represents a “second coming” of this trend, repeating the “original sins” of Big Tech’s pursuit of total knowledge and control. Every prompt an employee gives to an AI and every output it generates can be logged, analyzed, and used by the company. In effect, workers’ cognitive labor (their ideas, drafts, and problem-solving steps) becomes raw data for the corporate machine. This practice recalls earlier waves of labor monitoring, but on a deeper level: not just measuring keystrokes or call times, but capturing the creative process itself as data.
Historian Yuval Noah Harari warns that unchecked data extraction and AI could lead to “data colonialism,” where private corporations or states monopolize information to dominate markets and people. In the context of knowledge work, this suggests a new imperialism over human expertise. If a corporation’s AI absorbs all collective know-how and continuously learns from every employee interaction, the company amasses an unprecedented knowledge repository that individual workers can’t match. Harari notes that a corporation harvesting data worldwide can "easily monopolise the sector it operates in". We might already see the seeds of this: companies like OpenAI, Microsoft, and Google control advanced models trained on global data, becoming indispensable sources of “knowledge as a service.” Meanwhile, organizations like P&G or JPMorgan train proprietary AIs on their internal data to avoid dependence on outside platforms. The impulse is clear: in this new scramble for AI supremacy, data is power, and every actor seeks to guard their own trove.
Automation, Control, and the Labor Continuum
None of these developments are entirely without precedent. The pursuit of efficiency and control over work has a long history. In the early 20th century, Frederick Taylor’s scientific management sought to break tasks into measured, repeatable steps, wresting control of craft knowledge from workers and giving managers the power to dictate a “one best way” of working. Today’s AI-assisted workflows can be seen as a digital extension of that impulse. With generative AI, management can standardize cognitive tasks much as machines standardized physical tasks. The AI provides templates, suggestions, and answers that nudge employees toward certain methods. This has echoes of what some call digital Taylorism, where knowledge-based tasks are codified and optimized by algorithms.
Corporate leaders certainly frame AI in these terms. JPMorgan Chase, for instance, is rolling out a custom AI assistant to 140,000 employees with the explicit goal of boosting throughput or cutting costs, an effort the bank values at an estimated $2 billion. Daniel Pinto, the bank’s president, described plans to use AI in “every single process,” saying the bank will either “process a lot more for the same money or spend less.” This drive mirrors the old industrial quest for productivity, but now applied to clerical and creative work. The difference is that instead of stopwatches and assembly lines, the tools are algorithmic prompts and machine learning models. And just as in Taylor’s time, workers find themselves adapting to new methods imposed from above. Now the manager’s clipboard is replaced by an AI platform that not only monitors performance but actively participates in the work itself.
Notably, these changes also provoke worker anxieties and adaptations. Some employees may welcome AI as a productivity enhancer, but others fear it as a threat to their role. If an AI can capture an individual’s hard-earned knowledge and make it reproducible, the individual’s bargaining power diminishes. There are reports of workers quietly using AI tools while keeping their techniques secret, worried that if they reveal how much AI improves their output, they will make themselves replaceable. This underscores that technology alone does not determine workplace power; how it is implemented and by whom matters greatly. Will AI augment workers, giving them superhuman reach, or will it mainly serve to tighten managerial oversight and cut labor costs? The current evidence suggests both forces are at play. Generative AI simultaneously empowers the individual employee with new capabilities and empowers the employer to reshape workflows and capture the value of those capabilities.
Conclusion
In the end, the “cybernetic teammate” may amplify human productivity, but it also quietly recalibrates who holds the reins. Expertise can be democratized even as control over knowledge flows becomes more centralized. For all the talk of AI partners in the office, this new colleague ultimately answers to whoever owns the server.
Jax
Sources:
Harvard Business School Working Paper – The Cybernetic Teammate: A Field Experiment on Generative AI Reshaping Teamwork and Expertise
Ethan Mollick, One Useful Thing – Summary of "The Cybernetic Teammate" study
Michel Foucault – Power/Knowledge concept explanation
Shoshana Zuboff – Harvard STS lecture on AI & surveillance capitalism
Yuval N. Harari – Warning on data colonialism (The Week)
CIO Dive – P&G’s internal generative AI rollout (chatPG)
CIO Dive – JPMorgan Chase deploying generative AI to workforce