It sounds like a science fiction setup: companies are at risk of disruption by AI unless they can convince their secret cyborgs to reveal themselves. But I think it is an accurate summary of the dilemma facing organizations.
To understand what I mean, and why it is so important, we need to start with a basic premise. Large Language Models are a breakthrough technology for individual productivity, but not (yet) for organizations.
The initial evidence suggests that AI can have huge impacts on individual productivity. Early controlled studies have suggested time savings of anywhere from 20% to 70% for many tasks, with higher quality output than if AI wasn’t used. Yet the current state of AI primarily helps individuals become more productive; it does much less for organizations as a whole. That is because AI makes terrible software. It is inconsistent and prone to error, and generally doesn’t behave the way IT is supposed to behave. So, right now, AI doesn’t scale well. But as a personal productivity tool, operated by someone working in their area of expertise, it is pretty amazing.
Today, billions of people have access to Large Language Models and the productivity benefits that they bring. And, from decades of research in innovation studying everyone from plumbers to librarians to surgeons, we know that, when given access to general purpose tools, people figure out ways to use them to make their jobs easier and better. The results are often breakthrough inventions, ways of using AI that could transform a business entirely. People are streamlining tasks, taking new approaches to coding, and automating time-consuming and tedious parts of their jobs. But the inventors aren’t telling their companies about their discoveries; they are the secret cyborgs, machine-augmented humans who keep themselves hidden.
Shadows of AI
There are at least three reasons these cyborgs stay secret. But they all boil down to the same thing: people don’t want to get in trouble.
The problems start with organizational policy. Many companies have banned ChatGPT use, often because of legal concerns that remain somewhat vague, driven by uncertainty over the technology and regulatory worries. And, while these legal teams are doing their job, there is a growing gap between the rumors (AI will steal your data! AI is illegal to use!) and the efforts of AI companies to make their tools usable by businesses by protecting data and meeting legal requirements. For example, Anthropic’s Claude AI is HIPAA compliant, while OpenAI and Microsoft have announced a focus on security and compliance as well. To the extent that large-scale AI prohibitions are warranted, they are likely to be temporary, and companies should start building targeted policies for specific types of use rather than relying on blanket bans.
But these bans are having a big effect: they are causing employees to bring their phones into work and access AI from personal devices. While data is hard to come by, I have already met many people at companies where AI is banned who are using this workaround - and those are just the ones willing to admit it! This type of Shadow IT use is common in organizations, but it means that using AI violates company policy, and is therefore something to keep hidden.
And that isn’t the only reason that AI users fear revealing that they are cyborgs. Much of the value of AI use comes from people not knowing you are using it. The ability of AI to write in ways that seem human is very powerful, but only if people think the writing is coming from an actual human. A couple of weeks ago, I discussed The Button, the write-it-for-me option that will soon be available in every Office and Google application. I now have access to it in Gmail, and, as you can see below, it does an excellent job of generating credible content about complicated and sensitive issues. Everyone is going to be using The Button. We know from research that when people learn they are receiving AI-created content, they judge it differently than if they assume it comes from a human. That is another good reason to keep use secret. Unsurprisingly, when I conducted a bit of an unscientific Twitter poll, over half of generative AI users reported using the technology without telling anyone, at least some of the time.
All of this shadow use leads to the final concern, the justified worry that workers might be training their own replacement by figuring out how to work with AI. If someone has figured out how to automate 90% of a particular job, and they tell their boss, will the company fire 90% of their coworkers? Better to keep usage secret, and avoid any risk.
Revealing the cyborgs
All of the usual ways in which organizations try to respond to new technologies don’t work well for AI. They are all far too centralized and far too slow. The IT department cannot easily build an in-house AI model, and certainly not one that competes with one of the major LLMs (and also: AI doesn’t work like software). Consultants and system integrators have no special knowledge about how to make AI work for a particular company, or even about the best ways to use AI overall. The innovation groups and strategy councils inside organizations can dictate policy, but they can’t figure out how to use AI to actually get work done; only the workers, experts at their own jobs, know that.
So, at least for now, the only way for an organization to benefit from AI is to get the help of their cyborgs, while encouraging more workers to use AI. And that is going to require a major change in how organizations operate.
First, they need to recognize that the employees who are figuring out how to best use AI might be at any level of the organization, with any sort of history or past performance record. No company hired employees based on their AI skills, so AI skills might be anywhere. Right now, there is some evidence that the workers with the lowest skill levels are actually benefiting most from AI, and so might have the most experience in using it, but the picture is still not clear. As a result, companies need to include as much of their organization as possible in their AI agenda. And that means they will need to provide broad training to their workers, as well as build the tools needed for them to share what they have learned, such as crowd-sourced prompt libraries.
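To make the prompt-library idea concrete, here is a minimal sketch of what one shared entry and a simple lookup could look like. The `PromptEntry` structure, its fields, and the example prompt are illustrative assumptions on my part, not a description of any particular tool a company should buy or build.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class PromptEntry:
    """One shared entry in a crowd-sourced prompt library (hypothetical structure)."""
    task: str      # the job task the prompt helps with
    prompt: str    # the prompt text itself
    author: str    # who discovered it, so they get credit
    model: str     # which AI tool it was tested with
    notes: str = ""  # caveats: where it fails, what to double-check
    added: date = field(default_factory=date.today)

# A shared library is just a growing list of entries contributed by employees.
library = [
    PromptEntry(
        task="Summarize weekly status reports",
        prompt="You are a project manager. Summarize the following updates "
               "into five bullet points, flagging any blocked items: {updates}",
        author="j.doe",
        model="GPT-4",
        notes="Always verify the blocked-items list; the model sometimes misses one.",
    ),
]

def find_prompts(keyword: str):
    """Find every shared prompt whose task mentions the given keyword."""
    return [entry for entry in library if keyword.lower() in entry.task.lower()]

print([entry.author for entry in find_prompts("status")])
```

The point is less the code than the fields: capturing who found the prompt and what its caveats are is what turns a private trick into shared organizational knowledge.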
Second, leaders need to figure out a way to decrease the fear associated with revealing AI use. They can offer guarantees that no employees will be laid off as a result of AI use, or promise that workers can use the time they free up with AI to work on more interesting projects, or even end work early. Early studies suggest that workers are often happy to use AI because it removes boring work, so this incentive might be appealing. And this is where organizations with high degrees of trust and good cultures will have an advantage. If your employees don’t believe that you care about them, they will keep their AI use hidden. Psychological safety can help mitigate employee concerns.
Third, organizations should strongly incentivize cyborgs to come forward, and expand the number of people using AI to create new ones. That means not just permitting AI use, but also offering substantial rewards to people who find significant opportunities for AI to help. Think cash prizes that cover a year’s salary. Promotions. Corner offices. The ability to work from home forever. With the potential productivity gains possible due to LLMs, these are small prices to pay for truly breakthrough innovation. And large incentives also show that the organization is serious about this issue.
Finally, companies need to act quickly on some basic questions: what do you do with the productivity gains you might achieve? How do you reorganize work and kill processes that are made hollow or useless by AI (if your cyborgs are automating their performance reviews, what purpose do those reviews serve)? How do you manage and control work that might include risks of AI-driven hallucination and potential IP concerns? There are no easy answers, but AI is here, and already having an impact in many industries and fields. Putting off these questions will only result in worse long-term outcomes. So, prepare to meet your cyborgs, and start working with them to create a new, and better, organization for our AI-haunted age.