Conversations about the future of AI are too distantly apocalyptic.
And I get it: there are serious people who are very worried that AI will become sentient one day soon, and that we will create a new Machine God that might murder or save us all. Discussing that seems important, as does discussing the much more mundane and immediate threats of misinformation, deep fakes, and AI-enabled proliferation.
But this focus on apocalyptic events also robs most of us of agency. AI becomes a thing we either build or don’t build, and no one outside of a few dozen Silicon Valley executives and top government officials really has any say over what happens next. But the reality is that we are already living in the early days of the AI Age, and, at every level of our organizations, we need to make some very important decisions about what that actually means. Waiting to make these choices means they will be made for us.
The coming disruption
Regardless of any pauses in AI creation, and without any further AI development beyond what is available today, we already know that AI is going to impact how we work and learn. We know this for three reasons. First, AI really does seem to supercharge productivity in ways we have never seen before. Early controlled studies show large improvements on work tasks, with time savings of 30% or more and higher-quality output for those using AI. Add to that the test scores achieved by GPT-4, and it is obvious why AI use is already becoming common among students and workers, even if they are keeping it secret.
We also know that AI is going to change how we work and learn because it is affecting a set of workers who have never faced an automation shock before. Multiple studies show that the jobs most exposed to AI (and therefore the people whose jobs will change the most as a result of AI) belong to the most educated and highly paid workers, the ones with the most creativity in their jobs. The pressure on organizations to take a stand on a technology that affects their most highly paid workers will be immense, as will the value of making these workers more productive.
And we know disruption is coming because these tools are about to be deeply integrated into our work environments. Microsoft is releasing its GPT-4-powered Copilot for its ubiquitous Office applications, even as Google does the same for its own office tools. And that doesn’t count the changes in education, from Khan Academy’s AI tutors to recent integrations announced by major Learning Management Systems. Disruption is fairly inevitable.
But the way this disruption affects our companies and schools is not inevitable. We get to choose what happens next.
Every organizational leader and manager has agency over what they decide to do with AI, just as every teacher and school administrator has agency over how AI will be used in their classrooms. So we need to be having very pragmatic discussions about AI, and we need to have them right now: What do we want our world to look like?
Choices and consequences at work
As a widely used General Purpose Technology, AI will impact many industries in many different ways. There is no single rulebook to follow; each industry and company will have to make many choices about how to react to AI in the coming months and years. Here are a few topics to start discussing:
What do you do with the extra efficiency? Assume the early studies hold and we see productivity improvements of 30%-80% on various high-value professional tasks. I fear the natural instinct among many managers is “fire people, save money,” but it does not need to be that way, and it shouldn’t be.
There are many reasons for companies not to turn efficiency gains into headcount or cost reduction. Companies that figure out how to use their newly productive workforce should be able to dominate those who try to keep their post-AI output the same as their pre-AI output, just with fewer people. And companies that commit to maintaining their workforce will likely have employees who act as partners, happy to teach others about the uses of AI at work, rather than scared workers who hide their AI use for fear of being replaced. Psychological safety is critical to innovative team success, especially when confronted with rapid change. How you use this extra efficiency is a choice, and a very consequential one.
How will you use AI to increase employee flourishing? The value of many previously prized kinds of work is likely to decline. For example, persuasive writing and basic analysis are tasks AI handles well, yet they were previously rare and valuable human skills. How will your organization address the changing nature of job tasks?
There are hints buried in the early studies of AI about a way forward. Workers, while worried about AI, tend to like using it because it removes the most tedious and annoying parts of their job, leaving them with the most interesting tasks. So, even as AI removes some previously valuable tasks from a job, the work that remains can be more meaningful and higher value. But this is not inevitable, so managers and leaders must decide whether and how to commit to reorganizing work around AI in ways that help, rather than hurt, their human workers. You need to ask: what is your vision of how AI makes work better, rather than worse?
How do you reorganize work? The systems we use to manage and control work are built around our current technological and organizational constraints. The modern organization chart and structure, after all, were first developed to run railroads in the 19th century. And even more recent systems, like agile software development, are built around the limits of human cognition and management. With AI, there is the opportunity, and even the necessity, to change how work is organized. That could be a Panopticon, where every move is monitored by AI; or a world where people take on more self-directed roles, using AI to accomplish more than they did before; or something else altogether. Again, choices will differ across organizations and require careful thought about both consequences and advantages by organizational leaders, along with a lot of experimentation.
Choices and consequences in education
AI is just starting to affect companies, but educators were the first to see the disruption it brought when ChatGPT was released last year. Now, AI cheating is fundamentally undetectable, and it isn’t even always clear what cheating means (using an AI to outline a paper? correct your work? explain a problem?). AI passes most of our hardest tests. It shows promise as a tutor. The changes brought by AI are already here, and teachers and administrators need to make choices now. Here are a few vital topics to consider:
How do we realize the gains of AI? The impossible dream of personal tutoring and instruction for each student may finally be achievable with AI - but classrooms are not going away, nor should they. In fact, it seems likely that educators can use AI to boost classroom learning while reducing their workload. But efforts to improve education with AI are scattered and idiosyncratic. With so many possibilities, we need to start considering, right now, how best to take advantage of these tools in education, because they are already here.
How do we replace what we are losing? Having someone else do your homework destroys the value of homework. Having AI write your essays destroys the thinking that essays engendered. AI makes many older learning techniques obsolete, either because it enables the rapid expansion of cheating or because it feels pointless to teach skills that newer AI tools have superseded. That is not always a good thing, as AI disrupts some of the most important lessons about how to think and write. Teachers need to consider, realistically, what they will do to replace these lost lessons. Some solutions will be crude (more handwritten essays in class, more oral exams), but we may also be able to find better paths forward.
How do we make this universal? AI can be a force that profoundly improves educational opportunities for people all over the planet. But we don’t yet understand enough about its value and limits. Educators need to start testing, and sharing, what they learn, so we can understand who is left out, and who is helped, by the rapid embrace of AI in education. We can live in a world where AI helps personalize learning for students of all ability levels and backgrounds, or we can continue long-standing inequities. What education leaders do in the coming months will play a big role in how AI is viewed, and adopted.
Catastrophe / Eucatastrophe
AI will transform some industries more than others, just as some jobs will change greatly while others remain as they always were. Right now, no one can tell you exactly what will happen for any particular company or school. And any advice will be obsolete when the next generation of AI is released. There is no outside authority. We have agency over what happens next, for good and for bad.
Rather than just being worried about one giant AI apocalypse, we need to worry about the many small catastrophes that AI can bring. Unimaginative or stressed leaders may decide to use these new tools for surveillance and for layoffs. Educators may decide to use AI in ways that leave some students behind. And those are just the obvious problems. But AI does not need to be catastrophic. In fact, we can plan for the opposite. J.R.R. Tolkien wrote about exactly this situation, which he termed a eucatastrophe, so common in fairy tales: “the joy of the happy ending: or more correctly of the good catastrophe, the sudden joyous ‘turn’… is a sudden and miraculous grace: never to be counted on to recur.” Correctly used, AI can create local eucatastrophes, where previously tedious or useless work becomes productive and empowering, where students who were left behind find new paths forward, and where productivity gains lead to growth and innovation.
The thing about a widely applicable technology is that decisions about how it is used are not limited to a small group of people. Many people in organizations will play a role in shaping what AI means for their team, their customers, their students, their environment. But to make those choices matter, serious discussions need to start in many places, and soon. We can’t wait for decisions to be made for us, and the world is advancing too fast to remain passive. We need to aim for eucatastrophe, lest we encounter its opposite.