
Something I’ve discovered about myself after using AI to help me write and prep for interviews is that it’s operating much like GPS has for me. I used to use maps and learned to memorize streets and addresses (similarly with phone numbers), but now that I use GPS for navigating ALL THE TIME, I have no idea where anything is. I don’t remember which freeway number is my turnoff or what’s on the corner that used to cue my turn. I’ve become dependent and MUST use it now. Same with my writing... I’ve become lazy about even trying to write anything. I look at notes as I give interviews rather than relying on my own knowledge of my history.

So this is the danger: we become dependent and lazy, letting the tools do all our work, which will probably make us all more similar and far less creative.


Thanks for the great read, Ethan :)

Personally, I believe the elevator and kingmaker models are both true simultaneously.

GPT-4 is an intellectual tool, but since that is a bit more abstract, let me compare it to a physical tool for a second - a chainsaw.

Prior to the invention of the chainsaw, when high-skill (HS) and low-skill (LS) axe-wielders wanted to chop down a tree, they had to bring a lot more to bear: the angle of the swing, the grip on the haft, the optimal pacing to make decent progress without burning through their energy. In other words, a mix of knowledge and muscle memory; a set of skills.

After the chainsaw's invention, HS axe-wielders learned that some of their strategies were transferable to slightly increase their effectiveness, but overall, the benefit was just in the job getting done more easily. For LS woodcutters, the chainsaw was a godsend. Most of the challenging aspects they lacked skills for were essentially automated away by the steady chug of the engine.

But on the whole, both HS and LS axe-wielders became LS chainsaw-wielders. What does an HS chainsaw-wielder look like? Just google "chainsaw sculpture" for an idea of what's possible. Or "chainsaw ice sculpture" for a twist.

It's easy to pick up a chainsaw and learn the basic uses of the tool to get a quick benefit from it, but harder to master it on difficult, even previously impossible, tasks. Please correct me if I'm wrong, Ethan, but I don't suppose many of the high performers in this study had specifically studied how to leverage the tools even more effectively (i.e., prompt engineering).

I'd be very interested in the results of a follow-up study comparing the gains from AI-assisted professionals against a similar group that receives a couple of weeks of instruction in prompt engineering.
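To make the prompt-engineering point concrete, here is a minimal sketch in Python, assuming the OpenAI client library; the model name, prompts, and scenario are purely illustrative and not taken from the study:

```python
# A rough sketch of the gap between an untrained prompt and an
# "engineered" one. Assumes the OpenAI Python client (pip install
# openai) and an OPENAI_API_KEY in the environment; the model name
# and prompts are illustrative only.
from openai import OpenAI

client = OpenAI()

# What an untrained user might type: a bare request.
naive_prompt = "Give me ideas for a new shoe."

# What a trained user might write: a role, context, constraints,
# and an explicit output format.
engineered_prompt = (
    "You are a senior product strategist at a footwear company. "
    "Propose 5 product concepts aimed at urban commuters. For each, "
    "give a one-line pitch, the target customer, and the main "
    "manufacturing risk. Format the answer as a numbered list."
)

for prompt in (naive_prompt, engineered_prompt):
    resp = client.chat.completions.create(
        model="gpt-4",  # illustrative; any chat model would do
        messages=[{"role": "user", "content": prompt}],
    )
    print(resp.choices[0].message.content, "\n---")
```

The second prompt tends to produce output a consultant could actually use; a couple of weeks of instruction is largely about internalizing patterns like this.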

---

Anyway, that's my whole elevator argument. As for the kingmaker scenario, I think that's reserved for the "artists" - the Michelin-chef-level executives who aren't just interested in productivity but are passionate, with dual expertise in their subject matter AND in working with LLMs.


Thank you for this work. It is one of the few constructive arguments since the AI discussion took off 6 months ago. 1+1=3


I think we should also recognize that this is a dynamic process: some firms and some workers will adopt and embrace AI augmentation faster than others. Even among those that adopt it, some will do so better than others. Hence, even if AI is an equalizer at some hypothetical equilibrium, in practice it may be a kingmaker due to variation in the speed at which it is embraced and well applied.

And of course, AI is advancing at a breakneck speed such that almost no one can consistently stay on the frontier. Early adopters may even find themselves developing intuitions and habits that are counterproductive for applying subsequent, more advanced tools.


Very interesting article. I also read the paper. It provides great insights. A few thoughts:

- Did you ensure that participants were not already using AI? Top performers may already have adopted AI, which could skew the results.

- “ChatGPT + overview” is like comparing a first-time driver to someone who has just had their first driving lesson... the jump in knowledge and skills is usually huge when you start from zero, but it does not make you a good driver.

- If BCG were to adopt AI, they would put their consultants through in-depth AI training before using it in the field; that could be the next study to (in)validate these findings.

- Questions within the frontier asked participants to generate 10 ideas. How realistic is this? It's asking them to produce lots of garbage fast; in the real world, a BCG consultant wouldn't last two minutes in the exec room with that many ideas. Did top consultants actually perform better because they knew generating 10 ideas was unrealistic? Is the paper jumping to conclusions on productivity based on tasks designed to produce "some output," with no connection to productivity that can be monetized (would a client pay for the task output)?


In the examples where the uplift for "bottom performers" was higher, how do you validate that it is not simply regression to the mean?

What was the "baseline" vs "experimental" change for the top/bottom groups in a case of no intervention?


This was an interesting read. A few thoughts:

Would it lead to below-average performers not learning, not building tacit knowledge, and outsourcing their thinking to a tool? In most cases, work is eventually more than producing a document or an options paper. How much can an LLM help with implementing the strategy/solution/software, and would an LLM make a below-average person as efficient as someone highly motivated with a lot of tacit knowledge?

How would it work in a company where, unlike BCG, people are not highly motivated? Would we see the same level of performance improvement?

Would we all start writing and sounding very similar over time? As everyone uses the same or similar tools, the LLMs' biases and solutions will become our biases and choices. Humans tend to outsource thinking, as it is very taxing on the brain. So people who do not use the LLM first (people who think first and then give their output to the LLM to enhance) will have an advantage over people who reach for the LLM first.

The goal of the LLM is to make everyone above average, but at what cost? Losing our originality and outsourcing our thinking and our brains?


Here's a PDF link to the paper that doesn't obnoxiously try to force you to accept a cookie before letting you download it:

https://www.hbs.edu/ris/Publication%20Files/24-013_8f3583c2-2e9a-4379-9697-a93bd6a84133.pdf


This is such a relevant article to what I am working on right now. I think you are right on the money!


This is one of the best articles I've read that has asked these important questions and done so in such a clear and forthright manner.

Wait a minute... Did AI help write this article..?

I'm kidding. Sort of.


Fantastic article on a very important subject. This is my first read of yours, and I look forward to many more. Well done.


We can look to the music industry, where this pattern (speaking of ML...) has already played out. Music-creation software such as Pro Tools has allowed average-to-below-average creators to make songs more easily. We now live in a sea of musical mediocrity: thousands of tepid songs, only a small fraction of which have any creative or commercial quality. A lower-skilled guitarist can now purchase a low-priced software module that gives them the same sound a pro guitarist spent years, and expensive gear, attaining. Certainly, high-skilled musicians wield these tools surgically and opportunistically (or not at all), but the others, risen from the big part of the Gaussian curve, churn out gallons of blandness. I fear this pattern will recur in the age of LLMs.


If experience (or muscle memory) is a big factor in the productivity gap between top and bottom performers, this generation of AI models, having already absorbed the relevant knowledge, will bring bottom performers close to or at par with top performers, provided they know how to make the best use of these models (what you refer to as Cyborgs).

Based on all I have read and seen thus far, the benefits of these AI models will be distributed unevenly, but this time the axis will not be experience; it will be how well performers utilize and integrate AI into their respective workflows.


I love Ethan's writing, and I sadly haven't been able to find many (any?) other people writing about AI with the same clarity and insightfulness. Anyone here have any recommendations??


Well, the claim that you become good at writing with AI is highly suspect. Good at copy-and-paste, maybe. Writing is a craft, a creative process. There is a big difference between someone writing out of deep experience and auto-generated text that may at times be hallucinating. On the surface, it may be difficult to tell the difference. But it is there.

What do you think about AI companies training their systems on your writing? It's already started. Read about this important topic in the post: https://boodsy.substack.com/p/the-ai-bots-are-coming-for-your-substack


"...what do companies do in response? Hire less skilled workers and have them boosted by AI? Expect more work out of all their employees? Focus on working with employees so that they become Cyborgs? Or are they tempted to cut wages or headcount?"

In my experience working to develop AI systems in a laboratory context, the vision from the C-suite certainly seemed to be to use AI to reduce labor costs (primarily through worker attrition and reduced hiring). I would encourage those folks to think bigger and see how AI can be used in the Cyborg/Centaur model to boost efficiency and improve accuracy, rather than grasping at the low-hanging fruit of cost cutting (granted, some of that may happen anyway as a side effect, but I don't think it should be the primary goal).
