22 Comments

One way to detect when something was written by a human (or at least edited by one) is the subtle mistakes that an LLM wouldn't have made.

There are at least two in this article - were they on purpose?

Nice piece, Ethan - very thought-provoking. It'd be great to get your opinion on a few of the things you mentioned:

- Does the replacement of some people by AI necessarily mean only that the same amount of work is now done by the AI? I would assume AI allows for a lot more scaling, in terms of both volume and speed. Panopticon or not, some positions will certainly be eliminated entirely.

- Ultimately, wouldn't this replacement be a function of competitive costs rather than simply employee policies? It's a bit like AI adoption right now: ever since OpenAI let the cat out of the bag, everyone in Big Tech has had to follow suit. They weren't far behind, introducing competitive offerings pretty quickly thereafter, but they hadn't made the first move. OpenAI forced them to.

- Like you rightly point out, educators will need to worry as much about which skills remain relevant as about cheating. The objectives of evaluation, and even of teaching, might become as important as worries about how to evaluate learning.

We recently did a piece on how AI would affect the MBA, and many similar concerns came up there.

AI will sadly be yet another tool for the wealthy to exploit the poor.

It's great to hear sensible discussion about AI, but I keep seeing an important point being missed: the four-day working week.

The creation of the PC was going to change our way of life in the '70s and '80s and make things easier for employees. Instead, the extra productivity granted by the humble computer with its 8MB of RAM was used to extract more profit from workers. There was no four-day working week back then, and rather than discussing that possibility, people are more concerned about deepfakes and sentience. AI should be about getting our lives back and having more time on this earth to spend doing what we want.

The problem is that decisions about the mental health and wellbeing of the workforce are out of our control. Leaders see dollar signs with AI and forget that most employees want more time with their families - who really wants to work?

Finally! Someone with a brain!

Here are some of my thoughts on AI in education:

There are two aspects of our current education system: teaching/learning and evaluation.

ChatGPT and LLM AI have provided students with a way to "trick" the evaluation part of the system - "trick" in the sense of producing work that passes the evaluation metric without actually attaining the level of learning the metric is supposed to authenticate. This means teachers definitely have to re-examine, and possibly reconstruct, their evaluation frameworks to avoid this.

It is much less clear that LLM AI will be of much use for teaching/learning. After all, cogent analyses and summaries of whatever topic is at hand are almost always already provided by the teacher as part of the course material. The challenge, I think, is typically to get the students to engage with the material themselves. The fact that LLM AI can generate analyses/summaries similar to those already available to the students won't affect this basic challenge facing teachers.

Arguably, LLM AI will be able to produce more specific analyses/summaries customized to whatever difficulties a particular student is facing: that might make teaching/learning more efficient, saving the student the time of hunting through the source material for the bits relevant to their particular understanding. On the other hand, the skill of filtering a large pile of information, deciding what is relevant to the topic at hand and what is not, will probably atrophy if we outsource it to the AI. Is that a step forward in terms of learning? I think probably not.

For example, it would be easy for a student to ask an AI to generate an essay on the lead-up to any major geopolitical conflict, looking for the underlying causes and the important decision/inflection points. Even after careful engagement, reading and absorbing the logic implicit in the AI-produced essay, I think the student's understanding will fall far short of that of a student who looked through the available sources and made up their own mind on the topic.
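To make concrete just how low the barrier is, here is a minimal sketch of such a request, assuming the OpenAI Python client and an API key in the environment; the model name and the essay prompt are placeholders, not part of the original point:

```python
# Minimal sketch: a student requesting a finished essay from an LLM.
# Assumes the OpenAI Python client; model name and prompt are placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{
        "role": "user",
        "content": ("Write a 1,500-word essay on the lead-up to World War I, "
                    "covering the underlying causes and key inflection points."),
    }],
)
print(response.choices[0].message.content)
```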

Getting the AI to write the essay, even including the time for the student to genuinely engage with (read and absorb) it, is undoubtedly more "efficient" (in the sense of less time-consuming) than wading through all the incomplete, contradictory, sometimes wildly nonsensical perspectives on the actual events. But once again, the real loss is not in the understanding of that particular geopolitical event, but in not developing the skills to reach valid, defensible conclusions yourself.

Is reliance on AI a slippery slope to a "group think" crisis of Biblical proportions? It certainly might be!

A more specialized example: math teachers are well aware of the difference between a student who can follow/understand and correctly present a proof (something AI can probably help with) and those who can prove something on their own (something frequent reliance on AI might actually impede).

“a situation [J.R.R. Tolkien] termed a eucatastrophe, so common in fairy tales: ‘the joy of the happy ending: or more correctly of the good catastrophe’ ”

aka ‘deus ex machina’: something *implausible* introduced into a story to resolve a problem with the plot.

There might be a happy ending rather than an apocalypse but there will certainly be a period of painful disruption. Technology change is a leading indicator of social change.

Excellent read, and lots to think about! I share the view that there is no need for the growth of AI to spell disaster; but we do need people to step up in their organisations and take responsibility for planning and introducing AI to empower people, rather than waiting for shareholders and bad managers to use it as a blunt instrument to squeeze more profit out.

I think there's a space opening up here for workers to recommend and advise on where AI can improve jobs - both for themselves and their employers. Unions should be all over this, and in less-unionised workplaces, workers should beware the experts who may be brought in to 'modernise' business practices! There are many paths we can walk from here; but employees will need to advocate for themselves and resist attempts to simply squeeze more productivity out of them.

I have liked all of your articles and am directly inspired by them in some of my teaching practice.

However, I think you are wrong about the effect on labour, in both the positive and the normative sense.

Your argument seems to be that individual companies could benefit (at least over some time horizon) from maintaining their workforce, as this gives them a competitive advantage against rivals. But this implies the rivals will be failing, and jobs are lost there. So at the sector level, where there are big productivity gains, we should expect to see fewer workers.
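A toy arithmetic sketch of that sector-level logic, with every number invented purely for illustration:

```python
# Sector employment ~= output demanded / output per worker.
# All figures below are invented for illustration only.
output_demanded = 1_000_000      # units per year
output_per_worker = 100          # units per worker per year

workers_before = output_demanded / output_per_worker               # 10,000 jobs

# Suppose AI doubles productivity while demand grows only 20%:
workers_after = (output_demanded * 1.2) / (output_per_worker * 2)  # 6,000 jobs

print(f"{workers_before:.0f} -> {workers_after:.0f}")
```

Unless demand grows as fast as productivity, the sector sheds jobs even if each surviving firm keeps its own workforce.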

I would also say this is desirable as long as the shock is not too sudden. The reason we aren't all out in the fields growing food is because of massive productivity gains in agriculture and few of us see this as a bad thing. Why should this be different?

Great food for thought. When I was young, computers and automation were going to bring us all leisure time. Seriously, people were discussing what we would do with all that free time.

The purpose of a publicly listed corporation is to maximise returns for shareholders. This provides all the information we need to guess how the future will pan out over the next ten years.

One thing is certain: AIs offer such advantages in military technology that they are not stoppable. They will not be banned. China is committed to autonomous AI fighting units, and NATO operators will only intervene with a crude kill/no-kill switch. Just imagine a swarm of 6cm-long AI "bugs" swarming along a trench or into a house, each equipped with a small warhead, with the mother bug waiting outside to resolve complex issues.

This technology really is scary, the stuff of nightmares.

The bugs will be perfected once AIs can write AIs. This will happen sometime between 2025 and 2030.

Given the shortage of high tech workers, this may be the force multiplier that is badly needed in the industry.

Love it! While it will take a generation or more for the effects of accessible education to get rolling, this period is a renaissance-esque inflection point.

Education is the #1 best way for individuals to improve their lives and has a crazy network effect.

Can't wait to be an old man and see how big the world's brain has gotten.

Agency? From Big Tech to automation to AI that learns by itself, the very loss of agency is what some of us are worried about most. I understand you work in education, but I don't find the argument persuasive at all.

We don't even know what the future of work will look like in a post-AGI world, and that is part of the problem. Teachers and professors will certainly become mostly obsolete, because the old system we have just isn't working, or even mildly preparing us for what is to come.

Thanks for another thoughtful piece, Ethan. I respect the concerns, both existential and moderate, that many have raised, and I am in no way sweeping them aside. I do, however, want to raise an alternative framing of the issues.

A lot of the dystopian view is based on currently unknown pathways and outcomes. Folks expressing these views often describe them in probabilistic terms: the potential for a particular dystopian outcome is >0%, and therefore, given the gravity of that outcome, even a tiny chance is an unacceptable risk level. I rarely hear a similar framing for a utopian view. What is the chance that advanced AI is the answer to preventing an asteroid hitting the earth? What is the chance that AI could be the tool that ultimately allows us to solve the global warming crisis, cure cancer, or solve world hunger?

Like every significant technology to date (fire, metallurgy, electricity, computers), AI will undoubtedly be used both for human progress and for evil deeds. Without taking a balanced approach that weighs the pluses along with the minuses, we won't be able to create optimal policies as a society.
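To see why the framing matters, here is a toy expected-value sketch; every probability and payoff below is invented purely for illustration:

```python
# Toy expected-value framing; all numbers are invented for illustration.
p_catastrophe, cost_catastrophe = 0.01, -1000    # small chance, huge downside
p_breakthrough, gain_breakthrough = 0.01, +1000  # equally small chance, huge upside
p_mundane, value_mundane = 0.98, +10             # most likely: modest net benefit

expected_value = (p_catastrophe * cost_catastrophe
                  + p_breakthrough * gain_breakthrough
                  + p_mundane * value_mundane)
print(expected_value)  # 9.8: the tails cancel when weighed symmetrically
```

Counting only the catastrophic tail gives -10; weighing both tails symmetrically, they cancel, and the most likely, mundane case dominates.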

All the more reason to:

1. Teach Critical Thinking skills in high school, as a designated course.

2. Get ahead of AI adoption by teaching how to use it (and how it can be misused). Again, a required HS course. The curriculum would have to evolve constantly, and rapidly, but that is OK.

The teaching profession will be resistant to AI because there are aspects of the daily workflow for which AI will not initially provide relief: communicating with parents, non-teaching administrative duties, collaborating with colleagues, giving feedback on student work, prepping for the next class, adjusting lesson plans, faculty meetings, and so on. Schools should reorganize to make better use of technology (including AI) and become more flexible.

Excellent, and it gives a path to avoid a dystopian mindset.
