Apr 9, 2023 · Liked by Ethan Mollick

Excellent summary, as one would expect! I do wonder what the implications of this are for learning infrastructure – from the design of classrooms to the simple question of do students have the space to study properly at home, if we’re relying more on them doing this – and can we create social spaces for those whose home environments don’t allow for deep engagement with learning?


Re the analogy with the introduction of calculators:

"A practical consensus was achieved. Math education did not fall apart."

Did it not? First of all, mathematics (abstract concepts, definitions, theorems, proofs) is not the same as arithmetic. Calculators (for the most part, ignoring things like Wolfram Alpha) do arithmetic. The relevant metric is "numeracy": numerical skill.

I haven't been able to find any definitive research comparing numeracy in, say, the 1960s to numeracy in the 21st century (and I'm not sure what that research would look like), but at least anecdotally, numeracy today seems to be of great concern in many countries. Is that partly due to the early and widespread use of calculators? Once again, subjectively (I teach math), I'm inclined to guess the answer is yes (although my sample size is relatively small and highly self-selected).

50 years from now, will teachers of literature and essay writing, poetry and written communications (like journalism), have similar anecdotal reactions to the current level of literacy post AI? Will we have students who can't write basic prose (without AI help) alongside the ones who can't do basic arithmetic (without a calculator)? I suspect/fear the answer is yes.


I teach college students at a university in Japan. There are several questionable points in this essay, and in much of the AI-in-ed boosterism:

1. False information in the output: Amazing that this problem is not central in this essay. AI-in-ed boosters routinely downplay or marginalize this, but it's of the essence: LLMs are simply not designed to give correct information. As Yoshua Bengio has pointed out, they need to be united with a world model before this is possible - and a lot will hinge on the quality of that model.

Pretty much every chat I've originated with GPT-4 has had errors to a greater or lesser degree, even when asking it about itself (e.g. cut-off date). You can't simply count on this being fixed someday, while recommending LLMs as teachers today. And in a classroom situation, this problem is compounded by the facts that (i) two students working on the same assignment won't necessarily get the same output, and (ii) they often won't have the domain knowledge to find the mistakes.

2. Cheating: Sure there has always been cheating, but that's a red herring. Anti-cheating policies are best understood as a deterrent: they make the well-intentioned majority of students hesitate to do the wrong thing. More students will be induced to cross that line by the facts that GPT-facilitated cheating is (a) much cheaper and easier to access than, say, buying a paper from a third-party vendor, (b) easier to fine-tune to a specific instructor's assignment than such a pre-fab paper and (c) difficult-to-impossible to detect.

3. The calculator narrative: This is specious in so many ways. 20 students with calculators should get the same answer; 20 students using ChatGPT can get 20 different answers, each riddled with its own assortment of false stuff. Arithmetic is not the fundamental way that humans communicate; but language and truth are each essential to communication and the trust that holds our societies together. At the same time, there are contexts where we want people to use calculators (with printable output), such as when calculating restaurant tabs and commercial invoices -- but there isn't any context where LLM output is going to be preferred to human output, outside of contrived academic assignments and capitalistic demands for making thought-intensive tasks less thoughtful and more efficient, for the sake of profit.

Past generations worried about students losing skills that were important, given the technologies of the day: clear handwriting, arithmetic calculations by hand. To some extent we've muddled through with the loss of the first, but much less so with regard to the second. (Reality check: most of my students are in business or economics programs, and most of those students avoid quantitative exam questions like the plague, even when no more than high school math is needed and calculators are allowed.) In the 21st century, verbal communication, originality, and truthfulness are far more central to our polity and our culture than handwriting, or even arithmetic.

My wife and I had lunch today with a friend who has published 10 books of fiction to date, and who has twice been nominated for the top literary prize in Japan. She said she owed everything to very tough and persistent editors.

Where will those editors -- and authors -- come from 20 or 30 years from now if AI-assisted composition -- literally a form of regression to the mean -- becomes the norm for well-educated citizens? Where will our politics be if everyone speaks with the same Velveeta processed cheese product voice? (Or gets the AI to write it in the style of a pirate, or Churchill, or some other simulation of a stand-in who actually had a style?)

4. The question that really puzzles me is how pundits like Prof. Mollick can be so darn certain of the beneficial long-term social impact of LLMs when the latter have only been available to the public for a few months. Surely people who have lived through the dot-com bust and the subprime bust should be able to foresee there is at least a chance they'll be writing "what were we thinking?" op-eds a few years from now.

The argument goes: We live in a world that's already filled with easily-available LLMs, and the cat's out of the bag: it's virtually impossible to control their spread. We need to teach our students how to live in such a world. Therefore, we should encourage them to use this stuff. But let's reframe the premises: for "LLMs," substitute "oxy and crystal meth." The premises remain true, but I don't think we'd accept the conclusion anymore. And the analogy isn't so far-fetched, since meth used to be legal and prized for its efficiency-improving effects. Maybe we should approach our use of LLMs in the classroom rather more skeptically and cautiously than the AI-in-ed lobby is pushing us to do.


Agreed. The analogy of calculators is relatable, but it doesn't guarantee that AI will follow exactly the same path; the trajectory is surely different. My concern is whether AI will remain a tool, an enabler, an aid in what we want to do, rather than we humans becoming its subjects. Fingers crossed, but I doubt it: I fear it will dilute the cognitive and intuitive abilities of learners and leave them more dependent on, even handicapped by, these modern technologies.


Hi. I happen to be reading the book THE TYCOONS by Charles Morris. The technological changes of the 1870s ended the 'one-room schoolhouse' and remade education and society in ways unimaginable at the time. One change in the present day may be education with no age limit.


Prof. Mollick,

As always, your posts on AI are insightful. I have been relentlessly sharing them.

I did note a potential typo:

“Attitudes shifted quickly, and by the late 1970s parents and teachers both became more enthusiastic by However,”

Right after the little Professor photo. I believe it should read:

“Attitudes shifted quickly, and by the late 1970s parents and teachers both became more enthusiastic. However,”


Hello Mr. Mollick

I've learned about 'flipped classroom' from your article. Thanks a lot. Your work matters.


I agree with the great opportunity in teaching. I always felt more challenged in a didactic teaching environment than in a passive one, such as lecturing or reading. So with AI, you have a tireless tutor who can always come up with the "next" question and expect a student to draw certain conclusions. Deductive reasoning is championed. However, boundary conditions will need to be set up to avoid confabulation.


This is a well-articulated argument for the reality that is now fully upon us. From a developing country perspective, however, one has to sensitize us all to the unequal education infrastructure that currently plagues us and how this reality pretty much exacerbates such inequity.

Platforms like Khan Academy are currently accessible only to those with internet access, and YouTube requires strong bandwidth; both exist for those we deem privileged. These are just the basics!

AI has impacted education in a way that has made it clear that access to certain infrastructure should be a basic human right and without such a right met, then we are certainly creating inequality deliberately yet again.

However, to return to the argument you make for AI-infused learning: there is no doubt that changes are upon us, and they are shifting the paradigm of teaching and learning. Assessment policies and procedures need to be revised, and critical thinking has to be examined rather than the ability to repeat content. More engaged teachers are needed in order to create more engaged learners. Moreover, a clearer vision of the future world of work is necessary if we are going to adequately link learning outcomes to the future jobs that lie ahead for young people.

With all these really big questions, we need to go back and really think through what we are creating here! What world are we creating, and what people will occupy it? Answering these questions will ease the anxiety of policymakers and citizens alike and bring much-needed confidence into an education system that is clearly anxious.


You touch on some of the biggest points about AI in the classroom. Great work. I've heard the calculator analogy a lot, and I'm not sure how well it fits, but your second point is really interesting!


A great article - plainly stated and with forward-looking ideas. Someone should create flipped-classroom programs to beta test in certain fields and get the content out ASAP.


As a math teacher I somewhat agree with the calculator analogy, but not completely. I see students, every day, get hung up on 6x7 when figuring out 6x7 isn't the root of the problem they are solving. Yes, they can pull out the calculator to figure that out, but already knowing what 6x7 is automatically thrusts the student forward into a more comfortable position to solve the problem at hand. To know what AI knows and not have to ask... or to learn what AI knows and not have to ask... what's going to put students into more stable positions for higher-level learning and problem solving?

Consider this: I ask students to design a roller coaster ride using what they've learned in Pre-Calculus. Instead of working together to develop one they type it into AI and it creates the roller coaster ride for them. What skills have they skipped developing in not going through the process of creating the ride themselves? Are these skills we consider non-essential? I think we need to be very, very careful about haphazardly allowing the use of AI ...


Oral exams and small group discussions/debates have always been a better gauge of someone's true grasp of a subject, at least when properly evaluated by a teacher.

Essays serve their own purpose, but as a writer, I feel weird saying that they may now have served that purpose and become irrelevant for most students going into the future.

AI could even hold Socratic-style conversations, teach, and properly evaluate a student's knowledge, "face-to-face", one day.


Interesting article provoking thoughtful discussion in the comments! My question is whether, as you wrote, listening to a lecture is truly passive learning? Even though we can't "see" activity occurring, if the audience is truly listening, then surely they are actively learning as they think about and evaluate what they hear. This is why we listen to podcasts and read blogs. These are surely active forms of learning, as our brains are (ideally!) actively engaged with what we hear and read.


Thanks for this thoughtful, insightful summary. It got me thinking how I can use AI to help generate active learning exercises for my own teaching.


I asked GPT to design a rubric for an intro-to-nutrition course and then teach me based on the rubric. So far, it's been excellent. It's easy to take detours when it mentions a topic I'm interested in. It asks engaging questions to check my understanding, and so far, none of what it's shared is incorrect.

The biggest problem is that I feel compelled to check constantly, especially when it mentions anything that feels deep in the science.
