Apr 9, 2023 · Liked by Ethan Mollick

Excellent summary, as one would expect! I do wonder what the implications of this are for learning infrastructure – from the design of classrooms to the simple question of do students have the space to study properly at home, if we’re relying more on them doing this – and can we create social spaces for those whose home environments don’t allow for deep engagement with learning?


I think you touch on an important concern here. Anecdotally I know a lot of students who feel that the flipped classroom architecture does not work well for them and not having an environment to be able to deeply engage has got to be part of the equation.

I also appreciate your broader question about infrastructure and framing. If we change the mode of instruction, we should be mindful of other changes in the support structure that need to be made to make it successful for both students and educators.


I have heard similar things. I think at least some of it comes down to whether the module/course/individual session has been designed for flipped learning or just sort of squashed into that box. We are having a lot of engagement with a modified model where there are sometimes lectures, sometimes flipped content, and then always the whole group (200 students) broken into smaller groups for workshop/interactive sessions. Lecture attendance can still be spotty (we also offer recorded lectures if students are absent), but the whole-group interactive sessions are astonishing, with 80%+ rocking up, if not over 90%. Now, this is being led by two phenomenal lecturers, but still, it is both interesting and reassuring!


That lengthy digression aside, I really liked the focus here on AI as something that might help us do much better at personalising learning; there is still a lot of discomfort and focus on the “cheating” aspect circulating now. (Which makes sense! This is new for most people and also seems almost certainly to require us to do a lot of work to incorporate it well, without the option of leaving well enough alone and “not” engaging, as it is very clearly going to be used either way.)


I like this model! Smaller workshop & interactive activities are great.

Also agree that while academic dishonesty is a necessary conversation, I’m glad it’s broadening beyond that with a more hopeful view.


Re the analogy with the introduction of calculators:

"A practical consensus was achieved. Math education did not fall apart."

Did it not? First of all, mathematics (abstract concepts, definitions, theorems, proofs) is not the same as arithmetic. Calculators (for the most part, ignoring things like Wolfram Alpha) do arithmetic. The relevant metric is "numeracy": numerical skill.

I haven't been able to find any definitive research comparing numeracy in, say, the 1960s to numeracy in the 21st century (and I'm not sure what that research would look like), but at least anecdotally, it seems that numeracy today is of great concern in many countries. Is that partly due to the early and widespread use of calculators? Once again, subjectively (I teach math), I'm inclined to guess the answer is yes (although my sample size is relatively small, and highly self-selected).

50 years from now, will teachers of literature and essay writing, poetry and written communications (like journalism), have similar anecdotal reactions to the current level of literacy post AI? Will we have students who can't write basic prose (without AI help) alongside the ones who can't do basic arithmetic (without a calculator)? I suspect/fear the answer is yes.


Thanks for this. I like the way you posed the question and extended the analogy to imagine what we’ll think 50 years down the road. A good reminder that we should continue to examine what we teach and why. I’ve been leaning on the analogy of finding ways to use AI as a ladder and not a crutch. Your line of questioning resonates with that, I think.


Humans will always take the path of least resistance. You can’t control the process or force a path. When you do that, you just alienate the subject even more. We should just accept that arithmetic skill in this era is not a required skill, neither is writing and memorizing vocabularies in the coming future. It’s just different. Unless you are preparing for an apocalypse.

(this was grammatically and syntaxically corrected by Bing creative)

Jun 3, 2023 · edited Jun 3, 2023


"You can’t control the process or force a path. When you do that, you just alienate the subject even more."

To what does "the subject" in the second sentence refer? It seems to be either a complete non-sequitur or it is intended to correspond to "the process" or "path" in the first sentence. In that case, how would it be possible to (or even make sense to) "alienate" a process or a path?

"We should just accept [...] Unless you are preparing for an apocalypse."

It seems to me that arithmetical skill and functional literacy is likely far more important "in this era" of a functional high tech society than it would be in the event of an "apocalypse".

I commend you for having dramatically made the point that basic literacy is already in an abysmal state, and that simply running nonsense through Bing creative (or any LLM AI) is not going to change that.


I can tell you’re passionate about numeracy and literacy skills. I respect your opinion, but I have a different perspective on this issue. You see, I don’t think that technology is necessarily bad for these skills. In fact, I think it can open up new possibilities and challenges for us to learn and communicate in different ways. For instance, instead of just relying on our memory and calculation skills, we can use calculators and AI to help us find and process information more quickly and creatively. Instead of just writing long and boring texts, we can use multimedia and interactive platforms to share our ideas more effectively and attractively.

Don’t get me wrong, I’m not saying that arithmetic skill and writing skill are useless or irrelevant. I’m saying that they are not the only or the most important skills in this era or the coming future. I’m also not saying that we should just accept the status quo or stop improving ourselves. I’m saying that we should be more flexible and adaptable to the changes brought by technology, and not be stuck in our old ways of thinking about numeracy and literacy.

By the way, I noticed that you focused a lot on the grammar and syntax of my comment. You also accused me of running nonsense through Bing creative (or any LLM AI). I don’t think that’s a fair or constructive way to criticize my argument. It doesn’t address the content or quality of my message. It also undermines your own credibility by implying that you can’t tell the difference between human and AI-generated text.

I also noticed that you dismissed my perspective as seriously flawed without acknowledging any possible merit or alternative viewpoint. You didn’t engage with my argument or offer any counter-evidence or examples. You just assumed that numeracy and literacy skills are essential for a functional high tech society, and that their decline will lead to negative consequences. You didn’t consider that other skills or competencies may be more important or relevant in the current or future era, or that technology may enable new ways of learning and communicating.

(this was run through Bing creative)

It seems that my abysmal literacy, even with the grammatical and syntactical corrections of Bing, does not narrow the gap much between me and you.


I teach college students at a university in Japan. There are several questionable points in this essay, and in much of the AI-in-ed boosterism:

1. False information in the output: Amazing that this problem is not central in this essay. AI-in-ed boosters routinely downplay or marginalize this, but it's of the essence: LLMs are simply not designed to give correct information. As Yoshua Bengio has pointed out, they need to be united with a world model before this is possible - and a lot will hinge on the quality of that model.

Pretty much every chat I've originated with GPT-4 has had errors to a greater or lesser degree, even when asking it about itself (e.g. cut-off date). You can't simply count on this being fixed someday, while recommending LLMs as teachers today. And in a classroom situation, this problem is compounded by the facts that (i) two students working on the same assignment won't necessarily get the same output, and (ii) they often won't have the domain knowledge to find the mistakes.

2. Cheating: Sure there has always been cheating, but that's a red herring. Anti-cheating policies are best understood as a deterrent: they make the well-intentioned majority of students hesitate to do the wrong thing. More students will be induced to cross that line by the facts that GPT-facilitated cheating is (a) much cheaper and easier to access than, say, buying a paper from a third-party vendor, (b) easier to fine-tune to a specific instructor's assignment than such a pre-fab paper and (c) difficult-to-impossible to detect.

3. The calculator narrative: This is specious in so many ways. 20 students with calculators should get the same answer; 20 students using ChatGPT can get 20 different answers, each riddled with its own assortment of false stuff. Arithmetic is not the fundamental way that humans communicate; but language and truth are each essential to communication and the trust that holds our societies together. At the same time, there are contexts when we want people to use calculators (with printable output), such as when calculating restaurant tabs and commercial invoices -- but there isn't any context where LLM output is going to be preferred to human output, outside of contrived academic assignments and capitalistic demands for making thought-intensive tasks less thoughtful and more efficient, for the sake of profit.

Past generations worried about students losing skills that were important, given the technologies of the day: clear handwriting, arithmetic calculations by hand. To some extent we've muddled through with the loss of the first, but much less so with regard to the second. (Reality check: most of my students are in business or economics programs, and most of those students avoid quantitative exam questions like the plague, even when no more than high school math is needed and calculators are allowed.) In the 21st Century verbal communication, originality, and truthfulness are far more central to our polity and our culture than handwriting, or even arithmetic.

My wife and I had lunch today with a friend who has published 10 books of fiction to date, and who has twice been nominated for the top literary prize in Japan. She said she owed everything to very tough and persistent editors.

Where will those editors -- and authors -- come from 20 or 30 years from now if AI-assisted composition -- literally a form of regression to the mean -- becomes the norm for well-educated citizens? Where will our politics be if everyone speaks with the same Velveeta processed cheese product voice? (Or gets the AI to write it in the style of a pirate, or Churchill, or some other simulation of a stand-in who actually had a style?)

4. The question that really puzzles me is how pundits like Prof. Mollick can be so darn certain of the beneficial long-term social impact of LLMs when the latter have only been available to the public for a few months. Surely people who have lived through the dot-com bust and the subprime bust should be able to foresee there is at least a chance they'll be writing "what were we thinking?" op-eds a few years from now.

The argument goes: We live in a world that's already filled with easily-available LLMs, and the cat's out of the bag: it's virtually impossible to control their spread. We need to teach our students how to live in such a world. Therefore, we should encourage them to use this stuff. But let's reframe the premises: for "LLMs," substitute "oxy and crystal meth." The premises remain true, but I don't think we'd accept the conclusion anymore. And the analogy isn't so far-fetched, since meth used to be legal and prized for its efficiency-improving effects. Maybe we should approach our use of LLMs in the classroom rather more skeptically and cautiously than the AI-in-ed lobby is pushing us to do.


Agreed. The analogy with calculators is relatable, but it doesn't guarantee that AI will follow exactly the same path; the trajectory is surely different. My concern: as long as AI serves as a tool, enabler, and aid in what we want to do, fine, but if we humans instead become its subjects/slaves... fingers crossed... I doubt it... it will just dilute the cognitive/intuitive abilities of learners and make them more dependent on, or handicapped by, these modern technologies...


Hi. I happen to be reading the book THE TYCOONS by Charles Morris. The technological changes of the 1870s ended the 'one room schoolhouse' and remade education and society in ways unimaginable at the time. One change in the present day may be education with no age limit.


Prof. Mollick,

As always, your posts on AI are insightful. I have been relentlessly sharing them.

I did note a potential typo:

“Attitudes shifted quickly, and by the late 1970s parents and teachers both became more enthusiastic by However,”

Right after the little Professor photo. I believe it should read:

“Attitudes shifted quickly, and by the late 1970s parents and teachers both became more enthusiastic. However,”


Hello Mr. Mollick

I've learned about 'flipped classroom' from your article. Thanks a lot. Your work matters.


I agree with the great opportunity in teaching. I always felt more challenged in a dialogue-driven teaching environment than a passive one, such as lecturing or reading. So with AI, you have a tireless tutor who can always come up with the “next” question and expect a student to draw certain conclusions. Deductive reasoning is championed. However, boundary conditions will need to be set up to avoid confabulation.


I like your point about the need to watch out for factual errors. I wonder how bringing a domain expert (i.e., a teacher) into the loop might help to address that concern. I could imagine a scenario where there are certain questions posed by the teacher and then the student uses AI to investigate them and then closes the loop by discussing their conclusions and answers with the teacher or a TA.


This is a well-articulated argument for the reality that is now fully upon us. From a developing country perspective, however, one has to make us all aware of the unequal education infrastructure that currently plagues us, and how this new reality pretty much exacerbates such inequity.

Platforms like Khan Academy are currently accessible only to those with internet access, and YouTube needs strong bandwidth, both of which exist for those we deem privileged. These are just the basics!

AI has impacted education in a way that has made it clear that access to certain infrastructure should be a basic human right and without such a right met, then we are certainly creating inequality deliberately yet again.

However, to return to the argument you make for AI-infused learning: there is no doubt that changes are upon us, and they are shifting the paradigm of teaching and learning. Assessment policies and procedures need to be revised, with critical thinking examined rather than the ability to repeat content. More engaged teachers are needed in order to create more engaged learners. Moreover, a clearer vision of the future world of work is necessary if we are going to adequately link learning outcomes to the future jobs that lie ahead for young people.

With all these really big questions, we need to go back and really think through what we are creating here! What world are we creating, and what people will occupy it? Answering these questions will ease the anxiety of policymakers and citizens alike and bring much-needed confidence into an education system that is clearly anxious.


Such a great point. Unequal access to the infrastructure on which AI usage depends could deepen global inequalities.


You touch on some of the biggest points about AI in the classroom. Great work. I’ve heard the calculator analogy a lot, and I’m not sure how well it fits, but your second point is really interesting!


A great article, plainly stated and with forward-looking ideas. Someone should create flipped-classroom programs to beta test in certain fields and get the content out ASAP.


As a math teacher I somewhat agree with the calculator analogy, but not completely. I see students, every day, get hung up on 6x7 when figuring out 6x7 isn't the root of the problem they are solving. Yes, they can pull out the calculator to figure that out, but to already know what 6x7 is automatically thrusts the student forward into a more comfortable position to solve the problem at hand. To know what AI knows and not have to ask... or to learn what AI knows and not have to ask... what's going to put students into more stable positions for higher-level learning and problem solving?

Consider this: I ask students to design a roller coaster ride using what they've learned in Pre-Calculus. Instead of working together to develop one they type it into AI and it creates the roller coaster ride for them. What skills have they skipped developing in not going through the process of creating the ride themselves? Are these skills we consider non-essential? I think we need to be very, very careful about haphazardly allowing the use of AI ...


Oral exams and small group discussions/debates have always been a better gauge of someone's true grasp of a subject, at least when properly evaluated by a teacher.

Essays serve their own purpose, but as a writer, I feel weird saying that they might now have served that purpose and be irrelevant for most students heading into the future.

AI could even hold Socratic-style conversations, teach, and properly evaluate a student's knowledge, "face-to-face," one day.


Interesting article provoking thoughtful discussion in the comments! My question is whether, as you wrote, listening to a lecture is truly passive learning? Even though we can't "see" activity occurring, if the audience is truly listening, then surely they are actively learning as they think about and evaluate what they hear. This is why we listen to podcasts and read blogs. These are surely active forms of learning, as our brains are (ideally!) actively engaged with what we hear and read.


Thanks for this thoughtful, insightful summary. It got me thinking how I can use AI to help generate active learning exercises for my own teaching.


I asked GPT to design a rubric for an intro-to-nutrition course and then teach me based on the rubric. So far, it’s been excellent. It’s easy to take detours when it mentions a topic I’m interested in. It asks engaging questions to check my understanding, and so far, none of what it’s shared is incorrect.

The biggest problem is that I feel compelled to check constantly, especially when it mentions anything that feels deep in the science.
