
I teach economics at OU (University of Oklahoma). I mostly teach upper-level classes (advanced BA, MA, PhD) that involve writing lots of computer code. Starting this calendar year, I've adjusted my classes to give students unfettered access to any AI resource they want when completing coding problem sets and research projects. For exams, I have them meet with me 1-on-1 for a conversation / interview to check their knowledge of key concepts without AI assistance.

So far this has worked well, but I have relatively small class sizes (no more than 20 students). This approach obviously doesn't scale well, so I'm not sure what I would do for larger classes.

I view part of my role as instructor as teaching students how best to use AI, and that starts with getting them to become more familiar with it, as you've written many times. My experience is that the more motivated students don't want to use the AI because they view it as cheating themselves out of learning. So it's actually useful for them to be asked to use the AI by the instructor.


I teach mostly juniors and seniors at a competitive independent school outside of NY. I can confirm that many of the top students are very anti-AI - they view it through the lens of "de-skilling" and, quite frankly, based on their limited use, are not impressed with many of the results. I have created a student committee to share my thoughts and experiences with AI and to gather ideas and reactions from students across multiple grades. Like many schools, we have very little guidance about AI use. We are still in a mode, as I suspect many schools are, of viewing AI solely through the lens of cheating - these policies are typically made by Department Chairs and Administrators who have almost no experience with AI tools beyond initial impressions of ChatGPT from 2 years ago. Keeping up with new developments is virtually impossible, both because of teaching demands and because of the haphazard rollout and dizzying number of new products. Ethan's blog has been extremely helpful in this regard, but the explosion of different models, abilities, and use cases can feel overwhelming. Ethan: any idea what kind of breakthroughs GPT-5 might have in store? I'm still of the opinion that the models need to get significantly more accurate before a real tipping point is reached.


I have a very similar job to yours, it seems, and I have had the same experience. The strongest students want nothing to do with AI -- partially because they are afraid of getting caught but mostly because their teachers give assignments that AI won't do a very good job on. They understand that relying on AI won't help them become better writers, nor will it produce work that will get them the grades they desire. As far as I can tell, it's mostly our weaker students, those who might not necessarily be able to understand why the AI-written paper is not great, who rely on AI.


Listen to your top students, not Ethan. The kids are alright and they see through the facade that is the promise of AI.


I disagree that there's any imperative to "integrate" AI into education, at least if we think about education at the individual course and learning experience level. However, we do need to think about education as happening in a world where AI exists. Maybe that sounds like the same thing, but the distinctions are important, IMO. The most important issue is to disrupt the transactional view students have of school, a view fueled by the emphasis on grades and extrinsic motivation. If the point of the exercise is simply to turn something in for credit, students will do what they've always done: find a path to producing a product that meets that need. Learning may or may not happen, but it is certainly not central to how students go about their work. This has always been true. I was verbatim copying the answers to the odd-numbered questions for my math homework from the back of the textbook in the 1980s. I was cramming for tests and forgetting everything within hours during college in the 1990s. Homework as a vehicle for learning - as opposed to being a mechanism for giving credit - has largely been discredited for decades.

The transactional view of education has been steadily rising this millennium. I wrote a blog post in 2013 about attitudes toward my required first-year writing course that I'd been observing in students for years. They would have gladly taken credit while doing nothing - and therefore learning nothing - at the time. Now we have technology that can do some of that work, making my hypothetical real. If we want students to learn, we have to give them something worth doing, we have to assess the process as much as the product, and we have to give them opportunities to reflect on their own learning. https://www.insidehighered.com/blogs/just-visiting/everyone


If education's mission is to prepare students to be successful out in the world, the imperative to integrate AI into education is to prepare them for the world in which AI exists. It's a tool, and often an advantage. Cheating aside, moving forward with education while pretending AI doesn't exist - or that it's the root of all evil - is doing students a disservice. It's an opportunity to learn how to use it wisely and to their advantage. I agree with Rob, too, that "the worst path aims for greater technological surveillance and monitoring of academic work", which a lot of educators seem to have gravitated towards.


Several thoughts:

1. I do not personally believe that AI is "the root of all evil" and to ascribe that position to people who are critical of its adoption and use is a classic fallacy. My forthcoming book is entirely about acknowledging the existence of AI and using it as a tool as opposed to outsourcing writing-related activities that are better left to humans.

2. When we say "successful out in the world," what do we mean by that? Are we talking about economic/employment prospects? Social/emotional development? Happiness? Do we imagine that people who do not use AI will not be able to be economically secure and live contented lives in a future where this technology exists? If so, this sounds like a preemptive embrace of a vision most of us would have identified as dystopian not too long ago.

3. AI is often an advantage at what? Speed and efficiency are the most common benefits I've seen, at least when it comes to writing, which is where I concentrate my efforts. Is the best way to learn to use AI "wisely" necessarily to integrate it throughout the educational experience? The vast majority of people who are using AI wisely at present didn't have access to it prior to Nov. 2022, but they're managing to meet this bar. What is it in those experiences that allows them to do so? Shouldn't we be mindful of the possibility that the best way to be able to use the tool may involve learning how to do stuff without the tool?

4. I stand with you and Rob that surveillance, technological or otherwise, is not the way to go, a stance I held long before the arrival of ChatGPT, and one I extend to ubiquitous tools like the LMS and even to pedagogical practices like grading and standardized tests.

If AI is going to be part of education, we can't start with AI, we have to start with education, which to me means learning, which I also believe is distinct from "schooling."


I consider the perspective of economic and employment prospects the most applicable in the context of education. Other perspectives are less obvious and potentially more troubling, like the emotional/social development you pointed out. "Do we imagine that people who do not use AI will not be able to be economically secure and live contented lives in a future where this technology exists? If so, this sounds like a preemptive embrace of a vision most of us would have identified as dystopian not too long ago." - no, of course not. There will be plenty of people who do not use AI and live happy lives, just as there are plenty of people who don't use the Internet and live happy lives. But for the majority of people, higher fluency in using the Internet early on put them at an advantage (economically, employment-wise, in exposure to opportunities). I expect the same to happen with AI. Speed and efficiency equal ROI in the context of employment. At least one study has confirmed increased creativity too.

I agree with your last point. My response is aimed less at your comment specifically and more at the fairly prominent academic position that AI tools are, or should be, uniformly banned - a position that is at best short-sighted and unrealistic, and at worst detrimental.


I think it's important to recognize that the idea that economic and employment opportunities should take primacy in the context of education is a relatively new framework for how we view the experience and purpose of school. I think we also need to recognize that this is a framework that can and should be contested, rather than treating it as the default purpose of school and schooling. The public schoolhouse is rooted in the very notions of shared democracy and democratic principles. School as a place to form character, to bond with others, to experience different perspectives, to develop as a "whole person" was a far more prevalent view, even into my own lifetime.

What has the transactional view of school/education (i.e., school is what you do to get a job) done for both the quality of education and the overall well-being of the citizenry? It's certainly eroded the shared principles of democracy aspect of schools. It's harmed the degree of engagement students have with school. It's made the work of teaching increasingly difficult and even demoralizing.

I see schooling as a process where students should develop agency, self-efficacy, knowledge, and the community and social ties which will help them determine what kind of life and work will bring them security and happiness. This is achieved through a vast variety of experiences which should be oriented around more than whatever future job or career you are going to have.

I think school can and must be about more than getting a job. The jobs-first framework is what has made many students actively disengage from school, and it is what now has them turning to ChatGPT to do their schoolwork, because to them it is truly meaningless.

Our lives have to be about more than our status as economic units, don't they? Why shouldn't school recognize that?


Thank you for debunking the reductive homo economicus model of post-secondary education.


Why does the pro AI movement sound so much like a cult?


I had your "ChatGPT Can't Kill Anything Worth Preserving" in mind as I read this piece, John. For me, you and Mollick represent the interesting poles of critical responses to the impacts of generative AI on education. The best path forward aims between them. The worst path aims for greater technological surveillance and monitoring of academic work, or reversion to blue books or some computer-based version of in-class writing, to ensure students comply with overly bureaucratized methods of instruction.


Former professor here. "Welcome to my Introductory Biology class. Feel free to use the internet and AI to answer all questions and write all of your homework. However, realize that by doing so you risk reducing the amount you will actually learn. Also, please note that all exams will be conducted in person, on paper, without access to any device or to the internet. Your exam performance will determine 90% of your grade. Now let's get started."


Please note that... 😂


We need to discuss the reason for education. WHY do we need to learn? Then the tools we use, or don't, are incidental. Is it for survival? For happiness? For good citizenry? Why can't we focus on the WHY?


Excellent article. I teach Digital Transformation at Catolica Lisbon School of Business and Economics. During the quarter, students have weekly assignments that make full use of ChatGPT. At the end of the course I give an in-class test, solving a real-world problem, in a closed environment (no access to external tools or the internet). It has worked very well.


You said, "In general, I am in favor of delegating tasks to AI (the subject of my new class on MasterClass), but education is different - the effort is the point." I strongly disagree. At best, this confuses the means with the end -- effort expended is what leads to learning; effort is not "the point" of education! At worst, this confuses a potential means with a potential end -- it is possible to learn without any effort at all! Consider the moments in your life when simply being presented with a new concept opened up your mind and learning became effortless!


👏👏👏🙌🙌


Great post. I’d add that this approach works equally well if you replace “teacher” with “parent” and “student” with “child”. So much learning happens at home that parents really need to get their heads around when and how to introduce AI usage with kids. Of course, parents also have to consider how their decisions on the topic conflict or align with what their child’s school is doing.


One way to avoid the "homework apocalypse" is to make homework about learning rather than output. If a student has to churn through a bunch of math problems that they will then turn in for a grade out of 10, then there's a temptation to just let AI do it. If the student has to read and understand a text so that they can do something with that information in class -- whether that means taking a quiz, writing an essay, or just not looking ignorant in class discussion -- there's no way for AI to put that information into their brain.

I've always disliked the busywork/do-a-worksheet type of homework; what students do outside of class should serve to prepare them for the main event, which is what happens in class.


Someone, somewhere must be thinking: "why do we need people to learn anything at all, if all the work will be done by AI?"


Disagree strongly with the imperative to integrate AI into education proposed by this article, and agree with John Warner's comments below.

It's not surprising to see a "what can we do with this commercial product?"-first attitude (versus a "what the hell is this thing and what's it doing to my society?") coming out of a business school, perhaps.

The uncritical (and pseudo-critical) acceptance of these highly flawed genAI products in academia has been disturbing, considering that they 1) are built on knowledge bases chosen by very small groups of people for commercial gain--and stolen; 2) are already corrupting our arduously compiled inherited corpus of human knowledge with their feedback loops, etc.; 3) violate basic tenets of scholarship by their opacity; 4) present major ethical problems regarding the human labor and vast natural resources used in their construction and operation; 5) are considered by some AI scholars to present existential risks as they are further developed along the current lines.

Efficacy over truth? That should not be considered scholarly.


I think a more nuanced view might be that AI needs to be discussed and explained to students as opposed to simply integrated into schools. I completely agree that the rush to bring untested commercial products into classrooms is not only premature but potentially ruinous. But the fact of the matter is that students have access to these tools, and many use them indiscriminately in ways that concern teachers and have exposed the hollowness of many types of assignments. Ethan has created and demonstrated some interesting and useful prompts, perhaps for students at the undergraduate and graduate school level, but we are still very early in getting any good feedback on ways in which generative AI is useful in a 6-12 context (let alone K-5). Looking beyond simply using LLMs to produce responses to HW prompts or answer HW questions, I have found there are ways to engage with AI that do promote learning but require a lot of prior knowledge and skill. And of course many teachers are using these tools effectively in their own work. It's going to be a lengthy process.


I'm writing to thank you for this insightful article on AI in education and to seek advice on implementing these ideas more broadly in my institution.

I am a lecturer in a culinary arts degree program in New Zealand, and I've been following your work for nearly two years. Your recent article was particularly timely and helped clarify my thoughts on integrating AI into our curriculum.

Inspired by your approach, I've actively encouraged my students to use AI tools to enhance their learning and development. We've been using several of your suggested "Instructors as innovators" prompts, working through prompt engineering models, and recently using Claude's prompt generator.

However, I've noticed that within my organisation I'm an exception among my academic colleagues. The majority of our academic staff have a limited understanding of ChatGPT and consequently, are not equipped to guide their students in using AI effectively.

I was wondering if you could share any ideas or examples of how other institutions have successfully influenced change in AI adoption among faculty. I'm particularly interested in strategies for:

Increasing awareness and understanding of AI tools among academic staff

Developing training programs for faculty on AI integration in education

Creating policies that encourage responsible AI use in academic settings

Any insights or resources anybody could provide would be greatly appreciated.


I think this is typical of many institutions and often the case with the introduction of new technology - early adopters tend to push the envelope and, if the tools are useful, demonstrate to others what all the fuss is about, and then schools slowly bring them on board. AI is no exception, but it has brought greater visibility and higher stakes because of its generative capabilities. But the larger point you raise - that the majority of staff have limited understanding of AI - is an institutional failure. After administering a survey to faculty at my school about the issues they most wanted to discuss, with AI at the top of the list, we spent zero time on the topic in our opening meetings. School leaders and administrators need to wrap their heads around this and use these tools themselves in order to get a sense of what's going on. Marc Watkins' Substack, Rhetorica (https://marcwatkins.substack.com/), is another excellent resource.


Absolutely agree with recommending Marc Watkins' Substack! He has a very thought-provoking essay series on AI, teaching, and learning.


Always good to see you return to the topic of the Homework Apocalypse, a phrase that gently mocks the moral panic over generative AI and homework while also treating it as a serious topic. Your closing point about using the disruptions of AI to shape change seems right and I find it cuts against your opening observation that "remarkably little has changed as a result" of the apocalypse.

It may be wishful thinking and confirmation bias, but I see more discussions of the cracks in the edifice of the thoughtless homework and high-stakes testing approach to education that dominates schooling. I see the illusions you identify, but I also see more attention paid to the underlying problem: students don't see any value in the academic work they are assigned and teachers tend to think it is a problem with the students.

Seems to me that is remarkable change, and perhaps the beginnings of a genuine reform movement.


Excellent piece, optimistic and proactive.

Thank you for sharing the scatter chart plotting mental effort vs negative attitude. It exposes an opportunity:

We want people to inhabit the bottom-left corner -- feeling good after heavy mental effort.

Since each dot in the chart is a task, not an individual, we might study the tasks that draw thinkers into the golden corner to see how we can replicate their effective features throughout our education system. Like coaches training athletes, we might help students learn to respond to effort with a certain pride and pleasure.


Great piece. TL;DR:

Problem: the homework apocalypse already happened with the spread of the internet. There have been no adjustments to the spread of AI, because teachers think they can detect AI and students think they are still gaining knowledge.


I’m a retired educator and I am extremely interested in AI’s potential impact on education. In graduate school I read Neil Postman’s “Teaching As A Subversive Activity”, and AI presents amazing opportunities for teaching critical thinking. It’s a most exciting time to be an educator.


The conversation around AI and its role in education often centers on the concern that students are using AI to bypass the learning process, particularly when completing homework assignments. While this is a valid concern, it reveals a deeper issue: the kinds of tasks we are assigning students and how they align with the realities of today’s learning environment. As an English teacher, I’ve observed how traditional approaches to composition, often focused on rigid structures and formulaic processes, make it easy for AI to replicate student work. But the problem here isn’t AI—it’s the assignments themselves and how we define cognitive engagement in education.

In rethinking how AI fits into education, we need to move beyond the idea of AI as merely a tool or an extension of human capabilities. Instead, I propose that we think of AI as part of a distributed cognitive system, where both human and AI cognitive agents work together to produce insights and solve problems that neither could achieve alone. This shift in thinking demands a new level of complexity in the tasks we assign students, one that I call the "zone of adequate complexity."

This "zone of adequate complexity" builds on Vygotsky’s idea of the "zone of proximal development" (ZPD), where learning happens when students are challenged just beyond their current abilities, with appropriate guidance. In Vygotsky’s model, the teacher or peer provides that scaffolding. However, with AI in the picture, the required "zone" for effective learning has evolved. The complexity of the cognitive work must now account for both the human learner and the AI, whose abilities to analyze, generate, and process information change the landscape of what constitutes meaningful cognitive challenge.

In a distributed system of cognition, where both human and AI agents contribute to the process of knowledge creation, the "zone" of learning has shifted. It’s no longer about simply pushing students to perform tasks they couldn’t achieve on their own but about designing tasks that require collaboration between human and AI agents. The level of complexity must be high enough that neither the student nor the AI can complete the task independently. This creates a new cognitive challenge where both human creativity and AI’s processing capabilities are necessary components for success.

For example, rather than assigning essays that AI can easily write, we should design tasks that require students to engage with AI in creative ways—perhaps using AI to generate initial ideas or alternative perspectives, then pushing students to critically evaluate, synthesize, and build upon those ideas. The AI becomes a partner in the learning process, augmenting the student’s cognitive abilities while also requiring the student to engage at a higher level of critical thinking.

This "zone of adequate complexity" introduces a new dynamic in education. Instead of fearing that AI will replace the cognitive work of students, we can leverage AI to push students into more complex, meaningful cognitive engagement. This goes beyond using AI as a shortcut or a tool for automating repetitive tasks. When students work in partnership with AI, they are required to think metacognitively—not only reflecting on their own learning process but also considering how AI processes information and how to guide it effectively. This dynamic interaction between human and AI creates a feedback loop that elevates the cognitive process for both.

The challenge, then, is not to prevent students from using AI, but to design tasks that make the most of AI’s capabilities while pushing students to engage deeply with the material. The "zone of adequate complexity" becomes a framework for this, providing a way to think about how we design assignments that require the combined cognitive power of both humans and AI. In this model, learning is distributed across the system, and meaningful learning occurs when both agents contribute to the process in ways that challenge and expand their individual capabilities.
