45 Comments

Best article I’ve read on AI in education so far.

It feels like we need to get back to basics and decide what education is for.

Is it to learn how to write an essay? Maybe.

Or is it to give our students the skills to be successful in the world?

AI is going to be a massive part of the world of work. So it can’t be ignored.

---

Perhaps U.S. classrooms will be forced to adopt Finland-style classroom management. As I understand it (we have had three exchange students from Finland), there is much less homework and more intensive in-class work there. It will be painful for the teachers, but it’s hard to see anything else working.

---

Very good points. The trouble is I fear most teachers/schools/colleges will not be "catching up" over the summer, and we should expect horror stories in the next academic year.

---

I stopped giving homework in 2015 because all they would do was divide up the work between two people and text the others pictures of the answers. When I assigned out-of-class essay work, I spent so much time trying to find evidence of plagiarism, and the proof to make an accusation stick, that I made more work for myself.

I moved to creating activities and lessons where I can interact with my students and assess them while they complete in-class activities. This gives me the opportunity to work with them individually and in pairs or small groups. Verbal “on the spot” feedback helps guide them in the right direction.

This past January, I had students handwrite essays with no phones or iPads in sight - they pitched a fit and begged to turn their essays in late for reduced credit 🤔 just so they could use the Internet. They were that dedicated to (and anxious about) their grade.

I reassured them that I’d much rather see their authentic work than what they ripped from ChatGPT.

I am excited for the day when states abandon the necessity of high-stakes testing to quantify “mastery.”

Education changes so slowly, and admins have little insight into the freight train that is coming down the track.

In the meantime, I will continue to curate lessons that help my students develop critical thinking skills while simultaneously integrating AI concepts into my curriculum, so students develop AI literacy.

---

Very good article. My own personal view as an educator is that AI will prove to be an accelerator of learning once we adjust to it. Its strengths only become apparent when we think analytically, creatively, or through abstraction to engineer the prompts that make AI dance.

Its core strength is low-level thinking, such as information gathering, distillation, and synthesis of large swathes of information (taking much of the grunt work out of research). This allows us more space for the high-level thinking: how can we use this information to identify new insights, ideas, and connections (advanced research)?

This all aligns nicely with the WEF’s Top 10 Skills for the Future. It’s no longer about knowing stuff. It’s about knowing what you don’t know but need to find out in any given context.

This will accelerate the shift away from ‘learning the dots’ to an even more explicit version of ‘joining the dots’.

AI will exist in the workplace, so it makes sense to integrate it into education. I have no interest in digging around in student work sniffing for plagiarism. It’s a negative activity, and it points to flawed curriculum design.

---

I homeschooled my kiddo. He’s an aerospace engineer now. If the schools were really “educating”, students would be engaged and interested in the material, and homework wouldn’t be required. In a homeschool the student explores to their heart’s content.

---

Hi Professor Mollick, this piece is incredible. I'm currently a Stanford sophomore who has been working on this very idea for the last couple of months in the AI & Education wing. I am developing a solution that helps scale oral conversation as a means to gauge true student understanding. Was curious if you would be open to having a conversation about this!

---

This hits the nail on the head, Ethan.

Have you heard about Sal Khan's idea of "an AI tutor in every child's hands"? I've written about this approach/idea, and I think it does more to level the playing field than... well, literally anything.

---

Thanks for another interesting article.

I had a question about the following comment that you made: "Instructors are going to need to decide how to adjust their expectation for essays, not just to preserve the value of essay assignments, but also to embrace a new technology that helps students write better, get more detailed feedback, and overcome barriers."

I fully accept that we educators need to decide how to adjust expectations for essays. But I am so far unable to see how it's possible to embrace technology built on massive-scale intellectual property theft that is currently leading to huge class action lawsuits. This may be the largest intellectual property theft in world history. How do you embrace that?

---

Ethan,

Thanks again for sharing this excellent work.

I was struck by a phrase in your article.

Speaking of students, "They will want to use AI as a learning companion, a co-author, or a teammate."

Seeing this made me realize I will have to work through our relationship with AI in class this fall. I just wanted to say thanks and share the way I am currently planning to handle it, in case it's useful to you or any other teachers who have the freedom to explore this profound technology change in class. Sorry it's so long, but hopefully it's a useful addition to the conversation.

I'll start by making a list of possible uses of AI in the class with student help; we'll try to end with generative AI. Then our class will work through the ethical problems we find, so we can develop a collective decision about how the course will look at or work with this technology. FYI, my classes all have a systems engineering and design orientation, so we can also work on solutions for the ethical problems we find. The overall problem-solving process is technology neutral. In other words, we will explore in order to arrive at a position rather than beginning with a position on what to do.

So, for my fall classes, I will start by giving a brief tour of the history of the technology and the process by which generative AI is developed. Then I'll serve as facilitator for a roundtable discussion. I will ask my students questions like:

"What is the purpose of the technology from your perspective? Is the purpose legitimate from your perspective?"

"What do you think the benefits of the technology will be? What do you think the costs will be?"

We will then go over the typical places where data is scraped for generative AI, so the students can see the likelihood of their own work already having been scraped into a black box. And we'll go on with our questions, thinking especially of the stakeholders involved whose thought patterns are the basis of generative AI, using questions like:

"What would a reasonable person who has been given full disclosure about the potential effects on their livelihoods and careers be willing to share towards the building of such an AI? Would a person want to share their private diaries? Photographs of their children? Their purchase histories? Their phone numbers? Their research papers? Their music? Their art portfolio?" We'll add to this list as the students generate more possibilities.

Another question we will look at is, "How much do you think you should be paid for the use of any data that you contribute to an AI? Do you think there should be any payment at all? Do you think certain bits of data should be weighted more heavily than others, like a work of art or a novel relative to a Reddit post? What about an author's entire life's work? Should you be paid for the prompts that you enter into a chat and that are used for its further development?"

I'll be sure to let the students add questions to this list of benefits and concerns.

We will think about this as a policy problem, where what began as a scientific enterprise, artificial intelligence research, suddenly changed its operational goal from knowledge to profit. When the operational goal shifts from a nonprofit to a for-profit structure, many legal challenges arise that have ethical overtones and may limit the degree to which one might want to participate in the use and improvement of that technology. I think it's important that we all give informed consent to using the technology at its current, ethically questionable level. This process should ensure we meet that goal and really understand the systemic problems, so that we can look for solutions as we decide what aspects of the technology we should or should not allow in the classroom or for homework.

Going back to the phrase from your article about students: "They will want to use AI as a learning companion, a co-author, or a teammate."

Again I really appreciate your sharing this. I plan to incorporate it into this section of the class by asking the students, "What do you want your relationship with this technology to be?"

I wouldn't have thought of bringing that into the discussion, and I'm wondering how we'll end up phrasing what they want. I would ideally like us to use non-anthropomorphic language to describe whatever the relationship will be, since the systems engineering/critical thinking aspects of the class require precision about requirements in design.

For example, going back to your phrasing, if they say they would like a companion, we'd need to define the requirements of a companion and ask whether an algorithmic prediction engine can fulfill them. I think we'll find that the students require sentience as an attribute here, so at that point we could go into a conversation about what would have to happen in the technology for it to become sentient and able to be a friend. On the other hand, AI can respond to prompts, so how do we precisely define this new relationship?

Currently we're not dealing with any true artificial intelligence (the tech hasn't met that goal). From my perspective, we are currently looking at a data-harvesting device that is being used to provide the memetic underpinnings of the new meta-human intelligence they're trying to build and own. That has another whole set of ethical problems related to it. Ownership of a meta-human sentient being? Is that OK, or is that a kind of slavery?

Similarly, current AI can't be a co-author because it doesn't understand anything it outputs or inputs. A co-author has to be able to offer a new perspective, and that's different from predicting what words should come next based on the non-consensually-scraped thought patterns of millions of other people. On the other hand, AI can produce outputs that mimic the thought patterns of the millions of people whose ideas are scraped into the black box, and that output has potential value. How do we define that new type of value? And do we want that new relationship? What requirements can we put in place that modify the nature of the system to remove the ethical problem?

Similarly with the teammate phrase. If we do decide to replace a teammate with an AI, we will ask ourselves whether it's OK to replace an actual teammate (one of us) with an algorithmic prediction engine that does its predicting by non-consensually scraping the thought patterns of millions of people into a black box. I'm hoping we will agree not to sacrifice any of the members of our community, which brings up the question: whose job should be taken away? On the other hand, the set of AI prediction-outputs has teammate-like qualities. How do we precisely define them? Do we want them as currently offered?

Given that we're also doing design and policy studies in the classes, we can also ask questions like: if this doesn't seem ethically OK today, what might need to be done to shift the system so that it becomes ethically acceptable to us?

I'm super curious about how the students will respond and what policy we will end up with.

Thanks again.

---

Fantastic overview of the issues at play here, Ethan. I am inspired by the AI-required assignment you showed and how it pushes students to use AI tools. My takeaway question is what this looks like in the secondary school space, where we are going into September with a world of tension between the need to assess curriculum expectations and the capabilities of AI.

---

Who’s going to be the first one to ask AI how the Teachers’ Unions are going to react to these changes?

---

Why don't we just advise students that if they want to learn how to think and become experts in a subject, they will have to use AI as a coach, not as a crutch? When I was a teacher, I told my students that if they used CliffsNotes they weren't cheating me, they were cheating themselves.

Change grading so that it shows the work, or make everything pass/fail. Sooner or later someone's going to figure out that training your brain is a choice you make, not something your teacher imposes on you by being your AI babysitter. That person will become a leader/innovator/founder and set a new bar.

---

I took a quick look at the paper you cited that says GPT detectors are biased against non-native English speakers, but the paper tests the detectors with writing samples from a specific set of college-age non-native English speakers from a single non-English-speaking country. I wonder if the conclusion might be too broad.
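
To make the generalization worry concrete, here is a minimal sketch of the kind of subgroup check that would need replicating on broader samples before the claim holds in general. Everything in it is hypothetical: detect_ai() is a toy stand-in for a real detector, and the writing samples are invented, not the paper's data.

    # Hypothetical sketch: measure a detector's false-positive rate per
    # writer subgroup. All samples are human-written, so any flag is a
    # false positive.
    from collections import defaultdict

    def detect_ai(text):
        # Toy heuristic standing in for a real detector: flag prose with
        # short average word length, mimicking detectors that penalize
        # simpler phrasing.
        words = text.split()
        return sum(len(w) for w in words) / len(words) < 5.0

    samples = [
        ("native", "The committee's deliberations meandered through several tangents before converging."),
        ("native", "Frankly, the evidence cuts both ways, and the author never quite resolves the tension."),
        ("non_native", "The committee talked about many things and then made the decision."),
        ("non_native", "The evidence shows two sides and the author does not solve the problem."),
    ]

    counts = defaultdict(lambda: [0, 0])  # group -> [flagged, total]
    for group, text in samples:
        counts[group][0] += detect_ai(text)  # bool counts as 0 or 1
        counts[group][1] += 1

    for group, (flagged, total) in counts.items():
        print(f"{group}: false-positive rate {flagged}/{total}")

On this invented data the toy flags one of the two non-native samples and neither native one; the open question is whether real detectors show the same skew across many countries and age groups rather than the single population the paper sampled.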

---

A wealth of information from Professor Mollick. Thank you!!!!!

---

When it comes to working on an essay, there's a simple safeguard you can use: just ask your students to turn on revision tracking in their editor (Track Changes in Microsoft Word or Apple Pages, Suggesting mode and version history in Google Docs). This lets you see all the changes they make to the document, including when they made them (so you can understand how long the entire process took) and what they deleted or added, and it gives you a sense of how their writing developed. It's an easy way to keep track of their progress and understand their approach to writing.
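
If you ever want to review those revisions at scale rather than clicking through each file, it helps that .docx documents store tracked changes as XML: each revision is a w:ins or w:del element carrying author and date attributes. Here is a minimal sketch using only Python's standard library; it assumes the student actually wrote with Track Changes turned on and submitted a Word .docx (Google Docs keeps its version history in the app, not in exported files).

    # Minimal sketch: list tracked insertions and deletions in a .docx,
    # with author and timestamp, by reading the embedded XML directly.
    import sys
    import zipfile
    import xml.etree.ElementTree as ET

    W = "{http://schemas.openxmlformats.org/wordprocessingml/2006/main}"

    def tracked_changes(path):
        with zipfile.ZipFile(path) as docx:
            root = ET.fromstring(docx.read("word/document.xml"))
        for kind in ("ins", "del"):
            for el in root.iter(W + kind):
                # Inserted text lives in w:t runs, deleted text in w:delText.
                text = "".join(t.text or "" for t in el.iter()
                               if t.tag in (W + "t", W + "delText"))
                yield kind, el.get(W + "author"), el.get(W + "date"), text

    if __name__ == "__main__":
        for kind, author, date, text in tracked_changes(sys.argv[1]):
            print(f"{date} {author} {kind}: {text[:60]!r}")

Word's Review pane shows the same information, of course; a script like this just makes it skimmable across a whole class's submissions.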

---