14 Comments

This seems like a smart response to ChatGPT. Rather than dismiss it, as I have seen other professors do, you've figured out how to (1) use it to complement your teaching and (2) apprise students of its limitations. It seems like a lot of other knowledge workers, inside and outside academia, would do well to have as rational a response to ChatGPT, and generative AI more generally, as yours.


Your increased-expectations comment reminded me of the shift with mobile phones. When I first asked for a 3-minute video submission in an MBA course, I had to bring in AV to offer training, and the students had to check out video cameras and be given at least a week for editing -- and they thought it was a waste of time. The last time I gave such an assignment, it was done during a 1.5-hour class session that included writing, shooting, editing, and screening. No complaining. Great reviews. Those of us looking forward to these transitions may want to dredge up more examples of positive adjustments.

Jan 17, 2023 · Liked by Ethan Mollick

A smart, proactive adaptation! This is a more sustainable and helpful approach than blanket tech bans. I hope to see more curricula following suit.

Jan 18, 2023 · Liked by Ethan Mollick

I want to join others who have commented in appreciation of your approach, Professor Mollick. This seems to me the right spirit for confronting so sudden a sea change, at least in higher education (I can imagine a set of distinct problems in elementary/secondary ed.), useful in framing our responses to what will surely be a further cascade of AI developments.

An earlier blog post (including ChatGPT's superb clock tale) piqued my interest and I've been experimenting with Chat (we're now on a first-name basis) over a variety of genres. (It's all fun for me, I'm retired from teaching.) One of the most interesting things I've discovered concerns Chat's many errors and inconsistencies. When I noticed this bewildering combination of skill and stupidity, I decided to ask Chat to tell me what was causing them, in terms of both specific ones and general principles. (An example would be in the clock story, which, fine as it was, failed to respond to several elements of the detailed prompt. Major inconsistencies also emerged in answers to similar or "identical" prompts in different languages.) I think this self-disclosure feature can also be helpful for students trying to get their arms around AI deployment.


Agree with so much already said here, but also offer this tangent. On the first day of class this semester (with preservice education students) I invited everyone to help rewrite the syllabus using ChatGPT. I expected excitement and perhaps disbelief (e.g., that their professor would be open to such a thing), but what I got looked like confusion and a bit of resistance. First, nearly none of the students had heard of ChatGPT, and zero had played with it, so they had no idea what I was asking them to do. Second, they appeared to not want to play along with the invitation to co-write the syllabus.

So, as a warm-up, I asked them to ask Chat what it thought was worth learning in an education program, particularly in this class (Learning and Digital Media). Some students appeared moderately intrigued with Chat's replies, but the class as a whole struck me as underwhelmed. In subsequent weeks some students have talked about exploring Chat and other AI tools, but as of yet there does not seem to be much excitement, and certainly not much wonderment. For example, when I ask how their friends, classmates, and other teachers across the university are responding to Chat, it appears there isn't much of a conversation happening.

Meanwhile the Provost sent an "urgent" announcement to faculty and staff about University policies around academic integrity, not to mention the generalized media frenzy from many different sectors (but to my ears, mostly from education). In any case, as thrilling and challenging as this moment is for me (so exciting to be here, finally! almost makes not having flying cars ok!), I'm wondering if we're missing something. The old folks are bewildered and energized, but the young folks (at least where I am) appear nonplussed. It's an admittedly small sample size (very small), but coming back to campus after the holidays I expected something vastly different. I'm confused and wonder if anyone else is finding something similar?


Hats off to you, Ethan. I doubt unis here in Australia are as accepting of the inevitable.

Feb 21, 2023 · edited Feb 21, 2023

Everyone is a mentor, not a craftsman. Everyone is a conductor, not a musician. Everyone is an editor, not an author. Everyone is a manager, not a worker. In a way, it is very apt for an MBA class.


Bravo! I wholeheartedly agree with you. Technology has always been an enabler, and it is our duty as educators to stress this. As you have stated, I believe AI tools such as ChatGPT will help educators and learners. You have laid out some templates(?) that all of us can use. Thank you. I see we can delve deeper into subjects than ever before. We can now genuinely teach critical thinking. So, embrace this tech!


What is the "must have" benefit of these AI tools that justifies all the adaptation they require?

One way to address the question might be to consider how much we'd be willing to pay to have access to ChatGPT.


Delegating work to a machine always involves a swap of capacities. If you want the machine to do something, you need certain skills and knowledge to complement what the machine is good at. For calculators, you need approximation skills (and perhaps more, depending on the task). For spell checkers, you need a grasp of the language. Lots of examples. Your explorations with your students illustrate these complementarities well. For me, I think there are at least three things one needs to complement what ChatGPT does: a rough idea of how it was built, how it works, and its limitations; good prompting skills; and a capacity to judge the quality and validity of its output.


I've just forwarded this to two of my education clients, both of whom work to get first-gen and marginalized students onto (and through) the college path. These students are already primed to use tech to accommodate their challenges and augment their abilities -- the new AI resources will be welcomed to help level the field of college readiness and competency. You are one of the more positive voices about this new technology... I hope your voice begins to outweigh the detractors.


I've started to teach ChatGPT in my business courses. Here are my initial thoughts https://rb.gy/dbpj6d


Your AI Policy hits the nail on the head. Lazy prompts and naive trust are real issues.

When I had it generate writing about a historic event that I'm familiar with, there were contradictions between the outputs of several different prompts. Facts were incorrect, and Chat bloviated almost like a boring guest at a cocktail party.

On more technical topics it seems more accurate.

Another subtle issue is the human hand behind the code. We are asked to accept the idea that a computer can mine the cesspool of the internet, discover truth and produce an Oracle of Delphi. It would be interesting to see if someone can get Chat to contradict the theology of its creators.


The sanctioned use of ChatGPT in education will not only revolutionize the quality and breadth of what students can produce, but I think it will also do the very opposite of what people fear by dramatically reducing instances of cheating and plagiarism. Well done, professor; you are way ahead of the curve on this one.
