The street finds its own uses for things, AI Edition
Breakthrough innovations often come from people who use technology, not create it. So, how are students using AI?
Necessity really is the mother of invention.
Take one example: in 1989, a paramedic on a desert bike race was worried about colliding with other riders when he reached for his water bottle. So he took an IV fluid bag, filled it with water, put it in a sock taped to his shirt, and drank from the tubing. It must have been a bizarre sight. From that inauspicious beginning, the Camelbak, the ubiquitous backpack with a water tank, was born.
My old advisor at MIT, Prof. Eric von Hippel, devoted much of his research career to documenting how common this sort of innovation really is (you can read all of his books for free, here). As you can see in the table, his work has made it clear that vast numbers of people outside of traditional R&D fields engage in innovation (I always pause a bit when I see that 22% of surgeons had modified or developed their own medical equipment). He has also shown that many of the key breakthrough products that are later produced by large companies are first developed by users, who share their developments freely with others.
With the advent of practical, consumer-accessible AI in the form of ChatGPT, we are likely to see a similar explosion in usage, far beyond what technology-focused AI researchers anticipated. We have a tool that we are literally not sure what it does well, now available to almost anyone. And while there are limits to what generative AI can do, I expect an explosion of innovation to follow. In fact, it is already happening. See this thread on how it helped direct a movie, for example.
Since I introduced AI to my class a couple of weeks ago, my students have described dozens of uses that I never expected. I thought I would list a few of them. (And, to address the elephant in the room: no one discussed cheating, and, based on a wide variety of factors, I do not think cheating was a major concern for this group of students. But that doesn't mean it isn't a large concern for educators overall, especially as AI cheating is not automatically detectable by existing systems.)
Some novel AI uses from students
Making new things: Let’s start with the very first use I observed, which literally happened in the class in which I introduced ChatGPT. When I first showed my undergraduate class AI, I asked how many of them had experimented with the system in the prior week. Maybe 10% had, and they had all played with AI for just a few minutes (more on why that is a trap, and how to see the power of AI in a short demo). After I took them through a more complete demonstration, everyone was playing with the system. By the end of the class one of my students, Kirill Naumov, had created a working demo for his entrepreneurship project - a program that would automatically detect a face and play a video clip - using a code library he had never used before, in less than half the time it would otherwise have taken. AI is very, very good at helping people code (see the sketch after this list for a sense of what such a demo involves).
Explaining concepts: Several students told me they used ChatGPT to explain confusing concepts to them “like they were ten years old.” Since AI lies a lot, this might make you concerned, and accuracy certainly can be an issue. In this case, however, the students were using this as a check on their own knowledge, where I think AI can be more effective. Still, I think the use of AI as an explainer is one that educators and technologists should be thinking about, especially as the use is already happening in the wild. (We have a draft paper on how these sorts of errors might actually be used to improve education, as well.)
Correcting errors: Students mentioned feeding problems they got wrong on tests or problem sets to the AI to better understand what they were doing wrong. They then asked for corrections and explanations. Similarly, students would give the AI drafts of completed papers to get feedback (I have been doing this myself; it is really helpful for proofreading purposes to have an editor who can give you feedback in seconds).
As a model to overcome inertia & uncertainty: A student who had to ask for letters of recommendation for the first time told me that they used the AI to create a draft request, which they then modified. Another explained that they used it to write an email to me, based on bullet points they outlined. I find this type of use one of the most interesting and exciting. There are a lot of things that people need to do but are unsure how to start, which can create anxiety and delay. AI can help lower those barriers by providing an initial draft.
As a source of inspiration: AI is very good at producing volumes of information. Students used it to generate ideas for student-run clubs, taglines, company names, and more. More on generating ideas with AI here.
As a summary tool: ChatGPT is remarkably good at summarizing large blocks of academic text, material from user interviews, transcripts from meetings, and so on. Students were exploring all of these uses outside of class as well.
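To make the first example on this list a bit more concrete: the post does not record which library Kirill used, but a face-triggered video demo of the kind he built can fit in a few dozen lines. Below is a minimal sketch, assuming Python with OpenCV; the library choice, file name, and webcam index are assumptions for illustration, not details from the class.

```python
# Minimal sketch of a face-triggered video demo (hypothetical; the actual
# library and details from the student's project are not in the post).
import cv2

# Haar cascade face detector that ships with OpenCV
face_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

webcam = cv2.VideoCapture(0)          # default webcam (index assumed)
clip = cv2.VideoCapture("clip.mp4")   # video clip to play when a face appears (file name assumed)

while True:
    ok, frame = webcam.read()
    if not ok:
        break

    # Detect faces in the current webcam frame
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

    if len(faces) > 0:
        # A face is visible: show the next frame of the clip instead of the webcam feed
        ok_clip, clip_frame = clip.read()
        if not ok_clip:
            # Loop the clip when it reaches the end
            clip.set(cv2.CAP_PROP_POS_FRAMES, 0)
            ok_clip, clip_frame = clip.read()
        cv2.imshow("demo", clip_frame)
    else:
        cv2.imshow("demo", frame)

    if cv2.waitKey(1) & 0xFF == ord("q"):  # press q to quit
        break

webcam.release()
clip.release()
cv2.destroyAllWindows()
```

The specifics matter less than the point: a student with no prior experience in the library got something like this working, with AI assistance, in less than half the time it would otherwise have taken.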
Use is what matters
The lesson of user innovation is that technology is only really useful when it is used. This might seem like a tautology, but it isn’t. When a new technology is introduced, people adapt it to solve the needs they have at hand, rather than simply following what the manufacturer of the technology intended. This leads to a lot of unexpected uses as the new technology is pushed to solve all sorts of novel problems.
But the innovation associated with users does not end there. Users also turn to their existing technology toolset to solve novel problems that they encounter. Thanks to ChatGPT, practical AI is available to everyone. To understand why that may matter, think about a much less capable system - Excel. As anyone in an organization knows, Excel is already the ad hoc programming language of office workers, because few of them know Python, but many of them understand Excel. Thus, companies are full of improvised Excel spreadsheets built to solve all sorts of problems. People use the tools at hand. With easy-to-use AI, everyone will be applying it to the problems they want to solve. It won’t work in every case, but it will in many cases. And the impacts are likely to be unexpected and widespread.
This is why I think generative AI is disruptive. It is the first general-purpose technology available to non-technical people that can solve practical problems. While its shortcomings may make AI inappropriate for driving cars or diagnosing diseases (for now), those issues will not be as important for many of the other possible uses people will consider.
Keep your eyes open.