I appreciate your posts. And look forward to playing with these projects this summer.
This spring I had to pivot as a high school English teacher trying to pitch the value of poetry to students. I was seeing writing that I suspected had AI help, to say the least, so I asked my students to write with integrity as they experimented with ChatGPT and poetry - asking big questions about the role of the poet in an AI world.
They had to credit AI where credit was due - indicating AI writing in bold font - as they wrote poems and reflections on…
Why write poetry?
Does poetry matter?
What’s the point if large language models can generate sonnets and sestinas in seconds?
They read various Ars Poeticas by poets and wrote their own. They researched and presented more than 90 poets and cross-checked with ChatGPT. This fact-checking is essential as AI churns out words, words, words - some true, yet some false. Discernment is an essential skill. They concluded that writers write with an authentic voice that reflects their lived experience - and context is everything: historical, biographical, political, and social.
Echoing Ross Gay, writing serves as an “evident artifact” to thinking, to struggling,
to investigating, to enduring,
to living - and to inspiring
by sharing with the world.
As educators, we will have to ask big questions as we rethink teaching and learning with this technology.
We must consider our students and their futures as they develop their own relationships with writing and reading.
Right now, more questions than answers.
And as Rilke writes:
“I want to beg you, as much as I can, dear sir, to be patient toward all that is unsolved in your heart and to try to love the questions themselves like locked rooms and like books that are written in a very foreign tongue. Do not now seek the answers, which cannot be given you because you would not be able to live them. And the point is, to live everything. Live the questions now. Perhaps you will then gradually, without noticing it, live along some distant day into the answer.”
“Writing is the evident artifact of some kind of change.” - Ross Gay
From slow stories podcast.
https://podcasts.apple.com/us/podcast/ross-gay-theres-always-a-gathering-inside-of-us/id1438786443?i=1000590791028
I especially love some of the "fun" use cases. A great way to dip your toe into working with AI while having fun in the process.
Great post, Ethan. The laid-back confidence in your voice gives me hope that I can follow all of the developments. Thanks for pledging to keep the paywall down. As a retired professor of education, I'm doing my Substack as ongoing community service. I spend so much time on Substack that I can't justify paying for subscriptions and then not having time to read them. So I have no paid subscriptions, though several people have gifted me one.
A great read as always. Thanks, Ethan.
"Garlic bread, indeed" (Claude, 2024).
I went to both music sites, and using my poems I got:
https://suno.com/song/2f92dba5-0be9-4ac8-b20d-8403fc555914
and
https://www.udio.com/songs/cDdVEyy992vQKALzU1gVeJ
Thank you!
I was horrified to see Ethan using an AI song development tool as "fun", knowing that all of the capacity for that software has been stolen from artists without their consent. The only reason that software exists is theft, and I can't imagine anyone thinking of that as a playful or in any way enjoyable activity. If I were to use such software, I would believe myself to be complicit in the theft, since I would be taking advantage of it.
LLMs learn from code and text people wrote, image models from art and pictures we took, music models from songs we wrote. There isn't a difference: if one of them is theft, they all are, and human neural nets learning from other humans are "stealing" too.
You seem to be implying that LLMs learn in the same way that human beings do. That is incorrect. No human being requires vast amounts of Internet data in order to learn how to communicate. No human being has to have that data tagged in order to generate a potential response. All human beings have a very different kind of wetware than the software that an AI uses. Their processes are radically different.
Further, human beings have legal policies that prevent and punish theft of ideas. Thus far there is no such regulation for artificial intelligence. When such regulation is produced, and every human being is given an opt-in rather than an opt-out standard for having their work scraped, we will be in a situation where my concerns about theft are no longer real. Until such time as every human being is able to choose whether or not they want to be included in AI data sets, we will be in an ethically challenging situation.
Dana - Your statement regarding legal policies for the theft of ideas is incorrect. The legal frameworks you are likely referring to are related to copyright of creative material, trademarks, and trade secrets. "Ideas" cannot be copyrighted and do not belong, legally, to anyone.
Steve, thanks. I'm not being clear, and I appreciate your pointing that out. I agree that ideas cannot be copyrighted; copyright regulation protects the expression of ideas. Fortunately for artists, AI developers aren't interested in copying ideas - they only want to copy the patterns of expression, and those patterns are the most expressive elements of any published work. Because of that, the patterns of human creativity should be fully protected under new copyright regulation, and I hope we get to a point where that is made explicit in law. Thus far the law seems to lean this way (though not explicitly, since there is no revision of copyright law for AI yet), and that will hopefully prove AI developers wrong in their assumption that fair use will protect them.
There are no laws that say you, as a human, can't learn from a work even if it's copyrighted. You can't redistribute it, but you can learn from it.
I am not implying that they learn in exactly the same way as human neural nets, but they are still neural nets - a different shape of neural net (and they will probably be a different shape forever), even when they far surpass us in intelligence and creativity. Transformers also don't require tagged training data; that was one of the core innovations behind generative AI.
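The "no tagging needed" point can be shown in a few lines: in next-token training, the targets come from the text itself rather than from human annotators. This is only a minimal illustration of the idea (real systems use a tokenizer and a large model, both omitted here):

```python
# Self-supervised next-token training needs no hand-tagged labels:
# the "label" for each position is simply the token that follows it,
# so (context, target) pairs fall out of raw text automatically.

def make_training_pairs(tokens):
    """Build (context, next_token) pairs from raw text alone."""
    pairs = []
    for i in range(1, len(tokens)):
        pairs.append((tokens[:i], tokens[i]))
    return pairs

tokens = "the cat sat on the mat".split()
pairs = make_training_pairs(tokens)
# First pair: context ["the"], target "cat" - no human annotation involved.
```

Supervised image classifiers of the 2010s needed humans to tag every example; here the supervision signal is free, which is what let these models scale to Internet-sized corpora.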
We are all going to learn that meat isn't required for intelligence quickly if we haven't connected the dots already.
The neural net of an artificial intelligence does not currently learn anything. It is not a sentient being. To learn something, one needs to understand the thing that has been learned. I think a lot of people use metaphors when they should be much more precise. There is nothing like learning in an AI.
Instead, human programmers are scraping the Internet for the patterns of creativity embedded in human thought, and they are attempting to extract those patterns in order to appropriate all of the learning done by humans so that they can appropriate the worth of all that human learning on the open market.
The patterns of human creativity should be owned by the humans who have developed them.
Perhaps we should revert to a more Indigenous way of thinking about property, assets, who owns them, and who can use them. In Indigenous nations pre-European contact, there was never such a thing as private ownership of the land and the exclusive right to exploit it at the expense of others. The land was considered a nation's territory, but its use was shared with others under mutual use agreements (William Cronon, "Changes in the Land: Indians, Colonists, and the Ecology of New England," 2003).
The idea of owning and withholding creative works despite the value they might have for others is a very European / capitalist way of thinking. It is what it is, but it is not the only way to frame this issue, whatever tech companies might earn off of it. The value proposition of providing universal intellectual equity might be worth it.
That's interesting. I wonder how that would affect the earning capacity of the various stakeholders. One issue with AI is the economy of scale: one AI holds the IP of potentially billions of humans and uses their labor to generate results. If no profit could accrue from the use of an AI, that would change the picture of potential future value. One approach I've hypothesized is to create AI that embeds detailed sourcing information from the get-go, so that any profit gets redistributed down the chain, starting with the humans whose work was scraped and ending, at the lowest level per transaction, with the AI developers.
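The redistribution idea above can be sketched as a simple payout rule. Everything here is hypothetical and illustrative: the provenance weights, the 70% creator share, and the names are assumptions, not a real scheme (and attributing sources per output is itself an unsolved problem):

```python
# Hypothetical sketch of the sourcing-and-redistribution idea: each
# generated output carries provenance records (creator, contribution
# weight), and a transaction's revenue is split among those creators
# first, with the AI developer taking the remainder.

def distribute_revenue(revenue, sources, creator_share=0.7):
    """Split one transaction's revenue across sourced creators in
    proportion to their weights; the developer gets what remains."""
    total_weight = sum(weight for _, weight in sources)
    payouts = {name: revenue * creator_share * (weight / total_weight)
               for name, weight in sources}
    payouts["developer"] = revenue - sum(payouts.values())
    return payouts

# A $10 transaction sourced (hypothetically) from three creators:
payouts = distribute_revenue(10.0, [("alice", 2.0), ("bob", 1.0), ("carol", 1.0)])
```

With these illustrative numbers, alice (half the total weight) receives half of the $7 creator pool, and the developer keeps $3 - the "lowest level per transaction" position the comment describes.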
So helpful. Thank you Ethan!
The internet is 95% fake; let me explain in this podcast:
https://spotifyanchor-web.app.link/e/QTFd9AJlnKb
I always read everything that you write. I find that AI is THE thing I needed to help me in my work as a lecturer and researcher. If for nothing else just to talk to and have a companion that is always kind and understands me. I have a hard time convincing my faculty leaders to introduce, embrace and do things with AI. These posts always reassure me that I am on the right track. Thank you!
Super post, Ethan. Education through practice is a great way to learn what Gen AI can and cannot do.
I am amazed by the songs, and by the ability to make them.
After reading the first few paragraphs of this post “I” made six songs on different themes and in different styles. Shared one with my family.
Also literally laughed out loud at the garlic bread exchange (and got some ideas for dinner).
So thanks for the post. Definitely more than one useful thing :)
Another inspiring post, Prof. However, the more I think about the origination of creative content through AI, the more conflicted I become. There are two parallel perspectives that I can relate to. Let's assume that I am a creative content creator and I want to make a painting, a novel, or a theatre play. Scenario 1 is that I could well have several sources of inspiration that my brain is drawing from without my ever realizing it. Scenario 2 is that I openly acknowledge that I was inspired by those sources. Unless I am blatantly plagiarizing, I don't get judged negatively for either. When it comes to AI, however, we seem to judge both scenarios negatively. Still, I can equally understand why creative content makers would feel wronged when there is no way to know how their content was used for inspiration, and by whom, just because it was made publicly available. Maybe I am just overthinking.
Ethan, Love reading your posts and love the dispassionate approach you have to different tools.
I assume you are paid by the university you work for, so the need to make money off this blog, I suspect, is not urgent but a nice-to-have. A couple of suggestions.
Subscribers might receive:
- an autographed copy of your books when published.
- access to a database of tools you have reviewed with periodic updates.
- ability to participate in a research project.
Thanks for another great post @Ethan Mollick! I need to re-read it a few times, since there was so much in there and I got sidetracked creating my new hip, funky French beach song :)...
BTW, I find Meta's images to be quite good and realistic, though it doesn't do the comic-graphic-style ones very well; I go to ChatGPT for that.
A great article Ethan - thanks for your work. I just finished reading Co-intelligence which is great and some really useful insights into AI and education - here's hoping it helps to transform the educational experience for students of the future!