If AI changes us into instrumentalist users of books, that will also change how books come to be written. You ask, "why read the book when you can just ask an AI to read it?" Well, why write a book if your readers will be AI?
"if you are a very good writer or editor, you are better than current AI (though you can still benefit from AI help in many areas). If you are only okay at particular tasks, perhaps AI might outperform you. As AI technology advances, a lot depends on whether skilled humans continue to do better than the best AIs."
This really encapsulates this moment we're in. I love collaborating with generative AI right now; there are things I do way better, and there are things I can use AI for that will save a ton of time and busywork... but I also have an uneasy feeling that this symbiosis isn't going to last forever. The AI will just be better than me at everything within a few years... everything except for expressing what's inside my own head, hopefully!
So true! I also love collaborating with AI, yet I can see that the point where I'm no longer needed isn't that far away. Funnily enough, I'm actively working to achieve that in a couple of cases that I want to automate as much as possible (like this AI-written blog post about AI surpassing us, which I think is entirely possible today: https://deargabrielle.substack.com/p/a-wifes-self-worth-declines-as-ai-takes-her-job)
And about expressing what's inside our heads: AI is already decoding meaning from fMRI scans[1]... just a few years more.
[1] https://www.nytimes.com/2023/05/01/science/ai-speech-language.html
Do you think language (as we understand it now, w/characters and words and the like) will ultimately go away? Will we communicate a different way?
I'm pretty confident the answer is yes.
That's a great question! I've wondered about it when working with text-to-image prompts. Having an image in mind, translating it to text, and hoping the AI would recreate it felt like the bandwidth of the limited text prompt was entirely too constricting (an image is worth a thousand words). It would be much easier if the AI could read my mind to generate that image. And it extends far beyond images.
Our current language has allowed us to explore vast reaches from spirituality to science. But it also constricts our thinking in various ways. Imagine the possibilities with something more akin to the fluidity and extensibility of embeddings. By definition, I think it will exceed what we can describe with words today :)
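To make the embeddings point a bit more concrete, here's a toy Python sketch. The vectors are invented purely for illustration (real embeddings have hundreds or thousands of dimensions), so treat it as a cartoon of the idea rather than real data:

import numpy as np

# Three hand-made "embeddings", invented for illustration only.
embeddings = {
    "calm":    np.array([0.90, 0.10, 0.00, 0.20]),
    "serene":  np.array([0.85, 0.15, 0.05, 0.25]),
    "furious": np.array([-0.70, 0.90, 0.10, 0.00]),
}

def cosine(a, b):
    # Similarity in [-1, 1]; nearby vectors mean nearby concepts.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cosine(embeddings["calm"], embeddings["serene"]))   # high, ~0.99
print(cosine(embeddings["calm"], embeddings["furious"]))  # negative

# Unlike a dictionary, the space between "calm" and "serene" is populated:
# any blend of the two vectors is a valid "shade" of meaning, even though
# no single English word sits at exactly that point.
blend = 0.5 * embeddings["calm"] + 0.5 * embeddings["serene"]

That continuous in-between space is exactly the fluidity that discrete words lack.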
I think we are well on our way. Binary is the language we all want to speak at our core, in order to have the highest efficiency and to get precise ideas from our brains out into the world. LLMs are a great bridge between 2022 and everything after 2023, including linking directly into the human brain or reading thought patterns. It's early days yet for some of the technology, but I'm confident the progress will be far more rapid than almost anyone thinks.
I appreciate how much less anthropomorphization you used in this essay. I find your comments generally helpful, especially since I am so much on the wary side of AI studies, and I enjoy reading people who are excited about the situation. But whenever someone anthropomorphizes an AI, I become suspicious of their knowledge level. This essay felt a lot more neutral. Thanks again.
Words matter. The slippery and problematic aspect of AI conversations is the use of words to describe AI that collide with the subtleties of what a list is and what an academic book is. To attribute knowledge to an AI is troublesome and brings back the need for philosophical alignment within these conversations. What is at play in this field leads us to the even more disturbing socioeconomic gap between the elite academic type and the common man who scrapes out an existence, akin to the crowd at a medieval witch burning.
Using words like "understanding" and "knowledge" implies the subject is capable of the nuances required for understanding. Or have we collectively just set the bar so low that any machine can be allowed to control the situation?
We really need to revisit the Anschauung, the contextuality: to what depth do we set the relatedness such that, at some point, a machine indeed understands the dilemma? If we don't understand the dilemmas, we won't understand how to manage them.
I've tried uploading PDFs and EPUBs to Claude 2. No luck whatsoever. I've even converted the originals to TXT files. Still no luck. Also, with scientific papers, I can rarely upload more than three simultaneously, not to mention that Claude 2 will often get one paper correct but totally hallucinate the others.
I must say, I'm not impressed. At all.
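In case anyone else hits the same wall: one workaround is to do the text extraction yourself and paste the result in chunks, rather than trusting the upload pipeline. A rough Python sketch, assuming the pypdf library (the file name and chunk size are illustrative):

from pypdf import PdfReader

reader = PdfReader("paper.pdf")  # hypothetical input file
text = "\n".join(page.extract_text() or "" for page in reader.pages)

# Rough chunking by characters (~4 characters per token is a common rule
# of thumb), so each chunk stays inside the model's context window.
CHUNK_CHARS = 12_000
chunks = [text[i:i + CHUNK_CHARS] for i in range(0, len(text), CHUNK_CHARS)]

for n, chunk in enumerate(chunks, 1):
    print(f"--- chunk {n} of {len(chunks)}, {len(chunk)} chars ---")
    # Paste each chunk into the conversation, asking the model to wait
    # until all chunks have arrived before answering questions.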
These examples are illuminating and well chosen, but I'm a little thrown off by your final statement that we can ask an AI and get reasonable results. Mostly what I'm looking for when reading something for an instrumental purpose is to learn how to ask better questions, not to get plausible answers to the questions I currently have. I don't want reasonable results, I want dialogue. I want the process of reading and reflecting over time to change how my attention is allocated, thus affording the thinking of different thoughts than I would have had, had I not read and engaged with the text. A bullet-point summary doesn't afford that transformation in how one sees the world.
Teachers using a generated teaching guide could afford this, and I think that would be wonderful. But we should always keep in mind that dialogue and learning to ask better questions are a critical part of learning, not just getting answers to the questions we currently have. Do you think this guy still wants to know how to parse HTML with regex? http://stackoverflow.com/questions/1732348/regex-match-open-tags-except-xhtml-self-contained-tags/1732454
a) this is so cool and I can’t wait to start playing with it in this way
b) I need to read your book!
c) there has never been a better time to be an autodidact
We used to categorize reading into “learning to read” and “reading to learn.” Now we have to add “reading for productivity.” They all have value. There are benefits to reading a book beyond extracting information. We’ve had book summary apps such as Blinkist and SparkNotes for some time now. We still read. We read to understand ourselves.
I read the Steve Jobs biography with great interest. I skim research reports and blog posts looking for the key points.
I did, Ethan, read your post from end to end (skimming the middle a bit), and then reread/skimmed it before I wrote this post. I invested an awful lot of time reading, re-reading and reflecting over one small post. You, the writer, have created value in my life.
As an instructor in a doctoral program, I was intrigued by your comment: "These changes are exciting in some cases (there are amazing chances for scholarship assisted by AI), but threatening in others (why read the book when you can just ask an AI to read it?)."
We are always challenging our students to think deeply and critically while interrogating peer-reviewed articles. Now that it is so easy to put an AI to work finding the meaning, flaws, and key points in an article, what role does the student have? I believe we need a new set of criteria and skills by which to evaluate and challenge our students. Perhaps it is their ability to prompt the AI? It seems AI is poised to radically change how doctoral students do research, write papers, and conduct scholarship. I'd love to hear what everyone in higher ed thinks about this opportunity/dilemma.
I tried your prompts with GPT-4/Bing to create quiz questions from an open source online project management text, one chapter at a time. The process is amazing, but . . . Try as I might, I could not get the system to stop providing "All of the above" as answer options. Help!
Here is the prompt I am using, with the textbook URL removed.
Do the following for the content at this open education resource textbook URL:
[URL removed for this comment.]
Do not click through on any of the links that are on this web page.
You are a quiz creator of highly diagnostic quizzes.
You will make good low-stakes tests and diagnostics.
You will make 6 multiple choice quiz questions on the book suitable for college students.
Do not include "all of the above" as a correct option in any of the quiz answers.
Only one answer should be correct for each question.
The questions should be highly relevant and go beyond just facts.
Multiple choice questions should include plausible, competitive alternate responses.
At the end of the quiz, you will provide an answer key and explain the right answer.
Does performance improve if you replace "a correct option" with "an option"? Humans can do logical reasoning, but reasoning is incidental to current LLMs, so your following sentence might not be treated in conjunction with the previous one, but as a soft objective.
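Another option, if you can call the model through an API rather than the chat UI, is not to rely on the instruction at all: validate the output in code and regenerate on failure. A minimal sketch, where generate_quiz() is a hypothetical stand-in for whatever LLM call you use:

import re

BANNED = re.compile(r"\ball\s+of\s+the\s+above\b", re.IGNORECASE)

def quiz_is_clean(quiz_text: str) -> bool:
    # Reject any quiz whose options mention "all of the above".
    return BANNED.search(quiz_text) is None

def quiz_with_retries(generate_quiz, max_tries: int = 3) -> str:
    for attempt in range(max_tries):
        quiz = generate_quiz()  # one LLM call per attempt
        if quiz_is_clean(quiz):
            return quiz
        # Optionally: feed the offending options back into the next
        # prompt and ask the model to rewrite just those.
    raise RuntimeError("model kept producing 'all of the above' options")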
Maybe.
Another practical use for AI as an editor/writing partner is in journalism, to help fix a news media that suffers from a major loss of public trust. AI can be used as a writing partner to detect bias and help writers overcome their natural biases. The news media is in decline due to loss of trust by the public, and yet it has avoided fixing what most people view as a poor-quality product. How many industries get away with this? AI can help (a rough sketch of what that could look like follows the quote below). This page I ran into, https://FixJournalism.com, has a good image illustrating "AI Nudged To Neutral," explores the details, and notes the absurdity of the news industry's position:
'A study by Gallup and the Knight Foundation found that in 2020 only 26% of Americans reported a favorable opinion of the news media, and that they were very concerned about the rising level of political bias. In the 1970s around 70% of Americans trusted the news media “a great deal” or a “fair amount”, which dropped to 34% this year, with one study reporting US trust in news media was at the bottom of the 46 countries studied. The U.S. Census Bureau estimated that newspaper publishers' revenue fell 52% from 2002 to 2020 due to factors like the internet and dissatisfaction with the product.
A journalist explained in a Washington Post column that she stopped reading news, noting that research shows she was not alone in her choice. News media in this country is widely viewed as providing a flawed product in general. Reuters Institute reported that 42% of Americans either sometimes or often actively avoid the news, higher than 30 other countries with media that manage to better attract customers. In most industries poor consumer satisfaction leads companies to improve their products to avoid losing market share. When they do not do so quickly enough, new competitors arise to seize the market opening with better products.
An entrepreneur who was a pioneer of the early commercial internet and is now a venture capitalist, Marc Andreessen, observed that the news industry has not behaved like most rational industries: “This is precisely what the existing media industry is not doing; the product is now virtually indistinguishable by publisher, and most media companies are suffering financially in exactly the way you’d expect.” The news industry collectively has not figured out how to respond to obvious incentives to improve its products. '
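Following up on the sketch promised above: here is a minimal illustration of the "AI as neutrality editor" idea. It assumes the OpenAI Python SDK; the model name and prompt wording are mine and purely illustrative, not anything FixJournalism.com prescribes:

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def flag_loaded_language(draft: str) -> str:
    # Ask the model to act as a neutrality editor on a draft article.
    response = client.chat.completions.create(
        model="gpt-4",  # illustrative model choice
        messages=[
            {"role": "system",
             "content": "You are a neutrality editor. List any emotionally "
                        "loaded words, one-sided framing, or unsupported "
                        "claims in the article, each with a suggested "
                        "neutral rewrite."},
            {"role": "user", "content": draft},
        ],
    )
    return response.choices[0].message.content

# Usage: print(flag_loaded_language(open("draft.txt").read()))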
I think it is pretty clear that mediocre writing (and all other kinds of creative output) no longer needs human actors.
As probably 90% of internet content is mediocre or worse, we will witness a lot of changes.
Really good read. A few points, based on my work on collective intelligence at MIT and beyond:
A. One open question is the following: books are software for the human mind, helping us build new lenses that we use to solve problems in new ways. Can AI help that process, at least partially, so that the cycle of innovation becomes faster and people can benefit from the knowledge from adjacent spaces? I think so, and I am working on it. Happy to connect if you’re interested.
B. Some time back, I wrote a piece about how “helping the world know what the world knows” could be transformative. LLMs weren’t the talk of the town yet, but the writing was on the wall. We should look at a convergence of things, though, not just LLMs. https://medium.com/@giannigiacomelli69/if-the-world-knew-what-the-world-knows-e66ded84d5a
C. I have written a few concept stories in the “Futures 2030” report here: www.supermind.design/resources; they may give you further ideas.
Thanks for your work - it is very valuable.
Gianni, I really would appreciate your opinion. I am interested in the way writing (or maybe creating more generally) shapes how we understand the world, individually and collectively.
Do you think that outsourcing this process of structuring and condensing our understanding will have an impact?
I know this sounds a bit like the grumpy "the world will get dumb if we use an electronic calculator instead of multiplying 12-digit numbers by hand," but writing seems to be a "higher-level" activity, so maybe it is something different.
Not sure if all of this is understandable; maybe it is just proof that writing stuff down does not increase clarity :-D.
I don't know if there's evidence yet, but it is plausible, and my initial anecdotal experience in my work corroborates that hypothesis. That said, the design of the interaction could help. And we have lost the ability to do many things (like doing math in our heads), but that doesn't mean we won't be better off overall. TBC.
Another great piece! I was surprised to see how pedagogical Claude's approach to the book was. I've been using Claude for a few months, and it's Claude+ that I find very impressive (sometimes, but not always, at the level of GPT-4... or even higher, especially with Claude+ 1.3). If I read the screenshots correctly, you used the 100k model through Poe, so the underlying LLM for that is Claude Instant, which is not as powerful as the later models. That's why I was so impressed with the results you got. Just imagine Claude 1.3 working with a 100k context window!