This may not be surprising, but it is hard to overemphasize the importance of books in human history. Books have influenced culture, religion, politics, innovation, and even the shape of our cities: the spread of the printing press explained "at least 18% and as much as 68% of European city growth between 1500 and 1600." Seemingly small improvements in the technology of books can have an outsized impact on how they are used (as I was just reminded after reading a 350-page tome on the importance of the index). Thus, changing how we interact with books changes how humanity learns, remembers, and innovates.
So might AI, the technology of the moment, change the way we interact with books? To test this, we would need both an AI with a memory large enough to hold a book, and an author who knows their own book well enough to judge the AI's results. Dear reader, we have both. Anthropic's Claude[1], one of the three major foundational Large Language Models, now has enough memory to hold a short book (technically, it has a context window of 100,000 tokens, which is around 70,000 words), and I happen to have written a short book on entrepreneurship (29,868 words) a couple of years ago. I pasted the latter into the former, and ran some experiments.
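For the curious, the mechanics are simple. I literally pasted the manuscript into Claude's chat window, but the same experiment can be run programmatically. Here is a minimal sketch using Anthropic's Python SDK; the file name, model name, and the tokens-per-word heuristic are my illustrative assumptions, not part of the original experiment:

```python
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

with open("book_manuscript.txt", encoding="utf-8") as f:
    book = f.read()

# Rough fit check: English prose runs ~1.3-1.5 tokens per word, so a
# 100,000-token window holds roughly 70,000 words. (A heuristic, not a tokenizer.)
word_count = len(book.split())
print(f"~{word_count} words, ~{int(word_count * 1.4)} tokens of 100,000")

response = client.messages.create(
    model="claude-3-5-sonnet-latest",  # illustrative model name, not the 2023 original
    max_tokens=1024,
    messages=[{
        "role": "user",
        "content": book + "\n\nSummarize the key points of this book, and "
                          "list the examples and research it gives to "
                          "support each point.",
    }],
)
print(response.content[0].text)
```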
AI as reader and editor
Every author knows, and dreads, what most people want AI to do with books: summarize them. They want to distill pages of thoughtful prose, carefully considered language, and specific phrases into a few pithy points. So, can AI do this? Yes. And surprisingly well. Further, it has enough "sense" of context that you can ask for expansions: tell me the examples and research the book gives to support each point. And it does that, too.
But does it "know" what it is reading? Most of my readers probably realize that AI does a very good job of faking understanding. In fact, it does such a good job that there is substantial debate among researchers over whether AI has developed a generalized sense of the meaning of human language, or whether that is an illusion. We won't deal with such a weighty topic directly, but I did carefully check whether the material being produced contained "hallucinations," or made-up facts. In this case, I didn't see any. So, I asked a harder question: give me examples of metaphors in the book. Metaphor is challenging even for human readers: it requires spotting figurative language without any clear markers (unlike a simile, a metaphor has no "like" or "as" to flag it).
And the results are impressive, though there are minor errors (one metaphor is repeated, and the baseball metaphor is perhaps too narrow a reading). What about the style and patterns of writing? Are there any phrases or verbal tics that repeat throughout the book?
Again, no clear hallucinations, and a pretty impressive job (though I am starting to feel bad about my writing). Which, of course, leads us to our next topic: can it work as an editor? I asked: "As an editor, offer both several broad suggestions, and several specific ones, about how the book could be made more accessible and like a pop science bestseller and also: create a better transition between chapters 2 and 3. give me the original and your changes, and why you made them."
In general, this isn't bad... but it also isn't great. Nothing is particularly wrong with the advice, but it is also not deeply insightful. Similarly, the rewritten prose is fine, but not especially compelling. All of this highlights something that has become clear about the current state of AI: if you are a very good writer or editor, you are better than current AI (though you can still benefit from AI help in many areas). If you are only okay at a particular task, AI might outperform you. As the technology advances, a lot depends on whether skilled humans continue to do better than the best AIs.
A practical use: Help for instructors
Since the AI has an impressive ability to understand text, one natural use case is helping teachers, who often assign books to classes. Given the entire text of the book, can AI help an instructor create more meaningful learning?
I think the answer is yes. To see why, I started by having the AI create a quiz, using a variation of a prompt we discuss in our paper: "You are a quiz creator of highly diagnostic quizzes. You will make good low-stakes tests and diagnostics. You will make 5 quiz questions on the book suitable for college students. The questions should be highly relevant and go beyond just facts. Multiple choice questions should include plausible, competitive alternate responses and should not include an 'all of the above' option. At the end of the quiz, you will provide an answer key and explain the right answer." (We designed this for GPT-4/Bing, but it works well for Claude, too.) In general, most questions were good, but there were also potential problems, and they were not the usual hallucinations that sometimes creep into AI answers. For example, one flawed question was:
According to the book, which groups of founders are MOST likely to succeed?
a. Young founders
b. Solo founders
c. Founding teams of strangers
d. Family members founding together
Answer: d. The book found that family members founding together had the highest success rates of any type of founding team.
Why was this wrong? While older founders do outperform younger founders, and family members outperformed all other relationships among founding team members, the book never directly compares solo founders, young founders, and family members. So while the AI did not hallucinate entirely novel facts, it missed the subtlety of the situation. But when I pointed out this error, the AI did much better, spotting the problem and citing the relevant text. (Further evidence that you don't want to rely on a single prompt, but rather interact with the AI repeatedly, to get the best results.)
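That back-and-forth is worth making concrete. In API terms, "interacting repeatedly" just means resending the whole conversation history with each new turn, so the correction lands with the book and the flawed quiz still in context. A minimal sketch, where the model name and file name are illustrative assumptions and the follow-up wording is a paraphrase of my correction:

```python
import anthropic

client = anthropic.Anthropic()
MODEL = "claude-3-5-sonnet-latest"  # illustrative stand-in for a long-context model

with open("book_manuscript.txt", encoding="utf-8") as f:
    book = f.read()

QUIZ_PROMPT = (
    "You are a quiz creator of highly diagnostic quizzes. You will make good "
    "low-stakes tests and diagnostics. You will make 5 quiz questions on the "
    "book suitable for college students. The questions should be highly "
    "relevant and go beyond just facts. Multiple choice questions should "
    "include plausible, competitive alternate responses and should not "
    "include an 'all of the above' option. At the end of the quiz, you will "
    "provide an answer key and explain the right answer."
)

history = []  # the whole conversation goes back with every request

def ask(user_text):
    """Add a user turn, send the full history, and store the assistant's reply."""
    history.append({"role": "user", "content": user_text})
    reply = client.messages.create(model=MODEL, max_tokens=1500, messages=history)
    text = reply.content[0].text
    history.append({"role": "assistant", "content": text})
    return text

# Turn 1: the book plus the quiz-creator prompt.
print(ask(book + "\n\n" + QUIZ_PROMPT))

# Turn 2: don't start over; point out the flaw. The model still "sees" the
# book and its own quiz, so it can locate and cite the relevant passages.
print(ask("Your founder-success question assumes the book directly compares "
          "solo, young, and family founders. It doesn't. Quote the relevant "
          "passages and rewrite the question."))
```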
But the AI did much better at a wide range of other educational tasks based on the book. Asking it to "write a case study in the style of a Harvard Business School case that would require students to use the lessons of the book and provide the instructor's guide to the case" resulted in an interesting in-class exercise that I could see using.
It did other things well, too: creating lesson plans, glossaries, and other useful educational aids on demand. It also faked a very good book report (remember, AI cheating is already ubiquitous, and it is only going to become more so). But I was particularly interested in its ability to apply knowledge from the book in novel contexts and ways. "Explain the main themes of the book to me at four different levels: first grader, 8th grader, college student, PhD student" resulted in good summaries. For the first grader, for example: "Many people think starting a company means you have to be young, wear a hoodie and work super hard. But the author says that's not true - there are lots of different kinds of great founders. And great ideas can come from anywhere, not just on computers. As long as you experiment and learn, you can start a business your own way."
I also tried "explain how the book might be useful to a dairy farmer in Wisconsin, a ninja living in ancient Japan, an experienced venture capitalist, and Glormtok an orcish barbarian from the fantasy steppes" (sorry, I couldn't help myself). This resulted in lessons that actually encapsulated what the book was about: "The ninja's clan likely has deep-rooted beliefs about what makes a capable warrior, but these myths may blind them to new approaches. Adopting a more skeptical, evidence-driven stance towards clan traditions could allow the ninja to innovate tactics and weapons, gaining an advantage over enemies and rivals beholden to outdated dogma."
Teachers will find a lot of value in getting the AI to help make book-based assignments more meaningful and useful.
AI and Books
After these experiments, I have come to believe that how we relate to books is likely to change as a result of AI. Search engines changed how we found information, but they never had any sense of the underlying content they indexed, which limited their usefulness. Thus, they never altered how we used books in a deep way. They might help us find a keyword in a book, but we still had to read the actual text to know what the book said.
Now, AIs have, or at least appear to have, an understanding of the context and meaning of a piece of text. This radically changes how we approach books as sources of information and reference: we can ask the AI to extract meaning for us, and get reasonable results. These changes are exciting in some cases (there are amazing opportunities for AI-assisted scholarship) but threatening in others (why read the book when you can just ask an AI to read it for you?).
More broadly, larger context windows mean AI is soon going to "remember" a lot more information than we have come to expect, and to do so far more accurately. With more accurate, detailed access to human knowledge provided by these larger context windows, AIs will begin to change how we understand and relate to our own written heritage in massive ways. We can get access to the collective library of humanity in a way that makes the information stored there more useful and applicable, but that also elevates a non-human presence as the mediator between us and our knowledge. It is a trade-off we will need to manage carefully.
[1] As an AI, Anthropic's Claude is around the same capability level as GPT-3.5 (generic ChatGPT), which means it is much less powerful than GPT-4 (ChatGPT Plus/Microsoft Bing in Creative Mode). Thus, the results are not going to be at the quality level of the most powerful AI of the moment. On the other hand, Claude is really "friendly" and easy to work with.