Hear, hear! 🦜 Let's talk more

Hey! Your ChatGPT posts inspired me to start my own Substack. I just created one on how I organize my ChatGPT prompts. Hope it's helpful to you too!


I've been thinking about the exact same question for a while now and would like to humbly suggest another option. My best metaphor is that AI is like our microbiome: what the microbiome is for humans as biological systems, the databiome (AI + data + infrastructure + interfaces) is going to be for humans as information systems.

More on this in my little essay here: https://essays.georgestrakhov.com/ai-is-not-a-horse/

That's a really good essay. The databiome is a fascinating concept.

The security issue is scary. Even more frightening: what if you ended up with a system that knew you better than your wife does? What if we discover that our faulty, Swiss-cheese memory is not a bug but a feature?

On the one hand, it's a fascinating idea to have something that remembers your life, organizes it, and explains it to you when you ask; on the other, that might be a lot more than any of us want to know. I don't think AI worried me until now.

Great post! In fact, I featured it (and added some of my thoughts) here:


Keep up the good work, Prof. Mollick! 💚 🥃

Great post. Analogies are indeed a double-edged sword and should be used with care! 💚 🥃

It’s just a minor point, but Babbage’s computers were digital, not analogue.

The best analogy:

AI as shoggoths

From Wikipedia:

"Shoggoths were initially used to build the cities of their masters. Though able to "understand" the Elder Things' language, shoggoths had no real consciousness and were controlled through hypnotic suggestion. Over millions of years of existence, some shoggoths mutated, developed independent minds, and rebelled. The Elder Things succeeded in quelling the insurrection, but exterminating the shoggoths was not an option as the Elder Things were dependent on them for labor and had long lost their capacity to create new life."

I cannot think of a more accurate analogy. GPTs are fundamentally alien; trying to fit them into well-known and well-understood categories probably heavily distorts our sense of their capabilities and dangers.

How should creatives feel about generative AI? Many of them seem quite hostile at the moment.

I find the simulators analogy the most helpful, and wrote about it here: https://blog.domenic.me/chatgpt-simulacrum/

I think it's especially powerful because it explains how a textbox on a webpage can be non-agentic, yet happy to be as creative and agentic as you prompt it to be, inside the world it is simulating.

I think the very name makes analogies hard because they need to cut through some big assumptions (i.e. that "intelligence" is involved at all).

But I really do like the "blurry JPEG of the internet" analogy, or better yet "lossy data compression engine".

Also, your statement about AI being "creative" just doesn't feel right (given the underpinnings of the tech).

Ha. I've been thinking a lot about what a bad search engine ChatGPT is. Let's see what happens in the coming months, but given its probability-based modelling and its current results, it may well be that AI is a complement to, not a substitute for, search engines.

Ethan, about 45 years ago, I took the best course of my life (so far), taught by Ernest May (historian) and Richard Neustadt (political scientist), called The Uses of History. They later turned it into a book called “Thinking in Time”. Their perspective was that history offers a set of valuable skills that everyone needs, regardless of their line of work. One of the most important of these is making analogies. They showed how our speech is littered with them, yet we rarely unpack their connotations before applying them. A word like ‘balkanization’ loosely conjures up a certain period in human history, when certain specific events were said to have triggered certain outcomes. They urged us to delve into any historical analogy we might use and ask what exactly happened back then, what factors were at work, to what effect, and what is similar AND different about today’s situation. Mostly, the comparison was faulty, but we could still gain insight from both its aptness and its flaws. Glad to see in your article another great example that demonstrates their hypothesis!

> But it is undeniable that ChatGPT and Bing AI are shockingly adept at generating ideas, even maxing out standard creativity tests.

I quite agree. I've been using AI art and AI-generated item descriptions in my D&D campaign with excellent results. (And occasionally jotting down my most interesting findings on my Substack, though it ain't much yet.) I even worked with an AI to generate a stat block for a boss fight that had my players excitedly struggling against a creation worthy of a fey queen in both power and style. It's a great tool.

But, as another AI nerd pointed out when I brought this up: just how much has my high-quality DM's aide cost in terms of money and man-hours? And what was it supposed to have achieved by now? A bit more, he observed, than generating delicious menus for feywild banquets.

Now, I ain't complaining. It wasn't my money research institutions were shoveling into this thing to give me a minion. But he had a point.

My favorite analogy is 'stochastic library', or the more esoteric 'wishful mnemonics'. I wrote about these terms on my Substack: https://goodinternet.substack.com/p/wishful-mnemonics (in German) and translated part of it with, well, ChatGPT.


One of the most beautiful terms for this new technology, in my opinion, is "wishful mnemonics", coined by Beth Carey: an AI system not as an intelligent and therefore capable agent, but as a "wish-fulfilling memory aid", which poetically describes exactly what these machines do.

Another term that I have often seen in the AI bubble on Twitter describes this technology as a "synthesis engine", which captures the technological evolutionary leap from search engines like Google to a machine capable of presenting existing knowledge in new, algorithmically generated variations: for example, extrapolating our knowledge of protein folding to practically all 200 million possible protein structures, or extrapolating our knowledge of chemical weapons into 40,000 new variants.

Psychologist Alison Gopnik has now presented a framework for these technologies, in a piece I picked up from the Wall Street Journal, which describes AI systems as a continuation of tools within the cultural evolution of making knowledge accessible. Modern large language models have been trained on giant collections of existing human knowledge, and they store billions of different associations. This puts these AI models much closer to cultural technologies such as libraries, writing, or language itself; all of these technologies serve to provide, mediate, and recombine existing knowledge.

This framing allows us to think anew about forthcoming regulation of the technology, for example in the context of copyright-protected works in the training datasets of image generators. Many countries impose a legal-deposit obligation on national and state libraries, under which rights holders must supply copies of their publications to those libraries. That raises the question of whether, and in what form, AI systems that make the collected knowledge of humanity accessible should be managed like a state library, and how they can be kept up to date.

In my opinion, at least the large, all-encompassing "stochastic knowledge-synthesis libraries", i.e. the really large "large language models", should be operated by the public sector to keep them current and prevent abuse. This does not rule out private knowledge synthesizers; all variants are conceivable, up to highly commercial, very expensive niche models for synthetic image generation, such as detailed comic production in all possible brushstroke variations and specially trained illustration styles, for a modular AI Photoshop of the next generation.

Classifying large language models as cultural tools, analogous to the Internet, printing, and writing, strips esoteric, anthropomorphic associations such as "intelligence" or "consciousness" from these novel database technologies, and enables a language that allows for regulation without falling into the trap of humanization while truly showing what these knowledge synthesizers can do.

What AI systems like DALL-E already suggest today in rudimentary form for the domain of illustration will change all areas of human knowledge tomorrow. And whatever the regulation and modalities of the coming AI economies may look like, it helps to think of these models not as intelligences, but as cultural tools of knowledge transfer.
