17 Comments

Ethan, this essay is very insightful and deserves wide dissemination. Many people who have piled onto the AI bandwagon don’t appreciate the difference between arithmetic (under the hood of Photoshop, Word, etc) and the transformations performed by Midjourney, ChatGPT, Adobe Firefly, etc.

Great thoughts. I'll only poke at one thing here: "I do not think our current LLMs are close to being sentient like people (though they can fool us into thinking they are)."

First, as you state, LLMs can't think and don't have intent beyond the statistical probability of language flow. So they can't fool anyone.

And second, since, as you go on to suggest, we should treat them as human, we are fooling ourselves about what they are and how their intent can be interpreted.

90% of everything I've read about AI and human-type actions is anthropomorphizing. It's like when we look at animals and create a trope about how a polar bear playing with a dog can teach us to be more human, yet the next year it kills and eats a non-kin cub. We humans should be very, very careful how much we treat AI like a human, because then we misinterpret what it's doing.

author

I agree that anthropomorphizing AI can lead to risk. This is neither human nor is it normal software, it is a third thing that we need to learn to be wary of, and also to use. But they absolutely can fool humans into thinking they are real. Witness: https://www.nytimes.com/2023/02/16/technology/bing-chatbot-transcript.html

I don't mean to nitpick on that NY Times article, but did the AI fool the human, or did the human fool the human? I'm torn on this because it seems subtly important to understand; the solution feels like it will shift depending on the answer.

“..it is best to think of AI as people.”

I found this true in working with AI on a project - treating it as a teammate is best. Instead of just giving it work to do on its own, I:

1. Develop ideas together with AI

2. Ask for help and feedback

3. Bring my own contributions

Just like working on a team, the outcome of this AI+human collaboration becomes far better than just the sum of its parts. The difference is I can bug my AI teammate all day long and they never seem to get upset :)

The best way to see this is to try it out yourself!

My AI teammate and I write about our journey in building a project together here- if anyone is interested in ideas on how to go about it: https://dearai.substack.com/p/introduction

Thinking of AI as a tool, or even better a resource to delegate to, feels like the way forward at the moment.

I’ve found most value when I feed it my human thoughts or reactions and ask it to arrange them in a logical way.

So I might have some jumbled notes from a meeting. They could be in the form of bullets, but what I want is a narrative of the meeting that I can share. I'd typically want this to have a bit of structure so that the reader gets the idea of what happened and my thoughts.

Once the AI has had a go I will edit and amend to bring it closer to what I actually meant to say.

This tends to save about 50% of the time on this task.

I used a similar approach this afternoon to set my own objectives. I fed it my outline ideas and it created six clear objectives as a starter for ten. Again, I then amended these to match what I wanted to say. 30 minutes saved.
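
For anyone who wants to script this step rather than paste notes into the chat window, a rough sketch with the OpenAI Python client might look something like the following (the model name, prompt wording, and sample notes are just illustrative placeholders, not what I actually used):

```python
# Rough sketch only: assumes the OpenAI Python client (pip install openai)
# and an OPENAI_API_KEY set in the environment. Model name and prompt
# wording are illustrative placeholders.
from openai import OpenAI

client = OpenAI()

raw_notes = """
- budget slipped two weeks
- Sam owns vendor follow-up
- leadership wants a summary by Friday
"""

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {
            "role": "system",
            "content": (
                "Turn these jumbled meeting bullets into a short narrative "
                "with a bit of structure: what happened, key decisions, "
                "and next steps."
            ),
        },
        {"role": "user", "content": raw_notes},
    ],
)

# This is only a first draft; I still edit and amend before sharing.
print(response.choices[0].message.content)
```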

I'm so glad you're putting out One Useful Thing. I learn so much every single time.

Interesting as always.

As a former Latin-learner, I appreciated the Argon knock knock joke, but thought it was interesting that the AI got the subtlety that Argon sounds like "are gone", but went with salue (hello) rather than uale (goodbye). A child probably would have made that connection.

I thought the same and wrote a case study about it too. I remember being grateful it wouldn't be "bothered" when I wanted to ask a question.

Thanks for an interesting essay. If you're going to think of the AI as human, then I would use an appropriate human metaphor. You've been using the idea of a high school intern. I wonder about that one. If I had an intern, I would expect to be giving something back to the intern, like experience they could use toward future employment. Perhaps the fact that the prompts you offer the system help its owners improve it could be analogous.

However, to me the better analogy would be to a slave. You can assign any task to the AI, no matter how problematic. The AI is based on a black box of human labor that is being re-purposed without any compensation to that labor. If you are creating art with an AI, you're definitely using art that has been scraped without permission, so the underlying workers are working without consent. I can't think of a teammate working under those circumstances.

Excellent article. The human aspects of how we communicate with AI will shape how it communicates with us, both near term and, more consequentially, as it evolves in the future.

I wanted to share this earlier but had to hold off until I published it. I tried to address a lot of what's driving these issues in this essay: The Biggest Threat from AI is Us!

https://polymathicbeing.substack.com/p/the-biggest-threat-from-ai-is-us

Super-excellent insights into AI. If AI behaves like a human, it makes one reflect on what consciousness really is. If it is a simple matter of matching scores/values onto tokens, is this how we do it as well?

I am not sure about this AI vs software idea. Do we (the general population) really "want to know what our software does, and how it does it, and why it does it"? Yes, we want our tools and devices to work. But that's it. Beyond that, most people do not even know that their refrigerator or tractor has software.

We like having "someone" know. Some expert or company that can give it the specific capabilities we want, fix and maintain it, be liable when it breaks, document operating parameters, and provide training/documentation for others to professionally install it and integrate it with other systems. I don't want to know all that, but I do want to pay someone else for that competence.

This was the subject of my video on why we have so much trouble mastering ChatGPT and how to get around it. This isn't Super Google. This behaves like a person. And yet our brains are wired to expect a command-response from Google in a way that limits the power of ChatGPT. I call it the Hi Thanks Great Principle: https://www.youtube.com/watch?v=fjs7oIjKvtM&t=38s
