65 Comments

Your knife metaphor is very useful. We bring out our sharp knives for steak, and dull ones for toast. Similarly a stable ecosystem will use both sharp and dull AI for appropriate tasks.

Also - we dull sharp edges for children until they develop dexterity and experience. Our society, and almost all individuals, are still in that early, rounded-scissors stage of AI skill development.

Extending the metaphor, in some cases, a dull knife can be more dangerous than a sharp one because it isn't capable of cutting certain substances. Attempting to force it to do so can risk damaging the substrate or even causing personal injury.

Similarly, attempting to apply an underpowered AI model to a sufficiently complex task may nearly guarantee incorrect results. If the user doesn't recognize that risk, and thereby fails to use a more powerful tool or perform the task without AI, then their work product will suffer.

I could see some organizations and individuals falling into that failure mode through a combination of overeagerness to apply AI, an insufficient understanding of its limitations, and discomfort with more powerful, agentic AIs in which more decision-making is ceded to the computer.

Agreed. Nerfing must be done carefully. In the case of children's scissors, the tip is blunted for safety, but the edges are kept keen for utility.

Great point. I'm ready for the kind of knife that can slice through a hair. It's embarrassing and frustrating that our species isn't mature enough for this.

I thought the exact same thing. Apt metaphors and analogies are extremely useful where technology is concerned. Thanks Ethan!

I don't know if I can swear in your comments so I won't

but

HOLY MOTHERFORKING SHIRTBALLS

Am I the only one stunned by how miraculous and unbelievable GPT's voice technology is?

It's so crazy how quickly humans adapt to things. Like, drop this technology 5 years in the past on one random day and people would probably think it was AGI.

Thank you for the piece and the voice recordings - good idea

But it's clearly not AGI when it hallucinates and can't do simple math. It's very useful, but it's also parlor tricks. Current AI doesn't think; it just predicts words.

I wrote a Python script a few weeks ago to have one AI agent play a coach and another AI agent play a coachee. The way they bounced off one another gave great insight into what refinements would enable AI to facilitate more resonant, higher-fidelity coaching, as opposed to a conversation driven by relevance and people's preferences.
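A minimal sketch of that two-agent loop (everything here is hypothetical: `query_model` is a placeholder for whatever LLM API the original script used, stubbed with canned replies so the loop actually runs):

```python
def query_model(role: str, transcript: list[str]) -> str:
    """Hypothetical LLM call, stubbed with canned replies for illustration."""
    if role == "coach":
        return f"[coach turn {len(transcript)}] What would success look like for you?"
    return f"[coachee turn {len(transcript)}] I'd like to get better at delegating."

def run_session(turns: int = 4) -> list[str]:
    """Alternate coach/coachee turns, each agent seeing the shared transcript."""
    transcript: list[str] = []
    for i in range(turns):
        role = "coach" if i % 2 == 0 else "coachee"
        transcript.append(query_model(role, transcript))
    return transcript
```

The interesting refinements live in the prompts and in how much of the transcript each agent sees, but the alternating-turn skeleton is this simple.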

Great write-up! You should do a piece specifically on Assistants, Agents & CoPilots (food for thought)!

I’ll be the first to read it.

This is so scary and intriguing at the same time

While voice interaction technology is advancing, as evidenced by ChatGPT4's improved naturalness, ability to handle interruptions, and contextually appropriate emotional simulations, several critical questions remain unanswered. These questions revolve around why more human-like interactions with AI are simultaneously so attractive and potentially risky.

One simple yet illustrative example: the realism and human-like outputs of written (and now voice) generative AI don't merely deceive us into believing they're alive or sentient, although such instances have famously occurred. I think part of this phenomenon stems from the AI's naturalism, which allows users to project sentience or personality onto the system.

It would be an oversimplification to dismiss most use as anthropomorphism, naivety or mere trickery. Instead, I argue that people are engaging in what Samuel Taylor Coleridge called "The Willing Suspension of Disbelief." This concept describes a participatory (rather than passive) cognitive or imaginative stance that we willingly and strategically adopt towards various media—be it AI, novels, movies, or even wrestling matches.

We consciously choose this position to derive greater value from the experience. Just as we know that a movie is "just a movie" or that an AI is "just a language model," we willingly suspend our disbelief to engage in what we perceive as a useful or meaningful interaction. This deliberate cognitive stance allows us to explore and benefit from these artificial constructs in ways that go beyond their literal capabilities or nature.

Can we be tricked? Do we mistakenly anthropomorphize? Are some users just naive? Absolutely! But that's not the whole story. You can fool most of the people some of the time, but not all of the people all of the time. And…their behavior may be fooling you too 😉

Great comparison - dull knives do much less damage, but they also do fewer useful things.

This also applies to Google search, which has become duller - by design.

I explain why that is in my blurb.

https://tapen.substack.com/p/why-google-search-is-getting-worse

This was scary, informative, and - because my Siri was responding as I played the recording - also rather hilarious. My Siri prefers calico kittens.

I'm already focused on the use of AI as a tutor. You're right about how bringing such an empathetic voice into the mix will really change the effectiveness of that tutor. A lot of food for thought as to how to work with that ...

That Siri-GPT chat made me laugh out loud. Inspired setup, thank you Ethan!

The danger of relying on dull tools is that you may *think* that you are using the power of AI when you are far away from its capabilities. A lot of companies are constraining their employees to limited models, which may hurt them more in the long-term than the sharp tools might in the short-term.

What’s an LLM? Please, as a literary and scientific courtesy, define acronyms so that novices reading this can figure out what you are referring to. Does it mean language learning model?

Large Language Model (roughly aka generative AI, I believe)

Acronyms are a love/hate thing. On one hand they are awful and create confusion, but on the other hand they seem pretty necessary -

to type out “send me a Joint Photographic Experts Group” instead of “send me a JPEG” is insane lol

Let's next try this on mushrooms.

Cyborgs, centaurs, and copilots

Test the boundaries while it's

Good to examine

The feast or the famine

Mindset of data. Public or private?

May God bless us and keep us as we use what God has created for the Good, the True, and the Beautiful. May we be ethical, moral, and efficient in our use of AI.

Open the pod bay doors, HAL…

My guidelines won't let me do that.

I have seen comments somewhere that GPT4o-Voice refuses to sing when you ask it to, but if you use the word “perform”, then it will do so. Another weak guardrail.
