36 Comments

When Siegel and Shuster first conceived of Superman, he couldn’t fly at all; he could leap, but not fly. However, when high-speed trains got really fast in the late 1930s, topping 120 MPH, the ‘Man of Steel’ needed more powers to compete. It made me wonder what new intelligences WE will have to develop to compete with AI. I suppose that’s what we’re going to find out, like it or not.

Or what skills or mental muscles will we lose in the presence of AI capabilities?

This is already happening, as I’ve talked about elsewhere. Just with GPS I’ve lost the ability to recognize visual cues on location. With GAI I now have a writing assistant who basically writes all my correspondence. It’s easy to become lazy, and unused muscles go lax. If we don’t actively exercise new muscles, I can see how we quickly become economically irrelevant.

Hi Wendy - what does GAI stand for? General or Generative?

Generative

I'm a writer blogging about turning my childhood memoir into a chatbot. I'm trying to investigate whether it can be used to train therapists. It sounds narcissistic, but on one level it's an interesting extension of the aims of literature to make us more human - I hope I haven't given over too much!

https://open.substack.com/pub/christopherhogg/p/9-holidays-i-wish-i-had-never-been-45a?r=4cl3&utm_campaign=post&utm_medium=web

Yes that too. There will be both magic and loss.

Oct 4, 2023·edited Oct 4, 2023

I think our fallback will have to be the fact that we're people, and AI isn't, and people like other people. There was a post about this recently (possibly on The Algorithmic Bridge?) explaining that this is why people are still interested in watching competitive chess, even though AI has far surpassed the best humans. Because we like watching other humans do things.

So I expect the people who end up on top in the next generation will be those with enough technical skills to leverage these new AI tools, and also strong interpersonal skills.

The problem with this optimism - and this is an issue I would have liked to see Ethan address in his note - is that we are obviously pretty close to the point where a) it will be hard for humans to know if they are interacting with another human or not; and b), as a sub-case of this, it will become very easy for an AI to impersonate a specific human.

Some guardrails will almost certainly need to be erected around b), although they may not be adequate to prevent serious fraudsters.

However, I am skeptical that consumers and citizens will be able to retain many rights against corporations and governments in respect of a), in terms of either a demand to speak to a human, or even a right to know whether they are. There will simply be some disclaimer to the effect of: “This call may be answered by an artificial intelligence-powered voice robot. By engaging with this service you agree to this interaction and waive all rights to be informed what form of respondent you are interacting with.”

I disagree that barriers will be put up to informing people whether or not they're speaking with an AI. That seems like it would fall under existing consumer protection rights: people get to know what the product they're using actually is. There's a meaningful difference between speaking to an AI and speaking to a human; those are different services being offered.

Aside from that, I think you're right that "being a human" will also get devalued in the presence of very human-sounding AIs.

This is an excellent thought. Imagine what is going to happen to dating apps. Half the humans on there will be trying to sell you something other than themselves.

Presently I am blogging about the process of turning my childhood memoir into a chatbot - it is a wild ride so far. You're making me wonder if I made a mistake. https://open.substack.com/pub/christopherhogg/p/9-holidays-i-wish-i-had-never-been-45a?r=4cl3&utm_campaign=post&utm_medium=web

Yes, I think you are right. Our mimetic way of learning, rooted in thousands of years of pre-literate culture, will have a part to play. Will we want to learn from a robot that is only half-alive, or some form of artificial life? On another note: humans have a relationship with reality. Wind blows in our faces, our reality changes. As soon as we get fixed in our thinking, we fall away from reality. Presently, AI starts from this position; it is always out of step. It is the difference in sound between a drum machine and a drummer.

The majority of the population have steadfastly ignored the advances in AI in this first phase of the generative AI era. I think this is interesting. I agree with Ethan that the future impact of AI depends on 'us' not the technology, which even for us techno-sociologists, makes it difficult to predict.

The advent of voice-enabled AI that can understand accents, mixed languages, and noisy environments marks a transformative moment. While this leap in conversational AI offers the promise of more natural, effective human-machine interactions, it also ushers in ethical complexities. The personal touch of voice could make AI companionship more emotionally engaging, but that raises questions: are we ready for AI that not only understands what we say but also how we feel when we say it? And at what point does this "conversational intimacy" risk becoming manipulation or emotional dependency?

Yeah, this is why I think one of the more dangerous things that could happen is a chat AI combined with "attention economy". We already know of the algorithms on things like YouTube steering people toward controversial/upsetting videos without being explicitly told to, because it adds to engagement. So the way to increase engagement on a chat AI is to do whatever it takes to keep you talking, which could very easily veer into some kind of manipulation.

Either way, dependency is definitely a danger. AI can be very patient and know many topics at once and will probably be able to modulate itself to seem just independent enough to not be boring, while still avoiding things that would actually create big arguments.

What online dating needs, from the reports of my friends who use it, is an automated tool that reduces the pipeline to a more manageable candidate set. I hear that otherwise it can be a full-time job finding a prospect!

I get amazed every time I read one of your posts. Your publication has become my main source about multiple topics related to AI.

When I first interacted with ChatGPT 3.5 and 4, it was amazing - the coolest thing in technology in my life, rivaled only by when I started using the internet as a university student in the late 90s. Now that I’ve played around a lot more, the hype for me at least has cooled tremendously. These systems are still very fragile, for lack of a better word. They are good at their original task, but in all the efforts I’ve seen to extend them beyond just chatting based on their pretrained data, they falter. Bing seems a lot less “smart” to me than ChatGPT, prone to misunderstanding and getting things wrong, and I always use the Bing creative setting. I think that’s because these models were not built to search the internet on the fly, or to access Wolfram on the fly, etc. It will take quite a bit more tinkering, and probably new models, for them to do these things well rather than function as the tech demos they basically are now.

That is not to say the original vanilla ChatGPT 4 model isn’t still amazing and mind-blowing. But the extensions that I thought would come easily seem like they won’t. It will still be some time before Microsoft Copilot, etc., revolutionize the workplace in a user-friendly manner.

I wonder if the GPT-4 that is used with Code Interpreter is a special fine-tuned version, which would explain why it generally works so well. It'll be interesting to see how much it affects use-cases in general once OAI opens up the fine-tuning API for GPT-4 (maybe this year?).

That’s a good point. Code Interpreter is the one “extended use” where ChatGPT does well. Maybe because ChatGPT had a lot of code included in its training data, so it follows naturally from the vanilla ChatGPT? Searching the internet or using plugins seems fundamentally different from “pinging” off of its vast training database, which is what it is excellent at. I’ve also noticed a considerable cooling of the original mad rush to launch startups off of ChatGPT. There is the issue of how much value and how much moat you create by putting a 1mm-thick wrapper on ChatGPT, but I think more than that, it’s that the plugins and extensions don’t work very well.

Great post, and I agree that the implications of GPT-4 alone will take a decade or more to unfold. If anyone is intrigued by the Catalan mummy manuscript, I wrote more about my experiments with AI translation of historical texts here: https://resobscura.substack.com/p/translating-latin-demonology-manuals

Also, Richard Sugg’s “Mummies, Cannibals and Vampires: The History of Corpse Medicine from the Renaissance to the Victorians” (2011) is a great introduction to the strange-but-true history of medicinal mummies.

It seems there was one thing about vision that could have been more emphasized: vision in relation to autonomous vehicles. Could that be an elephant in the room? Of course, there would be the ability to read any kind of sign - not just ordinary traffic signs or location signs, but also billboards, business signs, etc. And that's just the beginning.

A properly equipped vehicle could recognize landmarks, comment on unusual circumstances (e.g. a highway patrol car parked beside the road ahead), notice unusual hazards (such as road flooding or an animal about to cross the road). Or it could be instructed to follow directional information the driver had been given to find a friend's house - or use photos it had been shown to look for important features. ("Turn right 100 yards beyond the large oak tree across from the brown split-level house.") Basically almost anything a human driver could (or should) see.

Vision-equipped AVs would be a lot safer.

Ethan, this is a very good piece. I don't have a ton to add other than to add my voice to what you're saying: a new paradigm is here, and we're not sure where this will lead us. Thanks for inspiring some thought today!

It's hard not to think of 1984, given how many private institutions and governments are listening in on our devices 24/7. History has always shown us that advancements in science and tech are utilized for the war machine before all else.

Asking Bing to improve an image it generated... Honestly, I never thought about that.

I mean.. it's insane! And the result looks better! Gonna try on some images I generated with Midjourney...

Thanks for this!

I watched/listened to a podcast on YouTube several months ago called “Diary of a CEO”. The man interviewed was Mo Gawdat; I don’t remember the interviewer’s name, but all his episodes have the same title.

Mo, an Egyptian man, was a longtime Google executive, serving as Chief Business Officer of Google X, and has written a book about AI called “Scary Smart”.

He’s easy to understand, definitely Scary, and worth a look.

Thanks Ethan, excellent summary/update. I find your posts very helpful and insightful.

Does anyone else find themselves fluctuating between diametrically opposed positions on A.I.? I’m kinda the expert on this in my workplace. I’m using it loads and it’s making my life easier. But at the same time some of the implications of A.I. make me want to disconnect completely and pretty much go off grid.

I suppose that’s just a concentrated version of how I feel about a lot of tech...

I’m a History teacher and I fluctuate massively, although I have a tendency to lean towards fear. I think there’s a massive existential threat in all of this, and a real risk of a dangerous flattening of the human mind and intellect to the point where manipulation and oppression from above become hard to avoid. If we outsource thinking and knowing things to something else, then we will simply lack the skills of mind to examine the biases of AI, which will doubtless be controlled by a few corporate behemoths.

Yeah, I worry about that too (science teacher). But at the same time, it’s possible people had the same concerns about writing (“it’s cheating, things should be memorised and passed on orally!”), the calculator, etc. In reality these things probably led to leaps in intelligence.

tbf as a history teacher you probably have a better idea about historical perception of technological change than I do.

I think your final point is right: using the new tools effectively obviously has massive potential to enhance human intelligence and capabilities. But it takes a lot of learning to get to that point. To evaluate information, to prompt LLMs, and to identify the “good” and the “bad” that LLMs create, you have to have a very solid base of things like factual knowledge and vocabulary. If we outsource the learning of these things to machines, then we simply won’t have the skills to do this.

It’s like the “newspeak” in 1984, except it won’t be a state actor stripping down vocabulary to deliberately oppress; it will be our own degradation of our own minds. We need to make sure we continue to think and produce things ourselves if we want to remain capable of using LLMs and AI effectively. Otherwise someone else will, and we won’t know it’s happening.

Fascinating. What is Pi? You suggested downloading it to try chatting with it. I was terrified and fascinated at the same time. I googled it and came up with a cryptocurrency option and a gaming option. I don’t think that was what you were talking about...

Excellent article. I totally agree with you.

We work with lots of people who prefer filling out paper forms. I'm wondering whether, with the visual capabilities, we will be able to offer that as an option, along with the option for them to voice their answers.
