36 Comments

The reason the interstellar travellers could be lapped by later voyagers is that the already-departed travellers could not adopt the new propulsion technology developed on Earth. With AI, however, when a new model is released, projects currently in progress can adopt it, bringing with them their understanding of the problem domain, their curated datasets, their tailored evaluations, etc.

The lesson here may be more "be ready to adopt progress as it arrives" than "wait".


This was a good insight. However, I hope people don't confuse "wait" with "don't". Even if a task is likely to be achievable by an AI, doing the learning phase of a project is still valuable so we can evaluate the results.


This is exactly right. Experimentation is what allows you to learn what the tools can and cannot do, and it also improves your ability to interact with them. Prompt engineering is a good example -- those of us actively using the tools prompt better than someone trying them for the first time.


imminent, not "immanent"

Author:

Good catch, but I don't sweat the spelling errors as much post-AI. It's how you know my writing is still done by me.


Yes! Love that. When I see errors now I punch the air and read on with renewed enthusiasm.

I'm talking to a human!


I've noticed a couple of typos in Ethan's blog and could literally not care less.


Yes, it's a spelling error that creates a contextual error... I noticed it too. I thought Ethan had used the word purposely until I searched its definition. :) But I learned a new word!


As Francois Candelon says, there are two ways to implement genAI: too early or too late :-)


or the (possibly Connor Leahy) variant...

"There are only two times to react to an exponential: Too early, or too late."


Ethan, this is a good one.

I have performed my own Wait Calculation on elbow and knee surgery, and I'm honestly surprised more folks don't do that as a matter of course.


Super interesting idea! AI surgeons may get exponentially better, but then living systems (like your knees lol) often deteriorate exponentially too, right? Both accelerate over time, so at what point does surgery make the most sense?


I ask myself this question often. I've got some pretty terrific accumulated wear and tear on my joints from 3 decades of grappling (BJJ, judo, and some wrestling earlier on), so I get a lot of opportunities to consider surgery and/or various forms of treatment.

I really don't know the answer, and I'm pretty convinced most doctors also don't, mainly because they understand linear thinking really, really well.


I started working on a novel a year and a half ago. Six months ago, I shifted my strategy: Instead of writing the actual novel, I'm spending my time working out the structural elements (a detailed plot summary, the theme and subthemes and relationship of the characters to the theme), writing background world-building memos and character descriptions, etc.

My theory is that Claude and GPT-4 can already write passably good scenes given the appropriate context, and I bet GPT-4.5 will be able to write very good scenes. But right now it seems like we're a lot farther away from an AI that can make 100,000 words of complex world-building and plot hang together in creative and theme-driven ways. (And that's also the part I think I'm relatively strong at.) So I'm working on that part, to create the background docs to feed into AI to then draft the scenes.

If there's nothing better than GPT-4 when I'm ready to draft scenes, that's what I'll use. And if there's something better, well, amazing! And if there's something that's better *on its own* at writing an entire complex sci-fi novel from scratch than it can with my background work + my guidance and feedback, then I guess we'll all be out of jobs and the entire publishing industry will be in danger anyway.


Serious question: Then why should I buy your book?


There are several product ideas I've had in the past that now seem more attainable with advances in artificial intelligence. However, I think if you have an idea you're sufficiently motivated to pursue, you should pursue it. Don't fall into the trap of inaction.


This is yet another reason why generative AI turns conventional AI wisdom on its head. With the machine-learning models of the last 15 years, there was a strong first-mover advantage. The quicker you deploy, the quicker you get feedback and more data, and the quicker you can retrain the model to be better. Rinse and repeat.

With generative AI, deploying early doesn't have nearly the same advantages. Sure, if you collect user behaviour from your generative AI tool, you can use it to fine-tune the model. But any improvements from this process pale in comparison to simply waiting for a better model to come out.


I agree, but if you are not the first mover and plan to enter the market, then the benchmarks your product needs to reach can be pretty significant.

For example, Google Gemini Pro and whatever Apple comes out with have to either meet or exceed GPT-4, without the benefit of any real-world deployment or customer experience.


Ah, so I think we're talking about different things. You're talking about the foundation models, where I 100% agree the conventional rules do apply. You're right: when it comes to foundation models, OpenAI is reaping the rewards of a first-mover advantage.

Where I think the wisdom gets turned on its head is in the tools that *use* the foundation models. Like all these GPT-4 wrappers.


I see, glad you clarified that! I agree that the GPT-4 wrappers are not worth it, since OpenAI will keep releasing incrementally better models over time and will probably try to integrate the most popular wrapper features into the latest model.


Wait calculations were being done for really long-running, computationally intensive tasks back in the mid-1980s. It seemed like a well-established concept even then.
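For concreteness, here is a minimal sketch of that classic compute version, under the simplifying assumption that a job takes d0 years on today's hardware and hardware speed doubles every fixed number of years (the numbers below are purely illustrative, not from any particular 1980s paper):

```python
import math

def best_start(d0: float, doubling: float) -> float:
    """Compute wait calculation (a sketch under the stated assumptions).

    A job started at time t finishes at t + d0 / 2**(t / doubling),
    since hardware speed doubles every `doubling` years. Setting the
    derivative of that finish time to zero gives the closed form below.
    """
    t = doubling * math.log2(d0 * math.log(2) / doubling)
    return max(t, 0.0)  # short jobs aren't worth waiting for: start now

# Illustrative numbers: a 10-year computation, 18-month doubling time.
t = best_start(10.0, 1.5)
finish = t + 10.0 / 2 ** (t / 1.5)
print(f"start after {t:.1f} years, finish after {finish:.1f} years (vs 10.0 if started now)")
```

Under these assumptions, waiting only pays when the job is long relative to the doubling time (whenever d0 > doubling / ln 2), which is essentially the same trade-off the post describes for AI projects.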


"Should you wait?"

Never. Because every minute you spend working towards achieving a goal, big or small, will be a learning experience.

Yes, AI will make your use of *tools and processes* obsolete, sooner or later. And it will no doubt change and reframe your work and thinking.

Whatever you do before AI one-ups you is NOT a loss. AI can never recreate the personal, internal, and unique life lessons & expertise that your efforts will have yielded.


What about opportunity cost? Why brute-force a solution to something that will be much simpler a year from now, instead of spending that time on something that won't be?


Sure, but time spent learning things you'll never use is at worst a waste of time, and at best an inefficient use of time.

You're right that anything can be a learning experience, but ideally we want to learn the things that are most important, right?


Great food for thought, and I appreciated you presenting both sides of the coin regarding the decision to delay current work and wait for AI development.

Any use cases you can think of that would benefit from waiting for GPT-5 (things GPT-4 can do, but only via a workaround)?

I think most people who are interested in AI at the moment would find it very hard to play the waiting game with the current AI models, due to action bias (our automatic tendency to take action, even when the better choice may be to hold off on doing anything at all). The GPTs feature makes it so easy to experiment that I find it makes waiting even harder.

I asked my personal GPT (Dan 2.0, https://chat.openai.com/g/g-56DoRUOE4-dan-2-0-gpt) for some personal AI use cases where waiting might make more sense, and it came back with the following:

1. AI-Assisted Personal Development Tools: Advanced AI models could provide personalized tools for self-improvement and mindfulness practices, aligning with your interest in the noble eightfold path and core values like mindfulness and temperance.

2. Data Visualization and Analysis Enhancements: Future AI advancements might offer more sophisticated data visualization and analysis tools, benefiting your work as a data analyst and interest in quantitative data analysis.

3. Interactive AI for Intellectual Discussions: AI capable of deeper, more nuanced conversations could enhance your desire for intellectually stimulating discussions, possibly in areas related to your favorite books and role models.

4. AI in Strategic Thinking and Decision-Making: As AI evolves, it could better assist in applying strategic forecast mental models and pre-mortem thinking, aiding in complex decision-making and expanding your Circle of Competence.

5. AI-Driven Chess and Soccer Analysis: More advanced AI might provide in-depth analysis and strategy development for chess and soccer, aligning with your leisure interests and aiding in performance improvement in these activities.


Will you be sharing the prompts you used for the Innovator and Negotiator? I am working on something similar for Career Counseling, but clearly am lacking some prompting expertise. Thanks


Another really useful piece, Ethan. I really appreciate you sharing these insights and experiences. There is a lot of hype and nonsense flying around about AI, and it's super helpful to have your grounded input.

For me, we seem to keep missing a simple truth in all of this: namely, that anything we create using AI is designed for consumption by other humans (not by other AI). Logically, this means that any and all AI output is designed, evaluated, and delivered on the basis that it serves some human purpose or other. I have seen loads of ChatGPT output that isn't fit for purpose but is posted or used by people who clearly don't know what they are doing, and don't even realise that they don't know what they are doing. ChatGPT can't help with that :-)


Grahame, what is the point you are trying to make in your second point? I am currently using AI to generate responses for other AIs to evaluate, to assess how noisy LLM responses are compared to human ones.


Hi Dan, sorry for the slow reply, and also for a sloppy post. My point is that the quality/utility of the output is determined by the quality of the people using it. Given that the output will be consumed/used by humans, it is human capability that will determine how useful/helpful these tools are, more so than the capability of the tool itself. Maybe.

The deeper point, if I may, is that we are building these tools to serve human purposes (what else would they be for, right?). But as these tools begin to transform our human capabilities, I think we quickly arrive at the question "what are we trying to do?" as an overarching enquiry. Reducing costs, improving productivity, making money, creating cool new tools, etc. is all fine, but it is hardly transformative. And yet it seems to me that we are busily creating AI capabilities that have the capacity to reimagine our collective, human purpose. I don't sense that we are even approaching that agenda yet. Perhaps, in the end, we'll ask an Advanced Superintelligence AI to define our human purpose for us. Wouldn't that be fun ;-)


Thanks for the clarification! That explanation was helpful and cleared up my confusion.

"The deeper point, if I may, is that we are building these tools to serve human purposes (what else would they be for, right?)"

The myths that seem to be in the Silicon Valley air, driving this race to the bottom for AGI, don't seem to be based on how AI can be more useful for humans, but more on how we can create something that is more intelligent than humanity.

https://www.vox.com/the-highlight/23779413/silicon-valleys-ai-religion-transhumanism-longtermism-ea

https://www.humanetech.com/podcast/can-myth-teach-us-anything-about-the-race-to-build-artificial-general-intelligence-with-josh-schrei

"Someone asked for engineers at these cutting edge AI research labs why they're rushing to build AGI and they responded in a few ways 1. determinism 2. believe in the idea that biological life will be replaced by digital life and 3. that change is a good thing. the subtle ego factor in being able to talk with an intelligent life form smarter than us and somehow being able to collaborate with them."


I actually think we have hit a bit of a bump in the road. The chat AIs can only do so much. It still takes a creative, hardworking person pushing the AI. I *do* think there are some really creative people building systems for normal people to use. Love this blog.


It also often takes marrying the LLM with a plug-in. Most of the things getting a lot of attention now aren't the foundation models themselves, but the combination of the LLM with another tool, via RAG or tool use -- Perplexity (LLM + indexed web), Data Analysis (GPT-4 + a code interpreter tool), DALL-E 3 (using GPT-4 to better prompt an image generator), etc.
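For anyone curious what the RAG half of that pairing looks like mechanically, here is a minimal, self-contained sketch of the retrieve-then-prompt pattern. The toy corpus, the word-overlap retriever, and the call_llm stub are hypothetical placeholders, not any real product's API:

```python
# Toy retrieval-augmented generation (RAG): fetch relevant text first,
# then stuff it into the prompt so the model answers from that context.

TOY_CORPUS = [
    "Perplexity pairs an LLM with an indexed web search.",
    "Data Analysis pairs GPT-4 with a code interpreter tool.",
    "DALL-E 3 uses GPT-4 to write better prompts for an image generator.",
]

def retrieve(query: str, k: int = 2) -> list[str]:
    """Rank documents by word overlap with the query (a stand-in for a real index)."""
    words = set(query.lower().split())
    ranked = sorted(TOY_CORPUS,
                    key=lambda d: len(words & set(d.lower().split())),
                    reverse=True)
    return ranked[:k]

def call_llm(prompt: str) -> str:
    """Placeholder for a real foundation-model API call."""
    return "[model answer, grounded in the retrieved context]"

def answer(question: str) -> str:
    context = "\n".join(retrieve(question))
    prompt = f"Context:\n{context}\n\nQuestion: {question}\nAnswer using only the context."
    return call_llm(prompt)

print(answer("How does Perplexity combine an LLM with the web?"))
```

Which is also why the earlier point about waiting holds: the value lives in the pairing, so swapping in a better foundation model later only means changing the call_llm step.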


With the rise of ChatGPT, I have become very reluctant to write my next book.

Executive summary: I have fallen into the incentive trap. Damn!
