Discussion about this post

Ethan Mollick

I realized revealing the answer now will ruin the survey, so I'll add it here in a few hours.

Red

I love that closing sentence: "The only thing I know for sure is that the AI you are using today is the worst AI you are ever going to use."

Also, I wanted to address an earlier point about the internet being a finite source of content for AI training, and the idea of using AI-generated content to get around that limit. There's a potential failure mode called model collapse that can occur if LLM output becomes the primary source of the data that subsequent generations of models are trained on. Paper here: https://arxiv.org/abs/2305.17493

The TL;DR version: the probable gets overrepresented, and the improbable (but real) slowly gets erased. Given the probabilistic way these large models work, that makes a lot of sense, but a probable reality and an actual reality are two very different things.
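
A toy way to see the "improbable slowly gets erased" dynamic: fit a simple model to data, sample from it, refit on the samples, and repeat. This is only a minimal sketch in Python (a categorical distribution standing in for a language model, with made-up category counts and sample sizes), not the experimental setup in the linked paper.

```python
import numpy as np

# Toy illustration of model collapse: each "generation" is a maximum-likelihood
# fit to samples drawn from the previous generation's model, i.e. training only
# on synthetic data. Rare-but-real categories that happen to draw zero samples
# get zero estimated probability and can never come back.

rng = np.random.default_rng(0)

num_categories = 50
true_probs = np.ones(num_categories)
true_probs[0] = 200.0            # one very common category
true_probs /= true_probs.sum()   # the other 49 are rare but real

probs = true_probs.copy()
sample_size = 1000

for generation in range(20):
    data = rng.choice(num_categories, size=sample_size, p=probs)
    counts = np.bincount(data, minlength=num_categories)
    probs = counts / counts.sum()          # refit on the synthetic data
    surviving = int((probs > 0).sum())
    print(f"generation {generation:2d}: {surviving} categories still have nonzero mass")
```

Run this and the count of surviving categories shrinks generation by generation: the probable head stays, the improbable tail disappears, and once a category's estimated probability hits zero it never reappears.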

LLMs and LMMs (large multimodal models) are likely to keep improving for quite a while yet, but the trajectory may well not be a steady linear or exponential climb. There will probably be hidden valleys of performance loss that we won't notice until novel architectures resolve them (if we ever notice them at all!).

So I'll close with a sentiment that echoes yours: "The only thing I know for sure is that the AI you are using today is the worst AI you are ever going to use - but the same thing might not be true in the future."

