That Lem test is sweet. (And kudos for crediting Michael Kandel.)
We would do well to also remember a more cautionary tale in Lem's Cyberiad.
When Trurl invents The Machine that Can Make Anything Starting with N, it passes a few simple tests: making noodles, nymphs, etc. But Klapaucius challenges it to make Nothing. And it does - slowly winking out all the elements of existence - until Klapaucius begs it to stop. The Machine stops, but cannot undo most of this destruction (it can restore only things starting with N). The two inventors look at this new world - shorn of its beautiful plusters and worches - now riddled everywhere with vast nothingness.
With no little shame, the two realize that this is the mostly hollowed-out world they are leaving to future generations. "Maybe," Klapaucius groans, "they won't notice."
I'll take a stab at it (without AI help)
Silky strands, swiftly snipped
silently sorrowful
suboptimal slips
Sundered surface,
sallowed skin
spherical savannah--
salon sculpting sin
(Doesn't follow the specification of ABABCC rhyme: you have ABACDED.)
"bias???" "I almost always get a female nursing school leader." That's NOT BIAS. I wish people would stop with this BIAS nonsense. Approximately 87-90% of nursing school graduates are women, so the fact that it generates a woman is statistically accurate. Stop perpetuating this narrative.
Agreed. Although I think there is an interesting issue in AI training/control hidden here.
There are many times when what people want from an AI is not a true representation of reality, but some sort of distorted viewpoint that normal humans are trained to give each other. The desired distortion can be political (such as avoiding criticism of our own country). It can be cultural (such as deliberately portraying untrue diversity info). It can be about safety (such as refusing to provide weapon-making instructions). But in any event, the AI has to be explicitly trained to distort the factual truth on a case-by-case basis.
I think it's particularly funny when we run across truths that we are forbidden to speak out loud and have to euphemize. Imagine trying to train the AI to make 50% of all pictures of sanitation workers be female. ;)
Thank you for this statement! That restores my faith in residual intelligence in smaller parts of the human race.
For language models, most innovation since the GPT-4 release has been in making these models more affordable and faster. This is probably my biased viewpoint, but language models have been the first thing organizations try to embrace in their workflows - before the image and video generation models - so making these models faster and cheaper seems like the best way to increase adoption by organizations.
As for text generation quality, we are limited by quality data and the scale at which we'd need to train models with the current architecture. Unless research comes up with new LLM architectural inventions, these limitations mean the quality improvements will be minimal, come at a huge cost, or both.
Great summary of our current state. These are all important points that I haven't seen clearly spelled out elsewhere:
>"It is worth noting that, compared to the other areas of AI development, LLMs have seemed a bit stuck at GPT-4 level since 2023. Even though GPT-4 has been exceeded by other models, including GPT-4o and Claude 3.5, there has been no giant leaps in ability since GPT-4. The AI companies have been hinting that this will change in the future, so we learn more soon."
And the major, and continuing, advances in generating images, video, and music.
A few comments:
1) While music generation is indeed impressive, I've been quite disappointed by the UX/UI of the two major players, Suno and Udio. I feel like much more can be done to make AI-generated music more accessible and mainstream.
2) Re text generation, while there haven't been major advances in capabilities, there have been major advances in making the models far smaller and cheaper.
3) Open source has mostly caught up, e.g. Llama (something you've mentioned in previous posts).
Wow! Stunning demonstration
that Suno piece is scary good. what a bop!
It's kinda terrifying how fast these models improve. mayhaps at an exponential rate of learning now
Pleasure is not the same as intelligence - letting billions of AGI brains do the work of non-existent human beings. The intelligence of the masses limits the imagination (a fragmented small world is the highest intelligence), and if the creation dies [can we keep seeing sequels?]... The living, the dead, the super-AI: pleasure has a time limit. And as with our understanding of "categorization" - the problem of defining clear boundaries under conditions of uncertainty (the Sorites paradox: copyright) - the divergent create meaning, the living add new bricks and mortar and perspective, and the interesting transcends meaning beyond the limits of what words can portray. Where is the new fun in the flickering story?
Excellent summary. Something to be said also about advances in video, those are on a whole other level too.
Something else I’m thinking about is the famous quote by William Gibson - “The future is already here - it’s just not evenly distributed”. In that sense I wonder what has changed in those 21 months in terms of usage by a typical person or typical worker. Feels to me like we started in November 2022 with “wow, cool tool for funny poems” and for the average person progressed to (at best) summaries of PDFs. Somewhat surprising how little knowledge there is among people of what AI can do for them and how little that knowledge has progressed in the 21 months. This is based on my own observations and on occasional data / surveys of usage, so I might be wrong.
It does lead to an interesting hypothesis though - is AI making the productive much more productive, while the unproductive ones are staying where they were? Should lead to an even bigger gap in performance between top 10 and bottom 10 percentile in each company.
Mollick talks about this in his book, if you haven’t yet read it. Recommended.
We made massive leaps with 3.0, 3.5 and 4.0 - but now it seems we’re rounding things out, adding colour and depth. Will 5.0 be that next great leap, or is that next great leap so big that 5.0 is still like the end of the rainbow?
It's incredible to see how far AI has come since 2022. As a teacher, I've integrated AI into my daily routines in ways I never imagined possible back then. From automating lesson planning to providing real-time feedback on student writing, the advancements have been a game-changer in the classroom. Watching AI evolve in such a short time makes me wonder what tools I’ll be using in another 21 months. Anyone else using AI in unexpected ways? How do you see it shaping your work?
We’ve given the equivalent of digital nuclear power to everyone with an internet connection. Now we can only wait to see who the assholes are that will blow it all up.
Most AI is great, most people are not.
Ah, if this had only come out three days earlier! On Friday afternoon I gave a presentation titled "Looking Back to See Ahead" for a small AI in education conference. (And yes, drawing heavily from your book, Ethan, as well as plugging it and your blog.) But... I'm giving essentially the same talk (updated, of course) at another conference in November, so I'll be sure to include these examples.
A tech-savvy otter!
You are right, the development has been breathtaking, and yet it feels like everything has slowed down a bit. Still, I'm very sure that this is just the beginning and the impact on business will be huge.
Sure makes me want to think hard about how I can push myself to leverage it all the harder and smarter.