Yes the audio is a marvel.
But it is chilling to hear AI voices - faking folksy humanity with artificial hesitations, gratuitous repetitions, and sexy giggle-snorts - as they dismiss the threat to job security as if it were the silly fear of ignorant reactionaries.
Clearly AI tools cleave the creative workforce into a managerial class that truly will be superpowered and a far larger crowd of “talent” - the illustrators, copywriters, composers, coders and, yes, podcast commentators who will be made largely redundant in everyday work.
Let's face this inevitable reality with clarity and humanity, not wishful whitewash from bots engineered to mimic even the flaws of the human workers they replace.
The artificial voices and cheerfulness are indeed a little chilling. Where I'm even more concerned is how propagandists will be able to export their ideologies.
Imagine Steve Bannon's podcast being translated into dozens of languages, perhaps even dialects like a British cockney accent. Or Putin bypassing the payments to American influencers that just got exposed and just having his RT videos translated in near real time.
The challenge is that we're not training our populations to consume (and not consume) content smartly and wisely. It's basically what every company's IT team does: turn on a new feature without any guidance and say, "Here you go." That's what's happening with AI. Civilizations have always reached peaks where they implode, and we're on that trajectory. I wish world leaders and AI tech leaders would pause this innovation and give us all time to process it, and to enjoy what we've already developed.
The same could be said of every innovation that increased work productivity. E.g., the steam shovel was supposed to lead to masses of unemployed construction workers while a few foremen steered the massive machines. Yet time and time again, productivity gains just led to higher ambitions for society - e.g., larger construction projects.
Even if machines eventually become capable of doing all economically valuable work, it'll be a long process to get from here to there. Throughout that journey there will be numerous opportunities for individuals, firms, and institutions to capitalize on the new productivity. So we should focus on the opportunities in front of us and, at most, stockpile a diversified portfolio of assets in case labor eventually does become worthless.
Simon Willison has a great post about the magic behind NotebookLM, the tool used to generate the podcast, and the term disfluency that describes the added emotion to the language. https://simonwillison.net/2024/Sep/29/notebooklm-audio-overview/
One further possible factor: AI tools can often help an individual most with particularly bureaucratic organisational tasks - ones that probably wouldn't exist if those processes had been reconsidered at any point in the last ten years: legal, HR, reporting, and so on. Individuals often find these tasks boring, and their usefulness is not always transparent, so organisations have had to press quite hard to ensure they are completed - and in that culture you're not going to admit that you're shortcutting these 'important' organisational tasks with AI.
The Unix core was written in assembler by a single programmer over a summer, followed a few years later by his collaborator's rewrite in C. Their employer was Bell Labs, which had enough money to fund two guys trying their hand at something not directed from the top. Don't expect upper management to know or understand the potential for AI - until a competitor announces a product or service that puts the company in jeopardy.
BTW - I used NotebookLM yesterday to create a podcast from a technical slide deck. The results were even more impressive than using ChatGPT to manipulate million-cell spreadsheets. The most important feature of NotebookLM is that it can translate an involved tech discussion into one that can easily be understood by almost anyone. The back-and-forth audio banter is easy to picture in your mind. Absolutely unforgettable. Thanks for your work.
Bell Labs had more than enough to fund two guys. It also funded Kernighan and Ritchie, who created that C language you mention; Shockley's team, which invented the transistor that Unix runs on; and hundreds more. Let's remember that this deep investment in innovation was not a product of free-enterprise capitalism: Ma Bell was a protected monopoly, and phone calls were mighty expensive.
Somewhere I still have my original K&R C Programming Language book. My point was that K&R could take time from their other work, with permission, while employed by the Labs. Ma Bell did technology a huge disservice in how it ran its monopoly: installing my business telephone service in the '70s was a huge upfront expense, even though the Bell tech just reused previously installed gear. We have several monopolies in this country, and sometimes it takes a lawsuit to dislodge them. But today, open source code is the biggest obstacle to monopolies.
Out of curiosity, what does your point about ChatGPT and spreadsheets refer to? Has there been a recent milestone in that front?
A year ago I discovered how to submit huge spreadsheets and manipulate them with simple English statements. Yesterday OpenAI advanced this process by introducing "canvas" as another choice in its model pulldown - a very impressive upgrade for writing and coding. I've been a technology player for the past 60 years, but the evolving speed of AI is beyond anything previous. Really, it's kind of scary - especially when a young marketing person can provide written English instructions to create a full-stack application. Talk about an upheaval in the programming ranks. Forget computer science; welcome computer engineering. The tech world has morphed from detailed code writing to "create a program to solve .... problem".
Then there is Paradigm AI (paradigmai.com), which recently graduated from Y Combinator. Their product uses the spreadsheet metaphor as a workspace to use LLMs for performing massively parallel research. Check out @tryparadigm and @annarmonaco on X. Oh, and I’m impressed with your 60 years of experience in the IT space, which beats my 54 years. I do remember having the K&R “bible” on my desk for years during software development. Those were the days.
I'm an HVAC engineer in Indiana applying LLM technology to understand, manage, and run HVAC systems. My 1960 Fortran card decks would take up large boxes, and so would the 40 sheets of paper the line printer kicked out to tell me I'd made a typo on card 81. My grandson will undoubtedly be a better programmer - oh, wait - there won't be "programmers" before long. I submitted a 60-slide Google Slides deck to NotebookLM yesterday. The resulting audio was stunning. Now I know how to communicate better with non-techy people (like school boards).
When I compare the many hours I spent dealing with programming details to the 3 minutes NotebookLM took to create a fantastic audio, I'm glad I'm still playing in this game.
Times are changing at an incredible pace. Actually, 54 years ago to this day, I took my first Fortran IV programming course at the University of Bochum here in Germany. Unfortunately, my parents threw out all my card decks while I continued my computer science studies at the University of Kansas, starting in 1974. When I got my Ph.D., I interviewed at IU in Bloomington, as I knew several of the professors there, who also did research in programming language design and implementation. My wife hails from Illinois, so there are lots of ties to the Midwest. After 36 years in the US I returned to Germany in 2010. We are turning a bit into a technological backwater here, but with the global digital community it’s no problem to keep up with the latest developments in generative AI. — Your HVAC endeavor sounds interesting; what’s in scope, if I may ask? I’m now working as an independent consultant and have spent a lot of time interacting with LLMs, starting with GPT-2 in early 2019. These days I’m trying to understand what the OpenAI o1 model can and cannot (yet) do. And play with 4o advanced voice mode, the new canvas feature, image generators like Midjourney and FLUX.1, and, of course, NotebookLM. BTW, if you give a good resume to NotebookLM, it generates an excellent 120- to 150-second podcast. I tried it for an acquaintance on a lark, and was pleasantly surprised.
Can we communicate separately? michael.r.lavelle@gmail.com
As soon as these productivity gains are implemented, organizations will begin eliminating jobs. I think people see this, and so are less inclined to help speed up that process.
The notebook podcast was surprisingly good, but I found the article a lot better still. Writing style, specific details, tone: they all add a lot to the underlying information.
I agree. I have converted numerous publications, including the recent Draghi report on European competitiveness (or lack thereof) into NotebookLM podcasts. While listening to these podcasts on a walk gives you a reasonably good overview of the subject matter, too much gets lost in the banter, for my taste, when the underlying source materials get lengthy. On the other hand, I found that making a podcast out of a very brief document can be very insightful. Try it on a good resume and see what happens.
Unfortunately, the NotebookLM podcast is even better than your essay. It puts so much emotion into explaining the problem and spelling out the steps that need to be taken. It even plays up the “secret cyborgs” because it knows audiences will love it. The days of people reading your essays might be numbered. It’s just source material now. 😉
Why not both? The dialogue in the podcast is great for keeping you engaged, but that seems to be mostly solving a problem inherent to the format.
Is there perhaps a way in NotebookLM to dial down the level of “chattiness”? This is just a bit too much for me — knowing it's artificial certainly has an effect there, but in part, it also may be due to my verbally more blunt (non-American) cultural background.
I think there is definitely a place for both, as well as a useful video component that will match the ease of NotebookLM. People have different learning and consumption styles, which calls for different formats, not to mention those of us soaking up AI content in the middle of the night who can't play audio as we don't want to awaken others. :-)
I was reminded recently of how much the current #AI-driven transformation mirrors the BYOD transformation of old. Those of us with some grey in our hair remember how BYOD went: businesses had strongly worded policies against individual use of non-approved technology (phones, laptops) because they took comfort in the security profile of a uniform tech environment; employees, though, felt increasingly limited by the tech they were allowed to use on the job and started sneaking in their own devices at the periphery. Ultimately BYOD was driven not by top-down executive fiat but by the simple reality that employee use of their iPhones was too pervasive to roll back; businesses made their peace with tech diversity rather than embracing it. I would like to think that business leaders are smarter today and willing to embrace more flexible strategies where things like AI are concerned. My experience, though, suggests that many companies will fight for control over what their people are doing rather than empowering those people to discover new and profitable opportunities.
I wonder whether many people in orgs that claim they aren't seeing an uplift in productivity from AI use yet work jobs like mine: I'm a support analyst in the IT department of a medical SaaS company. My work comes from a queue of customer-reported issues, which I solve by writing and deploying scripts. I can say with authority that AI boosts my productivity. But the queue is the cap on that productivity - I couldn't go out and get more work if I wanted to, so I mostly enjoy an "employee surplus" of free time to tinker and experiment with AI frameworks. (My hope is that this will pay off for the company - and not just me - some day, when we get around to acting like Ethan recommends.)
Many other people work jobs with capped productivity, either because they are dependent on a queue for jobs to do, or because they are bottlenecked by someone else (or multiple someones) needing to give feedback before their work can go on to another stage. They can't make more work/generate more value independently. Because AI is not being deployed enterprise-wide - and processes aren't being reexamined and reformed - the productivity accrues entirely to the individual user, not the org. I expect this will change, but I am certain it is a bottleneck on present AI-driven increases in efficiency and productivity.
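To make the queue cap concrete, here's a toy sketch (all the numbers are invented): halving the time per ticket doesn't raise throughput when arrivals are the limit - the gain shows up as individual free time, not as extra output for the org.

```python
# Toy model: when work comes from a fixed queue, daily throughput is
# min(tickets arriving, tickets you have capacity to finish).

def daily_throughput(arrivals_per_day, hours_per_ticket, hours_available=8):
    capacity = hours_available / hours_per_ticket
    return min(arrivals_per_day, capacity)

def surplus_hours(arrivals_per_day, hours_per_ticket, hours_available=8):
    # Whatever time isn't spent clearing the queue is "employee surplus".
    done = daily_throughput(arrivals_per_day, hours_per_ticket, hours_available)
    return hours_available - done * hours_per_ticket

# Before AI: 2 hours per ticket. With AI: 1 hour per ticket.
# Throughput stays at 3/day either way, because only 3 tickets arrive.
print(daily_throughput(3, 2), surplus_hours(3, 2))  # 3 2
print(daily_throughput(3, 1), surplus_hours(3, 1))  # 3 5
```

The org sees the same 3 tickets closed per day; the individual's surplus jumps from 2 hours to 5. That surplus only becomes organizational value once processes are redesigned to feed more work in.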
I HIGHLY recommend that you watch the YouTube video from the Harvard B-School conference, "Investing in the Future of AI." The moderator and three panelists put flesh on the bones of the framework that Ethan has laid out - some invaluable tips from them and some great insights to follow up on.
https://www.youtube.com/watch?v=t6xc-_m47_0&t=393s
NotebookLM was my gateway AI! In my program, we have a 1-credit writing course meant to be creative, playful, even indulgent. This post inspires me to spend an hour a week with students solely exploring AI LLMs to accomplish writing tasks: help each other write prompts, learn how to iterate toward an outcome. If I teach it, I will make time for it.
👍🏽 How would this translate to educational institutions, I wonder. We have ZERO resources (time, money, personnel) compared to a corporation. We don't do R&D. And we loathe trial & error - which is essentially what we have had to do with genAI till now. Hm.
To use one tool as an example: NotebookLM is a good, simple way to introduce a new topic to those with zero knowledge. It's excellent at distilling what's novel, weird, or quirky in the content you provide, so I suggest assigning a 5-minute podcast to students before starting a particularly dense subject, and including some fun tidbits.
(A few tips: the filenames you use strongly influence the podcast focus, and listing the priorities or structure of the podcast near the top of the content docs you provide ensures that it hits the main points you want it to.)
Say more about what you meant by “listing the priorities or structure of the podcast near the top of the content docs you provide ensures that it hits the main points you want it to” if you don’t mind. NotebookLM creates its podcasts by reviewing the source materials, drafting an outline, writing a script (incl what should be emphasized or expressed with skepticism), then adding the banter. As such, I’m curious what you are doing within the sources themselves to influence that further.
If the default podcast it generates from the source materials I give it doesn't address my intended audience's POV (e.g. if it thinks the audience is someone trying to avoid sneaky gotchas when really they're open-minded beginners), I'll change the inputs so that what goes into the black box is shaped a bit.
It accepts TXT files, so I can add a TXT file as one of the source materials in which I outline the structure of the podcast. Regenerating the podcast again after adding this can significantly change the output.
Further, I can populate these structured sections with concrete examples and data to prevent NotebookLM from hallucinating for these specific things. I've noticed that it readily latches on to human stories I provide.
Finally, I can add some bullet points near the top of the TXT file saying things like "It is very important to focus on [aspect W] of [subject X] so that listeners understand the difference between [position Y] and [position Z]."
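Putting those pieces together, a steering TXT source might look something like this (every bracketed item is a placeholder, not real content):

```text
PODCAST STRUCTURE AND PRIORITIES

- It is very important to focus on [aspect W] of [subject X] so that
  listeners understand the difference between [position Y] and [position Z].
- Audience: open-minded beginners, not skeptics hunting for gotchas.

1. Why [subject X] matters (use the human story below)
2. [Position Y] vs. [position Z]
3. Practical next steps

CONCRETE EXAMPLES AND DATA
- [A human story: a named person, a specific problem, its resolution]
- [Specific figures the hosts should quote rather than improvise]
```

Uploaded alongside the real sources (and with a deliberate filename), a file like this tends to pull the generated outline toward your priorities, though the output still varies between regenerations.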
Very cool. Thanks for sharing that
I don't mind testing a beta tool like this, but some educators are hesitant to use something that tends to produce errors. At the moment, I feel NotebookLM is not ready for prime time in education. We early adopters are playing around with it, but it needs to be 100% correct. Thanks for the tips on file names.
Here's a real case study of 'Agentic AI' achieving something between Level 3 and 5, today: a company has replaced its entire operations team with bots, who collaborate with each other over Slack to increase company profitability 8x: https://www.linkedin.com/pulse/agentic-ai-new-frontier-thats-already-here-simon-torrance-hihme
Name that insurance company or it didn't happen. Agentic AI is nowhere near that advanced and the tools don't exist for one person to easily retrain them. As a thought experiment I don't mind it, but please don't use it as a case study if it's embellished.
Super interesting article! I'm lucky enough to work with people that openly encourage the adoption of GenAi tools, but I see the struggle also in non-professional contexts to avoid being the secret cyborg. Thanks!
This post validates what I've been preaching all year! Some people are waiting for the org to tell them what to do, but others are experimenting. As a mid-level lead, I find it an absolute delight to connect with the people who are experimenting, and I'm pleased that our org culture enables them to feel completely comfortable sharing. The only thing holding them back is questions about what is and isn't permissible within which systems, and it's been slow going getting answers from the top.
One solution is to make human-AI collaboration public for organisations with CAIR (Circle AI Resource). More on CAIR here: https://www.newssocial.co.uk/cair.html
This is exactly what I'm seeing when talking to organizations as well.
Great summary! Thanks!