As a professor, I caught three students who turned in assignments written by ChatGPT. I caught them because I required specific information/examples from a book that GPT had not been trained on, so there was lots of made up information. With Bing Chat connected to the internet and Claude 2 able to analyze digital files, catching students to stop them from using these tools is going to be nearly impossible. I’ve decided to follow your lead and let students use AI on out of classroom assignments. I’ll hold them responsible for the information on in class exams and try to help them figure out what AI is good at and isn’t good at.
Thank you! Again. This is a great article and I am sharing it with an AI Nerds group I belong to. They always give me shit when I mention one of your articles. One said, "Do you read anyone else?" And I said, "I do but I'm not sure why. Ethan is the best." Ha!
Keep up the calm, mindful approaches this world so desperately needs.
Ethan, I fear you are missing one of the best pieces of advice you can give to educators and corporate executives. With ~20% of people in the US having tried AI, and approximately 50% of that group using it less than once a week, the barrier to adoption is a lack of basic education. I find that when I invest 8 hours of basic AI education over 3 days on how to use it to get things done and how to prompt better, the retention and use curve goes way up, and the users then find their own practical ways to apply the technology. What is missing is just a small investment in employees and students, and the return will be massive.
Generally good, but a few corporate standards make sense. 1) Confidential company data goes only into systems that protect it; 2) nobody uses AI output in a critical activity (such as reporting to a customer) without checking the work.
Anyone else have something that should be a minimum standard?
We're in a moment that's going to divide people. The ship is sinking, land is a mile away. Who's leaping and swimming without hesitation, who's clinging to the ship? It's a very human question.
I have a different take on the points about corporate use of LLMs. Yes, GPT-4 is probably still the best one available, even after seemingly getting nerfed. However, context from within the company is also critical, whether it's internal processes, the code base, information on customers, or what have you. Without an enterprise LLM instance (which may use something like GPT-4 or Bard), employees either have to try to distill all of the relevant information into the prompt, which is a lot of work, or they get back suboptimal results. The LLM should have access to all of the useful information in the company by default. This isn't necessarily for fine tuning, which is technically difficult and often iffy in terms of results. Instead, it's the knowledge plus retrieval use case.
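The "knowledge plus retrieval" pattern described here can be sketched in a few lines. This toy version uses bag-of-words cosine similarity as a stand-in for a real embedding model; the documents, queries, and function names are all illustrative, not from any particular product:

```python
import math
import re
from collections import Counter

def embed(text):
    # Toy "embedding": a bag-of-words term-frequency vector.
    # A real system would call an embedding model here instead.
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, documents, k=2):
    # Rank internal documents by similarity to the query; return the top k.
    q = embed(query)
    return sorted(documents, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

def build_prompt(query, documents):
    # Prepend the retrieved context so the model answers with company
    # knowledge, instead of the employee pasting it all in by hand.
    context = "\n".join(retrieve(query, documents))
    return f"Context:\n{context}\n\nQuestion: {query}"

docs = [
    "Refund policy: customers may return items within 30 days of purchase.",
    "Shipping policy: orders ship within 2 business days.",
    "Holiday schedule: the office closes in late December.",
]
print(build_prompt("What is the refund policy for returns?", docs))
```

The point of the sketch is the shape of the workflow, not the similarity function: the company's knowledge is indexed once, and every prompt is automatically enriched with the most relevant pieces before it ever reaches the model.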
Having a company-wide LLM, like Stripe and Jane Street do, is also a huge benefit because it allows employees to share and iterate on prompts and other knowledge around using LLMs for work. There's going to be a huge amount of innovation on evolving workflows to take advantage of LLMs, and that's going to happen a lot slower at companies that don't have an internal platform for it.
It's unclear whether open source versions will really catch up to the leading commercial models. But when we hit GPT-5 levels, there's a good chance that performance will be overkill for many tasks. Let's say the open source/internal equivalent will be a generation behind at GPT-4 levels. At that point, writing a customer email with a less powerful reasoning engine but with access to every other customer email the company has written, and the outcomes of those emails, will be superior to using a state of the art AI without access to the right context.
Great summary of the basic AI questions! Mankind continually tries to control, but an evolutionary path and force is the true energy and wisdom. AI will go on its path, as mankind is going on its path. Along the way, we restrict, control, seize, predict, fear and so on. At the end, the true evolutionary path and force that connects us all will remain as it always has. And on the distant horizon we all will experience who we really are.
Learning how to use the tools is as important as having the tool itself. I feel like we are going to have people who will only use AI for the most basic tasks while others are blowing our minds with it. Very similar to smartphones, actually. How many people do you know who only use one to make calls, text, and scroll through social media? They have one of the most useful devices that mankind has ever been able to fit in a pocket, and they are not even using a measurable percentage of its true capabilities. It's very likely a generational thing. People at the end of their careers are less interested in ways to make their jobs easier than people just starting out. Sadly, it is people who should be at the end of their careers who make a lot of the decisions in organizations and schools. Give it another ten years and the old guard will finally have to get out of the way for GenX to step in and start leveraging technology better instead of being afraid of it. The pandemic pushed a lot of them out already because they were so bad at grasping the concept of remote work and Zoom. I think AI may push out most of the rest due to the pressure they feel to adopt it and their inability to even comprehend it.
The sycophantic news media is so desperate for clicks they don't bother to do their due diligence to determine whether a developer can manifest intelligent life using only algebra.
It is wildly offensive to a real engineer to pretend another intelligence is in the room sequencing the algebra I write.
There is not; I am alone. The output is mine, not the calculator's.
Affirming that an inanimate object has anthropomorphic traits is sufficient evidence of mental illness to land you in an insane asylum for the rest of your life.
I agree that AI, particularly generative AI like ChatGPT, has the potential to bring significant changes to various aspects of work and life. Ignoring or banning AI won't make it go away, as individuals will find ways to utilize it secretly. Centralizing AI within organizations may not be the most effective approach, as current AI implementations often lack the transformative power and creativity found in individual use cases.
To fully harness the potential of AI, it's important to democratize control over AI and empower workers to innovate. Radical incentives can encourage knowledge sharing, while user-to-user innovation should be encouraged through prompt libraries and open tools. Companies should not solely rely on external providers or existing R&D groups to dictate AI use cases; instead, they should dive in responsibly and discover the best applications for their specific needs.
In education, AI can revolutionize tutoring and improve outcomes for students. Rather than trying to turn back the clock, we should envision and embrace a future where AI is integrated into teaching and learning processes. This can help democratize access to education and cater to students of all abilities.
It's essential to acknowledge that AI brings genuine and widespread disruption. As individuals and societies, we have the agency to determine how AI is used and when. Recognizing and preparing for the rising tide of AI is crucial in making informed decisions about its implementation.
Again, thank you Ethan for providing the most current information on AI out there; at least as it applies to me, an instructor, I feel like I am in the "know" reading your blog. What is everyone recommending to recreate their syllabus with? I've got last semester's, but I need to create it for Fall with fall dates. ChatGPT? Thanks all!!
Correct me if I'm wrong, but aren't the assertions made in the link from your third footnote not really accurate? Specifically the compositing stuff with regards to Midjourney? Lorelei Shannon's "taking from other artists" nonsense... https://adventuregamehotspot.com/2023/07/21/echoes-of-somewhere-how-a-solo-developers-game-takes-center-stage-in-the-ai-controversy/
I wish my managers would read this
Everyone is talking about AI regulation, but no one is being specific.
Thanks for writing these. I read each one.
Total disinformation propaganda.
Ethan,
Is there a hype cycle here?
Is there something we can do ... to differentiate hype from reality?
Thanks.