As a professor, I caught three students who turned in assignments written by ChatGPT. I caught them because I required specific information and examples from a book GPT had not been trained on, so there was a lot of made-up information. With Bing Chat connected to the internet and Claude 2 able to analyze digital files, catching students to stop them from using these tools is going to be nearly impossible. I’ve decided to follow your lead and let students use AI on out-of-class assignments. I’ll hold them responsible for the material on in-class exams and try to help them figure out what AI is and isn’t good at.
Oral exams are the way to go, no? Who cares if they use AI for assignments; the main point is whether they’ve comprehended the material. Are oral exams too time-intensive?
Thank you! Again. This is a great article, and I’m sharing it with an AI Nerds group I belong to. They always give me shit when I mention one of your articles. One said, "Do you read anyone else?" And I said, "I do, but I'm not sure why. Ethan is the best." Ha!
Keep up the calm, mindful approaches this world so desperately needs.
Correct me if I'm wrong, but aren't the assertions made in the link from your third footnote not really accurate? Specifically the compositing stuff with regard to Midjourney, and Lorelei Shannon's "taking from other artists" nonsense... https://adventuregamehotspot.com/2023/07/21/echoes-of-somewhere-how-a-solo-developers-game-takes-center-stage-in-the-ai-controversy/
I think my point of view is similar to yours, but I have a hard time articulating it. Why should we not think of diffusion models like Midjourney as ‘taking from the artists’ they were trained on?
I wish my managers would read this
Everyone is talking about AI regulation, but no one is being specific.
We're in a moment that's going to divide people. The ship is sinking, land is a mile away. Who's leaping and swimming without hesitation, who's clinging to the ship? It's a very human question.
Ethan, I fear you are missing one of the best pieces of advice you can give to educators and corporate executives. With roughly 20% of people in the US having tried AI, and approximately 50% of that group using it less than once a week, the problem with adoption is a lack of basic education. I find that when I invest 8 hours over 3 days in basic AI education, covering how to use it to get things done and how to prompt better, the retention and use curve goes way up, and users then find their own practical ways to apply the technology. What is missing is just a small investment in employees and students, and the return will be massive.
Generally good, but a few corporate standards make sense: 1) confidential company data only goes into systems that protect it; 2) nobody uses AI output in a critical activity (such as reporting to a customer) without checking the work.
Anyone else have something that should be a minimum standard?
Thanks for writing these. I read each one.
I have a different take on the points about corporate use of LLMs. Yes, GPT-4 is probably still the best one available, even after seemingly getting nerfed. However, context from within the company is also critical, whether it's internal processes, the code base, information on customers, or what have you. Without an enterprise LLM instance (which may sit on top of something like GPT-4 or Bard), employees either have to distill all of the relevant information into the prompt themselves, which is a lot of work, or they get back suboptimal results. The LLM should have access to all of the useful information in the company by default. This isn't necessarily about fine-tuning, which is technically difficult and often iffy in terms of results. Instead, it's the knowledge-plus-retrieval use case.
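Concretely, here's a minimal sketch of what I mean by knowledge plus retrieval (often called retrieval-augmented generation): fetch the internal documents most relevant to the employee's query, then assemble them into the prompt. Everything in it is illustrative; the word-overlap "embedding" stands in for a real embedding model and vector store, and the sample documents are hypothetical.

```python
from collections import Counter
import math

# Toy internal knowledge base: in practice, company wikis, code, CRM notes, etc.
DOCUMENTS = [
    "Refund policy: enterprise customers get a full refund within 30 days.",
    "Deploy process: merge to main, CI runs tests, then ship via the release bot.",
    "Customer Acme Corp renewed in June and asked about SSO support.",
]

def embed(text: str) -> Counter:
    """Toy 'embedding': a bag-of-words count. Real systems use a trained embedding model."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse bag-of-words vectors."""
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k documents most similar to the query."""
    q = embed(query)
    return sorted(DOCUMENTS, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

def build_prompt(query: str) -> str:
    """Assemble a prompt that gives the model company context by default."""
    context = "\n".join(f"- {doc}" for doc in retrieve(query))
    return f"Company context:\n{context}\n\nEmployee question: {query}"

print(build_prompt("What is our refund policy for enterprise customers?"))
```

The point is that the retrieval layer, not the model, carries the company context, so you can swap the underlying LLM without touching it.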
Having a company-wide LLM, like Stripe and Jane Street do, is also a huge benefit because it allows employees to share and iterate on prompts and other knowledge around using LLMs for work. There's going to be a huge amount of innovation on evolving workflows to take advantage of LLMs, and that's going to happen a lot slower at companies that don't have an internal platform for it.
It's unclear whether open source versions will really catch up to the leading commercial models. But once we hit GPT-5 levels, there's a good chance that performance will be overkill for many tasks. Say the open source or internal equivalent is a generation behind, at GPT-4 levels. At that point, writing a customer email with a less powerful reasoning engine but with access to every other customer email the company has written, and the outcomes of those emails, will be superior to using a state-of-the-art AI without access to the right context.
What are some of these corporate LLMs? Which companies are offering this?
I've heard Stripe and Jane Street have them, but it's not clear what they implemented specifically. Morgan Stanley was listed as a partner for OpenAI with a use case of enabling internal chat and search for their wealth management materials, if I remember correctly. Various other big companies are probably working on this, either using APIs from OpenAI or Google or using an open source foundation model like Llama 2.
And I bet every large technology consulting company is touting their expertise on LLMs these days and trying to land work with big enterprises to make this happen. I'm sure mileage will vary a great deal on that work.
Has it only been 8 months? As a person who gave up his Ph.D. thirty years ago to become a single parent of two daughters, and paid for it all by becoming a programmer, I can say ChatGPT has given me my intellectual life back: the creative part, where I get to bounce my ideas off something that can respond! Great article!
Organizations are compelled by law to centrally manage their employees' use of software and data. OpenAI models are mostly commoditized now and are very expensive for what they deliver in most use cases, so having lots of model choice at your fingertips, to handle things like cost, speed, compliance issues, or other security concerns (on top of specialization options), is important to most orgs. Orgs should not trust their business to third-party AIs that see everything your employees put into and get out of the model. Even if you believe the AI provider will live up to its promise that it "does not train" on customer data (whatever that means to them), you are taking a big risk by letting another company, one that also serves your competitors or may want to compete with you in the future, fully understand your business the way AI use enables (see Amazon competing with Netflix even though Netflix outsources most of its tech to Amazon). AI models are not like cloud services that store your docs: Microsoft can always read your docs, but those docs do not fully explain your business or your employees' thinking on solving business problems, whereas using AI at Microsoft or OpenAI does teach Microsoft that. This might be less of a concern if OpenAI's CEO Sam Altman were not on record saying he's building an AI that will consume all economic activity and replace the median employee (you can find these Sam Altman statements easily with a simple Google search).
Solutions exist in the market to address the limitations you describe, like Kindo, which allows full centralization and security control of any AI model, commercial or open source, plus easy sharing of prompts and workflows. Orgs can move much faster to full AI adoption when they choose these solutions. This is not a sales pitch, just catching you up on what's in the market beyond the limitations and vision that the AI model makers ship themselves. It's a bigger world out there than just whatever OpenAI (or whoever is the new leader in AI models) offers to fit its agenda.
Love your thoughts and writing on this topic, and I find I agree with you more often than not. Question: you mention in this article that there are more options now to use AI securely. Can you share what those are? I work with many HR leaders for whom this is one of their greatest concerns.
Learning how to use the tools is as important as having the tool itself. I feel like we are going to have people who only use AI for the most basic tasks while others blow your mind with it. Very similar to smartphones, actually. How many people do you know who only use theirs to make calls, text, and scroll through social media? They have one of the most useful devices that mankind has ever been able to fit in a pocket, and they are not even using a measurable percentage of its true capabilities. It's very likely a generational thing. People at the end of their careers are less interested in ways to make their jobs easier than people just starting out. Sadly, it is people who should be at the end of their careers who make a lot of the decisions in organizations and schools. Give it another ten years, and the old guard will finally have to get out of the way for Gen X to step in and start leveraging technology better instead of being afraid of it. The pandemic pushed a lot of them out already because they were so bad at grasping the concept of remote work and Zoom. I think AI may push out most of the rest, due to the pressure they feel to adopt it and their inability to even comprehend it.
Hello, Anhony Marc! I think it can be tempting to compare social media and smartphone use with AI, but we should keep in mind that they are two different things: phones are made for playing, watching kitten videos, and, most importantly, calling and talking with other people.
AIs like ChatGPT are tools focused on getting tasks done, whether for work, education, or entrepreneurship; they exist to accelerate those processes. And yes, there will be people who only play with them because they don't see the point of using AI. But I think that, unlike phones, these tools will serve a much more serious purpose as time goes on, with children, young people, and adults working with them to free up time from their jobs, schools, and universities, and then going back to playing and watching kitten videos on their phones.
...this reply was written by AI, wasn't it? Lol
Hahaha. We no longer know who's writing. Regards.
Total disinformation propaganda.
The sycophantic news media are so desperate for clicks that they don't bother to do their due diligence to determine whether a developer can manifest intelligent life using only algebra.
It is wildly offensive to a real engineer to pretend another intelligence is in the room sequencing the algebra I write.
There is no other intelligence; I am alone. The output is mine, not the calculator's.
Affirming that an inanimate object has anthropomorphic traits is sufficient evidence of mental illness to land you in an insane asylum for the rest of your life.
I agree that AI, particularly generative AI like ChatGPT, has the potential to bring significant changes to various aspects of work and life. Ignoring or banning AI won't make it go away, as individuals will find ways to utilize it secretly. Centralizing AI within organizations may not be the most effective approach, as current AI implementations often lack the transformative power and creativity found in individual use cases.
To fully harness the potential of AI, it's important to democratize control over AI and empower workers to innovate. Radical incentives can encourage knowledge sharing, while user-to-user innovation should be encouraged through prompt libraries and open tools. Companies should not solely rely on external providers or existing R&D groups to dictate AI use cases; instead, they should dive in responsibly and discover the best applications for their specific needs.
In education, AI can revolutionize tutoring and improve outcomes for students. Rather than trying to turn back the clock, we should envision and embrace a future where AI is integrated into teaching and learning processes. This can help democratize access to education and cater to students of all abilities.
It's essential to acknowledge that AI brings genuine and widespread disruption. As individuals and societies, we have the agency to determine how AI is used and when. Recognizing and preparing for the rising tide of AI is crucial in making informed decisions about its implementation.