33 Comments

As a professor, I caught three students who turned in assignments written by ChatGPT. I caught them because I required specific information/examples from a book that GPT had not been trained on, so there was lots of made-up information. With Bing Chat connected to the internet and Claude 2 able to analyze digital files, catching students to stop them from using these tools is going to be nearly impossible. I’ve decided to follow your lead and let students use AI on out-of-class assignments. I’ll hold them responsible for the information on in-class exams and try to help them figure out what AI is good at and isn’t good at.


Thank you! Again. This is a great article, and I'm sharing it with an AI Nerds group I belong to. They always give me shit when I mention one of your articles. One said, "Do you read anyone else?" And I said, "I do, but I'm not sure why. Ethan is the best." Ha!

Keep up the calm, mindful approaches this world so desperately needs.


Correct me if I'm wrong, but aren't the assertions made in the link from your third footnote... not really accurate? Specifically the compositing stuff with regard to Midjourney? Lorelei Shannon's "taking from other artists" nonsense... https://adventuregamehotspot.com/2023/07/21/echoes-of-somewhere-how-a-solo-developers-game-takes-center-stage-in-the-ai-controversy/


I wish my managers would read this


Everyone is talking about AI regulation, but no one is being specific.


We're in a moment that's going to divide people. The ship is sinking, land is a mile away. Who's leaping and swimming without hesitation, who's clinging to the ship? It's a very human question.


Ethan, I fear you are missing one of the best pieces of advice you can give to educators and corporate executives. With roughly 20% of people in the US having tried AI, and approximately 50% of that group using it less than once a week, the barrier to adoption is a lack of basic education. I find that when I invest 8 hours over 3 days in basic AI education, covering how to use it to get things done and how to prompt better, the retention and use curve goes way up, and the users then find their own practical ways to apply the technology. All that's missing is a small investment in employees and students, and the return will be massive.


Generally good, but a few corporate standards make sense: 1) confidential company data only goes into systems that protect the info; 2) nobody uses AI output in a critical activity (such as reporting to a customer) without checking the work.

Anyone else have something that should be a minimum standard?


Thanks for writing these. I read each one.


I have a different take on the points about corporate use of LLMs. Yes, GPT-4 is probably still the best one available, even after seemingly getting nerfed. However, context from within the company is also critical, whether it's internal processes, the code base, information on customers, or what have you. Without an enterprise LLM instance (which may use something like GPT-4 or Bard), employees either have to try to distill all of the relevant information into the prompt, which is a lot of work, or they get back suboptimal results. The LLM should have access to all of the useful information in the company by default. This isn't necessarily a case for fine-tuning, which is technically difficult and often iffy in terms of results. Instead, it's the knowledge-plus-retrieval use case, sketched below.
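To make "knowledge plus retrieval" concrete, here is a minimal sketch of the pattern: find the internal documents most relevant to a task, then put them into the prompt rather than fine-tuning the model. The document list, the keyword-overlap ranking, and the call_llm placeholder are all illustrative assumptions, not any particular vendor's API; a real setup would use an embedding index and whichever model the company has licensed.

```python
# Minimal sketch of the knowledge-plus-retrieval pattern described above.
# The "document store" and scoring are deliberately naive (keyword overlap);
# a production system would use an embedding index and a real model call.

from typing import List

INTERNAL_DOCS = [
    "Refund policy: enterprise customers get a 30-day refund window.",
    "Support escalations go to the on-call engineer via the #support channel.",
    "Customer emails should reference the account's contract tier.",
]

def retrieve(query: str, docs: List[str], k: int = 2) -> List[str]:
    """Rank docs by crude keyword overlap with the query and return the top k."""
    q_words = set(query.lower().split())
    scored = sorted(docs, key=lambda d: len(q_words & set(d.lower().split())), reverse=True)
    return scored[:k]

def build_prompt(task: str, docs: List[str]) -> str:
    """Put the retrieved company context into the prompt instead of fine-tuning."""
    context = "\n".join(f"- {d}" for d in retrieve(task, docs))
    return f"Company context:\n{context}\n\nTask: {task}"

def call_llm(prompt: str) -> str:
    # Placeholder: in practice this would call the enterprise LLM instance.
    return f"[model response to a prompt of {len(prompt)} characters]"

if __name__ == "__main__":
    print(call_llm(build_prompt("Draft a refund email for an enterprise customer", INTERNAL_DOCS)))
```

The design point is that the model itself stays generic; only the prompt-building step knows about the company, which is why this works without the cost and risk of fine-tuning.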

Having a company-wide LLM, like Stripe and Jane Street do, is also a huge benefit because it allows employees to share and iterate on prompts and other knowledge around using LLMs for work. There's going to be a huge amount of innovation on evolving workflows to take advantage of LLMs, and that's going to happen a lot slower at companies that don't have an internal platform for it.

It's unclear whether open source versions will really catch up to the leading commercial models. But when we hit GPT-5 levels, there's a good chance that performance will be overkill for many tasks. Let's say the open source/internal equivalent will be a generation behind at GPT-4 levels. At that point, writing a customer email with a less powerful reasoning engine but with access to every other customer email the company has written, and the outcomes of those emails, will be superior to using a state of the art AI without access to the right context.


Has it only been 8 months? Thirty years ago I gave up my Ph.D. to become a single parent of two daughters and paid for it all by becoming a programmer. ChatGPT has given me my intellectual life back: the creative part, where I get to bounce my ideas off something that can respond! Great article!


Organizations are compelled by law to centrally manage their employees' use of software and data. OpenAI's models are mostly commoditized now and are very expensive for what they deliver in most use cases, so having lots of model choice at your fingertips, to handle things like cost, speed, compliance issues, or other security concerns (on top of specialization options), is important to most orgs. Orgs should not trust their business to third-party AIs that see everything your employees put into and get out of the model. Even if you believe the AI provider will live up to its promise that it "does not train" on customer data (whatever that means to them), you are taking a big risk by letting another company, one that also serves your competitors or may want to compete with you in the future, fully understand your business the way AI use enables (see Amazon competing with Netflix even though Netflix outsources most of its tech to Amazon). AI models are not like cloud services that store your docs: Microsoft can always read your docs, but those docs do not fully explain your business or your employees' thinking on solving business problems, whereas using AI at Microsoft or OpenAI does teach Microsoft that. This might be less of a concern if OpenAI's CEO Sam Altman were not on record saying he's building an AI that will consume all economic activity and replace the median employee (you can find these statements easily with a simple Google search).

Solutions exist in the market to address the limitations you describe, like Kindo, which allows full centralization and security control of any AI model, commercial or open source, plus easy sharing of prompts and workflows. Orgs can move much faster to full AI adoption when they choose these solutions. This is not a sales pitch, just a note to catch you up on what's in the market beyond the limitations and vision that the AI model makers ship themselves. It's a bigger world out there than whatever OpenAI (or whoever the new leader in AI models is) offers to fit its agenda.


Love your thoughts and writing on this topic, and I find I agree with you more than not. Question: you mention in this article that there are more options now to use AI securely; can you share what those are? I work with many HR leaders for whom this is one of their greatest concerns.


Learning how to use the tools is as important as having the tool itself. I feel like we are going to have people who will only use AI for the most basic tasks while others are blowing your mind with it. Very similar to smartphones, actually. How many people do you know who only use theirs to make calls, text, and scroll through social media? They have one of the most useful devices that mankind has ever been able to fit in a pocket, and they are not even using a measurable percentage of its true capabilities.

It's very likely a generational thing. People at the end of their careers are less interested in ways to make their jobs easier than people just starting out. Sadly, it is people who should be at the end of their careers who make a lot of the decisions in organizations and schools. Give it another ten years and the old guard will finally have to get out of the way for Gen X to step in and start leveraging technology better instead of being afraid of it. The pandemic pushed a lot of them out already because they were so bad at grasping the concept of remote work and Zoom. I think AI may push out most of the rest due to the pressure they feel to adopt it and their inability to even comprehend it.


Total disinformation propaganda.

The sycophantic news media is so desperate for clicks that they don't bother to do their due diligence to determine whether a developer can manifest intelligent life using only algebra.

It is wildly offensive to a real engineer to pretend another intelligence is in the room sequencing the algebra I write.

There is no other intelligence there. I am alone; the output is mine, not the calculator's.

Affirming that an inanimate object has anthropomorphic traits is sufficient evidence of mental illness to land you in an insane asylum for the rest of your life.


I agree that AI, particularly generative AI like ChatGPT, has the potential to bring significant changes to various aspects of work and life. Ignoring or banning AI won't make it go away, as individuals will find ways to utilize it secretly. Centralizing AI within organizations may not be the most effective approach, as current AI implementations often lack the transformative power and creativity found in individual use cases.

To fully harness the potential of AI, it's important to democratize control over AI and empower workers to innovate. Radical incentives can encourage knowledge sharing, while user-to-user innovation should be encouraged through prompt libraries and open tools. Companies should not solely rely on external providers or existing R&D groups to dictate AI use cases; instead, they should dive in responsibly and discover the best applications for their specific needs.

In education, AI can revolutionize tutoring and improve outcomes for students. Rather than trying to turn back the clock, we should envision and embrace a future where AI is integrated into teaching and learning processes. This can help democratize access to education and cater to students of all abilities.

It's essential to acknowledge that AI brings genuine and widespread disruption. As individuals and societies, we have the agency to determine how AI is used and when. Recognizing and preparing for the rising tide of AI is crucial in making informed decisions about its implementation.
