25 Comments

Thanks for this interesting post. As always you give us lots to think about.

I was most struck by this comment at the end: "But it is just as clear to me that humans are not going to be replaced by Code Interpreter. Instead, the AI does what we always hope automation will do - free us from the most annoying, repetitive parts of our job so we can focus on the good stuff. By simplifying the process of analysis, I can do more and deeper and more satisfying work. My time becomes more valuable, not less, as I can concentrate on what is important, rather than the rote. Code Interpreter represents the clearest positive vision so far of what AIs can mean for work: disruption, yes, but disruption that leads to better, more meaningful work. I think it is important for all of us to think about how we can take this same approach to other jobs that will be impacted by AI."

Given that you seem quite impressed by the software at this (its most basic) level, and knowing that OpenAI's goal is to create a meta-human intelligence, with tools like ChatGPT and Code Interpreter as the means to that end, why do you assume the AI will not replace the more meaningful work as well?


Ethan, thanks for the encouragement and cautions. You have been an inspiration to our work in the UW-Madison community and my work with our industry consortium members. As a matter of policy, I suspect sharing non-public information with ChatGPT and ChatGPT Plus remains a valid concern. I understand that one can “switch off training in ChatGPT settings (under Data Controls) to turn off training for any conversations created while training is disabled,” but I worry about uploading data sets that would otherwise be private or sensitive. How should we think about using these new tools for data manipulation in light of concerns about privacy and IP? (Source on OpenAI's policy: https://help.openai.com/en/articles/5722486-how-your-data-is-used-to-improve-model-performance)


Hi, you asked it to prove to a doubter that the Earth is round with code, and it provided multiple arguments, integrating the text with code and images. I am interested in what result you would get if you asked it to prove to a Sphere Earther that the Earth is flat. Would it pull false information from the internet and weave it into a "fact-based" proof?
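For reference, the round-Earth demonstration the post describes can be carried by a very short script. Below is a minimal sketch in the spirit of Eratosthenes' shadow-angle argument, using the classical historical figures; it is an illustration of the kind of code-plus-text proof involved, not actual output from Code Interpreter.

```python
import math

# Eratosthenes' argument: at noon on the summer solstice the sun is directly
# overhead in Syene (no shadow) but casts a ~7.2 degree shadow in Alexandria,
# roughly 800 km to the north. On a flat Earth under a distant sun the two
# shadow angles would match; on a sphere, the difference fixes the circumference.
angle_diff_deg = 7.2   # measured difference in shadow angle between the cities
distance_km = 800.0    # approximate Syene-to-Alexandria distance

circumference_km = distance_km * (360.0 / angle_diff_deg)
radius_km = circumference_km / (2 * math.pi)

print(f"Implied circumference: {circumference_km:,.0f} km")
print(f"Implied radius: {radius_km:,.0f} km (modern mean radius: ~6,371 km)")
```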


Ethan, fantastic breakdown.

Any thoughts on preventing the "Samsung Effect", where data leaks out into the world? For obvious reasons, I'd love to use LLMs to help clean up my business's data, but for equally obvious reasons, we're nowhere near confident that we can do this and protect our proprietary data.


Well this just melted my mind.

I code a lot in Python at work (I'm a biochemist) but am relatively self-taught and still blunder a lot. ChatGPT has been good for getting over some problems, as has GitHub CoLab, but this just looks all kinds of next-level. Thanks for the insight! I'm going to look into this more.


Great post!

Nitpick: "I know it is an illusion, and that LLMs are in no way sentient or thinking, but those Moments are a thrilling, and sometimes unnerving, glimpses of possible futures with smarter AIs." --> this doesn't reflect the current state of the field. I think the right thing to say here is that we don't know whether modern LLMs are sentient/thinking in the ways relevant to moral status. Unlike e.g. rocks or video game NPCs, it seems like some modern AI systems might qualify to a similar degree that various animals do. For a nice overview talk by one of the best living philosophers, see: https://www.youtube.com/watch?v=-BcuCmf00_Y He gives it <10% that current LLMs are conscious but >20% that the AIs of 2030 will be.


Interesting insights. Thanks, Ethan.


Thank you! This article inspired my own analysis of government data here - https://twitter.com/dhruvaray/status/1684205100900704260


At least you are honest, Ethan; things do indeed move fast (not to say that a PhD is not valuable :) ): "Things that took me weeks to master in my PhD were completed in seconds by the AI, and there were generally fewer errors than I would expect from a human analyst."


Thank you very much. This newsletter is magnificent.


Is it only me? I can't get it to work.

I enabled the Code Interpreter feature and gave it the prompt for counting the number of words, but it just gives the textual result. I ask it to use Code Interpreter for this, and it spits out a Python program but doesn't run it. Earlier I asked it to write code that paints the acceptable region in the prisoner's dilemma; it provided me with Python code that runs beautifully on my system, but it refuses to run it itself: "Since calculating the acceptable region involves plotting a complex region in 2D, and given my current capabilities as a text-based AI, I can provide you with an example of Python code that calculates the feasible and acceptable regions of this game."
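For readers who want to try the same task locally, here is a minimal sketch of a script that plots the feasible and acceptable regions of a standard prisoner's dilemma. The payoff values and plotting choices are illustrative assumptions, not the code the commenter received.

```python
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.patches import Polygon
from matplotlib.path import Path

# Standard prisoner's dilemma payoffs: temptation > reward > punishment > sucker.
T, R, P, S = 5, 3, 1, 0

# Feasible region: convex hull of the four pure-outcome payoff pairs,
# listed counter-clockwise so the polygon is drawn correctly.
corners = np.array([(P, P), (T, S), (R, R), (S, T)], dtype=float)

fig, ax = plt.subplots()
ax.add_patch(Polygon(corners, facecolor="lightsteelblue",
                     edgecolor="black", label="feasible region"))

# Acceptable region: feasible payoff pairs where each player gets at least
# the mutual-defection payoff P. Shade it by testing a grid of points.
gx, gy = np.meshgrid(np.linspace(S, T, 400), np.linspace(S, T, 400))
pts = np.column_stack([gx.ravel(), gy.ravel()])
mask = Path(corners).contains_points(pts) & (pts[:, 0] >= P) & (pts[:, 1] >= P)
ax.scatter(pts[mask, 0], pts[mask, 1], s=0.5, color="orange",
           label="acceptable region")

ax.set_xlim(S - 0.5, T + 0.5)
ax.set_ylim(S - 0.5, T + 0.5)
ax.set_xlabel("Player 1 payoff")
ax.set_ylabel("Player 2 payoff")
ax.legend(loc="upper right")
plt.show()
```

Here the acceptable region is the individually rational slice of the hull: the payoff pairs both players would prefer to guaranteed mutual defection.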


AI has produced no useful tools.

Every AI script is written in executable algebra, one line at a time, by a man; no exceptions.

When pressed, no AI expert can articulate a distinction between an AI script and a traditional script.

AI was devised by con artists to steal money, primarily from the elderly, since they have funds from a lifetime of work but do not understand technology or marketing.

Worst of all, it makes individuals wait for imaginary tech solutions instead of devising solutions to their own problems.

When you delay one person as an individual, you retard us all collectively.


Thanks, Ethan. For the last 4-6 weeks, it seems GPT Plus has been tuned down! It no longer delivers the wow moments. It's not just the short memory: it won't elaborate, it waits for ten inputs before completing a task... the list goes on. Code Interpreter now falls into the same category and most of the time fails to complete. Re-try your examples with the current version.


No, my question was: if asked to, would the AI create arguments, convincing to a flat earther, that the Earth is flat?


Great piece, and I agree that this is increasingly what AI is going to look and feel like. @Doug Barton, in another comment, raises valid concerns. As a business, I don't know if I would be comfortable with my employees using Code Interpreter with company data. There has to be another layer of privacy and protection built in before enterprises will adopt it.


This is enough to make me want to switch entirely to Python.
