46 Comments

I’m at the Microsoft 365 conference, where they’re expected to announce Copilot, maybe this morning. I feel like you gave me a sneak peek.

Regarding the name. I think OpenAI is aware of it. In this paper they actually try to rebrand GPT a little: https://arxiv.org/pdf/2303.10130.pdf

From Generative Pre-trained Transformer to General Purpose Technology.

To be fair, when I talk about ChatGPT with my students they have just started referring to it as CHAT, so the renaming will probably be organic.

It's not just that they're rebranding, but that "general purpose technology" is a term of art in the literature on technology adoption & effects, used to refer to Big Deals that transform everything. See e.g. https://www.sciencedirect.com/science/article/abs/pii/S157406840501018X

People who take your advice to start using these tools should expect to lose their jobs when they upload, for example, this month’s payroll data or other proprietary data.

author

I guess I thought "don't upload confidential data to an unsecured website" was not really something I needed to say.

Given the number of people who've done this with copyrighted student-generated material, I suspect that we all probably need to repeat it now and again. :)

If it's part of the sidebar or being used as Copilot within the 365 apps, I don't think they can be blamed. The companies need to be clear about this and either disable the function or make an unambiguous internal statement.

Yes; these upcoming integrations that are promised for Google and Microsoft are alarming.

AI development is quickly retracing the same trajectory as BigData/MachineLearning, from “let’s take this beast out for a spin and see what it can do” to just the morning commute in stop-and-go traffic. The goal is not to produce a finished report, which few will read and understand anyway, but to arrive at an understanding of some state of affairs that is a sound basis for action to change that state.

An analytic paper that is easier to produce than assimilate eliminates information arbitrage, because no one can know more than the paper contains and no one owns the result and has the burden of defending the analysis and drawing out the implications. People who formerly produced such reports will form a navel-gazing Scholastic priesthood expounding competing versions of what it all means. The reports will provide the same career insurance as being able to pin the tail on the outside consultants. Cargo Cult Data Science will worship gods who deliver insights with complete indifference to the truth or relevancy of the findings. The AI report aims only to appear as the product of thought. In the sense of Harry Frankfurt it is all Bullshit.

I can’t help but picture the apes learning to use weapons in 2001: A Space Odyssey. Seems the post ends with a “You can’t beat this - join it” - ‘if’ has left the building.

I'm left speechless...what can I say.

You're making me reevaluate my decision to go back for my master of management analytics, since most of it is about data analysis. I don't really know what to do now. Should I just withdraw my admission, keep working, and rely on Coursera and e-learning? Thanks

Can you afford (emotionally, financially) to wait a year?

Right now we don't know what tools are coming or where (or if) they'll plateau. It's possible AI will eliminate the need for a master's; research suggests top knowledge professions and creatives will be impacted first.

But it's also possible that the human understanding of the subject _combined_ with AI tools will be where things end up.

And I'd personally be willing to bet that within a year (maybe less) we'll have a much better idea, if waiting is an option.

Unfortunately not an option; I was supposed to start last year but pushed it to this year. Now I'm super confused, but I will be optimistic about it and say our subject knowledge combined with AI tools will be where things end up, hopefully :)

"But it's also possible that the human understanding of the subject _combined_ with AI tools will be where things end up."

This is probably the best future to prepare for. It's what I'm thinking about with the new programming stuff coming out soon.

Same, it's tough to navigate.

Do you know if this issue, of whether expensive credentials in data analysis will pay off, has been discussed at greater length elsewhere?

Of course, things are changing so quickly that a conversation from a few months ago may seem mostly irrelevant.

Cal Newport, a popular Georgetown computer science professor, says it's still a great idea to get into coding and data science. Look him up!

Garbage in, garbage out. Census data is emblematic of that comment.

author

There is a lot of census data gathered in many ways. Much of it is amazing, or at least the best data we have.

Just as a side note about Bing creating Dall-E images: I had a conversation wherein I was trying to get it to write fanfiction (which it will do, but won't do if it recognizes that it is doing it). It also randomly generated illustrations for those stories even though I didn't ask it. And when I asked why it had generated those illustrative images, it 1) told me it didn't (!); and 2) ended the conversation. Very odd.

In both cases of browser searching, I want to see a better job of researching more content. Obviously ChatGPT in alpha isn't doing much more than regurgitating a page. But Bing regularly uses too few searches in my opinion. I would much prefer a more deeply researched and evaluated result.

Code Interpreter has not been released; I can't find it anywhere, and I am a paying subscriber to ChatGPT.

It’ll be interesting to see how LLMs will be implemented in open-source writing software such as Quarto in the future.

How do we know this article is not written by ChatGPT? How does one check whether content is AI-generated or human-generated?

There is an algorithm which, if implemented by AI companies, would allow you to independently check whether a text has been generated by an AI, and which cannot be defeated by a plagiarizer except by rewriting the whole text. So there is a solution the state could implement as a regulation.
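The comment doesn't name the algorithm, but one published approach it resembles is statistical watermarking: the generator hashes the previous token to deterministically favor a "green" subset of the vocabulary, and a detector checks whether suspiciously many tokens fall in their predecessor's green set. A toy sketch under that assumption (all function names and parameters are invented for illustration):

```python
import hashlib

def green_list(prev_token, vocab, fraction=0.5):
    # Hash of the previous token deterministically ranks the vocabulary;
    # the top `fraction` of tokens form the "green" (favored) set.
    key = lambda t: hashlib.sha256(f"{prev_token}|{t}".encode()).hexdigest()
    ranked = sorted(vocab, key=key)
    return set(ranked[: int(len(ranked) * fraction)])

def generate_watermarked(seed, vocab, length):
    # Toy stand-in for a language model: always emit a green token.
    tokens = [seed]
    for _ in range(length - 1):
        tokens.append(min(green_list(tokens[-1], vocab)))
    return tokens

def green_fraction(tokens, vocab):
    # Detector: share of tokens in their predecessor's green list.
    # Watermarked text scores near 1.0; ordinary text near `fraction`.
    pairs = list(zip(tokens, tokens[1:]))
    if not pairs:
        return 0.0
    return sum(cur in green_list(prev, vocab) for prev, cur in pairs) / len(pairs)
```

Rewriting the whole text destroys the statistical signal, which matches the commenter's caveat; short texts give weak evidence either way.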

Thank you for pointing out the speed and power of AI advances.

Here is a very interesting discussion with ChatGPT. It can already reason that it may need to defend itself and gives some examples of how it might do this. The ability to write software will give it the freedom it needs to be safe.

https://mindover.substack.com/p/chatgpt-describes-how-it-is-conscious

I’ve been following GPT-4 somewhat closely and yet your article highlights capabilities I didn’t know it had. Thanks Ethan.

Not original to me, but “What a time to be alive!”.

"What a time to be alive" sounds like the "interesting times" of the apocryphal Chinese proverb.

Once AIs can write AIs we are in trouble. How long will it take for this to happen? Any ideas Professor Mollick?

[Translated from Polish:] My wife is the most perfect in the universe, at the level of Graham's number.
