I love these articles that encourage people to think about AI as a person. Not because it is or isn't a person, but because we naturally know how to interact with people, and that mindset makes it easier to approach working with AI.

Here are a couple of additional thoughts to consider based on my experience:

1. This intern is a super-intern! They majored in every subject, from English to Engineering. So they can teach you a thing or two as well! For example, my intern taught me how to code and is currently trying to teach me some marketing.

2. Instead of an intern, also try to consider this alien as your teammate. While you might dismiss ideas from your intern, you might be more open to ideas from your respected teammate. For example: ask your teammate to give you ten ideas for something you're working on together - you might find some gems there.

3. Even for "just me" tasks, maybe some of them could benefit from a second look, some non-judgmental feedback, or polish. For example - when writing this comment, I asked ChatGPT for feedback, and it said I could provide specific examples :)

I love the humanizing analogies because they make it natural to see how you might benefit from having more "people" on your team. And the intern analogy hints at maintaining appropriate oversight of the AI's work. I've found "teammate" to be another valuable analogy. This article has me wondering whether there are more I haven't considered yet! Any thoughts?

Oh, and if you read all the way here, you might find this interesting: my "AI teammate" and I write about our experience working together on a project. So far, we've worked on planning, coding, making videos, and writing. Check it out for some more specific ideas and examples: https://dearai.substack.com/p/introduction

> I love the humanizing analogies because they make it natural to see how you might benefit from having more "people" on your team.

It also makes it easier to forget that AI systems are exploitative in nature and that you are using them to replace human workers, all while keeping the illusion of 'having more "people" on your team'.

The US has just about the lowest unemployment rate it's ever had, and productivity improvements almost always increase employment, so it's just about the opposite of something you should worry about here.

"Replacing jobs" and "taking your jobs" are not things economists believe in; recessions and low productivity growth are, and that's what causes unemployment.

Intern is a great term here - I find over and over again that people stop using ChatGPT the moment they get a wrong or overly simplistic answer. Their brain tells them "ChatGPT is great at XYZ but can't handle this more complex topic" and they move on. Which you would never do if brainstorming with a human. It has the answers in there; it just requires some patience to tease them out. So maybe it's an intern with potential: gonna get some stuff wrong, but boy oh boy, when it gets it right...

This reminds me of a project I worked on several years back for an RPA client. The solution was called a digital worker. We found in our discovery phase that many companies were already naming their RPA hacks/tools/bots and featuring them in Slack channels.

I believe we should take this a step further and turn a named AI into a benefit for individuals, just like a 401(k), health insurance, etc. MSFT is already using the right nomenclature - "co-pilot" - as a way to describe its intent.

The question I was solving for in that project was, "Would a human hire a bot?" The answer was 'no' at the time, more for accounting constraints (capex vs. opex) than reasoning. I also discovered at the time that humans (employees) are considered intangible assets on the balance sheet, like brand and goodwill. Tech is purchased. Humans are hired. But what if they perform the same function? Isn't that an FTE, or the equivalent thereof?

I see a world in which both legal and accounting practices will need to rethink their approach as we move forward in this new era.

I don't understand why you are using such a positive term, like "intern", for this attempt at anthropomorphizing an AI. One of the main goals of generative AI would be to put all interns out of work. The word slave is so much more appropriate, especially given the shaky ethics under which the systems are created. There are very few ways in which an AI is like an intern. An intern's work is consensual. On the other hand, like a slave, an AI doesn't get to say no to inappropriate or any other requests. None of the people whose labor was scraped from the Internet to give the AI its capacity has had any chance to say "no," either. There are many more ways in which the system is a bit like slavery. Is there a reason why you want to put such a positive spin on this questionable set of ethical practices?

I asked GPT-4 to read the four paragraphs generated about strategy and tell me what the key differences are between the paragraphs, and between the knowledge and perspective of the people who wrote them:


1. **Paragraph 1:** This is the broadest in its scope, discussing strategy as a fundamental component across multiple areas - business, politics, military, games, and personal life. The author clearly has a comprehensive understanding of the universality of strategy and its function in planning and decision-making. The approach is general and non-specific to any field, emphasizing the necessity of strategy for guiding actions, anticipating challenges, and navigating complexity.

2. **Paragraph 2:** The second paragraph is focused specifically on marketing. The author's perspective shows a deep understanding of marketing strategy and its role in shaping brand communication and advertising efforts. They emphasize the importance of understanding market trends and consumer behavior, suggesting that the author has a background in, or deep knowledge of, marketing. The language and examples used indicate a creative and audience-oriented approach.

3. **Paragraph 3:** This paragraph centers on strategy in a business context, specifically in managing change and aligning organizational resources. The author's perspective suggests they have an understanding of strategic management and organizational dynamics. They emphasize the role of strategy in guiding decision-making at all levels of an organization and managing competition, hinting at expertise in business strategy and possibly some managerial experience.

4. **Paragraph 4:** The final paragraph also discusses strategy within a business context, but with a distinct focus on aligning business operations towards long-term goals. The author emphasizes the role of strategy in guiding decision-making across departments and utilizing resources efficiently, indicating a perspective that values operational efficiency and cohesion in an organization. This suggests the author has knowledge of, or experience in, operations management or strategic planning.


Super interesting!

You have hit the nail right on the head (one more time, with these current AI-related things you write). The analogy with an intern is quite right.

As an academic, when I had *undergrad* students as interns [usually based on their financial needs and academic record] I would often give them very little to do unless it also helped them further their careers. For example, I would get them to write some code in R (not from scratch) for things I was working on.

However, I would always make sure that code did what I wanted it to do. Interns often lack the motivation to do the work. It is not that they are unwilling. It is simply that their incentives are not always aligned with my needs. They are eager to deliver *something* - anything.

Executive summary: Treat AI the same way.

I treat my AI intern with respect and say things like "please" and "thanks" when working with it. You never know when that intern might one day become your boss.

ChatGPT is an incredibly valuable assistant. It has significantly enhanced my productivity, assisting me with everything from concise emails to crafting grant proposals. With its support, I find myself accomplishing tasks more efficiently. I frequently rely on this tool for research purposes, and its input proves invaluable in refining language and offering constructive solutions to any challenges I encounter.

Yeah, it seems like schools really need to get out in front of fear and defensive posturing.

This week should really drive home the bit about "the worst AI intern you will ever have." I just got notified about plugins and web access for my awful intern!

I have to laugh a little because it felt like you were caveating about anthropomorphization for me :)

The thing is, it does work to think about what it can do from a human perspective for these tasks. I loved the essay and I'm going to put a lot of this to use in my own day-to-day. It is such an enabler.
