As an early adopter, I've been immersed in AI and digital technologies, employing tools like GPT and Midjourney in my day-to-day activities for months now. Far from being a luddite, I embrace these developments with open arms, recognizing their potential to shape the future.

But in my field, ever since film cameras were supplanted by digital cameras, and subsequently by smartphones, everyone seems to think that creating a movie is simple.

Clients and agencies have started to cut down on delivery times and budgets. Faith in the expertise of professionals has plummeted.

As a result, projects are less prepared, the duration of shoots is diminished, as is that of post-production: "You don't need so much time to deliver this edit or mix to me."

What fades away with this shift towards digital and AI, is the time for reflection, the capacity to take a step back and contemplate what we are doing. The ability to reexamine one's work after a break or to review an edit after a good night's sleep is dwindling.

Soon, everyone will be familiar with the concept of "simply pressing The Button".

Everyone will know that a letter of recommendation can be written in twelve minutes and that minutes of a meeting can be automatically transcribed - and cleverly summarized - during the work session.

Yet the time saved won't be repurposed for more enriching activities. It will merely mean having to accept more work, for the same pay of course, and without the luxury of reflection time.

We've drawn closer to the condition of a hamster in its wheel. We are running faster. But for what purpose, and in which direction? This is the question that looms large and, to my mind, requires our immediate attention and action.

The accelerating pace of technology has its perks but let's not lose sight of what truly matters – the value of deliberation and inspiration, the luxury of reflection, and the unpredictability of our human spirit.


The "Help me write" button seems misleading to me. Based on the example you gave, it would have been better labeled "Write something for me," which elicits a much different response in me when it comes to the temptation to "Press The Button."

I can imagine a more engaging "Help me write" button that immediately sets off into a dialogue exploring your needs, interests and motivations and compiling your responses into a set of meta-documents ranging from word clouds, to outlines, to first drafts, to speaker's notes.

If "Write something for me" is retained as an option within that dialogue, then I think it will simultaneously accelerate both the recognition/automation of meaningless tasks and the flourishing of more fulfilling production that blurs the boundaries between work and play.


This is such a beautiful blog that it deserves to be called an essay.

“With AI-generated work sent to other AIs to assess, that sense of meaning disappears.”

It’s interesting to think about what writing tasks we should throw away, as we move to a world where AIs write and other AIs evaluate the writing of AIs.

The answer to the question, “how does AI change writing in the classroom, or at work?” has to start with questions around the purpose of specific writing tasks. Surely we need to focus on preserving writing that has a higher purpose for humanity. And identify the more “instructional writing” designed to manage people and processes as something that can be delegated to AI.

Ethan, I do not agree that the AI-generated recommendation was good. I found it rather bland, and lacking the imperfect human insights that make recommendations stand out.


While such tools may be useful, sometimes the exhilaration about their emergence is clouding our judgment and preventing us from dropping common and deeply rooted but in fact nonsensical work routines.

If drafting a lengthy document, especially something like a performance review or a report, can be easily automated, first ask whether it makes sense to produce a lengthy document at all.

You can't create a meaningful document without a prompt or a series of prompts that feed all the necessary details.

Maybe you just need to communicate all those necessary details in a short conversation or email and that will do?

Don't automate useless work, drop it.


When both writers and reviewers are using AI, the letter itself becomes redundant and begs to be automated. Maybe each of your students will have an AI-generated profile over the course of their time with you. Outsiders look at those profiles and decide which people should be offered positions. The role of a professor is to nurture students so their AI-generated profiles are as marketable as possible, given their goals.

Eventually, the students start to wonder what value is added by the instructor, since the point is to maximize the student's AI-generated profile -- something which itself could be AI-driven. Answer: the best instructors add more value than the AI alone. That's a never-ending arms race that requires a lot more out of everyone.


Well, that letter, like most stuff produced by ChatGPT, strikes me as utterly generic and therefore "fake." My students are sometimes using it now, against my advice, to produce "journals," and they salt my mailbox with clichés. Every contrast is stark, every rebuke is devastating, queries are always pondered, etc.

That the letters you get are mostly worse than that makes me feel better about the effectiveness of my own letters.

But yes, you're right: this is going to produce a flood of boring crap. And people will get worse. But the few people who have original thoughts will get better. And the gap will widen again.


Even though his creator is in bad odor, I loosely quote Dilbert: The best way to prepare data no one cares about is to make it up.

In a largish hierarchical organization, most of what passes for work consists of people trying to figure out what it is that they are actually supposed to be doing. This meta work is judged on the basis of effort, because there are no results. The ineffectiveness of this approach is repackaged as inefficiency so that the meta work can be further abstracted into process reassembly. This is beneficial from the perspective of the operative values of the organization for two reasons. In the fat years, it justifies headcount, the objective basis for compensation. In the famine years, it provides sacrificial victims to appease the angry gods. Both of these are protective of the management pyramid scheme.

The deployment of AI to further this virtuous circle will be welcome because it gums up the works with a higher volume of bullshit (in philosopher Harry Frankfurt’s sense of communication made to persuade without concern for truth or falsehood). Since no one will admit to using it, attention can be further abstracted to distinguishing artisanal BS from imitation BS. At first, this will be easy, because the communication skill of the existing workforce is so inferior to what AI can provide. As that workforce is replaced with AI-skilled labor, and as AI improves, it will become more difficult.

In a world where 90% of everything is already dreck, raising the level to 99% won’t change much.


The obvious outcome is that some form of AI reads the letters of recommendation. Soon, the signal is compressed into a single number.

The question for most of these jobs (hiring, firing, managing, shuffling papers): when you strip them down via AI, what is the core, remaining data that matters? And how do you collect it?


This is the kind of writing about AI we need more of (even if you use AI to help write it).



Thank you for the insightful and thought-provoking write-ups about this new wave of changes with AI. Regardless of how you or I feel about it, Pandora’s box has been opened and we all have to examine and explore the meaning that we give it.

THAT is the single most important quagmire we have to navigate: the meaning that we individually and collectively give this new development. If I like it, I call it a surprise. If I don’t like it, I call it a problem. Nothing about whatever “IT” is has changed; my individual bias, affinity or avoidance all influence the meaning I label it with. Problem or Surprise.

This dilemma of meaning and labeling happens every day! Multiple times a day! Now compound that by my social circles, geographical location, belief systems and associations… Recognizing this behavior and utilizing it to my benefit is an essential and unavoidable skill.

I have professionally utilized ChatGPT to write prompts, do marketing analysis, write policy manuals and a variety of other tasks for my business. I have ChatGPT pulled up for any planning or board meetings to consider possibilities I wouldn’t otherwise examine.

The meaning that I give the information, and my ethical disclosure, is that I am the writer, editor and curator of the information. In graduate school, students are told that 40% of their papers will be written by them, with the remaining 60% coming from professional resources, college professors and places like a writing center. That is academically honest!

How does AI change this? Like your article today, AI shifts the menial demands away from us and enables us all to teach the examination, synthesis, and defense of new ideas to others. If a student uses AI to write their entire paper, fine. However, my grading rubric will focus on a 5-10 minute oral defense of the points of their paper.

This is congruent with higher levels of learning, and it mirrors the real-world case of someone falsifying a resume: after the typed application is submitted, an interview committee will still invite the candidate to verbally present themselves and their ideas for examination.

Beautiful things have happened with AI: my brother’s world has opened up with writing and expressing himself to others because of AI. Much like the hearing aids he got when he was 9, when the symphony of sounds became real to him, his world has changed because of this iteration of AI.

The meaning I give all of this is very positive: It’s a surprise!


You make a lot of good points in this post. One point, which you don't touch on, but which I think is an important implication of the observations that you make in this post, is that closed source AI, such as that offered by Google, OpenAI, MSFT, etc. has the advantage of wide distribution and massive customer bases. A lot of people are hoping that open source AI wins out, but I just don't see it when, to your point, all of these companies can just add closed source AI capabilities to tools used by hundreds of millions of people.


"It it will ..."

Typo serves as a Shibboleth to prove it was not written with LLM, I suppose.


For me, the big question is not how much easier it is to create new content, it’s how much of this tsunami of new, ‘good content’ will actually get read.


I don't understand why using AI for writing a recommendation letter is morally incorrect. Before GPT, maybe you had a template you filled in. Or you got your assistant to write the letter. In one case I was asked by a professor to write my own recommendation letter because she had no time at all.

What matters to me is that you as a professor take ownership of the words that prove I am worthy. I don't care how those words came to be.

Setting time on fire for signalling purposes is almost always bad in my view. Time is the scarcest resource. Signalling tends to escalate. Let's say job cover letters were invented for this purpose. I'd much rather pay a fee for every application, to signal my actual interest and that I'm not applying indiscriminately, than spend one hour on a cover letter.


I worry most about those who don't have good ideas using this to make it look like they have good ideas. I'm thinking of social media in general, where you end up arguing against an AI while the person copying and pasting has zero skin in the game.


So much thought and meaning in the post and following comments - applause!

Some things come to mind while reading this: a) industries and people affected by this evolution are going to have to develop new skill sets (vs. wanting to keep the ones they currently have), b) like sugar withdrawal, the benefits of not using AI are not yet evident, nor researched and documented (and they could offset the adoption), and c) ChatGPT (and others) just provide answers; teachers, however, help you when there’s no right answer.

Ethan, keep writing and congregating; this is all so beneficial for those who read, learn and take part!
