62 Comments

As an early adopter, I've been immersed in AI and digital technologies, employing tools like GPT and Midjourney in my day-to-day activities for months now. Far from being a Luddite, I embrace these developments with open arms, recognizing their potential to shape the future.

But in my field, ever since film cameras were supplanted by digital cameras, and subsequently by smartphones, everyone seems to think that creating a movie is simple.

Clients and agencies have started to cut down on delivery times and budgets. Faith in the expertise of professionals has plummeted.

As a result, projects are less well prepared, shoots are shorter, and so is post-production: "You don't need so much time to deliver this edit or mix to me."

What fades away with this shift towards digital and AI is the time for reflection, the capacity to take a step back and contemplate what we are doing. The ability to reexamine one's work after a break or to review an edit after a good night's sleep is dwindling.

Soon, everyone will be familiar with the concept of "simply pressing The Button".

Everyone will know that a letter of recommendation can be written in twelve minutes and that minutes of a meeting can be automatically transcribed - and cleverly summarized - during the work session.

Yet, the time saved won't be repurposed for more enriching activities. It will merely mean having to accept more work, for the same pay of course, and without the luxury of reflection time.

We've drawn closer to the condition of a hamster in its wheel. We are running faster. But for what purpose, and in which direction? This is the question that looms large and, to my mind, requires our immediate attention and action.

The accelerating pace of technology has its perks but let's not lose sight of what truly matters – the value of deliberation and inspiration, the luxury of reflection, and the unpredictability of our human spirit.

Just like you, I've been in the deep end of this tech game. For over a decade, I was the go-to ad photographer and the 'digital guy,' riding the wave from film to digital in Melbourne's photography studios. Then came the next wave: smartphones, and clients wanting more because 'why not, everyone else is doing it.' With the third wave of AI-generated work on the distant shore, I started looking for a boat off the 'content creation' island. I've felt the same pinch you've been feeling.

So, I decided it was time for a change. I'm trading my digital guy hat for a less AI-impacted one. Mental Health and Counselling, that's my new gig. Here, I can help folks tell their own stories as a way of integrating and healing, rather than telling clients' stories to sell products and services. It's a place where the 'form is the message': it inherently involves slowing down a bit, and reflection still counts.

Good luck on your journey, mate. I hope you find the downtime you're looking for.

Thanks, Max, your ship and sea talk really hits home.

Like you, I've found my own safe harbor - teaching. I love helping folks learn to sail their own ship, not just switch on the Sonar or auto-steer.

Raising the sails, feeling the wind, understanding the deep sea currents, and using the stars for direction, that's what it's all about. It's great to see them ready to handle any boat, any sea, and any race.

Your move to Mental Health and Counselling is both clever and generous. A place where taking your time and thinking things over really matters.

I've been wondering why you haven't started a Substack already, Pascal. Your takes are always insightful. I feel you could add much value to these conversations!

Thanks a lot, Alberto. You're my Substack Hero, so your compliment hits home and makes me really happy.

I don't have a Substack for two main reasons:

- I've been running a blog for over 12 years, and I write quite a bit on the side;

- it's articles like yours that inspire me.

I'm already looking forward to reading the next one.

That makes sense, of course. Looking forward to your comments, as always!

"It will merely serve as a means to having to accept more work, for the same pay of course, and without the luxury of reflection time."

Totally agree with you, Pascal.

I just added you on LinkedIn.

I would be pleased to speak with you about this subject (in French ;) ).

The "Help me write" button seems misleading to me. Based on the example you gave, it would have been better labeled "Write something for me," which elicits a much different response in me when it comes to the temptation to "Press The Button."

I can imagine a more engaging "Help me write" button that immediately sets off into a dialogue exploring your needs, interests and motivations and compiling your responses into a set of meta-documents ranging from word clouds, to outlines, to first drafts, to speaker's notes.

If "Write something for me" is retained as an option within that dialogue, then I think it will simultaneously accelerate both the recognition/automation of meaningless tasks and the flourishing of more fulfilling production that blurs the boundaries between work and play.

I fully expect this to be one of the most useful directions these types of tools take in the near future. Rather than responding to a relatively vague one-off prompt, the system will first prompt *you* with questions that allow it to give you much higher quality output right off the bat.

Optimistically, maybe this has a net positive effect on education, since we will be able to intuitively learn what *types* of things we should be asking ourselves and thinking about before beginning a project, through being trained by well-structured AI helper systems.
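
As a minimal sketch of that "ask first, then draft" flow: the generate() helper below is a hypothetical stand-in for whichever model API you use, and the questions are only examples, not anything from the original post.

```python
# Sketch of a "Help me write" flow that interviews the user before drafting.
# generate() is a placeholder for a real LLM call; swap in your API of choice.

def generate(prompt: str) -> str:
    """Placeholder model call; returns a stub so the sketch runs as-is."""
    return f"[draft based on brief: {prompt[:80]}...]"

CLARIFYING_QUESTIONS = [
    "Who is the audience for this piece?",
    "What outcome do you want the reader to take away?",
    "Which points or anecdotes must be included?",
    "What tone fits: formal, warm, playful?",
]

def help_me_write(topic: str) -> str:
    # 1. Interview the user instead of drafting from a vague one-off prompt.
    answers = {q: input(q + " ") for q in CLARIFYING_QUESTIONS}

    # 2. Compile the responses into a structured brief (the "meta-document").
    brief = "\n".join(f"- {q} {a}" for q, a in answers.items())

    # 3. Only then ask the model for a first draft grounded in that brief.
    return generate(f"Topic: {topic}\nBrief:\n{brief}\nWrite a first draft.")

if __name__ == "__main__":
    print(help_me_write("letter of recommendation"))
```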

A help me write button that gives someone immediate access to a human tutor would be wonderful.

For me, the whole idea of a meaningless task needs challenging. Efficiency goals should eliminate those tasks. If they're truly meaningless, they don't actually need to be done.

This is such a beautiful blog post that it deserves to be called an essay.

“With AI-generated work sent to other AIs to assess, that sense of meaning disappears.”

It’s interesting to think about which writing tasks we should throw away, as we move to a world where AIs write and other AIs evaluate the writing of AIs.

The answer to the question, “how does AI change writing in the classroom, or at work?” has to start with questions around the purpose of specific writing tasks. Surely we need to focus on preserving writing that has a higher purpose for humanity. And identify the more “instructional writing” designed to manage people and processes as something that can be delegated to AI.

Ethan, I do not agree that the AI-generated recommendation was good. I found it rather bland, and lacking the imperfect human insights that make recommendations stand out.

While such tools may be useful, sometimes the exhilaration about their emergence is clouding our judgment and preventing us from dropping common and deeply rooted but in fact nonsensical work routines.

If drafting a lengthy document, especially something like a performance review or a report, can be easily automated, first ask whether it makes sense to produce a lengthy document at all.

You can't create a meaningful document without a prompt or a series of prompts that feed all the necessary details.

Maybe you just need to communicate all those necessary details in a short conversation or email and that will do?

Don't automate useless work, drop it.

When both writers and reviewers are using AI, the letter itself becomes redundant and begs to be automated. Maybe each of your students will have an AI-generated profile over the course of their time with you. Outsiders look at those profiles and decide which people should be offered positions. The role of a professor is to nurture students so their AI-generated profiles are as marketable as possible, given their goals.

Eventually, the students start to wonder what value is added by the instructor, since the point is to maximize the student's AI-generated profile -- something which itself could be AI-driven. Answer: the best instructors add more value than the AI alone. That's a never-ending arms race that requires a lot more out of everyone.

Well, that letter, like most stuff produced by ChatGPT, strikes me as utterly generic and therefore "fake." My students are sometimes using it now, against my advice, to produce "journals," and they salt my mailbox with clichés. Every contrast is stark, every rebuke is devastating, queries are always pondered, etc.

That the letters you get are mostly worse than that makes me feel better about the effectiveness of my own letters.

But yes, you're right: this is going to produce a flood of boring crap. And people will get worse. But the few people who have original thoughts will get better. And the gap will widen again.

Even though his creator is in bad odor, I loosely quote Dilbert: The best way to prepare data no one cares about is to make it up.

In the largish hierarchical organization, most of what passes for work consists of people trying to figure out what it is that they are actually supposed to be doing. This meta-work is judged on the basis of effort, because there are no results. The ineffectiveness of this approach is repackaged as inefficiency so that the meta-work can be further abstracted into process reassembly. This is beneficial from the perspective of the operative values of the organization for two reasons. In the fat years, it justifies headcount, the objective basis for compensation. In the famine years, it provides sacrificial victims to appease the angry gods. Both of these are protective of the management pyramid scheme.

The deployment of AI to further this virtuous circle will be welcome because it gums up the works with a higher volume of bullshit (in philosopher Harry Frankfurt's sense of communication made to persuade without concern for truth or falsehood). No one will admit to using it, so attention can be further abstracted into distinguishing artisanal BS from imitation BS. At first, this will be easier, because the level of communication skill of the existing workforce is so inferior to what AI can provide. As that workforce is replaced with AI-skilled labor and as AI improves, it will become more difficult.

In a world where 90% of everything is already dreck, raising the level to 99% won’t change much.

nice post, lousy math.

when 90% dreck becomes 99% dreck,

then the good stuff is reduced to a tenth of what it was.
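
Spelling out the arithmetic behind that correction, simply restating the commenter's figures: the non-dreck share falls from 10% to 1%, i.e. to a tenth of what it was:

\[
  \frac{1 - 0.99}{1 - 0.90} = \frac{0.01}{0.10} = \frac{1}{10}
\]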

"distinguish artisanal BS from imitation BS" is excellent; chef's kiss.

This is the kind of writing about AI we need more of (even if you use AI to help write it).

Ethan,

Thank you for the insightful and thought-provoking write-ups about this new wave of changes with AI. Regardless of how you or I feel about it, Pandora’s box has been opened and we all have to examine and explore the meaning that we give it.

THAT is the single most important quagmire we have to navigate: the meaning that we individually and collectively give this new development. If I like it, I call it a surprise. If I don’t like it, I call it a problem. Nothing about whatever “IT” is has changed; my individual bias, affinity, or avoidance all influence the label I give it. Problem or Surprise.

This dilemma of meaning and labeling happens every day! Multiple times a day! Now compound that by my social circles, geographical location, belief systems, and associations… Recognizing this behavior and using it to my benefit is an essential and unavoidable skill.

I have professionally used ChatGPT to write prompts, do marketing analysis, write policy manuals, and handle a variety of other tasks for my business. I have ChatGPT pulled up for any planning or board meetings to consider possibilities I wouldn't otherwise examine.

The meaning that I give the information, and the ethical disclosure, is that I am the writer, editor, and curator of the information. In graduate school, students are told that 40% of their papers will be written by them, with the remaining 60% coming from professional resources, college professors, and places like a writing center. That is academically honest!

How does AI change this? As your article today suggests, AI shifts the menial demands away from us and enables us all to teach the examination, synthesis, and defense of new ideas to others. If a student uses AI to write their entire paper, fine. However, my grading rubric will focus on a 5-10 minute oral defense of the points of their paper.

This is congruent with higher levels of learning, and also with a real-world example: someone falsifying their resume. After the typed application is submitted, an interview committee will still invite the candidate to present themselves and their ideas verbally for examination.

Beautiful things have happened with AI: my brother's world has opened up, writing and expressing himself to others because of AI. Much like when he got hearing aids at 9 and the symphony of sounds became real to him, his world has changed because of this iteration of AI.

The meaning I give all of this is very positive: It’s a surprise!

You make a lot of good points in this post. One point, which you don't touch on, but which I think is an important implication of the observations that you make in this post, is that closed source AI, such as that offered by Google, OpenAI, MSFT, etc. has the advantage of wide distribution and massive customer bases. A lot of people are hoping that open source AI wins out, but I just don't see it when, to your point, all of these companies can just add closed source AI capabilities to tools used by hundreds of millions of people.

Thinking out loud, one thing I'm personally concerned about long term is the loss of skill sets I currently use daily. This has already started happening with the prevalent use of social media and its (not as thrilling) link to loss of focus for longer periods of time (e.g. https://www.theguardian.com/science/2022/jan/02/attention-span-focus-screens-apps-smartphones-social-media). Having started using ChatGPT to write for me (and accepting the first version offered), I've wondered if, as a designer, my own divergent thinking ability will fade over time - or will I end up putting more energy and effort into actively trying to prevent that from happening?

"It it will ..."

The typo serves as a shibboleth to prove it was not written with an LLM, I suppose.

or that it was...? If the LLM's learning from corpus(es?) of human writing, then we can be sure said writing is loaded with typos & infelicities.

For me, the big question is not how much easier it is to create new content; it's how much of this tsunami of new ‘good content’ will actually get read.

I don't understand why using AI for writing a recommendation letter is morally incorrect. Before GPT, maybe you had a template you filled in. Or you got your assistant to write the letter. In one case I was asked by a professor to write my own recommendation letter because she had no time at all.

What matters to me is that you as a professor take ownership of the words that prove I am worthy. I don't care how those words came to be.

Setting time on fire for signalling purposes is almost always bad in my view. Time is the scarcest resource. Signalling tends to escalate. Let's say job cover letters were invented for this purpose. I'd much rather pay a fee for every application, to signal my actual interest and that I'm not applying indiscriminately, than spend one hour on a cover letter.

" you as a professor take ownership of the words that prove I am worthy. I don't care how those words came to be" -- came here to say something similar; the real value of a letter of recommendation is the signature on it -- if X famous / fancy person is willing to sign a document extolling my virtues, that person's "brand," their reputation, ethos, is what's helping me, much more than the actual artifact of the letter.

I worry most about those who don't have good ideas using this to make it look like they do. I'm thinking of social media in general, where you are arguing against an AI and the person copying and pasting has zero skin in the game.

already true of bots & trolls, which make up an unconscionable percentage of posters on too many sites....

So much thought and meaning in the post and following comments - applause!

Some things come to mind while reading this: a) industries and people affected by this evolution are going to have to develop new skill sets (vs. wanting to keep the ones they currently have), b) like sugar withdrawal, the benefits of not using AI are not yet evident, nor researched and documented (and they could offset the adoption), and c) ChatGPT (and others) just provide answers; teachers, however, help you when there's no right answer.

Ethan, keep writing and congregating; this is all so beneficial for those who read, learn, and take part!
