Discussion about this post

macirish

One way to detect when something is written by a human (or at least edited) is the subtle mistakes that an LLM wouldn't have made.

There are at least two in this article - were they on purpose?

RG's avatar

Nice piece, Ethan - very thought provoking. It'd be great to get your opinion on a few of the things you mentioned:

- Does the replacement of some people by AI necessarily mean only that the same amount of work is now done by the AI? I would assume AI would allow for a lot more scaling, both in volume and speed. Panopticon or not, some positions will certainly be eliminated entirely.

- Ultimately, wouldn't this replacement be a function of competitive costs rather than simply of employee policies? It's a bit like AI adoption right now. Ever since OpenAI let the cat out of the bag, everyone (Big Tech) has had to follow suit. They weren't far behind, introducing competitive offerings pretty quickly thereafter, but they hadn't made the first move. OpenAI forced them to.

- As you rightly point out, I would wager that educators will need to worry as much about which skills are still valid as about cheating. The objectives of evaluation, and even of teaching, might become as important as the question of how to evaluate learning.

We recently did a piece on how AI would affect the MBA, and many similar concerns came up there.
