65 Comments
Tom Goodwin's avatar

Finally, an otter watching map built around my needs.

I've got to be honest: I've found AI to be deeply impressive, wildly profound, utterly magical, but often rather useless. The idea that it reads all my emails, scans my calendar, cross-references my Notion, accesses my past articles, and prepares a prep doc for me is wildly impressive, but not remotely exciting.

I can do this myself in seconds. I have a memory, and I know what's important instinctively. I can't help but think that most people who rate this stuff have never really done a normal job.

Perhaps I'm being miserable because you have to triple check everything, perhaps it's because if a briefing document is even slightly wrong, it could scupper a massively important meeting.

This stuff seems to be designed for people who don't do especially important things all day long but are mega busy. Sorry, but I'm the opposite: if a meeting is worth so little to me that I'm going to get AI to prepare me for it, I'm going to just cancel the meeting; it's not worth my time being in it.

Yelena Nikulina's avatar

Yes! I still cannot believe that for some people, having a back-and-forth conversation in a chat about their day is better than just opening the calendar and looking at it. Looking is like 10 times faster. Why would I want to type?

Rodney Murray's avatar

What are the risks of giving Cowork access to my email and data?

Kenny Easwaran's avatar

If it just reads the emails and data, the main risk is either getting hacked or otherwise revealing sensitive information by accident. But if it has any sort of write or edit access, it should be clear that there are huge risks in letting someone else send emails from your account or change events in your calendar, even if that someone else is supposed to be a personal assistant working for you!

Edward's avatar

AI agents can get hacked which means anything they have access to could then be controlled by the hacker.

Andrey's avatar

This! You have captured my issue with these tools beautifully.

The AIs are definitely not at the level where they can decide for me how to answer even the simplest emails (yet), so it is frustrating to use them for that. It would take much less time to do all this myself, which is a classic management problem, to be honest.

The example by Ethan is actually a very good one. Impressive on the surface, but is it actually useful? Ethan had to know that the image was not the most recent before asking Claude about it; otherwise, why would he ask about this particular presentation and chart and not a different one? Then, instead of actually going out and updating it, Claude had to be further pushed and guided, and even had to be spoon-fed the direct link to the PDF.

So outside of coding and normal chat mode, working with Claude feels like working with an over-eager but totally ignorant trainee, where you spend more time guiding them than actually getting useful actions.

Tim's avatar

Yes, everything in this scenario is essentially “on demand.” But not everything benefits from being done on demand.

Marc's avatar

The key right now is to understand what AI is good at, and use it accordingly. I would not use it to prepare a super important meeting, as it still makes a lot of mistakes. And I agree with you that daily prep docs are quite useless. But I have found a lot of use cases for my work, especially since I use Cowork.

Josh Devon's avatar

We’ve been really thinking about UI (and conversational UI) in the age of agents and meeting the users where they are. For our agent control plane, we have a GUI for our natural language policy studio for security and GRC teams. For developers, we have a TUI that meets them where they are in the terminal so that they can better build and guide their agents that need to adhere to rules set by others. Wrote more about this here:

https://substack.com/@joshdevon/note/c-235648237

Sherry Heyl's avatar

This is interesting, but I think the implication is bigger than the interface itself.

What I’m seeing with companies is that the way AI is being accessed is actually creating uneven performance across teams.

The people who already know how to think, structure problems, and filter signal can get a lot of value out of it. Everyone else gets overwhelmed or ends up with output that looks polished but isn’t well thought through.

That creates a hidden risk. Leaders think AI is being adopted, but in reality the quality of thinking and decision-making is becoming more inconsistent.

So the issue isn’t just that chatbots are clunky. It’s that the way we’ve introduced AI into organizations is amplifying gaps in how people work.

The companies that figure out how to integrate AI into actual workflows, not just conversations, are going to pull ahead very quickly.

Ashley Striblet, PhD's avatar

I love this thinking. I think the UX part of AI has been constantly undervalued, and I think we are now seeing how much of a risk it is to build for people who "already get it".

Sherry Heyl's avatar

I’ve been thinking about this in the context of early web adoption. When I was teaching RSS feeds and blog optimization, it felt overly technical to most people. Then UX evolved and suddenly it became intuitive.

We’re not there yet with AI.

What stands out is how much harder this will be at scale. There are still people today who struggle with social media basics, which says a lot about how uneven adoption can be. AI is far more complex.

Until it feels natural within workflows—not just accessible through tools—we’re going to keep seeing gaps in who can actually use it effectively.

Adam Murray's avatar

The interface problem is real but there's a problem underneath it that better interfaces make worse. As AI gets more seamless, the line between exercising your judgment and supervising AI output gets harder to see. A clunky chatbot at least reminds you you're using a tool. An agent that reads your email, preps your briefing, and updates your slides just feels like Tuesday.

Sherry, I think your point is important. AI amplifies existing gaps in how people work. But the gaps don't just persist. They widen. Every default approved instead of a decision made is the tool routing around someone's judgment. The output looks the same. Something inside the professional is shifting, and nobody is measuring it.

I've been trying to work this out, specifically what happens to the human capability term in AI × capability = enhanced competence when the tool keeps offering reasons to skip the practice that built the capability in the first place.

https://adammurray972420.substack.com/p/the-equation?utm_campaign=post-expanded-share&utm_medium=web

Ashley Striblet, PhD's avatar

I also just wrote a piece about how bad (or absent) UX isn't just an accessibility issue; there are also emerging signs that it's harmful. For me, that's why this feels like something that needs to be urgently addressed. https://earlyinsightsclub.substack.com/p/the-ux-of-ai-chatbots-is-risking?r=6ju2pa&utm_medium=ios

Dov Jacobson's avatar

The promise of instant purpose-built interfaces for every task sounds exhilarating.

Until you remember how much effort and frustration is involved in learning a new interface (even highly tested 'intuitive' interfaces designed by UX professionals).

So we can assume that (as with cars and smartphones) the agentic experience will see a fairly fixed interactivity design evolve around efficiency, utility and expressive power.

If we want to maintain a common culture, we might want this to be a common shared language, not a Babel of bubbles.

Scott Wilkinson's avatar

All good! But...the majority of things to be done in the world...the majority of things we WANT to do in the world, do not involve computers or computing (sorry, but I refuse to use "compute" because it's a stupid word). I use AI, and I use computers...but (with respect Ethan!) I'm getting really tired of all these articles about agents that act like agents can do EVERYTHING!! Except for that one little catch–they only survive inside computers, and can only manipulate things inside a computer.

When will we have AI inside physical bodies as capable as a human's? Right—not for a very, VERY long time. Despite all the insanely capable androids in movies, we are nowhere near that. Any humanoid robots that currently exist are klutzy and mere shadows of our own bodies. Furthermore, if the worst ever happens (apocalyptic stuff) AI is toast. Literally. Then guess what? We're right back to a world of physical things that only we humans can manipulate.

I know, I know—Ethan's blog is about AI, not "everything in the whole world." But I believe it's VERY easy to lose sight of the fact that 99.999% of the whole world today is still physical—not virtual. And until AI can work as effortlessly in the physical world as it does in the digital one, it'll forever be trapped in the world of ones and zeroes.

Kenny Easwaran's avatar

Remote workers are the same way! They can do anything your company needs a worker to do, but only if it can be done through a computer.

Marc's avatar

Well, I spend around 6 hours every day in front of the computer, sorting information, solving problems etc. If Claude cowork can help me reduce that to 5 hours, good for me. It is already doing some of the work that I used to do...

It's true that it's difficult to get it to do exactly what you need. But if you persevere, and once all the problems are sorted after a few hours' work and you're getting the results you need, you ask it to write a good summarized CLAUDE.md file, it will usually do OK from then on.

I have found that there are a lot of tricks that help you get much better performance from it. And the more you work with it, get frustrated, and solve problems, the more tricks you learn and the more you understand what it can and cannot do. Although that frontier is expanding constantly.
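For anyone curious, a CLAUDE.md of the kind Marc describes is just a plain Markdown note that Claude reads at the start of a session. A minimal sketch might look something like this (all of the contents below are hypothetical examples, not from Marc's actual file):

```markdown
# Project notes for Claude

## Context
- Reports live in /reports; source data in /data (semicolon-delimited CSV).

## Conventions
- Summarize first; ask before editing any file.
- Use ISO dates (YYYY-MM-DD).

## Solved problems (do not re-solve these)
- The exports must be read with latin-1 encoding, not UTF-8.
- The weekly totals sheet has its header on row 3, not row 1.
```

The point of asking Claude to write it after a working session, as Marc suggests, is that the hard-won fixes get captured once instead of being re-explained every time.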

Russell Jon Ivanhoe's avatar

I am not a coder at all (I started in college, but I didn't have the patience for punch cards); nonetheless, I started using ChatGPT-4o to code. It was funky, distracted, and way too obsequious. However, contrary to your comment on model change, I have seen a remarkable change in insight, efficiency, and results, on four different axes.

Federico's avatar

The changes you describe really do seem like a step change from last year. I’m looking forward to the spiraling effects these newer tools may bring about through self-improvement.

> It is always good to be cautious about papers that make claims based on older AI models, but in this case I doubt there has been much change between the now-obsolete GPT-4o and GPT-5.4 (or whatever), since both still tend to produce walls of text.

…unless you ask for visual interfaces to surface the ideas within those walls of text, something Claude does particularly well. A question underlying this post is why Anthropic seems to have taken the lead in the developments you describe, and whether other labs will catch up in time.

Venkat Peri's avatar

Agreed on the interface bottleneck — but for enterprise, better interfaces may be necessary without being sufficient. The structural limits (context degradation, single-user isolation, linear vs. hierarchical work) don't fully dissolve just because the interface improves. Dispatch is impressive, but it's still one agent talking to one person. Enterprise work is multi-stakeholder, compliance-bound, and requires auditability that no chat-derived interface naturally provides.

Wrote about this in January from the enterprise workflow angle — the conclusion was similar to yours, but the prognosis was more cautious: chat (and its derivatives) may always be better as an entry point than an operating system.

https://medium.com/advisor360-com/why-chat-interfaces-cant-fully-replace-enterprise-workflows-a97cc6e3749f

Tommaso Maria Ricci's avatar

The interface problem is real, but I’d flip the framing. It’s not that Claude lacks the right interface, it’s that we’re still mapping 20th-century workflows onto it. The companies actually getting ROI aren’t building better UIs, they’re redesigning processes from scratch.

Arjan Broere's avatar

Reminds me of the paper "The Devil Is in the Defaults," on how powerful defaults and design are for the behaviour of users: https://www.semanticscholar.org/paper/The-Devil-Is-in-the-Defaults-Kerr/a628f898c9502c6f18a92059c79673ba26bf1085

Wolfsdread's avatar

It's just a computer, folks. What do you expect?

Tirma's avatar

Not Dispatch related, but... one of the things I enjoy about the Cowork interface vs. OpenAI chats is something as simple as how you select between a few options to continue. Cowork shows you a few cards where you can click on your preferred option (or type in your preference if it's different). In ChatGPT, I have to refer back to the whole list of decisions we need to make and type out "1-a; 2-d; 3-It needs to be more concise"...

Recovering Doom-Reader's avatar

I was always wondering why it feels like there's a gap between how I imagine I can use AI and how I actually use it as a layperson.

I'll be interested to see how AI develops to accommodate regular users.

José M Galarza's avatar

Thank you for this distinction. It isn't a problem of AI capability; it's a problem of interface and context. Very useful for rethinking how we work with these tools.

Manny79's avatar

Another great article by Professor Mollick. Thank you for sharing it.

I’d add that the improved Alexa+ is another strong example of “AI adapting its interface to you.”

I recently used it for the first time, by voice command while washing dishes, to make a restaurant reservation seamlessly. It worked so smoothly that I almost didn’t believe it, so I called the restaurant afterward to confirm. Sure enough, the reservation had been made, using contact details from our Amazon account through an OpenTable connector.

Another example was reordering an item from Amazon by voice. Alexa+ did not require me to first place the item in a cart, review it, or provide a voice code. It simply used voice recognition, along with my registered location and device, to confirm it was me and placed the order right away.

Both were excellent, seamless experiences with this new generation of AI. I can easily imagine AI agents being controlled primarily by voice, almost like speaking with an employee over the phone, and eventually attending meetings with humans and other bots using avatars and voice-enabled interfaces.

Jungbin's avatar

Dear Professor Mollick,

Thank you for another wonderful post — your writing always gives me a lot to think about.

I am a graduate student working on an AI-related research project, and I have sent you an email outlining my ideas. I would be truly honored if you could find a moment to take a look and share your thoughts.

Thank you so much for your time and generosity.

Best regards