27 Comments

I worry about the quarter-over-quarter, market-driven economics of the big tech companies that are rolling AI out within their organizations. Some companies have concluded that they can't allow generative AI in any fashion to be used by employees, but in all cases, they're necessarily thinking about how whatever they do will affect their bottom line... but like next quarter, not 2 or 3 years down the road.

We need companies that boldly stand against Wall Street short-termism and plan on multi-year horizons, where AI is thoughtfully and carefully integrated into all aspects under an overarching architecture, with tremendous oversight and guidance... but what company on Wall Street is actually capable of doing this?

Ethan, you have inspired me to think and write more about this topic. Your pieces here on Substack are excellent thought leadership.

As soon as I can afford it, I’m going to create a budget for every employee to expense whatever AI tool they want. Requirement: they write a paragraph about how they use it once a month and share it with the team.

Cyborgs abound.

In a perfect world, these productivity gains would translate into a 4-day (or even 3-day) work week. Alas, this has never materialized in the past. Everything will simply be expected faster...

Thank you for another reminder that AI is not a great *software*.

What worries me, however, is what you said in this comment.

https://twitter.com/emollick/status/1670160714378407937

Executive summary: AI reverse engineers *the data* given your result.

This is a pure recipe for producing fraudulent results left, right and center.

Ethan, I can definitely confirm the "shadow AI" use of ChatGPT at work. I spoke at two AI conferences last week in Boston and talked with five people who are actively doing this. One was using his personal laptop at work over his smartphone's wi-fi and then emailing things to his work address. Industrious employees are not going to let an IT ban get in the way of using one of the biggest productivity tools ever.

Excellent post. I think the organizations that thrive in our AI era will be those that see AI tooling as a complement to human labor. ChatGPT can already free up a lot of time by helping someone rapidly synthesize data, especially for new tasks. Here's a contrived example I came up with: using ChatGPT to do customer segmentation analysis for the first time. https://davefriedman.substack.com/p/rapidly-do-customer-segementation . But you can adapt the methods I demonstrate in that post to virtually any business process or analytical task.
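To make the segmentation idea concrete, here is a minimal pure-Python sketch of quantile-based customer segmentation. The field names (`recency_days`, `total_spend`), the two-way median split, and the segment labels are illustrative assumptions, not taken from the linked post.

```python
# Hypothetical sketch: split customers into coarse segments using a
# median cut on recency (days since last order, lower is better) and
# total spend (higher is better). Field names are assumptions.
from statistics import quantiles

def rfm_segments(customers):
    """Return {customer_id: segment_label} for a list of customer dicts."""
    # quantiles(data, n=2) yields a single cut point: the median.
    recency_cut = quantiles([c["recency_days"] for c in customers], n=2)[0]
    spend_cut = quantiles([c["total_spend"] for c in customers], n=2)[0]
    segments = {}
    for c in customers:
        recent = c["recency_days"] <= recency_cut
        high_spend = c["total_spend"] >= spend_cut
        if recent and high_spend:
            segments[c["id"]] = "champion"
        elif recent:
            segments[c["id"]] = "promising"
        elif high_spend:
            segments[c["id"]] = "at-risk big spender"
        else:
            segments[c["id"]] = "dormant"
    return segments
```

A quantile split like this is deliberately simple; the point of using ChatGPT for a first pass is to get a workable structure quickly, then refine the thresholds and labels with domain knowledge.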

Shadow AI usage is indeed a risk for enterprises. Thank you for the analysis, and hello from Central Asian product managers!

I get the one-size-doesn't-fit-all problem of LLM AI. Such a malleable tool makes skills transfer challenging, which raises the question of how it can be used in customer support, for instance at the help desk.

Excellent post! I would love to hear more about how people can/will use AI tools to be more productive in their personal, everyday life as well. We definitely live in interesting times!

Actively promoting the use of Code Whisperer among my crew at the day job (it's the only approved tool right now). Mixed results so far, since it doesn't cover all of the current language versions and constructs we use, but especially for new college hires it does seem to get them closer to done faster in well-trod areas. Ironically, more senior engineers seem to be more resistant to the alternative work style.

Simple question: who can we credit for the cool comic artwork in the last image?

Brilliant post Ethan. Loved it.

Small companies are where it's at. While we wait for corporations to figure out how to be successful with AI, cyborgs in smaller companies will face far less disruption as they expand their value.

- Small companies often don't have a policy heavy environment to dissuade use.

- Employees within small companies typically have more flexibility in how they solve problems.

(e.g. Using AI to create a task specific app that compounds time savings across 30 employees - dodging the bureaucracy of a large company)

- As far as job security, I'd imagine employees in small businesses feel more irreplaceable than in a large company that has a full HR hiring farm.

Ultimately, I think cyborgs will still stay in the dark though. If you gained superhuman ability, would you want to cheapen the results of your production?

The impact of generative AI on office work is a topic that every company should consider. We are entering an era of massive information generation that will eventually have to be handled using AI. Generative AI has the potential to significantly impact the workplace by automating routine tasks, personalizing products and services, and enhancing creativity. It can also facilitate collaboration between humans and machines and create new revenue streams and market opportunities.

Imagine AI summarizing and responding to the emails that were generated using AI. The real question at that point is how companies can build a system and an AI strategy that makes this automation work in favor of the company. Business models will have to be adjusted, people will need to be upskilled, and the pace of digital transformation has to increase.

There is a lot to think about regarding AI making information overload worse. As far as individual productivity gains go, the busy work should be automated anyway. Perhaps these “cyborgs” will start spending their time thinking about innovative ideas and bring 10x growth in certain areas that were not possible before.

Thanks for another interesting post!

I would amend the following statement: "Large Language Models are a breakthrough technology for individual productivity, but not (yet) for organizations." The amended statement would read: "Large Language Models are a breakthrough technology based on ethically questionable training procedures. They have increased individual productivity, but not (yet) organizational productivity."

I'm not sure why individual users would want to admit publicly to using it, given the ethical problems. And given that there are already class action lawsuits against the makers of this technology, I'm not sure why organizations would want to buy in at this point.

Do you see a reason to rush forward before the ethics have been resolved?

Great read as usual. It links well with this article I wrote on #aieducation: https://www.linkedin.com/pulse/ai-your-dirty-little-secret-dr-nick-jackson
