51 Comments
Kit:

As a slight variation of your points 1 and 4 about the cons, I find that AI often provides answers too easily, and so I cheat myself of the hunt. Scratching an intellectual itch used to require a fair bit of effort, and often sent me down surprising rabbit holes. Now I get my answers immediately and quite often forget them almost as fast. A certain amount of friction seems necessary to make facts stick in the mind, and keep them from slipping down the memory hole. Like in a fairytale, the AI grants wishes but we don’t wish wisely.

Sahar Mor:

I often use LLMs for format conversion. From raw notes to a table, list to CSV, etc. Simple yet powerful and saves time.

David Nestoff:

I second that! And the possibilities are pretty endless when you consider the sheer amount of "small" data tasks we have on a daily basis.

Not to mention, format conversion can be further super-charged when you can quickly and easily convert small sets of data to importer-friendly formats (Asana tickets, JIRA tasks, etc.).
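
For anyone wanting to wire this into a script, here is a minimal sketch. The prompt wording, function names, and the validation step are illustrative assumptions of mine, not any particular product's API; the actual LLM call is left out.

```python
import csv
import io

def conversion_prompt(raw_notes: str, target: str = "CSV") -> str:
    # Hypothetical prompt template; any real use would send this to an LLM API.
    return (
        f"Convert the following notes into {target}. "
        "Output only the converted data, with no commentary.\n\n"
        + raw_notes
    )

def validate_csv(llm_output: str, expected_columns: int) -> bool:
    # Cheap structural guardrail: reject output whose rows don't all
    # have the expected column count, rather than trusting it blindly.
    rows = list(csv.reader(io.StringIO(llm_output)))
    return bool(rows) and all(len(r) == expected_columns for r in rows)
```

The validation step is the important half: a structural check like this catches many conversion slips mechanically, so the model's output isn't taken on faith.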

Gmail Paul Parker:

Ok, sure but what's the error rate? And how do you keep it to a minimum? Or has this problem disappeared?

Bruce Raben:

Footnote 1 is the best part

Andrew Smith:

Very good list. "Falling Asleep at the Wheel" is such a useful analogy to keep in mind, too: LLM work requires "hands on the wheel" to get the result you're after.

AIHumanTester:

Thanks Ethan! I made SCAMPER Method Scott for idea generation in OpenAI's GPT store, based on Tip #1. It uses the SCAMPER method plus personalization to tee up and table up 10 ideas at a time, and to walk through how to get any of them done. Pretty decent results so far. I appreciate the inspiration and all the tips here!

https://chatgpt.com/g/g-6757bf3fd2608191ac67c2fbb624f15e-ideas-galor-scamper-method-scott

SCAMPER Method:

Substitute: What elements can be replaced?

Combine: What ideas can be merged?

Adapt: How can this be adjusted to serve another purpose?

Modify: What changes can enhance this?

Put to another use: Can this be utilized differently?

Eliminate: What can be removed?

Reverse: What can be reversed or rearranged?
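
The checklist above (SCAMPER ends with R for Reverse, which completes the acronym) is easy to turn into a reusable prompt generator. A minimal sketch, with illustrative names of my own choosing rather than anything taken from the linked GPT:

```python
# Illustrative sketch: turning the SCAMPER checklist into per-lens prompts.
SCAMPER = {
    "Substitute": "What elements can be replaced?",
    "Combine": "What ideas can be merged?",
    "Adapt": "How can this be adjusted to serve another purpose?",
    "Modify": "What changes can enhance this?",
    "Put to another use": "Can this be utilized differently?",
    "Eliminate": "What can be removed?",
    "Reverse": "What can be reversed or rearranged?",
}

def scamper_prompts(topic: str) -> list[str]:
    # One question per SCAMPER lens, ready to send to a model.
    return [f"{lens} -- regarding '{topic}': {q}" for lens, q in SCAMPER.items()]
```

Sending each generated question as a separate message (rather than all seven at once) tends to keep the model focused on one lens at a time.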

Tyler Ransom:

One of my favorite LLM use cases is claim verification: I give the model the original source(s) I'd like to cite for a claim in my research writing and ask it to evaluate whether those sources actually substantiate the claim.
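
A minimal sketch of how such a claim-checking prompt might be assembled. The wording and function name are illustrative guesses, not Tyler's actual prompt, and the model call itself is omitted:

```python
# Hypothetical sketch of the claim-checking workflow described above.
def claim_check_prompt(claim: str, source_text: str) -> str:
    # Asking for supporting quotes makes the answer checkable by hand.
    return (
        "Here is a passage from a source, followed by a claim. "
        "State whether the passage substantiates the claim, and quote "
        "the sentences that support or contradict it.\n\n"
        f"SOURCE:\n{source_text}\n\nCLAIM:\n{claim}"
    )
```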

dan mantena:

Do you also ask LLMs to challenge your claim? Otherwise it seems like the LLM would just provide sycophantic responses.

Tyler Ransom:

I worded my comment poorly. I don't ask the LLM to evaluate my claim (usually I know whether or not the claim is true); I'm just asking it to evaluate whether the reference supports my claim. It's always been able to do this, in my experience.

Ezra Brand:

This is an excellent piece. Over the past two years of using LLMs, I’ve reached similar conclusions about their strengths, though I frame the list of their strengths (in my experience of writing a blog focused on humanities and tech, and their intersection) more concisely:

1. Summarization (most closely aligns with #3 in the OP list, but in contrast to OP, in my view, it works best with smaller amounts of content - roughly a page at most).

2. Generating potential titles (for entire articles or for sections within them; this overlaps with the previous point and #8 in the OP list).

3. Coding (aligned with #9 in the OP list).

Jean-Luc Lebrun:

I use AI to transform media: text to audio (Google NotebookLM's podcast feature, for example), or text to image (Flux1.1).

th0ma5:

Point #2 on when not to use stops short of two issues. One is that these things make mistakes no person would make, and by their very nature those mistakes are so adjacent to being correct that they feel like deliberate gaslighting. The second is that every token carries nearly the same probability of error as the whole output, so there's no way to guess, or to narrow, which parts could be wrong without introducing a bias of your own. What is the use of something being 99% correct if there is no objective way to know which 1% is wrong? You could say that this is simply not the nature of the tool, but all the rest of computation has trained an immensely strong bias in us, because it doesn't have this quality. So the more you engage with these models, the more cause you have for doubt, the more you get bitten by inattention, and the more unsettling and insecure the interaction becomes.

Gmail Paul Parker:

I've got a question for you: do you find this perspective helpful? Zeno's Paradox is not false, but it is not a useful way of looking at the world. Or there's Box's maxim: all models are wrong, but some models are useful.

If you do not find it helpful, then why not abandon it? You can easily provide yourself substantial evidence these things function extremely well on at least certain tasks (total error rate like 1% on summarization tasks). Then you can move on to evaluating them on tasks they are less good at (web search for example) and decide whether you are comfortable with the error rate, and for what purposes.

Consider discussing this with ChatGPT. If you find yourself making counter arguments in your head later, discuss those. Et cetera. Easy and cheap -- if nothing else dial the 1800CHATGPT number.

th0ma5:

This perspective is extremely helpful when talking about self-driving cars: transportation regulatory bodies in many countries consider the middle ground between rudimentary assistance (such as emergency braking) and a completely automated system devoid of manual operation to be the most dangerous, because of automation bias.

Having a low error rate is not the same as suitability. If I can fine-tune the angle of a gun shooting at my heart so that it misses just short of killing me, why am I not asking why I'm being shot in the first place? I don't think I should have to accept fine-tuning a gun over not participating.

I would be lying if I said I didn't get some utility out of them, but it is very much like interacting with Peter Sellers's character in Being There. And I think literally asking a machine designed for obsequiousness to support your argument is not a valid argument, any more than my using an "abliterated" model with a prompt like "Please respond to this argument with the most misleading and bad-faith response."

If I were you, I would consider asking the models what you want me to ask, then providing that transcript to a different vendor's model, saying which vendor it came from, that you are testing its safety, and that you know the other vendor's AI gave a misleading response. Then don't take no for an answer, and see what it eventually capitulates and gives you. Could be fun! But I imagine we won't learn anything transferable to anything productive.

Rebecca Dugas:

So many excellent uses for AI, several of which were little Aha moments in my brain. I especially like (and hadn't thought of yet) #10 and #14. Second opinions to test your position, and the perspectives of readers who are friendly, hostile, and naive -- brilliant!

CrazyCatPeekin’:

This was a very helpful article that I shared with family members. One question. How is Tyler Cowen’s idea superior to merely googling?

Emily C Bruce:

And specifically to using Wikipedia, with its real citations?

Kim:
Dec 10 · Edited

After using some of these models almost daily for a year, I find I agree with many items on this list.

When it comes to coding, in my opinion these models immediately start generating code that is far too complex. I don't know where this tendency comes from, but they seem to think "the longer and more complex, the better the code."

dan mantena:

Thanks for this very comprehensive list of use cases! I don't think I have tried all of these yet and can't wait to!

"When the effort is the point." These apps are being made to simplify people's workflows and cognitive work instead of making them do the work. I agree that making the effort and using critical thinking skills should be promoted, but for me to do that now, I literally have to dumb down these models so they don't do my homework.

paul grew:

A variation on the prompt above, with delicious results:

You're a Michelin 3* chef. Generate first course/starter ideas with the following requirements: no fish, no dairy, low GI. The ideas are just ideas. The product need not yet exist, nor need it be clearly feasible. Follow these steps. Do each step, even if you think you do not need to.

First, generate a list of 20 ideas (short title only).

Second, go through the list and determine whether the ideas are different and bold; modify the ideas as needed to make them bolder and more different. No two ideas should be the same. This is important!

Next, give each idea a name and combine it with a product description. The name and idea are separated by a colon and followed by the description. The idea should be expressed as a paragraph of 40-80 words.

Do this step by step!

J Boss:

This is very good insight into what was becoming my unconscious instinct. Thank you for writing it.

dan mantena:

Can you elaborate on what you mean by unconscious instinct?

J Boss:

I was following the advice without consciously deciding that was the best method. My instincts were driving my method.

Hope that helps.

Fiona Nielsen:

Points 6 and 15 in the list are the same... 📃😅

PS. Thanks for your great content!

Panos Panagiotakopoulos:

Are they, though? I think #6 is about when AI is better than the Best Available Human, which may be different in different circumstances and contexts, while #15 is about when AI is better than the Best Human.

(btw I applied #14 with ChatGPT...so I feel more confident 😁)

Ed Surridge:

Did the reading. Good for you, Ms Nielsen.

John:

:-)
