46 Comments

I often use LLMs for format conversion: raw notes to a table, a list to CSV, and so on. Simple yet powerful, and it saves time.


I second that! And the possibilities are pretty endless when you consider the sheer number of "small" data tasks we handle on a daily basis.

Not to mention, format conversion can be further super-charged when you can quickly and easily convert small sets of data to importer-friendly formats (Asana tickets, JIRA tasks, etc.).
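
For readers who want to script this rather than paste into a chat window, here is a minimal sketch using the OpenAI Python client; the model name, column headers, and prompt wording are illustrative assumptions, not a fixed recipe.

```python
# Sketch: convert free-form notes into CSV with an LLM.
# Assumes the `openai` package (v1+) is installed and OPENAI_API_KEY is set;
# the model name, column headers, and prompt wording are placeholders.
from openai import OpenAI

client = OpenAI()

raw_notes = """
Talked to Dana about the Q3 launch; she owns the landing page, due Aug 12.
Sam to draft the pricing FAQ by end of next week.
Follow up with legal on the new terms (no owner yet).
"""

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder; any capable chat model works
    messages=[
        {
            "role": "system",
            "content": (
                "Convert the user's notes into CSV with the header "
                "task,owner,due_date. Leave a field empty when a value is "
                "missing. Output only the CSV, with no commentary."
            ),
        },
        {"role": "user", "content": raw_notes},
    ],
)

print(response.choices[0].message.content)  # ready to paste into an importer
```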


As a slight variation of your points 1 and 4 about the cons, I find that AI often provides answers too easily, and so I cheat myself of the hunt. Scratching an intellectual itch used to require a fair bit of effort, and often sent me down surprising rabbit holes. Now I get my answers immediately and quite often forget them almost as fast. A certain amount of friction seems necessary to make facts stick in the mind, and keep them from slipping down the memory hole. Like in a fairytale, the AI grants wishes but we don’t wish wisely.


Footnote 1 is the best part


One of my favorite LLM use cases in my research writing is to give it the original source(s) and ask it to evaluate whether a claim I'm making is actually substantiated by the sources I want to cite in support of it.


Do you also ask the LLM to challenge your claim? Otherwise it seems like it would just give you sycophantic responses.


I worded my comment poorly. I don't ask the LLM to evaluate my claim (usually I know whether or not the claim is true); I'm just asking it to evaluate whether the reference supports my claim. It's always been able to do this, in my experience.
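
A rough sketch of that reference-checking workflow, under the same caveats as the conversion example above: the `check_citation` helper, prompt wording, verdict labels, and model name are all illustrative assumptions, as is the example claim.

```python
# Sketch: ask an LLM whether a cited source actually supports a claim.
# The check_citation helper, prompt wording, verdict labels, and model name
# are illustrative; adapt them to your own writing workflow.
from openai import OpenAI

client = OpenAI()

def check_citation(claim: str, source_excerpt: str) -> str:
    """Return the model's judgment on whether the excerpt supports the claim."""
    prompt = (
        f"Claim:\n{claim}\n\n"
        f"Source excerpt:\n{source_excerpt}\n\n"
        "Does the excerpt substantiate the claim? Answer with one of "
        "SUPPORTED / PARTIALLY SUPPORTED / NOT SUPPORTED, then explain in "
        "two sentences, quoting the relevant passage if there is one."
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

print(check_citation(
    claim="Median response latency fell by roughly 40% after the cache change.",
    source_excerpt="After enabling the cache, p50 latency dropped from 210 ms to 130 ms.",
))
```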


Very good list. "Falling Asleep at the Wheel" is such a useful analogy to keep in mind, too: LLM work requires "hands on the wheel" to get the result you're after.


Thanks, Ethan! I made SCAMPER Method Scott for idea generation in OpenAI's GPT store, based on Tip #1. It uses the SCAMPER method plus personalization to tee up 10 ideas at a time in a table and walk through how to get any of them done. Pretty decent results so far. I appreciate the inspiration and all the tips here!

https://chatgpt.com/g/g-6757bf3fd2608191ac67c2fbb624f15e-ideas-galor-scamper-method-scott

SCAMPER Method:

Substitute: What elements can be replaced?

Combine: What ideas can be merged?

Adapt: How can this be adjusted to serve another purpose?

Modify: What changes can enhance this?

Put to another use: Can this be utilized differently?

Eliminate: What can be removed?

Reverse: What can be rearranged or reversed?
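
For anyone who wants to try something similar outside the GPT store, here is a hedged sketch of how a SCAMPER checklist like the one above might be packaged into a reusable prompt; the wording and the `build_scamper_prompt` helper are hypothetical and not taken from the linked GPT.

```python
# Sketch: package the SCAMPER checklist into a reusable idea-generation prompt.
# The wording and the build_scamper_prompt helper are a hypothetical
# reconstruction, not the linked GPT's actual configuration.
SCAMPER_QUESTIONS = {
    "Substitute": "What elements can be replaced?",
    "Combine": "What ideas can be merged?",
    "Adapt": "How can this be adjusted to serve another purpose?",
    "Modify": "What changes can enhance this?",
    "Put to another use": "Can this be utilized differently?",
    "Eliminate": "What can be removed?",
    "Reverse": "What can be rearranged or reversed?",
}

def build_scamper_prompt(topic: str, context: str) -> str:
    """Build a prompt asking for 10 SCAMPER-style ideas on a topic."""
    checklist = "\n".join(f"- {lens}: {question}" for lens, question in SCAMPER_QUESTIONS.items())
    return (
        "You are an idea-generation assistant using the SCAMPER method:\n"
        f"{checklist}\n\n"
        f"Topic: {topic}\n"
        f"Context about the user: {context}\n\n"
        "Propose 10 ideas in a table (idea, SCAMPER lens, first step), "
        "then offer to walk through how to execute any of them."
    )

print(build_scamper_prompt("weekly team newsletter", "12-person B2B startup"))
```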


This is an excellent piece. Over the past two years of using LLMs, I've reached similar conclusions about their strengths, though in my experience (writing a blog focused on the humanities, tech, and their intersection) I frame the list more concisely:

1. Summarization (most closely aligns with #3 in the OP list, though unlike the OP, I find it works best with smaller amounts of content, roughly a page at most).

2. Generating potential titles (for entire articles or for sections within them; this overlaps with the previous point and #8 in the OP list).

3. Coding (aligned with #9 in the OP list).


I use AI to transform media: for example, text to audio (Google NotebookLM's podcast feature) or text to image (Flux1.1).


Point #2 on when not to use stops short of two issues. One is that these models make mistakes no person would make, and those mistakes are so close to correct that they feel like deliberate gaslighting. The second is that every token carries roughly the same probability of error as any other, so there is no way to guess which parts might be wrong, or to shorten the search for them, without introducing a bias. What is the use of something being 99% correct if there is no objective way to know which part is the wrong 1%? You could argue that this is simply not the nature of the tool, but it runs against a very strong expectation we carry over from the rest of computing, which doesn't have this quality. So the more you engage with these models, the more cause you have for doubt, the more you get bitten by inattention, and the more unsettling and insecure the interaction becomes.
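
One way to make the second concern concrete is a back-of-the-envelope calculation. The assumptions below (independent, uniform per-token error probability) are deliberate oversimplifications, not a claim about how these models actually fail:

```latex
% Back-of-the-envelope sketch, assuming (unrealistically) that each of the
% n tokens in an output is wrong independently with the same probability p.
\[
  \Pr(\text{entire output correct}) = (1 - p)^{n}
\]
% Even a very small p compounds quickly, e.g. p = 0.001 and n = 1000:
\[
  (0.999)^{1000} \approx e^{-1} \approx 0.37
\]
% So roughly two out of three such outputs contain at least one error, and
% because p is (by assumption) uniform across positions, there is no cheap
% signal about where to look: verification means reading everything.
```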


So many excellent uses for AI, several of which were little Aha moments in my brain. I especially like (and hadn't thought of yet) #10 and #14. Second opinions to test your position, and the perspectives of readers who are friendly, hostile, and naive -- brilliant!


This was a very helpful article that I shared with family members. One question: how is Tyler Cowen's idea superior to merely googling?


And specifically to using Wikipedia, with its real citations?


After using some of these models almost daily for a year, I find I agree with many items on this list.

When it comes to coding, in my opinion these models start generating overly complex code immediately, every time. I don't know where this tendency comes from, but they seem to think that "the longer and more complex, the better the code."


Thanks for this very comprehensive list of use cases! I don't think I have tried all of these yet and can't wait to!

'When the effort is the point.' These apps are being made to simplify people's workflows and cognitive work instead of making them do the work. I agree that making the effort and using critical thinking skills should be promoted, but for me to do that now, I literally have to dumb down these models so they don't do my homework for me.


This is very good insight into what was becoming my unconscious instinct. Thank you for writing it.


Can you elaborate on what you mean by unconscious instinct?


I was following the advice without consciously deciding that was the best method. My instincts were driving my method.

Hope that helps.


Points 6 and 15 in the list are the same... 📃😅

PS. Thanks for your great content!


Are they, though? I think #6 is about when AI is better than the Best Available Human, which may be different for different circumstances and contexts, while #15 is about when AI is better than the Best Human.

(btw I applied #14 with ChatGPT...so I feel more confident 😁)


Did the reading. Good for you, Ms. Nielsen.


:-)


I was surprised that Ethan thought there were any obvious examples where AI should not be used. I hope he listed all of the things he thought were obvious, because obviousness is in the eye of the beholder.
