80 Comments
Kit:

As a slight variation of your points 1 and 4 about the cons, I find that AI often provides answers too easily, and so I cheat myself of the hunt. Scratching an intellectual itch used to require a fair bit of effort, and often sent me down surprising rabbit holes. Now I get my answers immediately and quite often forget them almost as fast. A certain amount of friction seems necessary to make facts stick in the mind, and keep them from slipping down the memory hole. Like in a fairytale, the AI grants wishes but we don’t wish wisely.

Sahar Mor:

I often use LLMs for format conversion. From raw notes to a table, list to CSV, etc. Simple yet powerful and saves time.

David Nestoff:

I second that! And the possibilities are pretty endless when you consider the sheer amount of "small" data tasks we have on a daily basis.

Not to mention, format conversion can be further super-charged when you can quickly and easily convert small sets of data to importer-friendly formats (Asana tickets, JIRA tasks, etc.).
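
To make this concrete, here is a minimal sketch of this kind of notes-to-CSV conversion, assuming the OpenAI Python client; the model name, prompt, and column schema are illustrative rather than anyone's actual setup:

```python
# Minimal sketch: convert free-form notes to CSV with an LLM.
# Assumes the OpenAI Python client (pip install openai) and an
# OPENAI_API_KEY set in the environment; the model and schema
# below are illustrative.
from openai import OpenAI

client = OpenAI()

raw_notes = """
Met with the design team Tuesday, they need mockups by 3/14.
Bob owns the landing page, Priya owns the checkout flow.
"""

response = client.chat.completions.create(
    model="gpt-4o-mini",  # any capable model works
    messages=[
        {"role": "system",
         "content": "Convert the user's notes into CSV with the header "
                    "task,owner,due_date. Output only the CSV, no prose."},
        {"role": "user", "content": raw_notes},
    ],
)

print(response.choices[0].message.content)
```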

Gmail Paul Parker:

Ok, sure, but what's the error rate? And how do you keep it to a minimum? Or has this problem disappeared?
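
One common mitigation, offered as a sketch rather than anything from the thread: because the converted output is machine-readable, it can be validated mechanically and re-requested on failure. The expected header here is the illustrative one from the sketch above:

```python
# Sketch: mechanically validate LLM-produced CSV so conversion errors
# surface immediately instead of corrupting a downstream import.
import csv
import io

EXPECTED_HEADER = ["task", "owner", "due_date"]  # illustrative schema

def validate_csv(text: str) -> list[dict]:
    """Parse LLM output as CSV and fail loudly on structural errors."""
    rows = list(csv.reader(io.StringIO(text.strip())))
    if not rows or [h.strip() for h in rows[0]] != EXPECTED_HEADER:
        raise ValueError(f"unexpected header: {rows[:1]}")
    for i, row in enumerate(rows[1:], start=2):
        if len(row) != len(EXPECTED_HEADER):
            raise ValueError(f"row {i} has {len(row)} fields")
    return [dict(zip(EXPECTED_HEADER, row)) for row in rows[1:]]

# On ValueError, re-prompt the model with the error message appended;
# this catches structural mistakes, though not wrong field contents.
```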

Bruce Raben:

Footnote 1 is the best part

Andrew Smith:

Very good list. "Falling Asleep at the Wheel" is such a useful analogy to keep in mind, too: LLM work requires "hands on the wheel" to get the result you're after.

AIHumanTester:

Thanks Ethan! I made SCAMPER Method Scott for idea generation in OpenAI's GPT store, based on Tip #1. It uses the SCAMPER method plus personalization to tee up 10 ideas at a time in a table and walk through how to get any of them done. Pretty decent results so far. I appreciate the inspiration and all the tips here!

https://chatgpt.com/g/g-6757bf3fd2608191ac67c2fbb624f15e-ideas-galor-scamper-method-scott

SCAMPER Method:

Substitute: What elements can be replaced?

Combine: What ideas can be merged?

Adapt: How can this be adjusted to serve another purpose?

Modify: What changes can enhance this?

Put to another use: Can this be utilized differently?

Eliminate: What can be removed?

Reverse: What can be reversed or rearranged?
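
For anyone who wants to wire this up outside the GPT store, here is a minimal sketch of a SCAMPER-style ideation prompt as a plain API call; the wording is illustrative and not the linked GPT's actual configuration:

```python
# Sketch: a SCAMPER ideation prompt sent as an ordinary chat request.
from openai import OpenAI

SCAMPER_PROMPT = """You are an idea-generation assistant using SCAMPER.
For the topic below, produce 10 ideas, each tagged with the SCAMPER
lens it came from (Substitute, Combine, Adapt, Modify, Put to another
use, Eliminate, Reverse), then outline first steps for each idea.
Topic: {topic}"""

client = OpenAI()
reply = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice
    messages=[{"role": "user",
               "content": SCAMPER_PROMPT.format(topic="reusable packaging")}],
)
print(reply.choices[0].message.content)
```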

Tyler Ransom:

One of my favorite LLM use cases is claim-checking in my research writing: I give it the original source(s) for a claim I'm making and ask it to evaluate whether those sources actually substantiate the claim.

dan mantena:

Do you also ask LLMs to challenge your claim? Otherwise it seems like the LLM would just provide sycophantic responses for you.

Tyler Ransom:

I worded my comment poorly. I don't ask the LLM to evaluate my claim (usually I know whether or not the claim is true); I'm just asking it to evaluate whether the reference supports my claim. It's always been able to do this, in my experience.
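
A minimal sketch of this kind of source-support check, assuming the OpenAI Python client; the function name and prompt wording are illustration, not Tyler's actual setup:

```python
# Sketch: ask an LLM whether a source excerpt substantiates a claim.
from openai import OpenAI

client = OpenAI()

def source_supports_claim(source_text: str, claim: str) -> str:
    """Return the model's judgment on whether the source backs the claim."""
    prompt = (
        "Below is a source excerpt and a claim from my draft. State "
        "whether the source substantiates the claim, quoting the relevant "
        "passage or explaining the gap.\n\n"
        f"SOURCE:\n{source_text}\n\nCLAIM:\n{claim}"
    )
    reply = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative
        messages=[{"role": "user", "content": prompt}],
    )
    return reply.choices[0].message.content
```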

Jean-Luc Lebrun:

I use AI to transform media: text to audio (for example, Google NotebookLM's podcasts) or text to image (Flux1.1).

Donghu Kim:

As college writers experimenting with AI, we agree with your advice that AI should not be used where high accuracy is required. As college students, we have experienced “hallucinations” from AI chatbots when asking questions requiring high accuracy. Especially with mathematics and some science-related prompts, AI tends to generate false responses confidently. Even if we write careful prompts with clear instructions, AI models can still make mistakes. For example, when asked high-level math questions, generative AI often produces wrong answers partway through solving equations. Additionally, when performing a lab experiment, AI cannot observe the real-life situation, so it cannot deliver reliable results. For instance, when timing and recording observations of a reaction, AI can guess at the rate at which the reaction occurs but cannot measure it accurately. Overall, your advice to avoid AI in situations that require high accuracy strongly resonates with us and solidifies our existing beliefs.

Rob Jenkins:

As college students in STEM who have used AI to help explain concepts in our classes, we find your advice about AI being better than humans in certain roles somewhat concerning. Regarding your 6th and 15th claims in particular, we seek clarification about the potential threat of AI. While it has served us well for short clarifications, we also feel it's important to recognize the implications for our future. In other words, we worry that AI will reach a point where we could never surpass it in terms of knowledge. Our future jobs could be threatened, making our degrees nearly obsolete. Do you think AI will ever reach this point?

Linganguli:

As college writers experimenting with AI, we agree with your advice that AI works best as a thinking partner rather than a replacement for learning. In particular, your recommendation to use AI for brainstorming is helpful because it lowers the barrier to getting started without doing the intellectual work for us. We have found this useful in writing courses when choosing research angles or generating questions. However, we strongly agree with your warning against using AI to write full drafts, especially in classes where productive struggle is the point. When AI writes an essay, students miss the process of organizing ideas, developing a voice, and revising, all skills central to academic writing. For example, both of us are STEM students, and in most of our classes we are even allowed to use AI, precisely so we can see that it still cannot answer our problem sets with accuracy and precision. We believe AI supports learning best when it assists effort, not when it replaces it.

Mariana Sinisterra:

As college students taking a writing class, we can personally relate to and agree with this article. Based on our experience, we agree with your claim that AI is useful for low-stakes, high-quantity work like brainstorming, but we think your warning about “when the effort is the point” is especially important for students. In our writing courses, assignments such as drafting and revising essays are designed to create productive struggle, where learning about the topic and about writing happens through trial, error, and reflection. This process is a vital part of learning and can be undermined by the irresponsible use of AI. While AI can help generate ideas or suggest alternative phrasings, using it to skip drafting or revising steps risks flattening our thinking and weakening our understanding of the material. Skipping this process reduces both the effort put into assignments and the learning that can be gained from them. For example, using AI to brainstorm research questions can help students get started, but relying on it to summarize readings or write analysis prevents us from developing our own interpretations and opinions. We agree with your framework overall, but believe students need clearer guidance on where assistance ends and learning begins. Up to what point should we rely on AI when it comes to writing?

Davis:

Hi, Mr. Mollick! We are college students learning about AI use in the classroom; our professor shared your article as one of our resources. We agree with your point that AI is most useful when we already have enough knowledge to evaluate its output critically. But that's why we're skeptical of point 11, using AI as a potential “co-founder” or mentor in entrepreneurial contexts. AI can be harmful when we rely on it to learn, since it may replace the cognitive struggle necessary for deep understanding. We believe AI should serve as a learning assistant, helping us quickly gather information and organize key points for further research. Ultimately, we must rely on our own judgment to verify the accuracy of AI-provided information. We believe AI can be a mentor only when it supports thinking rather than replacing it. If it becomes the source of understanding too early, without the cost of real cognitive struggle, it will weaken our ability to think on our own.

Thomas and Xiaolong:

As college writers experimenting with AI, we agree with your distinction between using AI for low-value tasks and avoiding it when thinking and learning are the priority. Using AI as a brainstorming tool or to outline a paper is definitely helpful in our writing classes, especially when you can't get any thoughts on paper. For instance, you could generate possible topic ideas or theses to organize your thoughts for the paper. This lets us spend most of our time strengthening our arguments once we have a concrete idea. At the same time, we are aware that using AI to write papers carries risks, especially for reflective writing, which depends heavily on our own thinking. In peer-review work, relying too much on AI can hinder our learning and replace our voice. This is why AI should only be used as a support tool and not as a substitute for learning.

Kim:

After using some of these models almost daily for a year, I find I agree with many items on this list.

When it comes to coding, in my opinion these models always and immediately start to generate code that is way too complex. I don't know where this tendency comes from, but they seem to think "the longer and more complex, the better the code."

Ezra Brand:

This is an excellent piece. Over the past two years of using LLMs, I've reached similar conclusions about their strengths, though I frame the list more concisely (in my experience writing a blog focused on the humanities, tech, and their intersection):

1. Summarization (most closely aligns with #3 in the OP list, though in contrast to the OP, in my view it works best with smaller amounts of content, roughly a page at most).

2. Generating potential titles (for entire articles or for sections within them; this overlaps with the previous point and #8 in the OP list).

3. Coding (aligned with #9 in the OP list).

Davin Martin:

As a college student, I'm worried about how briefly you mention that users should not be using AI when the content being produced is not the final goal, as this may be the single most important caveat to its use. Nearly every one of the positives should carry an asterisk: "AI is good for coding *except when you are learning a new concept in code" or "AI is good at summarizing information *unless the details and the why are what you should be understanding." For students, when it comes to assignments, the process of the work is the goal, not the deliverable. The professor is not asking for 25 essays to use, but making sure each student understands the content. At that point, using AI to write a paper is roughly analogous to having AI take a test on your behalf, something more universally frowned upon. Students will consistently use AI as a shortcut to an answer, skipping right over actually absorbing the content.