Discussion about this post

The Bull and The Bot:

Great breakdown! One thing I always mention when people say they’re wary of using AI assistants because of hallucinations: the mindset needs to shift. These aren’t just Q&A robots. They can actually be your critical thinking partners.

The real value isn’t in asking “what’s the answer?” It’s in using these models to stress-test your thinking. They can:

1. Expand your ideas

2. Validate or poke holes in them

3. Surface POVs you may have completely overlooked

Yes, they’re great for answering simple questions, but in doing so they can also hallucinate. The key is in how you engage with them.

Give o3 a thesis, for example a stock idea and your reasons for liking it. Then give it a persona, like a skeptical hedge fund portfolio manager. Ask it for 10 reasons that support your case and 10 that challenge it. You’ll get new angles, risks you hadn’t considered, and potential counterarguments to prepare for. The conversation is no longer about being right or wrong. It’s about being more rigorous.
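For anyone who wants to script this instead of typing it into the chat UI, here's a minimal sketch of that persona + stress-test prompt as a helper function. It assumes the OpenAI Python SDK and a chat model like o3; the function name and prompt wording are just illustrative, not an official pattern.

```python
def build_stress_test_prompt(thesis: str, persona: str, n: int = 10) -> list[dict]:
    """Build chat messages that ask the model to argue both sides of a thesis."""
    # The persona goes in the system message so it shapes the whole response.
    system = f"You are {persona}. Be rigorous and specific; do not flatter the user."
    user = (
        f"Here is my thesis:\n{thesis}\n\n"
        f"Give me {n} reasons that support this thesis and "
        f"{n} reasons that challenge it, each with a one-line rationale."
    )
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": user},
    ]

messages = build_stress_test_prompt(
    thesis="NVDA is a buy because data-center demand will keep growing.",
    persona="a skeptical hedge fund portfolio manager",
)

# To actually run it (requires an API key), something like:
# from openai import OpenAI
# reply = OpenAI().chat.completions.create(model="o3", messages=messages)
```

Swapping the persona string (a short seller, a regulator, a customer) is a cheap way to surface the overlooked POVs mentioned above.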

Bottom line: don’t use LLMs only as search bars. Start using them as strategic thought partners. Pick their brains for information that sharpens your thinking and helps YOU make more informed decisions.

Jason Scharf:

I use all of these for various tasks, mostly in the way you describe. Two additional thoughts:

1) I just started using Grok Deep Search in Tasks. It has been an amazing tool for keeping up on news (in my case, news and trends in a very specific niche: Austin Bio & Health).

2) I have found memory in ChatGPT to be a superpower, since many of the threads I have are linked in various ways. However, I can't get it to stop using em-dashes no matter how many times I tell it to remember or put it in custom instructions.

