Discussion about this post

Dov Jacobson:

It was humbling when I first used ChatGPT to review a peer review I had written. The article under review presented the efficacy of a health device based on a randomized controlled trial sponsored by the device's manufacturer. Of course I was alert to bias, and I discovered a few minor instances. But the LLM mildly mentioned a discrepancy between the control and treatment conditions that had been worded so slyly as to evade human detection. Pulling on the thread the LLM exposed uncovered a deceptive practice that invalidated the study's conclusions, and (after the authors protested "This is the way it is always done!") a large body of previous sponsored research as well.

[ If you are curious: the researcher required subjects to "follow the manufacturer's instructions" for each device. In practice, treatment group subjects were told to comply with the ideal duration, frequency and manner of use specified in the printed instructions. But control group subjects were given a simpler competing device that offered no enclosed instructions and thus were given no performance requirements at all for participation in the research. ]

Ezra Brand:

In my personal experience, this aspect is key: "[M]ore researchers can benefit because they don’t need to learn specialized skills to work with AI. This expands the set of research techniques available for many academics."

For the first time in my life, over the past year I've been able to do serious text analysis on relatively large texts with Python. (Specifically, on the Talmud, which is ~1.8 million words.)

The barrier to entry for doing meaningful coding is now far lower.
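
A minimal sketch of the kind of analysis this makes accessible, assuming the corpus has already been exported to a local UTF-8 plain-text file (the filename talmud.txt and the helper top_words are illustrative, not taken from the comment):

    import re
    from collections import Counter

    def top_words(path: str, n: int = 20) -> list[tuple[str, int]]:
        """Return the n most frequent word tokens in a UTF-8 text file."""
        counts: Counter[str] = Counter()
        with open(path, encoding="utf-8") as f:
            for line in f:
                # \w+ is Unicode-aware, so Hebrew and Aramaic tokens are counted too
                counts.update(re.findall(r"\w+", line.lower()))
        return counts.most_common(n)

    if __name__ == "__main__":
        for word, count in top_words("talmud.txt"):
            print(f"{word}\t{count}")

Reading line by line keeps memory use flat even for a corpus of a couple million words.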
