Exploring AI at a Mile High

The Big Picture: When the medical doctor met the AI chatbot and started to worry about his job

If ChatGPT’s bedside manner is already better than that of an empathetic doctor, what’s around the corner?

Phil Nugent

Boulder, Colorado

Last updated on Oct 9, 2024

Posted on Oct 9, 2024

Whatever your concept of the “average” American medical doctor, Jonathan Reisman would not be it. In addition to having practiced medicine at Massachusetts General Hospital, this highly accomplished ER doc and pediatrician has done so in Nepal, in the slums of India, and among the Oglala Sioux of South Dakota.

And not only does he write about practicing medicine for the New York Times and The Washington Post, but he’s also written a book about the human body that’s been described as “elegant, elegiac, and deeply enjoyable.” Another reviewer speaks knowingly of Dr. Reisman’s humility and empathy.

Even so, it is this very same Dr. Reisman who earlier this week wrote an article for the New York Times titled, “I’m a Doctor. ChatGPT’s Bedside Manner Is Better Than Mine.”

Not exactly what you’d expect a doctor to loudly proclaim to the world. And yet, there he is, saying things like the following:

“As a young, idealistic medical student in the 2000s, I thought my future job as a doctor would always be safe from artificial intelligence.

“At the time it was already clear that machines would eventually outperform humans at the technical side of medicine…. But I was certain that the other side of practicing medicine, the human side, would keep my job safe. This side requires compassion, empathy and clear communication between doctor and patient.

“As long as patients were still composed of flesh and blood, I figured, their doctors would need to be, too. The one thing I would always have over A.I. was my bedside manner. When ChatGPT and other large language models appeared, however, I saw my job security go out the window.”

Wait, what?

What are we to make of this, especially coming from a doctor who seems to be overflowing with empathy, and who has always enjoyed connecting with his patients and celebrating their humanity?

I bring Dr. Reisman’s essay to your attention because the good doctor addresses what I believe is a common – and dangerous – lack of understanding out there among otherwise intelligent and accomplished people: the belief that generative AI tools can’t do what these experts do, and, further, that AI is unlikely to be able to do so anytime soon.

“Oh, sure,” these people might say, “customer service jobs are probably in trouble, but for those of us who are true experts and professionals, it’s not going to happen anytime soon!”

Well, perhaps they’re right. But again, that’s why I bring Dr. Reisman into the conversation.

He writes, “These new tools excel at medicine’s technical side — I’ve seen them diagnose complex diseases and offer elegant, evidence-based treatment plans. But they’re also great at bedside communication, crafting language that convinces listeners that a real, caring person exists behind the words.”

And then, the coup de grâce: “In one study, ChatGPT’s answers to patient questions were rated as more empathetic (and also of higher quality) than those written by actual doctors.”

It’s just following pre-written scripts

At that point, Dr. Reisman goes into a discussion of the importance of prepared scripts in a doctor’s practice, which he discovered in his third year of medical school. In a teaching session focused on how to break bad news to patients, he found out that there’s quite a time-tested list of dos and don’ts. Things like getting to the point quickly, not hiding behind medical terminology such as “adenocarcinoma,” and saying things like “I wish I had better news” instead of “I’m sorry.”

All of this may come as a surprise to you, as it did to the future Dr. Reisman. In the bigger picture, it raises the question: As a patient, am I witnessing true empathy on the part of my doctor, or do I just happen to have a medical professional who’s learned her lines and can deliver them well?

Of course, it also prompts another question: The more detailed a list like this is, doesn’t that make it an even better opportunity for ChatGPT to step in and help the doctor show some empathy?

That was definitely Dr. Reisman’s takeaway. But in his telling, it’s not all bad news for humans, even if he does offer some dark humor about where the medical profession is headed. In that spirit, he says this:

“Until A.I. completely upends health care (and my career), doctors will have to work in tandem with the technology. A.I. can help us more efficiently write notes in medical charts. And some doctors are already using A.I.-generated lines to better explain complex medical concepts or the reasoning behind treatment decisions to patients.”

Dr. Reisman concludes with a philosophical turn: “People worry about what it means to be a human being when machines can imitate us so accurately, even at the bedside. The truth is that prewritten scripts have always been deeply woven into the fabric of society. Be it greetings, prayer, romance or politics, every aspect of life has its dos and don’ts. Scripts – what you might call ‘manners’ or ‘conventions’ – lubricate the gears of society.”

He's right, of course. In courtrooms, churches, and political life, we have all been beholden to traditions – and their prescribed scripts – for centuries. What difference does it make if ChatGPT (or one of its brethren) helps us with some of that script writing when it’s needed?

After all, we know that most politicians on the national stage don’t write their own speeches. But as long as they deliver their lines well, do we care that the politician used a speechwriter? Or that the speechwriter used a thesaurus?

What lessons does the doctor have for the rest of us?

This leaves the real elephant in the room for another day: If a doctor as experienced and empathetic as Dr. Reisman is concerned about his job, what hope is there for the rest of us?

Let’s set aside the fact that – chances are – the good doctor is not truly concerned about his job. Let’s assume that he just used that line as a rhetorical device. What’s the larger point he’s trying to make?

I’d suggest his point is that all of us should be paying very close attention to what’s happening in the world of AI. Why? Two reasons: First, because it’s not a separate world from the one all of us inhabit. (It’s not Earth 2 from Another Earth.)

And second, because despite what some may tell you, it’s not hype. AI is improving more quickly than any technology the world has ever seen, in no small part because of the hundreds of billions of dollars – soon to surpass one trillion dollars – that are being invested in it.

Whether that will prove to be a good investment for all is a separate issue. There will likely be bumps in the road, but AI will keep getting better and better. And the speed at which it continues to improve is what matters.

So, if anyone is still dismissive of AI’s capabilities and continues to bring up Google’s glitch when its Gemini AI displayed racially diverse American founding fathers, it’s worth reminding them that that happened eight months ago. That might seem like yesterday to you and me, but it’s a long time in our incredibly fast-moving AI era. When a technology is much closer to the bottom of the hockey-stick curve than the top, dramatic change happens very, very quickly. And it keeps happening.

Be ready for it.

