The Doctor’s Edge: What AI Can’t See, Feel, or Understand
Why your human insight remains irreplaceable in the age of technology.
First, I’m not against AI; I use it. Like all technologies, it has many benefits, yet like all technologies, it needs to be kept in perspective. That is why I wrote this article.
Having spent 30 years in business consulting, 20 of them in healthcare environments across diverse settings and specialties, from emergency to hospice care, I began to see the patterns beyond the details.
While AI seemed different, the same conversations began to take place.
In a recent article in the New York Times¹, AI was touted as diagnosing medical conditions better than doctors. The claim rested on a single study of 50 doctors and six case histories, and yet it made the New York Times.
Let’s unpack how we got here and why it’s important to pull out of the details and take a helicopter view.
1990s - Doctors and the Internet
I was directly involved in implementing infrastructure in healthcare and training doctors in the use of the Internet in Ireland in the early 1990s.
Patients were arriving at clinics with wads of printouts showing detail upon detail about their diagnoses. Doctors were concerned that the Internet would replace their much-needed medical knowledge.
None of this happened. In my opinion, it instead allowed doctors to focus on bringing their core attributes and competencies to bear on the patient engagement, which naturally involved a broader and deeper understanding of what the patient was presenting with.
Roll on to today, and the same argument is being put forward: that technology (this time AI) will replace the doctor’s ability to see a broader and deeper perspective.
I want to approach this in two ways:
A balanced perspective on AI
What AI describes and what a Doctor understands
A balanced perspective on AI
For me, AI has followed the normal trajectory of a technology: initial excitement tinged with unrealistic expectations, which gives way to a lowering of expectations as the reality of what the technology can provide is understood.
Consider what venture capitalist Ben Horowitz recently observed about the latest generation of AI models: despite increasing computing power, “we’re not getting the intelligent improvements at all.”²
And:
The assumption that progress in artificial intelligence follows predictable “scaling laws” appears to be less a fundamental principle than a temporary phenomenon—one that may have captured a brief period of rapid advancement rather than an eternal truth. This realization raises important questions about the foundations of modern AI, with its hundred-billion-dollar valuations and ambitious promises of artificial general intelligence (AGI). Companies that have based their business models and valuations on continued exponential improvements may need to substantially revise their expectations and adapt their strategies as the limitations of current approaches become clearer.
And so, just as the hype around the Internet (including the ‘dot-com’ bubble) eventually settled down, we should not get oversold on AI and, in doing so, underplay the real agency of the doctor.
What AI describes and what a Doctor understands
One of the best ways to look at this is through language. Many people use AI in the form of ChatGPT, which is based on versions of GPT, a ‘Large Language Model’ (LLM).
Thin and Thick Descriptions
Gilbert Ryle, a philosopher of mind and language, had an incredible insight into different types of description. In summary, a thin description simply records what happened, while a thick description captures the nuances and depth of meaning of what happened.
Doctors work in multidisciplinary environments and so are very used to the benefit of an additional outside perspective.
Twitch or Wink?
Ryle’s perfect example was as follows:
Consider, he says, two boys rapidly contracting the eyelids of their right eyes. In one, this is an involuntary twitch; in the other, a conspiratorial signal to a friend. The two movements are, as movements, identical; from an I-am-a-camera, “phenomenalistic” observation of them alone, one could not tell which was twitch and which was wink, or indeed whether both or either was twitch or wink. Yet the difference, however unphotographable, between a twitch and a wink is vast; as anyone unfortunate enough to have had the first taken for the second knows. The winker is communicating in a precise and special way: (1) deliberately, (2) to someone in particular, (3) to impart a particular message, (4) according to a socially established code, (5) without cognizance of the rest of the company… a speck of behaviour, a fleck of culture, and voilà! - a gesture!³
As humans, we can tell the difference. It can be culturally specific, but within our own culture we will know. And so the language used to describe what happened will differ greatly.
It’s obvious that AI will not be able to grasp ‘thick description’ and will work very readily with ‘thin description’. And yet is the doctor-patient engagement not replete with thick descriptions, stratified layers of complex gestures? The case history will never convey what happens in a doctor-patient engagement. And so doctors remain supreme in the depth and breadth of what they glean from the patient.
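To make the point concrete, here is a toy sketch in Python. It is not a claim about any particular model; the sentence and the stand-in encoder are invented for illustration. The idea is simply that any function of the text alone must treat identical thin descriptions identically, so the twitch/wink distinction is not in the input at all.

```python
# Toy illustration: to a text model, identical thin descriptions are identical inputs.
# The social context that separates a wink from a twitch is not in the text.

thin_twitch = "The boy rapidly contracted the eyelid of his right eye."  # a twitch
thin_wink = "The boy rapidly contracted the eyelid of his right eye."    # a wink

def text_only_representation(description: str) -> int:
    # Stand-in for any deterministic text encoder (embedding, hash, classifier input).
    return hash(description)

# Identical strings necessarily receive identical representations.
assert text_only_representation(thin_twitch) == text_only_representation(thin_wink)
print("Twitch or wink? From the text alone, no model can say.")
```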
To gain an additional perspective on where AI stands:
In a 2023 DeepMind paper⁴, researchers proposed six levels of AGI, ranked by the proportion of skilled adults that a model can outperform. Current AI technology has reached only the lowest of these levels (what the paper calls ‘Emerging’ AGI).
In my experience, viewed through the lens of resource utilisation, there has been a steady decrease in the time allocated for doctor-patient engagement. For me, this is to the detriment of the thick description and of beneficial health effects in the long term.
The doctors who hold their centre and are cognizant of the ‘thick description’ are the ones who have long-term healing effects. Those who simply see and work with the ‘thin description’ fix the immediate problem, which is fine in an A&E setting, but the overall holistic healing is not there, and the patient re-presents with the same or a similar issue further down the road.
On the Ground
Feedback from doctors on the ground reflects a complex world. In paediatrics, a child usually has a single disease, which is relatively easy to diagnose and treat. In geriatrics, however, where there are many illnesses, many medications, and overlapping symptoms, the thick description is crucial to understanding these more complex cases. And so AI would assist in straightforward work but not in the more complex specialties.
Someone to Sue
Like all things, everything goes swimmingly until it doesn’t, and there are problems. Questions need to be posed: who takes responsibility for the diagnosis, and if there is a misdiagnosis, to whom do you send the solicitor’s letter? Are tech companies such as Google, Microsoft, IBM, Nvidia, Tempus or OpenAI willing to take responsibility in a litigation situation?
Or are we going to end up with AI doing some of the groundwork, while the doctor ultimately needs to be present to oversee the process anyway?
And as a side note, what is the insurance companies’ view on the engagement? Will insurers cover an AI diagnosis if there is no doctor involved? Are these conversations being had, and are the results being shared widely and transparently?
The 80/20 Rule
Like all things, the 80/20 rule applies: 80% of cases will be straightforward, but 20% will be complicated and will need the wisdom and agency of the doctor. The difficulty is that the 20% of complicated cases are sprinkled throughout all the cases, and so the doctor needs to be present to prevent any being missed and falling through the net.
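To put rough numbers on this, here is a minimal simulation sketch in Python. The figures are assumptions for illustration only: 10,000 cases, 20% of them complicated, and a hypothetical AI triage that correctly flags 90% of complicated cases.

```python
import random

random.seed(42)

N_CASES = 10_000        # assumed case volume (illustrative)
P_COMPLICATED = 0.20    # the '20%' of the 80/20 rule
AI_SENSITIVITY = 0.90   # assumption: AI flags 90% of complicated cases

missed = 0
for _ in range(N_CASES):
    complicated = random.random() < P_COMPLICATED
    flagged = complicated and random.random() < AI_SENSITIVITY
    # A complicated case the AI waves through as routine falls through
    # the net unless a doctor is present to review it.
    if complicated and not flagged:
        missed += 1

print(f"Complicated cases (expected): ~{int(N_CASES * P_COMPLICATED)}")
print(f"Missed without doctor oversight: {missed}")  # roughly 200
```

Even under these generous assumptions, roughly 200 complicated cases in every 10,000 would be waved through as routine, which is exactly why the doctor needs to remain present across the whole caseload.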
We are talking here about the healthcare of actual people: sons, daughters, dads, mothers, brothers, sisters. And so it’s really important that we make sure the best heartfelt care is given to those who are sick and vulnerable.
To be more human
Technologies like AI call on us to be more human, and to continually redefine what that means. AI is programmed for fast, efficient response and rapid execution.
It cannot reflect and, unlike a doctor, it doesn’t have a heart; it cannot replace human values, emotional intelligence, or ethical decision-making.
Let’s keep the perspective of the wink or the twitch!
1. https://substack.com/@nickpotkalitsky/p-153215421
2. Text: @nickpotkalitsky
3. Excerpt on Gilbert Ryle by Clifford Geertz in his book ‘The Interpretation of Cultures’ (1973), p. 6.
4. Research: ‘Levels of AGI for Operationalizing Progress on the Path to AGI’ (2023), https://arxiv.org/abs/2311.02462