There are several limitations that prevent us from applying other AI techniques to the
problem. Off the top of my head:
- We do not have the technology for machines to capture the contextual and
communication nuances that doctors pick up on. There can be a world of difference
between the exact same statement given by two different patients, or even the same
patient in two different situations. Likewise, the effect of a doctor's statement can be
quite literally the opposite depending on who the patient is and their state of mind. One
of the most important aspects of the GP's job is to handle these differences to achieve
the best possible outcomes for their patients.
- Society at large is not ready to trust machines to make such intimately relevant
decisions. It is not uncommon for patients to hide relevant information from their
doctors, and to blatantly ignore their recommendations. This would be many
times worse if the doctor part weren't human.
- We cannot apply modern inference techniques (e.g. deep learning) to the global
problem because we have strict rules that prevent medical data collection and analysis
without a clear purpose. Furthermore, these techniques tend to produce unexplainable
results, which is unacceptable in this field. As a result, there's not enough political
capital to relax those rules.
reply
“Super important” — more like “super nice-to-have.” Hospitals don’t have any single
person on staff who stays attached to particular in-patients. Who knows you? Your
chart.
Yes, of course, hospital care would be better in many ways if we did have somebody
who statefully understood particular patients’ needs.
But what I’m saying is, the GPs in hospitals could be replaced with stateless diagnostic
AI without making hospital care any worse than it is now. And hospital care is a large
part of the medical system, so only replacing diagnostics there (while leaving
primary-care GPs alone) would still be a major optimization, freeing many doctors to
provide better care, go into specialties, etc.
reply
That's simply false. You obviously have no idea how hospital care is actually delivered.
To start with, every admitted patient has an assigned attending physician who is
responsible for coordinating the care team. Some things can be documented in the
patient chart but there are always gaps. Clinical decision support systems for partially
automating diagnosis could potentially be helpful in some limited circumstances but
the ones built so far mostly don't work very well.
reply
I’ll second how misguided that view of hospital care is. There is ALWAYS a treating
team, and always an admitting consultant/attending.
reply
Knowing the ontology of your patients and their risk is also a core part of a doctor's job,
but we can do it with AI too. Hell, ontological engineering had a revamp specifically so
that we could have a standardized model to describe any and all "parts" of a "whole" in a
way that machines could understand.
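To make the "parts of a whole" point concrete, here's a minimal illustrative sketch (not any particular standard or medical ontology; the anatomy terms are just examples): part-of relations stored as a graph and queried transitively, which is the basic machine-readable shape such models take.

```python
# Toy part-of ontology: each entry maps a part to the wholes it
# directly belongs to; queries follow the relation transitively.
class Ontology:
    def __init__(self):
        self.part_of = {}  # part -> set of direct wholes

    def add(self, part, whole):
        self.part_of.setdefault(part, set()).add(whole)

    def wholes(self, part):
        """All wholes `part` belongs to, directly or transitively."""
        seen, stack = set(), [part]
        while stack:
            for w in self.part_of.get(stack.pop(), ()):
                if w not in seen:
                    seen.add(w)
                    stack.append(w)
        return seen

onto = Ontology()
onto.add("left ventricle", "heart")
onto.add("heart", "cardiovascular system")
onto.add("cardiovascular system", "human body")

# The transitive query recovers every enclosing whole.
print(onto.wholes("left ventricle"))
```

Real systems layer a lot more on top (relation types, axioms, reasoning), but the part/whole backbone is this simple.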
reply
What data is being collected on you? A once-a-year blood test, if even that?
I actually suspect it would be trivial to beat my doctor after 5 years of higher-frequency
full-blood-panel data collection.
Ten full blood panel samples a year; have 20 million people do that, and we have a data
set we can do classification on. I think my doctor is kind of out of business then.
Will never happen in my life though with health insurance and health bureaucracy.
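The classification idea above can be sketched in a few lines. Everything here is synthetic and illustrative (made-up data, a toy nearest-centroid classifier standing in for whatever model one would actually train): pool many people's repeated panels, fit on labeled examples, classify new ones.

```python
# Toy sketch: each "person" is a year of repeated blood panels,
# flattened into one feature vector; labels are synthetic.
import random
random.seed(0)

def make_panels(mean, n_people, panels=10, markers=5):
    """Synthetic data: n_people rows of panels * markers values."""
    return [[random.gauss(mean, 1.0) for _ in range(panels * markers)]
            for _ in range(n_people)]

healthy = make_panels(0.0, 200)   # synthetic "healthy" cohort
at_risk = make_panels(1.0, 200)   # synthetic "at-risk" cohort

def centroid(rows):
    return [sum(col) / len(rows) for col in zip(*rows)]

c_healthy, c_risk = centroid(healthy), centroid(at_risk)

def classify(panel):
    # Assign to whichever cohort centroid is closer (squared distance).
    d = lambda c: sum((a - b) ** 2 for a, b in zip(panel, c))
    return "at_risk" if d(c_risk) < d(c_healthy) else "healthy"

test = make_panels(1.0, 50)
accuracy = sum(classify(p) == "at_risk" for p in test) / len(test)
print(f"toy accuracy on synthetic at-risk panels: {accuracy:.2f}")
```

Of course, the hard parts the comment glosses over (getting labeled outcomes for 20 million people, consent, confounders) are exactly where this stops being trivial.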
reply
Beat your doctor on what? You can already get 10 full blood panel tests per year if you
want. You can just pay for it and don't need insurance. But what will you do with the
data? For most people the results won't tell you anything useful.
https://www.ondemand.labcorp.com/lab-tests/comprehensive-hea...
reply
On the other hand, there are hypochondriacs and people with low pain tolerance. While
they can certainly also become seriously ill -- and one must never forget this -- the
tempo and pace of workup and order of intervention is markedly different, absent other
information that shifts the pretest probabilities.
reply
There is no evidence that diagnostic robots would actually produce better outcomes.
The hypochondriacs are already able to Google their symptoms and make themselves
sick with anxiety.
reply
Know what would be a great use of AI? Something real like analyzing all of the telemetry
in EMRs to provide better guidance to doctors to proactively guide people. Some
CVSHealth chatbot telling me whatever is a waste of time.
reply
I think the person was suggesting using the results of the AI to inform the doctors, not
to replace them. Which is something I would like as well.
reply
Automated diagnosis applications have existed for decades. They have proven useful in
limited circumstances for certain specialties and rare conditions but for routine medical
care they're more hassle than they're worth.
reply
A good doctor’s memory of patients spanning decades of a career and all of the various
treatments that they did or did not respond to is very valuable. It’s a good thing that they
offload as much as possible to other people so they can focus on doing what they do
best.