Need for Physician Empowerment in the Face of Artificial Intelligence Overruling
Summary
Artificial Intelligence (AI) has become an essential part of the computer technology industry. Machine learning, knowledge engineering, pattern identification, problem solving and robotics are making important progress. Health care might benefit from these new tools, which can help physicians make decisions in complex clinical situations, diagnose diseases, recognize images, adapt treatments to individual cases and even interact through robots capable of object manipulation. However, machine capabilities depend on human programming choices. Furthermore, AI can only be as smart as the data it is served. A regulatory body is needed to oversee the development of "superintelligence". Data sources should be well controlled for their quality and comprehensiveness. Results obtained by machines should be considered only as a "consultant's advice" before decision making. The quality of care and the performance of practice should be constantly evaluated. Medical schools should adapt their education programs by teaching future doctors the methods applied in AI and by training them to measure the end-results of care in relation to various decisions. Health care professionals should stay up to date in order to protect patients' privacy and to apply medical ethics.
Keywords
Artificial intelligence, Health care, Decision making
Artificial Intelligence Successes in Health Care
AI is an area of computer science leading to the creation of intelligent automated machines. Innovative software and digital communication technologies allow computers and robots to work and react like humans. AI mimics "cognitive" functions such as reasoning and problem solving, with the capability to compete with humans in strategic games (such as chess), to drive cars autonomously or to understand human speech. AI tools include knowledge representation, learning by experience, planning, natural language processing and object manipulation. AI research developments include methods based on logic, statistics, conditional probabilities, computational intelligence, mathematical optimization, artificial neural networks and many others.
Among the applications of AI in health care, one can cite programs where computers assist physicians with difficult decisions in complex clinical cases. They can help to reach a diagnosis and to choose the right treatment. The first article on this subject appeared in the literature (Ledley and Lusted) about 60 years ago [1]. A few years later, several AI programs, like those of F.T. de Dombal in gastroenterology [2], provided submarine personnel with an aid based on conditional probabilities to determine, in case of acute abdomen, whether or not the submarine had to surface in order to transfer the patient by helicopter to a surgical service.
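In its simplest form, such an aid combines disease priors with the conditional probabilities of the observed findings, in the spirit of Bayes' rule. The sketch below illustrates the principle only: the priors, conditional probabilities and candidate conditions are invented for illustration, not taken from de Dombal's actual tables, and findings are naively assumed to be independent.

```python
# Minimal naive-Bayes sketch of a conditional-probability diagnostic aid.
# All probabilities and condition names below are illustrative assumptions.

priors = {"appendicitis": 0.25, "non_specific_pain": 0.55, "cholecystitis": 0.20}

# P(finding present | condition), per finding and condition (invented values).
likelihoods = {
    "appendicitis":      {"rlq_pain": 0.80, "nausea": 0.70, "fever": 0.60},
    "non_specific_pain": {"rlq_pain": 0.15, "nausea": 0.30, "fever": 0.10},
    "cholecystitis":     {"rlq_pain": 0.10, "nausea": 0.60, "fever": 0.50},
}

def posterior(findings):
    """P(condition | findings), assuming findings are independent given the condition."""
    scores = {}
    for condition, prior in priors.items():
        p = prior
        for finding in findings:
            p *= likelihoods[condition][finding]
        scores[condition] = p
    total = sum(scores.values())
    return {c: round(p / total, 3) for c, p in scores.items()}

print(posterior(["rlq_pain", "fever"]))
```

A high posterior for a surgical condition would then support surfacing the submarine for a helicopter transfer, but only as advice: the final decision stays with the clinician.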
Literature review and recent analytic methods have much to offer in interpreting large and complex data sets. These interactive decision supports for researchers and clinicians can be considered a big step forward for deep learning in medicine. Impressive new software packages, including machine learning and natural language processing, are becoming commercially available. Image analysis can aid pathologists, dermatologists or radiologists in the differential diagnosis of complex cases. Monitoring patients at a distance, using telemedicine and "mobile health devices" (wearable computers linked to physiological and electrical sensors measuring blood pressure, heart rhythm, glycaemia,…), can be combined with AI programs that follow up the monitored patients and can assist physicians with the treatment modifications that are indicated.
AI can also be applied to drug prescriptions. Such programs are particularly useful for systematically checking inappropriate associations of treatments and contra-indications, and for determining the accurate dosage of high-cost products like immunosuppressive drugs. They might save lives as well as millions of dollars.
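A minimal sketch of such a medication check is given below. The interacting pair and the dose ceiling are hypothetical placeholders, not a validated clinical knowledge base; a real system would query curated drug interaction and dosing databases.

```python
# Minimal rule-based medication check; the rules below are hypothetical placeholders.

INTERACTING_PAIRS = {frozenset({"drug_A", "drug_B"})}     # contraindicated association
MAX_DAILY_DOSE_MG = {"immunosuppressant_X": 200}          # dosage ceiling (mg/day)

def check_prescription(drugs, doses_mg):
    """Return alert messages for contraindicated associations and excessive doses."""
    alerts = []
    for pair in INTERACTING_PAIRS:
        if pair <= set(drugs):
            alerts.append("Contraindicated association: " + " + ".join(sorted(pair)))
    for drug, dose in doses_mg.items():
        limit = MAX_DAILY_DOSE_MG.get(drug)
        if limit is not None and dose > limit:
            alerts.append(f"{drug}: {dose} mg/day exceeds the {limit} mg/day limit")
    return alerts  # advice for the prescriber, who keeps the final decision

print(check_prescription(["drug_A", "drug_B", "immunosuppressant_X"],
                         {"immunosuppressant_X": 250}))
```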
Robots were proposed by Karel Čapek in 1921 as fictional humanoids. The first autonomous robots, controlled by electronic means, were built in the UK in 1948. They were created in order to reproduce human behaviors. Using AI, they are becoming more and more sophisticated and can perform repetitive instrumental tasks, like surgical wound closure, or they can be used to guide physical therapy movements, to dialogue with patients in order to inform them, or to assist physicians in diagnosing psychological disturbances. There is some fear that robots could not only interpret data, but that they might also make decisions.
Artificial Intelligence Mishaps and Pitfalls
The best AI machine has no personal consciousness. Its ethical choices depend on the humans who programmed its behaviors and advice. It has no empathy and cannot generate sentiments by itself. A computer can use algorithms (based on logic and mathematics) that end up outperforming a human, but it cannot be "happy"! It can learn by experience, but it lacks intuition, common sense and a global, synthetic view. AI can lead to the best as well as to the worst. Widespread use of AI could have unintended consequences. It might eliminate jobs, replacing physical medicine therapists or making hospital accountants and technicians redundant, but it might also create new jobs ranging from personal health care to psychological support.
A main question is to understand why, after 40 years, Clinical Decision Support Systems (CDSS) designed for interactive use by clinicians have still not been accepted and integrated into the work of clinicians. Ted Shortliffe [3], a leading expert in AI and a practicing physician (internist), considers six reasons:
1. Black boxes are unacceptable. Transparency is required, so that users can understand the basis of AI advice
2. Time is a scarce resource. A CDSS should be efficient in the busy clinical environment
3. Complexity and lack of usability thwart use. A CDSS should be simple and intuitive
4. Relevance and insight are essential. Answers should reflect the understanding of the pertinent domain
5. Delivery of knowledge and information should be respectful. It should inform but not replace a clinician
6. Scientific foundation must be strong. CDSS should be peer-reviewed with evidence of validity and safety
He explains that health care is particularly challenging for decision support, given the incomplete and uncertain understanding of the causal mechanisms to be analyzed. Furthermore, the patient data contained in a CDSS can more easily assist with clinical diagnosis than with therapy planning, because diagnostic CDSS can be built on linkages between clinical data and gold standards for accuracy (e.g. biopsies, autopsies, biomolecular markers or surgical findings), while treatments have fewer gold standards and more disagreement between experts.
Medical records, where factual data and summaries of diagnoses are collected, lack standards for the structure of their content as well as for medical terminology. If the data captured by clinicians in health records are inadequate or not precise enough, they will lead to inappropriate results or erroneous predictions in medical practice. Furthermore, data collection depends on its objectives. As hospital information systems become "industrial products", they are mainly in the hands of accountants, economists and managers, who are more oriented towards optimizing financial revenue than towards improving patient care and public health objectives such as quality of care and clinical epidemiology.
New Requirements for Health Professions
As patients are encouraged to progressively gain some "empowerment" in the decisions to be taken for their health by obtaining some autonomy in relation to their doctors, shouldn't physicians acquire their own "empowerment" in the face of Artificial Intelligence? Don't they need an adequate education in order to understand the methods used as well as the quality and representativeness of the databases? This maintenance of their "self-governance" (autonomy) becomes a human right in order to continue to act independently, with good external advice (using AI) but without external constraints, according to their own judgment.
Health professionals should have the capacity to understand, to collaborate actively and to have access to appropriate checks of data quality and processing methods carried out by independent organizations. This requires training in health data capture, terminology encoding, and the AI methods most widely applied in medicine, such as statistics (Bayesian conditional probabilities, nearest neighbor rules,…), learning algorithms, problem solving, knowledge representation (description logic, ontologies,…) and robotics.
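To make one of these methods concrete, the sketch below shows a nearest-neighbor rule on invented toy data: a new case is classified by a majority vote among its closest cases in the training set. The variables, values and labels are assumptions chosen for illustration only.

```python
# Nearest-neighbor rule on invented toy data (illustrative values only).
from collections import Counter
from math import dist

# (systolic blood pressure, fasting glucose) -> risk label
training = [
    ((120, 90),  "low_risk"),
    ((118, 95),  "low_risk"),
    ((150, 160), "high_risk"),
    ((160, 180), "high_risk"),
]

def knn_predict(case, k=3):
    """Classify a case by majority vote among its k nearest training cases."""
    neighbours = sorted(training, key=lambda item: dist(item[0], case))[:k]
    votes = Counter(label for _, label in neighbours)
    return votes.most_common(1)[0][0]

print(knn_predict((155, 170)))   # -> 'high_risk' on this toy data
```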
As a matter of fact, medical schools appear more and more outdated in most countries of the world, while AI applications are spreading. Physicians most often remain exclusively "individual patient oriented", without a public health vision. They tend to leave the mathematical and technical methods to other professions. They are exposed to an existential risk for the role of doctors. They are under increasing pressure not to lose time, and increasingly requested to record information that they consider administrative work. Although Lawrence Weed [4] proposed a structured medical record (the problem-oriented record) as early as 1969, no universal model has been agreed between countries up to now that would allow comparisons of practices. Physicians enter their data by hand. Professional encoders are trained to attribute international codes (e.g. ICD, International Classification of Diseases; SNOMED, Systematized Nomenclature of Medicine; ...) to diagnoses, interventions and other characteristics of the patient. Mountains of "garbage data" would mislead medical practice and research. Medical schools should revise the content of their programs by teaching how to deal with AI, always considering data sources and explaining AI methodologies.
AI could use language recognition for the standard encoding of dictated medical reports in various countries, in order to save clinicians' time. It could check the terminology standards of patient summaries against the content of the full record. Patient summaries are often incomplete or not precise enough for clinical research or population-based epidemiology. Minimum Basic Data Sets (MBDS) are often used mainly for management purposes (using DRGs or other classification systems). This data collection objective might be an incentive to prefer the diagnoses that generate the highest revenue rather than to evaluate the end-results of medical practice in various conditions. "DRG creep" can only be avoided if systematic controls are performed both by physicians in their institution and by an independent body, with regular comparisons of their results. Peer review using international standards is recommended.
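A very reduced sketch of such automated encoding is shown below: it spots key phrases in a report and maps them to ICD-10 category codes. The three codes are common ICD-10 categories, but the phrase list and the exact-match rule are deliberate simplifications; a real system would rely on natural language processing and complete ICD or SNOMED terminologies.

```python
# Keyword-spotting sketch of terminology encoding (deliberately simplified).

ICD10_LOOKUP = {
    "type 2 diabetes": "E11",
    "essential hypertension": "I10",
    "asthma": "J45",
}

def encode_report(report_text):
    """Return ICD-10 category codes whose key phrases appear in the report."""
    text = report_text.lower()
    return sorted({code for phrase, code in ICD10_LOOKUP.items() if phrase in text})

summary = "Patient followed for essential hypertension and type 2 diabetes."
print(encode_report(summary))   # -> ['E11', 'I10']
```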
AI is becoming part of a global industrial invasion of medical practice, with the danger of leading to uniformity of thinking, of breaking patient privacy and of generating uniform behaviors by analyzing all personal human activities. If patients wish to keep some freedom and if physicians intend to maintain their decisional role in medicine in the future, they should consider further education and training in order to understand and control this new methodology. Decisions cannot be left to AI tools. Physicians still have to decide what is best for their patients in a human dialogue that takes into account common sense, a global approach, adequate judgment, ethical rules and empathy. It is not a question of authority but of the liability of health professionals. AI should not be considered as more than an interesting advisor, like a consultant.
References
- Ledley RS, Lusted LB (1959) Reasoning foundations of medical diagnosis: Symbolic logic, probability, and value theory aid our understanding of how physicians reason. Science 130: 9-21.
- de Dombal FT, Grémy F (1976) Decision making and medical care. Elsevier (North-Holland) Publishing Co, Amsterdam, The Netherlands, 603 p.
- Shortliffe EH, Sepulveda MJ (2018) Clinical decision support in the era of artificial intelligence. JAMA 320: 2199-2200.
- Weed LL (1969) Medical records, medical education and patient care. The Press of Case Western Reserve University, Cleveland.
Corresponding Author
Francis Roger France, MD, MS, PhD, Professor Emeritus, Faculty of Medicine, Past-President of the School of Public Health, University of Louvain, Belgium.
Copyright
© 2018 Roger France F. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.