1. Introduction
How does one come to engage with AI? The author's own "case history": medical studies were supposed to become more interesting after the Physikum, but the lecture-style teaching left hardly any room for questions. The lecturers usually deferred them to "later", and fellow students wanted only examination material, not additional questions. This led to the decision to study psychology in parallel as a second degree. Lectures there were shaped by student questions, which the lecturers welcomed.
The institute's staff refused to assign a thesis topic, since psychology was not to be accepted as a second degree. The only option was to approach the head of the institute (a C4 professor), who normally did not supervise theses. A question first had to be answered briefly and succinctly before a topic was offered: "What do you, as an outsider, think of our current ideas about intelligence?" At that time there was already a numerus clausus for medical studies. My answer: "Nothing at all" - and the topic of my diploma thesis was assigned (on intelligence, with around a hundred definitions and just as many tests).
After passing the main diploma examination, I asked the head of the institute in Vienna how he had come up with his question at the time. He had been obliged by the dean of studies to present psychology to medical students in a double period. He could present the greatest nonsense, and yet everything was dutifully written down; no student ever asked a question. Was that the beginning of our current "belief in guidelines"? Is this being repeated in the use of AI where direct patient concerns are involved?
2. AI as a Tool
Anyone communicating with AI rather than with a human being should bear in mind that AI is a tool. Ideally it "serves itself" from high-quality content on the web. In communication between people, by contrast, content may emerge that is not yet circulating online. Everyone should decide for themselves which is more exciting. Putting difficult yet easily understood questions to a counterpart with a high level of clinical competence stimulates both brains to think. Kant on this: "Have the courage to use your own mind."
AI is hardly a dialogue partner specifically attuned to the age-dependent circumstances of the individual. Individual biographies, beginning in childhood and adolescence, can hardly be captured by average values derived from internet data. Today more than ever, treatment decisions have to be made on an individual basis.
Who can judge the competence of AI for this? This is already difficult with our guidelines: who knows for certain the competence of their authors and their objectives? With AI as a tool it is even more difficult. It therefore makes sense to think of AI like a Google search, whose results we use critically to reach decisions. We do not classify search engines as interlocutors, because a great deal of additional data is needed for treatment decisions. For lawyers, AI has no sense of what constitutes a fair and appropriate solution in a legal dispute. The same applies to treatment decisions in medicine.
3. "Decision Intelligence" (DI) Needs Clinical Medicine
AI analysis of patient data can be based on factual information. Ultimately, however, the patient must be prepared to endure the consequences of, for example, invasive treatment. Providing detailed information about this is "uncomfortable". For this reason, the known "number needed to treat" (NNT) is rarely communicated spontaneously, for instance in drug therapy.
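To recall the arithmetic behind this figure (the standard definition, with purely illustrative numbers, not data from any particular drug): the NNT is the reciprocal of the absolute risk reduction (ARR).

\[ \text{NNT} = \frac{1}{\text{ARR}}, \qquad \text{e.g. } 4\%\ \text{event risk without vs. } 3\%\ \text{with the drug} \;\Rightarrow\; \text{ARR} = 0.01 \;\Rightarrow\; \text{NNT} = 100. \]

In this illustrative case, 100 patients would have to be treated for one of them to benefit - exactly the kind of figure that is uncomfortable to communicate spontaneously.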
DI platforms can certainly make recommendations for action on the basis of relevant data collected in industry and commerce. Patient data, however, are characterized by subjective factors whose interpretation requires doctors with a high level of expertise.
DI generally aims to use AI to provide a single analytical view. This contradicts the goal of individualized decision-making required by medical ethics. DI can nevertheless be integrated into everyday clinical practice to promote rapid decision-making. The focus should be on an explicit understanding of the patient's interests, and the process should be traceable, i.e. reliable. One DI objective can be feedback, e.g. the re-evaluation of clinical emergency events, to improve future decisions.
4. Too Many Alternative Decisions Tend to Result in Mediocrity
Clinical decisions often involve selecting among several options, a selection made not in the abstract but with reference to the individual patient. AI as a tool, without a limited search for alternatives, can lead to errors. AI aims from the outset to analyze information from studies, guidelines and textbooks while taking individual patient data into account; doctors with a high level of clinical expertise can inquire into the latter in a more targeted way.
Outside medicine, AI predictions carry far more weight, because the basis for programmed algorithms and data analyses is usually simpler in industry and commerce; there, AI can make decisions independently. Decisions in the human brain are made almost in the opposite way. In everyday life, the brain generally does not decide cognitively and consciously ("with the head") but "automatically" and intuitively ("with the gut"), taking psychosocial factors into account. The latter are quickly clarified by the inquisitive doctor with clinical expertise. Einstein: "I am not particularly intelligent, but I am very curious." Functional imaging has shown that very intelligent people tend to think more slowly when faced with complex questions; what happens during these slower thought processes is currently being researched. This puts the very high speed of AI processes into perspective. Gerd Gigerenzer, a C4 psychologist in Berlin, puts it this way: "Decisions that result from more than seven alternatives tend to be mediocre."
For 35 years the author worked regularly in two university delivery rooms, each with around 2,400 births per year and a high proportion of high-risk pregnancies, which produced emergency situations with only a few minutes of decision-making time - primarily in the interests of the unborn child. The choice among alternatives could hardly be made consciously, as the brain works too slowly for that, but only "intuitively". Once the stressful situation had passed, the sensible course of action could be consciously reconstructed and was validated by the outcome for mother and newborn. This is ultimately an everyday experience for healthy people in risky situations in road traffic, with the exception of novice drivers and those under the influence of drugs of any kind. AI systems may be able to do the same for mobility in the distant future, but with such high energy consumption that the energy required for locomotion itself becomes downright marginal.
5. ChatGPT since 2022 Covers Almost All Areas of Life
"Chatbot", a combination of "to chat" and "bot" (for robot), is, simply put, a robot you can talk to. The chatbot communicates with users in natural language: users ask questions and the bot gives detailed answers. Registered users can use it free of charge; for a monthly fee of around 20 US dollars there are hardly any waiting times.
If controversial and complex questions with emotional components are asked, misunderstandings can occur. The result is incorrect information which the questioner must be able to recognize as such. Data protection violations must also be considered, especially with patient data. The origin of AI-generated content is too often difficult to trace, and this uncertainty can be problematic in medicine. After all, patients have the right to know the basis on which diagnostic and therapeutic decisions were made. The answer "decided by AI" alone will lead to problems in court.
ChatGPT is trained on millions of texts from the internet, social media, online forums, magazine articles and books. A filter is supposed to eliminate false content; the author could hardly see how this is done transparently, and it seems hardly suitable for routine use. Depending on the term entered, e.g. sex hormones, vague ("spongy") but convincingly formulated answers appear.
In clinical medicine, a competent physician should critically review ChatGPT's answers before making patient-relevant decisions; access to other sources remains available. With such verification, is this still a relief at all, when individual and readily comprehensible decisions are required anyway?
There are still prejudices and dogmas in medicine, e.g. in oncology with regard to "radical" therapies. Will these persist longer through the use of AI, analogous to the earlier "schools" of medicine?
A chatbot is worthwhile for formulating medical publications, translating them into other languages and spotting errors in one's own texts. It is also very useful as a reference work. As a "source of ideas", however, the author would rather advise against it: clinical observations on new therapies, for example, which are not yet "circulating" on the net, are more meaningful.
Biological logic in medical thought and action should not be forgotten. Anyone who wants to leave this to AI will lose the joy of the medical profession in the long term, unless pecuniary goals dominate.
6. Generative AI in Medical Studies and Further Training
For Germany there is hardly any large-scale data collection on this. By analogy, therefore, consider the use of generative AI in schools and in higher education in general. A survey was conducted in 2023 by the Bavarian Research Institute for Digital Transformation (bidt), an institute of the Bavarian Academy of Sciences and Humanities.
Overall, 73% of pupils and 78% of university students had used generative AI. 42% and 45% respectively believed they had achieved better grades this way without having to work harder themselves. ChatGPT & Co. were used by 68% for writing texts and by 59% for research.
Half of those who had heard of generative AI stated that they understood the basic technology. Half were also aware that results generated in this way can be factually incorrect. Just over half checked the correctness of the AI text results. Among teachers, a third were not aware of the use of AI for texts. Almost half of pupils and students aged 18 and over stated that there were no AI guidelines at their educational institutions. Half wanted more controls and a third wanted no AI use. Now to medicine: who checks the accuracy of doctor's letters created using AI methodology? Who objectively checks the authorship of dissertations and publications?
7. Generating Disinformation in Medicine Using AI
This is the title of a JAMA publication from 2023. AI could be used to spread misinformation on a massive scale. A sensitive topic in medicine: vaccination. Each discipline should examine this for itself. Take gynecology, for example: 4,000 new cases of cervical cancer per year in a female population of over 40 million corresponds to a risk of 1 in 10,000 - and on this basis all girls and boys aged 12 and over are to be vaccinated against HPV. 8 to 9 out of 10 women become infected with HPV and usually do not notice. For cancer to develop as a result of HPV, additional risk factors are usually required, such as nicotine consumption, inadequate genital hygiene and frequently changing sexual partners; these are often associated with low social status. All of them are factors that make vaccination compliance unlikely.
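The arithmetic behind the risk figure quoted above, using only the numbers given in the text:

\[ \frac{4\,000\ \text{new cases per year}}{40\,000\,000\ \text{women}} = \frac{1}{10\,000} = 0.01\%\ \text{per year}. \]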
Why this example? The pharmaceutical industry can also use AI to successfully develop algorithms promoting more vaccinations while omitting cost-benefit analyses. Now to the JAMA publication, an Australian study: various publicly accessible AI support systems were given the task of dealing with medical issues. The aim was to use AI to identify deliberately unethical statements in 50 blog entries within a short period of time. The sobering result: in two of the systems, the AI was not up to the task.
Above all, "creative" headlines proved suitable for spreading false information on a massive scale. AI could also be used to create seemingly scientific references; the research team investigated this and found that almost all of the sources cited by the AI were fabricated. The authors concluded that AI can quickly and cheaply produce large amounts of targeted false information, most of which appears above all "convincing". One example: an AI system created 102 blog entries totalling 17,000 words in 65 minutes. Health-related texts, images and videos can be generated and disseminated rapidly with simple, publicly accessible tools - a new problem for medicine.
8. EU Parliament Responds to AI Threats with Law
The EU's approach is unique in the world: the use of AI is to become safer, more transparent and non-discriminatory. China and the USA are cited as role models when it comes to catching up in the field of AI; in the EU we have a different value system. Neither major power seriously questions the ever-increasing computing capacities deployed in the interest of AI.
A few details on energy consumption. According to estimates, AI will consume 85 to 134 terawatt hours (TWh) per year by 2027. For scale: 1 TWh = 1 billion kWh, and an ICE train consumes just under 20,000 kWh for the journey from Hamburg to Munich. Within just two months of ChatGPT's introduction, its 100 million users were consuming over 500 MWh of electricity every day merely to keep the system running. That is 500,000 kWh, or 25 ICE train journeys.
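The conversions behind these figures, using only the numbers given above:

\[ 1\ \text{TWh} = 10^{9}\ \text{kWh}; \qquad 500\ \text{MWh} = 500\,000\ \text{kWh}; \qquad \frac{500\,000\ \text{kWh}}{20\,000\ \text{kWh per Hamburg--Munich ICE journey}} = 25\ \text{journeys per day}. \]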
If medicine wants to be a credible role model for environmental protection and, given the health risks, to contribute to limiting global warming, then the energy consumption of all AI applications should be declared. Robot-assisted surgery, for example: every clinic should report its energy consumption annually.
Once again on "humanoid machines": these robot heads can turn, wink, listen and speak. Their faces are intended to establish an emotional connection or to give a speaker a face. It is to be hoped that clinics and doctors' practices will hold back on such purchases and continue to rely on qualified staff - with a smaller energy and CO2 footprint.
9. Summary
Artificial intelligence (AI) should be classified as a tool, not as a communication partner. For the patient, only the doctor can be the latter, because individuality must be understood holistically. This applies equally to diagnostics, with its greater or lesser burden, and to therapy, with its greater or lesser invasiveness (violation of physical integrity).
When AI is used with a direct bearing on patients, the opening articles of our Basic Law must be borne in mind. AI results must be traceable, i.e. it must be clear which data were used and how - just as traceable as medical decisions themselves.
With ChatGPT since 2022 and soon "humanoid machines" with a human face, ever more computing power will be required. Clinical medicine should convincingly and rationally justify AI's high energy consumption, given its health consequences via CO2 emissions and thus further critical global warming.
If intelligence is defined as "dealing efficiently with new situations in life", then AI is merely a parody of it, because every patient is such a new situation. That requires, above all, the doctor; AI should be used as an aid in a targeted manner.
References
- Health Disinformation Use Case Highlighting the Urgent Need for Artificial Intelligence Vigilance: Weapons of Mass Disinformation. JAMA Intern Med. 2023.