Alphabet-C's Med-PaLM 2, a large language model focused on the medical field, is being tested in hospitals in the United States. While AI has the potential to assist with diagnosis and treatment, it also carries numerous risks, and the World Health Organization urges caution in the use of untested AI systems.
Driven by commercial companies, AI chatbots are increasingly making their way into daily life, including healthcare, a field directly tied to human health.
Testing of Alphabet-C's Medical Model Began in April
According to The Wall Street Journal, Alphabet-C's medical model Med-PaLM 2 has been undergoing testing at top private hospitals such as the Mayo Clinic in the United States since April.
Med-PaLM 2 is a derivative of Alphabet-C's large language model PaLM 2, one of the largest language models in the world by parameter count. The prefix "Med" indicates that the model focuses on the medical field. Alphabet-C claims that, because it was trained with input from professional doctors, Med-PaLM 2 outperforms general-purpose chatbots such as OpenAI's ChatGPT in the medical domain.
Med-PaLM 2 was first unveiled at the Alphabet-C I/O developer conference in May this year, where it was described as the first large language model to reach expert-level performance on the US Medical Licensing Examination. It has undergone several iterations since then. An internal email obtained by The Wall Street Journal shows that Alphabet-C believes the updated Med-PaLM 2 is particularly useful in countries with "limited access to medical care."
Simply by entering details such as a patient's symptoms, medical history, and age, users can prompt Med-PaLM 2 to generate lengthy answers. Experiments at the Mayo Clinic suggest that although AI cannot replace human doctors, it can already serve as a reliable assistant that helps doctors with diagnosis and treatment.
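As a purely illustrative sketch of the workflow described above (not Alphabet-C's actual interface, which is not public in this form), patient details could be assembled into a free-text prompt for a medical question-answering model roughly as follows; the PatientCase structure and build_prompt helper are hypothetical placeholders:

```python
# Hypothetical sketch: formatting patient details into a prompt for a
# medical LLM. This is NOT Med-PaLM 2's real API; the types and names
# below are illustrative placeholders only.
from dataclasses import dataclass


@dataclass
class PatientCase:
    age: int
    symptoms: list[str]
    history: list[str]


def build_prompt(case: PatientCase) -> str:
    """Combine age, symptoms, and history into a single free-text prompt."""
    return (
        f"Patient age: {case.age}\n"
        f"Symptoms: {', '.join(case.symptoms)}\n"
        f"Medical history: {', '.join(case.history)}\n"
        "Question: What are the most likely diagnoses, and what "
        "follow-up tests would you recommend?"
    )


if __name__ == "__main__":
    case = PatientCase(
        age=58,
        symptoms=["chest tightness", "shortness of breath on exertion"],
        history=["type 2 diabetes", "20-year smoking history"],
    )
    prompt = build_prompt(case)
    print(prompt)
    # In a real deployment the prompt would be sent to the model endpoint,
    # and a clinician would review the generated answer before it is used.
```

The point of the sketch is simply that the model consumes unstructured clinical details as text and returns a long-form answer; any clinical use would still require a doctor to verify the output.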
According to a research paper Alphabet-C published in May, Med-PaLM 2, like other large language models, suffers from "hallucinated" answers. Overall, however, Alphabet-C's researchers believe its performance is comparable to that of practicing doctors.
The research even found that, for the same questions, doctors actually preferred Med-PaLM 2's answers to those written by other physicians, rating them higher on nine evaluation dimensions.
Greg Corrado, Senior Research Director at Alphabet-C, who was involved in training Med-PaLM 2, told The Wall Street Journal:
"I don't think this technology has reached the level where I would be comfortable having my family use it, but in areas where AI can bring benefits in the healthcare field, it can create ten times the value."
The Dangers of AI in the Healthcare Field
It is worth noting that even when AI answers are of high quality, emotional support remains an important part of a patient's care. Because AI fundamentally lacks empathy, it is difficult for it to replace communication between doctors and patients.
In addition, AI's rapid entry into healthcare without effective regulation poses significant risks. For one thing, almost all major large language models currently suffer from "hallucination," readily rambling or presenting false information as fact. Without careful review, this can lead to incorrect diagnoses and treatment.
The World Health Organization (WHO) issued a statement in May, suggesting a "very cautious" approach to the integration of AI and medical services.
In the statement, the WHO said:
"Premature adoption of these untested AI systems may lead to medical errors, harm patients, undermine trust in artificial intelligence, and thus weaken (or delay) the long-term potential benefits and use of these technologies worldwide."
In addition, the high sensitivity of medical data has also raised concerns about the entry of tech giants into this field.
Although Alphabet-C claims that the data Med-PaLM 2 processes is encrypted and inaccessible to the company, given Alphabet-C's poor track record on privacy protection, such assurances alone are unlikely to win over the market.
In 2019, Alphabet-C quietly launched "Project Nightingale" with its business partner Ascension, a Catholic hospital chain, collecting medical data from millions of patients across 21 US states without their consent. The data gathered included personal information such as patient names and dates of birth, as well as lab test results, doctor diagnoses, and hospitalization records. The stated purpose of the project was to use AI to improve the effectiveness of diagnosis and treatment.