Experts say the tool can give "very wrong" medical advice that could cause serious harm to the user
Do I have flu or Covid? Why do I wake up tired? What is causing the pain in my chest? For more than two decades, typing medical questions into the world's most popular search engine returned links to websites that might hold the answers. Search those health questions today and the answer will likely be written by artificial intelligence.
Google's chief executive, Sundar Pichai, first announced plans to incorporate artificial intelligence into the company's search engine at its annual conference in Mountain View, California, in May 2024. From that month, users in the US began seeing a new feature called AI Overviews, which places an AI-generated summary on top of traditional search results. The change marked the biggest shift in Google's core product in a quarter of a century. Two billion people now use AI Overviews every month, in more than 40 languages.
With the rapid rollout of AI Overviews, Google is scrambling to protect its traditional search business, which generates around $200bn (£147bn) a year, before emerging AI rivals can derail it.
Experts say that speed comes with dangers. AI Overviews generate information about a topic or question in the blink of an eye, placing a conversational answer above the traditional search results. They may cite sources, but there is no easy way for users to know when they are wrong.
In the weeks after the launch in the United States, users encountered falsehoods on a variety of topics. One AI Overview claimed that Andrew Jackson, the seventh US president, graduated from college in 2005. Elizabeth Reid, Google's head of search, responded to the criticism in a blogpost, acknowledging that in a few cases AI Overviews had "misinterpreted and misrepresented" websites.
But experts say that when it comes to health, questions of accuracy and reliable information are too important to get wrong. Google is facing growing scrutiny of its AI Overviews for medical questions after a Guardian investigation found people are being put at risk by false and misleading health information.
The company says AI Overviews are "reliable". But the Guardian found that some medical summaries were serving inaccurate health information and putting people at risk of harm. In one case, which experts called "really dangerous", Google wrongly advised people with pancreatic cancer to avoid high-fat foods. Experts said this was the exact opposite of what should be recommended and could increase patients' risk of dying from the disease.
In another "dangerous" example, the company published false information about important liver tests that could lead people with severe liver disease to mistakenly think they are healthy.Experts say what AI generalists say is normal can be very different from what they actually think is normal.Absences can lead critical patients to mistakenly think they have normal test results and not follow up.In follow-up sessions.
AI Overviews about women's cancer screening also contained "misleading" information that could lead people to ignore symptoms, experts say.
Google initially sought to downplay the Guardian's findings. The company said that, in its clinicians' judgment, the AI Overviews in question were relevant, linked to reputable sources and recommended seeking expert advice. "We invest significantly in the quality of AI Overviews, particularly on topics such as health, and the vast majority provide accurate information," a spokesperson said.
Within days, however, the company removed some AI Overviews for health questions flagged by the Guardian. A spokesperson said: "We do not comment on individual removals from search."
Although experts welcomed the removal of some AI summaries for health questions, many remain concerned. "The bigger issue with all of this is that Google can switch off AI Overviews for a single search result, but that does not deal with the wider problem," said the director of communications and policy at the British Liver Trust, a liver health charity.
"There are still too many examples of Google AI overview that give people inaccurate health information," added Sue Farrington, chair of the Patient Information Forum, which publishes evidence-based health information to the public and healthcare professionals.
A new study raises further concerns. When researchers analysed the AI Overviews returned for more than 50,000 health-related search queries in Germany to see which sources they most often cited, one finding immediately stood out: the most cited domain was YouTube.
"This is important because YouTube is not a medical publisher," the researchers wrote."It is a general-purpose video platform. Anyone can post content there (eg, board-certified doctors, hospital channels, but also health activists, life coaches and creators with no direct medical training).
In medicine, the problem is not only where answers come from, or how accurate they are, but how they are presented to the user, experts say. "With AI Overviews, users no longer see a range of sources that they can compare and critically evaluate," said Hannah van Kolfschooten, a researcher in AI, health and law at the University of Basel.
"This means the system does not just reflect online health information, but actively reshapes it. When answers are built from content that was never designed to meet medical standards, such as YouTube videos, it creates a new form of medical gatekeeping that is not subject to medical oversight."
Google says AI Overviews are built to show information backed by top web results and include links to the web content that supports the information in the overview. People can use those links to dig deeper into a topic, the company told the Guardian.
But standalone blocks of text in AI Overviews that stitch together health information from multiple sources can cause confusion, says Nicole Gross, associate professor of business and society at the National College of Ireland.
"With the collection of the user is a little more likely to research, most of them lack the time and appropriateness for this because of health problems."
Experts raised further concerns with the Guardian. Even when AI Overviews provide accurate information about a specific medical problem, they may fail to distinguish strong evidence from randomised trials from weaker evidence from observational studies, they say. Other overviews omit key caveats about the evidence, they add.
Listing such claims side by side in an AI Overview can also give the impression that some are better founded than they actually are. Answers may also change as AI Overviews evolve, even when the science has not. "That means people get different answers depending on when they search, and that's not good enough," says Athena Lamnisos, chief executive of the cancer charity Eve Appeal.
Google told the Guardian that AI Overviews are dynamic and change to reflect the most relevant, useful and timely information on the web. If an AI Overview misinterprets online content or misses context, the company said, it uses those mistakes to improve its systems and takes action when appropriate.
What is most worrying, says Gross, is that false and dangerous medical information or advice from AI Overviews "translates into the daily decisions, routines and lives of patients, even in tailored forms". "In healthcare, it can be a matter of life and death."
