Social media is the new public health frontline. Let’s treat it that way.

We must give influencers tools and training to deliver accurate health information.

The results of the 2024 presidential election have ushered in a new era of uncertainty for public health. With Donald J. Trump soon back in the White House and his choice of Robert F. Kennedy Jr.—notorious for promoting public health conspiracy theories—as a key figure in the health sector, the stakes are immense.

Kennedy’s history of spreading disinformation about vaccines threatens to undermine decades of scientific progress and public trust. As the nation starts to grapple with the implications of this seismic shift, the role of accurate, evidence-based communication that actually reaches people has become more urgent than ever.

In this new landscape, social media creators have emerged as the frontline of public health communication. Often trusted more than traditional institutions, these creators wield significant influence over how health information is disseminated and understood by not only the masses but also the hardest-to-reach populations. Yet many creators have told me they lack the tools and training to verify and translate health information and instead rely on quick internet searches, which can inadvertently spread inaccurate content. This lack of accessible science creates a void that both unintentional misinformation and deliberate disinformation readily fill. Equipping creators to combat mis- and disinformation is no longer optional. It’s essential.

Health misinformation disproportionately affects youth, people of color, and low-income communities, who often rely on social media for accessible health information. A recent study from the Centers for Disease Control and Prevention revealed that misinformation has significantly decreased vaccination rates in some communities, contributing to the resurgence of preventable diseases like measles and whooping cough.

Additionally, social media users are more likely to encounter and believe health misinformation. In a 2023 poll, a majority of people who use social media for health advice reported hearing and believing at least one false claim about COVID-19 or vaccines, compared with only four in ten of those who don’t rely on social media for health advice. These vulnerable groups often lack the resources to verify the information they see online. Even well-intentioned creators—those not spreading deliberate disinformation—struggle to simplify complex, jargon-heavy science for their audiences. How can we empower them to share accurate, impactful health messages?

While tools like commercial AI can summarize content and fact-checking services can identify false claims, both often fall short of offering creators audience-specific, evidence-based material that is ready for sharing. Recognizing the need for a better solution, developers have begun building tools that simplify complex research; AI tools that summarize and organize academic papers have cropped up in the past year.

An organization I founded, Science to People, is working on a tool called VeriSci that uses AI to transform peer-reviewed health studies into usable content, with a language model specifically fine-tuned to best practices in science communication. This fall, YouTube announced it is working on something similar for creators in its partner programs. With social media companies’ commitment to fact-checking and removing misinformation and disinformation now waning, the need for independent, publicly accessible tools that offer scientific information in digestible language is clearer than ever.
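To make the general approach concrete, here is a minimal sketch in Python of how such a tool might turn a peer-reviewed abstract into creator-ready, plain-language content. Everything in it is hypothetical: the `StudySummaryRequest` type, the prompt wording, and the `call_llm` stub are illustrations of the technique, not VeriSci’s or YouTube’s actual implementation.

```python
# Hypothetical sketch: turning a peer-reviewed abstract into plain-language,
# audience-tailored content. `call_llm` is a stand-in for any LLM API
# (a hosted model, a fine-tuned open model, etc.); nothing here is VeriSci's code.

from dataclasses import dataclass


@dataclass
class StudySummaryRequest:
    abstract: str                      # text of the peer-reviewed abstract
    audience: str                      # e.g., "teens on TikTok", "new parents"
    reading_level: str = "8th grade"   # target readability


def build_prompt(req: StudySummaryRequest) -> str:
    """Encode science-communication best practices as explicit instructions."""
    return (
        f"Rewrite this study abstract for {req.audience} at a "
        f"{req.reading_level} reading level.\n"
        "Rules: use plain language; say what the study did and found; "
        "mention sample size and limitations; do not overstate certainty; "
        "avoid jargon; keep it under 120 words.\n\n"
        f"Abstract:\n{req.abstract}"
    )


def call_llm(prompt: str) -> str:
    """Stand-in for a real model call; wire this to your provider of choice."""
    raise NotImplementedError


def summarize_study(req: StudySummaryRequest) -> str:
    return call_llm(build_prompt(req))
```

A production tool would presumably replace the hand-written rules with a model fine-tuned on exemplary science communication, as described above, and add checks that the output stays faithful to the source study.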

The demand for accessible and trustworthy health information is clear. For example, a recent experiment demonstrated the power of providing accurate mental health messaging on TikTok, where videos tagged #mentalhealth have drawn more than 44 billion views. The researchers offered influencers digital toolkits containing evidence-based mental health content, written in everyday language, across several topics. Creators who received the toolkits were significantly more likely to include research-supported mental health content in their videos. The impact extended to audiences as well: in the treatment groups, TikTok videos featuring the provided content attracted more than half a million additional views after the intervention, and a follow-up study of the comments on these videos showed improved mental health literacy among viewers.

With support, creators could expand this impact across many health topics, reaching millions with accurate, culturally relevant information. Imagine a mental health advocate sharing evidence-based strategies to manage anxiety, or a sexual health educator presenting reliable birth control options tailored to their audience. Providing creators with science-driven information has the potential to improve health literacy and make a measurable difference in underserved communities.

As we enter this new era, it’s time to recognize that the frontline of public health has shifted to social media, where creators are leading the charge in sharing health information. Supporting these creators with innovative, research-backed resources is essential for combating misinformation and protecting public well-being.

These digital communicators have become our new public health allies. Empowering them with the right tools can make a significant difference in reaching diverse and often underserved audiences.

What should happen to doctors who spread misinformation?

So far, medical boards have been hands-off.

The spread of medical misinformation escalated during the pandemic, leading some people to reject COVID-19 vaccines and ingest unproven treatments. Now, as President-elect Donald J. Trump chooses officials to protect the country’s health, medical misinformation is again a threat to public health efforts. Robert F. Kennedy Jr., Trump’s pick to direct the Department of Health and Human Services, has promoted the discredited theory that routine childhood vaccines can cause autism. Kennedy has no medical or public health degrees.

But what can be done? State medical boards—which police the behavior of doctors and can discipline them for medical misinformation—provide one safeguard. These boards weigh the health and safety of the public against the First Amendment rights of physicians to engage in robust debate. However, medical boards were designed to monitor issues such as overprescribing and sexual misconduct, not surveil what physicians post online or say at community meetings, says Richard Saver, professor in the School of Law at the University of North Carolina at Chapel Hill.

“If we want medical boards to move to take a more proactive approach here, and not just be reactive to complaints, it is going to require a lot more cost-intensive, resource-intensive surveillance efforts,” he says.

Saver recently authored a study that found state medical boards in the five largest U.S. states rarely disciplined physicians for spreading medical misinformation during the COVID-19 pandemic. He spoke with Sarah Muthler for Harvard Public Health. This interview has been edited and condensed.

HPH: Why did you decide to investigate this issue?

Saver: There were news reports of physicians, particularly at the start of the pandemic, spreading some fairly questionable, controversial claims, and there seemed to be growing public interest in whether these physicians were acting professionally in doing so. As I looked into the issue, I found that there was very little discipline going on, and I came to appreciate that medical boards were probably facing barriers, both very technical legal issues and larger questions of institutional resources and commitment.

It raised, in my mind, this larger question: Are medical boards well suited to deal with this problem and to police medical misinformation by physicians?

HPH: What were your findings regarding discipline for medical misinformation?

Saver: Less than one percent of the offenses that brought a physician to discipline by a medical board involved misinformation conduct of any kind. Misinformation offenses were far less common than the typical reasons medical boards imposed discipline during [the pandemic], like ordinary physician negligence, inappropriate record keeping, and inappropriate prescribing.

HPH: Can you talk about the challenges of defining medical misinformation?

Saver: It is a real challenge, and I think that can partly explain why medical boards may have been a little hands-off. There appear to be two leading definitions in the literature. One would be to look to medical or scientific consensus about whether the statement is false. The other would be to look to the best available evidence about whether the statement is false.

If you look at consensus, what does that mean, particularly for an emerging disease threat like COVID-19, where we may not have fully peer-reviewed literature about the topic?

There’s this very delicate balancing act of wanting to go after clearly egregious instances of spreading falsehoods but leaving room for physicians to legitimately challenge what appear to be orthodox medical views.

HPH: The current reporting system is driven by patients coming forward with complaints. How does that influence what the medical boards are investigating?

Saver: In the case of misinformation, some patients exposed to it may not know it is misinformation or know they can report this to the medical board.

Medical boards usually don’t go to school board meetings and community board meetings. They’re not surveilling social media posts of licensed physicians, and so they’re not going to get a handle on this unless they have more proactive surveillance.

HPH: How is a posting by a doctor on social media or a public statement treated differently from what is said to a patient one-on-one?

Saver: We found [that] although misinformation sanctions were low overall, medical boards were three times more likely to sanction a physician for spreading misinformation to their patients in the clinic as opposed to a public setting.

When a physician has a patient in front of them and has assumed legal, ethical, and fiduciary responsibility for that patient’s outcome, the advice they give is tailored to that patient, and they may be more circumspect in what they’re going to say. When they’re out in public, contributing to robust debate, they may feel that they have the freedom to be overly rhetorical, exaggerate, embellish, what-have-you to contribute to public discourse. They do not have a doctor-patient relationship with members of the public.

HPH: Outside of state medical boards, are there other entities that could address this in some way?

Saver: Other promising entities that might get involved are private, medical-specialty boards, like the American Board of Internal Medicine. A private entity is not subject to the same First Amendment limitations that a public entity like a state medical board is. That gives these entities a little more maneuvering room to maybe take board certification away from some of their physicians.

There are also private health systems and even private hospitals that have started, on a very incremental basis, to remove the medical staff privileges of physicians who they think are putting out problematic statements to the public.

And then there is perhaps the easiest, although ultimately the least satisfactory, alternative, which is governmental health authorities putting out counter-speech. Rather than punishing the doctor, try to alter the information environment and put out more accurate information directly counter to what a renegade doctor has said. In theory, that will help. I say “in theory” because counter-speech—we have to be realistic—is going to be of somewhat limited effectiveness in our very polarized environment, with distrust of traditional government and science authorities.
