The independent source for health policy research, polling, and news.

Morning Briefing

Summaries of health policy coverage from major news organizations

Friday, Mar 27 2026

For Those Who Raised Alarm On Social Media Harms, Verdicts Are Validation

Even though Meta and Google are weighing whether to pursue appeals, the findings by two juries indicate public perception of tech companies has shifted, with more people willing to push for changes to protect children's online safety. Minnesota lawmakers have advanced a bill they hope will do just that.

AP: Verdicts Against Social Platforms Validate Concerns Long Raised By Parents, Whistleblowers

For years, parents, teenagers, pediatricians, educators and whistleblowers have pushed the idea that social media is detrimental to young people’s mental health and can lead to addiction, eating disorders, sexual exploitation and suicide. For the first time, juries in two states took their side. In Los Angeles on Wednesday, a jury found both Meta and YouTube liable for harms to children using their services. In New Mexico, a jury determined that Meta knowingly harmed children’s mental health and concealed what it knew about child sexual exploitation on its platforms. Tech watchdog groups, families and children’s advocates cheered the jury decisions. (Ortutay, 3/26)

AP: Woman Whose Son Died From Drugs Bought On Social Media Celebrates Verdicts Against Meta, YouTube

A Colorado woman whose son died from a fentanyl-laced pill he bought through social media celebrated a pair of verdicts this week against Meta and YouTube that she said opened the door for companies to be held responsible for harms to children using their platforms. “The truth is out, and it’s time that they are held accountable for the design of the platforms,” said Kimberly Osterman, whose son Max died in 2021 at age 18. “They put profits over safety.” (Peipert and Schoenbaum, 3/27)

CBS News: Minnesota House Advances Bill Requiring Social Media Protections Against 'Addictive' Features, Parental Consent For Children

A bipartisan proposal to set guardrails around social media sites for children advanced in the Minnesota House on Thursday, one day after a landmark case against tech companies in which they were found liable for creating products that led to harmful behavior. The Minnesota bill would require parental consent for anyone under 15 to make an account and would limit features the bill's authors say are addictive: infinite scrolling, autoplay of videos, and push notifications for those users. Paid ads would also be prohibited, and the strongest privacy settings would be the default. (Cummings and Lisignoli, 3/26)

Also —

AP: New Study Says AI Is Giving Bad Advice To Flatter Its Users

Artificial intelligence chatbots are so prone to flattering and validating their human users that they are giving bad advice that can damage relationships and reinforce harmful behaviors, according to a new study that explores the dangers of AI telling people what they want to hear. The study, published Thursday in the journal Science, tested 11 leading AI systems and found they all showed varying degrees of sycophancy — behavior that was overly agreeable and affirming. The problem is not just that they dispense inappropriate advice but that people trust and prefer AI more when the chatbots are justifying their convictions. (O’Brien, 3/26)

© 2026 KFF