Trump and Kennedy Seek To Relax Safeguards for AI Healthcare Tools


Paul Boyer, a psychotherapist for Kaiser Permanente in Oakland, California, is experiencing the AI revolution firsthand. He’s a little underwhelmed.

The health giant has rolled out a new suite of note-taking software, made by healthcare AI pioneer Abridge, intended to summarize a patient’s visit at supersonic speed. For many clinicians, the technology soothes one of the persistent headaches of their lives — administration and paperwork.

But the AI scribe caused another headache for Boyer and his colleagues: It is “not super useful.” They end up correcting the computer-written notes.

Abridge is “not good at picking up on clinical nuance, at picking up on the emotional tone” that can be critical in the mental health field, Boyer said. With manic patients, for example, what’s said is less important than how it’s said, and the software struggles to pick up on those cues.

Note-taking software isn’t the wave of the future; it’s the wave of the present. Hospitals nationwide are implementing it. And researchers are finding some benefits. A year after installation, doctors who used these products the most saved more than half an hour of work daily, according to a study of five hospitals published in April in the Journal of the American Medical Association.

Many doctors love the products where they’re deployed — several interview-based studies find overall positive reactions to the scribes.

Nevertheless, as Boyer’s example shows, there are persistent questions about the systems’ quality. While Boyer and his colleagues spend time correcting notes, safety researchers worry clinicians might not be diligent about catching errors. That might mean future doctors rely on bad information.

Abridge says it evaluates its scribes at every stage of deployment, including with head-to-head tests against previous versions of the software.

“Following deployment of a model, we monitor clinician edits, star ratings, and free-text feedback from clinician users about note quality,” the company’s director of applied science, Davis Liang, told KFF Health News in a statement.

Artificially intelligent scribe software is part of a swarm of AI-powered tools coming to healthcare. Clinicians and patient-safety advocates say government regulations are not well constructed to guard against the threat that the new technology will miss or obscure important details of patients’ conditions, potentially harming them.

“There is currently no safeguard in place” to vet scribe software at the federal level, said Raj Ratwani, a researcher specializing in human factors — that is, how people interact with technology — at MedStar Health, a large hospital system based in Columbia, Maryland.

Ratwani worries that safeguards on health software will relax even further. Proposed rules from the Office of the National Coordinator for Health IT — the body that regulates electronic health records, the central chronicle of care for patients — could weaken requirements to make medical records understandable, easy to use, and transparent about the use of AI, Ratwani said. And an incomprehensible record could confuse clinicians and lead to errors.


Beginning in the Obama administration, the Health and Human Services Department’s IT office encouraged “user-centered design” tests, in which developers try their products on doctors and nurses. Regulators also sought to require more transparency from companies in the surging market in AI tools.

Both of those requirements are axed in the proposed rules from HHS Secretary Robert F. Kennedy Jr.’s health IT office.

Doctors and other health practitioners consult records for clinical information, such as scribe notes summarizing the history of patient care and lists of drugs and therapies their patients have used. Doctors also input orders for care.

Poor or cluttered design of a records system “might make the list of medications so complicated and confusing that the ordering provider selects the wrong medication,” Ratwani said.

Abridge’s general counsel, Tim Hwang, said the company “broadly supports” the government’s rules as a “necessary modernization” that “accommodates the speed at which AI is evolving.”

The old rules “put way too much burden” on electronic health record systems, said Ryan Howells, a principal at Leavitt Partners, which consults for digital health companies. Leavitt supports the proposals.

Dropping requirements, the administration argues, will result in more innovation and competition. The electronic health record market has steadily consolidated, with hospitals and other clinicians choosing from fewer vendors.

A 2022 study found the top two vendors, Epic and Oracle Health, accounted for more than 70% of the hospital market. And Howells argued too many rules burdened providers looking for good record systems. Federal regulations, Howells said, are “the single biggest inhibitor to true clinical innovation.”

The Trump administration’s proposal to remove requirements governing records is overbroad, some critics say. It removes regulations intended to keep records secure, eliminates privacy protections for sensitive medical data, overhauls standards governing the formats in which data is sent, and more. The rule may give clinicians “more health IT choices to meet their needs through increased competition,” the government wrote in its proposal.

HHS’ health IT office declined comment, noting the proposal is still winding through the regulatory process. Public comment closed in February.

But most concerning to some — even in the hospital and developer sectors — are proposals to scotch prerequisites to ensure new products are tested on actual users, and to ensure AI tech’s decisions are transparent to doctors and nurses.

“Historically, hospitals and health systems have been challenged by the black box nature of certain AI tools and how the algorithms are developed,” the American Hospital Association’s Jennifer Holloman said. And with more AI tools flooding the market, the association has said, transparency is even more critical.

Complaints about the safety of electronic health records are long-standing, even for seemingly straightforward tasks. Ratwani likes the example of ordering medication for a given condition.

“The physician is trying to order Tylenol, and the medication list can be so confusing that there’s 30 different versions of Tylenol all at a different dose and for different purposes, when in reality that could be designed much more simply and make it easier for the physician to actually pick the right type of Tylenol that they’re ordering,” he said.

Real-world user testing was intended to simplify record design for doctors. But the administration is ending that requirement in a confusing way, said Leigh Burchell, vice president for policy and public affairs at Altera Digital Health, an EHR developer.

In Burchell’s interpretation of the rules, which refer to “enforcement discretion,” a principle in which the government can opt not to enforce certain rules, companies are still required to do the testing — the part that takes work — but are not mandated to report their results to the feds.

The administration is also ending a Biden-era idea to create AI transparency “model cards.” The concept was that clinicians could explore the data used to train AI tools that advise clinicians with a simple mouse click. But few took advantage of the year-old tool, Trump’s regulators say.

Still, hospitals and doctors are wary of removing it. The tool “provides information on how a predictive or generative AI application was designed, developed, tested, evaluated and should be used. These data are critical to foster trust in AI tools and ensure patient safety,” the AHA wrote in a comment letter to the HHS IT office. The American College of Physicians offered a similar warning, saying a “lack of clarity could undermine clinician trust, increase liability expense, and erode the patient-physician relationship.”

Even developers aren’t totally sure about the idea. Burchell said the electronic health records trade group she’s part of had “a lot of different perspectives” on the issue. “Normally, we tend to be a bit more aligned on our responses.”

Still, Burchell’s group thought companies should be transparent about the data AI relies on to make decisions and how it comes up with recommendations.

Evidence for AI tools’ effectiveness is sparse or contradictory.

A recent study comparing 11 AI scribes for potential use as a pilot in the Veterans Health Administration found the software performed worse than humans across five simulated scenarios. “Although ambient AI scribes can generate complete notes, the overall quality remains broadly below that of human-authored documentation,” the authors noted, with the omission of information being particularly concerning, given the potential to affect follow-up care.

The vendors in the VA study weren’t identified, for what the authors called “contractual reasons.”

And that’s just one type of AI tool. A wave of them is coming, each needing its own evaluation, to say nothing of tools that have already been installed.

Boyer said he can mostly ignore his AI scribe, for the moment. But he worries that management will design his job around the expected time savings and schedule more patients — meaning he’d have to spend more time with patients while also correcting the software’s errors.

A Kaiser Permanente spokesperson, Vincent Staupe, said the company does not require its clinicians to use AI.

“When I am correcting that note, I feel like this is too much work,” Boyer said. “This is definitely making this worse, and this is taking up time that I need to not be spending on correcting an AI tool.”
