
Anthropic announced a new suite of health care and life sciences features Sunday, letting users of its Claude artificial intelligence platform share their health records with the chatbot to better understand their medical information.
The launch comes just days after rival OpenAI introduced ChatGPT Health, signaling a broader push by major AI companies into health care, a field seen as both a major opportunity and a sensitive testing ground for generative AI technology.
Both tools will allow users to share information from health records and fitness apps, including Apple’s Health app, to personalize health-related conversations. At the same time, the expansion comes amid heightened scrutiny over whether AI systems can safely interpret medical information and avoid offering harmful guidance.
Claude’s new health records functions are available now in beta for Pro and Max subscribers in the U.S., and integrations with Apple Health and Android Health Connect are rolling out in beta to the same users this week. Users must join a waitlist to access OpenAI’s ChatGPT Health tool.
Eric Kauderer-Abrams, head of life sciences at Anthropic, one of the world’s largest AI companies and reportedly valued at $350 billion, said Sunday’s announcement represents a step toward using AI to help people handle complex health care issues.
“When navigating through health systems and health situations, you often have this feeling that you’re sort of alone and that you’re tying together all this data from all these sources, stuff about your health and your medical records, and you’re on the phone all the time,” he told NBC News. “I’m really excited about getting to the world where Claude can just take care of all of that.”
With the new Claude for Healthcare functions, “you can integrate all of your personal information together with your medical records and your insurance records, and have Claude as the orchestrator and be able to navigate the whole thing and simplify it for you,” Kauderer-Abrams said.
When unveiling ChatGPT Health last week, OpenAI said hundreds of millions of people ask wellness- or health-related questions on ChatGPT every week. The company stressed that ChatGPT Health is “not intended for diagnosis or treatment,” but is instead meant to help users “navigate everyday questions and understand patterns over time — not just moments of illness.”
AI tools like ChatGPT and Claude can help users understand complex medical reports and double-check doctors’ decisions. For the billions of people around the world who lack access to essential medical care, the tools can also summarize and synthesize medical information that would otherwise be inaccessible.
Like OpenAI, Anthropic emphasized privacy protections around its new offerings. In a blog post accompanying Sunday’s launch, the company said health data shared with Claude is excluded from the model’s memory and not used for training future systems. In addition, users “can disconnect or edit permissions at any time,” Anthropic said.
Anthropic also announced new tools for health care providers and expanded its Claude for Life Sciences offerings, which focus on accelerating scientific discovery.
Anthropic said its platform now includes “HIPAA-ready infrastructure,” referring to the federal law governing medical privacy, and can connect to federal health care coverage databases, the official registry of medical providers and other services, connections the company says will ease physician and health-provider workloads.
These new features could help automate time-consuming tasks such as preparing prior authorization requests for specialist care and supporting insurance appeals by matching clinical guidelines to patient records.
Dhruv Parthasarathy, chief technology officer at Commure, which creates AI solutions for medical documentation, said in a statement that Claude’s features will help Commure in “saving clinicians millions of hours annually and returning their focus to patient care.”
The rollout comes after months of increased scrutiny of AI chatbots’ role in dispensing mental health and medical advice. On Thursday, Character.AI and Google agreed to settle a lawsuit alleging their AI tools contributed to the deteriorating mental health of teenagers who died by suicide.
Anthropic, OpenAI and other leading AI companies caution that their systems can make mistakes and should not be substitutes for professional judgment.
Anthropic’s acceptable use policy requires that “a qualified professional … must review the content or decision prior to dissemination or finalization” when Claude is used for “healthcare decisions, medical diagnosis, patient care, therapy, mental health, or other medical guidance.”
“These tools are incredibly powerful, and for many people, they can save you 90% of the time that you spend on something,” Anthropic’s Kauderer-Abrams said. “But for critical use cases where every detail matters, you should absolutely still check the information. We’re not claiming that you can completely remove the human from the loop. We see it as a tool to amplify what the human experts can do.”