Most U.S. Doctors Are Quietly Using This AI Tool. Few Patients Know About It.
Imagine sitting across from your doctor, describing a set of symptoms that have been worrying you for weeks. She listens carefully, nods, and then — without leaving the room or opening a textbook — types something into her computer and gets an instant, evidence-backed recommendation for the right treatment path.
What you probably didn't see: an AI tool just helped shape your medical care.
Almost two-thirds of U.S. physicians now actively use an AI platform called OpenEvidence in their daily practice. That's roughly 650,000 doctors across the country. Yet ask the average patient leaving an exam room whether AI played any role in their visit, and the answer would almost certainly be a blank stare.
We're living through one of the fastest technology shifts in medical history — and almost nobody on the receiving end of care knows it's happening.
This article pulls back the curtain. You'll learn what OpenEvidence actually does, how many doctors rely on it, the broader wave of AI tools reshaping your appointments, and why the secrecy matters for your next visit.
What Is OpenEvidence — and Why Haven't You Heard of It?
OpenEvidence is a clinical AI platform that functions like a supercharged medical research assistant. Doctors type in a clinical question — say, "What's the recommended first-line treatment for a 62-year-old with hypertension and early-stage kidney disease?" — and the tool scans millions of peer-reviewed medical studies to generate a cited, evidence-based answer in seconds.
Think of it as the AI-era equivalent of a doctor walking down the hall to consult a trusted colleague — except that colleague has read every medical journal published in the last two decades and can recall any relevant finding instantly.
The platform was designed specifically for clinicians, not consumers. Its responses are drawn from vetted medical literature rather than the open internet. And unlike general-purpose chatbots, OpenEvidence is built to answer the kind of nuanced, high-stakes clinical questions that arise in real exam rooms.
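For the curious, here is a deliberately simplified sketch of the kind of workflow described above: a clinical question goes in, a library of study summaries is searched, and an answer comes back with the supporting citations attached. This is illustrative Python, not OpenEvidence's actual software or API, and every study and journal name in it is invented for the example.

```python
# Hypothetical illustration only: NOT OpenEvidence's real code or API.
# A clinical question is matched against a tiny "library" of study summaries,
# and the answer comes back with citations attached.
from dataclasses import dataclass

@dataclass
class Study:
    citation: str   # where the finding was published (invented for this example)
    finding: str    # one-line summary of the relevant result

def answer_clinical_question(question: str, library: list[Study]) -> str:
    """Return a cited, evidence-backed summary for a clinical question."""
    keywords = {word.lower().strip("?,.") for word in question.split()}
    relevant = [s for s in library if keywords & set(s.finding.lower().split())]
    if not relevant:
        return "No matching evidence found."
    cited = [f"- {s.finding} [{s.citation}]" for s in relevant]
    return "Evidence-based summary:\n" + "\n".join(cited)

# Invented example data.
library = [
    Study("Hypothetical Journal of Hypertension, 2023",
          "ACE inhibitors are first-line for hypertension with chronic kidney disease"),
]

print(answer_clinical_question(
    "What is the first-line treatment for hypertension with kidney disease?", library))
```

The real platform is of course far more sophisticated, but the basic point stands: the answer a doctor sees is assembled from published studies, with the sources shown, rather than generated from the open internet.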
Why haven't patients heard of it? Because, for the most part, nobody told them. The tool operates behind the clinical curtain — a physician-facing technology that patients rarely see or are informed about. And that, it turns out, is a bigger deal than it might first appear.
Just How Many Doctors Are Using This Tool?
The numbers are staggering — and they've exploded almost overnight.
OpenEvidence representatives report that roughly 650,000 U.S. physicians actively use the platform, with another 1.2 million clinicians using it internationally. That places it among the most rapidly adopted professional technologies in healthcare history.
The broader trend is equally striking. According to the American Medical Association's 2026 physician survey, 81% of doctors now report using AI in their professional practice — more than double the 38% reported just three years earlier. Meanwhile, AI use among physicians surveyed by Doximity jumped from 47% in April 2025 to 63% by January 2026 — a 16-point increase in under a year.
To put this in perspective: AI adoption among doctors is outpacing the adoption curve of smartphones, electronic health records, and even the stethoscope in its day.
OpenAI's recently launched ChatGPT for Clinicians — a free tool for verified U.S. physicians, nurse practitioners, physician assistants, and pharmacists — has added fuel to an already raging fire. Physician use of ChatGPT more than doubled over the past year alone.
The bottom line: if you visited a doctor this year, there's a very high probability that an AI tool was involved somewhere in your care — whether you realized it or not.
What Doctors Actually Use It For
It would be easy to assume doctors are asking AI to diagnose patients. That's not what's happening — at least not primarily.
According to Dr. Anupam Jena, who is analyzing 90 million OpenEvidence queries submitted since 2024, 60% of all searches are about how to make clinical decisions. Physicians are asking questions like: "For this particular patient, with this profile, this condition, and these comorbidities, what's the right treatment?"
Other common use cases include:
- Literature searches and evidence reviews — the single most common physician AI use case, reported by 35% of doctors surveyed.
- Drafting patient discharge notes and care instructions.
- Creating custom study tools for board and licensing exams.
- Generating referral letters and prior authorization documentation.
- Voice-based clinical documentation using ambient AI scribes, now used by 29% of physicians (up from 20% the prior year).
The through-line here is efficiency. Doctors are drowning in administrative work — and AI is throwing them a lifeline. Three-quarters of physician AI users say the technology has reduced their administrative workload and improved job satisfaction. And 69% say it has contributed to improved patient care and outcomes.
The Broader AI Shift Happening in Exam Rooms
OpenEvidence is just one piece of a much larger transformation. If you walked into a typical U.S. clinic in 2026, multiple AI systems might be operating during your visit — often without any visible sign.
Ambient AI scribes are the fastest-growing category. These tools passively listen to the conversation between you and your doctor and automatically generate structured clinical notes. Microsoft's Dragon Copilot, Abridge, Ambience, and Suki are among the leading names, deeply integrated into major electronic health record systems like Epic and Cerner.
At UCSF Health, 70% of physicians now use an AI scribe daily. At Kaiser Permanente, more than 7,000 physicians used AI scribes across 2.5 million patient encounters in just over a year.
Beyond scribes, ChatGPT for Clinicians — launched by OpenAI in April 2026 — gives verified clinicians free access to frontier AI models optimized for healthcare, including deep research capabilities that can survey medical literature in minutes. OpenAI reports that in pre-launch testing, physicians rated 99.6% of the tool's responses as safe and accurate across thousands of real clinical scenarios.
Other notable tools reshaping exam rooms include:
- Microsoft Copilot Health, which draws on data from more than 50,000 U.S. hospitals and clinics.
- PatientGPT, a generative AI chatbot that integrates with Epic and answers patient inquiries directly through MyChart.
- OpenAI's ChatGPT for Healthcare, an enterprise platform already adopted by Boston Children's Hospital, Cedars-Sinai, and Memorial Sloan Kettering.
The AI medical scribe market alone hit $600 million in revenue in 2025 — more than any other clinical AI application category — and is projected to reach $27.8 billion by 2034, growing at a staggering 48.2% compound annual rate.
This is not a future trend. It's the present reality of American healthcare. And patients are largely walking through it blind.
Why Don't Patients Know About This?
Here's where things get uncomfortable.
In many cases, patients are never explicitly told that an AI tool is being used during their visit — to take notes, to look up treatment options, or to assist with clinical decisions. This isn't necessarily intentional secrecy. It's often a byproduct of how these tools are integrated: operating in the background, embedded in the electronic health record, invisible to the patient sitting across from the doctor.
But the gap is real and widening. A UC Davis study on AI scribe rollout found that patients strongly preferred to be informed early that an AI tool would be used, ideally during appointment scheduling or check-in. At the same time, 57% said they would want face-to-face notification and 45% said email was acceptable, suggesting that no single approach satisfies everyone.
The irony is thick: at the same time doctors are rapidly adopting AI, patients are hiding their own AI use from their doctors. A February 2026 survey found that many patients, particularly members of Gen Z, don't want their doctors to know they're using AI tools for health information and guidance. It's a mutual secrecy dynamic, and it's eroding trust on both sides.
Legal pressures are also mounting. Because ambient AI scribes record audio from clinical encounters, their use implicates wiretapping and eavesdropping laws — particularly in the 11 U.S. states that require all-party consent for recording. Recent lawsuits against health systems in California allege failures to provide adequate notice or obtain consent for AI-enabled recording.
If you're wondering whether your own doctor uses AI, you're not alone. And in states that require all-party consent for recording, you may have a legal right to be informed before an AI scribe captures your visit.
Is This a Good Thing? The Promise and the Peril
Like most powerful technologies, the answer isn't simple.
The promise is significant. Physicians spend nearly two hours on documentation for every hour of direct patient care — a burden widely cited as a leading driver of burnout. AI scribes have been shown to cut documentation time by 50–70% in controlled settings, and health systems using integrated AI workflows have reported a 70% reduction in physician burnout scores related to administrative tasks.
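To make those percentages concrete, here is a rough back-of-envelope calculation. The four-hour clinic figure is an assumption chosen purely for illustration; the 2:1 ratio and the 50-70% range come from the figures cited above.

```python
# Back-of-envelope arithmetic only; the clinic-day figure below is an assumption.
direct_care_hours = 4.0                      # assumed hours of direct patient care in a day
documentation_hours = direct_care_hours * 2  # the roughly 2:1 documentation-to-care ratio cited above

for reduction in (0.50, 0.70):               # the 50-70% documentation-time cut reported for AI scribes
    hours_saved = documentation_hours * reduction
    print(f"A {reduction:.0%} cut frees up about {hours_saved:.1f} hours of paperwork time per day")
```

Even at the low end, that is several hours a day handed back to the physician, which goes a long way toward explaining why the technology is spreading so quickly.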
When doctors aren't buried in their keyboards, they make eye contact. They listen. The human connection improves. As one physician put it: "The ambient AI scribe has brought joy back to my practice. I focus on patients, not the computer."
The peril is equally real. AI scribes can produce hallucinations — fabricated but plausible-sounding content that ends up in medical notes. Studies have found that no tested clinical AI scribe produced error-free summaries, and while errors were relatively infrequent, hallucinations and factual inaccuracies were often clinically serious when they did occur.
There is also the question of equity. Researchers at Columbia Nursing have warned that the speech recognition systems used by AI scribes are less accurate at transcribing Black patients' speech than White patients' speech, a disparity that could compound existing healthcare inequities.
And then there's the deeper worry: what happens when doctors become so dependent on AI-generated answers that their own diagnostic skills begin to erode? The AMA survey found that while 70% of physicians see AI as a tool to reduce burnout, 88% are concerned about potential skill loss — particularly among less experienced doctors.
These are not theoretical concerns. They are the live tensions of a healthcare system in the middle of a transformation it hasn't fully reckoned with.
What This Means for You as a Patient
You don't need to become an AI expert before your next checkup. But you do deserve to be informed — and you have more agency than you might think.
Here's what you can do:
Ask your doctor directly. A simple question — "Do you use any AI tools during our visits, like a scribe or a clinical decision support tool?" — is entirely reasonable. Pay attention to how they answer. Transparency is a sign of a healthy doctor-patient relationship.
Ask how your data is handled. If an AI tool is listening to your visit, what happens to that recording? Is it stored? Who has access? HIPAA-compliant tools should have clear answers.
Request to see AI-generated notes. You have a legal right to access your medical records. If AI helped write them, you should be able to review what was documented and flag any errors.
Don't hide your own AI use. If you've used ChatGPT, a symptom checker, or any AI tool to research your health before your visit, tell your doctor. Mutual honesty builds mutual trust.
The rise of AI in medicine isn't inherently good or bad. What matters is whether it's deployed transparently, with patient awareness and consent, and with robust safeguards against the known risks. Right now, we're somewhere in the messy middle — and sunlight is the best disinfectant.