AI is Here to Assist, Not Replace
By Irene Yeh
As artificial intelligence (AI) continues to enter clinical spaces, there is understandable concern that it will overtake the clinician's role and skill set. But with patients using AI themselves and arriving at appointments with their own AI outputs, healthcare professionals need to adapt and develop the skills to use these new tools.
At last month's Integrative Health Symposium, a panel discussed how AI is redefining core competencies and how it can enhance the experience for both patient and doctor. Tom Blue, founding partner of OvationLab and senior vice president of healthcare at AndHealth; Sunjya Schweig, M.D., founder and president of California Clinic for Functional Medicine; and Lexi Gonzales, ND, MS, IFMCP, senior clinical implementation and AI specialist at OvationLab, explored how AI can be used in pragmatic ways to refine workflow efficiency, gather useful insights, and develop personalized, precision care plans for patients.
“[The patients] want a doctor who uses AI,” said Blue. “And yet, if the doctor just uses AI, that’s a cheapened experience.” What sits in between is the skillset that can use the technology and turn it into something of value for both patient and doctor.
Three Core Transformations
One of the biggest misconceptions about using AI, said Schweig, is "that it's difficult or that it's somehow scary." He suggested that the misconception is rooted in the "tsunami of information" surrounding AI, which can make it difficult and intimidating to keep up. The panel covered three core transformations that can help clinicians "surf the wave" and develop the skills they need for AI.
- Friction to Flow
Identifying friction in practice is key to developing a smoother workflow. Dubbed friction mapping, the process is a "reflection tool" in which the clinician thinks through a workflow, the factors that go into it, and how long it takes to complete. Visit preparation, for example, involves all sorts of prep work: questions, labs, notes, and so on. The clinician considers how long it takes to process that data and translate it into notes, a common friction point because the work can be time-consuming and tedious. As the clinician goes about the workday, they can pinpoint where those drags are. Once these drag points are mapped, the clinician begins to develop an "intuitive eye" for smoothing things out, according to Schweig.

"I think it's a good exercise to just think about … your clinical day and start to figure out where those areas of drag are," added Gonzales.

- Data to Insight
Gonzales mentioned "pajama time," when clinicians write chart notes on their computers outside of work hours. This is where AI can step in by organizing thoughts and keeping a record of work done. By de-identifying the data and using a template or prompt, clinicians can automatically generate a complete summary of the patient's medical history, tests conducted, and treatments given, all synthesized into a condensed, concise report that is then shared with the patient.

Schweig also noted the divide between consumer-grade and medical-grade AI. Consumer-grade platforms, including ChatGPT and Claude, are not HIPAA-compliant, which is important to keep in mind when choosing a platform.

Nor is AI limited to summary reports and pajama-time notes. Schweig described an at-home sleep study that produced five different reports. His team de-identified the data, loaded each PDF into a custom GPT, and asked for normal values to be put into a table that highlighted anything abnormal, showed trends over time, and provided some analysis. All of that information was then charted and shared with the patient.
Insight amplification also applies to patients, according to Gonzales. "They are engaging with ChatGPT … [and] try to validate our recommendations based on ones they gleaned from ChatGPT."
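The sleep-study workflow described above, de-identifying the data and then flagging out-of-range values in a table, can be sketched in miniature. Everything here is an assumption for illustration: the `deidentify` scrub, the metric names, and the `NORMAL_RANGES` figures are placeholders, not the panel's custom GPT or clinical guidance, and a real practice would need a vetted, HIPAA-appropriate de-identification step rather than a couple of regexes.

```python
import re

# Illustrative reference ranges for a few sleep-study metrics (hypothetical
# values, not clinical guidance): (low, high) bounds per metric.
NORMAL_RANGES = {
    "AHI": (0.0, 5.0),               # apnea-hypopnea index, events/hour
    "min_SpO2": (90.0, 100.0),       # lowest oxygen saturation, %
    "sleep_efficiency": (85.0, 100.0)  # % of time in bed spent asleep
}

def deidentify(text):
    """Crude scrub of obviously identifier-shaped strings before text goes
    anywhere near a consumer AI tool. Illustrative only, not HIPAA-sufficient."""
    text = re.sub(r"\b\d{2}/\d{2}/\d{4}\b", "[DATE]", text)  # dates (DOB, visit)
    text = re.sub(r"\b\d{3}-\d{2}-\d{4}\b", "[SSN]", text)   # SSN-shaped numbers
    return text

def flag_abnormal(results):
    """Return (metric, value, status) rows, marking anything outside its range."""
    rows = []
    for metric, value in results.items():
        lo, hi = NORMAL_RANGES[metric]
        status = "normal" if lo <= value <= hi else "ABNORMAL"
        rows.append((metric, value, status))
    return rows

# One hypothetical report's numbers, rendered as a small table.
report = {"AHI": 12.4, "min_SpO2": 86.0, "sleep_efficiency": 91.0}
for metric, value, status in flag_abnormal(report):
    print(f"{metric:>18}: {value:6.1f}  {status}")
```

The point of the sketch is the ordering: identifiers are stripped first, and only then is the cleaned data handed to whatever tool builds the table and trend analysis.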
- Personalized Precision
AI tools can also be used to hyper-personalize care plans. Gonzales recalled a patient with markers of cardiovascular disease and prediabetes. She recommended the Mediterranean diet as a starting point, but the patient, a long-haul truck driver, was unable to cook or access fresh food during long stretches on the road.

She uploaded the Mediterranean food plan into ChatGPT and asked it to generate a list of compliant foods that could be purchased at gas stations and fast-food restaurants. The result was a cheat sheet the patient could cross-reference to buy foods from his plan at stores and restaurants along his route.

"He offered an element of trust because I wasn't rejecting his rejection of my food plan. I was meeting him where he was at," said Gonzales. She called it an enlightening experience that added great value to their relationship and empowered the patient by giving him a plan he could follow.
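The cheat-sheet idea is essentially a cross-reference: filter a food plan down to items the patient can actually buy where he is. A minimal sketch, with entirely made-up food items and availability tags standing in for the ChatGPT output Gonzales described:

```python
# Hypothetical food plan: each item tagged with venue types where it can be
# bought. Items and tags are illustrative, not the actual Mediterranean plan.
FOOD_PLAN = {
    "mixed nuts": {"gas station"},
    "grilled chicken salad": {"fast food"},
    "hummus cups": {"gas station"},
    "fresh salmon": set(),  # needs cooking; not road-accessible
    "olive-oil tuna pouches": {"gas station"},
}

def cheat_sheet(plan, venues):
    """List plan foods purchasable at any of the given venue types."""
    return sorted(food for food, where in plan.items() if where & venues)

print(cheat_sheet(FOOD_PLAN, {"gas station", "fast food"}))
```

Items with no matching venue (here, fresh salmon) simply drop off the list, which mirrors meeting the patient where he is rather than rejecting the constraint.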
Beware of Overdependence
Completely trusting what AI generates is never a good idea. The models are still prone to hallucinations and require a human eye to comb the output for errors. Gonzales added that AI can be "very convincing," offering evidence to support its answers, when the model is really just pulling keywords and stringing them together into a seemingly accurate response.
“I rarely ever accept the first response from AI,” said Gonzales. She encouraged pushing the model to give a better answer, correct any mistakes, and then input a second request to see if a more accurate answer is generated.
“The way AI phrases things sound really nice, but oftentimes, when you sit back and try to figure out what it’s saying, it makes zero sense,” she added.
Schweig also recounted a conversation with an OpenAI employee at January's J.P. Morgan Healthcare Conference. Schweig described his recent experience with ChatGPT, which seemed "worse" than before. The employee confirmed that the company is aware of the issue: while ChatGPT has improved at complex coding and math, it has gotten worse at simple day-to-day responses, and the OpenAI team is working to figure out why.
Part of the reason AI models have these issues is that they are probabilistic rather than deterministic: they string together word sequences by cross-referencing patterns in text from across the internet, explained Schweig. "To be honest, the people who are building these models don't totally understand them. They are also getting surprised."
The panel emphasized vigilance when using AI tools because of how common hallucinations are. They also acknowledged that hallucinations are easy to overlook given how much energy goes into multitasking, and stressed the importance of slowing down and carefully reading what the AI generates. Schweig also mentioned having admins read through the output to catch mistakes.
The Clinical Triad
Using AI in the clinic can feel overwhelming and intimidating, but Schweig countered that the tools are quite intuitive and require only a little practice. "Just try one or two things," Schweig said. "And your brain will adapt to it." The modern healthcare practice is no longer just clinician and patient; it is clinician, patient, and AI: the clinical triad.
Multitasking is already a skill clinicians have. During a patient visit, the clinician may have an EHR open for notes, a scribe tool running, and AI tools, such as ChatGPT or Claude, open. When a patient asks about something, the clinician can pose that question to the AI model and have it conduct a research dive while the visit is still underway. By the end of the visit, the clinician could have a chart note finished, an action plan established, and a research summary ready for review. This is called parallel thought processing: doing multiple tasks at once instead of saving them all for after the visit.
The key is to ensure that one's cognition and attention aren't fragmented while juggling these tasks. As Gonzales puts it, it takes some practice. But clinicians already have the foundational skills to adapt, use, and maximize AI to its fullest potential, gaining a convenient assistant that can take some of the load off and allow more focus on the patient.