
What to Do When Patients Say They Want to Check With ChatGPT First

Emilio Alcolea · May 4, 2026
    The 30-second take. When a patient leaves the consultation saying she wants to check with ChatGPT first, the consultation is not over. It has moved into the AI second-opinion window. The clinic that wins that window uses a simple loop: welcome the AI check in the room, give the patient better prompts, send a follow-up with verification links, and make sure the clinic's public sources can defend the answer. We call this the AI Second-Opinion Loop. It turns ChatGPT from a perceived threat into a third-party validator. This article shows what the coordinator should say, what the prompt sheet should include, what to send after the consultation, and what your website needs to have ready before the patient ever asks.

    1. The phrase that ended the old consultation funnel

    A patient finishes the consultation. She has the printed quote, she liked the surgeon, the result photos look good. On the way out she says, “I’m going to check with ChatGPT and get back to you.”

    Two years ago that sentence did not exist. The patient said “let me think about it,” which meant she was going to ask her partner, look at her budget, and probably circle back if the price felt right. Now the same sentence has a third party in it. A model. And the model is going to produce a synthesized answer about your clinic, your surgeon, the procedure, the safety, and the comparison set, in seconds, with reasoning, sometimes with citations.

    I learned how much this had changed while running marketing at VIDA Wellness & Beauty Center. We started asking every patient in consultation how they had heard about us, and answers like “ChatGPT mentioned you” or “I asked ChatGPT and it told me to look at you” kept coming up. By early 2026, in our own consultation tracking, some version of the AI second-opinion phrase was showing up in roughly one in three consultations. That is operator-reported, not an industry benchmark, but it changed how we trained the team.

    The old funnel was: marketing brings the lead, the coordinator qualifies, the surgeon consults, the patient decides. The new funnel adds a step between consultation and decision. The patient consults, then validates with AI, then decides. If your clinic is invisible, misrepresented, or thinly cited in that AI validation step, you lose leads you already paid to acquire.

    This article is the playbook for that step. What the coordinator should say in the room. What the follow-up should look like. What the site needs to have ready before the patient even arrives. The work compounds, and the clinics doing it now will be 12 months ahead of the ones that wait.

    2. Why this is now part of every consultation, not just some

    Three things made the AI second opinion universal in 2025-2026, and none of them are reversing.

    The first is patient anxiety. Crossing the border for surgery, going under anesthesia in another country, recovering in a hotel near a clinic: those decisions sit inside a higher anxiety budget than a domestic procedure does. AI is the cheapest, fastest, and most anonymous validator she has access to. She can ask the question she would never ask the coordinator and get an answer in 30 seconds at 11 PM in her hotel room.

    The second is reach. AI is no longer a separate destination the patient has to seek out. ChatGPT, Gemini, Perplexity, and Google AI features are now part of normal research behavior. The patient may start with Google, move to ChatGPT, check Perplexity for sources, and come back to your coordinator with screenshots from all three.

    The third is trust transfer. The 2025 KFF Health Tracking Poll found that roughly one in six US adults already turn to AI tools for health information, with the share rising fastest among adults under 50, exactly the demographic that drives plastic surgery, bariatric, and dental tourism volume. The patient is not asking AI because she trusts AI more than her surgeon. She is asking AI because AI is the only second opinion she can get without scheduling another consultation.

    The combined effect is that the AI second-opinion conversation is no longer optional, segment-specific, or limited to early adopters. It is the default behavior of the patient demographic that drives medical tourism revenue.

    The consultation does not end when the patient walks out. It ends when the model agrees with you.

    3. The AI Second-Opinion Loop

    The clinics winning this conversation run a five-step loop, in order, across the room and the 48 hours that follow it.

    Step | What the clinic does | Why it matters
    1. Welcome it | The coordinator says the AI check is a good idea. | Lowers defensiveness and keeps trust in the room.
    2. Shape it | The coordinator hands over neutral prompts. | Helps the patient ask better questions instead of vague ones.
    3. Support it | The follow-up includes verification links and source notes. | Gives the patient sources AI may miss or underweight.
    4. Defend it | The website, reviews, and third-party profiles align with the consultation. | Makes the AI second opinion consistent with what the patient was told.
    5. Re-engage | The coordinator invites the patient to send screenshots or questions. | Turns the AI check into a reason to continue the conversation.

    Each step is independently easy. The compound effect is what most clinics miss. Skip step 1 and the patient hides the AI check. Skip step 2 and she asks vague prompts that produce vague answers. Skip step 3 and she validates against thin sources. Skip step 4 and the AI answer contradicts the consultation. Skip step 5 and the conversation dies in silence.

    The next four sections walk through the operational version of each step.

    4. What coordinators should say in the room

    This is the highest-impact three minutes of the entire funnel and almost no clinic has trained for it.

    When the patient says “I’ll check with ChatGPT first,” most coordinators do one of three things. They smile politely and say “sure, take your time.” They get defensive and say “don’t trust ChatGPT, it makes things up.” Or they pretend they did not hear it and push the booking. All three lose the patient.

    The right response welcomes the AI check explicitly and shapes it.

    Here is the script the coordinator can use, adapted from what we built at VIDA:

    “Good idea, a lot of patients do that. If you want, I can give you a sheet with the four or five questions that will give you the most useful answers. ChatGPT is decent for the basics but it gives much better answers when you ask it specific things instead of general ones. Send me what comes back if you have questions. We are happy to compare.”

    Three things happen with that response.

    The patient stops feeling like she is doing something behind your back. She is now doing something with you. That is the trust shift.

    The coordinator gets to plant the prompts. The patient was going to ask AI either way. By handing over a prompt sheet, you shape the research path instead of leaving the patient with vague prompts and random comparisons. Better prompts produce more specific answers, and specific answers are easier to verify.

    The coordinator sets up the follow-up. “Send me what comes back” is permission to re-engage 24-48 hours later when the AI answer is fresh and any inaccuracies can be addressed.

    That last part matters because most coordinators lose the patient in the silence between consultation and decision. The AI prompt sheet creates a reason for the patient to come back into the conversation, and it does not feel like a sales follow-up. It feels like a research partnership.

    What to say vs what not to say

    The hardest part of training a coordinator for this conversation is unlearning the defensive script she already has. Use this table as a reference card.

    Patient says | Do not say | Say this instead
    “I want to check with ChatGPT first.” | “Don’t trust ChatGPT.” | “Good idea. A lot of patients do that. Let me give you the prompts that usually produce the most useful answers.”
    “ChatGPT said another doctor is better.” | “That’s wrong.” | “Send me the screenshot. Let’s look at what sources it used and whether the comparison is using the same procedure and credentials.”
    “ChatGPT gave me a different price.” | “Our price is the correct one.” | “AI often pulls old or generic ranges. Your quote is based on your procedure plan, operating time, anesthesia, and recovery needs.”
    “ChatGPT could not find your doctor.” | “That doesn’t matter.” | “That tells us the public profile needs to be clearer. I can send you the verification links directly.”

    The pattern is consistent. The defensive answer treats AI as a competitor. The right answer treats AI as a research partner the patient brought along.

    5. What prompts to give the patient

    The prompt sheet is one piece of paper or a one-screen text message. Five prompts max.

    Prompt type | Example | Why this prompt
    Open shortlist | “Who are the best plastic surgeons in Tijuana for a deep plane facelift?” | Tests whether your clinic appears at all when the patient asks generally. If you do not appear, you have a foundation problem.
    Specific surgeon | “Is Dr. [Your Name] in Tijuana a good option for [procedure]?” | Tests whether AI can defend a specific recommendation about you. If yes, you have entity strength.
    Comparison | “Compare Dr. [Your Name] with two other surgeons in Tijuana for [procedure].” | Tests whether AI can hold your name up against direct competitors with reasoning.
    Safety / credentials | “What are the credentials and safety record of Dr. [Your Name] in Tijuana?” | Tests whether AI can find your CMCPER, hospital affiliation, and case context. If the answer is thin, your bio is undercooked.
    Cost transparency | “What is the typical cost range for [procedure] in Tijuana with Dr. [Your Name]?” | Tests whether AI is repeating consistent pricing or contradicting itself across sources.

    Replace the bracketed placeholders with the surgeon’s real name and the patient’s specific procedure before handing over the sheet. Personalization matters here. A generic prompt sheet looks like marketing material. A prompt sheet with the surgeon’s name and the patient’s exact procedure looks like a custom checklist.
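    If the front desk produces these sheets often, the personalization step is easy to automate. A minimal Python sketch of that substitution, where the surgeon name and procedure passed in at the bottom are hypothetical placeholders:

```python
# Minimal prompt-sheet generator mirroring the five prompts in the table above.
PROMPT_TEMPLATES = [
    "Who are the best plastic surgeons in {city} for a {procedure}?",
    "Is Dr. {surgeon} in {city} a good option for {procedure}?",
    "Compare Dr. {surgeon} with two other surgeons in {city} for {procedure}.",
    "What are the credentials and safety record of Dr. {surgeon} in {city}?",
    "What is the typical cost range for {procedure} in {city} with Dr. {surgeon}?",
]

def build_prompt_sheet(surgeon: str, procedure: str, city: str = "Tijuana") -> str:
    """Return a numbered, patient-ready prompt sheet as plain text."""
    lines = [
        f"{i}. {template.format(surgeon=surgeon, procedure=procedure, city=city)}"
        for i, template in enumerate(PROMPT_TEMPLATES, start=1)
    ]
    return "\n".join(lines)

if __name__ == "__main__":
    # Hypothetical example values for illustration only.
    print(build_prompt_sheet("Example Surgeon", "deep plane facelift"))
```

    The output is the same five neutral prompts, already carrying the patient’s procedure and the surgeon’s name, ready to print or paste into a text message.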

    Do not include prompts that are leading, like “Why is Dr. X the best.” Those produce positive-sounding answers that the patient will not trust because they sound canned. Stick to neutral, fact-finding prompts that put your clinic in front of AI’s actual evaluation surface.

    A note on framing: the prompt sheet should not tell the patient what decision to make. It should help the patient ask clearer questions and verify the same facts discussed in consultation. That keeps the conversation honest and keeps the clinic on the right side of how this scales.

    If you have done the foundation work, the AI answer will be defensible. If not, the prompt sheet is also a diagnostic tool that tells you exactly which gap to fix first. The deeper version of this same diagnostic is the 5 × 5 × 3 AI Visibility Test, which is the structured form of what the patient is doing informally with one prompt at a time.

    6. What to send after the consultation

    The 24-48 hours after a consultation are the AI validation window. Most clinics do nothing in that window because they are waiting for the patient to “get back to us.” That is a mistake. Some patients go further and upload your written quote into ChatGPT to ask whether the price is reasonable, which adds another verification surface to plan for.

    Here is the follow-up template, broken down by element so coordinators can adapt it without losing the structure.

    Follow-up element | Example wording | Purpose
    Thank-you line | “Thanks again for coming in yesterday.” | Keeps tone personal, not transactional.
    Prompt handoff | “Here are the prompts I’d recommend running on ChatGPT or Perplexity to validate what we discussed.” | Shapes the AI check without sounding defensive.
    Verification links | “Two pieces of context that don’t always make it into AI answers: [CMCPER profile + link] and [hospital affiliation page + link]. Both are verifiable independently.” | Gives the patient sources to validate against.
    Screenshot invitation | “Send me what comes back if anything looks confusing or contradicts what we talked about.” | Creates a natural re-entry point.
    Stability note | “The AI answer changes every few weeks as new sources are indexed. The underlying facts about the surgeon’s certifications, hospital affiliation, and where the procedure is performed do not change.” | Pre-empts confusion when the answer shifts in a follow-up check.
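    For clinics that template this inside a CRM or messaging tool, a minimal sketch that assembles the five elements into one message. The function name, parameters, and link values are hypothetical placeholders, not a fixed script:

```python
# Minimal follow-up assembler for the five elements in the table above.
# All wording here is illustrative; coordinators should adapt the voice.
def build_followup(patient_name: str, prompt_sheet: str,
                   cmcper_url: str, hospital_url: str) -> str:
    return "\n\n".join([
        f"Hi {patient_name}, thanks again for coming in yesterday.",
        "Here are the prompts I'd recommend running on ChatGPT or Perplexity "
        "to validate what we discussed:\n" + prompt_sheet,
        "Two pieces of context that don't always make it into AI answers: "
        f"the CMCPER profile ({cmcper_url}) and the hospital affiliation page "
        f"({hospital_url}). Both are verifiable independently.",
        "Send me what comes back if anything looks confusing or contradicts "
        "what we talked about.",
        "One note: the AI answer changes every few weeks as new sources are "
        "indexed. The underlying facts about certifications, hospital "
        "affiliation, and where the procedure is performed do not change.",
    ])
```

    Pairing this with the prompt-sheet generator from section 5 means every follow-up carries the patient’s own prompts rather than a generic list.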

    Three things this follow-up does that a generic “following up to see if you have questions” email does not.

    It gives the patient the prompts. In our consultation tracking, about half of patients did not actually run the AI test until prompted. By handing over the prompts in a follow-up, you increase the rate at which the AI validation step actually happens with your shaping rather than without it.

    It pre-empts the most common AI failure modes. If your clinic is missing a piece of information that AI answers consistently get wrong, you address it in the follow-up before the patient runs the prompt. The patient does not feel sold to. She feels prepared.

    It creates a citation hook. Linking to your CMCPER profile, your hospital affiliation page, and any third-party verification source means the patient who clicks through is now exposed to your strongest signals before the AI conversation even starts.

    Want to see how AI describes your clinic before your next patient does?

    The Free AI Visibility Scorecard runs the 5 × 5 × 3 test on your clinic, scores each answer against a structured rubric, and tells you exactly which gaps the AI second opinion is exposing. Delivered in 24 hours.

    7. What your website must support before the patient asks

    The coordinator script and the follow-up template only work if the AI answer the patient actually gets is defensible. That defensibility is built on the site, not in the consultation.

    Source element | What it must include | Why AI needs it
    Canonical doctor bio | Full name, specialty, credentials, hospital affiliation, sameAs links to LinkedIn, CMCPER profile, and directory listings | Builds a stable doctor entity AI can recognize and defend across surfaces.
    Procedure pages | Patient-language procedure names (deep plane facelift, gastric sleeve), candidacy criteria, safety protocols, recovery timeline, price range | Connects the doctor entity to the specific patient prompt.
    Hospital and accreditation context | Facility name, location, accreditation status when applicable, anesthesia setup, emergency protocols | Supports the safety questions cross-border patients ask AI most often.
    Procedure-specific reviews | Doctor name, specific procedure, patient origin, recovery context, outcome timeline | Gives AI extractable patient evidence rather than generic stars.
    Pricing language | Clear range and what changes the quote (procedure plan, operating time, anesthesia, recovery needs) | Prevents AI from pulling random or outdated numbers across sources.
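    One concrete way to publish the sameAs links from the canonical bio row is schema.org Physician markup. A minimal sketch that emits the JSON-LD; every name and URL below is a hypothetical placeholder to be replaced with the real surgeon’s details:

```python
import json

# Minimal schema.org Physician JSON-LD for the canonical doctor bio page.
# Every value below is a hypothetical placeholder; swap in the real surgeon
# name, specialty, hospital, and profile URLs before publishing.
doctor_jsonld = {
    "@context": "https://schema.org",
    "@type": "Physician",
    "name": "Dr. Example Surgeon",
    "medicalSpecialty": "PlasticSurgery",
    "hospitalAffiliation": {
        "@type": "Hospital",
        "name": "Example Hospital",
        "address": {
            "@type": "PostalAddress",
            "addressLocality": "Tijuana",
            "addressCountry": "MX",
        },
    },
    "sameAs": [
        "https://www.linkedin.com/in/example-surgeon",
        "https://cmcper.example/registry/example-surgeon",
        "https://directory.example/dr-example-surgeon",
    ],
}

# Paste the output into the bio page inside
# <script type="application/ld+json"> ... </script>
print(json.dumps(doctor_jsonld, indent=2))
```

    The sameAs array is what ties the bio page, the board profile, and the directory listings into one doctor entity instead of three lookalike names.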

    The full guide on AI visibility for Tijuana surgeons goes deeper on each of these as the five fundamentals. The summary version, for the purposes of this article, is that the consultation playbook only works if the foundation is in place. Without the foundation, the prompt sheet becomes a fast diagnostic of how much trouble you are in.

    8. Common mistakes clinics make in the AI second-opinion window

    After watching this play out across consultations at VIDA and during Tersefy diagnostic engagements, five mistakes recur.

    Treating AI as the enemy. Coordinators trained to deflect with “don’t trust ChatGPT, it makes things up” lose the patient. The patient is going to use AI either way. Fighting the tool only positions the clinic as defensive.

    Not training the coordinator. The coordinator is the only person who hears the AI second-opinion phrase before the patient leaves. If she is not trained, she defaults to whatever script she has used for years, and the moment passes. Training is a 30-minute conversation and a printed prompt sheet. Most clinics still have not done it.

    Sending follow-ups without prompts. A “following up to see if you have questions” email is a missed opportunity. The patient already has questions. They are addressed to ChatGPT, not to you. The follow-up has to enter the AI conversation, not run parallel to it.

    Not auditing what AI actually says. Most surgeons have never run the prompt sheet on themselves. They assume AI is generally positive about their clinic because they assume AI is generally accurate. Both assumptions are often wrong, and the surgeon discovers this only when a patient mentions specific wrong information she got from ChatGPT in the consultation. If competitors keep showing up in your prompts and you do not, the answer is in the sources AI is using for them, not in the model itself.

    Letting it stay a marketing problem. The AI second-opinion conversation is owned by marketing in most clinics, which means it gets ignored when marketing is busy with paid ads. It is actually a sales operations problem because it lives inside the consultation. Whoever runs the consultation flow has to own the AI piece, with marketing supporting on content and structured data.

    The coordinator who treats AI as a threat loses the room. The coordinator who treats AI as a validator keeps the patient in the conversation.

    9. What we have seen change when clinics install this loop

    These are operator-reported observations from VIDA over a 12-month measurement window and from Tersefy diagnostic engagements with other clinics, not independently audited results or guarantees. The pattern is still useful because it shows what to watch for once the loop is installed.

    Behavior shift | What it looks like | Why it happens
    Faster decision cycle | Patients come back sooner after consultation | The AI answer matches the consultation instead of reopening doubt.
    Less price-only comparison | Patients ask fewer “can you match this quote” questions | The validation step shifts focus toward safety, credentials, and fit.
    Better follow-up re-entry | Coordinators have a natural reason to re-engage | The screenshot or AI answer becomes the next conversation.
    More AI-attributed inbound | More patients mention ChatGPT, Gemini, or Perplexity in consultation | The clinic appears more often during pre-consultation research.

    The combined effect is that AI visibility stops being a content problem and becomes an operations problem. The room, the follow-up, and the site reinforce each other. None of the three works in isolation.

    10. How to start this week without rebuilding everything

    You do not need to install the full Loop before Friday. Three moves get you to most of the value.

    Move | Owner | Time required | Output
    Run the 5 × 5 × 3 self-test on your clinic | Marketing or owner | 30-60 minutes | Baseline visibility sheet across 5 prompts × 5 platforms × 3 runs
    Train the coordinator on the room script | Sales manager or owner | 30 minutes role-play | Room script + printed prompt sheet at front desk
    Update the canonical doctor bio | Marketing + web | 1-2 hours | Plain-text bio with full credentials, sameAs links, hospital affiliation

    Three moves, executable inside one week, and they unlock the rest of the playbook because they create the loop between room, follow-up, and site. Everything else, the procedure pages, the review system, the structured data, the citation building, compounds on top of these three.
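    For the first move, a minimal sketch of a logging harness for the self-test. The platform list and the ask_model stub are assumptions, not part of any official tool; pasting answers in by hand works just as well as wiring up APIs:

```python
import csv
from datetime import date

# Use your real prompt sheet here; two shortened examples shown as placeholders.
PROMPTS = [
    "Who are the best plastic surgeons in Tijuana for a deep plane facelift?",
    "Is Dr. [Your Name] in Tijuana a good option for [procedure]?",
]
# Assumed platform list; adjust to wherever your patients actually check.
PLATFORMS = ["ChatGPT", "Gemini", "Perplexity", "Google AI Overviews", "Copilot"]
RUNS = 3  # the "3" in 5 x 5 x 3: repeated runs catch answer instability

def ask_model(platform: str, prompt: str) -> str:
    """Hypothetical stub: paste in the answer you got from each platform."""
    return input(f"[{platform}] {prompt}\nPaste the answer: ")

def run_self_test(clinic_name: str, out_path: str = "visibility_baseline.csv") -> None:
    """Record whether the clinic is mentioned, per platform, run, and prompt."""
    with open(out_path, "w", newline="", encoding="utf-8") as f:
        writer = csv.writer(f)
        writer.writerow(["date", "platform", "run", "prompt", "clinic_mentioned"])
        for platform in PLATFORMS:
            for run in range(1, RUNS + 1):
                for prompt in PROMPTS:
                    answer = ask_model(platform, prompt)
                    writer.writerow([
                        date.today().isoformat(), platform, run, prompt,
                        clinic_name.lower() in answer.lower(),
                    ])

if __name__ == "__main__":
    run_self_test("Example Clinic")
```

    The resulting CSV is the baseline visibility sheet from the table above: one row per prompt, per platform, per run, with a simple mentioned-or-not flag to track over time.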

    Want a deeper read on which gap to close first?

    The Cross-Border GEO Audit runs the 5 × 5 × 3 test at scale, maps the source pattern explicitly, and delivers a gap inventory tied to specific URLs you can hand to your dev or marketing team. $997, 3 business days, credited toward GEO Setup if you continue within 30 days.

    Quick answers

    What should a coordinator say when a patient mentions checking ChatGPT?

    Welcome it. Say: “Good idea. A lot of patients do that. Let me give you the prompts that usually produce the clearest answers.” Then hand over a prompt sheet and invite the patient to send screenshots if anything looks confusing.

    Is it bad if a patient checks ChatGPT after the consultation?

    No. It is bad if the patient checks ChatGPT and your clinic does not appear, or appears with wrong information, or appears without sources. The check itself is now part of the buying process. Your job is to make sure the answer the patient gets is one you can defend.

    How long does it take for AI to update after I fix a bio or a directory listing?

    Two to four weeks for most surfaces. ChatGPT and Perplexity update fastest because they pull from recent grounding. Gemini and Google AI Overviews lag because they rely more on Google’s primary index. Plan your fixes at least a month before any campaign that depends on them.

    Should I send verification links before the patient runs her own AI check?

    Yes. Send the link, with three or four specific prompts that will make your strongest case. The patient is going to ChatGPT either way. You can either pretend it is not happening or you can shape what the answer looks like.

    What is the single most important fix to make AI describe my clinic correctly?

    A canonical bio in plain text, on your own site, with credential context, hospital affiliation, procedure list, and sameAs links to your LinkedIn, board profile, and directory listings. That single page is what most AI answers about you will be built from.

    Can I block AI from indexing my site if I do not want it scraping?

    You can block AI crawlers with robots.txt, but clinics that depend on patient acquisition usually need the opposite: clean, structured, verifiable information that AI tools can read and cite.
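    For reference, the blocking itself is a few lines of robots.txt. The user agents below are crawler names these companies have published, though the list keeps changing and should be re-checked before relying on it:

```
# robots.txt - opt out of common AI crawlers (crawler names change over time)
User-agent: GPTBot
Disallow: /

User-agent: Google-Extended
Disallow: /

User-agent: PerplexityBot
Disallow: /
```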

    Does this matter if my patients are mostly local and not US-based?

    Less, but not none. AI second-opinion behavior is most pronounced in cross-border medical tourism because trust friction is highest. Local patients still check Google, GBP reviews, and increasingly AI summaries. The mechanics are the same. The volume is lower.

    What to do next

    The shift is happening with or without you. The patient is going to check with ChatGPT, Gemini, or Perplexity before she signs the consent form. The only variable is whether the answer she gets is one your clinic can defend.

    Run the 5 × 5 × 3 self-test this week. Train the coordinator on the room script. Update the bio. Those three moves install the Loop. The deeper foundation work, covered in the full guide on AI visibility for Tijuana surgeons, is what stabilizes the Loop over time.

    Clinics that built this loop in 2024 are 18 months ahead of clinics starting now. The cost of waiting is not paid in dollars. It is paid in patients who walked out of your consultation, asked AI, got an answer that was not yours, and never came back.

    Sources

    • KFF Health Tracking Poll (2025). Health information seeking through AI tools. Cited inline in section 2.
    • Aggarwal, P. et al. (2024). GEO: Generative Engine Optimization. Princeton University. arXiv:2311.09735. Reference for the entity-and-citation framing throughout.
    • AirOps (2025). Source-attribution study across LLM-grounded answers. Reference for the source-update lag in Quick answers.
    • Patients Beyond Borders (2024). Medical Tourism Statistics & Facts, Mexico chapter. Reference for cross-border patient behavior context.
    • Mexican Council of Plastic, Aesthetic, and Reconstructive Surgery (CMCPER), public certification registry. Referenced in sections 5, 6, 7.
    • Tersefy internal observations (2025-2026), VIDA Wellness & Beauty Center reference implementation, n = 4 surgeons, 12-month measurement window. Cited inline in sections 1 and 9.
    Version history (2 versions)
    • v2.0 (2026-05-04): Editorial consolidation. Named the AI Second-Opinion Loop framework, added 4 tables, removed unsourced reach claims, softened manipulative phrasing, fixed broken markdown, added compliance note.
    • v1.0 (2026-05-04): Initial publication. Clinic playbook for the post-consultation AI validation window.
    Emilio Alcolea
    Author

    Founder, Tersefy. Former Head of Marketing & Sales at VIDA Wellness & Beauty Center (Tijuana's largest medical tourism clinic) and Washington Vascular Specialists (USA). Built AI visibility systems for 5 surgeons, taking them from invisible to AI-recommended in 6 months.
