US patients are screening Tijuana surgeons through layered AI prompts that evaluate credentials, pricing, logistics, niche expertise, and review patterns in a single conversation. Five prompt structures keep appearing in our testing, and they're reshaping which practices get recommended and which get skipped entirely.
In 2019, a patient in Phoenix considering a facelift in Tijuana would have typed something like "best facelift surgeon Tijuana" into Google, scrolled through a few results, clicked on three websites, and maybe filled out a contact form.
Today, that same patient may open ChatGPT or Google Search and type: "I'm looking for a board-certified plastic surgeon in Tijuana who specializes in deep plane facelifts, has strong recovery reviews, and is close to the San Ysidro border crossing."
That's not a simple search query. It's a multi-criteria evaluation prompt. And it changes everything about how your practice needs to be structured online.
You're no longer competing only for rankings on broad keywords. You're competing to be retrieved inside high-intent patient prompts that stack multiple criteria at once: specialty, credentials, logistics, price, and reputation. If your practice isn't easy for AI systems to interpret, you may be missing some of the highest-intent patients crossing the border.
I started paying attention to this shift about a year ago when our coordinators at VIDA began reporting something strange. Leads were arriving with more information than we'd given them. They'd reference specific surgeons by name, quote price ranges we hadn't advertised, and ask about details buried deep in our website. When we asked how they found us, the answers were vague. "I just researched online." But the specificity of their questions told a different story. They hadn't browsed. They'd been briefed.
Something had changed in how US patients were building their shortlist. And the more I looked into it, the clearer it became: a growing number of high-intent patients were using AI to screen us before we ever knew they existed.
From Search Queries to Patient Prompts
This shift didn't happen overnight. Google had been training users to ask longer, more natural questions for years. BERT in 2019, MUM in 2021, and then the integration of AI Overviews directly into search results. Longer, more natural search behavior was already increasing on mobile before ChatGPT launched. Patients always wanted to ask complex, layered questions. The old interface just forced them to break those questions into five separate searches.
AI removed that friction. Now the patient can express the full decision framework in one interaction.
With OpenAI reporting over 200 million weekly active users in 2024, millions are already using ChatGPT to map out complex health and wellness decisions. That's no longer niche behavior. For many patients, it's becoming part of the healthcare decision process. Google executives described at Google I/O 2024 how AI-powered search sessions tend to involve longer, more conversational queries, with a meaningful share of users asking follow-up questions within the same session.
In practice, that means this: when a patient types "Compare the cost, credentials, and recovery support of the top bariatric surgeons in Tijuana," the AI doesn't just look for one page that answers all of that. It fans out. It may pull credentials from one source, pricing from another, and reviews from a third, then assemble a summary. If your practice has that information structured and accessible, you're in the comparison. If it doesn't, you're far less likely to appear.
This isn't limited to ChatGPT power users anymore. Google's AI Overviews and AI Mode are now delivering AI-mediated answers to patients who just "Google it" the way they always have. The patient doesn't need to adopt a new tool. The tool they already use is changing underneath them.
Why These Prompts Matter More Than Generic Traffic
A complaint I hear often from practice owners in Tijuana is: "We're getting leads, but they ask about price and disappear." The standard interpretation is that these are low-quality leads. Price shoppers. Tire-kickers.
But there's a more structural explanation, and it's one most practices haven't considered.
Many of these patients are high-intent. They're collecting data points to feed into a broader comparison process. In the old model, they'd fill out a form, get a call, and enter a sales funnel. In the current model, they might collect your price via WhatsApp, your competitor's price the same way, then go to ChatGPT and ask "compare these two clinics based on what I've found." The lead may not be low-quality. They may have used your coordinator as an information source, then made the decision in a layer you can't see.
Your next patient may not search like a user anymore. They may screen you like an investigator.
The conversion didn't necessarily fail at the coordinator stage. It may have failed at the AI comparison stage, where your entity wasn't strong enough to win.
This is particularly relevant for self-pay patients, which describes the vast majority of medical tourism patients. There's no insurance company pre-vetting the surgeon for them. They bear the full trust burden themselves. AI becomes the tool that helps them do what insurance would have done in the US: evaluate, compare, and filter.
In our prompt testing, AI comparison prompts tend to center on narrow, decision-stage questions. They're designed to eliminate options fast. A single ChatGPT session can take a patient from "I'm interested in bariatric surgery in Tijuana" to "I'm going to contact these three clinics" in fifteen minutes. If you're not in that response, you may never enter consideration.
The 5 High-Intent Prompt Patterns Reshaping Medical Tourism
We run approximately 50 to 100 prompts monthly across ChatGPT, Claude, Perplexity, and Google AI Mode, focused on US-to-Tijuana medical tourism queries. Five prompt patterns repeatedly surface.
Results vary by session, location, and model version, but the structural patterns below have been consistent across months of testing.
1. The Safety and Credentials Prompt
What the patient asks:
"Is Dr. [Name] board-certified?" / "Did Dr. [Name] train in the US?" / "Any red flags on this surgeon in Tijuana?" / "Who is a board-certified plastic surgeon in Tijuana who specializes in facelifts, near San Diego?"
This is often where a high-intent patient starts. Not price. Not logistics. Trust. And the reason is rational. They're considering surgery in a foreign country. They may feel they have less legal and institutional protection than they would in the US. There's no insurance company vouching for the surgeon. They need to verify credentials themselves, and AI is the fastest way to do it.
Here's the problem. Mexican board certification (like the Consejo Mexicano de Cirugía Plástica, Estética y Reconstructiva for plastic surgeons) can be rigorous and highly relevant, but it's often less familiar to American patients, and AI systems may not always interpret those credentials clearly when they're inconsistently labeled online. When a patient asks "Is Dr. X board-certified?" the AI may not recognize a Mexican board certification with the same confidence it recognizes an ABPS certification. Some Tijuana surgeons have US training, dual credentials, or fellowship experience, but those distinctions need to be stated clearly and consistently online. Too often this information is buried in a brief bio paragraph or listed as abbreviations that neither patients nor AI can easily parse.
If your credentials aren't clearly stated, AI systems may favor providers whose qualifications are easier to interpret. Not a better surgeon. A more structured one.
What your practice needs: Individual doctor pages with credentials in plain text. Not abbreviation soup. Physician schema markup where appropriate. Training lineage stated explicitly ("Fellowship at Cleveland Clinic under Dr. [Name]," not "advanced training at leading institutions"). Consistent bios across your website, Google Business Profile, and any third-party directories.
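To make that concrete, here's a minimal sketch of what structured credential data can look like. Every name, credential detail, and URL in it is a placeholder, and the exact schema.org property choices should be validated against current structured data guidelines before you deploy anything.

```python
import json

# Minimal sketch of structured credential data for an individual doctor page.
# Every name, credential, and URL here is a placeholder, not a real surgeon.
physician = {
    "@context": "https://schema.org",
    "@type": "Physician",
    "name": "Dr. Example Surgeon",
    "medicalSpecialty": "PlasticSurgery",
    "description": (
        "Board-certified plastic surgeon. Certified by the Consejo Mexicano "
        "de Cirugía Plástica, Estética y Reconstructiva. Fellowship in "
        "facial plastic surgery at Example University Hospital."
    ),
    "hasCredential": [
        {
            "@type": "EducationalOccupationalCredential",
            "credentialCategory": "Board Certification",
            "name": "Consejo Mexicano de Cirugía Plástica, Estética y Reconstructiva",
        }
    ],
    "url": "https://www.example-practice.com/dr-example-surgeon",
}

# Paste the output into a <script type="application/ld+json"> tag
# on the surgeon's individual bio page.
print(json.dumps(physician, indent=2, ensure_ascii=False))
```

The markup is only the machine-readable layer. The same facts still need to appear in plain text on the page itself, so patients and AI parsers read the same story.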
2. The Price Transparency Prompt
What the patient asks:
"How much does gastric sleeve in Tijuana cost total?" / "What is included in a mommy makeover package in Tijuana?" / "Compare the total cost of All-on-4® in Tijuana vs San Diego."
In our testing, gastric sleeve and All-on-4 are two of the procedures that generate the most price-comparison prompts. The patient pool is large, the provider count is high, and the price variance between practices is relatively narrow. This makes AI comparison prompts a natural decision tool.
AI can summarize and compare more effectively when it finds clear, broken-down, consistent pricing information. Opaque pricing creates weaker comparison content. If your competitor lists "Gastric sleeve package: $4,900, includes surgeon fee, hospital stay, anesthesia, labs, nutritionist, and airport transport" and your site says "contact us for pricing," you may already be behind in the comparison before the patient ever contacts you.
What your practice needs: Pricing FAQ pages. Package breakdowns that specify what's included and what's not. Recovery, facility, anesthesia, transport, hotel, labs. All in plain text, not behind a form or a PDF.
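Here's a minimal sketch of how that coordinator Q&A can become a structured pricing FAQ, with FAQPage markup generated from the same plain-text question-and-answer pairs that appear on the page. All figures and inclusions below are invented for illustration; substitute your real packages.

```python
import json

# Placeholder Q&A pairs -- swap in your practice's real questions, answers,
# and prices. These numbers are invented for illustration.
qa_pairs = [
    (
        "How much does a gastric sleeve package cost in total?",
        "Our gastric sleeve package is $4,900 USD and includes the surgeon fee, "
        "hospital stay, anesthesia, pre-op labs, a nutritionist consultation, "
        "and round-trip transport from San Diego airport.",
    ),
    (
        "What is not included in the package price?",
        "The package does not include your flight to San Diego, extra hotel "
        "nights beyond the two included, or optional post-op garments.",
    ),
]

# Build FAQPage markup from the same plain-text Q&A published on the page.
faq_page = {
    "@context": "https://schema.org",
    "@type": "FAQPage",
    "mainEntity": [
        {
            "@type": "Question",
            "name": question,
            "acceptedAnswer": {"@type": "Answer", "text": answer},
        }
        for question, answer in qa_pairs
    ],
}

print(json.dumps(faq_page, indent=2, ensure_ascii=False))
```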
3. The Niche Procedure Match Prompt
What the patient asks:
"Who is most experienced in deep plane facelift in Tijuana?" / "Which surgeon in Tijuana is known for ethnic rhinoplasty in thick-skin patients?" / "Best revision bariatric surgeon in Tijuana" / "Best dentist in Tijuana for full-mouth restoration with Straumann implants"
This is where generic bio copy fails completely. AI doesn't just look for a surgeon. It looks for a surgeon associated with the exact problem being described. A general "Plastic Surgery in Tijuana" page that mentions facelifts in a bullet list won't compete with a dedicated page explaining deep plane facelift technique, candidacy, recovery, and surgeon experience.
First-time medical tourists tend to ask broader questions. But repeat patients and patients deeper in the decision cycle ask hyper-specific prompts: "Who does SADI-S revision after sleeve in Tijuana?" or "Best surgeon for capsular contracture revision in TJ." These tend to be high-intent prompts. And they require content depth, not breadth.
What your practice needs: Dedicated pages for niche procedures. Real clinical detail, not marketing copy. Case-based explanation. Terminology that patients and AI both recognize. If a surgeon has substantial documented experience with a procedure, that information should appear on the relevant procedure page. Not on a general bio page buried among a dozen other specialties.
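The same pattern applies to the niche procedure pages themselves. A minimal sketch, again with placeholder details, of marking up a dedicated deep plane facelift page so the procedure terminology is machine-readable as well as patient-readable:

```python
import json

# Placeholder data for a dedicated niche-procedure page. The surgeon name
# and URL are invented for illustration.
procedure_page = {
    "@context": "https://schema.org",
    "@type": "SurgicalProcedure",
    "name": "Deep Plane Facelift",
    "alternateName": ["Deep plane rhytidectomy"],
    "description": (
        "A facelift technique that repositions the deeper SMAS layer rather "
        "than tightening skin alone. Performed by Dr. Example Surgeon, with "
        "candidacy, technique, and recovery detailed on this page."
    ),
    "url": "https://www.example-practice.com/deep-plane-facelift-tijuana",
}

print(json.dumps(procedure_page, indent=2, ensure_ascii=False))
```

Listing the clinical synonyms patients actually type (deep plane, SMAS) is what connects a hyper-specific prompt to your page instead of a generic specialty page.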
4. The Logistics and Recovery Prompt
What the patient asks:
"Which Tijuana clinic helps with border crossing?" / "Who offers transportation from San Diego airport?" / "How long do I need to stay after gastric sleeve in Tijuana?" / "What happens if I have a complication after I go home?"
The medical tourism patient isn't buying just the procedure. They're buying the complete system. And the "near the border" filter in these prompts is more nuanced than most practices realize. San Ysidro is one of the busiest land border crossings in the world. Wait times can vary significantly depending on the day, time, and crossing method. Sitting in a car for two hours after abdominal surgery is medically relevant. Patients know this, and they ask about it.
The post-op question is particularly important. A common medical tourism concern is what happens after the patient goes home. Practices that have robust post-op protocols, telemedicine follow-ups, US-based partner physicians for emergencies, or 24/7 nursing support often don't describe these online because they consider them operational details rather than marketing content. But these are exactly the details that tend to appear in AI recommendations.
What your practice needs: Travel pages. Recovery timelines by procedure. Specific border crossing guidance, including which crossing to use, estimated conditions, and whether patients may benefit from SENTRI or other faster-entry options where applicable. Transport details from specific San Diego locations. Post-op protocol described in plain text: how many follow-ups, telemedicine or in-person, emergency contacts, nursing support hours. If your logistics aren't described clearly online, AI may not include them in the recommendation.
5. The Review Consensus Prompt
What the patient asks:
"What do patients complain about most with Dr. [Name]?" / "Summarize the reviews for this practice" / "Are there recurring complaints about recovery, billing, or communication?" / "Any red flags about [practice name]?"
This is the prompt pattern that catches practices most off guard. AI tools may summarize visible review patterns across public sources, although the quality and completeness of those summaries varies. And patients are increasingly asking adversarial questions as a final trust gate. "Any lawsuits against [practice name]?" or "Is [practice] safe or a scam?" are real prompts. They represent the moment right before booking, and most practices have no strategy for them.
Most practices ask for reviews either too early or too late in the patient journey. At discharge, the patient is still swollen or medicated. Weeks later, motivation has faded. The reviews they do get tend to be generic. Five stars, no substance. These reviews are far less useful because they lack the details AI systems can associate with procedure, surgeon, and recovery context.
What your practice needs: Review volume and review specificity. Consistent brand responses to both positive and negative reviews. A review process that encourages detail (surgeon name, procedure, origin city, recovery experience) without sounding scripted. And for the adversarial prompts: proactive content that addresses concerns directly. Publish clear, factual information about safety protocols, facility accreditation, complication rates, and aftercare systems. When you don't publish that information, you cede the narrative to sources you don't control.
Why Traditional Marketing Often Misses These Prompts
This is not an argument that Meta ads or Google Ads stopped working. They didn't. If you're running paid campaigns that generate leads, keep running them. The issue is that those channels solve a different part of the problem.
Meta ads can create awareness. Google Ads can capture explicit demand. But neither automatically makes your practice legible inside an AI-generated comparison workflow. The patient who sees your Instagram ad and then asks ChatGPT to compare you against two competitors is using two different systems. One you control. One you don't.
BrightLocal's Local Search Ecosystem report (2024) found that business websites were the source category behind roughly 58% of citations in ChatGPT-generated local answers, though the exact percentage depends on the dataset and prompt set used. What matters is the direction: your website can influence what AI retrieves. But it has to contain the right information in the right structure. A beautiful site with gated pricing, abbreviated credentials, and no logistics content is optimized for human browsing, not AI retrieval.
There's another issue. In practice, AI answers often appear to rely on a limited set of easily retrievable sources about Tijuana medical tourism: Medical Tourism Association content, major news features, Patients Beyond Borders, RealSelf, BariatricPal, and a handful of aggregator sites. If aggregator pages become the main sources AI systems surface about your practice, you have less control over how your brand is framed. The comparison is happening on someone else's terms.
This is the gap between traditional SEO/performance marketing and AI retrieval. Call it GEO, AI visibility, or structured retrieval readiness. The goal is the same: make your practice easier for AI systems to interpret accurately. It's not a replacement for what you're already doing. It's the missing layer.
What to Optimize If You Want to Show Up in These Prompts
If you've recognized gaps in your own practice's content while reading those five prompt patterns, here's where to start. Very little of this requires new technology. Most of it requires extracting information your coordinators already share with patients every day over WhatsApp and putting it on your website in a structured, retrievable format.
The coordinator bottleneck is worth emphasizing here. Most Tijuana practices run their patient pipeline through 2 to 8 bilingual coordinators who answer the same questions hundreds of times per month. What's included in the package? How long is recovery? Where do I stay? How do I cross the border? This repetitive Q&A is exactly the content that should be on the website in structured, AI-retrievable format. But it usually isn't, because the coordinators handle it verbally and the website was built as a marketing brochure, not an information system. Your most valuable content lives in WhatsApp conversations that most AI models cannot easily access.
One operational issue that gets overlooked is entity confusion. Many Tijuana practices inadvertently operate under multiple names across platforms. "Dr. Garcia's Clinic" on Google Maps, "Garcia Surgical Center" on the website, "Clínica García" on Mexican directories. For AI models trying to build a coherent entity, this fragmentation makes it much harder to aggregate reviews, mentions, and credentials into a single trust profile. Consolidating your digital identity across platforms is low-effort, high-impact.
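A minimal sketch of what that consolidation can look like in markup, with placeholder names and URLs: declare one canonical entity, list known name variants openly, and point sameAs at every off-site profile so AI systems can tie the fragments together.

```python
import json

# Placeholder entity data. Pick one canonical name, list known variants
# openly, and link every off-site profile with sameAs.
canonical_entity = {
    "@context": "https://schema.org",
    "@type": "MedicalClinic",
    "name": "Garcia Surgical Center",
    "alternateName": ["Dr. Garcia's Clinic", "Clínica García"],
    "url": "https://www.example-garcia-surgical.com",
    "sameAs": [
        "https://maps.app.goo.gl/EXAMPLE",
        "https://www.facebook.com/example.garcia.surgical",
        "https://www.example-directory.mx/clinica-garcia",
    ],
}

print(json.dumps(canonical_entity, indent=2, ensure_ascii=False))
```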
A New Marketing Question for Surgeons
For the last decade, the standard marketing question in medical tourism was: "Am I ranking?" Then it became: "How many leads did we get this month?" Both still matter. But there's a third question now, and it's the one most practices aren't asking.
What information is an AI system likely to surface when a patient asks about my specialty? Does my name come up? Are my credentials presented accurately? Is my pricing visible or do I get skipped because a competitor listed theirs? Am I being compared fairly, or is an aggregator page framing my practice on terms I didn't choose?
We've seen better lead quality and fewer trust-friction moments when coordinator messaging matches what patients already saw online. Consistency between what AI says and what the coordinator says builds trust. Inconsistency breaks it instantly. The patient arrives having been "briefed" by AI, and when the coordinator contradicts that briefing (different price, different package inclusions, different credentials emphasis), the lead goes cold.
The patients asking these layered AI prompts aren't the ones you can afford to lose. They tend to be educated, systematic, and comfortable with technology. They're using AI precisely because they want to be thorough, not because they're lazy. A patient asking detailed follow-up questions about recovery risks, altitude considerations, or complication protocols is usually well past the awareness stage. They're not a tire-kicker. They're an investigator.
The next generation of medical tourism demand may not begin with a keyword search. It may begin with a layered AI prompt that compares trust, cost, specialty, and logistics in a single conversation. If your entity isn't built for that environment, you may be invisible before the patient ever visits your website.
Your patients are already asking AI detailed questions about your specialty, your credentials, your reviews, and your prices. The question is whether you know what it's saying back.
Test the exact prompts your patients are likely to use. Try "Best [your specialty] in Tijuana." Then try the harder version: "Any red flags about [your practice name]?" Include pricing comparisons, credential checks, and adversarial queries. See what comes back. That's your starting point.
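If you want to make that audit repeatable rather than a one-off, here's a minimal sketch in Python using OpenAI's API as one example model. Everything in it, including the practice name, is a placeholder, and the raw API won't perfectly replicate what a patient sees in ChatGPT with browsing, so treat it as a month-over-month trend signal, not a mirror.

```python
# Minimal prompt-audit sketch. Assumes the openai Python package is installed
# and OPENAI_API_KEY is set in the environment. Rerun the same battery
# periodically and across other models, since answers vary by session
# and model version.
from openai import OpenAI

PRACTICE = "Example Surgical Center"  # placeholder -- use your real name

PROMPTS = [
    "Best bariatric surgeon in Tijuana for a patient from San Diego",
    f"Is {PRACTICE} board-certified? Any red flags?",
    f"Compare {PRACTICE} with other Tijuana clinics on price, credentials, "
    "and recovery support",
]

client = OpenAI()

for prompt in PROMPTS:
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": prompt}],
    )
    answer = response.choices[0].message.content
    mentioned = PRACTICE.lower() in answer.lower()
    print(f"PROMPT: {prompt}")
    print(f"MENTIONED: {mentioned}")
    print(answer)
    print("-" * 60)
```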