What is GEO? The Medical Tourism Clinic's Guide to Generative Engine Optimization

Generative Engine Optimization is the practice of structuring your clinic's digital presence so AI systems can more easily recognize, retrieve, and reference your doctors when patients ask for recommendations.

I could leave it at that definition and let you Google the rest. But I'm writing this because I spent the better part of a year figuring out what GEO actually means when you're the one implementing it. Not from a blog post. Not from a conference talk. From inside a clinic in Tijuana where the leads were drying up and nobody could tell me why.

If your organic numbers have been slipping lately, not dramatically, more like a faucet that's slowly losing pressure, this may be one of the reasons.

What happened to our leads

I run marketing and sales for one of the largest medical tourism clinics in Tijuana. Multiple specialties, multiple doctors, significant ad spend, years of SEO investment. By every traditional metric we were in good shape.

Then the numbers started moving in the wrong direction. Not a collapse. Just a consistent downward drift in organic inquiries that nobody on the team could explain. Rankings hadn't changed much. Reviews were still strong. Content was still going out. But fewer patients were coming through the door from organic channels.

It took us a while to figure out what was going on. The answer, once we found it, was embarrassingly simple: patients had changed where they ask for recommendations. They weren't typing "best bariatric surgeon in Tijuana" into Google anymore. They were asking ChatGPT. And ChatGPT had no idea our doctors existed.

That realization sent me down a rabbit hole that eventually led to everything we now do under the name GEO.

The scale of the shift

I'm not going to dump a wall of statistics on you. But there are a few numbers worth sitting with because they explain why this isn't a temporary trend.

BrightEdge published their AI Traffic Report for the first half of 2025. AI-referred sessions to healthcare websites grew 527% year-over-year. Even allowing for a small base, that's hard to ignore.

Capgemini ran a study that same year and found 58% of consumers were using AI tools instead of traditional search engines for some product and service searches.

Meanwhile, Google watched all this happen and responded by putting AI-generated answers (what they call AI Overviews) at the top of search results in over 200 countries. So even patients who still use Google are now reading an AI-generated answer before they see a single organic link.

For Tijuana specifically, the Baja Health Cluster estimates over one million medical and wellness visitors arrive in the region every year. Most of them are Americans who live 20 minutes away in San Diego. These are people who ask their phone a question while sitting on the couch. And increasingly, that question goes to an AI model, not a search engine.

If a growing share of your potential patients are asking AI for recommendations, and AI doesn't know your clinic exists, you have a leak in your pipeline that no amount of traditional SEO will fix.

So what is GEO, technically

The term actually comes from a 2024 research paper out of Princeton, Georgia Tech, and IIT Delhi, published at KDD, one of the top conferences in data science. They coined the term "Generative Engine Optimization" and built a benchmark to test which content strategies actually improve visibility in AI-generated responses.

In their benchmark, content with embedded statistics improved citation rates by up to 41%. Content with source references improved by up to 28%. And overall, their GEO strategies boosted visibility by up to 40%.

Those findings suggest that AI systems respond better to content that is factual, sourced, and structured for extraction, which is exactly what the vast majority of clinic websites are not. Most clinic websites say "world-class care" and "experienced team" and "state-of-the-art facilities." None of that gives AI anything to cite.

GEO, in practice, means rebuilding your digital presence so that every important fact about your doctors, your procedures, your credentials, and your outcomes is structured in a way that AI can find it, verify it, and use it in a response.

Some people call it AEO (Answer Engine Optimization). Others call it LLMO (Large Language Model Optimization). The label doesn't matter. The principle is the same: if you want to be the answer AI gives to a patient, you have to give AI something worth citing.

How GEO differs from SEO

If you've been doing SEO, keep doing it. GEO doesn't replace SEO. It sits on top of it.

But the mechanics are different and it's worth understanding why.

SEO is a competition for position on a list. You optimize your page, you build links, you target keywords, and you try to get Google to rank you higher than the clinic down the street. The patient sees 10 links and clicks one. Your job is to be in those top 10.

GEO is a competition to be the answer itself. There's no list. AI pulls from the information available to it, weighs what appears most relevant and credible, and generates a response. Sometimes it mentions three clinics. Sometimes it mentions one. Sometimes it mentions none. There are no rankings to track. There's just: did AI cite you, or didn't it?

In SEO, a patient has to click your link and visit your site to learn anything about you. In GEO, the patient learns about you directly from the AI's response. Your name, your credentials, your pricing, your location. All delivered before they ever visit a website.

Traditional SEO                     | GEO (Generative Engine Optimization)
Optimizes for Google rankings       | Optimizes for AI citations
Patient sees a list of 10 links     | Patient sees one synthesized answer
Patient clicks a link               | Patient reads the answer directly
Backlinks, keywords, page speed     | Schema, entities, factual density
Best-optimized page wins            | Most extractable, authoritative content wins
Measure rankings and CTR            | Measure mentions, citations, AI visibility

What we've observed about AI visibility signals

Nobody outside OpenAI, Google, or Anthropic knows exactly how these models decide what to cite. The Princeton paper gave us a framework. Everything else comes from practitioners testing, observing, and comparing notes.

What I can tell you is what we've seen after months of rebuilding the digital presence of multiple doctors across different specialties. These patterns have been consistent enough that we now treat them as our operating framework.

Extractable content. AI needs statements it can pull directly into a response. "Dr. Rodriguez is a board-certified bariatric surgeon with a PhD, FACS fellowship, dual US-Mexico licensure, and over 7,800 procedures" is something AI can work with. "Our experienced team provides world-class care" is not. Most clinic websites are full of the second kind and empty of the first.

Schema markup. This is machine-readable code embedded in your website that tells AI what each piece of content actually means. Physician schema, MedicalOrganization, MedicalProcedure, FAQPage. Without it, the content may still be usable, but it's less clearly labeled for machines. With it, every credential, every procedure, every FAQ answer is labeled and organized for extraction. A minimal sketch of what this markup looks like appears just below these signals.

Review specificity. Not just volume and rating, though those matter. What seems to really move the needle is the content of the reviews. "Dr. Quiroz performed my deep plane facelift at VIDA Wellness. Natural results, minimal bruising, flew in from San Diego." That gives AI several useful signals. "Great doctor, 5 stars" gives it far less to work with.

Entity recognition. AI needs to understand your doctor as a distinct person tied to specific procedures, credentials, and locations. Not just a name on a staff page. A recognizable professional entity with a consistent profile across the web. Without their own website, their own Google Business profile, and consistent directory listings, the AI has very little to build from.

Cross-platform consistency. AI checks multiple sources. If your doctor's credentials are different on their personal site, Healthgrades, Doctoralia, and Google Business, the AI can't build a reliable profile. Every platform needs to say the same thing. A toy version of this audit is also sketched below.

Freshness. Active publishing signals an active practice. Stale content signals an inactive one. AI appears to weight recently published material more heavily, especially for time-sensitive queries.

These are our observations, not gospel. But they've held up across every implementation we've done.
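
To make the schema point concrete, here is a minimal sketch of the kind of JSON-LD a doctor's profile page can carry. The Physician, MedicalOrganization, and FAQPage types are real schema.org vocabulary, but every name, URL, and detail below is a placeholder, not our actual markup.

```python
import json

# Minimal JSON-LD for a doctor's profile page, using schema.org's
# Physician and FAQPage types. All names, URLs, and details are
# placeholders -- swap in real, verifiable information.
physician = {
    "@context": "https://schema.org",
    "@type": "Physician",
    "name": "Dr. Example Rodriguez",  # hypothetical
    "url": "https://example-clinic.com/dr-rodriguez",  # hypothetical
    # A schema.org MedicalSpecialty enumeration value:
    "medicalSpecialty": "https://schema.org/Surgical",
    "address": {
        "@type": "PostalAddress",
        "addressLocality": "Tijuana",
        "addressRegion": "Baja California",
        "addressCountry": "MX",
    },
    "memberOf": {
        "@type": "MedicalOrganization",
        "name": "Example Clinic Tijuana",  # hypothetical
    },
    # Point machines at the same entity everywhere it lives on the web:
    "sameAs": [
        "https://www.healthgrades.com/example-profile",  # hypothetical URLs
        "https://www.doctoralia.com/example-profile",
    ],
}

faq = {
    "@context": "https://schema.org",
    "@type": "FAQPage",
    "mainEntity": [{
        "@type": "Question",
        "name": "Is Dr. Rodriguez board-certified?",
        "acceptedAnswer": {
            "@type": "Answer",
            "text": "Yes. Dr. Rodriguez is a board-certified bariatric "
                    "surgeon with dual US-Mexico licensure.",
        },
    }],
}

# Each block is embedded in the page inside a
# <script type="application/ld+json"> tag.
for block in (physician, faq):
    print(json.dumps(block, indent=2))
```

That script tag is the difference between a machine reading "board-certified" as a labeled credential and reading it as marketing copy.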
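
And here is a toy version of the cross-platform audit. In practice we collect what each platform displays by hand; the sketch only shows the comparison logic, and all of the data is invented.

```python
# Toy consistency audit: compare the credential fields each platform
# shows for one doctor and flag any disagreement. All data is invented.
profiles = {
    "personal_site":   {"name": "Dr. Ana Example", "specialty": "Facial plastic surgery",
                        "phone": "+52 664 000 0000", "clinic": "Example Clinic Tijuana"},
    "google_business": {"name": "Dr. Ana Example", "specialty": "Plastic surgery",
                        "phone": "+52 664 000 0000", "clinic": "Example Clinic Tijuana"},
    "healthgrades":    {"name": "Ana Example, MD",  "specialty": "Facial plastic surgery",
                        "phone": "+52 664 000 0001", "clinic": "Example Clinic"},
}

fields = sorted({field for profile in profiles.values() for field in profile})
for field in fields:
    values = {platform: profile.get(field) for platform, profile in profiles.items()}
    if len(set(values.values())) > 1:  # platforms disagree on this field
        print(f"MISMATCH on '{field}':")
        for platform, value in values.items():
            print(f"  {platform}: {value}")
```

Every mismatch it prints is a field where an AI trying to build your doctor's profile gets conflicting answers.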

Why this matters more in Tijuana than most places

Tijuana has something most medical tourism destinations don't: a massive gap between the quality of care and the quality of digital infrastructure.

The medical talent in Tijuana is stronger than its digital footprint suggests. The Baja Health Cluster calls Tijuana the number one medical destination in Mexico for international patients. Over one million visitors a year. Procedures cost 40-70% less than in the US. Many doctors trained in American residency programs. The city is 20 minutes from San Diego. Patients can get surgery and recover across the border from home.

All of that is exactly the kind of factual, structured, verifiable information that AI loves to work with. The raw material for AI visibility is already there.

The problem is that almost nobody has organized it. Most clinic websites in Tijuana are brochure sites. No schema. No individual doctor profiles. Generic reviews. Published research disconnected from current practice. Academic credentials buried in PDFs that no AI can read.

The gap between what Tijuana surgeons have accomplished and what AI can see about them is probably the widest in any major medical tourism market. That's the bad news. The good news is that clinics that fix it early may have an outsized advantage.

What the implementation looks like

When we built this out for doctors at our clinic, it wasn't a weekend project. There's no shortcut and no tool that automates it. It's layered work that compounds over time.

The foundation is entity building. Personal websites for each doctor with complete schema markup. Individual Google Business profiles. Consistent information across every directory (RealSelf, Healthgrades, Doctoralia). Published research linked to current practice. In our experience, this usually takes about two weeks, and it makes everything else work better.

On top of that goes content architecture. FAQ hubs with the answer first and context second. Procedure pages with real pricing, real recovery timelines, real outcomes data. Blog posts that include statistics and cite their sources. Content that exists to be useful, not to fill a marketing calendar.

Then reputation infrastructure. A review process that guides patients to mention specific details. Consistent velocity. Active management across platforms. This never stops.

And monitoring. We test 20+ prompts per doctor across ChatGPT, Gemini, Claude, and Perplexity every month. We document what changed. We adjust. Most people skip this part, which is why most people can't tell you whether their GEO work is actually doing anything.
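
For anyone building that monitoring habit, here is a skeleton of what a monthly check can look like. The ask_model function is a deliberate placeholder for whichever official API client you wire in per platform; treat the whole thing as an illustrative sketch, not our exact tooling.

```python
import csv
import datetime

# Skeleton of a monthly AI-visibility check. `ask_model` is a placeholder:
# connect it to each platform's official API client. The prompts and the
# doctor's name below are illustrative.
PROMPTS = [
    "Best bariatric surgeon in Tijuana",
    "Deep plane facelift in Tijuana: cost and recovery",
    # ...a real run covers 20+ prompts per doctor
]
MODELS = ["chatgpt", "gemini", "claude", "perplexity"]
DOCTOR = "Dr. Example Rodriguez"  # hypothetical

def ask_model(model: str, prompt: str) -> str:
    """Placeholder: call the platform's API and return its answer text."""
    raise NotImplementedError(f"wire up the {model} client here")

def run_audit(path: str = "ai_visibility_log.csv") -> None:
    """Ask every model every prompt and log whether the doctor is mentioned."""
    today = datetime.date.today().isoformat()
    with open(path, "a", newline="") as f:
        writer = csv.writer(f)
        for model in MODELS:
            for prompt in PROMPTS:
                answer = ask_model(model, prompt)
                mentioned = DOCTOR.lower() in answer.lower()
                writer.writerow([today, model, prompt, mentioned])

if __name__ == "__main__":
    run_audit()
```

The output is just a dated CSV of which model mentioned which doctor for which prompt. Month over month, that file is what tells you whether the work is doing anything.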

What changed for us

We started this work in late 2025. Within 60 days of implementation, we began seeing previously invisible doctors appear in our prompt testing across AI platforms. Not generically. With their credentials, procedure counts, pricing, and direct links to their sites.

I wrote about one case in detail: two facelift surgeons at our clinic who trained under Dr. Bruce Connell, a surgeon widely credited with pioneering the deep plane facelift technique. Between them they do over 40 deep plane facelifts a month. And ChatGPT had never heard of either of them. That article walks through exactly what was broken and how we fixed it.

The patterns held across every specialty we worked on. The doctors with structured data, complete entity profiles, and detailed reviews showed up. Everyone else stayed invisible. It wasn't luck. It was architecture.

What to do right now

Open ChatGPT. Type "Best [your specialty] in Tijuana." See what happens. Try Gemini. Try Perplexity. Check Google and look at the AI Overview above the regular results.

If you're there, check that the information is accurate and complete. If you're not, you just saw the problem with your own eyes. And now you have a framework for understanding what needs to change.

Gartner forecasts that traditional search engine volume will drop 25% by the end of 2026. The clinics building AI visibility now may benefit from a compounding advantage. The ones that wait may find it harder to dislodge practices that established visibility earlier.

Most clinics in Tijuana haven't started. That's the window. It won't last, but right now it's wide open.

Want to see where your clinic stands?

We test 20+ real patient prompts across ChatGPT, Gemini, Claude, and Perplexity. Full report in 48 hours. Free.

Get your free AI audit