Why Your Tijuana Clinic's Traffic Dropped, Even If Your SEO Reports Still Look Good

Your traffic dropped because a growing share of patient decisions now happen inside AI-generated answers, before anyone clicks through to your website.

Your rankings are probably green. Organic sessions are stable, maybe even up. The agency shipped four new blog posts and earned some backlinks. Everything on the dashboard says things are working.

So why are consultations from San Diego and Orange County down again?

I sit in these meetings at VIDA. I've sat in them for years. And the gap between what the SEO report says and what the surgery schedule shows has gotten wide enough that I stopped treating it as noise. Something structural has changed. Not in our SEO. Not in our content quality. Not even in patient demand. What changed is the path patients take between wanting a procedure and choosing a doctor.

A growing share of that path now happens inside AI-generated answers. Google AI Overviews. ChatGPT. Gemini. Perplexity. A patient types "best gastric sleeve surgeon in Tijuana" or "facelift Tijuana cost," and increasingly gets a summary, comparison, or shortlist before ever reaching your website. Your rankings didn't drop. But the click that used to follow those rankings is increasingly intercepted.

Over the last year, I've been tracking this. Testing prompts. Watching analytics. Comparing what our reports said with what our coordinators actually experienced on the ground. This article is what I found.

360: Open-web clicks per 1,000 Google searches (SparkToro, 2024)
25%: Projected drop in traditional search volume (Gartner, 2024)
527%: AI-referred healthcare traffic growth YoY, from a small base (BrightEdge, 2025)

The Zero-Click Shift: Where Did Your Patients Go?

They didn't leave. They just stopped clicking.

SparkToro and Datos published a clickstream study in 2024 that measured what happens after a Google search. For every 1,000 searches, only about 360 clicks reached the open web. Not Google-owned properties. Not ads. The open web, where your website lives. The rest stayed inside Google's ecosystem or ended without a click at all.

That measurement was taken while Google's answer layer was still evolving, before AI-driven search features became more prominent in the user experience. Before AI assistants became a routine place for users to ask health-related questions at scale.

Search Engine Land reported in early 2025, using Similarweb traffic data, that organic click share in Google continues to decline. In markets with clean measurement, organic click share dropped from 47.1% to 43.5% year-over-year. The US numbers are harder to isolate cleanly, but the directional trend appears similar. Fewer clicks per search. More answers consumed without a visit.

Most of the data cited in this article comes from studies of Google AI Overviews specifically. ChatGPT, Gemini, and Perplexity each behave differently in how they select and cite sources. But the directional pattern is similar across platforms: AI systems summarize, compare, and recommend before the patient clicks through to any website.

Now apply this to the queries that drive your business. "Gastric sleeve Tijuana cost." "Best facelift surgeon in Tijuana." "Dental implants Tijuana reviews." These aren't casual informational queries. These are decision-stage searches from patients who are actively comparing options before crossing a border. And increasingly, the comparison happens inside the answer layer. The patient may get a price range, a shortlist of doctors, and a summary of pros and cons before deciding whether your site is worth visiting. They may never visit your site, or they may visit only one site: the one the AI mentioned by name.

Your traffic didn't disappear. More of the decision now happens before the click.

The 25% Warning: Why This Increasingly Looks Structural

In February 2024, Gartner's research division projected that traditional search engine volume would drop 25% by the end of 2026 due to AI chatbots and virtual agents. That forecast came from an enterprise analyst firm putting a number on it for their Fortune 500 clients, not from a marketing blog or a social media hot take.

At the same time, BrightEdge's Data Cube report found that AI-referred traffic to websites surged dramatically over the last year, while also noting that organic search remained the primary conversion driver. The 527% growth in AI-referred healthcare traffic is a real number, but it's growing from a small base. Both things can be true at once. Organic is still the biggest channel. But AI discovery is growing fast enough to reshape behavior before it dominates the analytics dashboard.

This is the part most practice owners overlook. They look at their Google Analytics, see organic traffic holding relatively steady, and conclude that AI isn't yet relevant to their business. But the shift in patient behavior happens before it shows up in your traffic. A patient who used to visit six practice websites may now visit two because an AI system effectively pre-filtered the rest. The patient who used to click through to your pricing page now already has a price range from Perplexity. The patient who used to read your surgeon bio now has a ChatGPT summary that may or may not include your doctor's name.

Many medical tourism patients are high-research patients, especially for elective procedures with large price differences across borders. They compare doctors, costs, risks, logistics, and credentials before crossing a border. That research profile is exactly the type AI tools are built to serve. Summarize. Compare. Shortlist. Recommend.

This increasingly looks structural, not temporary. It's a shift in how medical tourists compare options.

"Your rankings didn't drop. But the click that used to follow those rankings is increasingly intercepted by an AI answer layer the patient trusts enough to skip your website entirely."

Why SEO Reports Still Look Fine While Revenue Feels Worse

Here's where it gets frustrating for operators. Your agency isn't lying to you. Rankings can remain stable. Search Console shows impressions. Maybe even growing impressions, because BrightEdge's Data Cube report found that total search impressions increased by 49% over the first year of AI Overviews. Google usage isn't declining. People are searching more, not less.

But fewer of those impressions convert to clicks. And fewer of those clicks come from patients who are still genuinely comparing. Some may have already made their shortlist inside an AI answer and are only visiting your site to confirm a detail or find your WhatsApp number.

BrightEdge also tracked citation overlap between AI Overviews and organic rankings in that same report. It increased from 32% to 54% over a matter of months. That suggests ranking well can improve your chances of being cited in AI Overviews. But 54% overlap still means nearly half of AI Overview citations go to sources that aren't the top organic result. Ranking helps. Ranking alone doesn't guarantee you appear in the answer.

The disconnect between reports and revenue comes from measuring the old funnel while the patient follows a new one. The old funnel: search, click, website, form fill, coordinator call, surgery. The new funnel, for a growing share of patients: AI prompt, AI summary, shortlist of two or three doctors, maybe one website visit, WhatsApp message, surgery.

If your measurement stops at rankings and traffic, you're watching the first stage of a process that increasingly happens somewhere else.

Old model: Search → Click → Website → Form Fill → Coordinator → Surgery
New model: AI Prompt → AI Summary → Shortlist (2-3) → WhatsApp → Surgery

Why the Coordinator Model Makes This Worse in Tijuana

This is the part that's specific to us. To Tijuana. To cross-border medical tourism.

When a US practice loses some organic traffic, it still has insurance referrals, physician networks, and hospital affiliations driving volume. For many Tijuana practices serving US patients, the main acquisition path is still online discovery followed by coordinator-led conversion. There's no insurance network. There's no PCP referral pipeline. If the patient never reaches your website or your WhatsApp, the coordinator never knows they existed.

I've watched this at VIDA. Our coordinators often manage 30 or more active patient conversations in WhatsApp at a time. They're good at converting leads. But they can only convert leads they receive. When AI answers intercept the discovery process, the lead simply never arrives. It's not that the coordinator failed to convert. It's that the patient built their shortlist inside ChatGPT, picked two practices to message, and yours wasn't one of them.

There's a compounding problem here that most traffic reports won't show you. If traffic drops 20%, that's visible. What's easier to miss is whether the remaining traffic is also lower intent, because the highest-intent patients already got their answer from the AI and acted on it. Lower volume plus lower intent can produce a consultation drop that's materially worse than the traffic decline alone would suggest. And the marketing meeting blames "soft demand."

The coordinator-driven model that made Tijuana medical tourism work is the same model that can make AI interception especially risky. Every patient who never clicks is a patient the coordinator never gets.

Why Your SEO Agency Probably Cannot Solve This With More of the Same

I'll be direct about this. I'm not here to attack SEO agencies. SEO still matters. If your site doesn't rank, AI Overviews are less likely to cite you. BrightEdge's own data shows increasing overlap between organic rankings and AI citations. That foundation still matters.

But traditional SEO optimizes for rankings, keywords, click-through, backlinks, and crawlability. What AI visibility increasingly rewards is slightly different: entity clarity (can a machine confidently identify your doctor as a distinct, credentialed physician?), structured data (can a machine read your surgeon's fellowship, board certification, and case volume?), factual density (does your content answer specific questions with specific data?), source consistency (does the same information about your doctor appear across multiple independent sources?), and review specificity (do your reviews mention doctor names, procedures, and patient origin cities?).

These are overlapping but not identical skill sets. A great SEO agency can keep you ranking. But if they're still selling you rankings as the primary proof of success, they're measuring the last stage of a search model that's already changing.

Source types in local AI recommendations (BrightLocal, 2025): business websites 58%, directory mentions 27%, other sources 15%.

BrightLocal's 2025 Local Consumer Review Survey found that business websites were the most common source type in the local AI recommendations they analyzed, ahead of directories and other third-party sources. That means your own site is still the most important source. But "important" and "sufficient" aren't the same thing. A significant share of citations comes from directories, reviews, forums, and third-party mentions you may not be tracking at all.

The question to add to "are we ranking?" is "are we being cited?"

The New KPI: Citation Share

Internally, I've started using a directional metric that I think more Tijuana practices should track. I call it Citation Share.

Citation Share means: how often does your doctor, your practice, or your pages appear in AI-generated answers for the prompts that actually drive patient decisions?

Not branded searches. Not "VIDA Wellness Tijuana." The prompts where the patient hasn't chosen you yet. "Best facelift surgeon in Tijuana." "Gastric sleeve cost Tijuana vs US." "Is it safe to get bariatric surgery in Mexico?" "Top dental implant clinics near San Diego."

Internally, we track this across ChatGPT, Gemini, Perplexity, and Google AI Overviews, counting how many times a practice or doctor appears across a defined prompt set. We run a fixed set of 30 to 40 prompts monthly, logged out, from a US-based IP, on the same device type, and track appearances over time. It's not a perfect metric. AI outputs vary by session, by phrasing, by timing. But it gives you a directional read that traffic reports can't.
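The bookkeeping behind this is simple enough to sketch. Here is a minimal version of the tallying step, with hypothetical prompts and hand-logged answer mentions; the prompt set and practice names are placeholders, not our actual tracking list:

```python
def citation_share(prompt_results, entity):
    """Fraction of tracked prompts whose AI answer mentioned the entity.

    prompt_results maps each prompt to the list of practice or doctor
    names that appeared in that prompt's AI answer (logged by hand each
    month; outputs vary by session, so treat this as directional).
    """
    if not prompt_results:
        return 0.0
    hits = sum(1 for names in prompt_results.values()
               if entity.lower() in (n.lower() for n in names))
    return hits / len(prompt_results)

# Hypothetical monthly log for three of the 30-40 tracked prompts.
log = {
    "best facelift surgeon in Tijuana": ["Practice A", "Practice B"],
    "gastric sleeve cost Tijuana vs US": ["Practice B", "Our Practice"],
    "is it safe to get bariatric surgery in Mexico?": ["Practice C"],
}
print(round(citation_share(log, "Our Practice"), 2))  # cited in 1 of 3 prompts
```

Run monthly against the same prompt set and the trend line, not any single month's number, is the signal.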

If your citation share is effectively zero across a meaningful prompt set, your practice is absent from the discovery layer that increasingly shapes patient shortlists. Your website might still rank. Your reviews might still be strong. But in the AI answer the patient reads first, you don't exist.

In our limited internal prompt testing, doctors with digitally legible profiles often surfaced more reliably than more experienced surgeons whose online credentials were poorly structured. A surgeon with 20 years of experience and a brochure-style bio gets passed over for a surgeon with five years and a structured, fact-rich profile that the AI can parse and verify. That's not fair. But it's what we observed in our own testing, and it's consistent with what the GEO benchmark research suggests about structured content outperforming unstructured content in generative search settings.

The Aggregator Problem Nobody Talks About

There's another competitor most Tijuana practices aren't watching: the facilitator sites.

Facilitator and aggregator sites such as Medical Departure, PlacidWay, MedicalTourismCo, and dozens of smaller affiliate operations publish comparison articles targeting exactly the prompts your patients use. "Top 5 Bariatric Surgeons in Tijuana." "Best Dental Clinics in Mexico." These comparison articles are often structured to answer the question directly, which can make them easier for AI systems to cite.

When a patient asks Perplexity "best gastric sleeve surgeon in Tijuana," the AI is more likely to pull from a comparison article on a facilitator site than from your own practice website. The comparison article looks neutral and comprehensive. Your website looks like marketing. That structural bias can work against your practice.

I've seen this happen with VIDA's own doctors. A facilitator site with outdated information and a three-star review of our practice appears in AI answers, while our own pages, with current credentials, hundreds of reviews, and detailed procedure information, don't get cited because they're formatted as marketing copy rather than structured medical data.

One alternative to relying on facilitator commissions is building content authoritative enough to be cited directly. Doctor pages that are machine-readable. Cost pages that answer the question before the patient needs to ask. Procedure content deep enough that the AI treats you as the primary source, not the facilitator.

The 3-Part Pivot: From Search Rankings to AI Visibility

Here's what we changed at VIDA. This isn't theoretical for us. It's what we implemented, and it's what I think the research supports.

1. Build machine-readable doctor entities.

Most Tijuana practice doctor pages are brochure copy. A photo, a paragraph, a list of procedures, a contact button. AI systems often struggle to extract reliable entity data from pages like that. What they need: full name with consistent formatting across every page and platform, medical school with graduation year, residency and fellowship details with institution names, board certifications with certifying body names, specific procedures performed, hospital affiliations, professional society memberships. All marked up with Physician schema and MedicalOrganization schema. Not buried in paragraph form. Structured so a machine can read it like a database record.

Research from Princeton, Georgia Tech, and collaborators found that certain content changes, including clearer sourcing and more specific information, improved visibility in generative search settings in their benchmark environment. In our experience, the biggest single lever for medical practices is moving doctor credentials from brochure copy into structured, verifiable data.
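For concreteness, here is the shape of a minimal Physician JSON-LD block of the kind described above, generated in Python. Every name, URL, and credential is a hypothetical placeholder, and property choices should be validated against schema.org and Google's structured data documentation before shipping:

```python
import json

# Hypothetical, minimal Physician entity. On a real page this JSON
# would be embedded in a <script type="application/ld+json"> tag.
physician = {
    "@context": "https://schema.org",
    "@type": "Physician",
    "name": "Dr. Example Surgeon",  # identical formatting on every page
    "url": "https://example-clinic.example/doctors/example-surgeon",
    "medicalSpecialty": "Bariatric surgery",  # validate against the enum
    "memberOf": {
        "@type": "MedicalOrganization",
        "name": "Example Board of Surgery",
    },
    "address": {
        "@type": "PostalAddress",
        "addressLocality": "Tijuana",
        "addressCountry": "MX",
    },
}
print(json.dumps(physician, indent=2))
```

The point is not this exact property list; it's that credentials live as named fields a machine can extract, not as sentences in a bio paragraph.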

2. Turn service pages into answer-engine content.

Your gastric sleeve page probably opens with a paragraph about how life-changing the procedure is. That isn't the part AI systems are most likely to rely on. What they need: cost range in Tijuana vs US, what's included in the package, BMI candidacy requirements, surgical technique used, recovery timeline by week, comparison with other procedures, comparison with GLP-1 medications, complication rates, and specific logistics for cross-border patients.

Answer the question first. Put specifics first. Cover real comparisons that the patient would ask an AI to summarize. If your page doesn't contain the answer, the AI will find a page that does. Often a facilitator's page.

3. Upgrade your reviews from stars to semantic proof.

A five-star review that says "Amazing doctor! Best experience ever!" gives the AI much less to work with. No procedure name. No surgeon name. No city of origin. No recovery detail. No timeline. A review that says "Dr. Rodriguez performed my gastric sleeve at VIDA. I flew from Phoenix. Down 80 lbs at 6 months" gives the AI a doctor entity, a procedure entity, a practice entity, a geographic signal, and a measurable outcome.

The operational change: implement a two-stage review request. One at discharge (captures the emotional response). One at three to four weeks post-op via WhatsApp (captures specific details, procedure names, results, logistics). The second review is usually far more valuable for AI visibility than the first.

What Tijuana Practices Need to Check This Month

This is useful whether or not you work with us. Do this today.

Your AI visibility diagnostic
Search your top three procedures in ChatGPT, Gemini, Perplexity, and Google (AI Overviews or AI Mode, depending on availability in your market). Are your doctors named? Are credentials accurate? Are you on the shortlist?
Run comparison prompts: "best [procedure] surgeon in Tijuana" and "[procedure] Tijuana vs [US city]." Note who gets cited instead of you.
Check your Bing Webmaster Tools. ChatGPT's web-browsing behavior has relied heavily on Bing indexing, so weak Bing visibility can limit discoverability.
Open your doctor pages. Can a machine read the credentials, or are they buried in a paragraph? Is there Physician schema markup?
Read your last 20 Google reviews. How many mention the doctor by name, the specific procedure, and the patient's home city? That ratio gives you a simple internal review-specificity score.
Check your English-language site separately from your Spanish site. Is the structured data implemented on both? Is the English version technically equal or an afterthought?
Search for your practice on Reddit. Reddit discussions can surface in AI-generated answers, especially for comparison and patient-experience queries. Know what patients are saying about you there.
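The review-specificity check in the list above can be scripted as a rough first pass. This naive keyword matcher only looks for doctor and procedure mentions (city detection would need a list of patient origin cities); all names below are hypothetical:

```python
def review_specificity(reviews, doctor_names, procedures):
    """Share of reviews that name both a doctor and a specific procedure.

    Naive substring matching over review text: good enough for a
    directional internal score, not for formal analysis.
    """
    def is_specific(text):
        t = text.lower()
        return (any(d.lower() in t for d in doctor_names)
                and any(p.lower() in t for p in procedures))
    if not reviews:
        return 0.0
    return sum(1 for r in reviews if is_specific(r)) / len(reviews)

reviews = [
    "Amazing doctor! Best experience ever!",  # five stars, zero entities
    "Dr. Example performed my gastric sleeve. I flew in from Phoenix.",
]
score = review_specificity(reviews, ["Dr. Example"], ["gastric sleeve"])
print(score)  # 1 of 2 reviews is specific
```

Running this over your last 20 reviews gives you the ratio the checklist asks for without reading them one by one.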

If you find that your doctors aren't named, your credentials are wrong, or your practice doesn't appear at all, that's not just an SEO problem. It's an AI visibility problem. And it means a growing share of patients are building shortlists that don't include you.

The Uncomfortable Math

Let me put this in coordinator terms, because that's how Tijuana practices actually think about revenue.

Say your practice gets 500 website inquiries per month and your coordinators convert 15% to booked procedures. That's 75 surgeries. If AI interception reduces inbound inquiries by 20% to 400 per month, that's visible. But here's the part that isn't visible: those 400 remaining leads may also convert at a lower rate, because the highest-intent patients, the ones who were ready to book, already got their shortlist from AI and chose a competitor who was cited. Conversion drops to 10%. Now you're at 40 surgeries. In a simplified model with stable revenue per case, that would mean a 47% decline in booked procedures from what looked like a 20% traffic drop.
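The arithmetic above, as a quick sanity check. The lead counts and conversion rates are the illustrative figures from the text, not benchmarks:

```python
leads_before, conv_before = 500, 0.15  # inquiries/month, booked rate
leads_after,  conv_after  = 400, 0.10  # after AI interception

surgeries_before = leads_before * conv_before           # 75 procedures
surgeries_after  = leads_after * conv_after             # 40 procedures

traffic_drop = 1 - leads_after / leads_before           # the visible 20%
surgery_drop = 1 - surgeries_after / surgeries_before   # the hidden ~47%

print(f"{traffic_drop:.0%} traffic drop -> {surgery_drop:.0%} fewer surgeries")
```

Because volume and conversion fall together, the revenue decline is multiplicative, which is why it outruns what the traffic chart shows.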

That's the math that makes the marketing meeting confusing. "Traffic only dipped a little" and "revenue dropped significantly" can both be true at the same time if the quality of traffic changed along with the volume.

I can't claim this exact ratio applies to every practice. There are too many variables. But the dynamic is real, and I've watched it play out in our own numbers. Lower volume and lower intent compound. And the SEO report doesn't capture either signal well.

What Comes Next

Traditional SEO still matters. Rankings still matter. Your website still matters. BrightEdge's own data confirms that organic search remains the primary conversion driver, and that ranking well increases your chances of appearing in AI Overviews.

But ranking alone no longer guarantees attention, clicks, or consideration. The patient journey for cross-border medical tourism now includes a layer that summarizes, compares, and recommends before your site loads. If your practice is absent from that layer, you're competing for a shrinking share of clicks while your competitor gets named in the answer.

Google has described AI Mode as using "query fan-out," where a single question can trigger multiple supporting queries behind the scenes. One prompt from a patient generates several searches you never see in your analytics. The sources cited in that fan-out are the ones that shape the answer. If your content isn't structured to be found, parsed, and cited in that process, you can lose visibility before the patient ever reaches your site.

Open ChatGPT right now. Type "best [your specialty] surgeon in Tijuana." See what comes back. Then try Gemini. Then Perplexity. Then Google with AI Overviews or AI Mode on.

If your name isn't there, your next patient may never reach your website. Not because your SEO failed. Because the question was answered before they needed to click.

Is AI recommending your competitor instead of you?

We test 20+ real patient prompts across ChatGPT, Gemini, Claude, and Perplexity. Full visibility report in 48 hours. Free.

Get your free AI audit