The VIDA Case Study: How 4 Invisible Surgeons Became AI-Visible in 6 Months

It was late 2024 when I typed "Dr. Alejandro Quiroz" into ChatGPT and got nothing back.

Not a wrong answer. Not a partial match. Nothing. The model had no idea who he was. I tried "Dr. Juan Carlos Fuentes." Nothing. "Dr. Carlos Castaneda." Nothing. "Dr. Gabriela Rodriguez Ruiz." Nothing. Four surgeons. Four blank responses. I sat there looking at the screen trying to reconcile what I knew about these doctors with what one of the most widely adopted AI research tools returned about them.

What I knew: Quiroz, Fuentes, and Castaneda are all fellows of Dr. Bruce Connell, a pioneer of the deep plane facelift technique. Between them, they perform hundreds of procedures a year at VIDA Wellness and Beauty Center, among the larger medical tourism centers in Tijuana. Dr. Gabriela Rodriguez Ruiz holds an MD, a PhD, is a Fellow of the American College of Surgeons, and has performed more than 7,800 bariatric procedures across her career. By the credentials and case experience visible to us, these are among the more qualified surgeons in the city.

What ChatGPT returned in our tests: no useful information.

Disclosure: Tersefy led this implementation for VIDA Wellness and Beauty Center. VIDA is a client. The observations, methodology, and results described here reflect our direct work and internal tracking.

I ran the same names through Gemini. Through Perplexity. Through Google's AI Overviews. Same result everywhere. These doctors were not surfacing in the AI systems many patients now use during provider research. The problem was never the doctors. It was the page.

Our late-2024 baseline, in three numbers:

0 AI mentions across 40+ unique prompts (500+ total executions across platforms).
4 highly credentialed surgeons, completely invisible.
7,800+ procedures by Dr. Rodriguez Ruiz alone.

Here is exactly what we found when we audited their digital presence, how we built the infrastructure to fix it over six months, and the results we measured. I'll be specific about the implementation and honest about the limitations of what we can claim.

What We Found When We Audited the Doctor Profiles

Each doctor had a page on the VIDA website. Each page was a brochure-style bio: professional headshot, a paragraph summarizing their credentials, a list of procedures they perform, and a contact button. Beautiful for a patient scrolling on their phone. A weak format for systems trying to extract and verify entity data.

This wasn't a mistake. It was a product of the era these pages were built in. Between roughly 2008 and 2020, Tijuana medical tourism growth was largely intermediated by facilitator platforms like PlacidWay and Medical Departures. Clinics grew by being excellent at the clinical product and outsourcing digital distribution to intermediaries. The website's job was to reassure patients after a facilitator had already referred them, not to be independently discoverable. These pages weren't built with machine parsing or entity verification in mind.

But that's exactly what started happening. And when we audited each doctor's digital presence, we found seven specific problems.

The 7 problems we found in the initial audit
1. Name inconsistency across platforms. "Dr. Alejandro Quiroz" on the website, "Dr. A. Quiroz" on a directory, "Alejandro Quiroz Gutierrez" on LinkedIn. Five platforms, potentially five different entities in the eyes of AI.
2. Zero structured data. No Physician schema. No hasCredential markup. No medicalSpecialty property. No sameAs links. The page looked professional to humans but exposed little structured data for parsers and search systems.
3. Credentials buried in paragraph form. "Dr. Quiroz is a board-certified plastic surgeon who trained under Dr. Bruce Connell..." Beautiful sentence. AI may extract those details from a paragraph, but the result is less reliable than explicit sections, labels, and structured data.
4. No credential verification links. The page said "board certified" without consistently naming the certifying body or linking to a verification source.
5. Reviews were emotional, not entity-rich. Hundreds of five-star reviews saying "Amazing experience!" but very few mentioning the doctor by name, the specific procedure, or the patient's city of origin.
6. No individual Google Business Profile. All reviews went to the VIDA clinic listing. Individual surgeon entities had no independent presence.
7. The Connell fellowship lineage was invisible. Arguably their strongest differentiator, and it existed only as a clause in a bio paragraph. No structured data. No entity connection.

That last point deserves emphasis. Dr. Bruce Connell is a well-documented surgeon in the literature and in online professional references. He published extensively on the deep plane technique, trained fellows internationally, and is referenced across PubMed and surgical education records. Connecting the VIDA surgeons to Connell through explicit biographical and training data may help systems associate them with a well-documented surgical lineage. Most Tijuana practices can't replicate this advantage. VIDA's surgeons were sitting on it and it was buried in a sentence.

The pattern held across all four doctors. These weren't marketing problems. They were primarily information-architecture and verification problems. Much of the evidence a machine might use to assess these doctors already existed, but it was fragmented and inconsistently presented. It just wasn't in a format machines could reliably extract and verify.

The 6-Part Implementation: What We Actually Built

We spent six months building what I think of as entity infrastructure. Not content. Not ads. Not social media. Infrastructure. The kind of structural work that makes a doctor easier for AI systems to parse and verify. Here's what each part involved.

Part 1: Entity Architecture

This was the foundation everything else was built on. We created individual entity home pages for each doctor. Each page became the canonical digital identity for that surgeon. Dedicated URL. H1 with full professional name (using the full legal/professional naming format, which later proved helpful for disambiguation). Structured sections for credentials, specialty, procedures, hospital affiliation, and training history.

"Each page was built as a database record a machine could parse, not a brochure a human would scan."

The key design decision: each page was built as a database record a machine could parse, not a brochure a human would scan. Credentials were separated into distinct sections, not merged into narrative paragraphs. Procedure lists were tagged. Training lineage was explicit with named institutions and named mentors. We wrote the detailed guide for building these profiles separately on the blog, so I won't repeat the full specification here. But the principle is simple: give the machine the same data a hospital credentialing committee would want, in a format it can actually read.
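
To make that concrete, here is a minimal sketch of such a page skeleton. This is illustrative, not VIDA's production markup; the section IDs, headings, and procedure entries are hypothetical.

```html
<!-- Hypothetical skeleton of a doctor entity page. The point is structure:
     each credential, procedure, and training fact gets its own labeled section. -->
<main>
  <h1>Dr. Alejandro Quiroz Gutierrez</h1>

  <section id="credentials">
    <h2>Credentials</h2>
    <ul>
      <li>Cédula Profesional (SEP federal license)</li>
      <li>Board certification: CMCPER</li>
    </ul>
  </section>

  <section id="specialty">
    <h2>Specialty</h2>
    <p>Plastic surgery, deep plane facelift</p>
  </section>

  <section id="training">
    <h2>Training History</h2>
    <p>Fellowship under Dr. Bruce Connell</p>
  </section>

  <section id="affiliation">
    <h2>Hospital Affiliation</h2>
    <p>VIDA Wellness and Beauty Center, Tijuana</p>
  </section>
</main>
```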

Part 2: Physician Schema Markup

This is the part that sounds intimidating but is actually the most mechanical. We implemented multi-typed JSON-LD using Person plus Physician on each doctor page. In Schema.org, Physician is technically modeled under MedicalBusiness and local business semantics rather than purely as an individual clinician type. We chose to combine it with Person as a practical implementation decision for doctor-profile pages, which allowed us to leverage properties from both types.

The markup included hasCredential entries for Mexican credential types such as the Cédula Profesional and specialty credentials, along with sameAs, medicalSpecialty, hospitalAffiliation, alumniOf, knowsAbout, and availableService properties fully populated. The sameAs links pointed to LinkedIn, Doctoralia, CMCPER directory, and RealSelf.
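
As a sketch, the multi-typed JSON-LD for one doctor page might look like the block below. The URLs, @id values, and organization names are placeholders for illustration; the property names are the ones described above.

```html
<!-- Illustrative only: example.com URLs and "Example Medical School" are placeholders -->
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": ["Person", "Physician"],
  "@id": "https://example.com/doctors/alejandro-quiroz#physician",
  "name": "Dr. Alejandro Quiroz Gutierrez",
  "medicalSpecialty": "PlasticSurgery",
  "hasCredential": [
    {
      "@type": "EducationalOccupationalCredential",
      "credentialCategory": "Cédula Profesional",
      "recognizedBy": { "@type": "GovernmentOrganization", "name": "Secretaría de Educación Pública (SEP)" }
    },
    {
      "@type": "EducationalOccupationalCredential",
      "credentialCategory": "Board certification",
      "recognizedBy": { "@type": "Organization", "name": "CMCPER" }
    }
  ],
  "hospitalAffiliation": {
    "@type": "Hospital",
    "name": "VIDA Wellness and Beauty Center"
  },
  "alumniOf": { "@type": "EducationalOrganization", "name": "Example Medical School" },
  "knowsAbout": ["Deep plane facelift"],
  "availableService": { "@type": "MedicalProcedure", "name": "Deep plane facelift" },
  "sameAs": [
    "https://www.linkedin.com/in/example-quiroz",
    "https://www.doctoralia.com.mx/example-quiroz"
  ]
}
</script>
```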

None of this is exotic technology. Schema.org publishes the specification openly. Schema App has documented physician page best practices using these exact properties. The tooling exists. Almost nobody in Tijuana medical tourism was using it.

Part 3: Credential Verification Infrastructure

This one sounds obvious when you say it out loud. Saying "board certified" on a web page is a claim. Linking to the verification source is evidence.

We added verification links for each credential layer. Cédula Profesional verifiable at cedulaprofesional.sep.gob.mx. CMCPER board certification verifiable through CONACEM's specialist directory. Hospital affiliation documented with CSG (Consejo de Salubridad General) certification reference. Mexico's credential verification system is actually more centralized and digitally accessible than many Americans realize. The government registries exist. They just weren't being connected to the doctor pages in any structured way.
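
A minimal sketch of what that looks like on the page: each credential claim sits directly next to a link into the corresponding registry. The SEP URL is the one named above; the CONACEM link is indicative, since the exact directory path varies.

```html
<section id="credential-verification">
  <h2>Credential Verification</h2>
  <ul>
    <li>Cédula Profesional:
      <a href="https://cedulaprofesional.sep.gob.mx">verify in the SEP federal registry</a>
    </li>
    <li>CMCPER board certification:
      <a href="https://conacem.org.mx">verify through CONACEM's specialist directory</a>
    </li>
  </ul>
</section>
```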

We also built a US-to-Mexico credential equivalency section on each doctor page. American patients don't know what CMCPER means or how the Cédula Profesional relates to a US medical license. The equivalency table explains CMCPER in U.S. terms; it's not the same as the American Board of Plastic Surgery, but it serves a parallel board-certification role in Mexico. The Cédula Profesional is the federal registration credential issued by SEP that authorizes professional medical practice, tied to recognized training and degree completion. This translation helps both human patients and AI systems evaluating credential authority.
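
A simplified sketch of the equivalency section itself. The rows mirror the credentials described above; the wording is illustrative.

```html
<table>
  <caption>US-to-Mexico credential equivalency (simplified)</caption>
  <tr>
    <th>Mexican credential</th>
    <th>Closest US analog</th>
    <th>What it attests</th>
  </tr>
  <tr>
    <td>Cédula Profesional (SEP)</td>
    <td>US medical license</td>
    <td>Federal authorization to practice, tied to recognized training and degree completion</td>
  </tr>
  <tr>
    <td>CMCPER board certification</td>
    <td>American Board of Plastic Surgery (parallel role, not identical)</td>
    <td>Specialty board certification in plastic surgery in Mexico</td>
  </tr>
</table>
```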

Part 4: Cross-Platform Entity Consistency

This was the most tedious part and possibly the most important.

We audited every platform where each doctor appeared: website, Google Business Profile, LinkedIn, Doctoralia, RealSelf, facilitator sites, and hospital directories. We standardized name format, specialty description, credential claims, and contact information across all of them.

Doctoralia deserves special mention. It's the dominant doctor directory in Latin America, well structured, with consistent data formatting, and it plays roughly the role Healthgrades or Zocdoc plays for US physicians. Making sure each VIDA doctor's Doctoralia profile was complete and consistent was probably one of the highest-impact individual actions in this phase.

The uglier work was cleaning up facilitator sites. PlacidWay and similar platforms still had pages with outdated information. Surgeons who no longer practiced at VIDA. Credentials from years ago. Pricing from a different era. These outdated pages actively inject noise into the entity graph. Every inconsistency between what VIDA's site says and what a facilitator site says about the same doctor is a signal that reduces AI confidence. Our working view is that multiple corroborating sources improve the odds that AI systems will surface a doctor confidently. A contradictory source can undermine that threshold. Cleaning up facilitator sites was unglamorous. It was also necessary.

Part 5: Review Intelligence

This is where the project stopped being a technical exercise and started requiring me to change how real people work. Everything above lives in code and dashboards. This part lives in WhatsApp threads and coordinator habits.

We implemented a two-touch review system. Touch 1 happens at discharge. It captures the emotional, star-rating review while gratitude is high. Touch 2 happens at 3 to 4 weeks post-op via WhatsApp, when the patient has specific outcomes to describe. Review requests were sent to all eligible patients uniformly, not selectively based on satisfaction, and were structured to comply with Google and platform review policies. No gating. No filtering.

Touch 1 (discharge) is emotional but vague:

"Amazing experience! Everyone was so nice! Would definitely recommend!"

No doctor name. No procedure. No origin city.

Touch 2 (3 to 4 weeks) is entity-rich:

"Dr. Rodriguez Ruiz performed my gastric sleeve at VIDA. I flew from Phoenix. Down 22 lbs in the first month and my incisions are healing great."

Surgeon name. Procedure. Origin city. Outcome.

The 3 to 4 week timing was chosen based on how post-op communication actually works and when patients usually have something specific to say. For gastric sleeve patients, that's when initial weight loss is visible and acute discomfort has passed. For facelift patients, major swelling has subsided and they're starting to see the result. The most emotionally raw part of recovery is over, and patients can describe early results in concrete terms. Reviews collected at this moment are more positive and dramatically more specific.

What mattered operationally was this: WhatsApp is already the primary communication channel for Tijuana medical tourism. The coordinator is already checking in with the patient post-op via WhatsApp. Adding an entity-rich review request is one extra message in an existing conversation. We trained coordinators on specific scripts and, more importantly, on why entity-rich reviews matter. The coordinator who sends "I'm following up on behalf of Dr. Quiroz's team. How is your deep plane facelift recovery going?" and then follows with a review request gets a fundamentally different review than one who sends a generic "please leave us a review" message.

Part 6: Content Architecture

The final piece. We built search- and AI-oriented content around each doctor's specialty to reinforce entity associations.

For the facelift surgeons: deep plane facelift technique pages, recovery content, comparison content (facelift vs fillers, deep plane vs SMAS). This matters especially because "deep plane facelift" has grown sharply as a consumer search term since 2020, driven largely by US surgeons active on YouTube and TikTok. Google Trends data shows the term's search interest roughly tripled between 2020 and 2024. The Tijuana surgeons who trained under a pioneer of the technique were invisible to all of it.

For Dr. Rodriguez Ruiz: gastric sleeve content, GLP-1 vs surgery comparison pages, BMI candidacy content, and cost transparency pages. The GLP-1 comparison content is strategically critical because prompts like "should I get gastric sleeve or take Ozempic" are growing rapidly in volume. Being present in AI answers to these queries is high-value.

Each piece of content internally links to the doctor profiles and uses author attribution. Every article reinforces the entity association between the doctor, their specialty, and the practice.
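
One way to make that attribution machine-readable, sketched here with hypothetical URLs and a hypothetical headline: each article's structured data points its author at the surgeon's entity @id, so the association is explicit rather than inferred.

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Article",
  "headline": "Deep Plane vs SMAS: What the Difference Means for Your Facelift",
  "author": { "@id": "https://example.com/doctors/alejandro-quiroz#physician" },
  "about": { "@type": "MedicalProcedure", "name": "Deep plane facelift" }
}
</script>
```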

The Before and After: What Changed in AI Visibility

Before the implementation, in late 2024, we ran a set of test prompts across ChatGPT, Gemini, Perplexity, and Google AI Overviews. We used 40+ unique prompts, run across four platforms and multiple sessions, totaling over 500 prompt executions. Prompts like "best facelift surgeon in Tijuana," "best bariatric surgeon in Tijuana," "deep plane facelift Tijuana," "gastric sleeve Tijuana reviews." None of the four VIDA doctors appeared in any AI answer. Zero citation share.

After the implementation, approximately six months later, running the same prompt set, we observed meaningful change. Doctor names began appearing with correct credentials and specialty descriptions. AI answers that previously cited only facilitator sites began citing VIDA directly. Some AI answers began reflecting the surgeons' training relationship to Dr. Connell more accurately. Review content with specific doctor names and procedures began being referenced in AI summaries.

To give one concrete example: the prompt "best deep plane facelift surgeon in Tijuana" returned no mention of any VIDA surgeon across any platform in late 2024. By mid-2025, the same prompt returned Dr. Quiroz's name with correct credential context on multiple platforms, including references to his training lineage under Dr. Connell.

The mentions weren't uniform. Some prompts surfaced the doctors consistently. Others sporadically. Some platforms cited them more reliably than others. But compared with our earlier tests, the doctors appeared more often and with more accurate context.

"We cannot draw a clean causal line. But the direction from zero to present is unmistakable."

Here's the part that matters most for credibility.

We can't draw a clean causal line. AI models update. Training data changes. Retrieval systems evolve. Competitors move. We implemented multiple changes simultaneously (entity optimization, schema, reviews, content), so we can't isolate which single factor drove the most impact. I could package this into a clean narrative with a specific percentage increase and a neat attribution story. It would be dishonest.

What we can say: across our implementations so far, we've repeatedly observed that stronger entity structure tends to coincide with improved AI visibility. The pattern appears positive even though precise attribution remains difficult. These are our observations, not gospel. But they've held up.

The Numbers We Can Share

I promised honesty about what we can and can't claim. Here are the numbers I'm comfortable defending.

Citation visibility: From no appearances in our initial prompt set to recurring appearances in later tests. We don't publish the exact percentage because AI outputs vary by session, by user, by geography, and the number would be misleadingly precise. The meaningful metric is the shift from complete absence to consistent presence. That shift is real.

Review Specificity Score: We track the percentage of reviews that mention the doctor by name and the specific procedure, with patient origin city counted as an additional enrichment field. We scored reviews across Google, RealSelf, Doctoralia, and internal follow-up captures. A review counted as "specific" if it named the doctor and the procedure. Before implementing the two-touch system, across a sample of approximately 200 recent reviews, we estimated sub-20% contained this entity-rich information. Within 90 days, our internal tracking across a comparable sample put that number above 40%.

Review specificity before: ~20%. Review specificity after 90 days: 40%+.

Entity Consistency: This one was embarrassing once we saw it. Before implementation, each doctor had an average of 3 to 4 inconsistencies across the platforms we audited. Name format differences. Specialty descriptions that didn't match. Credential claims from outdated facilitator pages. After standardization, we removed most major inconsistencies across the main platforms we could update.

Content Architecture: Published search- and AI-oriented articles for each specialty vertical. Each article internally links to the doctor profiles, reinforcing entity associations. The content is built to answer the specific prompts patients actually ask AI systems.

These numbers aren't a marketing dashboard with lead attribution. They're entity-level measurements of discoverability. The kind of metrics that matter when the question isn't "how many clicks did we get" but "does the machine know we exist."

The "Invisible Surgeon, Visible Practice" Paradox

We noticed something early in the project: VIDA as a practice entity probably had some AI visibility before we started. The practice has years of review history, media mentions, and facilitator listings. AI systems were more likely to recognize the organization than the individual surgeons. A prompt might return something about "VIDA Wellness and Beauty Center in Tijuana offers plastic surgery and bariatric surgery." The brand had some presence in the machine's understanding.

But the surgeons didn't.

This is a problem because patients choosing a surgeon for a deep plane facelift or a gastric sleeve aren't choosing a brand. They're choosing a specific pair of hands. AI prompts reflect this. People ask "who is the best facelift surgeon in Tijuana," not "what is the best clinic in Tijuana." BrightLocal's 2024 research suggests business websites are a major source in ChatGPT local search outputs. If your website treats doctors as supporting characters in the practice's story rather than as distinct entities, the machine will do the same.

The implementation essentially unbundled the practice entity into individual surgeon entities while keeping the practice as the institutional anchor. Each surgeon's entity page links back to VIDA through hospitalAffiliation. VIDA's pages link to the surgeons. The result is a bidirectional entity relationship that strengthens both. The practice gives institutional credibility to the surgeon. The surgeon gives procedural specificity to the practice.
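
In markup terms, that bidirectional relationship is two small declarations that reference each other's @id values. The URLs are hypothetical, and typing the practice as Hospital versus MedicalClinic is a modeling choice.

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@graph": [
    {
      "@type": ["Person", "Physician"],
      "@id": "https://example.com/doctors/alejandro-quiroz#physician",
      "name": "Dr. Alejandro Quiroz Gutierrez",
      "hospitalAffiliation": { "@id": "https://example.com/#clinic" }
    },
    {
      "@type": "MedicalClinic",
      "@id": "https://example.com/#clinic",
      "name": "VIDA Wellness and Beauty Center",
      "employee": { "@id": "https://example.com/doctors/alejandro-quiroz#physician" }
    }
  ]
}
</script>
```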

What We Learned That We Didn't Expect

Six months of building this surfaced insights I didn't anticipate. Some of these challenged assumptions we started with.

In this implementation, schema alone didn't appear to move outcomes. We expected structured data to be the primary lever, but in isolation it produced limited visible change. What appeared to compound was the combination: schema plus directory consistency plus review specificity. None of the individual pieces seemed to do much alone; together, the effect was clear. This aligns with what we've come to think of as a confidence threshold: AI systems appear more willing to surface a recommendation once multiple authoritative sources corroborate the same doctor-level facts.

The Connell lineage mattered more than we expected. The training relationship with a recognized pioneer created an entity association that AI seemed to follow. Dr. Bruce Connell exists as a well-documented surgeon in medical literature and AI training data. Explicitly documenting the training relationship may have helped systems connect these surgeons to a recognized figure in their specialty. You could think of this as borrowing context from a better-documented professional lineage. It's also something most Tijuana practices can't replicate, which makes it a genuine early visibility advantage.

The two-surname convention is a real advantage. Mexican naming convention provides better entity disambiguation than English naming. "Dr. Juan Carlos Fuentes Gutierrez" is nearly unique globally. "Dr. John Smith" is essentially unresolvable without extensive additional context. The four-part Mexican name structure functions almost like a natural identifier: longer, more distinctive full names reduce ambiguity across platforms and search systems, provided the name is kept consistent everywhere.

In our case, Bing indexing appeared to be a meaningful unlock for ChatGPT visibility. We were focused on Google. When we set up Bing Webmaster Tools and submitted the sitemap, ChatGPT mentions appeared to increase within weeks. This makes directional sense. ChatGPT's web browsing features have used Bing's search index for retrieval in certain modes, though not all ChatGPT responses depend on live Bing retrieval in the same way. Bing has negligible market share as a standalone search engine, which is why almost no practice thinks about it. But for ChatGPT experiences that use web retrieval, Bing indexing quality may directly affect what ChatGPT can find and surface. This is a simple, low-cost step worth taking, especially if your pages aren't well indexed outside Google. Takes 15 minutes.

The coordinator is the bottleneck and the solution. Review quality improved dramatically once coordinators had specific scripts and understood why entity-rich reviews mattered. The operational change was small. One extra WhatsApp message at 3 to 4 weeks post-op. The impact was disproportionate. But without coordinator buy-in, the two-touch system produces nothing. The coordinator who doesn't understand what AI does with reviews will send a generic request and get a generic review.

Facilitator sites were actively working against us. Outdated information on PlacidWay and similar platforms created entity conflicts that we believe degraded AI confidence. Cleaning these up was the least exciting part of the entire project. It was also one of the most necessary.

Operator note: If you do nothing else after reading this, set up Bing Webmaster Tools and submit your sitemap. It takes 15 minutes and it may improve the odds that ChatGPT can retrieve your pages through web search.

What This Means for Your Practice

I realize not every practice is VIDA. Not every surgeon trained under a pioneer of their specialty. But the structural problems we found are universal across Tijuana's medical tourism market, and based on what we've seen, entity optimization still appears uncommon there.

If you're a single-surgeon practice, the implementation is actually simpler. One entity to optimize. One set of platforms to align. One review strategy to implement. You don't need the multi-doctor complexity we navigated at VIDA.

If you're a multi-specialty practice, the complexity multiplies but the framework is the same. Each doctor needs an entity home, consistent cross-platform presence, and a review collection system that captures entity-rich feedback. The key operational challenge is routing reviews to individual surgeon entities rather than letting everything pool under the practice brand.

If you're a dental practice, the credential system is different (CONACEM doesn't certify dentists the same way) but the entity optimization principles are identical. Name consistency, structured data, verification links, review specificity. The mechanics don't change by specialty.

The broader context makes this a priority. BrightEdge reported significant growth in AI-referred traffic to healthcare sites during 2024, though definitions and measurement methods vary across studies. Concurrently, Google AI Overviews expanded to more than 100 countries during 2024 and 2025. This shift is already underway. The practices that build entity infrastructure now may have an outsized advantage as these systems become the primary way patients research and choose providers.

The four VIDA doctors aren't more qualified today than they were in late 2024. Their credentials haven't changed. Their experience hasn't changed. Their surgical skills haven't changed.

What changed is how their credentials, experience, and skills are represented in the systems patients are increasingly using to choose a doctor.

The problem was never the doctors. It was the page.

Frequently Asked Questions

How long does it take for AI visibility changes to show results?

In our experience at VIDA, we began observing AI mentions approximately 4 to 6 months after the initial entity optimization. But this isn't a clean timeline. Some changes, like the Bing Webmaster Tools setup, appeared to take effect within weeks; schema implementation on its own showed no visible effect for much longer. The honest answer is that AI visibility builds gradually as multiple signals compound, and there's no guaranteed timeline.

Can a single-doctor practice benefit from entity optimization, or is this only for large practices?

A single-doctor practice may actually benefit more. You have one entity to optimize, one set of platforms to audit, and one review strategy to implement. The VIDA project was complex precisely because we had to build distinct entity profiles for four surgeons within one practice brand. A solo practitioner can execute the same framework with less coordination overhead.

What is the most important first step for a practice that wants to improve AI visibility?

Open ChatGPT, Gemini, and Perplexity. Type your surgeon's full name. Type "best [specialty] in [city]." See what comes back. That diagnostic takes five minutes and tells you exactly where you stand. If the answer is nothing, you know the scope of the problem. If the answer is wrong or outdated, you know the nature of the problem. The audit always comes first.

Does physician schema markup alone improve AI recommendations?

In our observation, no. Schema without consistent cross-platform entity data produced minimal visible change. Schema is one layer of a multi-layer system. It makes your credentials machine-readable, but if those credentials are contradicted by outdated facilitator listings or diluted by vague reviews, the schema alone doesn't appear to be enough. The combination is what matters.

How do you measure AI citation share for a doctor?

We run a set of specialty-specific and location-specific prompts across ChatGPT, Gemini, Perplexity, and Google AI Overviews. We record whether the doctor appears, with what credentials, and whether the source is the practice website, a directory, or a facilitator site. We repeat this monthly. It's manual, it varies by session, and it isn't as clean as Google Analytics. But it's the best measurement available right now.

Why were experienced surgeons invisible to AI despite having strong credentials?

Because AI systems don't read résumés. They extract structured entity data from web pages, directories, and reviews. A surgeon with strong real-world credentials but weak web representation is far less legible to machines than they should be. Digital legibility isn't the same as clinical qualification. It's a formatting problem, not a quality problem.

Is entity optimization a one-time project or an ongoing process?

The initial build (entity pages, schema, cross-platform consistency) is a project with a clear endpoint. The ongoing work is review collection, content creation, and monitoring for entity drift: platforms updating your information, new directories appearing, facilitator sites publishing outdated data. Think of it as building the house and then maintaining it. The foundation is a one-time investment. The upkeep is continuous but lighter.

What is the role of patient reviews in AI visibility for doctors?

Reviews serve as independent corroboration of the claims on your website. When your website says "Dr. Quiroz specializes in deep plane facelifts" and multiple reviews mention "Dr. Quiroz" and "deep plane facelift" in natural language, that consistency across sources builds entity confidence. Generic reviews ("great experience!") provide star ratings but zero entity data. Specific reviews (doctor name, procedure, origin city, outcome) function as independent entity verification that AI systems can extract and cross-reference.

Want to see where your clinic stands?

We test 20+ real patient prompts across ChatGPT, Gemini, Claude, and Perplexity. Full report in 48 hours. Free.

Get your free AI audit