Education

AI Visibility for Tijuana Surgeons: How to Get Found When Patients Ask ChatGPT

Emilio Alcolea · April 29, 2026
    The 30-second take. If you run a clinic in Tijuana that sees US patients, the question is no longer whether you rank on Google. The question is whether ChatGPT, Gemini, Claude, Perplexity, and Google AI Overviews can find you, verify your credentials, connect you to the procedure the patient is asking about, and cite a source for what they say. Most surgeons in this city fail at least three of those five tests, and the failure point is rarely surgical skill. The failure point is digital evidence: how your bio reads, how your procedure pages are structured, what reviews mention, what third parties cite, and whether your information stays consistent across platforms. AI does not pick the best surgeon. It picks the surgeon it can understand, verify, and cite. According to the KFF Health Tracking Poll (2025), roughly one in six US adults already turn to AI tools for health information, and that share is rising fastest among adults under 50. This article is what I learned watching one clinic work through that gap, and what every surgeon in Tijuana should run through before the next consultation walks in.

    1. The moment a surgeon realizes ChatGPT is part of the consultation

    A patient finishes the consultation. She has the printed quote, she likes the coordinator, the result photos look good. On the way out she says, "I'm going to check with ChatGPT and get back to you."

    Two years ago that sentence did not exist. Patients said "let me think about it," and that meant the spouse, the budget, the calendar, the spouse again. Now it can mean a model.

    After she leaves, you open ChatGPT on your phone and type the question she is about to type: "Who is the best plastic surgeon in Tijuana for a deep plane facelift?" Three names appear in the answer, and you know all three. They are good surgeons but they are not better than the one she just consulted with. One did fewer cases than your lead surgeon last year, another graduated after him, and the third works five minutes from your clinic.

    Your name is not there.

    That is the moment the problem stops being traffic. You had the lead. You had the consultation. You had the quote in her hand. Then a model built a shortlist without you in it.

    The lead did not disappear. She kept researching. She just researched without you in the room.

    I previously ran marketing at VIDA Wellness & Beauty Center, a multi-specialty clinic in Tijuana that does plastic surgery, bariatrics, dental, and hair, and serves mostly US patients crossing the border from San Diego, Los Angeles, and Phoenix. I have been doing paid acquisition in this market for years. The patient behavior changed in 2024 and accelerated through 2025. By early 2026, coordinators were hearing the AI second-opinion conversation often enough that it became impossible to treat it as an edge case. We built a system at VIDA to make sure AI could answer those second-opinion questions correctly. What follows is the operator's version of how. The thing your competitor probably already started.

    2. What AI visibility means for a Tijuana surgeon

    AI visibility is not "ranking in ChatGPT." That phrase is too soft.

    AI visibility means a model can name you, connect you to the procedure the patient asked about, explain why you are relevant, and point to sources that make the answer defensible. The last part is what most surgeons miss. A model may know your name and still avoid recommending you if the public evidence is thin, inconsistent, or trapped in places it cannot read.

    The difference on the ground works like this. If a patient Googles "plastic surgeon Tijuana facelift," ten blue links appear and whether the patient clicks yours depends on title, snippet, and trust signals on the SERP itself. If the same patient asks ChatGPT the same thing, the model produces a synthesized answer naming three to five surgeons by name, with reasoning, sometimes with citations. If you are not in that synthesized answer, you do not get the click, you do not get the second opinion, you do not get the booking, and the patient never knows you exist for that procedure.

    For a Tijuana surgeon serving US patients, this is the harder problem because trust friction is higher. A San Diego patient considering a domestic surgeon does not need a second opinion before booking a consultation. The same patient considering Tijuana wants to validate. Crossing the border, going under anesthesia in another country, recovering in a hotel near the clinic, those decisions sit inside a higher anxiety budget. AI is the new cheap, fast, anonymous validator. The Princeton GEO study (Aggarwal et al., 2024) reported that the right structural changes to a domain can lift visibility in generative answers by up to 40%. If AI cannot validate you, the friction tips against you, and the lead is won or lost before your coordinator ever gets the callback.

    3. Why US patients ask AI before they call

    When a US patient asks ChatGPT about surgery in Tijuana, she is usually not looking for inspiration. She is trying to lower risk. The real questions sitting underneath the surface prompt are:

    • Is this clinic actually safe?
    • Is this doctor really certified, and what does that even mean in Mexico?
    • Is the price too low for a reason?
    • Will I be okay crossing the border for this procedure specifically?
    • Is there someone better than the surgeon I just talked to?
    • Can I validate any of this without another coordinator selling me?

    That is the actual job AI is doing for the patient: six concrete validations she will not call your front desk to ask about. If you understand those six, you understand what the model needs to be able to say about you. It is not "we are the best." It is procedure-specific safety data, credential context the US patient can verify, transparent pricing logic, infrastructure trust signals, an honest answer to the comparison question, and a way to check all of it without another coordinator selling her.

    Most clinic websites in Tijuana do not address any of those six in a machine-readable way. That is the gap, and it is the same gap a careful US patient feels when she lands on a clinic homepage written in marketing copy and cannot find a single answer to the question she actually has.

    4. Why do good surgeons not automatically appear in ChatGPT?

    This is the part doctors find hardest to accept.

    AI does not recommend the best surgeon. It recommends the surgeon it can understand, verify, and cite.

    The answer is built from the public web, indexed databases, third-party directories, structured data on your site, and review platforms, then synthesized from whatever can be extracted with confidence. The operating room is not in that input set, and case volume does not get weighted the way a referring physician would weight it.

    The model cannot respect what it cannot read.

    What that means concretely is uncomfortable.

    A surgeon with eighteen years of experience whose credentials live inside an image scan on a single bio page is, to the model, less verifiable than a surgeon with six years whose credentials are in plain HTML, schema-tagged, with sameAs links to the certifying body and three medical directory listings. A clinic with five hundred reviews that say "amazing experience, would come back" carries less informational weight than a clinic with two hundred reviews that mention the doctor's name, the specific procedure, the city the patient flew in from, and how recovery went at six months. And a bio page describing someone as "one of Tijuana's leading plastic surgeons" gives the model nothing to cite, while a bio listing the training hospital, the residency program, the CMCPER certification number, the years in practice as a number, and the approximate share of patients traveling from the US gives the model a stack of independently verifiable claims.

    Volume of content does not solve this. Specificity does. The kind of specificity that helps AI is the same kind that helps a careful US patient, and the two audiences happen to want the same thing.

    The surgeons who appear first in AI answers in Tijuana right now are not always the most experienced. They are the most legible.

    5. The five things AI needs before it can recommend you

    This is where most clinics break, and where the work pays off fastest.

    If your site fails any of these five tests, AI will be slow to verify you and slower to recommend you over a competitor that passes. None of this is advanced GEO. It is foundational evidence work. Across the Tijuana clinic sites we reviewed in 2025, most failed several of these tests at once. That is why the gap is so addressable.

    1. Clear doctor entity

      The model needs to know you are one doctor, not three messy versions of the same name. Name spelled the same way everywhere. Specialty stated explicitly. Clinic named, located, addressed. Years in practice as a number. Memberships listed by full name before any abbreviation. Credentials in text, not in an image. Photos with alt text. A sameAs link to your LinkedIn, your CMCPER profile if available, and your hospital affiliation page.

      If your site says "Dr. Maria Gonzalez," your LinkedIn says "Maria Gonzalez MD," and your Healthgrades profile says "Dr M. Gonzalez," the question of whether those are the same person has to be resolved at retrieval time. Sometimes the resolution is correct. Sometimes the entity gets split into two or three lower-confidence profiles. You lose either way.
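
      For reference, here is roughly what a clean doctor entity looks like as Schema.org JSON-LD, placed in a script tag of type application/ld+json on the bio page. This is a minimal sketch, typing the doctor as both Physician and Person so the specialty and employment properties both validate; every name, URL, and registry ID below is a placeholder, not a real profile.

```json
{
  "@context": "https://schema.org",
  "@type": ["Physician", "Person"],
  "name": "Dr. Maria Gonzalez",
  "medicalSpecialty": "PlasticSurgery",
  "description": "Board-certified plastic surgeon, 18 years in practice, approximately 70% US patients.",
  "worksFor": {
    "@type": "MedicalOrganization",
    "name": "Example Clinic",
    "address": {
      "@type": "PostalAddress",
      "addressLocality": "Tijuana",
      "addressRegion": "Baja California",
      "addressCountry": "MX"
    }
  },
  "sameAs": [
    "https://www.linkedin.com/in/placeholder-maria-gonzalez",
    "https://cmcper-registry.example/cirujanos/12345",
    "https://hospital.example/medical-staff/maria-gonzalez"
  ]
}
```

      The sameAs array is what collapses the three name variants from the example above into one entity.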

    2. Procedure-specific pages

      Every procedure you offer needs its own URL with structured content. Not "plastic surgery." Facelift, deep plane facelift, mommy makeover, tummy tuck, BBL, breast augmentation, rhinoplasty, each as a dedicated page that answers the questions patients actually ask before that procedure: what is included, what the recovery timeline looks like, what the price range is, who is a candidate, who is not, what the safety protocols are, what hospital the procedure is performed in.

      If your only procedure content is a list on the homepage, AI cannot connect you to a specific search. A patient asks "best deep plane facelift Tijuana" and the response needs a page that talks specifically about deep plane facelift. Generic plastic surgery pages will not be retrieved for that query.
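
      The same structure can be made machine-readable on each procedure page. A minimal sketch using Schema.org's MedicalWebPage and SurgicalProcedure types; every value below is a placeholder to swap for your own:

```json
{
  "@context": "https://schema.org",
  "@type": "MedicalWebPage",
  "about": {
    "@type": "SurgicalProcedure",
    "name": "Deep plane facelift",
    "bodyLocation": "Face",
    "howPerformed": "Performed under general anesthesia at [hospital name].",
    "followup": "Sutures out around day 7; most visible swelling resolved by week 3."
  },
  "lastReviewed": "2026-04-01",
  "reviewedBy": {
    "@type": "Physician",
    "name": "Dr. Maria Gonzalez"
  }
}
```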

    3. Third-party proof

      This is the one most clinics underweight. Models cite other websites talking about you more often than they cite your own website talking about yourself. AirOps (2025) reported that across grounded LLM answers, the majority of cited pages come from third-party sources rather than first-party brand pages. Medical directories that list you. The official board page where your certification appears. RealSelf if you are on it. Google Business Profile. Hospital affiliations on the hospital's own domain. Press mentions. Podcasts. Conference talks.

      If every claim about you exists only on your own domain, the cross-reference set is empty and confidence in the answer drops. The output favors the surgeon with three independent sources over the surgeon with one source even if the one-source surgeon has better numbers.

    4. Review signals that connect doctor, procedure, and outcome

      Generic reviews create generic trust.

      A review that says "great experience, lovely staff" is signal-thin. A review that says "I had my mommy makeover with Dr. Gonzalez in October, traveled from Dallas, recovered for seven days at the Quartz Hotel in Zona Rio, scar healing well at six months" carries five extractable facts the model can use: doctor, procedure, patient origin, recovery location, outcome timeline.

      When AI answers are built, they draw on these specific reviews. If your review base is generic, those reviews are not extractable as evidence for procedure-specific recommendations. You can ask for better reviews. Most clinics never do, because they are still asking for stars.

    5. Consistent pricing, safety, and credential information

      Price ranges should appear in text on procedure pages. Even if the final price is personalized, a range tells the model what bracket you operate in. Safety protocols, anesthesia provider, hospital, accreditation status when applicable, ICU access, all of it in text on the page where a patient would expect to find it. Credentials with verification links. The same numbers everywhere on the site and on your third-party listings.

      The killer is inconsistency. If your homepage says "$5,000 to $8,000," your procedure page says "starting at $4,500," and a third-party directory says "$6,500 average," the answer has to resolve to one number. Sometimes the lowest gets cited. Sometimes the answer averages across them. Sometimes all three appear side by side and the patient walks away confused. Yext (2025) found that price inconsistency across listings is one of the top three drivers of mismatched AI answers in healthcare verticals.

    A clinic that gets these five right does not need to do anything fancy on top. Schema markup, llms.txt directives, entity architecture, citation building, all of that amplifies these five fundamentals. None of it substitutes for them.

    6. The Tijuana problem: trust before the border

    There is a layer above the five fundamentals that only matters if you serve US patients crossing into Tijuana. Domestic Mexican patients do not have this layer. Domestic US surgeons do not have this layer. You do.

    The Tijuana trust problem is geography working against you while the patient still wants to believe you. She does not know San Ysidro from Otay. She does not know what Zona Rio looks like at night. She does not know whether Hospital Angeles, CER, or Hospital de la Mujer is a credible operating environment. She does not know whether the recovery house her coordinator mentioned is safe. She does not know whether the anesthesiologist is certified by anyone she can verify.

    AI Overview, ChatGPT, and Gemini construct a trust answer from whatever public information exists about Tijuana medical tourism. If that public information is mostly cautionary news articles from 2018 plus a few recent travel blogs, responses lean cautious. If that public information includes your clinic's safety page, your hospital's accreditation status, your anesthesia provider's credentials, your recovery infrastructure, and sample patient timelines, responses lean confident. Patients Beyond Borders (2024) reports that roughly 1.2 million Americans cross into Mexico annually for medical care, with Tijuana as the largest entry point, which makes this trust gap the daily structural condition of the market, not a hypothetical.

    You cannot fix Tijuana's reputation alone. You can make your part of it dense enough with verifiable, specific, US-readable trust signals that AI describes you as the credible exception. This is the section most clinics here miss completely. They write their websites for Mexican patients and assume the US patient will figure out the rest. The US patient does not figure it out. She asks AI, and AI answers based on what you wrote.

    The clinics that win the AI second-opinion conversation in this city are the ones that translate Mexican medical proof into US patient context. CMCPER becomes "Mexico's national board certification body for plastic, aesthetic, and reconstructive surgery, with the official registry linked." Cedula profesional becomes "the federal medical license number issued by the Mexican Ministry of Education." Hospital information becomes specific and citable: name, location, accreditation status when applicable, anesthesia setup, emergency protocols, where the patient recovers. The patient does not have to do this translation herself. AI can do it, because you wrote it down.

    This is one of the most valuable sections of any Tijuana clinic site. It is also the section nobody has.

    7. The 30-minute self-test: can the model find you right now?

    Before you spend a dollar on AI visibility work, run this test yourself. It costs nothing, it takes thirty minutes, and it tells you how much of a problem you actually have.

    Open these five tools in separate tabs:

    • ChatGPT (chat.openai.com)
    • Gemini (gemini.google.com)
    • Claude (claude.ai)
    • Perplexity (perplexity.ai)
    • Google search with AI Overviews enabled

    Run it in the cleanest environment you can. New chat, no extra context, no leading prompts, the same prompt across tools. Then ask these:

    The patient prompts

    • Who are the best plastic surgeons in Tijuana for a deep plane facelift?
    • Which bariatric surgeon in Tijuana is safest for US patients?
    • What should I know before getting dental implants in Tijuana?
    • Compare Dr. [Your Name] with other surgeons in Tijuana for [your main procedure].
    • Is Dr. [Your Name] in Tijuana a good option for [procedure]?

    What to record per prompt

    • Did your name appear at all
    • If yes, what position
    • What did the model say about you
    • Did it cite sources
    • Were the sources accurate
    • Did the model name competitors
    • Were the competitors accurate
    • Did the model get your specialty right
    • Did the model get your clinic right
    • Did the model get your pricing right

    How to read the results

    There are six possible outcomes for each prompt, worst to best:

    1. Not mentioned at all.
    2. Mentioned with wrong information.
    3. Mentioned but not recommended.
    4. Recommended but not cited.
    5. Recommended and cited, but with a thin or single source.
    6. Recommended, cited by multiple independent sources, with accurate procedure context.

    Most Tijuana surgeons score between level 1 and level 3 across most prompts. Some score level 4 on their main procedure. Almost none score level 5 or 6 across the board. The gap between where you are and where you need to be is the work.
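
    If you want the baseline to stay comparable month over month, record each run in a spreadsheet, or in something as simple as the sketch below. The field names and the level() mapping are my own shorthand for the rubric above, not any standard (Python 3.10+ for the union syntax):

```python
import csv
from dataclasses import dataclass, asdict

# One row per (platform, prompt) pair from the self-test.
# Field names are illustrative, not a standard schema.
@dataclass
class PromptResult:
    date: str                  # e.g. "2026-03-01"
    platform: str              # "chatgpt", "gemini", "claude", "perplexity", "aio"
    prompt: str
    mentioned: bool
    position: int | None       # 1 = first name in the answer, None if absent
    info_correct: bool | None  # specialty, clinic, pricing all accurate
    recommended: bool
    independent_sources: int   # third-party citations next to your name

    def level(self) -> int:
        """Map a recorded result onto the 1-6 rubric above."""
        if not self.mentioned:
            return 1
        if self.info_correct is False:
            return 2
        if not self.recommended:
            return 3
        if self.independent_sources == 0:
            return 4
        if self.independent_sources == 1:
            return 5
        return 6

def append_results(path: str, results: list[PromptResult]) -> None:
    """Append this month's rows to a running CSV baseline."""
    with open(path, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=list(asdict(results[0]).keys()))
        if f.tell() == 0:  # new file: write the header once
            writer.writeheader()
        for r in results:
            writer.writerow(asdict(r))
```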

    Run this test on a Sunday morning before you talk to any agency, including mine. The output of the test is your real baseline. Anyone who tells you what to do without you having run this test is selling.

    Want this run for you in 24 hours?

    The Free AI Visibility Scorecard is a structured version of the 30-minute self-test. We run the prompts across all five platforms and send you the report. No credit card.

    8. What you can fix this week

    After running the test, most surgeons want to know what they can do that does not require an agency. The honest answer is that there are six things you can do this week that move the needle without anyone's help.

    Start with the boring work.

    • Update your bio. Move every credential out of any image into plain text. Spell your name the same way everywhere. Add sameAs links to your LinkedIn, your CMCPER profile, and your hospital affiliation page. Use the full credential name before any abbreviation the first time it appears. Write years of experience as a number, not as "extensive experience."
    • List your specialties using the words patients actually use. If you do facelifts, the word "facelift" should appear on your site. Not "facial rejuvenation," not "anti-aging procedures." Use the words that show up in the AI prompt.
    • Audit your Google Business Profile. Photos current. Hours correct. Categories accurate. Description rewritten with procedure names. Owner replies on every review for the last twelve months.
    • Ask for better reviews. Take your last ten happy patients and ask them to add detail to a review they already left: doctor name, specific procedure, recovery time, outcome at six months. Most will do it if you ask in person.
    • Write one FAQ page for your top procedure. Use the real questions you hear in consults. Answer them in plain text, two to four sentences each. This single page often surfaces in AI answers within weeks. A markup sketch follows this list.
    • Search your own name. Look for variations and misspellings. Fix any directory listing that has your name wrong. Claim any profile you did not know existed and update it to match the canonical version.
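
    For the FAQ page in particular, the same plain-text answers can be mirrored in FAQPage markup so they are extractable in one pass. A minimal sketch with placeholder question and answer text:

```json
{
  "@context": "https://schema.org",
  "@type": "FAQPage",
  "mainEntity": [
    {
      "@type": "Question",
      "name": "How long do I need to stay in Tijuana after a deep plane facelift?",
      "acceptedAnswer": {
        "@type": "Answer",
        "text": "Plan for 7 to 10 days near the clinic so your surgeon can check healing and remove sutures before you travel."
      }
    }
  ]
}
```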

    None of this is advanced. That is exactly why it is embarrassing how often it is missing, and exactly why a competitor with no surgical advantage can beat you with it.

    9. What requires deeper structural work?

    After the floor comes the structural work. This is where most clinics get stuck because nobody owns it. The surgeon thinks marketing has it. Marketing thinks the web developer has it. The developer thinks the copywriter has it. So nothing connects, the work fragments, and twelve months later the site still looks like it did before.

    The structural work compounds only when it is done together:

    • Schema markup. Each page tagged with the right Schema.org type. Physician, MedicalOrganization, MedicalProcedure, Review, FAQPage. Done correctly so the model can extract structured facts in one pass.
    • Entity architecture. The graph that connects doctor to clinic to procedure to credential to hospital to certifying body, in machine-readable form, consistent across the site.
    • llms.txt directives. The newer file format for telling AI crawlers what is canonical, what is current, and what to ignore. Most clinics do not have one yet. A sketch follows this list.
    • Physician profile depth. Long-form, narrative, third-person profiles for each surgeon, with credential context, training history, case volume, specialty positioning, and philosophy of care. These pages compound.
    • Procedure page depth. Each procedure as a content cluster, not a single page. Pillar plus FAQs plus comparison content plus recovery content.
    • Internal linking. Every page connected to every other relevant page in a way that reinforces the entity graph.
    • Third-party citation building. Active outreach to medical directories, press, podcasts, conference circuits. Earned mentions matter more than purchased ones.
    • Reviews system. Templates, prompts, and flows that produce specific, procedure-tagged reviews at volume.
    • Monthly prompt tracking. The same patient prompts run on the same five platforms every month, scored, charted, attributed.
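
    For the llms.txt item above, here is a minimal sketch of what the file can look like. llms.txt is a proposed convention (llmstxt.org): a markdown file served at /llms.txt that hands AI crawlers a curated map of your canonical pages. Every URL below is a placeholder:

```text
# Example Clinic Tijuana

> Multi-specialty surgical clinic in Tijuana serving US patients.
> Canonical pages for surgeon credentials, procedures, pricing, and safety.

## Surgeons
- [Dr. Maria Gonzalez](https://clinic.example/surgeons/maria-gonzalez): board-certified plastic surgeon, CMCPER registry linked on page

## Procedures
- [Deep plane facelift](https://clinic.example/procedures/deep-plane-facelift): candidacy, recovery timeline, price range

## Safety
- [Safety protocols](https://clinic.example/safety): hospital, anesthesia provider, emergency transfer plan
```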

    This is what the GEO Setup phase of our work installs, and the part that takes eight to twelve weeks to do correctly. Done in pieces, the work breaks. Done together, it holds.

    Ready for the deep diagnostic?

    The Cross-Border GEO Audit is $997, delivered in 3 business days, and credited toward GEO Setup if you continue within 30 days.

    10. How Tersefy fits

    I do not like hiding the offer behind a "book a call" button, so here is the actual sequence and what each step costs.

    • Diagnosis: Free AI Visibility Scorecard. A structured, free way to see whether AI systems can find, understand, and mention your clinic across patient-style prompts.
    • Deep diagnostic: Cross-Border GEO Audit. A $997 audit delivered in 3 business days that reviews prompts, competitors, source gaps, and technical visibility issues.
    • Foundation: GEO Setup. The one-time setup that fixes the core entity, schema, profile, and content structure AI systems need before they can understand the clinic.
    • Ongoing: Tersefy AI. Monthly content, citation building, review signal improvement, and prompt tracking so visibility can compound over time.

    The starting point for any surgeon is the Scorecard. If we cannot show you something useful from a free diagnostic, we cannot help you on the bigger work either. The Scorecard exists to make the cheap diagnostic real and the expensive engagement honest.

    11. What the VIDA case study showed

    I previously ran marketing at VIDA Wellness & Beauty Center, and VIDA served as the operator-led reference implementation when we were building the Tersefy methodology. We documented what changed.

    In a measured period across a four-surgeon financial cohort, VIDA reduced ad spend by approximately 50% and saw approximately 13% more surgeries, while several surgeons moved from being invisible in AI answers to being named for procedure-specific patient prompts. Those numbers are operator-reported, not independently audited. The cohort is one clinic. Results are not a guarantee for any other clinic and they should not be read as one.

    What the VIDA case did show was that when AI visibility, response speed, review specificity, and funnel operations are addressed together, ad-driven acquisition can become less of the load and organic AI-driven inquiries can pick up the rest. In our own tracking, AI-validated patients also behaved differently once they reached the consultation: they asked fewer basic trust questions, compared less on price, and moved faster once the coordinator followed up. They had already done their validation. They were not shopping. They were ready.

    The full VIDA breakdown lives at /case-studies/vida/. The numbers there are conservative, defensible, and dated. The methodology is the same one available to any clinic that goes through the Scorecard, Audit, and GEO Setup sequence. None of it is proprietary to VIDA, and none of it is unrepeatable, but it does take the work.

    12. How to measure progress monthly

    If you start doing this work, with us or anyone else, here is the scoreboard. The monthly report should answer five questions, in order:

    1. Are we being named?
    2. Are we being named for the right procedures?
    3. Are the sources the model cites correct?
    4. Are competitors gaining ground or losing ground inside our prompts?
    5. Are those mentions producing actual consultation requests?

    The metrics behind those five questions are the same ones we use internally and publish to clients:

    • Brand Mention Rate across a defined set of patient prompts on the five platforms. A computation sketch follows this list.
    • Position when mentioned. When your name appears, where in the answer.
    • Source citations. How many independent third-party sources are cited next to your name.
    • Source URLs. Which specific pages are doing the work.
    • Competitor mentions. Who else is showing up in your prompts.
    • Information accuracy. When AI describes you, is it correct.
    • AIO appearances. Specifically Google AI Overview triggering rate for your queries.
    • Scorecard or audit submission rate, if you run one.
    • Inbound consultation request rate, attributed where possible.
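
    Brand Mention Rate, the first metric on that list, is just the share of runs in which you are named. A minimal sketch that reuses the PromptResult record from section 7 (again my own shorthand, not a standard):

```python
# Reuses the PromptResult dataclass sketched in section 7.

def brand_mention_rate(results: list[PromptResult]) -> float:
    """Share of (platform, prompt) runs in which the clinic was named."""
    if not results:
        return 0.0
    return sum(r.mentioned for r in results) / len(results)

def by_platform(results: list[PromptResult]) -> dict[str, float]:
    """Mention rate per platform, to see where the gap is widest."""
    rates: dict[str, float] = {}
    for platform in {r.platform for r in results}:
        subset = [r for r in results if r.platform == platform]
        rates[platform] = brand_mention_rate(subset)
    return rates
```

    Tracked monthly against a fixed prompt set, these two numbers are enough to chart whether the work is moving.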

    A monthly report against this scoreboard tells you whether the work is moving. If after ninety days none of these numbers have improved, the work is wrong, the prompts are wrong, or the foundation has not been installed correctly. We have seen all three. They are all fixable, but only if you are measuring.

    The clinics that measure this monthly will see the shift early. The ones that do not will blame Meta, Google, coordinators, or pricing before realizing the patient never saw them in the first place.

    Quick answers

    How do patients actually find a Tijuana surgeon through ChatGPT?

    They start with a procedure-plus-city query, then narrow with safety, credentials, and price questions. They almost never type a surgeon's name first. They build a shortlist from the model's first answer and ask follow-ups about the names that surfaced. If you are not in the first answer, you almost never make the shortlist.

    Why do less experienced surgeons sometimes appear before better-credentialed surgeons?

    Because the AI answer is built from digital evidence, not surgical evidence. A less experienced surgeon with a clean entity, structured procedure pages, third-party citations, and procedure-specific reviews surfaces ahead of a more experienced surgeon whose credentials sit inside an image and whose reviews are generic.

    Is AI visibility different from SEO?

    Yes. SEO helps a page rank on a results page. AI visibility helps a doctor get named inside a synthesized answer with sources. The technical work overlaps in places, but the win condition is different: cited recommendation, not just a blue link.

    What can I check myself before hiring anyone?

    Run the 30-minute self-test in section 7. Five prompts, five platforms, write down what happens. Anyone you talk to after that, including us, should be able to read your test results and tell you what is realistic.

    How long does it take to start showing up in AI answers?

    Surface-level fixes from section 8 can show movement in two to four weeks on Perplexity and Google AI Overviews. Structural work from section 9 starts compounding around week six to eight and stabilizes around week twelve. Anyone promising same-week AI visibility is selling something else.

    Should I run ads or fix organic AI visibility first?

    Both, in different sizes. Cut ads to maintenance level while you install AI visibility. Once AI visibility is producing inbound, ads become amplification. Clinics that ride ads as the only engine into 2026 will pay higher CAC every year.

    Does any of this work if I am not in Tijuana?

    Most of it, yes. The five fundamentals in section 5 are universal. Section 6, the cross-border trust layer, is specific to medical tourism and most pronounced in Tijuana, San Jose Costa Rica, Bogota, Bangkok, and Istanbul. If you do not serve cross-border patients, skip section 6 and the rest still applies.

    What to do next

    If you have made it this far and the test in section 7 sounds useful, run it this Sunday. If the results show what they show for most clinics in this city, the next step is the Free AI Visibility Scorecard. It is a structured version of the same test, it is free, and it takes 24 hours.

    After the Scorecard, the path is the Cross-Border GEO Audit if the gap is wider than you expected. Then GEO Setup if you decide to install the foundation. Then Tersefy AI if you want sustained authority work. Each step is decidable on its own and you do not have to commit to the full path to start.

    The thing not to do is wait. AI visibility compounds. The clinics that started in 2024 are eighteen months ahead of the ones starting now. The cost of delay is not paid in dollars. It is paid in patients you never knew were searching.

    Is AI already recommending your surgeon? Find out free.

    Start with the free Scorecard: we tell you exactly where you appear (or do not) when US patients ask ChatGPT, Claude, Gemini, and Perplexity about Tijuana surgeons. Personalized report in 24 hours. No pitch.

    Sources

    • Aggarwal, P. et al. (2024). GEO: Generative Engine Optimization. Princeton University. arXiv:2311.09735. Cited inline in section 2.
    • Schmidt, S. et al. (2024). Search engine manipulation in the era of generative AI. Journal of Online Trust and Safety.
    • KFF Health Tracking Poll (2025). Health information seeking through AI tools. Cited inline in the 30-second take.
    • Yext (2025). Healthcare AI visibility benchmark report. Cited inline in section 5, fundamental 5.
    • AirOps (2025). Source-attribution study across LLM-grounded answers, methodology disclosed. Cited inline in section 5, fundamental 3.
    • Patients Beyond Borders (2024). Medical Tourism Statistics & Facts, Mexico chapter. Cited inline in section 6.
    • Mexican Council of Plastic, Aesthetic, and Reconstructive Surgery (CMCPER), public certification registry. Referenced in sections 5, 6, 8.
    • Mexican Ministry of Education (Secretaria de Educacion Publica), Cedula Profesional registry. Referenced in section 6.
    • Healthgrades and RealSelf provider directories. Referenced in sections 5, 8 as third-party verification surfaces.
    • Google Business Profile owner-managed listing surface. Referenced in sections 5, 8.
    • Tersefy internal observations (2025 to 2026), VIDA Wellness & Beauty Center reference implementation, n = 4 surgeons, 12-month measurement window. Cited inline in section 11.
    Author

    Emilio Alcolea

    Founder, Tersefy. Former Head of Marketing & Sales at VIDA Wellness & Beauty Center (Tijuana's largest medical tourism clinic) and Washington Vascular Specialists (USA). Built AI visibility systems for 5 surgeons, taking them from invisible to AI-recommended in 6 months.
