Education

Why ChatGPT Recommends Your Competitors Instead of You

Emilio Alcolea · May 4, 2026
    The 30-second take. When ChatGPT, Gemini, Perplexity, or Google AI Overviews keeps recommending your competitors instead of you, the model is not choosing the better surgeon. It is repeating the strongest source pattern it can verify. Your competitor has more extractable evidence: cleaner bios, stronger third-party profiles, procedure-specific reviews, hospital mentions, podcast appearances, directory listings, and consistent pricing language. The fix is not more ads or more generic blog posts. The fix is competitor source mapping: identify the exact URLs AI uses to justify your competitor, rebuild those source categories around your own clinic, and make your evidence easier to cite than theirs.

    1. The competitor is not beating you in surgery. They are beating you in evidence.

    Almost every clinic that talks to me about Tersefy says the same thing in the first ten minutes.

    “I asked ChatGPT who the best surgeon in Tijuana is for my procedure. It named three people. None of them are me. One is good. Two are average. I have more experience than they do. Why is AI recommending them?”

    That question has a clean answer.

    AI is not auditing the operating room. It is not reviewing your surgical outcomes, watching your hands, or comparing case difficulty. It is reading public evidence.

    If your competitor has a complete Healthgrades profile, a RealSelf listing, a podcast transcript, a hospital affiliation page, procedure-specific reviews, and a schema-tagged bio, the model has something to work with.

    If you have a single bio page, credentials trapped in an image, generic reviews, no third-party footprint, and procedure pages that barely mention the procedure, the model has almost nothing to defend.

    That is how average competitors win AI answers. They are not better. They are more legible.

    I saw this firsthand while running marketing at VIDA Wellness & Beauty Center. A senior surgeon with higher case volume and stronger outcomes kept disappearing from ChatGPT and Perplexity answers for his specialty. A younger surgeon with less case volume kept surfacing instead.

    The reason was not clinical. The younger surgeon had a cleaner public footprint: complete directory profiles, podcast appearances, procedure-specific copy, and a consistent bio across multiple websites. The senior surgeon had more real-world authority, but less machine-readable evidence.

    AI was not picking the worse surgeon. AI was picking the surgeon it could cite.

    Your competitor is not winning because the model likes them. They are winning because the model can prove them.

    2. What “ChatGPT recommends my competitor” actually means

    When a patient asks “Who is the best plastic surgeon in Tijuana for a deep plane facelift?”, the answer is built from source patterns.

    The model looks for entities, procedures, credentials, locations, reviews, and third-party mentions that appear together across the public web. Then it synthesizes a recommendation from the names it can explain and defend.

    That means three things.

    First, clinical experience only matters if it is published. Years in practice, case volume, training, hospital affiliation, certifications, surgical philosophy, and procedure focus do not help you if they are missing, vague, inconsistent, or trapped inside images.

    Second, third-party sources carry more weight than self-published claims. Your own website is necessary, but not enough. A claim repeated across a board registry, hospital page, medical directory, review platform, podcast transcript, and clinic profile is easier for AI to trust than the same claim sitting alone on your homepage.

    Third, consistency beats volume. Ten generic blog posts do less than five sources that all say the same clear thing: same doctor, same specialty, same procedure, same city, same credentials, same clinic.

    This is why a competitor with weaker credentials can beat you in AI recommendations. They have source depth. You have reputation trapped offline.

    AI cannot reward the authority it cannot extract.

    3. The five source signals that make competitors show up

    Most clinics losing AI recommendations are not losing for mysterious reasons. They are losing because competitors have a stronger source pattern.

    | Source signal | What your competitor has | Why AI rewards it |
    | --- | --- | --- |
    | Clean doctor entity | One canonical name, complete bio, credentials in plain text, sameAs links, consistent spelling | The model can resolve the surgeon as one stable entity |
    | Procedure-specific footprint | Dedicated pages for facelift, tummy tuck, rhinoplasty, gastric sleeve, dental implants | The model can connect the doctor to the exact patient prompt |
    | Third-party verification | Healthgrades, RealSelf, hospital pages, board registries, medical directories, interviews, podcasts | Independent sources make the recommendation easier to defend |
    | Extractable reviews | Reviews mention doctor, procedure, city, recovery, outcome, and patient origin | AI can use review language as evidence, not just sentiment |
    | Consistent commercial facts | Price ranges, location, transport, recovery, hospital, and consultation process match across sources | The model does not have to guess or reconcile contradictions |

    Most clinics losing to competitors are losing on at least three of these five. That is the good news.

    This is not magic. It is not “the algorithm.” It is source pattern competition.

    4. How to read your competitor’s AI footprint

    Do not start by writing content. Start by reading the map.

    The fastest way to understand why a competitor keeps showing up is to force AI tools to expose the sources behind the recommendation.

    Use this prompt on Perplexity first:

    “Compare Dr. [Your Name] and Dr. [Competitor Name] in Tijuana for [procedure]. Cite sources for each claim.”

    Then run this one:

    “Why is Dr. [Competitor Name] recommended for [procedure] in Tijuana? List the sources that support the recommendation.”

    Then this one:

    “What public sources mention Dr. [Competitor Name] for [procedure] in Tijuana? Separate clinic-owned sources from third-party sources.”

    Save every URL.

    Do not read the answer first. Read the sources. The answer is opinion. The URLs are the strategy.

    Build a simple source map:

    | Competitor source | Source type | What it proves | Can we match it? | Can we beat it? |
    | --- | --- | --- | --- | --- |
    | Clinic bio | Owned | Credentials, specialty, location | Yes | Yes, with stronger schema and detail |
    | Healthgrades profile | Directory | Specialty and third-party profile | Yes | Yes, if completed better |
    | RealSelf profile | Directory and reviews | Procedure relevance and patient language | Maybe | Yes, if reviews are specific |
    | Hospital page | Affiliation | Operating environment and credibility | Yes | Depends on hospital cooperation |
    | Podcast transcript | Earned media | Expertise and topical authority | Yes | Yes, through outreach |
    | Review page | Patient proof | Procedure, recovery, outcome | Yes | Yes, with better review prompts |

    This is a citation map. You are not copying your competitor’s content. You are copying the source categories that made AI trust them.

    The cited URL is the lever. The AI answer is just the output.
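    If you run the three prompts above more than once, the repeated domains matter more than any single answer. A minimal sketch of that tallying step in Python; the URLs in `cited_urls` are hypothetical placeholders standing in for whatever citations you actually collected:

```python
from collections import Counter
from urllib.parse import urlparse

# Hypothetical citations pasted in from several comparison prompts.
cited_urls = [
    "https://www.healthgrades.com/physician/dr-competitor",
    "https://www.realself.com/dr-competitor",
    "https://example-hospital.com/surgeons/dr-competitor",
    "https://www.healthgrades.com/physician/dr-competitor",
    "https://examplepodcast.com/episodes/facelift-tijuana",
    "https://www.realself.com/dr-competitor",
]

def source_frequency(urls):
    """Count how often each domain is cited across prompts.
    Domains that repeat across answers are the core of the
    competitor's AI footprint and the top of your target list."""
    domains = [urlparse(u).netloc for u in urls]
    return Counter(domains).most_common()

for domain, count in source_frequency(cited_urls):
    print(f"{count}x  {domain}")
```

    Domains cited in two or more answers go to the top of your source map; one-off citations can wait.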

    5. The Competitor Source Map

    After you pull the URLs, classify them into six buckets. This is the Competitor Source Map, and it is the diagnostic backbone of every Tersefy engagement.

    | Bucket | Examples | Priority |
    | --- | --- | --- |
    | Owned entity sources | Doctor bio, clinic profile, about page, procedure page | Critical |
    | Credential sources | Board registry, CMCPER, society membership, hospital profile | Critical |
    | Commercial sources | Pricing page, consultation page, package page, recovery information | High |
    | Review sources | Google Business Profile, RealSelf, Trustpilot, patient testimonials | High |
    | Authority sources | Podcasts, interviews, press, conference pages, guest articles | High |
    | Local trust sources | Hospital page, recovery hotel page, transport page, Tijuana medical tourism references | Medium-high |
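    The bucket classification can be roughed out in code once you have the URL list. A compressed sketch, assuming keyword-in-URL matching is good enough for a first pass; the `BUCKETS` keyword lists and every domain name below are illustrative, not a complete taxonomy:

```python
from urllib.parse import urlparse

# Illustrative keyword map for four of the six buckets. "Owned" is
# detected from your own domains, and anything unmatched falls through
# to the commercial bucket. Extend the lists with the domains that
# actually appear in your competitor's citations.
BUCKETS = {
    "credential": ["cmcper", "board", "registry", "certification"],
    "review": ["realself", "trustpilot", "reviews", "testimonials"],
    "authority": ["podcast", "interview", "press", "conference"],
    "local_trust": ["hospital", "hotel", "transport"],
}

def classify_source(url, own_domains):
    """Assign a cited URL to a source bucket by keyword matching."""
    parts = urlparse(url)
    text = (parts.netloc + parts.path).lower()
    if any(d in text for d in own_domains):
        return "owned"
    for bucket, keywords in BUCKETS.items():
        if any(k in text for k in keywords):
            return bucket
    return "commercial_or_other"

print(classify_source("https://www.realself.com/dr-x", ["myclinic.com"]))
```

    A first pass like this only has to be roughly right: the point is to see which buckets are empty on your side, not to classify every URL perfectly.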

    Now look for asymmetry.

    If your competitor has a Healthgrades profile and you do not, that is not a branding issue. That is a missing source.

    If they have a hospital page and you do not, that is not a content gap. That is a trust gap.

    If their reviews mention “deep plane facelift” and yours only say “great staff,” that is not a review count problem. That is an extraction problem.

    If they have three podcast transcripts and you have zero, that is not PR fluff. That is source depth.

    The model is not rewarding their marketing. It is rewarding their evidence stack.

    6. The source-pattern gap table

    Once you know the competitor’s source pattern, the work becomes obvious.

    | If the competitor wins because… | Your counter-move |
    | --- | --- |
    | Their doctor bio is clearer | Rewrite your bio as a canonical entity page with full name, specialty, credentials, sameAs links, hospital affiliation, and procedure focus |
    | Their procedure pages are stronger | Build one dedicated page per money procedure using patient-language titles |
    | Their directory profiles are complete | Claim and fully complete equivalent profiles with consistent name, specialty, photos, procedures, and links |
    | Their reviews are more specific | Change your review request script to ask for doctor, procedure, city, recovery, and outcome timeline |
    | Their hospital affiliation is public | Ask the hospital or facility to create or update a public surgeon affiliation page |
    | Their podcast or interview footprint is stronger | Pitch 3 to 5 podcast or interview placements around the exact procedure you want AI to associate with you |
    | Their pricing is clearer | Publish defensible price ranges and explain what changes the quote |
    | Their source set is more recent | Ship new third-party mentions and update stale profiles before the next campaign |

    This is the part most clinics avoid because it is boring. But boring wins.

    Directory cleanup, bio consistency, review specificity, hospital pages, procedure pages, podcast transcripts. That is the work. Not “more content.” Not “better captions.” Not “boost more posts.”

    The model follows the source pattern.

    7. The source categories that move fastest

    Some source gaps take months. Some can move inside a quarter. Here is the order I would attack.

    | Move | Difficulty | Expected AI impact | Why it works |
    | --- | --- | --- | --- |
    | Rewrite canonical doctor bio | Low | High | Gives AI one clean source for identity, credentials, procedures, and clinic context |
    | Complete directory profiles | Low to medium | Medium-high | Creates third-party confirmation across sources AI already understands |
    | Add procedure-specific pages | Medium | High | Connects the doctor to the exact patient prompt |
    | Fix review request script | Medium | High over time | Turns patient reviews into extractable evidence |
    | Add hospital affiliation page | Medium | High | Supports safety and credibility questions |
    | Earn podcast or interview mentions | Medium-high | High | Adds independent authority and fresh source depth |
    | Publish comparison-safe FAQ pages | Medium | Medium-high | Captures AI follow-up questions without directly attacking competitors |

    The fastest stack is this:

    1. Bio cleanup
    2. Directory completion
    3. Procedure page
    4. Review script update
    5. One earned mention

    That stack can change what Perplexity and ChatGPT have to work with in 8 to 12 weeks. Google AI Overviews usually moves slower because it depends more heavily on Google’s primary index and trusted-source thresholds.

    Want a side-by-side competitor source map?

    The Free AI Visibility Scorecard runs the 5 × 5 × 3 test on your clinic and pulls the source pattern for the competitors who keep appearing in your queries. Delivered in 24 hours with the source map applied to your top procedure.

    8. Aggressive but defensible tactics

    This is where clinics can get aggressive without crossing into fabricated reviews, fake credentials, or spam. Five moves that actually work.

    1. Build comparison-safe pages. Do not publish trash like “Dr. X vs Dr. Y: Why We Are Better.” That looks desperate. Instead, publish pages that answer the comparison questions patients actually ask:

    • How to compare facelift surgeons in Tijuana
    • What to check before choosing a tummy tuck surgeon in Mexico
    • Questions to ask before booking bariatric surgery in Tijuana
    • How to verify a plastic surgeon’s credentials in Mexico
    • What makes a dental implant clinic safe for US patients

    These pages intercept the comparison query without naming competitors. They put your evidence in front of the patient at the exact moment she is comparing.

    2. Seed the exact procedure language everywhere. If you want AI to associate you with “deep plane facelift,” that phrase needs to appear across your doctor bio, procedure page, review requests, GBP description, directory profiles, podcast topics, FAQ pages, image alt text, internal links, and schema markup. One mention is not an entity association. Repetition across source types is.
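    For the schema-markup leg of that repetition, here is a sketch of what a procedure-tagged bio could declare using the schema.org Physician vocabulary. Every name, URL, and profile link below is a placeholder, and the exact properties you use should follow the current schema.org documentation:

```json
{
  "@context": "https://schema.org",
  "@type": "Physician",
  "name": "Dr. Jane Example",
  "medicalSpecialty": "PlasticSurgery",
  "description": "Board-certified plastic surgeon in Tijuana specializing in deep plane facelift.",
  "url": "https://example-clinic.com/dr-jane-example",
  "sameAs": [
    "https://www.healthgrades.com/physician/dr-jane-example",
    "https://www.realself.com/dr-jane-example"
  ],
  "availableService": {
    "@type": "MedicalProcedure",
    "name": "Deep plane facelift"
  },
  "address": {
    "@type": "PostalAddress",
    "addressLocality": "Tijuana",
    "addressCountry": "MX"
  }
}
```

    Note how the exact procedure phrase appears in both `description` and `availableService`, and how `sameAs` points at the same directory profiles your plain-text bio names.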

    3. Turn reviews into extraction assets. Stop asking “Can you leave us a review?” Start asking “Could you mention the procedure you had, the doctor’s name, where you traveled from, and what recovery was like?” That is not manipulation. That is helping the patient write a useful review. Generic praise does not help AI. Specific context does.

    4. Earn coverage on pages that already appear in AI answers. Do not chase random backlinks. Earn coverage on pages already in the model’s grounding. If Perplexity keeps citing a medical tourism blog, a directory, a podcast, or a hospital page, that is your target list. The question is not “where can we get exposure?” The question is “which sources are already being used to answer our money prompts?” That is where you want to appear.

    5. Build source redundancy. One source is fragile. Five sources saying the same thing is a pattern. If your bio says facelift, your procedure page says facelift, your RealSelf says facelift, your reviews say facelift, and your podcast appearance says facelift, the model does not have to infer. It can repeat. That is the goal. Make the right answer obvious.
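    One way to audit that redundancy before publishing is a simple phrase tally across your source texts. A toy check in Python; the snippets in `sources` are invented examples standing in for your real bio, pages, and reviews:

```python
# Toy redundancy check: how many distinct source types repeat the
# exact procedure phrase. The snippets below are invented examples.
sources = {
    "bio": "Dr. Example is a specialist focused on the deep plane facelift.",
    "procedure_page": "Our deep plane facelift page explains recovery in Tijuana.",
    "realself": "Deep plane facelift with Dr. Example: patient reviews.",
    "reviews": "I flew from Dallas for my deep plane facelift and recovery was smooth.",
    "podcast": "Episode 12: deep plane facelift safety in Mexico.",
}

def redundancy(phrase, sources):
    """Return the source types that contain the exact phrase."""
    phrase = phrase.lower()
    return [name for name, text in sources.items() if phrase in text.lower()]

hits = redundancy("deep plane facelift", sources)
print(f"{len(hits)}/{len(sources)} source types repeat the phrase: {hits}")
```

    If the tally comes back two out of five, you know which source types still describe the procedure in vague or inconsistent language.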

    The most aggressive GEO tactic is not tricking the model. It is making your evidence impossible to ignore.

    9. Why more blog posts usually do not fix this

    When clinics see competitors winning AI answers, the first instinct is to publish more content. That usually fails.

    Generic blog volume does not fix a weak source pattern. A clinic with 80 generic posts about “things to know before surgery” can still lose to a clinic with one clean doctor bio, five procedure-specific pages, three complete directory profiles, one hospital affiliation page, ten procedure-tagged reviews, and two podcast transcripts.

    AI tools do not need more words from you. They need more verifiable evidence about you.

    Blog content helps only when it becomes a source worth citing. The same logic applies to the assets patients feed into AI themselves: when a patient uploads your PDF quote into ChatGPT for a second opinion, the model anchors on whichever clinic has the cleanest verifiable evidence around it.

    | Content type | Useful for AI? | Why |
    | --- | --- | --- |
    | Generic procedure blog | Low | Too common, weak entity signal |
    | Surgeon-authored procedure guide | High | Connects expert, procedure, and clinical judgment |
    | FAQ page based on real consult questions | High | Answers patient prompts directly |
    | Case study with method and dates | High | Creates original evidence |
    | Comparison-safe guide | Medium-high | Captures competitor research without naming competitors |
    | Thin SEO listicle | Low | Adds volume without proof |

    Write fewer posts. Make each one easier to cite.

    10. The reframe: stop trying to rank in AI answers

    Most clinics treat ChatGPT like a new search engine. That is the wrong frame.

    You are not trying to “rank” inside ChatGPT. You are trying to become the easiest defensible answer.

    The model is not where the work happens. The model is where the source pattern shows up.

    If the source pattern says your competitor is more documented, more cited, more procedure-specific, and more consistent, they win. If your source pattern becomes cleaner, deeper, and easier to verify, you start replacing them.

    That is GEO in medical tourism. Not keyword stuffing. Not blog volume. Not praying that the model “finally understands.”

    Source pattern in. Recommendation out.

    The model is not where the work happens. The model is where the result shows up.

    11. How to start closing the gap this week

    Start with one procedure and one competitor. Do not try to fix your entire clinic footprint at once.

    Run this prompt:

    “Compare Dr. [Your Name] and Dr. [Competitor Name] in Tijuana for [procedure]. Cite sources for each claim.”

    Save every cited URL.

    Then build this sheet:

    | Source category | Competitor has it? | We have it? | Gap | First action |
    | --- | --- | --- | --- | --- |
    | Canonical doctor bio | Yes/No | Yes/No | Weak / missing / strong | Rewrite or expand bio |
    | Procedure page | Yes/No | Yes/No | Weak / missing / strong | Build dedicated page |
    | Directory profile | Yes/No | Yes/No | Weak / missing / strong | Claim and complete profile |
    | Hospital page | Yes/No | Yes/No | Weak / missing / strong | Request listing or update |
    | Procedure-specific reviews | Yes/No | Yes/No | Weak / missing / strong | Update review script |
    | Earned media | Yes/No | Yes/No | Weak / missing / strong | Pitch podcast or interview |
    | Pricing consistency | Yes/No | Yes/No | Weak / missing / strong | Publish or align range |

    The goal is not to beat every competitor everywhere. The goal is to beat the one who keeps appearing for your money procedure. Once you close that gap, move to the next procedure.

    Want this run on your top three competitors?

    The Cross-Border GEO Audit pulls the source pattern for your top competitors, maps it against yours, and delivers a gap inventory tied to specific URLs. $997, delivered in 3 business days, and credited toward GEO Setup if you continue within 30 days.

    Quick answers

    Why does ChatGPT recommend my competitors and not me?

    Because their public evidence is easier to extract, verify, and cite. AI is not judging surgical quality directly. It is using visible source patterns: bios, procedure pages, reviews, directories, hospital pages, press, and consistent facts. Your competitor is not winning because the model likes them. They are winning because the model can prove them.

    Are competitors paying ChatGPT to recommend them?

    Usually no. The major AI tools do not sell simple pay-to-be-recommended placements inside organic answers. If a competitor keeps appearing, assume their source pattern is stronger before assuming paid influence.

    Should I write pages attacking my competitors?

    No. Attack pages make your clinic look low-trust and AI tools deprioritize them. Write comparison-safe pages instead: how to compare surgeons, how to verify credentials, what safety signals matter, what questions to ask before booking. These intercept comparison queries without naming competitors.

    How do I find the sources AI is using for my competitor?

    Run competitor comparison prompts on Perplexity and ask for cited sources. Then repeat on ChatGPT with browsing and Google AI Overviews. Save the URLs. The repeated source categories are the competitor’s AI footprint and your real target list.

    If my reviews are better, why does AI still recommend them?

    Because AI needs extractable detail, not just positive sentiment. Reviews mentioning doctor name, procedure, city, recovery, and outcome are more useful than generic five-star reviews that say “great experience.”

    How long does it take to replace a competitor in AI answers?

    For ChatGPT and Perplexity, expect early movement in 8 to 12 weeks after source fixes. For Gemini and Google AI Overviews, expect 3 to 6 months. Faster claims are usually sales talk.

    Can I block competitors from appearing in answers about my clinic?

    No. AI answers are not under your editorial control. The only thing you control is the strength of your own evidence base. The goal is not to erase competitors. The goal is to be the recommended one when comparison happens.

    What to do next

    Pick one competitor. Pick one procedure. Run the comparison prompt with citations. Save the URLs.

    That source list is the work. Not the vibes. Not the model. Not the ad budget. The URLs.

    Some gaps can be closed in two weeks. Others take a quarter. But once you see the source pattern, the problem stops being mysterious.

    The full diagnostic is the 5 × 5 × 3 AI Visibility Test. The consultation-room response is the AI Second-Opinion Loop. The strategic foundation is the full guide on AI visibility for Tijuana surgeons.

    The model is not the problem. The source pattern is. Fix the source pattern and the answer changes.

    Sources

    • Aggarwal, P. et al. (2024). GEO: Generative Engine Optimization. Princeton University. arXiv:2311.09735. Reference for the entity-and-citation framing in section 2.
    • AirOps (2025). Source-attribution study across LLM-grounded answers. Reference for the source-pattern analysis in sections 2, 4, 5.
    • KFF Health Tracking Poll (2025). Health information seeking through AI tools. Reference for patient query behavior context.
    • Patients Beyond Borders (2024). Medical Tourism Statistics & Facts, Mexico chapter. Reference for cross-border patient context.
    • Mexican Council of Plastic, Aesthetic, and Reconstructive Surgery (CMCPER), public certification registry. Referenced in section 1.
    • Tersefy internal observations (2025-2026), VIDA Wellness & Beauty Center reference implementation, n = 4 surgeons, 12-month measurement window. Cited inline in section 1.
    Version history (2 versions)
    • v2.0 (2026-05-04): Editorial consolidation. Added Competitor Source Map framework with 6-bucket classification, expanded counter-move tables, added comparison-safe pages tactic, sharpened operator language.
    • v1.0 (2026-05-04): Initial publication. Final article on the competitive dynamics of AI recommendation.
    Emilio Alcolea
    Author

    Founder, Tersefy. Former Head of Marketing & Sales at VIDA Wellness & Beauty Center (Tijuana's largest medical tourism clinic) and Washington Vascular Specialists (USA). Built AI visibility systems for 5 surgeons, taking them from invisible to AI-recommended in 6 months.
