How to Get Patient Reviews That AI Can Actually Use: The Medical Tourism Coordinator's Playbook

You get reviews AI systems can actually use by asking at the right time, prompting for specific details, and converting private WhatsApp praise into public reviews.

Most practices already have the star ratings. The problem is what's inside them. A review that says "Amazing experience, 10/10 would recommend" adds to your count but gives an AI system nothing to extract when a patient asks who performs facelifts in Tijuana. No doctor name. No procedure. No city the patient flew in from. No recovery timeline. No measurable outcome.

I asked this question at VIDA about a year ago, and the answer was uncomfortable. We had hundreds of Google reviews. Strong star rating. Patients genuinely loved their experience. But when I started reading the reviews through the lens of what AI could actually extract from them, the picture changed. Most of our reviews were testimonials written for humans. They were warm, grateful, emotional. And they were far less useful in the systems our future patients were already using to research doctors.

• 58%: business websites are the most-cited source type in AI-mediated local search (BrightLocal, 2024)
• Rising fast: AI-referred traffic to healthcare sites has risen sharply in recent reporting (BrightEdge, 2024)
• Highly variable: AI brand recommendations vary meaningfully across prompts and sessions (SparkToro, 2024)

We wrote about why this matters in How Google Reviews Impact AI Recommendations for Medical Clinics. That article explains the problem. This one is the playbook for fixing it. Exact scripts your coordinators can use this week. Timing strategies built for the medical tourism patient journey. And a measurement framework so you know whether it's working.

Your reviews are your reputation. But if they don't contain the right information, they're far less useful to the AI systems millions of people are already using to ask healthcare questions.

Why Most Medical Tourism Reviews Are Useless for AI

Your patients will leave good reviews. That's not the issue. The issue is when and how you ask.

Think about the moment most coordinators request a review. It's at discharge. The patient just had surgery. They're tired, maybe still on pain medication. They're thinking about the border crossing and whether the wait at San Ysidro will be one hour or three. They're grateful but not analytical. They're in a rush.

The review they leave at this moment reflects that state: "Best decision ever! Dr. was wonderful! 5 stars!"

That review still has value. It adds to your star count and your volume. But from an entity-extraction standpoint, it may contain little usable detail. No doctor name. No procedure. No patient origin city. No outcome. No recovery timeline. When an AI system encounters it, there's nothing specific to associate with any particular doctor, procedure, or geography.

The best moment for a detailed, entity-rich review is 3 to 4 weeks post-op. By then, the patient is seeing real results. Recovery is progressing. They've had time to process the experience. They have specific details to share because they've lived them.

But nobody asks at that moment. The coordinator has moved on to 30 other active patients in their WhatsApp threads. The post-op follow-up happened at Day 7. By Day 21, the thread has gone quiet. The review window closes.

The timing of the ask strongly influences the quality of the review. Ask at discharge, get emotion. Ask at three to four weeks, get entities.

What Makes a Review "Entity-Rich" (And Why AI Cares)

An entity-rich review contains specific, extractable data points that can be associated with a doctor, a practice, a procedure, and an outcome. The difference becomes clear when you see it side by side.

Low-signal review:

"Amazing doctor! Best experience ever! 10/10 would recommend!"

Extractable entities: none. No doctor name, no procedure, no origin city, no outcome.

Entity-rich review:

"Dr. Quiroz performed my deep plane facelift at VIDA in Tijuana. I drove from San Diego. Natural results, minimal bruising, back to work in 10 days."

Extractable entities: doctor name, procedure, practice, city, patient origin, outcome, recovery timeline.
One review containing multiple specific, extractable details. That single review may be more useful for AI-mediated discovery than many generic five-star reviews. When multiple reviews independently mention the same doctor, procedure, and outcomes, they create a stronger and more consistent public signal. You can think of this as a confidence-building pattern: repeated, consistent mentions across reviews make a provider easier to understand and easier for any system to surface.
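To make the difference concrete, here is a minimal sketch of the kind of naive keyword extraction a review pipeline might run. The entity vocabularies and matching logic are illustrative assumptions, not how any particular AI system actually works:

```python
# Illustrative entity vocabularies -- hypothetical, not any real system's lists.
DOCTORS = ["Dr. Quiroz"]
PROCEDURES = ["deep plane facelift", "gastric sleeve"]
PRACTICES = ["VIDA"]
PLACES = ["Tijuana", "San Diego", "Phoenix"]

def extract_entities(review: str) -> dict:
    """Naive keyword matching: return every known entity the review mentions."""
    text = review.lower()
    found = lambda terms: [t for t in terms if t.lower() in text]
    return {"doctors": found(DOCTORS),
            "procedures": found(PROCEDURES),
            "practices": found(PRACTICES),
            "places": found(PLACES)}

generic = "Amazing doctor! Best experience ever! 10/10 would recommend!"
rich = ("Dr. Quiroz performed my deep plane facelift at VIDA in Tijuana. "
        "I drove from San Diego. Natural results, minimal bruising, "
        "back to work in 10 days.")

print(extract_entities(generic))  # all lists empty: nothing to associate
print(extract_entities(rich))     # doctor, procedure, practice, two cities
```

Against the two reviews above, the generic one yields empty lists while the entity-rich one yields a doctor, a procedure, a practice, and two cities.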

This is where a simple internal KPI can help. We call it the Review Specificity Score, and there are two ways to use it depending on what you need.

Quick audit (do it right now): Pull your last 20 Google reviews. Count how many mention the doctor by name AND the specific procedure. Divide by 20. That gives you a snapshot.

Monthly KPI (track it over time): Count the number of reviews in the last 90 days that mention the doctor by name AND the specific procedure, then divide by total reviews in that same 90-day window. The rolling window matters because recent reviews tend to carry more practical weight than older ones in how your practice is perceived and discovered.

As a practical internal benchmark, a score below 30% suggests weak specificity. Between 30% and 50% suggests mixed quality. Above 50% suggests a stronger entity signal.
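If you'd rather script the quick audit than count by hand, a minimal sketch might look like the following. The doctor and procedure keyword lists are assumptions you'd fill in from your own roster, and raw substring matching will miss misspellings, so treat it as a starting point:

```python
DOCTOR_NAMES = ["Quiroz", "Rodriguez"]            # assumption: your own roster
PROCEDURE_TERMS = ["facelift", "gastric sleeve"]  # assumption: your procedures

def is_specific(review: str) -> bool:
    """A review counts only if it names a doctor AND a specific procedure."""
    text = review.lower()
    return (any(name.lower() in text for name in DOCTOR_NAMES)
            and any(term.lower() in text for term in PROCEDURE_TERMS))

def specificity_score(reviews: list[str]) -> float:
    """Percentage of reviews that pass the doctor-AND-procedure test."""
    if not reviews:
        return 0.0
    return 100 * sum(map(is_specific, reviews)) / len(reviews)

# Example: 7 specific reviews out of 20 -> 35%, the "mixed quality" band.
```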

And the score doesn't just give you a number. It reveals which doctors and procedures are underrepresented. If Dr. A has 35 reviews mentioning his name and procedure and Dr. B has 3, Dr. B will be less visible regardless of experience or skill. The score is a diagnostic tool, not just a metric.

The Two-Touch Review System for Medical Tourism

This is the operational core of what we built. Two review asks, timed to the medical tourism patient journey, each with a different purpose.

Touch 1 (Day 0-1, discharge): emotional review (star rating + brief sentiment). "Amazing experience! 5 stars!"

Touch 2 (Day 21-28, post-op): entity-rich review (doctor, procedure, origin, outcome). "Dr. Quiroz performed my deep plane facelift at VIDA. I flew from Phoenix..."

Touch 1: At Discharge (Day 0 to 1)

Purpose: capture the emotional response and the star rating. This review will be short and generic. That's okay. Its job is to increase review count and maintain your star average.

Script for coordinator (WhatsApp, English):

"Hi [patient name]! We're so glad everything went well. If you have 30 seconds, leaving a Google review would mean a lot to Dr. [name] personally. Here's the direct link: [link]"

Send via WhatsApp with a direct Google review link. Not the Google Business Profile URL. The direct review link that opens the review form.
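If you want to generate those direct links programmatically for each listing, one commonly used pattern builds the write-review URL from the listing's Place ID. The Place ID below is a placeholder, and you should verify the URL format against Google's current documentation before rolling this out:

```python
from urllib.parse import urlencode

def direct_review_link(place_id: str) -> str:
    """Build a link that opens the Google review form directly.

    Uses the widely observed search.google.com/local/writereview pattern;
    confirm against Google's current docs for your listing type.
    """
    return ("https://search.google.com/local/writereview?"
            + urlencode({"placeid": place_id}))

# Placeholder Place ID -- look up the real one for each doctor's listing.
print(direct_review_link("ChIJ_PLACEHOLDER_ID"))
```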

A timing detail specific to Tijuana: the border crossing is actually your friend here. Patients sitting in the San Ysidro or Otay Mesa line often have downtime and are already on their phones. A coordinator who sends the link with "While you're in the border line, here's the link if you have a moment" catches the patient in a captive-audience moment.

Expected result: "Amazing experience, Dr. Quiroz was wonderful, 5 stars." Short. Emotional. Fine for now.

Touch 2: At 3 to 4 Weeks Post-Op (Day 21 to 28)

Purpose: capture the detailed, entity-rich review. In our testing, this is the review that appears to contribute more to AI visibility.

Script for coordinator (WhatsApp, English):

"Hi [patient name]! It's been a few weeks since your [procedure] with Dr. [name]. How are you feeling? We're so happy to hear about your progress. If you have a moment, would you mind editing your Google review to add some details about your experience? Things like how recovery went, any results you're seeing, and where you traveled from really help other patients who are researching the same procedure. Here's the link: [link]"

Alternative script (more specific prompt):

"Hi [patient name]! We hope recovery from your [procedure] with Dr. [name] is going great. If you'd be willing to update your review with a few more details, it really helps future patients. Some things that are especially helpful to mention: the specific procedure you had, your doctor's name, where you traveled from, how recovery has been, and any results you're noticing. No pressure at all, but if you have 2 minutes, here's the link: [link]"

A quick note on Google's review system: each person can leave one review per business listing. If the patient already left a review at discharge, the right ask is to edit that existing review and add detail. Google makes this easy. They open their original review, tap the edit icon, and expand it. If for some reason they didn't leave a Touch 1 review, then Touch 2 becomes their first and only review, which is fine.

"You're not asking patients to lie or exaggerate. You're asking them to be specific. Specificity is what they already want to share. They just need the prompt."

Expected result: "Dr. Quiroz performed my deep plane facelift at VIDA in March 2024. I flew from Phoenix. Recovery was smooth, minimal swelling, natural results. The practice arranged border transportation and hotel. Highly recommend for anyone considering this procedure in Tijuana."

Operationally, we've seen that many satisfied medical tourism patients are more willing to write a detailed public review once they can point to early recovery or visible results. They chose to travel to Mexico for surgery. Friends and family were skeptical. A detailed review lets them make their case. You're not fighting patient reluctance. You're channeling motivation they already have.

Compliance note: Google's policies prohibit review gating (filtering who gets asked based on expected sentiment) and incentivized reviews. Both Google and regulators distinguish between asking for honest detail and prohibited practices like gating, coercion, or incentives. Asking a patient to mention their procedure and doctor's name is a request for completeness, not manipulation. The FTC's 2024 final rule on fake reviews reinforces this: soliciting honest, specific feedback is permitted, while fabricating or selectively filtering reviews is not. Clinics should still review current platform rules and compliance requirements for their market. (FTC, 2024; Google Business Profile Guidelines)

Most practices already do a post-op check-in at 2 to 4 weeks. It's a clinical follow-up. The review ask doesn't have to be a separate touchpoint. It's one extra line at the end of an existing conversation. "By the way, if you have a moment to add some of that to your Google review..." That reframing matters for coordinator buy-in. This isn't extra work. It's adding one line to a conversation that's already happening.
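For teams that want a nudge rather than memory, the two touches are simple enough to schedule from the discharge date. Here is a minimal sketch; the shortened templates are hypothetical stand-ins for the full scripts above, and a real version would live in your CRM or a shared coordinator calendar:

```python
from datetime import date, timedelta

# Hypothetical shortened templates standing in for the full scripts above.
TEMPLATES = {
    1: ("Hi {name}! If you have 30 seconds, a Google review would mean a lot "
        "to Dr. {doctor} personally. Here's the direct link: {link}"),
    2: ("Hi {name}! It's been a few weeks since your {procedure} with "
        "Dr. {doctor}. Would you mind adding a few details to your review? "
        "Here's the link: {link}"),
}

def touch_dates(discharge: date) -> dict[int, date]:
    """Touch 1 at discharge; Touch 2 at the start of the Day 21-28 window."""
    return {1: discharge, 2: discharge + timedelta(days=21)}

def build_message(touch: int, **fields: str) -> str:
    return TEMPLATES[touch].format(**fields)

for touch, due in touch_dates(date(2024, 3, 1)).items():
    print(due, "->", build_message(touch, name="Sarah", doctor="Quiroz",
                                   procedure="deep plane facelift",
                                   link="[link]"))
```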

The WhatsApp Visibility Problem

This is a Tijuana-specific operational issue that doesn't get discussed enough.

The best patient feedback your practice receives probably lives in WhatsApp threads. Patients send before-and-after photos, voice notes, detailed recovery updates, grateful messages with exclamation marks and heart emojis. Coordinators see these every day. Some of them are the most compelling testimonials you could ever hope for.

But WhatsApp is a closed platform. Public search engines and AI systems generally cannot directly access private WhatsApp conversations. The most powerful testimonials your practice receives never enter the public web. They sit in a coordinator's phone, seen by one person, invisible to every system your future patients are using.

The operational fix is simple but requires a habit change. When a patient sends a glowing WhatsApp message or shares results, the coordinator responds within two hours:

"That's amazing! Would you mind sharing something similar as a Google review? It really helps other patients find us. Here's the link: [link]"

You're not asking them to copy-paste the WhatsApp message. You're asking them to share the same sentiment publicly. In our experience at VIDA, response rates improve when the ask comes quickly, ideally the same day. Wait a day and the moment passes. The patient already got the dopamine hit from sharing with the coordinator. The motivation to share publicly drops fast.

There's a specific version of this worth calling out. Patients frequently send WhatsApp voice notes that are one to three minutes of detailed, emotional, specific feedback. These voice notes often contain exactly the entity-rich content we're talking about: "Dr. Quiroz, you changed my life. The deep plane facelift was exactly what I wanted. I can't believe I drove from Phoenix and I was back home in two days." The coordinator can offer to summarize the key points and say: "That was so beautiful. Would you mind saying something similar as a Google review? I can send you a quick summary of what you said if that would make it easier." If the patient agrees, send the summary as a starting point they can use or adapt in their own words. The important thing: the patient writes and posts the review themselves. Staff should never draft or post reviews on a patient's behalf. You're converting the patient's own words into a public review, with their permission. Authentic. Specific. And finally visible.

Where to Diversify Your Reviews

Google is still the primary review platform for most practices. It's not the only review source patients encounter, and it's not the only place AI systems may look.

Source breakdown of ChatGPT local search citations (BrightLocal, 2024):

• Business websites: 58%
• Business mentions: 27%
• Directories: 15%

Review platform priority by specialty:

• Google Business Profile: highest priority for all specialties (Tier 1)
• RealSelf: plastic surgery
• Doctoralia: all specialties (Mexico)
• BariatricPal: bariatric surgery
• Reddit: organic mentions only
• Facebook Groups: organic mentions only

Your review diversification strategy should match your specialty.

Google Business Profile. Highest priority for everyone. Both the practice GBP and individual doctor GBP listings if practitioner profiles exist. If they don't exist yet and your doctors meet Google's eligibility criteria for practitioner listings, creating them can be a high-impact step. Check Google's guidelines for individual practitioner profiles before setting them up, as not all provider types qualify and duplicate listings can cause problems. When reviews go to a single practice listing, the entity association is: review → practice. When a doctor has their own practitioner listing, the association becomes: review → doctor → practice. That second path creates a much stronger doctor-entity signal.

RealSelf. High priority for plastic surgery. RealSelf's "Worth It" rating system is one of the most structured review formats in medical tourism. The platform structures reviews around a worth-it/not-worth-it binary, then prompts for procedure name, doctor name, location, cost, and detailed narrative. A doctor with 50 detailed RealSelf reviews may have a clearer public evidence trail than a doctor with 200 generic Google reviews. RealSelf also ties reviews to procedure pages like "Facelift in Tijuana," which creates geographic-procedure entity associations. RealSelf requires reviews from genuine patients only (RealSelf Community Guidelines, 2025). Ask patients to leave a review specifically for the doctor, not just the practice.

BariatricPal. High priority for bariatric surgery. Something interesting: bariatric practices in Tijuana have stumbled into better review signals than plastic surgery practices, and BariatricPal is why. The platform's journal format essentially forces entity-rich reviews. It asks for specific fields: procedure type, surgeon name, date, weight loss stats, complications. Patients writing BariatricPal journals naturally include the exact data that makes reviews useful for discovery. Practices like Obesity Control Center and CER Bariatrics benefit from this without even trying. If you do bariatric surgery, make sure your patients know BariatricPal exists.

Doctoralia. Doctoralia is especially relevant for Mexican healthcare discovery and can strengthen a doctor's public profile across search and discovery systems. Many Tijuana doctors already have Doctoralia profiles created by the platform itself. They may have reviews they don't even know about. Claim those profiles. The reviews on Doctoralia tend to be in Spanish from Mexican patients, which provides a different but complementary entity signal. A doctor with reviews in both English and Spanish has a richer, more diverse entity profile.

Reddit and Facebook Groups. Medium priority but growing. Public forums like Reddit can influence how patients research providers, and discussions there may surface in search and AI answers. Subreddits like r/plasticsurgery, r/gastricsleeve, and r/medicaltourism see active medical tourism discussion. Facebook groups like "Gastric Sleeve Mexico" with 50,000+ members function as de facto review platforms. Be cautious about solicitation and follow each platform's rules. But you can mention to patients: "Many of our patients share their experiences on Reddit and in Facebook groups. We love seeing that." The organic sharing from these communities contributes to a doctor's overall public footprint.

Your Review Responses Are Entity Opportunities Too

I didn't expect this one. When your practice responds to a Google review, that response text is public and can add useful context for searchers and automated systems. Most practices waste this opportunity with cookie-cutter responses: "Thank you for your kind words! We hope to see you again."

Compare that to: "Thank you, Sarah! Dr. Quiroz and the entire team at VIDA appreciate you sharing your experience. We love helping patients from the San Diego area."

That response reinforces the doctor entity, the practice name, and the patient's general origin without overstepping. Even if the original review was generic, a thoughtful response can layer in useful context like the doctor name and practice location.

Privacy note: Be careful about confirming specific treatment details in public review responses, even if the patient mentioned them first. In healthcare, publicly acknowledging a patient's procedure or medical outcome in a practice-authored response can create compliance risk. The safest approach: reference the doctor's name and the practice, mirror the patient's general sentiment, but avoid restating specific procedures, diagnoses, or health outcomes. If you're unsure where the line is, check with your compliance counsel before writing response templates.

This takes maybe 30 extra seconds per response. But across hundreds of reviews, it builds a layer of entity-rich content that can be extracted by any system reading your reviews. We started doing this at VIDA about eight months ago. I can't draw a clean causal line because there are too many variables. But the correlation between entity-enriched responses and improved AI mention rates has been consistent enough in our monitoring that we haven't stopped.

How to Measure Review Quality: The Review Specificity Score

You need a number. Something you can track monthly and use to set targets. Here's the KPI we use internally.

Quick audit (do it today): Pull your last 20 Google reviews. Count how many mention the doctor by name AND the specific procedure. Divide by 20.

Monthly KPI: Count all reviews from the last 90 days that mention doctor name AND specific procedure. Divide by total reviews in that 90-day window.

The 90-day window matters for the ongoing KPI. Recent reviews tend to carry more practical weight than older ones in how practices are perceived and discovered. A practice that had great entity-rich reviews in 2022 but generic reviews since then may see declining visibility. What you collected last quarter matters more than what you collected three years ago.

Scoring benchmarks:

  • Above 50%: Strong entity signal. Your reviews are working for visibility.
  • 30% to 50%: Workable but needs improvement. Implement the two-touch system.
  • Below 30%: Weak specificity. In our experience, many Tijuana practices currently fall into this range.

The score also tells you which doctors and procedures need attention. When you break it down by doctor, you often find dramatic imbalances. One surgeon has 40 entity-rich reviews. Another has 3. That second surgeon is less visible regardless of how skilled they are or how many procedures they perform.

"The review AI can use is the review your coordinator knows how to ask for."

Set a target of 50%+ within 90 days of implementing the two-touch system. Track monthly. When I say track, I mean someone actually reads the recent reviews and counts. It takes 15 minutes. For most practices, a manual monthly count is the simplest way to start, though you could eventually build an automated workflow using tagging tools or LLM-based review analysis.
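If you do eventually automate the monthly count, a minimal sketch might reuse the same doctor-and-procedure test as the quick audit, filter to a rolling 90-day window, and break the score down by doctor to surface the imbalances described above. The review schema and keyword matching are assumptions; an LLM-based classifier could later replace the `is_specific` test:

```python
from datetime import date, timedelta

DOCTOR_NAMES = ["Quiroz", "Rodriguez"]            # assumption: your roster
PROCEDURE_TERMS = ["facelift", "gastric sleeve"]  # assumption: your procedures

def is_specific(text: str) -> bool:
    t = text.lower()
    return (any(d.lower() in t for d in DOCTOR_NAMES)
            and any(p.lower() in t for p in PROCEDURE_TERMS))

def monthly_kpi(reviews: list[dict], today: date) -> dict:
    """reviews: [{'text': str, 'date': date}, ...] -- hypothetical schema."""
    window = [r for r in reviews if today - r["date"] <= timedelta(days=90)]
    if not window:
        return {"overall": 0.0, "by_doctor": {}}
    overall = 100 * sum(is_specific(r["text"]) for r in window) / len(window)
    by_doctor = {  # entity-rich reviews that also name each doctor
        d: sum(d.lower() in r["text"].lower() and is_specific(r["text"])
               for r in window)
        for d in DOCTOR_NAMES
    }
    return {"overall": round(overall, 1), "by_doctor": by_doctor}
```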

One thing worth understanding about how AI uses this data: recent research shows AI-generated recommendations vary meaningfully across prompts and sessions (SparkToro, 2024). Your goal isn't to "rank #1" in AI. It's to build enough consistent entity signal that your doctors appear regularly, across platforms, across queries. In our experience, review specificity has been one of the most effective levers for that.

A Counterintuitive Finding About Imperfect Reviews

This took me a while to accept. In our monitoring, a 4.7-star profile with detailed, credible reviews has consistently appeared more useful than a 5.0 profile filled with generic praise, both for converting patients and for AI-mediated discovery.

A perfect 5.0 where every review says "amazing experience" gives any system less to work with than a 4.7 where reviews mention specific doctors, procedures, outcomes, and patient origins. The 4.7 also reads as more credible. A perfect score with only short, generic reviews can look suspicious to both AI systems and humans.

This doesn't mean you should want lower ratings. It means that a patient who gives 4 stars but writes "Dr. Rodriguez performed my gastric sleeve at VIDA. I flew from Dallas. Down 65 lbs at 4 months. Recovery was harder than expected the first week but worth it" has given you something far more valuable for visibility than a 5-star "Great doctor!!!"

The implication for coordinators is straightforward: don't be afraid of detailed reviews. Even the ones with constructive feedback contain entity data. And a review profile that mixes genuine praise with occasional honest nuance reads as more authentic to every system that evaluates it.

What We Built for This at VIDA (And What Tersefy Offers)

At VIDA, we built this system from the inside. Two-touch timing. Coordinator scripts in English and Spanish. WhatsApp-to-review conversion protocols. Review diversification across four platforms. Monthly review specificity scoring. Entity-enriched response templates. And monitoring of how reviews were being cited, or not cited, in AI answers.

We tracked which review patterns seemed to appear more often in AI answers and refined our scripts accordingly. It took months of iteration. The scripts in this article are the result of that iteration.

This system is now part of the AI + Reputation plan at Tersefy. What the plan includes: a review strategy audit with your current specificity score and platform coverage, coordinator training with WhatsApp script templates in English and Spanish, review monitoring across Google, RealSelf, Doctoralia, and specialty platforms, monthly review specificity scoring and reporting by doctor, AI citation monitoring to track how your reviews appear in ChatGPT, Gemini, and Perplexity answers, and integration with the full GEO strategy including entity optimization, structured data, and pricing transparency.

If your practice has hundreds of five-star reviews and AI still doesn't mention your doctors, the problem isn't your reputation. It's how your reputation is structured. Request your free AI Visibility Audit and we'll show you your current Review Specificity Score, which platforms appear most visible in AI answers about your practice, and exactly what to change.

The Quick-Start Checklist

Implement this week
• Calculate your Review Specificity Score. Pull your last 20 Google reviews. Count how many mention the doctor by name AND the procedure. Divide by 20.
• Set up a two-touch review flow. Touch 1 at discharge (Google review link via WhatsApp). Touch 2 at 3 to 4 weeks (specific prompt asking them to edit their existing review with details).
• Create direct Google review links for each doctor's GBP listing. If individual practitioner listings don't exist and your doctors qualify under Google's practitioner guidelines, create them.
• Audit your coordinator's current review request script. Is it "please leave us a review" or "please mention Dr. X and your procedure"?
• Check RealSelf, Doctoralia, and BariatricPal. Does each doctor have active profiles? Are patients reviewing there?
• Search your doctors' names on Reddit. What are patients saying? Is it consistent with your marketing?
• Convert one WhatsApp testimonial to a public review this week. Ask the patient directly, within two hours of receiving the positive message. Let them write and post it themselves.
• Rewrite your review response templates to include doctor name and practice location. Avoid confirming specific treatment details in public responses.
• Set a 90-day target: 50% Review Specificity Score.

Want to see where your clinic stands?

We test 20+ real patient prompts across ChatGPT, Gemini, Claude, and Perplexity. Full report in 48 hours. Free.

Get your free AI audit