Cybercriminals are using artificial intelligence to create fake doctor profiles: these are the scams that put health at risk
Artificial intelligence (AI), a tool destined to revolutionize medicine and improve diagnoses, is being used by cybercriminals to spread a new form of fraud: the creation of fake medical pages, profiles, and documents that promise miracle treatments for excess weight, diabetes, or blood sugar control.
According to an analysis by Check Point Software, AI-driven pharmaceutical scams have grown alarmingly in 2025, affecting both patients and healthcare institutions worldwide. The report notes that criminals impersonate certified doctors or reputable clinics and use deepfake technology to generate seemingly credible photos, videos, and testimonials in order to sell counterfeit or unsafe medications.
“The consequences go far beyond financial theft,” warns Amit Weigman, from the CTO Office at Check Point. “Victims are persuaded to consume unapproved substances, marketed as if they were legitimate prescriptions.”

The victims not only lose money, but they also put their health at risk. Photo: iStock
The study found that since January 2025, there has been a coordinated wave of digital fraud using artificial intelligence to impersonate medical entities, especially on social media. On Facebook alone, researchers found more than 500 fake pages created daily offering drugs similar to Ozempic or Wegovy, legitimate medications indicated for weight management and type 2 diabetes.
One of the most representative cases was that of an account impersonating a licensed American physician, using his credentials and professional photographs. Through paid advertisements, the page directed users to purported “online pharmacies” selling unregulated products.
Among the most aggressive counterfeit products identified by the company is PEAKA GLP-1 Slimming Pearls, also known as Slimming Drops or Liquid Pearls. This product is falsely advertised as equivalent to FDA-approved medications, even though it lacks scientific backing or regulatory approval.
Fraudulent ads use AI-generated audiovisual content—deepfake videos, cloned voices, and fake testimonials—to mimic legitimate medical promotions. Since October 2025, Check Point has identified more than 200 such ads, 72% of which use synthetic material that includes the image or voice of real endocrinologists.

Cybercriminals are using AI tools to create fake doctor profiles. Photo: iStock
The goal is to attract users interested in weight loss treatments by redirecting them to websites that mimic real clinics, complete with professional photos, cloned logos, and fake contact numbers. Once a patient decides to purchase, payment is processed through opaque or international systems, even though well-known brand logos may be displayed to create a false sense of security.
The result, Check Point warns, is usually one of two outcomes: the total loss of the money, or the receipt of unlabeled products whose composition is unknown and potentially dangerous.
A digital medical fraud industry
Check Point's technical analysis, conducted through its External Risk Management (ERM) platform, revealed that these scams operate like a structured criminal industry, with shared infrastructure and fraud kits available for sale on the dark web:
- Common infrastructure: Many fraudulent sites use the same hosting providers, often located in countries with weak regulations.
- Repeated templates: The source code and payment systems are identical across sites, pointing to pre-designed web kits that can set up a fake clinic in a matter of hours.
- AI-generated images: Photos of doctors, packaging, and hospitals show signs of having been created by image-generation algorithms.
- Fraud kits for sale: These packages include everything needed to operate a fake medical website, from templates and scripts to automatic translations.
Beyond the financial aspect, the primary risk is to public health. Counterfeit products often contain untested or inert substances, and their promises of “losing 20 kilos in a month” reflect a pattern of manipulation that exploits patients' emotional vulnerability.
Furthermore, this type of deception erodes trust in telemedicine and legitimate online care platforms by sowing doubt about the authenticity of professionals offering virtual medical services.

More than 500 fraudulent pages are created on social media every day. Photo: iStock
The report concludes that combating this threat requires coordinated action among cybersecurity experts, health authorities, and e-commerce platforms. Meanwhile, Check Point suggests some basic guidelines for protection: always verify the legitimacy of pharmacies and doctors' certifications, be wary of social media ads, check the sources of recommendations, and be alert to signs of manipulation such as extreme discounts or "limited-time" countdown timers.
“Trust in the digital age can no longer be taken for granted,” Weigman emphasizes. “Protecting health now requires the same vigilance we apply to safeguarding critical systems: verifying, informing ourselves, and combating misinformation before it spreads.”