Balance the Triangle Daily Brief — 2026-02-14

Technology is moving faster than society is adapting.

Today’s ownership tension: Ransomware attacks surged 49% in 2025 according to BlackFog’s annual report released yesterday. Conduent’s data breach now affects 15.4 million people in Texas alone—more than triple the company’s initial disclosure—exposing Social Security numbers, medical data, and health insurance information. AI-powered romance scams are outpacing detection, with deepfake video calls and voice cloning making fraud indistinguishable from legitimate communication. The gap between “we were breached” and “here’s how many people were affected” is widening, third-party vendor risk is becoming existential exposure, and AI is collapsing the trust layer that human relationships depend on.

Why This Matters Today

We’re witnessing three converging failures in operational security: attack volume is increasing faster than defensive capacity can scale, third-party breaches cascade across customer bases before organizations understand their exposure, and AI is eliminating the human judgment signals we’ve relied on to detect fraud.

Ransomware isn’t slowing down—it’s industrializing. A 49% year-over-year increase means attackers are succeeding more often, extracting higher ransoms, and encountering insufficient friction to change behavior. For every organization that refuses to pay, ten others quietly transfer funds. For every system hardened against attack, twenty vulnerable targets remain accessible. The economics favor attackers, and defensive investment isn’t keeping pace with offensive scaling.

Data breach disclosures are systematically underestimating impact. Conduent initially reported 4 million affected individuals in Texas. That number is now 15.4 million—a 285% increase as investigation continues. This isn’t an isolated incident. It’s a pattern: organizations disclose breaches based on preliminary investigation, regulatory deadlines force public notification before full scope is known, and affected populations discover months later that their data was compromised. The gap between “we think we know what happened” and “here’s the actual damage” erodes trust and delays protective action for victims.

AI-assisted fraud is crossing the authenticity threshold. Romance scams have always existed, but AI voice cloning and deepfake video eliminate the signals humans use to detect deception. A phone call sounds like your loved one. A video call shows their face moving naturally. The grammar in text messages is perfect. The emotional manipulation follows psychological patterns indistinguishable from genuine connection. Traditional fraud detection—“does this feel wrong?”—fails when AI generates interactions that feel completely right.

The organizations managing operational security effectively today aren’t just defending harder. They’re assuming breach is inevitable, measuring impact in hours not days, validating vendor security before contracts are signed not after breaches occur, and training employees that even perfect authenticity signals can’t be trusted when AI can fake everything.

At a Glance

• BlackFog’s 2025 State of Ransomware Report shows a 49% increase in attacks year-over-year—industrialization of ransomware continues.
• Conduent data breach affects 15.4 million people in Texas alone—govtech vendor compromise exposes Social Security numbers, medical data, and health insurance information across multiple states.
• AI-powered romance scams using deepfake video and voice cloning make fraud indistinguishable from legitimate communication—Valentine’s Day 2026 highlights the rising threat.


Story 1 — Ransomware Is Industrializing

What Happened

BlackFog released its 2025 State of Ransomware Report on February 12, 2026, documenting a 49% increase in ransomware attacks year-over-year globally. The report aggregates data from publicly disclosed and non-disclosed attacks, providing the most comprehensive view of ransomware trends across sectors and geographies.

Key findings from the report:

• Attack volume increased 49% from 2024 to 2025.
• Ransomware-as-a-Service (RaaS) models continue expanding, lowering barriers to entry for less sophisticated attackers while maintaining effectiveness.
• Double extortion tactics (encrypt systems AND threaten to leak stolen data) are now standard practice, not the exception.
• Average ransom demands increased 35% year-over-year.
• Healthcare, professional services, manufacturing, and government sectors experienced the highest attack rates.
• Initial access brokers sell compromised credentials and network access, creating a supply chain of attack infrastructure.

The report highlights specific incidents:

• Sedgwick Government Solutions, a claims administrator serving multiple U.S. federal agencies, confirmed a breach after the TridentLocker ransomware group publicly claimed responsibility for stealing 3.4 GB of sensitive data.
• The Play ransomware group targeted Garner Foods (Texas Pete hot sauce manufacturer) in January 2026.
• Akira ransomware compromised the legal firm Gorlick, Kravitz & Listhaus, allegedly exfiltrating 22 GB of client data including names and Social Security numbers.
• Qilin ransomware attacked Italian water-sports equipment manufacturer Cressi and Romanian oil pipeline operator Conpet, compromising over 1 TB of information including sensitive internal documents, personal information, and financial data.

These aren’t sophisticated nation-state operations targeting critical infrastructure. They’re opportunistic attacks against organizations with inadequate security controls, often exploiting known vulnerabilities that remain unpatched months or years after disclosure.

Why It Matters

A 49% year-over-year increase in ransomware attacks signals that defensive measures aren’t creating sufficient friction to deter attackers. The fundamental economics favor offense: attackers need one successful exploit to generate revenue, defenders must protect every potential entry point continuously, and the gap between “we patched critical systems” and “attackers found an unpatched system we didn’t know existed” determines who wins.

Ransomware has matured from individual hackers running custom operations to industrialized criminal enterprises operating at scale. The shift to Ransomware-as-a-Service means:

Technical sophistication is commoditized. Sophisticated ransomware variants are available for purchase or affiliate partnerships. Less skilled attackers can execute high-impact attacks without developing custom malware.

Attack infrastructure is professionalized. Initial access brokers specialize in compromising networks and selling access. Ransomware affiliates focus on data exfiltration and encryption. Negotiators handle victim communications. Cryptocurrency launderers move funds. Each role is specialized, efficient, and scalable.

Targets are selected strategically. Attackers aren’t randomly scanning the internet hoping to find vulnerabilities. They research potential victims, assess ability to pay, evaluate backup and recovery capabilities, and target organizations where encryption AND data theft create maximum leverage.

Double extortion changes the calculus. Traditional ransomware encrypted data and demanded payment for the decryption key. Organizations with good backups could restore systems without paying. Double extortion exfiltrates data first, then encrypts. Even if the organization restores from backups, attackers threaten to leak stolen data—customer records, financial information, proprietary research, employee data. This creates compliance exposure (GDPR and HIPAA violations), reputational damage, competitive harm, and legal liability that backups don’t address.

The 49% increase isn’t random variation. It’s evidence that attackers succeed more often than they fail, extract payments sufficient to fund continued operations, and face insufficient consequences to change behavior.

Operational Exposure

If your organization’s ransomware defense strategy relies on “we have backups” or “we’re not a high-value target,” you’re operating with assumptions that no longer match threat reality.

Backups don’t protect against double extortion. Attackers exfiltrate data before encrypting. Restoring systems from backup doesn’t prevent data leak. Organizations must defend against both encryption (operational disruption) and exfiltration (compliance/reputational damage).

“Not a high-value target” is obsolete thinking. Attackers don’t just target Fortune 500 companies. They target anyone who can pay. Small law firms hold client data subject to regulatory protections. Regional manufacturers can’t afford extended downtime. Municipal governments face public pressure to restore services. Healthcare providers must maintain patient care continuity. All are viable targets.

This affects: Finance (ransom payments, business interruption, regulatory fines), Operations (service disruption, recovery time, customer impact), Legal/Compliance (data breach notification, regulatory investigation, potential lawsuits), Reputation (customer trust, media coverage, competitive disadvantage), and Executive leadership (board accountability, shareholder impact, strategic distraction).

Who’s Winning

One mid-sized healthcare provider implemented layered ransomware defense in Q3 2025 after observing industry-wide attack escalation. Their approach:

Phase 1 (Weeks 1-4): Assumed breach—designed for containment

Traditional security model: prevent breach. Modern reality: assume breach will occur, design for rapid detection and containment.

They implemented network segmentation: isolated clinical systems from administrative systems, separated backup infrastructure from production networks, implemented zero-trust architecture (verify every access request, trust nothing by default), and deployed micro-segmentation (limit lateral movement if attacker gains access).

Result: If ransomware compromises one network segment, it can’t automatically spread to others. Containment is measured in hours, not days.

Phase 2 (Weeks 5-8): Built immutable backups

Traditional backup approaches: attackers delete or encrypt backups before triggering ransomware, rendering recovery impossible.

They implemented immutable backup architecture: Write-once-read-many (WORM) storage that cannot be modified or deleted after creation, air-gapped backups physically or logically separated from network, 3-2-1 backup strategy (3 copies of data, 2 different media types, 1 offsite), and regular recovery testing—not just backups, but validated restoration processes.

Result: Backups survive ransomware attacks. Recovery capability is verified, not assumed.

Phase 3 (Weeks 9-12): Implemented behavioral detection

Signature-based antivirus: detects known ransomware variants. Fails against new or customized malware.

They deployed behavioral analytics: Monitor for ransomware-indicative behaviors (rapid file encryption, unusual network traffic patterns, credential access anomalies, lateral movement attempts), automated response triggers (isolate affected systems, terminate suspicious processes, alert security team), and continuous monitoring with AI-powered anomaly detection.

Result: Detection shifted from “known bad signatures” to “anomalous behavior patterns,” catching ransomware variants that traditional antivirus misses.
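The shift from signatures to behavior can be sketched in a few lines. The following is a minimal, hypothetical illustration (not a production detector): it counts file-modification events per process over a sliding time window and flags any process whose write rate looks like mass encryption. The 60-second window and 100-write threshold are invented for illustration.

```python
from collections import defaultdict, deque

class RansomwareRateDetector:
    """Flag processes whose file-write rate suggests mass encryption."""

    def __init__(self, window_seconds=60, threshold=100):
        # Assumption: >100 file modifications inside any 60-second window
        # looks like mass encryption, not normal application behavior.
        self.window = window_seconds
        self.threshold = threshold
        self.events = defaultdict(deque)  # process name -> write timestamps

    def record_write(self, process, timestamp):
        """Record one file-modification event; return True if anomalous."""
        q = self.events[process]
        q.append(timestamp)
        while q and timestamp - q[0] > self.window:
            q.popleft()  # drop events outside the sliding window
        return len(q) > self.threshold

detector = RansomwareRateDetector()
# A normal process writes a few files per minute.
normal = [detector.record_write("indexer", t) for t in range(0, 60, 10)]
# A suspect process writes 150 files in 30 seconds.
burst = [detector.record_write("suspect", t * 0.2) for t in range(150)]
```

In practice this is one signal among many (network traffic, credential anomalies, lateral movement), and a real deployment would run in alert-only mode first while thresholds are tuned.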

Phase 4 (Weeks 13-16): Trained staff and tested response

Security controls fail if humans don’t know how to respond.

They conducted tabletop exercises: “Ransomware detected on 12 workstations Friday 5 PM. Walk me through next 4 hours.” Identified gaps in incident response procedures, unclear decision authority, communication breakdowns. Built runbooks: Step-by-step instructions for containment, investigation, recovery, communication. Assigned roles: Who makes decision to isolate systems? Who communicates with affected departments? Who coordinates with external incident response team?

They conducted quarterly red team exercises: Simulate ransomware attacks, test detection and response, measure time to containment, improve based on results.

Result: When real ransomware incident occurred (phishing email led to initial compromise, attacker attempted lateral movement), behavioral detection flagged anomalous activity within 18 minutes, automated response isolated affected systems within 35 minutes, security team validated containment within 2 hours, and systems restored from immutable backups within 8 hours. No ransom paid. No data exfiltration. Downtime limited to affected segment only.

Do This Next

Week 1: Assess your ransomware defense posture

Conduct honest assessment of current capabilities: Do you have network segmentation that limits lateral movement? Are backups immutable and air-gapped? Have you tested backup restoration recently (within 90 days)? Do you have behavioral detection or only signature-based antivirus? Do you have incident response runbooks and trained staff? Have you conducted tabletop or red team exercises?

If you answered “no” to three or more, your ransomware defense is insufficient against the current threat landscape.

Week 2-4: Implement network segmentation

Highest-impact quick win: limit blast radius.

Segment your network: Separate critical systems from general network, isolate backup infrastructure, implement zero-trust architecture for privileged access, deploy micro-segmentation where feasible.

Start with highest-value assets: Patient records, financial systems, intellectual property, customer databases. Build segmentation around what you most need to protect.
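The zero-trust posture described above reduces to a simple rule: deny every flow unless it is explicitly allowed. A minimal sketch, assuming a hypothetical policy table (the segment and identity names below are illustrative, not a real topology):

```python
# Default-deny policy table: only explicitly allowed
# (source segment, destination segment, identity) flows pass.
ALLOWED_FLOWS = {
    ("admin_workstations", "patient_records", "clinician"),
    ("backup_service", "backup_vault", "backup_svc"),
}

def is_allowed(src_segment, dst_segment, identity):
    """Zero-trust check: verify every request; trust nothing by default."""
    return (src_segment, dst_segment, identity) in ALLOWED_FLOWS
```

The design point is the default: anything not in the table is denied, so a compromised workstation cannot reach the backup vault even if no rule explicitly forbids it.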

Week 5-8: Build immutable backup architecture

Second highest-impact: ensure recovery capability survives attack.

Implement immutable backups: Deploy write-once-read-many storage, create air-gapped backups (physically or logically separated), follow 3-2-1 backup strategy, and test restoration monthly (not just backup, but actual recovery).

Validate restoration time: How long to restore critical systems? Is that acceptable for business continuity? If not, optimize.
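The 3-2-1 rule above can be checked mechanically against a backup inventory. A sketch under an assumed inventory format (the dict keys and media names are hypothetical):

```python
def satisfies_3_2_1(copies):
    """Check the 3-2-1 rule: >= 3 copies, >= 2 media types, >= 1 offsite.

    `copies` is a list of dicts in a hypothetical inventory format:
    {"media": "disk", "offsite": False}.
    """
    enough_copies = len(copies) >= 3
    enough_media = len({c["media"] for c in copies}) >= 2
    has_offsite = any(c["offsite"] for c in copies)
    return enough_copies and enough_media and has_offsite

inventory = [
    {"media": "disk", "offsite": False},        # production array
    {"media": "tape", "offsite": False},        # local tape copy
    {"media": "object_lock", "offsite": True},  # immutable cloud copy
]
```

Note that passing this check says nothing about restoration time; the monthly recovery test above is what validates that.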

Week 9-12: Deploy behavioral detection

Move beyond signature-based antivirus.

Implement behavioral analytics: Monitor for ransomware behaviors (rapid encryption, unusual traffic, credential abuse, lateral movement), configure automated response (isolate systems, terminate processes, alert security team), and integrate with SIEM or security operations center.

Tune over time: Initial deployment will generate false positives. Refine detection logic based on operational reality.

Week 13-16: Train staff and test response

Technology without trained humans fails under pressure.

Conduct tabletop exercises: Simulate ransomware scenarios, walk through response procedures, identify gaps in process, authority, communication. Build runbooks with specific steps, assigned roles, decision criteria, escalation paths.

Test quarterly: Red team exercises, simulated attacks, measured response. Improve based on results.

Decision tree: If you have no network segmentation AND no immutable backups, start there—highest impact. If you have segmentation and backups but no behavioral detection, deploy detection next. If you have all three but haven’t tested response in 6 months, run exercises—untested plans fail when needed.
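The decision tree above maps directly to a small prioritization function. A sketch; the capability names come from the text, and treating either missing foundation as the starting point is an interpretation of the first branch:

```python
def next_priority(has_segmentation, has_immutable_backups,
                  has_behavioral_detection, months_since_last_test):
    """Return the next defensive investment, per the decision tree above."""
    if not (has_segmentation and has_immutable_backups):
        # Either foundation missing: start there — highest impact.
        return "segmentation and immutable backups"
    if not has_behavioral_detection:
        return "behavioral detection"
    if months_since_last_test is None or months_since_last_test > 6:
        return "response exercises"
    return "maintain and re-test quarterly"
```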

Script for executive leadership: “Ransomware attacks increased 49% last year. We’re seeing double extortion—attackers steal data before encrypting, so backups alone don’t protect us. I need budget approval for layered defense: network segmentation to limit blast radius, immutable backups that survive attacks, behavioral detection to catch what antivirus misses, and quarterly testing to ensure our response actually works. The alternative is paying ransom, suffering extended downtime, and facing regulatory fines when—not if—we’re attacked.”

One Key Risk

You implement aggressive network segmentation and behavioral detection. False positives trigger automated isolation of systems that aren’t actually compromised. Operations teams experience service disruptions from overly sensitive security controls. Business units complain that security is blocking legitimate work. Leadership pressures you to relax controls to reduce operational friction.

Mitigation: Start segmentation with highest-value assets where operational disruption tolerance is higher in exchange for better protection. Tune behavioral detection in “alert only” mode for 30-60 days before enabling automated response—learn what normal looks like before blocking. Build exception processes: when legitimate activity triggers security controls, document it, adjust detection logic, communicate changes. Measure and report: “Security controls blocked X ransomware attempts this quarter, caused Y false positive incidents. Here’s how we’re reducing Y while maintaining protection against X.” Show value, not just friction.

Bottom Line

Ransomware attacks increased 49% year-over-year because attackers succeed more often than they fail, economics favor offense, and defensive investment isn’t keeping pace with offensive scaling. Organizations still relying on “we have backups” or “we’re not a target” will discover that backups don’t protect against double extortion and everyone who can pay is a target. Layered defense—network segmentation to contain breaches, immutable backups to ensure recovery, behavioral detection to catch novel attacks, and tested incident response—is the baseline for surviving ransomware in 2026. Organizations that build these capabilities before attacks occur will recover in hours. Organizations that build them after being compromised will pay ransom, suffer extended downtime, and face regulatory consequences.

Source: https://blackfog.com/2025-state-of-ransomware-report/


Story 2 — Third-Party Breaches Cascade Across Customer Bases

What Happened

Conduent, a government technology services provider managing Medicaid and CHIP programs across multiple U.S. states, disclosed that its December 2024 data breach now affects 15.4 million people in Texas alone—more than triple the company’s initial February 5, 2026 disclosure of 4 million affected individuals. The breach compromised Social Security numbers, driver’s license information, dates of birth, medical information, health insurance details, and Medicaid/CHIP identification numbers.

The attack timeline reveals systematic escalation:

• December 2024: Conduent discovers unauthorized access to its systems.
• January–February 2025: Investigation continues to determine scope of compromise.
• February 5, 2026: Conduent notifies Texas Health and Human Services Commission (HHSC) that approximately 4 million individuals were affected.
• February 13, 2026: Revised notification increases the count to 15.4 million—a 285% increase in eight days.

The breach affects not just Texas. Conduent provides similar services to multiple states including California, New Jersey, Massachusetts, Michigan, and others. Each state is conducting independent breach assessment to determine how many of their residents were affected. The total national impact is still being calculated, but early estimates suggest 20+ million people across all affected states.

Conduent’s role as a government contractor processing Medicaid and CHIP enrollment, eligibility determination, and benefits administration means the compromised data includes some of the most sensitive personal and medical information held by government systems. The affected population includes low-income families, children, pregnant women, elderly individuals, and people with disabilities—populations already vulnerable to identity theft and fraud.

Why It Matters

Third-party vendor breaches represent cascading risk that organizations often discover too late. Conduent processes sensitive government data on behalf of multiple states. When Conduent was breached, 15.4 million Texans (and millions more in other states) had their data compromised—not because Texas or other states had inadequate security, but because their vendor did.

This creates a fundamental accountability gap: State agencies are responsible for protecting constituent data. They implement security controls, conduct audits, train staff, monitor access. But if vendor processing data on their behalf is compromised, all those controls become irrelevant. The data is exposed anyway. The state agencies face regulatory consequences, constituent backlash, and remediation costs for a breach they didn’t directly cause but are accountable for.

The pattern repeats across sectors. Organizations outsource functions—payroll processing, benefits administration, customer service, IT management—to vendors who aggregate data from hundreds or thousands of clients. When vendor is breached, all clients are affected simultaneously. One compromised vendor becomes hundreds of cascading breaches.

The 285% increase in disclosed impact (from 4 million to 15.4 million in eight days) reveals a systemic problem with breach investigation and disclosure: Organizations disclose based on preliminary findings to meet regulatory deadlines, initial assessments systematically underestimate scope, affected populations learn months after breach that their data was compromised, and the gap between “we think we know what happened” and “here’s what actually happened” erodes trust and delays protective action.

For individuals affected: they can’t freeze credit, monitor accounts, or take protective measures until they know they’re affected. When the disclosed count jumped from 4 million to 15.4 million, 11.4 million people received delayed notification—a window during which attackers could exploit stolen data for fraud without victims knowing to watch for it.

Operational Exposure

If your organization depends on third-party vendors for processing sensitive data—payroll, benefits, customer service, healthcare administration, financial services—you own the risk of vendor breach even though you don’t control vendor security.

This affects: Compliance and legal (you’re responsible for data protection even when vendor is breached—GDPR, HIPAA, state data breach laws hold you accountable), Customer/constituent trust (your customers don’t distinguish between “you were breached” and “your vendor was breached”—they trusted you with data, you’re accountable regardless), Financial impact (breach notification costs, credit monitoring for affected individuals, regulatory fines, potential lawsuits), Operational continuity (if vendor breach forces service disruption while investigating scope, your operations are impacted), and Reputation (media reports “State X data breach affects 15 million” not “State X’s vendor was breached”).

Who’s Winning

One state agency managing social services programs implemented vendor risk management program in Q1 2025 after observing escalating third-party breaches across government sector. Their approach:

Phase 1 (Weeks 1-4): Inventory vendor relationships and data exposure

They discovered 200+ vendors with access to sensitive constituent data. Many had been contracted years ago without recent security review.

For each vendor, documented: What data does vendor access? (PII, financial information, medical records, benefit details), What’s the vendor’s role? (processing, storage, transmission, analysis), How many constituents are affected if vendor is breached?, What’s our contractual liability?, What security requirements did we include in contract?, When was last security audit?

Result: Identified 15 high-risk vendors (access to PII for 100,000+ constituents, processing medical or financial data, contracts lacking security requirements).

Phase 2 (Weeks 5-8): Implemented vendor security requirements

For new vendor contracts and renewals, mandated: Security controls (encryption at rest and in transit, multi-factor authentication, network segmentation, immutable backups, behavioral detection), Compliance certifications (SOC 2 Type II, HITRUST for healthcare data, ISO 27001), Breach notification timelines (vendor must notify within 24 hours of discovery, not weeks), Right to audit (agency can conduct security audits or require third-party assessment), Incident response coordination (if vendor is breached, agency participates in investigation and notification), Financial liability (vendor bears costs of breach notification, credit monitoring, regulatory fines if caused by vendor negligence).

For existing high-risk vendors: Conducted security assessments within 90 days or contract terminated. Required remediation of identified gaps within 6 months. Annual re-certification mandatory.

Phase 3 (Weeks 9-12): Built continuous monitoring

Don’t rely on annual audits—monitor vendor security posture continuously.

Implemented: Security questionnaires updated quarterly, automated monitoring of vendor security incidents (public breach disclosures, security news), vendor risk scoring based on industry, data sensitivity, breach history, and cyber insurance status.

High-risk vendors flagged for enhanced scrutiny: more frequent audits, additional contractual protections, contingency planning for vendor failure.
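The risk scoring described above could look like the following sketch. The weights, thresholds, and the vendor dict schema are invented for illustration; a real program would calibrate them against its own portfolio:

```python
def vendor_risk_score(vendor):
    """Score a vendor 0-100 from the factors named above.

    Keys (hypothetical inventory schema):
      data_sensitivity: 0 none, 1 PII, 2 financial, 3 medical
      records_exposed:  individuals whose data the vendor holds
      prior_breaches:   count of known breaches
      insured:          carries cyber insurance
    """
    score = vendor["data_sensitivity"] * 15            # up to 45
    if vendor["records_exposed"] >= 100_000:
        score += 25                                    # large blast radius
    score += min(vendor["prior_breaches"], 2) * 10     # up to 20, capped
    if not vendor["insured"]:
        score += 10
    return score

def tier(score):
    """Bucket a score into the escalation tiers used for enhanced scrutiny."""
    return "high" if score >= 60 else "medium" if score >= 30 else "low"
```

A Medicaid-scale processor with medical data and a prior breach lands in the high tier; a small vendor holding PII for a few thousand people stays low.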

Phase 4 (Weeks 13-16): Prepared for vendor breach response

Assume vendor breach will occur. Prepare response in advance.

Built playbooks: “Vendor notifies us of breach. What happens in next 24 hours?” Defined roles (who leads investigation, who communicates with vendor, who notifies affected constituents, who coordinates with legal/compliance), established communication templates (vendor breach notification to constituents, media statement, regulatory filing), and prepared technical response (isolate vendor access, assess data exposure, determine if additional access should be revoked, coordinate forensic investigation).

Tested via tabletop: “Major vendor processing benefits data for 2 million constituents reports breach. They don’t know scope yet. Walk me through next 72 hours.”

Result: When smaller vendor (processing data for 50,000 constituents) reported potential breach in Q4 2025, agency activated response playbook within 2 hours. They isolated vendor access to prevent further exposure, participated in forensic investigation to determine scope quickly (not weeks later), notified affected constituents within 5 days (not months), and provided credit monitoring at vendor’s expense per contract terms. Contrast with Conduent breach affecting agencies without vendor risk management: notification delay, scope uncertainty, constituent confusion, extended exposure window.

Do This Next

Week 1: Inventory vendor data access

Catalog every vendor with access to sensitive data: What data do they access? How many customers/employees/constituents affected if breached? What’s your contractual liability? What security requirements are in contracts?

Prioritize high-risk vendors: Large data exposure, sensitive data types (PII, financial, medical), critical business functions, contracts lacking security requirements.

Week 2-4: Assess vendor security posture

For high-risk vendors, conduct security assessment: Request SOC 2, ISO 27001, or equivalent certifications. Review their breach history. Assess security controls (encryption, MFA, backup architecture, incident response capability). Evaluate breach notification procedures. Determine cyber insurance coverage and limits.

Red flags requiring immediate action: Vendor refuses security assessment. No relevant security certifications. History of breaches. Inadequate incident response capability. No cyber insurance.

Week 5-8: Strengthen vendor contracts

For new contracts and renewals, include: Specific security requirements (not vague “industry standard”), breach notification timeline (24-48 hours, not “reasonable time”), right to audit (you can verify security controls), incident response coordination (you participate in investigation), financial liability (vendor pays notification and remediation costs if breach caused by their negligence).

For existing high-risk vendors: Negotiate contract amendments to add security requirements. If vendor refuses and risk is significant, plan migration to alternative vendor.

Week 9-12: Build continuous monitoring and response capability

Implement vendor risk monitoring: Quarterly security questionnaires. Automated monitoring of vendor breach disclosures. Annual security audits for high-risk vendors. Vendor risk scoring with escalation triggers.

Prepare breach response playbooks: “Vendor reports breach. What happens next 24/48/72 hours?” Define roles, communication templates, technical response procedures. Test via tabletop exercises.

Decision tree: If you have fewer than 10 vendors with sensitive data access, manual assessment sufficient. If you have 10-50 vendors, implement structured vendor risk program. If you have 50+ vendors, deploy vendor risk management platform with automated monitoring and scoring.
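The vendor-count thresholds above can be encoded directly. A trivial sketch; the return strings simply name the three program shapes from the decision tree:

```python
def vendor_program(n_sensitive_vendors):
    """Map sensitive-vendor count to program shape, per the tree above."""
    if n_sensitive_vendors < 10:
        return "manual assessment"
    if n_sensitive_vendors <= 50:
        return "structured vendor risk program"
    return "risk management platform with automated monitoring"
```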

Script for procurement and legal teams: “We’re strengthening vendor security requirements. All new contracts must include: 24-hour breach notification requirement, right to audit security controls, financial liability for vendor-caused breaches, and requirement for SOC 2 Type II or equivalent. Existing high-risk vendor contracts need amendments within 6 months. This protects us from liability when vendors are breached—and based on current trends, vendor breaches are inevitable.”

One Key Risk

You implement aggressive vendor security requirements and audit processes. Vendors complain requirements are too burdensome. Smaller vendors can’t meet certification requirements. Procurement timelines extend because negotiating security terms takes longer. Business units complain that security is blocking vendor selection and slowing critical projects.

Mitigation: Tier requirements by risk. Low-risk vendors (no sensitive data access) get minimal requirements. Medium-risk vendors get SOC 2 or equivalent. High-risk vendors (PII, financial, medical data for large populations) get full requirements including right to audit. Build approved vendor lists: vendors that already meet security requirements, pre-vetted for faster procurement. Provide vendor security resources: “Here’s how to achieve SOC 2 certification” guidance for smaller vendors who want to work with you but need help meeting requirements. Communicate business value: “These requirements protect us from liability, reduce breach notification costs, and maintain constituent trust. The Conduent breach affected 15 million people because security requirements were inadequate. We’re not making that mistake.”

Bottom Line

Third-party vendor breaches cascade across customer bases. When Conduent was breached, 15.4 million Texans (and millions more nationally) had data compromised through a vendor they never directly interacted with. Organizations are responsible for protecting data even when vendor is breached—regulatory accountability doesn’t pause because “it was the vendor’s fault.” Vendor risk management isn’t optional: inventory vendor data access, assess vendor security posture, strengthen contracts with specific security requirements and breach notification timelines, build continuous monitoring and breach response capabilities. Organizations that manage vendor risk proactively will limit exposure when vendors are inevitably breached. Organizations that assume vendors are secure will discover their data is compromised, their constituents are affected, and they’re liable for breaches they didn’t directly cause.

Source: https://www.cybersecuritydive.com/news/conduent-data-breach-texas/738721/


Story 3 — AI-Powered Romance Scams Cross Authenticity Threshold

What Happened

Valentine’s Day 2026 coincides with rising reports of AI-powered romance scams that use deepfake video, voice cloning, and text generation to create fraudulent relationships indistinguishable from legitimate connections. The Federal Trade Commission and Internet Crime Complaint Center report that romance scams cost victims $1.14 billion in 2024 (most recent complete data), a figure expected to increase significantly in 2025-2026 as AI capabilities become more sophisticated and accessible.

Traditional romance scams relied on text-based communication, stolen photos, and social engineering. Victims often detected fraud through inconsistencies: poor grammar, generic responses, refusal to video chat, requests for money early in the relationship. These signals allowed many potential victims to recognize scams before significant financial loss.

AI eliminates those signals. Modern romance scams now feature: AI-generated profile photos (synthetic faces that don’t exist and never appear in reverse image search); AI-written messages with perfect grammar, natural conversational flow, and emotionally sophisticated manipulation; voice cloning that synthesizes a realistic voice from samples scraped from social media or voice messages; and deepfake video enabling “live” video calls where the scammer’s face is replaced in real time with a synthetic face matching the profile photos.

Recent cases highlight the evolving threat: one victim video chatted with a scammer multiple times, saw the person’s face move naturally, heard a voice with an appropriate accent and emotional tone, developed a relationship over three months, and transferred $43,000 in “emergency” funds before discovering the entire relationship was fabricated—the person in the video calls never existed, the voice was cloned from a legitimate person whose identity was stolen, and AI generated all written communications.

The psychological impact extends beyond financial loss. Victims describe grief similar to losing a real relationship—because from their perspective, it was real. The emotional connection was genuine even though the other person was not.

Why It Matters

Romance scams have always exploited human psychology—loneliness, desire for connection, trust. What’s changed with AI is the elimination of the detection signals humans rely on to identify fraud.

Traditional fraud detection asks: Does the grammar seem off? Do the photos appear inconsistent? Will the person video chat? Are the requests for money suspicious? If multiple red flags appear, it might be fraud.

AI-assisted fraud: Grammar is perfect (AI-generated text). Photos are consistent (a synthetic face generated once, used throughout). The person video chats freely (real-time deepfake video). Requests for money follow psychological manipulation patterns refined through analysis of thousands of successful scams. The red flags disappear.

This represents a fundamental shift: the authenticity signals we’ve relied on to distinguish real from fake—the voice sounds right, the face moves naturally, the writing feels personal, the emotional responses seem genuine—can now be synthesized by AI. The trust layer human relationships depend on is collapsing because AI can fake everything we use to verify authenticity.

The implications extend beyond romance scams. Corporate email compromise: a CEO deepfake video calls the CFO requesting an urgent wire transfer; the voice sounds right, the face looks right, the request follows a legitimate business pattern, and the fraud succeeds because all authenticity signals match. Family emergency scams: a “grandchild” calls saying they’ve been arrested and need bail money immediately; the voice is a perfect match, the emotional distress sounds genuine, and the grandparent sends money to a scammer. Synthetic identity theft: AI creates an entire fake persona—synthetic face, fake voice, generated employment history, artificial social media presence—that applies for credit, opens accounts, accumulates debt, and disappears; no real person exists to hold accountable.

The traditional human judgment approach—“I’ll know fraud when I see it, because something will feel wrong”—fails when AI generates interactions that feel completely right.

Operational Exposure

If your organization’s fraud detection relies on human judgment to identify suspicious communications, voice calls, or video interactions, you’re defending against pre-AI threats. Modern AI-assisted fraud targets individuals and organizations with synthetic communications that pass all traditional authenticity checks.

This affects: Finance (wire transfer fraud, invoice manipulation, vendor impersonation), HR (fake candidates with synthetic identities, employment verification fraud), Customer service (account takeover via social engineering with cloned voice), Executive leadership (CEO/CFO impersonation for fraud), and Security (phishing with AI-generated content, social engineering attacks indistinguishable from legitimate requests).

Who’s Winning

One financial services firm implemented multi-layer authentication for high-risk transactions in Q4 2025 after observing a rise in AI-assisted fraud. Their approach:

Phase 1 (Weeks 1-4): Identified high-risk transaction types

Not all transactions require the same verification level. Focus the strongest controls on the highest-risk activities.

High-risk transactions: Wire transfers above $50,000, vendor payment changes (new bank account, new payment method), executive-initiated urgent transactions, account access from new devices or locations.

Medium-risk: Wire transfers $10,000-$50,000, password resets for privileged accounts, changes to beneficiary information.

Low-risk: Standard bill payments, routine transactions under $10,000, normal account access from recognized devices.
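The Phase 1 tiers above can be sketched as a simple classifier. The dollar thresholds ($50,000 and $10,000) come from the text; the field names and the exact flag set are assumptions made for the example.

```python
# Illustrative classifier for the transaction risk tiers described in Phase 1.
# Thresholds mirror the text; transaction fields are assumed for the sketch.

from dataclasses import dataclass

@dataclass
class Transaction:
    amount: float
    is_wire: bool = False
    vendor_payment_change: bool = False   # new bank account or payment method
    executive_urgent: bool = False        # executive-initiated urgent transaction
    new_device_or_location: bool = False
    privileged_password_reset: bool = False
    beneficiary_change: bool = False

def risk_tier(t: Transaction) -> str:
    """Map a transaction to the high/medium/low tiers from Phase 1."""
    if ((t.is_wire and t.amount > 50_000) or t.vendor_payment_change
            or t.executive_urgent or t.new_device_or_location):
        return "high"
    if ((t.is_wire and 10_000 <= t.amount <= 50_000)
            or t.privileged_password_reset or t.beneficiary_change):
        return "medium"
    return "low"
```

Note the design choice implied by the text: a vendor payment change is high-risk regardless of amount, because payment redirection is where vendor-impersonation fraud lands.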

Phase 2 (Weeks 5-8): Implemented challenge-based authentication

Traditional authentication: username, password, maybe an MFA token. It assumes the person providing credentials is who they claim to be.

Challenge-based authentication: tests whether the person actually is who they claim to be by asking for information only the real person would know—and that isn’t publicly available or easily discovered.

For high-risk transactions: multi-channel verification is required. If the CFO requests a wire transfer via email, the finance team calls the CFO at a known phone number (not the number provided in the email), confirms the request verbally, and requires secondary approval from another executive. If the call is video, ask a verification question only the real CFO would know: “What was discussed in the last board meeting about Q3 projections?” Generic knowledge (public information, easily researched) doesn’t pass; specific recent context only the real person knows confirms identity.

For medium-risk transactions: Email and one additional verification method (phone call, verification code to registered device, in-person confirmation if possible).
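The core Phase 2 rule—every required check must pass independently before a transaction is approved—can be sketched as a set comparison. The check names and the per-tier policy table are assumptions; the two-channel-plus-secondary-approval shape follows the text.

```python
# Sketch of multi-channel verification: a request is approved only when every
# check required for its risk tier has independently passed. Check names are
# illustrative; the policy mirrors the Phase 2 description.

REQUIRED_CHECKS = {
    "high": {"callback_to_known_number", "challenge_question", "secondary_approval"},
    "medium": {"callback_to_known_number"},   # one additional channel beyond email
    "low": set(),
}

def approve(tier: str, passed_checks: set) -> bool:
    """Approve only if the tier's required checks are a subset of what passed."""
    return REQUIRED_CHECKS[tier] <= passed_checks
```

The point of the subset test is that no single signal is sufficient: a deepfake that survives the video call still fails the callback to a known number or the secondary approval.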

Phase 3 (Weeks 9-12): Trained employees on AI-assisted fraud

Technology controls aren’t sufficient if humans can be socially engineered to bypass them.

Training covered: AI can fake voice, video, and text—perfect authenticity doesn’t prove legitimacy. Urgency is a manipulation tactic: “The CEO needs this wire transfer immediately, no time for normal process” is a red flag, not a reason to skip verification. Verify through a separate channel: if a request comes via email, verify via phone to a known number; if it comes via phone, verify via email or in person. Never use contact information provided in a suspicious request. Trust process, not gut feeling: “this feels legitimate” isn’t sufficient for high-risk transactions—follow verification procedures even when it feels unnecessary.

Built a reporting culture: employees who flag suspicious requests get praised, not criticized, for false positives. “Better to verify and be wrong than skip verification and lose $100,000.”

Phase 4 (Weeks 13-16): Tested controls via simulations

Hired an external firm to conduct simulated AI-assisted fraud attempts: a CEO deepfake video call requesting an urgent wire transfer, a vendor email with a cloned-voice follow-up requesting payment to a new account, and an HR candidate with a synthetic identity and AI-generated interview responses.

Measured: Did employees follow verification procedures? Did controls catch fraudulent requests? How long did it take to detect the fraud attempt? Where did the process break down?

Result: After 9 months, the firm blocked 12 actual fraud attempts (CEO impersonation, vendor payment fraud, suspicious wire transfer requests). All were caught by multi-channel verification requirements before funds were transferred. Training reduced susceptibility: employees flagged urgent requests as suspicious, followed verification procedures despite pressure, and reported incidents for investigation. Key success factor: the firm assumed AI can fake any single authentication signal and required multiple independent verification channels for high-risk transactions.

Do This Next

Week 1: Identify high-risk communications and transactions

Catalog activities where AI-assisted fraud creates significant exposure: Wire transfers and financial transactions, executive communications requesting urgent action, vendor/supplier payment changes, password resets for privileged accounts, hiring decisions based on remote interviews.

Assess current controls: What’s required to verify identity? What’s required to authorize action? Could AI-assisted fraud bypass current controls?

Weeks 2-4: Implement multi-channel verification for high-risk activities

Don’t rely on single authentication signal—AI can fake any one signal.

For high-risk transactions: require verification through an independent channel. If the request comes via email, verify via phone to a known number. If it comes via phone, verify via a separate communication channel. For executive communications requesting urgent financial action, require secondary approval from another executive who independently verifies the request. Never use contact information provided in a suspicious request—always use known contact information from verified sources.

Build verification questions: ask for information only the legitimate person would know and that isn’t publicly available—recent specific events, internal context, shared history.

Weeks 3-6: Train employees on AI-assisted fraud

Conduct training on AI fraud capabilities: AI can fake voice, video, text perfectly. Authenticity signals are no longer reliable. Urgency is manipulation tactic—legitimate requests can wait for verification. Trust process, not gut feeling—follow verification procedures even when request seems legitimate.

Build reporting culture: Employees who flag suspicious requests are protecting organization. False positives are acceptable. Bypassing verification to “not waste time” is not.

Share examples: Show deepfake videos, play voice cloning samples, demonstrate how convincing AI-generated fraud can be. Make threat real, not abstract.

Weeks 7-8: Test controls via simulation

Conduct internal simulations or hire external firm: Simulate CEO requesting urgent wire transfer via deepfake video, vendor requesting payment to new account with cloned voice follow-up, HR candidate with synthetic identity and AI-generated responses.

Measure: Do employees follow verification procedures? Do controls catch fraud? Where does process break down? What training needs reinforcement?

Iterate based on results: Strengthen controls where simulations succeeded. Reinforce training where employees bypassed procedures.
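The simulation measurements above reduce to a few ratios worth tracking across rounds. A minimal scorer, assuming a simple per-attempt record (the field names are illustrative):

```python
# Hypothetical scorer for simulated fraud attempts: for each attempt, record
# whether verification procedures were followed and whether it was caught,
# then compute the compliance and detection rates the text says to measure.

def score_simulations(results: list) -> dict:
    n = len(results)
    followed = sum(r["procedure_followed"] for r in results)
    caught = sum(r["caught"] for r in results)
    return {
        "attempts": n,
        "procedure_compliance": followed / n,
        "detection_rate": caught / n,
        # the process breakdowns: attempts that succeeded because
        # verification was skipped and nothing else caught them
        "process_breakdowns": [r["scenario"] for r in results
                               if not r["procedure_followed"] and not r["caught"]],
    }
```

Re-running the same scorer after each training cycle shows whether compliance and detection are actually improving, rather than relying on impressions.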

Decision tree: If you handle high-value financial transactions OR executive communications control significant resources, implement multi-channel verification immediately. If fraud risk is lower, implement for highest-risk activities first, expand based on threat assessment. If you have no verification requirements beyond single-channel authentication, you’re vulnerable to AI-assisted fraud.

Script for training kickoff: “AI can now fake voice, video, and text perfectly. The CEO video call requesting urgent wire transfer might not be the real CEO. The vendor email with voice follow-up might be fraudulent. We’re implementing verification procedures for high-risk transactions: every wire transfer above $50,000 requires verification through independent channel, every urgent executive request requires secondary approval, every suspicious communication gets flagged and investigated. This feels like extra work, but it protects us from fraud that’s becoming indistinguishable from legitimate communication. If something feels urgent but doesn’t follow process, that’s exactly when you should follow process most carefully.”

One Key Risk

You implement multi-channel verification for high-risk transactions. Executives complain that verification requirements slow down urgent business. The finance team experiences increased workload verifying requests. Business units are frustrated by approval delays. Leadership pressures you to streamline the process because “we’re losing deals by being too slow.”

Mitigation: Tier verification by risk and amount. Very high-risk transactions (over $100,000, new vendors, unusual requests) get full multi-channel verification. Medium-risk get streamlined verification. Low-risk get minimal controls. Build fast-path verification for legitimate urgent transactions: pre-approved vendors, executive verification codes that rotate weekly, phone verification that takes 2 minutes not 2 hours. Measure and communicate value: “Multi-channel verification blocked 12 fraud attempts totaling $780,000 this quarter. Average verification time: 8 minutes per transaction. Time cost: 4 hours per quarter. Fraud prevented: $780,000. ROI is clear.” Make verification easy for legitimate transactions, hard for fraud.
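The ROI arithmetic quoted above checks out, and is worth showing explicitly. The transaction count is implied by the quoted figures (4 hours at 8 minutes each), and the loaded hourly staff rate is an assumption added for the dollar comparison.

```python
# Checking the quarterly ROI figures quoted in the mitigation: $780,000 in
# blocked fraud vs. the time cost of verification. The hourly rate and the
# implied transaction count are assumptions layered on the quoted numbers.

fraud_prevented = 780_000          # dollars blocked in the quoted quarter
verification_minutes = 8           # average per verified transaction (quoted)
transactions = 30                  # implied: 4 hours / 8 minutes each
time_cost_hours = transactions * verification_minutes / 60

hourly_rate = 150                  # assumed loaded cost of staff time
time_cost_dollars = time_cost_hours * hourly_rate
roi_multiple = fraud_prevented / time_cost_dollars
```

Even with a generous loaded rate, the verification time costs hundreds of dollars per quarter against hundreds of thousands in prevented fraud, which is the argument to put in front of leadership pushing to streamline.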

Bottom Line

AI has crossed the authenticity threshold. Voice can be cloned, video can be faked, text can be generated—all perfectly, in real-time. Traditional fraud detection signals (person sounds right, face looks right, communication feels genuine) are no longer reliable. Romance scams highlight this evolution: victims video chat with scammers multiple times, hear perfect voice, see natural face movements, develop real emotional connections—with people who never existed. The implications extend far beyond romance scams: corporate email compromise, vendor fraud, synthetic identity theft, social engineering attacks at scale. Organizations that rely on human judgment to detect fraud (“I’ll know something’s wrong when I see it”) will be defeated by AI-generated fraud that looks, sounds, and feels completely legitimate. Multi-channel verification—independent authentication through separate channels, challenge questions only real person knows, secondary approval for high-risk actions—is the baseline for fraud prevention when AI can fake any single signal. Organizations that implement these controls before fraud occurs will block attacks. Organizations that trust authenticity signals will transfer funds to fraudsters who never needed to be authentic.

Source: https://www.ftc.gov/news-events/data-visualizations/data-spotlight/2025/02/romance-scammers-favorite-lies-exposed


The Decision You Own

Pick one security gap to close in the next 30 days:

(A) Ransomware defense architecture — Implement network segmentation to limit lateral movement, build immutable backups with validated restoration procedures, deploy behavioral detection that catches novel attacks, train staff and test incident response quarterly. Reduce time to containment from days to hours. Reduce recovery time from weeks to days. Refuse to accept that “when we’re hit” means paying ransom and suffering extended downtime.

(B) Vendor risk management program — Inventory vendors with sensitive data access, assess vendor security posture through audits and certifications, strengthen contracts with specific security requirements and breach notification timelines, build continuous monitoring of vendor risk, prepare breach response playbooks assuming vendor breach will occur. Own the risk you’re accountable for even though you don’t control vendor security.

(C) Multi-channel verification for high-risk transactions — Identify high-risk communications (wire transfers, executive requests, vendor changes), implement verification through independent channels (don’t trust email alone, verify via known phone number), train employees that AI can fake voice, video, text perfectly, test controls via simulated fraud attempts. Stop relying on “I’ll know fraud when I see it” when AI can fake everything that makes communication seem legitimate.

The gap between “we think we’re secure” and “we’re actually secure” is where breaches happen, vendor compromises cascade, and fraud succeeds. Organizations that assume their controls are sufficient will discover they’re not when attacks occur. Organizations that test, measure, and continuously improve their security will survive the industrialization of ransomware, third-party breaches, and AI-assisted fraud.


What’s Actually Changing

Attack volume is increasing faster than defensive capacity can scale. Ransomware attacks up 49% year-over-year. Attackers are succeeding, extracting payment, and funding continued operations with insufficient friction to change behavior.

Third-party breaches are cascading across customer bases before scope is fully understood. Initial disclosures systematically underestimate impact. Conduent’s breach went from 4 million to 15.4 million affected individuals in eight days. Organizations are accountable for vendor breaches they didn’t directly cause and couldn’t directly prevent.

AI is collapsing the authenticity threshold. Voice, video, text, emotional manipulation—all can be synthesized perfectly. The signals humans use to detect fraud are no longer reliable. Trust mechanisms that worked for decades are failing when AI can fake everything.

The organizations managing these threats effectively aren’t defending harder with the same approaches. They’re assuming breach is inevitable, measuring vendor risk as existential exposure, and implementing multi-channel verification because single-channel authentication can’t survive AI-generated fraud. They’re testing their defenses quarterly, measuring time to containment in hours not days, and accepting that perfect prevention is impossible but rapid detection and response is achievable.

The security race is no longer about preventing every attack. It’s about surviving attacks with minimal damage, detecting vendor breaches quickly enough to limit exposure, and verifying identity when AI can fake authenticity. Organizations that build resilience, not just prevention, will survive 2026. Organizations that assume prevention is sufficient will discover it’s not—during breach response, vendor cascade, or fraud investigation.