When AI Gets It Wrong: The Real Crisis Risk Behind Synthetic PR (Commentary by Farrell Tan)


AI-powered spokespeople promise something PR teams have always wanted: control.

No off-script comments. No fatigue. No misquotes. Just perfectly calibrated messaging, delivered at scale.

But control is not the same as resilience.

When AI gets it wrong, it doesn't just fail. It fails differently.

The New Failure Mode: Precision Without Judgement

Synthetic spokespeople are designed for consistency. They don't improvise, deviate, or second-guess.

That is exactly the problem.

A mistimed post, a culturally tone-deaf message, a misaligned automated response: any of these can trigger a crisis that escalates faster than one caused by traditional human error. AI does not read the room. It executes.

In traditional PR, crises are managed through accountability. A spokesperson steps forward. A leader responds. Someone owns the message.

With synthetic PR, that clarity disappears. As I wrote in "When the Avatar Misspeaks: Crisis Management in the Age of Synthetic PR" for Strategic Global, the question becomes: when an avatar misspeaks, who actually apologises?

An AI cannot feel remorse. It cannot demonstrate empathy. More importantly, it cannot rebuild trust. The burden does not disappear. It shifts back, often more aggressively, to the organisation. And in markets like Malaysia, where AI regulation is still developing, the legal liability may be just as unclear as the reputational one.

Why This Hits Harder in Asia

In Southeast Asia, communication is relational, not purely transactional. But the real difference is how that plays out in a country like Malaysia, where content moves across Bahasa Malaysia, English, Mandarin, and Tamil, and the networks that verify it are segmented by language and community.

A synthetic spokesperson delivering a message in English may not reach audiences who consume information in Bahasa Malaysia. More critically, a failure in one language does not stay contained. It travels through those segmented networks, verified or not, before the organisation even knows it happened.

Reputation here is built on sincerity, not just accuracy. AI can deliver correctness at scale. It cannot deliver cultural intuition across a fragmented audience landscape.

The Illusion of Risk Reduction

There is a growing narrative that synthetic spokespeople reduce risk.

In reality, they repackage it.

Instead of unpredictable human error, you get systemic risk: errors at scale, faster amplification, harder attribution, slower recovery. Fewer small mistakes. Potentially much bigger ones.

What Smart Organisations Should Be Doing Now

The takeaway is not to avoid synthetic PR. It is to treat it as what it is: a high-leverage, high-risk tool.

That means:

  • Scenario mapping before deployment: not just how it works, but how it fails, and how bad that failure could get.
  • Human override built in by design, not as a backup.
  • Crisis playbooks updated for AI failure, because traditional frameworks assume human missteps. Synthetic PR requires faster, clearer, human-led responses.

The Orchan View

Crises have always revealed the same truth: trust is not built in moments of control. It is tested in moments of failure.

Synthetic spokespeople may change delivery. They do not change responsibility.

Automation can scale communication. It cannot absorb accountability.

If you are exploring synthetic PR, or rethinking crisis readiness in an AI-driven landscape, we work with organisations to integrate AI without compromising trust, credibility, or control. Get in touch.

  • Email: changenow@orchan.asia
  • Phone: +603-7972 6377
  • Website: www.orchan.asia