For comms leaders — April 2026
A buyer-side reframe

Vendor, or partner?

Five questions before your next media intelligence procurement.

Presented by Wisma contact@wisma.ai
Vendor · or · Partner 01 / 17
Fig. 01 — Composite · Vendor A · Vendor B · Vendor C

Three capability statements, laid out side by side. Almost indistinguishable.

Shortlist · Procurement cycle · Y+0
A familiar scene

A ministry comms team reaches the end of a procurement cycle. Three vendors are on the shortlist. On paper, they look almost identical.

Same mainstream and social coverage. Same language lists. Same sentiment analysis. Same dashboards, with slightly different colour palettes.

The team picks one. Signs for three years. Gets back to work.

Vendor · or · Partner 02 / 17
Two years later

The comms function is not meaningfully sharper at reading the public conversation than it was before.

This is the dominant outcome of media intelligence procurements in this category today — and the reason is not that the team chose wrong among the three shortlisted options.

The reason is that the RFP itself was asking 2020 questions. The media environment those questions were written for, and the technology available to answer them, have both changed.

Vendor · or · Partner 03 / 17
Part one · The ground that moved

Three shifts beneath the template.

Shift · 01 / 03 04 / 17
01 The user moved

AI augments the comms officer directly.

Large language models can read, reason over, and synthesise public discourse fast enough that the comms officer becomes the primary user of media intelligence — not an analyst who aggregates and summarises.

Faster briefs. Live narrative reads during unfolding events. Hypothesis tests. Bespoke sector deep-dives written against the ministry's actual questions.

The procurement question is no longer "what does the tool do?"
It is now: "how far is our partner willing to push the envelope of what our team can do next quarter?"

Shift · 02 / 03 05 / 17
02 The signal moved

Narratives move faster than consumption metrics can track — and some of them are not real.

Reach, impressions, share-of-voice, % negative sentiment: all measure volume. None measure what is being said, how it is framed, which framings are gaining ground, or what the divergence between framings reveals.

They also cannot distinguish a narrative that is organic from one amplified inauthentically — or, increasingly, one whose source text was generated by an LLM in the first place.

Shift · 03 / 03 06 / 17
03 The ground moved

The technology moves too fast to buy once and forget.

Model capabilities, bias profiles, hallucination patterns, platform access, risk postures — all shift on quarterly or monthly timescales.

A ministry buying media intelligence for a five-year horizon needs a counterparty who can speak fluently about both the capabilities and the risks of the AI stack underneath, and who will keep reassessing both as the landscape moves.

Vendor · or · Partner 07 / 17
Are you shopping for a media intelligence vendor, or a media intelligence partner?
The question that replaces the RFP
The distinction 08 / 17

Not a stylistic difference. A procurement difference.

— Vendor

Sells you a tool against the RFP you wrote.

Ships a dashboard. Trains the users. Hands over a support email.

A procurement decision you will find yourself working around.

— Partner

Sits with you and asks whether the RFP is still the right one.

Shows up next month with new workflows you had not asked for. Treats compounding your team as their core obligation.

A procurement decision you will defend in five years.

Vendor · or · Partner 09 / 17
Part two · The five questions

What a buyer produces when the buyer wants to think clearly.

Question · 01 / 05 10 / 17
1 The compounding question

Can they help your comms officers do more — or just replace a spreadsheet?

A vendor ships a dashboard and hands you a support email. A partner shows up next month with three new workflows you had not asked for: a live narrative-read for the next major event, a faster format for the weekly comms huddle, a way to pressure-test a draft key-message before it leaves the building.

The point is not that every suggestion lands. The point is that the partner is actively using AI to compound what your team can do, quarter over quarter — and treats that as their core obligation, not a premium add-on.

— What a good partner brings

A specific client workflow that did not exist twelve months ago, with a plain-language account of why it exists now and what problem dashboard-era tooling could not solve.

Question · 02 / 05 11 / 17
2 The meaning question

Can they read meaning, not just volume?

A vendor quotes you the 67% negative figure. A partner decomposes it into the four distinct stories actually driving it, traces each to the communities carrying it, and names the framing devices they are using.

Fig. 02 — "67% negative" decomposed by persona
Young professionals 28% · Heartland families 31% · Retirees & seniors 22% · Foreign residents 19%
Framings: Cost-of-living · Fairness · Competence · Identity
Question · 03 / 05 12 / 17
3 The authenticity question

Can they tell organic from coordinated — and trace how narratives move between them?

Two concerns. First, inauthentic spread: paid personas, brigading rings, astroturf, sockpuppet farms. A partner names the taxonomy and shows the detection. Second, narrative crossover: not "is this AI-generated?" (the wrong question) but how a narrative moves from inauthentic seeding into genuine public uptake.

Fig. 03 — Narrative crossover path
Seed · T + 0h · Inauthentic channels: persona networks, brigading rings.
Crossing · T + 6–48h · First authentic uptake — the signal that changes the playbook.
Organic spread · T + 2d onward · Public discourse carries the narrative forward.
Question · 04 / 05 13 / 17
4 The defensibility question

Can they defend every claim — and name what their AI can't do?

Every classification traceable to its source, its method, and its confidence. That is the minimum bar.

A partner goes further: they volunteer the limits of their own stack. Where the model loses accuracy. Where bias is most likely to creep in. Which claims need a human reviewer before publication. A partner answers these before you ask; a vendor admits limitations only when you push.

— What a good partner brings

An unprompted list of three things their AI stack cannot do reliably — and the review process that sits around them.

Question · 05 / 05 14 / 17
5 The longevity question

Will they still be useful when the platforms and models change underneath?

Twitter became X. Reddit throttled its API. TikTok's regulatory status is perennially uncertain. The models your partner relies on today will not be the models they rely on in eighteen months.

A partnership built around a specific API or a specific model's reasoning style is brittle by construction. A methodology-first partner can show you how they adapted when the ground moved last — the methodology stayed stable even as the tooling underneath changed.

— What a good partner brings

A concrete example of a platform or model shift they have already navigated — and the methodological choices that made it survivable.

Recap 15 / 17

Five questions. One decision.

Together, they do not diagnose a feature set. They diagnose whether the counterparty across the table is thinking alongside you about where media intelligence is going — or selling you what existed when this category's definition was last settled.

1 Compound the team, or replace a spreadsheet?
2 Meaning, or just volume?
3 Organic or coordinated — and the crossing?
4 Defend every claim; name the limits?
5 Survive the next platform shift?
How we would answer 16 / 17
An open note from Wisma

We should say openly: we are one of the companies that would try to answer these well.

— Stack

Proprietary narrative models blended with state-of-the-art LLMs.

The outputs that carry political weight are reviewed by domain analysts who understand Singapore's media and socio-political context.

— Defensibility

Source, method, and confidence documented for every analysis we publish.

If a claim travels from a dashboard into a briefing, its provenance travels with it.

— Multilingual

Models that natively understand multilingual and multicultural contexts.

Not simple English translation — native comprehension across the languages and registers Singaporeans actually speak.

— Living product

Every monthly sector brief refined against the events that unfolded.

Not templated numbers — a living product that compounds against reality.

17 / 17
An invitation

If you are rethinking how government communications should look in this fast-evolving era of AI, we would be glad to sit down and walk through how we would answer each of these five questions ourselves.

No commitment to anything beyond the conversation.

Write to us — contact@wisma.ai

Wisma · NarrativeIQ TL-19 · April 2026