A ministry comms team gets to the end of a procurement cycle for media intelligence. Three vendors have made the shortlist. Their capability statements have been laid out side by side on a single page, and on paper, they are almost indistinguishable. The same coverage of mainstream news, social platforms, and online forums. The same list of languages. The same sentiment analysis. The same dashboards, with slightly different colour palettes. The team picks one, signs for three years, and gets back to work.
Two years later, the comms function is not meaningfully sharper at reading the public conversation than it was before.
This is not a hypothetical. It is the dominant outcome of the media intelligence procurements being completed in this category today — and the reason is not that the team chose wrong among the three shortlisted options. The reason is that the RFP itself was asking 2020-era questions, written before large language models and agentic AI. The media environment and technology capability set those questions were designed for have changed.
Three shifts in particular have pulled the ground out from under the conventional procurement template. They are worth naming before we get to the questions that replace it.
Shift 1 — AI now augments the comms officer directly.
Large language models can read, reason over, and synthesise public discourse at a speed and depth that make the comms officer themselves, rather than an intermediary analyst who aggregates and summarises key points, the primary user of media intelligence. Faster briefs. Live narrative reads during unfolding events. Thoughtful hypothesis tests. Bespoke sector deep-dives written against the ministry's actual questions. All of this is within reach for a comms team whose partner is willing to build it with them. The right procurement question is no longer “what does the tool do?” It is “how far is our partner willing to push the envelope of what our team can do next quarter?”
Shift 2 — Narratives move faster than consumption metrics can track, and some of them are not real.
Reach, impressions, share-of-voice, and “% negative sentiment” measure volume. They do not measure what is being said, how it is framed, which framings are gaining ground, or what the divergence between framings reveals. They also cannot distinguish a narrative that is organic from one that is being amplified inauthentically — or, increasingly, one whose source text was generated by an LLM in the first place. Moving past consumption metrics to narrative understanding, and then to false-narrative detection, is no longer a nice-to-have.
Shift 3 — The technology moves too fast to buy once and forget.
Model capabilities, bias profiles, hallucination patterns, platform access, and risk postures all shift on quarterly or even monthly timescales. A ministry buying media intelligence for a five-year horizon needs a counterparty who can speak fluently about both the capabilities and the risks of the AI stack underneath — and who will keep reassessing both as the landscape moves.
Put all three shifts together, and the right question for your next procurement comes into focus:
Are you shopping for a media intelligence vendor, or a media intelligence partner?
A vendor sells you a tool against the RFP you wrote. A partner sits with you and asks whether the RFP is still the right one. In a category whose capability surface is being rewritten by AI every six months, that is not a stylistic difference — it is the difference between a procurement decision you will defend in five years and one you will find yourself working around.
Media intelligence, for a Singapore ministry, is not a tooling purchase. It is part of the country's comms infrastructure. The judgment it produces shapes how the government reads public mood, the coherence of its own messaging, and the resilience of its narrative environment against actors who are themselves getting more sophisticated by the quarter. A decade of that judgment rides on who you choose as a partner.
The rest of this piece is the five questions we would ask if we were on the buyer's side of the table. They are not designed to make any vendor pass or fail. They are designed to separate a partner — who can think alongside you as AI reshapes this category — from a vendor, who is selling you what existed when the RFP template was written.
1. Can they help your comms officers do more — or just replace a spreadsheet?
A vendor ships a dashboard, trains the users, and hands you a support email. A partner shows up the following month with three new workflows you had not asked for: a live narrative-read capability for the next major event, a faster-turnaround format for the weekly comms huddle, an experimental way to pressure-test a draft key message before it leaves the building. The point is not that every suggestion lands. The point is that the partner is actively using AI to compound what your team can do quarter over quarter, and treats that as their core obligation — not a premium add-on.
2. Can they read meaning, not just volume?
Most media intelligence tooling on the market is still built on reach, impressions, share-of-voice, and coarse sentiment buckets. These numbers are useful, but they measure how loud the conversation is — not what it is about. A ministry needs to know which narrative is gaining ground, how it is being framed by different communities, where competing framings diverge, and why the divergence matters. A partner talks about narratives, frames, and framing contests, and can decompose a single “67% negative” sentiment reading into the four distinct stories actually driving it. A vendor will quote you the 67%.
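For readers who want to see the decomposition mechanically, here is a minimal sketch in Python. It assumes the posts behind the sentiment bucket have already been collected; the model name, cluster count, and function name are illustrative choices for the sketch, not a description of our pipeline or anyone else's.

```python
# A minimal sketch: split one coarse "negative" bucket into candidate
# narratives by clustering post embeddings. The model name and cluster
# count are illustrative choices, not a recommendation.
from sentence_transformers import SentenceTransformer
from sklearn.cluster import KMeans

def decompose_negative_bucket(posts: list[str], n_narratives: int = 4):
    """Group negative-tagged posts into candidate narrative clusters and
    report what share of the headline percentage each cluster drives."""
    encoder = SentenceTransformer("all-MiniLM-L6-v2")  # any sentence encoder
    embeddings = encoder.encode(posts, normalize_embeddings=True)
    labels = KMeans(n_clusters=n_narratives, n_init="auto").fit_predict(embeddings)
    clusters = {k: [p for p, lab in zip(posts, labels) if lab == k]
                for k in range(n_narratives)}
    shares = {k: len(v) / len(posts) for k, v in clusters.items()}
    return clusters, shares
```

Clustering is only the first pass; the framing analysis that explains why the clusters diverge is where a partner earns the title. But even this much shows why a single percentage can hide four different stories.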

3. Can they tell organic from coordinated — and trace how narratives move between them?
Two related but distinct concerns sit under this question, and a serious partner will help you understand both.
Inauthentic spread. Coordinated inauthentic behaviour is no longer limited to botnets. It shows up as paid persona networks, brigading rings, astroturf campaigns, and sockpuppet farms operated by humans. A vendor reports the same spike either way, and you have no basis for telling campaign influence apart from genuine public concern. A partner can name the taxonomy and show you how they detect each form.
Narrative crossover. It is tempting to ask whether a piece of content was AI-generated. That is the wrong question. Authentic users legitimately and routinely use AI to draft what they share, and reliable content-level detection is not something any honest partner will claim. The more useful question is how a narrative moves: whether it is being seeded in inauthentic channels, where it first crosses into authentic discourse, and how fast it spreads once it does. A partner can trace narrative origins and watch the moment they cross into genuine public uptake. A vendor cannot hold that distinction, let alone track its movement.
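A deliberately simplified sketch of the crossover question, again in Python. Everything hard (assigning posts to a narrative, scoring a channel as coordinated) is assumed to have happened upstream, with its own error bars; the types and field names are illustrative only.

```python
# A deliberately simplified sketch of crossover timing. The hard problems,
# assigning posts to a narrative and scoring channels as coordinated, are
# assumed to be solved upstream; all names here are illustrative.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Post:
    narrative_id: str
    timestamp: datetime
    channel_coordinated: bool  # an upstream judgment with its own confidence

def first_crossover(posts: list[Post], narrative_id: str) -> datetime | None:
    """When a narrative seeded in coordinated channels first appears in
    organic ones. None if it was organic from the start or never crossed."""
    relevant = sorted((p for p in posts if p.narrative_id == narrative_id),
                      key=lambda p: p.timestamp)
    if not relevant or not relevant[0].channel_coordinated:
        return None  # organic origin, or no data: nothing to trace
    return next((p.timestamp for p in relevant if not p.channel_coordinated),
                None)  # None here: still confined to coordinated channels
```

The sketch is trivial precisely because the judgment it depends on is not. Whether a channel counts as coordinated is the question that separates a partner from a vendor.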
4. Can they defend every claim — and name what their AI can't do?
Every classification the system surfaces should be traceable to its source, its method, and its confidence. That is the minimum bar. A partner goes further: they volunteer the limits of their own stack. Where does the model lose accuracy? Where is bias most likely to creep in? Which claims need a human reviewer before publication? A partner will answer those questions before you ask. A vendor will market capabilities and concede limitations only when you push.
The partner is the one whose output you can still cite in a week, with the confidence intervals and the method footnotes intact, when the question is asked in a different register. The vendor's output is harder to defend the further it travels from the dashboard it was produced on.
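One way to see the minimum bar is to write it down as a data structure. The sketch below is illustrative Python, not a schema from any real system; the point is that nothing should surface without all five fields populated.

```python
# A sketch of the minimum provenance a defensible claim should carry.
# Field names are illustrative, not a schema from any real system.
from dataclasses import dataclass

@dataclass
class Claim:
    statement: str            # e.g. "Narrative X is gaining share in community Y"
    sources: list[str]        # the posts and articles the claim is grounded in
    method: str               # how it was derived: model, version, pipeline step
    confidence: float         # calibrated, not decorative
    needs_human_review: bool  # true wherever the method is known to be weak
```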
5. Will they still be useful when the platforms and models change underneath?
Twitter became X. Reddit throttled its API. TikTok's regulatory status is perennially uncertain. The models your partner relies on today will not be the models they rely on in eighteen months. A partnership built around a specific platform's API, or a specific model's reasoning style, is brittle by construction. A methodology-first partner can show you how they adapted when the ground moved last — the methodology stayed stable even as the tooling underneath changed.
This matters for the five-year horizon a ministry's comms infrastructure actually requires. A procurement decision made today should still be serving you in 2031.
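The architectural shape of that resilience can be sketched in a few lines. The names below are illustrative, not anyone's real codebase; the point is where the boundary sits: methodology above a stable interface, platform churn absorbed by thin adapters below it.

```python
# A sketch of the architectural point, not anyone's real codebase: the
# methodology depends on a stable interface, and platform churn is absorbed
# by thin adapters underneath it. All names are illustrative.
from typing import Iterable, Protocol

class DiscourseSource(Protocol):
    """Anything that can yield public posts for a query."""
    def fetch_posts(self, query: str, since_iso: str) -> Iterable[str]: ...

class XAdapter:
    """Was once a Twitter adapter; the platform's changes touch only this layer."""
    def fetch_posts(self, query: str, since_iso: str) -> Iterable[str]:
        # Platform-specific auth, pagination, and schema mapping live here.
        return []  # stubbed for the sketch

def narrative_read(source: DiscourseSource, query: str, since_iso: str) -> int:
    """The methodology layer: unchanged whichever platform sits underneath."""
    posts = list(source.fetch_posts(query, since_iso))
    # Clustering, framing analysis, and crossover checks would happen here.
    return len(posts)
```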
Five questions, one decision
What these five questions together diagnose is not a feature set. It is whether the counterparty across the table is thinking alongside you about where media intelligence is going, or selling you what existed when this category's definition was last settled.
You will notice what this piece does not include. There is no RFP template. No scoring matrix. No side-by-side comparison of the vendors currently in the market. Those artefacts are what the market produces when the market wants to sell to you. These five questions are the artefact a buyer produces when the buyer wants to think clearly.
We should say openly: we are one of the companies that would try to answer these questions well. We have been building NarrativeIQ around the same reframe this piece argues for — that the media intelligence a Singapore ministry needs today is not the product that was being sold five years ago. Concretely: we blend proprietary narrative models with state-of-the-art LLMs, and the outputs that carry political weight are reviewed by domain analysts who understand Singapore's media and socio-political context. We document the source, method, and confidence behind every analysis we publish. We use models that natively understand multilingual and multicultural contexts, not just simple English translation. And we refine every monthly sector brief based on the actual events that unfolded, not on templated numbers.
If you are rethinking how government communications should look in this fast-evolving era of AI, we would be glad to sit down and walk through how we would answer each of these five questions ourselves — with no commitment to anything beyond the conversation. Write to us at contact@wisma.ai.