Making AI your analyst: practical uses of LLMs for consultants

Written by Ian Wylie, Wednesday 29 April 2026
Independent consultants are turning to large language models to boost efficiency and insight, but real value depends on deploying AI effectively while maintaining trust and judgement
For independent management consultants, the rise of large language models (LLMs) presents both an opportunity and a threat. Used well, tools such as ChatGPT and Claude can act as a kind of junior analyst: fast, tireless and surprisingly capable. Used badly, they risk producing confident nonsense, eroding trust and adding little value. The difference lies in how consultants deploy them.

“AI tools can be incredibly helpful for a range of specific tasks,” says Zoe Webster, an independent AI consultant and former AI director at BT and Innovate UK. But she is quick to puncture any hype. “They aren’t magic and can hallucinate or get things plain wrong. They can’t read between the lines or apply common sense.”

That tension between speed and limitation defines how consultants should approach LLMs. The most effective way to think about them, Zoe suggests, is as a junior member of the team. And like any junior, they need clear direction, careful supervision and structured tasks.

Start with what LLMs do well

In practice, consultants are finding immediate value in relatively bounded tasks. Summarisation is the obvious starting point. 

“Generating that executive summary in moments from the report you and your team have compiled is an easy job,” says Zoe.

Equally useful is transformation, turning one format into another. A report can become a slide deck, a LinkedIn post or even a podcast script. For time-pressed independents, this kind of repurposing can significantly extend the value of existing work.

There is also growing use of LLMs in early-stage analysis. Consultants are using them to scan a topic, identify themes, surface key players and highlight areas of uncertainty. Zoe describes a workflow where AI first maps the landscape of sources, themes and perspectives, before the consultant steps in to interpret and prioritise.

The key is to break work into discrete components. “It may be useful to break down the task so that sub-tasks lend themselves to AI assistance,” adds Zoe. Even complex projects can often be decomposed this way.
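As a rough illustration, the report-to-deck workflow described above can be decomposed into discrete, supervisable sub-tasks, each driven by its own prompt. This is only a sketch: the function names and prompt wording are hypothetical, and `ask_llm` is a placeholder for whichever chat-style LLM API you actually use.

```python
# A minimal sketch of decomposing a consulting task into discrete,
# supervisable LLM sub-tasks. `ask_llm` is a hypothetical stand-in
# for a real LLM call; here it just echoes the start of the prompt.

def ask_llm(prompt: str) -> str:
    """Placeholder for a real LLM API call."""
    return f"[model response to: {prompt[:40]}...]"

def summarise(report: str) -> str:
    return ask_llm(f"Summarise this report in 150 words:\n{report}")

def to_slides(summary: str) -> str:
    return ask_llm(f"Turn this summary into a 5-slide outline:\n{summary}")

def review_step(output: str) -> str:
    # The consultant stays in the loop: each sub-task's output is
    # checked before it feeds the next one.
    return ask_llm(f"List anything wrong or unsupported in:\n{output}")

report = "Q3 findings: churn up 4%, driven by onboarding friction..."
summary = summarise(report)     # sub-task 1: bounded summarisation
slides = to_slides(summary)     # sub-task 2: format transformation
critique = review_step(slides)  # sub-task 3: structured self-check
```

Each function is small enough to prompt precisely and to verify by eye, which is the point of the decomposition.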

Prompt with precision

If LLMs are junior analysts, then prompting is management. Vague instructions produce vague outputs. Clear, specific prompts yield far better results. 

Zoe advises being explicit: “Use prompts like ‘Tell me what is wrong with this’ or ‘Give me ideas for how I can develop this concept further with Gen Z in mind.’” 

The more context and direction you provide, the more useful the response.

Over time, consultants can ‘train’ their tools through repeated interactions or system prompts to respond in ways that align with their style and expectations. But this is an ongoing process. 
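One common way to make that ‘training’ repeatable is a fixed system prompt that encodes your style and expectations, reused across tasks. The sketch below assumes only the widely used "system"/"user" chat-message convention; the prompt wording and function name are illustrative, and no particular vendor's API is implied.

```python
# A sketch of a reusable system prompt encoding a consultant's style
# preferences, using the common "system"/"user" chat message shape.
# Most chat-style LLM endpoints accept a message list like this.

SYSTEM_PROMPT = (
    "You are a junior analyst supporting a management consultant. "
    "Be concise, flag any claim you are unsure of, and write in "
    "plain British English."
)

def build_messages(task: str, context: str = "") -> list:
    """Assemble a message list: fixed system prompt plus the task."""
    user_content = f"{task}\n\nContext:\n{context}" if context else task
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": user_content},
    ]

messages = build_messages(
    task="Tell me what is wrong with this positioning statement.",
    context="We help firms of any size do anything, faster.",
)
```

Because the system prompt travels with every request, the model's tone and caution level stay consistent while the per-task prompt stays specific.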

“This is a skill I think we are all still learning and improving at,” says Zoe.
