Insight

What Finance Teams Are Actually Using AI For Right Now

AI is already present inside finance teams. The question is how it is being used in practice.

In our recent roundtable, captured in Reframing the CFO Mandate, senior finance leaders described adoption that is steady and shaped by accountability.

The patterns were consistent:

  • Small-scale experiments rather than sweeping change
  • Controlled access with defined boundaries
  • Clear human review layered over outputs
  • Gains measured in time saved and clarity improved

The whitepaper highlights that AI is delivering value, yet reliability varies and requires supervision. That reality defines how finance engages with these tools.

Accuracy standards in finance remain fixed. Financial outputs carry legal and reputational weight, so any tool introduced into this environment must operate within those constraints.

Usage today reflects that discipline.

Let’s examine where AI is currently being applied inside finance functions and how teams are structuring that use.

Spotting Signals: Anomalies, Patterns, Outliers 

One of the most established uses of AI inside finance today is signal detection.

In Reframing the CFO Mandate, finance leaders consistently observed that current tools are more reliable at identifying anomalies and emerging trends than at guaranteeing data accuracy. That distinction shapes how teams are deploying them.

AI is being used to scan large data sets for unexpected movements in revenue or cost lines, unusual transaction activity, and variances that fall outside historical norms. The objective is to narrow the field of attention and surface areas that require deeper examination.

An FP&A analyst reviewing monthly performance may use an AI-enabled tool to highlight revenue movements that deviate from trend. The output provides direction for further analysis and helps prioritise where time should be spent.
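
The whitepaper describes this behaviour without specifying a mechanism. Purely as an illustration of what a trend-deviation check can amount to, the sketch below flags points that sit outside a band around trailing history; the revenue series, window, and threshold are all illustrative assumptions, not a description of any specific tool.

```python
# Minimal sketch of trend-deviation flagging: surface points that sit
# more than `threshold` standard deviations from the trailing window.
# Series, window, and threshold are illustrative assumptions.
import statistics

def flag_outliers(series, window=12, threshold=2.0):
    flags = []
    for i in range(window, len(series)):
        history = series[i - window:i]
        mean = statistics.mean(history)
        stdev = statistics.stdev(history)
        if stdev == 0:
            continue  # flat history: no meaningful comparison
        z = (series[i] - mean) / stdev
        if abs(z) > threshold:
            flags.append((i, series[i], round(z, 2)))
    return flags

# Steady monthly revenue with one unexpected dip at index 12.
monthly_revenue = [100, 102, 101, 103, 104, 102, 105, 103,
                   104, 106, 105, 107, 84, 106, 108]
print(flag_outliers(monthly_revenue))  # [(12, 84, -9.43)]
```

The output narrows attention; it says nothing about whether the underlying data is right, which is exactly the distinction the leaders drew.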

The responsibility for validating the data and confirming assumptions remains with the analyst and their manager.

The whitepaper describes this capability as “pattern matching”, a strength in identifying signals without resolving questions of reliability. Finance retains ownership of accuracy and interpretation. That accountability does not move.

Where underlying data is structured and consistent, signal detection can meaningfully accelerate investigative work. Where data is fragmented or incomplete, additional review effort increases.

In both cases, the tool supports direction-setting while the assurance function remains human-led.

"Finance leaders noted that AI tools are currently strongest in spotting anomalies and surfacing trends. The judgment layer, and responsibility for reliability, remains firmly within the finance function."

From the STOIX whitepaper, Reframing the CFO Mandate

Faster Variance Analysis and Driver Exploration  

One of the more visible early gains highlighted in the whitepaper is faster variance analysis. This is where AI moves from concept to operational reality.

The shift happens in the preparation phase. AI-enabled tools are being used to process structured financial outputs and surface movement across key lines in minutes rather than hours. Time that was previously spent consolidating data and drafting initial commentary is reduced, allowing teams to focus earlier on interpretation and narrative.

In practice, this typically includes:

  • Summarising revenue and cost movements
  • Flagging significant deviations from plan
  • Highlighting correlations across business units
  • Drafting a first iteration of commentary

An FP&A manager preparing the monthly pack might upload a structured extract and receive a first draft of the variance commentary within minutes. Instead of starting with a blank page, they begin with something workable.
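
The whitepaper records the behaviour, not the mechanics. As a rough sketch of what the "flagging significant deviations from plan" step might reduce to, assuming a structured extract with hypothetical line items and a 5% materiality threshold:

```python
# Rough sketch: flag line items that deviate materially from plan and
# draft a starting line of commentary for each. Line items, figures,
# and the 5% threshold are hypothetical.
lines = [
    {"line": "Product revenue",  "actual": 4_820_000, "plan": 5_100_000},
    {"line": "Services revenue", "actual": 1_310_000, "plan": 1_250_000},
    {"line": "Staff costs",      "actual": 2_540_000, "plan": 2_500_000},
]

THRESHOLD = 0.05  # flag moves beyond +/-5% of plan

for item in lines:
    variance = item["actual"] - item["plan"]
    pct = variance / item["plan"]
    if abs(pct) >= THRESHOLD:
        direction = "above" if variance > 0 else "below"
        print(f"{item['line']}: {pct:+.1%} vs plan ({variance:+,.0f}), "
              f"{direction} plan; draft commentary and investigate driver.")
```

Commercial tools presumably layer drafted narrative on top of a screen like this; the point here is only the shape of the step.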

Starting from a workable draft changes the flow of the conversation. Less time is spent assembling numbers, and more on asking why margins moved or why revenue dipped in one segment but not another.

These are described as incremental gains that accumulate. They do not transform the process overnight, but they steadily free up attention for sharper analysis and clearer explanation.

AI shortens the distance between data and discussion. The judgement behind the message remains human.

Early-Stage Analytical Support: Sense-Checking Thinking  

Beyond signal detection and variance drafting, another use case is emerging quietly inside finance teams: 

AI is being used as an early-stage analytical sounding board.

When preparing a revised forecast, a finance manager may input key assumptions and ask how shifts in pricing or volume affect cash flow under different conditions. Instead of manually building multiple rough scenarios, they can surface possible pressure points quickly and decide which lines of inquiry deserve deeper modelling. 
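
Nothing in the roundtable specifies how such tools model this. As a sketch of the kind of rough pass involved, a pricing-and-volume sweep over a toy cash model might look like the following; every figure is hypothetical.

```python
# Toy scenario sweep: how +/-5% shifts in price and volume move monthly
# cash. All figures and the one-line cash model are hypothetical; the
# point is the quick spread of outcomes, not the model itself.
from itertools import product

base_price, base_volume = 50.0, 10_000   # per unit, units per month
unit_cost, fixed_costs = 30.0, 120_000   # per unit, per month

def monthly_cash(price, volume):
    return (price - unit_cost) * volume - fixed_costs

for dp, dv in product([-0.05, 0.0, 0.05], repeat=2):
    cash = monthly_cash(base_price * (1 + dp), base_volume * (1 + dv))
    print(f"price {dp:+.0%}, volume {dv:+.0%}: cash {cash:+,.0f}")
```

A nine-line grid of this kind is enough to show where pressure concentrates, which is the decision the manager actually needs to make: which scenarios deserve a proper model.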

A commercial finance lead preparing commentary for senior leadership might use AI to interrogate their own argument. They may ask where the narrative could be challenged, where logic feels thin, or which assumptions appear optimistic.  

The output sharpens preparation and exposes gaps before the discussion reaches the boardroom. 

The whitepaper describes this dynamic as maintaining momentum in complex work. Analysis often slows when uncertainty increases. Having a tool that can explore implications rapidly helps teams move forward with greater clarity. 

Finance leaders still own the decision and the credibility that comes with it. However, the thinking that leads to that point now moves faster and explores a wider range of angles before anything is shared.

Small Automations Built Closest to the Work 

While board discussions often focus on large-scale transformation, much of the meaningful progress is happening inside teams, led by analysts and operational finance staff.

The report notes that practical learning is taking place closest to the work. Analysts are building lightweight automations and adapting workflows in response to recurring friction points.

These efforts are rarely branded as innovation. They tend to begin with a simple irritation:

  • A manual consolidation step that repeats each month
  • A recurring data export that requires formatting before analysis can begin
  • A reconciliation process that moves between systems through email attachments

In response, teams experiment with small workflow connections and integration scripts that remove manual handoffs. They link systems so that structured data flows automatically into the next stage of reporting. They build simple automations that trigger reminders or flag exceptions without requiring constant oversight.

An operational finance team, for example, might connect a billing system to a reporting model so that updates feed directly into performance tracking. The improvement does not change the overall operating model, but it removes a recurring manual intervention and reduces the risk of version confusion.
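
The report does not reproduce these scripts, so the following is an invented example of the flavour: a recurring billing export picked up, normalised, and screened for exceptions before it feeds a reporting model. The file names and columns are hypothetical.

```python
# Hypothetical lightweight automation: normalise a recurring billing
# export and flag exceptions before the data reaches the reporting
# model. File names and column names are invented for illustration.
import csv
from pathlib import Path

EXPORT = Path("billing_export.csv")   # dropped by the billing system
CLEAN = Path("billing_clean.csv")     # consumed by the reporting model

with EXPORT.open(newline="") as src, CLEAN.open("w", newline="") as dst:
    reader = csv.DictReader(src)
    writer = csv.DictWriter(dst, fieldnames=["invoice_id", "customer", "amount"])
    writer.writeheader()
    for row in reader:
        amount = row["amount"].replace(",", "").strip()
        if not amount or float(amount) < 0:
            # Exception: route to a person rather than the reporting model.
            print(f"Flagged for review: invoice {row['invoice_id']}")
            continue
        writer.writerow({"invoice_id": row["invoice_id"],
                         "customer": row["customer"].strip().title(),
                         "amount": f"{float(amount):.2f}"})
```

Scheduled to run after each export, a script like this removes the manual handoff and the version confusion that comes with emailed attachments.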

These opportunities are often overlooked by formal transformation programmes. Large initiatives tend to prioritise platform decisions and structural redesign.

Meanwhile, incremental workflow adjustments made by those closest to the process can compound quietly over time.

What stands out is not the scale of each automation but the location of the experimentation. Practical capability is developing inside the function, influenced by daily operational needs rather than top-down instruction.

Prompt Sharing and Workflow Reuse  

Once experimentation moves beyond individuals, something else begins to happen. Finance teams start comparing notes.

In Reframing the CFO Mandate, participants described prompt sharing and workflow reuse as one of the clearest emerging behaviours inside finance functions. What begins as private experimentation becomes visible through repetition.

The shift is subtle: an analyst produces a particularly clean variance summary. Someone asks how it was structured. The prompt gets shared. A few weeks later, three people are using a variation of it.

This often shows up as shared prompt templates for variance commentary, agreed formats for management packs, consistent ways of framing scenarios, and reusable prompts for working capital or cash flow sensitivity.

Over time, these prompts accumulate. Some teams formalise them into a shared document. Others keep them in collaborative tools where updates are continuous.

The glossary in the whitepaper refers to this as the development of prompt libraries.
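
The glossary names the behaviour without prescribing a format. As an illustration only, a minimal library can be nothing more than shared templates with named placeholders; the wording below is invented, not taken from the whitepaper.

```python
# Illustrative prompt library: shared templates with named placeholders,
# filled in per use. The template wording is invented for this sketch.
PROMPTS = {
    "variance_commentary": (
        "Summarise the material variances in the attached {period} extract. "
        "Flag moves beyond {threshold} of plan, group them by {grouping}, "
        "and keep each point to one sentence."
    ),
    "challenge_narrative": (
        "Act as a sceptical reviewer of the commentary below. List where "
        "the logic is thin and which assumptions look optimistic."
    ),
}

prompt = PROMPTS["variance_commentary"].format(
    period="March", threshold="5%", grouping="business unit"
)
```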

The operational impact is straightforward. When prompts are reused, outputs become more consistent. Analysts spend less time refining structure and more time refining judgement.

Teams align around a common analytical language without needing additional governance layers, which reduces friction.

Over time, shared prompts quietly reorganise how the team works. A new joiner steps into an environment where useful starting points already exist. First drafts begin to look more consistent, which in turn makes review sharper and less time-consuming.

The behaviour may look small. Its cumulative effect is not.

What Finance Teams Are Deliberately Not Using AI For (Yet)  

Despite steady experimentation, there are areas where finance teams are drawing firm lines.

Across the roundtable, leaders were clear that certain activities remain tightly controlled and largely untouched by AI.

These include:

  • Preparation of statutory financial statements
  • Final numbers in earnings releases or board packs
  • Regulatory submissions and audited disclosures
  • Accounting judgements where interpretation affects reported results

In these areas, the output becomes part of the public or regulated record. Errors carry financial, legal, and reputational consequences, which the whitepaper highlights in its discussion of adoption reality.

Teams may use AI to explore scenarios or support preparatory analysis. The final numbers and sign-off still move through established control processes, where data is checked through existing systems and oversight remains structured and deliberate.

The closer the work sits to external scrutiny, the narrower the scope for experimentation becomes.

This restraint reflects the weight finance carries inside the organisation. Standards of accuracy and control remain fixed, even as tools evolve.

The Hidden Cost: Review Load and the Expanding Judgement Layer   

As output speed increases, something else expands alongside it.

There is a shift away from task execution and toward judgement and oversight. The glossary refers to this as the emergence of a “judgment layer” and a growing “reviewer role”.

That language captures what many teams are already feeling.

When AI accelerates early-stage drafting, anomaly detection, or scenario exploration, more material reaches the review stage more quickly. The volume of interpretation rises. Someone still needs to assess whether the reasoning holds, whether the assumptions are sound, and whether the narrative reflects commercial reality.

An FP&A team may now receive a polished first draft of commentary in minutes. That draft still requires interrogation.

Where did the model overstate correlation?
Which cost movement needs deeper explanation?
Does the tone reflect actual business risk?

Review is cognitively demanding work. It requires confidence in financial fundamentals and the ability to recognise when something feels incomplete. It takes time and draws on experience.

As preparation becomes faster, review capacity becomes a constraint. The shift reshapes how finance work is experienced day to day, with more time required to evaluate outputs, apply context, and protect the integrity of what leaves the function.

This shift is easy to miss. Automation does remove some of the manual build, but it replaces it with the work of questioning outputs and making sure the final message can stand up to challenge.

This is why many finance teams describe the work as feeling heavier, even as processes appear more efficient.

How These Use Cases Spread    

In many finance teams, adoption begins with a quiet permission.

A handful of analysts are given access to a tool. They use it for internal drafting or exploratory analysis. The work stays contained, and nothing moves into live systems without review.

A few weeks later, the team gathers to review what they’ve seen. Some outputs prove reliable with minimal adjustment, while others need heavy correction before they can be trusted. Confidence develops at different speeds across the group.

This is usually how experimentation takes hold inside finance teams. It spreads quietly, through shared experience rather than formal rollout.

Over time, a rhythm develops. Tools are tested in sandbox environments away from live reporting. Teams review outputs together before reusing them. Clear lines are drawn around sensitive processes. Prompts and workflows that prove dependable begin to circulate more widely.

The result is a contained form of learning. Access expands where confidence builds and pulls back where outputs fall short, allowing capability to grow without exposing the function to unnecessary risk.

A Snapshot of Today 

The picture that emerges is steady rather than sudden. Progress inside finance is building through small, controlled applications that accumulate over time.

What matters now is not the presence of tools, but the patterns forming around them. How they are introduced. Where they are held back. How review capacity absorbs the increase in output. How teams create space to learn without weakening control. 

These choices are already shaping how finance work is experienced day to day. 

Reframing the CFO Mandate explores what these patterns mean at a deeper level, and how they will influence finance leadership in the years ahead.

If you want the full perspective behind these observations, download the whitepaper and examine the implications in detail.