Trust Is the Core Operating System of the Agentic Enterprise

Tags: agentic AI, artificial intelligence, insurance strategy · Apr 14, 2026

Written by Alchemy Crew Ventures

 

Why Steven Abel and Franklin Manchester Say the Agentic Enterprise Lives or Dies on Architecture

  • According to the SAS Trust Imperative report, 70% of insurers use some form of AI, but only 7% are doing anything transformative with it. The rest are running tools, not transforming systems.
  • Steven Abel, Global Technology Partner at Oliver Wyman, argues that human-in-the-loop is no longer a scalable trust mechanism for agentic AI. Trust must be built as infrastructure, not bolted on through process.
  • Franklin Manchester, Global Insurance Strategic Advisor at SAS, identifies process as the primary failure point in agentic deployments, citing Cigna's PXDX system that denied 300,000 pre-approved claims in a single month.

What if the agent has already made the decision?

A claims agent processes 300,000 pre-approved claims in a single month. Every one of them is denied. The system is fast. The system is consistent. The system is also wrong.

This is not a thought experiment. It is the Cigna PXDX example that Franklin Manchester used on a recent episode of Scouting for Growth to make a point most insurance leaders are not yet ready to hear. When agents start acting on behalf of the enterprise, process failures become catastrophic at machine speed.

Sabine VanderLinden hosted Franklin Manchester, Global Insurance Strategic Advisor at SAS, and Steven Abel, Global Technology Partner and Deputy Global Head of AI and Transformation at Oliver Wyman, to ask a sharper question. What does it actually take to operationalise trust by design before autonomy outpaces governance? The answer they offered was uncomfortable for an industry still treating trust as a compliance line item.

Why this matters now

The numbers tell the story before the leaders do. According to the SAS Trust Imperative report, 70% of insurers are using some form of AI, mostly traditional machine learning. Only 7% are doing anything transformative with it. The same study found that humans trust generative AI roughly 200% more than they trust machine learning, despite ML being a 30-year-old discipline with vastly better explainability characteristics.

That trust gap is not a curiosity. It is a leading indicator of where governance will fail.

Add the regulatory pressure. 24 US states have now adopted AI oversight rules for insurers, according to the NAIC tracker. PSD3 is rolling out across Europe between 2026 and 2028, setting a hard window for agentic commerce. Boards are no longer asking whether their organisation has an AI strategy. They are asking whether anyone in the building can explain how the AI is making decisions.

Most cannot.

The Frontier Firm thesis, popularised by Microsoft and adapted by Alchemy Crew for insurance, holds that the winning organizations of the next decade will be human-led and agent-operated. That requires three things working together: an Intelligence Core, a method for adopting innovation through frameworks like DIVAAA, and an execution engine like the Venture Client Model. Trust is not a fourth pillar bolted on at the end. Trust is the connective tissue that holds the other three together.

Four insights from the conversation

1. The industry is over-indexing on models and under-investing on trust

Steve Abel opened the conversation with a structural argument. Most enterprise AI investment is being directed at large language models and the tools surrounding them. Very little is being directed at the architecture that makes those models trustworthy in production.

"The architecture of these models themselves doesn't lend itself to a high-trust environment. It's not to say that AI and LLMs are unbelievably powerful tools. It's just been an under-invested, under-focused-on area in the AI landscape." — Steven Abel

The point is not that LLMs are bad. The point is that LLMs alone are not a system. They are a component. Without the surrounding infrastructure for transparency, auditability, and governance, the component is unsupported.

This maps directly to the Intelligent Layers pillar of the Frontier Firm thesis. The Intelligence Core has five layers: data, intelligence core, governance, agent orchestration, and human oversight. The carriers winning the AI race are the ones building all five together. The carriers losing it are the ones procuring point solutions for the model layer and assuming everything else will catch up.

2. Human in the loop is not a scalable trust mechanism

Both guests pushed back on the comfort blanket of human oversight. Abel said the phrase "human in the loop" drives him crazy. The reasoning is straightforward: as agentic systems scale, the ratio of human supervisors to autonomous decisions becomes untenable.

Manchester reinforced the point with a thought experiment. Could a team of 13 people supervise a 10 billion dollar insurance portfolio, no longer acting as claims professionals or underwriters but as agentic supervisors? His answer was that this future is closer than the industry assumes.

"There's still no more sophisticated sensor than a human being and a more powerful computer than the human brain. But the idea that we'll federate AI risk management out to millions of people is probably not a good idea." — Franklin Manchester 

Abel pushed the implication further. He drew an analogy to the early days of data centres, when developers walked in and physically touched servers. That model did not scale. It was replaced by infrastructure managed by a small number of specialists, with the result that most users now interact with cyber and server infrastructure invisibly. The same shift, he argued, is overdue for AI governance.

This is what Abel meant by "control as infrastructure and trust as infrastructure." Not a process to be followed. Not a checkbox to be ticked. A property of the system itself.

3. Process is where agentic deployments actually break

Manchester offered the most operationally specific warning of the conversation. When agentic systems fail, they rarely fail at the model layer. They fail at the process layer, and they fail because of tacit knowledge that was never documented.

"25% of what an agent would need to do isn't documented anywhere. It's anecdotal, it's colloquial, it's in the digital realm of the hallways of our organizations where an underwriter, an adjuster, an actuary asks their colleagues, what do I do here?" — Franklin Manchester 

This is the part of digital transformation that most insurance leaders systematically underestimate. The visible processes get mapped, automated, and handed to agents. The invisible processes, the ones that live in the heads of senior practitioners, do not. When the agent encounters a case the documented process does not cover, it improvises. At scale, that improvisation becomes Cigna's 300,000 denials.

The DIVAAA framework, Alchemy Crew's six-step methodology for venture adoption, directly addresses this. The Validate and Adopt steps exist precisely to surface the tacit knowledge that pilots tend to skip. Skipping those steps creates the gap between a successful demo and a production failure.

4. The brand is the ultimate trust signal, and it erodes in seconds

When Sabine asked the guests for their non-negotiable recommendations for CEOs and boards, Manchester's answer ran counter to the industry's technology-first instinct. His three priorities were people-centric: embrace new-collar workers fluent in AI, break down data silos, and protect the brand at all costs.

"It takes decades sometimes to build trust in your insurance company and seconds to erode it when something bad happens." — Franklin Manchester 

Abel reinforced the leadership dimension. The carriers that win, he predicted, will not be the ones trying to outmaneuver regulators. They will be the ones calling the regulators and asking to collaborate. Embracing regulation, not avoiding it.

This connects to the Venture Client Model in a way that does not get enough airtime. When a corporate buys from a startup as an early customer rather than as an investor, the corporate retains control of the trust architecture. The startup's technology operates inside the corporate's governance layer, not outside it. That is one of the structural reasons VCM is moving from edge case to default sourcing strategy in regulated industries.

Actionable takeaways for leaders

Five things a CDO, CIO, or innovation leader can do this quarter, drawn directly from the conversation.

Audit your AI inventory honestly. Document every model, every agent, every shadow IT deployment. Carriers that cannot answer the question "How many AI systems are making decisions in our business today?" cannot govern what they do not know.

Map the tacit processes before deploying agents. The 25% of the process that lives in colleagues' heads is the 25% that will break first. Spend the time to surface it before an agent improvises around it.

Treat governance as a competitive advantage, not a compliance overhead. Stand up an AI ethics committee with real authority. Move governance investment from the bottom of the IT budget to the top of the board agenda.

Redesign roles for supervision, not execution. Underwriters and claims professionals need to be trained as agentic supervisors. Career ladders need to reward judgment and oversight, not case throughput.

Call your regulator. Not to lobby. To collaborate. The carriers building the trust infrastructure with their regulators in the room will move faster, not slower.

Frequently asked questions

What does trust by design mean in practice?

Trust by design means auditability, transparency, and human judgment are built into AI systems from the first line of code, not bolted on through process after deployment. Steven Abel of Oliver Wyman defines it as treating control and trust as infrastructure properties of the system, comparable to cybersecurity, rather than as procedures that depend on individual human vigilance.

Why is human in the loop no longer enough for agentic AI?

Human in the loop assumes a human can review every consequential decision. When agentic systems make millions of decisions per day, that ratio breaks. As Franklin Manchester put it, federating AI risk management out to millions of human supervisors is not realistic. The work has to move into the architecture itself.

What is the SAS Trust Imperative report and what did it find?

The SAS Trust Imperative report is a global research study on AI adoption and trust, published with The Economist Intelligence Unit. Its headline findings include that 70% of insurers use some form of AI but only 7% are doing anything transformative, and that humans trust generative AI roughly 200% more than they trust machine learning, despite the latter being more mature and explainable.

The frontier is a leadership test, not a technology choice

The carriers that thrive in the agentic enterprise will not be the ones with the largest model budgets. They will be the ones whose systems can be trusted by customers, regulators, and their own boards. That is an architecture problem and a leadership problem, in that order.

Trust is not what you say. Trust is what your system does.

Listen to the full conversation with Franklin Manchester and Steven Abel on Scouting for Growth, available on Apple Podcasts, Spotify, and Podbean. More on Alchemy Crew's Frontier Firm thesis at alchemycrew.ventures/amplifying-success.

Sources and citations

  • SAS Trust Imperative report, SAS Institute and The Economist Intelligence Unit. Findings cited: 70% of insurers using AI, 7% transformative use, 200% trust differential between generative AI and machine learning.
  • National Association of Insurance Commissioners (NAIC) AI bulletin tracker. 24 US states adopted AI oversight rules for insurers.
  • Microsoft Work Trend Index. Frontier Firm definition and the 71% / 39% thriving differential between Frontier Firm leaders and the global average.
  • Cigna PXDX (procedure-to-diagnosis) automated claims review system, ProPublica investigation. 300,000 pre-approved claims denied in a single month.
  • EU PSD3 directive. Rollout window 2026 to 2028, establishing the regulatory framework for agentic commerce in payments and adjacent regulated industries.
  • Alchemy Crew Ventures, alchemycrew.ventures/blog. The Frontier Firm thesis, the Intelligent Layers architecture, and the DIVAAA methodology.
  • Scouting for Growth podcast episode: "Trust Is the Operating System of the Agentic Enterprise" with Franklin Manchester (SAS) and Steven Abel (Oliver Wyman). Hosted by Sabine VanderLinden.
