AI: Complexity mismatch amid the case for augmenting the investment office

David Pan, Moody’s

January 21 2026

As we settle into 2026, agility in insurance means speed of insight.

Insurers are accumulating exposures that legacy systems struggle to track. Those relying solely on human vigilance to monitor an exponentially growing data universe will find themselves reacting to events their competitors anticipated weeks ago.

Asia-Pacific insurance CIOs, for example, are increasingly turning to global private credit as an attractive alternative to public bonds, typically offering yield premiums of 50-100bps above traditional fixed income.

Globally, insurers’ private credit assets are concentrated in real estate lending and private placements with corporates.

Private credit has evolved from a niche market into a major competitor to – and a collaborator of – traditional banking, with further expansion likely. Its flexibility provides vital financing for sectors underserved by conventional lenders.

However, this shift comes with a hidden challenge: complexity.

Portfolios now include opaque, data-heavy asset classes whose intricacies exceed the capacity of manually staffed research teams.

Private credit and bespoke structured exposures are often less liquid and harder to value than traditional bonds, with limited transparency (e.g., privately letter-rated deals disclosed only to select parties).

For example, private loans often carry off-balance-sheet leverage or weak covenants that escape conventional credit metrics, making them difficult to monitor and potentially heightening risk in a downturn.

In short, portfolio complexity is outpacing human monitoring capacity, creating a gap in risk oversight that has become the industry’s newest form of technical debt.

Market observers and regulators warn that unmanaged complexity can lead to hidden credit pressures and underpriced risks.

The complexity mismatch

The core driver of this shift is the changing nature of credit data. A decade ago, monitoring a portfolio of highly rated public bonds was a linear task based on quarterly filings.

Today, insurers hold infrastructure debt in Vietnam or private placements in India, where risk signals are buried in local-language news, supply chain data, and unstructured operational reports.

Hiring an army of analysts to monitor these exposures manually is not feasible.

The solution? The augmented workflow – moving from periodic updates to continuous surveillance.

“Always-on” sentry (Monitor). AI agents continuously scan unstructured data streams, detecting “weak” signals – such as hawkish monetary policies or supply chain concentration – that can precede a credit downgrade or default by weeks or months.

Synthetic scribe (Draft). When a signal emerges, GenAI agents draft a deep-dive credit assessment in minutes, compressing days of manual research into actionable insights.

Interactive interrogation (Refine). The workflow ends with a human-in-the-loop Q&A. Analysts interrogate findings via secure knowledge assistants powered by retrieval-augmented generation (RAG), validating context and turning static reporting into dynamic risk assessment.
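To make the shape of this loop concrete, the sketch below shows one way it could be stubbed out in Python. Everything in it – the watchword list, the Signal and Assessment structures, the monitor, draft_assessment and answer_with_sources functions – is a hypothetical stand-in rather than a description of any particular vendor’s system; a real deployment would wire in live data feeds, a GenAI drafting model and a retrieval-augmented knowledge assistant in their place.

```python
"""Minimal sketch of the monitor -> draft -> refine loop (illustrative only).

All names here are hypothetical stand-ins; a real deployment would use live
data feeds, a GenAI drafting service and a RAG knowledge assistant.
"""
from dataclasses import dataclass, field


@dataclass
class Signal:
    issuer: str
    description: str
    source: str  # where the signal was spotted (news URL, report id, ...)


@dataclass
class Assessment:
    issuer: str
    summary: str
    citations: list[str] = field(default_factory=list)  # traceable sources


# Step 1 - "Always-on" sentry: scan unstructured items for weak signals.
WATCHWORDS = ("covenant waiver", "rate hike", "supplier concentration")


def monitor(feed: list[dict]) -> list[Signal]:
    signals = []
    for item in feed:
        text = item["text"].lower()
        for word in WATCHWORDS:
            if word in text:
                signals.append(
                    Signal(item["issuer"], f"mention of '{word}'", item["url"])
                )
    return signals


# Step 2 - Synthetic scribe: draft a credit note from the signal plus
# retrieved documents (a real system would call a GenAI model here).
def draft_assessment(signal: Signal, documents: list[dict]) -> Assessment:
    relevant = [d for d in documents if d["issuer"] == signal.issuer]
    summary = f"{signal.issuer}: {signal.description}. " + " ".join(
        d["excerpt"] for d in relevant
    )
    return Assessment(
        signal.issuer,
        summary,
        citations=[signal.source] + [d["source"] for d in relevant],
    )


# Step 3 - Interactive interrogation: every answer carries its sources so
# the analyst can verify the "why" behind the "what".
def answer_with_sources(question: str, assessment: Assessment) -> dict:
    return {
        "question": question,
        "answer": assessment.summary,
        "sources": assessment.citations,
    }
```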

The implementation trap

The strategic value is clear, but implementing the augmented workflow can be fraught with challenges. Poorly designed AI systems can create a false sense of security.

The first hurdle is data hygiene. Think of data quality as nutrition for AI. Insurers sit on vast stores of siloed information, from structured databases to third-party data subscriptions.

High-quality, trustworthy data requires a sustained top-down commitment to the data journey to ensure availability, standardisation, and governance.

The second hurdle is context engineering. Without robust pipelines to clean and vectorise data from legacy PDFs, inconsistent Excel models, and scattered emails, GenAI models cannot discern ground truth, leading to analysis based on outdated, irrelevant, or incorrect context.
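As a rough illustration of what such a pipeline involves, the Python sketch below cleans raw text, splits it into overlapping chunks that keep their provenance, and attaches a vector to each chunk. The design choices are assumptions for the example: the hash-based toy_embed function is a placeholder for a real embedding model, and the chunk size and overlap are arbitrary.

```python
"""Simplified sketch of a context-engineering pipeline (assumed design).

The cleaning, chunking and embedding steps are deliberately minimal; a
production pipeline would use proper document parsers and an embedding model
rather than the toy hash-based vectors used here.
"""
import hashlib
import re
from dataclasses import dataclass


@dataclass
class Chunk:
    text: str
    source: str          # provenance: file name, page, email id, etc.
    vector: list[float]  # stand-in for a real embedding


def clean(raw: str) -> str:
    """Strip layout noise so the model sees consistent ground truth."""
    return re.sub(r"\s+", " ", raw).strip()


def toy_embed(text: str, dims: int = 8) -> list[float]:
    """Placeholder for a call to a real embedding model."""
    digest = hashlib.sha256(text.encode()).digest()
    return [b / 255 for b in digest[:dims]]


def chunk_document(raw: str, source: str, size: int = 400, overlap: int = 50) -> list[Chunk]:
    """Split a cleaned document into overlapping chunks that keep their provenance."""
    text = clean(raw)
    chunks = []
    step = size - overlap
    for start in range(0, max(len(text), 1), step):
        piece = text[start:start + size]
        if piece:
            chunks.append(Chunk(piece, source, toy_embed(piece)))
    return chunks
```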

Careful architecture and well-designed data pipelines feed agents the right amount of context at the right time, avoiding the attention dilution that can result in hallucinated replies.

Hallucinated replies – the “smooth lie” – occur when models generate plausible but incorrect answers. Language models are probabilistic by nature: they predict the next most likely word or token, which makes them prone to producing smooth-sounding, grammatically correct responses that are factually wrong.

Proper data hygiene and context engineering will significantly reduce hallucinations, but additional guardrails and evaluation frameworks should also be put in place. Every AI-generated claim should link to source documents, enabling analysts to verify the “why” behind the “what”.
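One simple form such a guardrail can take is sketched below, under the assumption that generated claims carry machine-readable source identifiers. The Claim structure and the enforce_citations check are illustrative only; a production evaluation framework would add groundedness scoring against the cited text and routing to human review, rather than a bare existence check.

```python
"""Illustrative guardrail: surface only claims that trace back to known sources.

The Claim structure and the existence check are assumptions for this sketch;
real evaluation frameworks layer groundedness scoring and human review on top.
"""
from dataclasses import dataclass


@dataclass
class Claim:
    text: str
    source_ids: list[str]  # documents the model cites in support of the claim


def enforce_citations(
    claims: list[Claim], corpus: dict[str, str]
) -> tuple[list[Claim], list[Claim]]:
    """Split claims into grounded (all cited documents exist) and flagged."""
    grounded, flagged = [], []
    for claim in claims:
        if claim.source_ids and all(sid in corpus for sid in claim.source_ids):
            grounded.append(claim)  # analyst can click through to the source
        else:
            flagged.append(claim)   # no verifiable source: route to review, do not publish
    return grounded, flagged
```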

Turning technical debt into a dividend

The augmented investment office is not just a tech upgrade – it is a viable way to scale risk management in an unscalable world.

Hiring more analysts isn’t the answer; arming existing teams with machine-speed surveillance is. Done right, this upgrade turns technical debt into a technical dividend.

With prudent oversight and strong guardrails, an augmented investment office can allow insurers to confidently ride the private credit boom without being undermined by the complexity they have embraced.

David Pan is the director and GenAI industry practice lead at Moody’s
