NAIC’s AI Systems Evaluation Tool Signals a New Phase of Insurance Oversight

Artificial intelligence has moved from experimentation to everyday operations across underwriting, claims, pricing, marketing, and fraud detection. Regulators know this. Carriers know this. What has been missing is a shared, practical framework for discussing how AI is actually being used in insurance operations.

The National Association of Insurance Commissioners is now taking a decisive step to close that gap.

Building on its 2023 AI Model Bulletin, the NAIC is advancing an AI Systems Evaluation Tool designed to give regulators a clearer, more consistent view into insurers’ AI programs. Presented at the NAIC Fall National Meeting, the tool introduces a structured questionnaire that regulators can use to better understand how insurers design, govern, and monitor AI-driven systems.

Rather than signaling a sudden shift toward heavy-handed oversight, the initiative reflects a growing desire for clarity, comparability, and informed dialogue.


What the Evaluation Tool Is Designed to Capture

At its core, the AI Systems Evaluation Tool functions as a standardized intake process. It is meant to help regulators ask the same questions, in roughly the same way, across states and market conduct examinations.

The draft framework is organized into four broad areas:

  • Quantifying how and where AI is used across insurance operations

  • Evaluating governance structures and risk management controls

  • Identifying and describing high-risk AI models

  • Examining the data inputs, outputs, and monitoring practices tied to those models

Early discussions, including a December 7 working session led by Doug Ommen, co-vice chair of the NAIC’s Big Data and Artificial Intelligence Working Group, focused heavily on the first section. Topics included how AI disclosures might surface during market conduct exams, how confidential information would be protected, and how states might coordinate their regulatory efforts.

A recurring theme in those conversations was scope. Regulators are still working through which models warrant closer scrutiny and which operational areas truly present elevated risk.


“This tool is really about starting the conversation in a more structured way,” said Ommen.


How the Industry Is Reading the Move

From an industry standpoint, the tool is being viewed less as an audit mechanism and more as a cataloging exercise.

John Romano, a principal at Baker Tilly, has characterized the effort as an attempt to inventory AI usage rather than formally grade it. That distinction matters to carriers that are already balancing innovation with compliance obligations.


“Right now, this feels more like an effort to understand what exists, not to pass judgment on it,” said Romano.


Still, concerns remain. Insurers are wary of over-disclosure, particularly when sensitive models or proprietary processes are involved. Questions about how the collected information will be used, stored, and shared across jurisdictions remain top of mind.


The Push for Regulatory Consistency

One of the NAIC’s stated goals is alignment. AI oversight today varies widely by state, largely reflecting differences in regulatory familiarity with advanced analytics and machine learning.

Heidi Lawson, who works within Fenwick’s insurance and insurtech practice, has noted that regulators are starting from very different baselines when it comes to AI fluency. That reality makes standardization both more difficult and more necessary.


“You have regulators at very different stages of understanding when it comes to AI,” said Lawson. “Getting everyone on the same page reduces the risk of confusion and misinterpretation.”


A consistent framework could help carriers avoid duplicative or conflicting disclosures, while giving regulators a more reliable basis for comparison.


Adoption Snapshot

While still evolving, the evaluation tool is already gaining traction among state regulators.

Regulatory Initiative           States Adopted
AI Model Bulletin               24
AI Systems Evaluation Tool      10

This early adoption suggests the tool will increasingly show up in examinations and regulatory conversations over the next year.


What Comes Next for Insurers

The broader regulatory environment continues to influence the NAIC’s work, from federal AI guidance to growing public scrutiny of algorithmic decision-making. Against that backdrop, the evaluation tool represents a measured attempt to balance consumer protection with innovation.

Some industry observers argue that traditional questionnaires may not be the most effective long-term solution. Lawson has suggested that regulators could eventually lean more heavily on independent AI testing firms that already specialize in accuracy, bias detection, and model performance.

For now, insurers would be wise to treat the NAIC’s effort as a signal rather than a threat. Documentation, governance discipline, and clear explanations of AI use cases are becoming essential components of regulatory readiness.

The message from regulators is increasingly clear. AI is welcome in insurance, but it must be understandable, governable, and defensible. The AI Systems Evaluation Tool is the next step in making that expectation concrete.