Leading Insurer Selects the Best LLM for Underwriting with Skan

  • Industry Insurance
  • Company Size 20,000+
  • Revenue $120B+
  • Location United States
01

Our Client

This Fortune 500 financial services group, a leader in property and casualty insurance, group benefits, and mutual funds, sells its products primarily through a network of independent agents and brokers. It offers the only nationally endorsed direct auto and home insurance program for AARP’s nearly 38 million members.

02

Objective

The client’s claims underwriters spent a substantial amount of time using traditional search engine technologies to query internal and external databases for statistics, metrics, and knowledge. The goal was to explore multiple Large Language Models (LLMs) available in the market to replace traditional search functionality, ultimately reducing the time needed to underwrite each claim.

The client faced a major challenge in identifying which LLMs were best suited for this purpose. In addition, the client wanted to understand how the search query outputs were utilized within the underwriting journey and how they might differ based on case type and case complexity.

Cost Savings
$10M

annual savings from reduced processing time.

Time Savings
10%

reduction in processing time, wait time, and turnaround time.

03

Solution

Skan’s Process Intelligence Platform allowed the carrier to conduct A/B testing of different LLMs to identify the optimal model for their use case. In addition, Skan showed how query outputs differed by case type and complexity, illuminating the entire end-to-end process.

Phase 0: Established current underwriting process baseline and metrics by installing Skan on systems used by the entire population of underwriters.

Phase 1: Designed an experiment to measure the impact of the LLMs by splitting the participants into sub-groups and having each sub-group perform its queries with a different LLM.

Phase 2: Assessed each LLM using Skan to measure metrics such as processing time, and used Skan’s Clickstreams, a playback of the entire end-to-end case journey, to validate query output accuracy.

Phase 3: SMEs reviewed all outputs and provided quality ratings, which were combined with Skan’s process metrics to assess each LLM (a simplified sketch of this comparison follows below).
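For illustration only, the evaluation described in Phases 2 and 3 could be organized along the lines of the following Python sketch. It assumes hypothetical per-case records containing the LLM assigned to a sub-group, the measured processing time, and an SME quality rating; the field names, baseline figure, and ranking logic are placeholders, not Skan’s actual metrics or methodology.

from dataclasses import dataclass
from statistics import mean
from collections import defaultdict

# Hypothetical per-case record: which LLM the sub-group used, how long the
# case took to process (minutes), and the SME quality rating (1-5).
@dataclass
class CaseResult:
    llm: str              # LLM assigned to the sub-group
    case_type: str        # e.g. "auto", "property"
    processing_min: float # processing time measured for the case
    sme_rating: float     # SME quality rating, 1 (poor) to 5 (excellent)

def compare_llms(baseline_min: float, results: list[CaseResult]) -> dict[str, dict]:
    """Aggregate A/B results per LLM: average processing time, time saved
    versus the Phase 0 baseline, and average SME quality rating."""
    by_llm: dict[str, list[CaseResult]] = defaultdict(list)
    for r in results:
        by_llm[r.llm].append(r)

    summary = {}
    for llm, cases in by_llm.items():
        avg_time = mean(c.processing_min for c in cases)
        summary[llm] = {
            "avg_processing_min": round(avg_time, 1),
            "time_saved_pct": round(100 * (baseline_min - avg_time) / baseline_min, 1),
            "avg_sme_rating": round(mean(c.sme_rating for c in cases), 2),
            "cases": len(cases),
        }
    return summary

# Usage: rank candidate LLMs by time saved, then by SME quality.
if __name__ == "__main__":
    results = [
        CaseResult("llm_a", "auto", 32.0, 4.5),
        CaseResult("llm_a", "property", 41.0, 4.2),
        CaseResult("llm_b", "auto", 38.0, 4.6),
        CaseResult("llm_b", "property", 47.0, 4.8),
    ]
    summary = compare_llms(baseline_min=52.0, results=results)
    ranked = sorted(summary.items(),
                    key=lambda kv: (-kv[1]["time_saved_pct"], -kv[1]["avg_sme_rating"]))
    for llm, stats in ranked:
        print(llm, stats)

In practice, such a ranking would only be one input alongside the SME review and Clickstream validation described above; it is shown here simply to make the shape of the comparison concrete.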

04

Outcome


Through A/B testing, Skan helped the client establish a baseline for underwriting operations and then test various LLMs to see how they affected outputs and decisions. Skan helped the client determine the optimal LLMs for specific use cases, reducing search query time by 55% compared with traditional search functionality.

In some instances, the use of LLMs reduced processing time significantly, and Skan measured the improvements down to the minute. The client also created LLM ‘prompt’ training material so that operators use LLMs for complex queries rather than for simple queries whose answers are readily available.

Skan enabled the client to quantify their business case for using LLMs and to select the best LLM for their organization.

“Skan helped us quickly prepare a baseline of process metrics around LLMs. Introducing LLMs into our current workflow cut processing time rapidly and we were able to complete our assessment of selecting the best LLM within weeks.”

Chief Technology Officer
