Liberty Mutual Creates its Own Massive Workers' Comp Model

By James J. Moore

Tuesday, March 27, 2012

Liberty Mutual recently built a massive Workers' Compensation predictive model for its internal use. A video about the database was very informative, and I am sure Liberty invested a large amount of time and effort in constructing the model. According to the manager in the video, there were terabytes and terabytes of data to combine.

I think a tip of the hat is in order for this undertaking, even though the model will likely never see the light of day outside Liberty Mutual's data servers. As I see it, the data is theirs to keep private.

The video points out the main reason for gathering the data: the outliers, the very expensive claims, tend to share certain characteristics such as obesity and a remote work location. Liberty wants to cut the analysis period of a claim from two years (which is too late) to one month. Their policyholders should be pleased with this development. I wanted to cover a few pros and cons of the database.

The pros are:

The database is massive, so trends can be analyzed very easily.
The number of years covered should capture almost all tail claims, thanks to the state-of-the-art BoComp(r) computer system Liberty has used since the 1980s.
The data is homegrown, so its accuracy should be higher than agency data.
Certain outlier claims can be identified very early in the life of the claim, which means more accurate reserves (see the sketch after this list).
Liberty was always meticulous about correct and full data input.
Since the model is only now being built, I am assuming it was not thrown together or synthesized just so the company could say it has a model.
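To make the early-identification idea concrete, here is a minimal sketch in Python of flagging likely outlier claims one month into their life. The feature set (obesity, remote location, age, early lost-time days), the synthetic data, and the logistic regression are my own illustration; Liberty Mutual's actual model and variables are not public.

```python
# Hypothetical sketch: score a claim's risk of becoming an expensive outlier
# at 30 days instead of waiting two years. All features and data are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic claim records: [obesity flag, remote-location flag, claimant age,
# lost-time days reported in the first month]
n = 500
X = np.column_stack([
    rng.integers(0, 2, n),     # obesity comorbidity (0/1)
    rng.integers(0, 2, n),     # remote work location (0/1)
    rng.integers(18, 66, n),   # claimant age
    rng.integers(0, 31, n),    # lost-time days in first 30 days
])

# Synthetic "became an expensive claim" label, loosely tied to the features
risk = 0.8 * X[:, 0] + 0.6 * X[:, 1] + 0.02 * X[:, 2] + 0.05 * X[:, 3]
y = (risk + rng.normal(0, 0.5, n) > 2.0).astype(int)

model = LogisticRegression(max_iter=1000).fit(X, y)

# Score a new claim one month after the injury
new_claim = np.array([[1, 1, 52, 14]])
prob_outlier = model.predict_proba(new_claim)[0, 1]
print(f"Probability this claim becomes an outlier: {prob_outlier:.0%}")
```

A flagged claim would then get earlier nurse case management and a more realistic reserve, which is the whole point of shrinking the analysis window.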

The cons are:

The data was from Liberty Mutual claims only. A mix of carriers would have been more accurate. Self-sourced data can lead to erroneous conclusions.
Using old data to forecast new trends may be inaccurate. Actuaries argue the triangulation method is better than regression because it gives more weight to the most current conditions (see the sketch after this list).
The data would likely have to be analyzed on a state-by-state basis, as what happens in Montana cannot be used to forecast what will happen in West Virginia. There are too many state-specific variables; the NCCI can vouch for that assumption.
With this large an amount of data, some overall assumptions would have to be made, and a model is only as good as its assumptions.
Data security - would someone want to pirate the model?
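For readers who have not run across triangulation, here is a minimal chain-ladder sketch in Python. Each accident year is projected from its latest observed value using age-to-age development factors, so the newest diagonal of actual experience drives the projection. The loss triangle below is entirely made up for illustration.

```python
# Minimal chain-ladder ("triangulation") sketch with invented figures.
# Cumulative paid losses by accident year (rows) and development age (cols),
# in $000s; None marks development ages not yet observed.
triangle = [
    [1000, 1500, 1800, 1900],   # accident year 1
    [1100, 1700, 2000, None],   # accident year 2
    [1200, 1800, None, None],   # accident year 3
    [1300, None, None, None],   # accident year 4
]

n = len(triangle)

# Volume-weighted age-to-age factors from the observed part of the triangle
factors = []
for age in range(n - 1):
    num = sum(row[age + 1] for row in triangle if row[age + 1] is not None)
    den = sum(row[age] for row in triangle if row[age + 1] is not None)
    factors.append(num / den)

# Project each accident year to ultimate by applying the remaining factors
for i, row in enumerate(triangle):
    last_age = max(a for a, v in enumerate(row) if v is not None)
    ultimate = row[last_age]
    for f in factors[last_age:]:
        ultimate *= f
    print(f"Accident year {i + 1}: latest paid {row[last_age]:,}k, "
          f"projected ultimate {ultimate:,.0f}k")
```

A regression fit over many old policy years would smooth over a recent shift in medical costs or claim handling; the chain-ladder projection, by contrast, always starts from each year's most recent actual figure.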

The bottom line is that I want to be the last person to sound critical of the data modeling performed by Liberty Mutual. I oversaw a state agency data transfer that was on a smaller scale, but still very large. I called it the never-ending nightmare.

I was trained by Liberty Mutual many years ago, in what was the gold standard for training new recruits, and I also used to be a systems engineer. That is why I assumed a little more than usual in this article.

James J. Moore is owner of J&L Risk Management Consultants in Raleigh, N.C. This column was reprinted with his permission from his blog, http://blogs.cutcompcosts.com/
