Taming the Beast

Using predictive modeling to transform an unwieldy asset into powerful results

January 06, 2016

Predictive modeling is now viewed as a valuable asset in the claims management space. Yet capabilities vary greatly, and confusion persists around the definition of predictive modeling and its true impact on claims outcomes. The need to effectively inject the output of data analytics tools into claims workflows presents a further, ongoing practical challenge. Possessing strong analytic tools is critical, but ultimate success is not defined solely by having the best algorithm; it rests on the organization’s ability to operationalize its models in a way that achieves improved claims results.

In terms of the core claims management process (contact, investigation, etc.), advancements in technology certainly have bolstered the ability to access information and improve efficiency, but the fundamental, day-to-day best practices remain constant. The advantage of this long-standing method is that it applies a proven set of adjudication standards that promote a fairly comprehensive and consistent approach to claims handling. The drawback is that a large subset of claims may be over-managed, with meaningful effort and expertise applied to simple claims that have low ultimate exposure. For claims supervisors who may oversee 800 claims or more, low-value cases divert attention from the potentially explosive ones that truly deserve it. For handlers, a “check the box” mentality may develop as stretched resources are held to a standard that they know, intuitively, is excessive for a significant portion of their demanding claims inventory.

The constant operational challenge is to heighten focus on the higher exposure, cost-driving segment of claims without compromising overall claims management discipline. Without the proper tools and approach, we risk missing the opportunities through too narrow a focus or watering down the results by casting too wide a net. The answer lies in the ability to accurately segment claims.

Stop Chasing Fireflies

Effective segmentation is a critical requirement for a differentiated claims management approach that recognizes that all claims are not created equal. And yet claims segmentation is a polarizing concept. A decade ago, the suggestion that an algorithm could spot a group of potentially explosive claims more reliably than a seasoned claims veteran was commonly challenged (“I know a bad claim when I see it!”). In response, segmentation techniques evolved to use business rules based on industry experience or general trends found in claims data. The result has been an oversimplified reliance on highly intuitive severity drivers, such as state, injury, and age. Instead of pinpointing the 20 percent of claims that drive 70 percent of losses, the output of rudimentary rules and models may lead claims organizations to focus on 40 percent of claims that account for only 60 percent of the dollars spent—not an especially effective proposition.

Enter predictive models—sophisticated analytic tools developed using several thousand historical claims records that form the launching pad for reliable claims segmentation and a differentiated claims process. With advanced predictive models now in play, the prevailing challenge becomes how to put them to work.

A Game-Changing Approach

The emphasis placed on the technology that builds and supports predictive modeling is at once liberating and constraining. Without these tools, the power of predictive modeling is lost, but producing the output of a model is just one of the critical steps toward better outcomes. Described below are the four stages of a more progressive claims management approach fueled by predictive modeling. The approach rests on a solid foundation of best practices but injects sophisticated data analytics, redesigned oversight protocols, disciplined execution, and rigorous monitoring and measuring of claims outcomes. This claims management approach continues to be tested in the field, yet early returns indicate millions of dollars in potential cost avoidance.

1. Building the Asset

Think of all that makes up a claim: the circumstances that came together to create the occurrence, coupled with the characteristics that define the people and place involved. These data points translate into hundreds of variables, or predictors, found in both structured (codes and payment data) and unstructured (claim notes) data. Predictive modeling sifts through those hundreds of variables to pinpoint a core set with a strong predictive relationship to a particular type of outcome.

For instance, when studying unrecognized severity, you can’t rely on blanket statements about the relationship of age, injury type, or jurisdiction to predict ultimate claim severity. In each of these cases, there is a range of experience. Workers’ compensation claimants aged 55-65 may have more severe cases in general, but there are many instances when these workers resolve their claims quickly and for relatively low cost. The same is true of most injury types and jurisdictions. General trends are important but tell an incomplete story when it comes to identifying explosive claims or other opportunities for management intervention. It takes a combination of variables, some obvious and some indiscernible, to accurately predict the probable direction of a claim.

How does the predictive modeling process work? Once the modeling technique is determined, based on the business problem being solved, there are three critical steps to prepare a model for implementation:

  • Variable selection and refinement
  • Model development
  • Validation

Variable selection involves identifying candidate variables and making them ready for the modeling process. The process taps business and data experts to ensure that all possibilities are explored. The end product may be hundreds of possible characteristics that range from intuitive to innovative. The resulting set of data elements forms the basis of the first model run. The idea is to cast a wide net at first and whittle the variables down through the optimization stage.
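To make the wide-net stage concrete, here is a minimal Python sketch of screening a candidate variable set. The claim fields, sample values, and use of a mutual-information screen are illustrative assumptions, not a description of any particular model.

```python
# A minimal sketch of the "wide net" stage of variable selection, assuming a
# hypothetical claims extract. Field names, sample values, and the screening
# statistic are placeholders, not a prescribed method.
import pandas as pd
from sklearn.feature_selection import mutual_info_classif

# Hypothetical candidate variables drawn from structured claim data.
claims = pd.DataFrame({
    "claimant_age":      [34, 58, 61, 45, 29, 52],
    "days_to_report":    [1, 14, 3, 30, 2, 7],
    "injury_code":       [2, 5, 5, 3, 1, 4],
    "attorney_involved": [0, 1, 1, 0, 0, 1],
    "prior_claims":      [0, 2, 1, 0, 0, 3],
})
# Target: did the claim ultimately exceed a severity threshold?
severe = pd.Series([0, 1, 1, 0, 0, 1])

# Screen the wide candidate set; only variables showing predictive signal
# survive into the next model run.
scores = mutual_info_classif(claims, severe, discrete_features=True, random_state=0)
ranked = pd.Series(scores, index=claims.columns).sort_values(ascending=False)
shortlist = ranked[ranked > 0].index.tolist()
print(ranked)
print("Variables retained for the next iteration:", shortlist)
```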

One rich and often untapped variable source worth highlighting is claims handler notes. System constraints and poor data capture create blind spots in structured fields. Meanwhile, claims handlers have a penchant for documenting important claim and claimant information, providing a line of sight into accident details or a claimant’s condition that improves model effectiveness. Technical model development that excludes text mining and analysis may still prove effective, but even if the core of a model favors structured data, opportunities to increase accuracy and provide better insight to users exist through the use of text variables.

Development builds on variable refinement and consists of the supporting techniques that determine the strength and breadth of relationships across variables. Only variables that demonstrate power in predicting the target outcome remain in play. For instance, a key indicator for claim severity is the number of references to “pain” found in the notes. The model development process may not only identify the link between this variable and claim severity, but also determine how different frequency ranges impact severity. Claims with fewer than three references to pain, for example, could be shown to have a lower likelihood of severity than the average, while those with more than 10 would indicate a claim with a significant chance of exceeding $100,000 in incurred costs.
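As an illustration of how a note-derived variable might be banded during development, the hypothetical sketch below counts references to “pain,” groups claims into the frequency ranges described above, and checks how often each band exceeds a $100,000 severity threshold. The sample figures are invented for the example.

```python
# A minimal sketch of banding a note-derived variable, assuming a hypothetical
# history of "pain" mention counts and ultimate incurred costs. The cut points
# and the $100,000 threshold mirror the example above but are illustrative only.
import pandas as pd

history = pd.DataFrame({
    "pain_mentions":     [0, 2, 4, 7, 11, 15, 1, 9, 12, 3],
    "ultimate_incurred": [8_000, 12_000, 45_000, 90_000, 140_000,
                          210_000, 6_000, 75_000, 160_000, 30_000],
})

# Band the frequency of "pain" references the way a development exercise might.
bands = pd.cut(history["pain_mentions"],
               bins=[-1, 2, 10, float("inf")],
               labels=["fewer than 3", "3 to 10", "more than 10"])

# Share of claims in each band that breach the severity threshold.
breach_rate = (history["ultimate_incurred"] > 100_000).groupby(bands, observed=True).mean()
print(breach_rate)
```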

Validation, the final step, determines the accuracy and precision of a model. The standard approach begins with data sampling. At the time the data set for a model is identified, sampling occurs to create two or three subsets. One subset is used to build the model; the others are used to validate the model. Since the test data subset contains known outcomes, the model’s prediction of the outcome can be compared and measured against actual ultimate results. Models that have strong “lift,” or accuracy, can reliably segment outcomes. Overall, claim severity models have proven to be very accurate, depending on the breadth of data utilized and the timing of the application of the model(s) within the claims management process.
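The build-and-validate split and a simple lift check might look something like the sketch below, which uses synthetic data and a basic logistic regression purely for illustration; the techniques behind any given severity model will differ.

```python
# A minimal sketch of the build/validate split and a simple lift check, using
# synthetic data in place of real claim records. The features, model choice,
# and decile-based lift calculation are illustrative assumptions.
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(2_000, 5))                                   # stand-in claim variables
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=2_000) > 1).astype(int)  # "severe" flag

# One subset builds the model; the held-out subset validates it.
X_build, X_test, y_build, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
model = LogisticRegression().fit(X_build, y_build)

# Lift: how concentrated are actual severe claims in the top-scoring decile?
scores = model.predict_proba(X_test)[:, 1]
deciles = pd.qcut(scores, 10, labels=False, duplicates="drop")
top_rate = y_test[deciles == deciles.max()].mean()
base_rate = y_test.mean()
print(f"Top-decile severity rate: {top_rate:.2%}, overall: {base_rate:.2%}, "
      f"lift: {top_rate / base_rate:.1f}x")
```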

2. Optimizing the Asset

Optimizing the asset requires a supporting structure and coordinated approach that includes technology, people, and procedures. It is the mechanics of how predictive model outputs are translated into improved claims outcomes. The endgame is not generating a model score and simply dropping it into the claims handler’s queue for processing. It requires an organized operational response. Powerful models are proven to be directionally correct, and the valuable output produced—reliable claims segmentation—requires a tailored set of actions that take full advantage of the insight provided. Otherwise, like an unused gym membership, the only result is a poor return on investment.

The supporting structure also must be constructed to optimize the analytics tools embedded within it. For instance, a revamped supervisory oversight model can be established in which the timing and frequency of supervisor intervention are guided by predicted claim severity. Although human judgment is always applied, the model drives a claims management process that promotes supervisor engagement for the claims that need it most. The combination of multiple analytics tools firing during the claim cycle and well-timed claims supervisor touch points enables a dynamic intervention strategy to be created, modified, and executed.
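A hypothetical sketch of how a predicted severity score could translate into supervisor touch-point cadence appears below; the tiers, cut points, and review intervals are placeholders, not a recommended protocol.

```python
# A minimal sketch of mapping a model's predicted severity score to a
# supervisor review cadence. Cut points and intervals are hypothetical.
def review_interval_days(severity_score: float) -> int:
    """Map a predicted severity score (0-1) to days between supervisor reviews."""
    if severity_score >= 0.80:   # likely explosive claim: tight oversight
        return 7
    if severity_score >= 0.50:   # elevated exposure: regular check-ins
        return 30
    return 90                    # low predicted severity: routine diary

for score in (0.92, 0.61, 0.15):
    print(f"score={score:.2f} -> supervisor review every {review_interval_days(score)} days")
```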

3. Rigorous Execution

Model execution focuses on the less tangible, but essential, concepts of culture and motivation. The success of any program hinges on execution; it requires organizations to take repeatable, meaningful action without overemphasizing procedure. If the organizational culture is resistant—seeing the approach as something that must be complied with rather than embracing it—the use of the model and the processes put in place will be viewed as burdensome. It is critical for those involved to understand the “why” of the program and how it can positively transform their role and impact the organization.

Another cultural component centers on the definition of “normal” and the notion of “predict.” Normal, within the context of claims management, can be defined as adherence to best practices—a prescribed response as the claim follows its expected course. The concept of “predict” refers to what will likely happen in the future, even if that ultimate outcome is not yet apparent. When an opportunity is identified by a predictive model, the current state of the claim may not resemble what the model says is likely to occur. It is important to recognize this apparent disconnect. Many predictive modeling programs suffer because those involved with claims intervention fail to take action. They fall back on human bias and dismiss the science as a false positive; unfortunately, many of these cases turn bad.

To optimize the program, even a claim that appears “normal” must be viewed and proactively managed as if it were already deteriorating to mitigate the exposure. Trust in the model is critical. This is a change in mindset that requires planning, thoughtful implementation, and constant reinforcement.

4. Measuring the Impact

Measurement is often the missing link in program designs and implementations. The next project beckons, and more time and data may be necessary to obtain credible results. While assessing outcomes can’t happen right away, planning the measurement process needs to be part of the program’s design, and monitoring needs to occur from the start. Monitoring ensures that the program is applied consistently. Consistency improves the ability to link action with an outcome. Monitoring also creates a mechanism to enhance the program, enabling it to evolve and become naturally embedded in the claims organization. Ultimately, only monitoring and measurement can validate that the program truly works.
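One simple, hypothetical way to monitor both consistency and impact is sketched below: track whether the prescribed intervention occurred on model-flagged claims and compare outcomes between the two groups. The fields and figures are invented for illustration.

```python
# A minimal sketch of monitoring program execution and outcomes, assuming a
# hypothetical log of flagged claims, whether the prescribed intervention
# occurred, and ultimate incurred cost. Illustrative only.
import pandas as pd

log = pd.DataFrame({
    "flagged_by_model":   [1, 1, 1, 1, 0, 0, 1, 0],
    "intervention_taken": [1, 0, 1, 1, 0, 0, 0, 0],
    "ultimate_incurred":  [60_000, 180_000, 75_000, 55_000,
                           20_000, 15_000, 210_000, 25_000],
})

flagged = log[log["flagged_by_model"] == 1]
# Consistency of execution: how often the prescribed action actually happened.
compliance = flagged["intervention_taken"].mean()
# Outcome comparison for flagged claims with and without intervention.
by_action = flagged.groupby("intervention_taken")["ultimate_incurred"].mean()
print(f"Compliance rate on flagged claims: {compliance:.0%}")
print(by_action)
```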

Put Your Assets to Work

The conventional approach to claims administration, from caseloads to claim-level tasks, remains rooted in old-school logic. If the industry truly knows a bad claim when it sees one, how do so many become unpleasant surprises? And why do we need such burdensome oversight of every claim?

There is a new alternative—one where sophisticated predictive analytics can power effective claims segmentation and a differentiated approach to claims management. This approach enhances claims expertise; it does not replace it. With rigorous planning and execution, modeling-based claims segmentation will drive results that can be quantified and repeated. Through a progressive, disciplined approach to deploying claims predictive modeling, the industry has an opportunity to move beyond the rhetoric and achieve tangible results.  

About the Authors
Steven Laudermilch

Steven Laudermilch is senior vice president, claims, with ACE Group. He has been a CLM Fellow since 2013 and can be reached at www.acegroup.com.

Keith Higdon

Keith Higdon is vice president, claims data analytics, with ACE Group. He has been a CLM Fellow since 2013 and can be reached at www.acegroup.com.
