We are now several months into the new general insurance pricing practices.¹ For many insurance providers, preparing for the new regulations meant re-evaluating their data enrichment strategies and taking a fresh look at the datasets now available to the market for assessing risk, particularly at renewal. There is also a growing understanding that it’s not just the predictive power of the data that’s important – how that data is accessed is just as key.
Without a defined strategy for assessing new data to support fair and accurate pricing, it’s easy to lose oneself in a data black hole. Too much data, drawn from varied platforms and suppliers, can clog the quotation process and slow the time to quote. Too little data risks gaps in knowledge – and, potentially, higher claims costs, underinsurance and unsatisfied customers. So how does the insurance professional avoid these pitfalls?
Assessing the value of new data for the insurance market is what we do at LexisNexis Risk Solutions, all day, every day.
Narrowing down the underlying need for the data is fundamental. Insurance providers need to ask themselves: what outcomes are required, and how well do the predictive qualities of different datasets match them? Why is the data required, and when is it needed in the policy cycle? Is its predictive capability just as good for claims as it is for fraud prevention or policy cancellations? Where in the business is it needed most, and why? Who is the customer? Is it equally effective in motor, home or commercial insurance?
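That last question – whether a dataset predicts equally well across outcomes – is something a data scientist can test directly. The sketch below is a minimal, hypothetical illustration: the column names are invented, the data is synthetic, and single-feature logistic regression scored by AUC is just one reasonable choice of test, not a prescribed method.

```python
# Hypothetical sketch: score one candidate attribute against several
# outcomes (claims, cancellation, fraud) to see where it predicts best.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 10_000

# Synthetic stand-in for a real book of policies; in practice this would
# be historical policy data joined to the candidate dataset.
policies = pd.DataFrame({
    "candidate_attribute": rng.normal(size=n),    # the new dataset's signal
    "claim": rng.integers(0, 2, size=n),          # did the policy claim?
    "cancelled": rng.integers(0, 2, size=n),      # mid-term cancellation?
    "fraud_flag": rng.integers(0, 2, size=n),     # confirmed fraud?
})

X = policies[["candidate_attribute"]]
for outcome in ["claim", "cancelled", "fraud_flag"]:
    X_train, X_test, y_train, y_test = train_test_split(
        X, policies[outcome], test_size=0.3, random_state=0
    )
    model = LogisticRegression().fit(X_train, y_train)
    auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
    print(f"{outcome}: AUC = {auc:.3f}")  # ~0.5 on random data; real data should beat it
```

A dataset that scores well on one outcome but poorly on the others may still be worth buying, but the business case changes.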
By asking these questions of the data, insurance providers can assess how a new dataset could benefit different functions and areas of the business. For example, ghost broking remains an ever-present threat to consumers, costing victims an average of £559.² If an insurance provider chose to invest in new identity verification solutions using email intelligence, this could help spot ghost broking activity in motor insurance and support fraud prevention across travel and home insurance lines too. Alternatively, granular data used to pre-fill home insurance applications can equally be used for pre-quote validation of landlord portfolios.
Lack of data should never be a problem in risk assessment, given that the market today has access to over 2.3 billion records from over 30 data sources,³ but when really drilling down to what will bring the most value, historical policy and claims data should perhaps be top of the ‘wish list’. Attributes built from this data can help insurance providers understand the market’s experience of a customer, not just their own.
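To illustrate what such attributes might look like, the sketch below rolls a hypothetical market-wide policy history table up into per-customer attributes. The table layout and field names are invented for the example; real contributed datasets are far richer.

```python
# Hypothetical sketch: roll up market-wide policy and claims history
# into per-customer attributes an insurer could use at quote time.
import pandas as pd

# Invented stand-in for contributed policy history across many insurers.
history = pd.DataFrame({
    "customer_id": [1, 1, 1, 2, 2, 3],
    "insurer":     ["A", "B", "A", "C", "A", "B"],
    "cancelled":   [0, 1, 0, 0, 0, 1],
    "claim_count": [0, 2, 1, 0, 0, 3],
})

attributes = history.groupby("customer_id").agg(
    insurers_seen=("insurer", "nunique"),      # breadth of market experience
    prior_cancellations=("cancelled", "sum"),  # cancellations across all insurers
    total_claims=("claim_count", "sum"),       # claims across all insurers
).reset_index()

print(attributes)
```

The point of the aggregation is the market view: a customer with one clean policy at insurer A may still show cancellations or claims elsewhere.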
Whether an insurance broker is seeking data to predict cancellation risk, or an underwriter is interested in reducing claims, all new data must be looked at in the context of how it complements existing datasets. What does it offer that’s different? Does it provide a solution to more than one problem? For instance, motor policy history offers predictive power for cancellation as well as for claims.
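A common way to test whether a new dataset complements, rather than duplicates, data already held is to measure its incremental lift: build a model on existing attributes alone, then again with the candidate added, and compare. A minimal sketch on synthetic data, assuming invented features and a logistic regression baseline:

```python
# Hypothetical sketch: measure the incremental lift a candidate
# dataset adds over a model built on existing attributes only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(7)
n = 10_000

existing = rng.normal(size=(n, 3))             # attributes already in use
signal = existing[:, 0] + rng.normal(size=n)   # candidate sees the outcome driver...
candidate = signal + rng.normal(size=n)        # ...through its own noisy view
outcome = (signal + rng.normal(size=n) > 0).astype(int)  # e.g. cancellation

def auc_for(features):
    X_tr, X_te, y_tr, y_te = train_test_split(
        features, outcome, test_size=0.3, random_state=0
    )
    model = LogisticRegression().fit(X_tr, y_tr)
    return roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])

baseline = auc_for(existing)
augmented = auc_for(np.column_stack([existing, candidate]))
print(f"baseline AUC = {baseline:.3f}, with candidate = {augmented:.3f}")
# If the uplift is negligible, the new data mostly duplicates what is held.
```

Running the same comparison per line of business and per outcome answers the second question too: a dataset that lifts more than one model is solving more than one problem.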
Lucy Small | Senior Data Scientist, Insurance | LexisNexis® Risk Solutions
Footnotes
1. https://www.fca.org.uk/publications/policy-statements/ps21-11-general-insurance-pricing-practices-amendments
2. https://www.actionfraudalert.co.uk/da/387634/Do_you_know_what_a_ghost_broker_is_.html
3. Source: LexisNexis Risk Solutions, UK and Ireland.