
23 Mar 2021

AI and claims management: what really matters?

By: Jimmy Spears, MAL, MSPM, AIC, AAM, AIS

[Image: man in a suit pointing forward, creating the illusion he’s touching a digital screen. Image by Gerd Altmann from Pixabay]

Artificial intelligence has come a long way since the term was first coined in 1955. It’s now well established in our day-to-day lives. Despite this, the public’s understanding of what it actually is remains pretty limited. In 2019, a survey of over 66,800 people found that while 52% consider themselves to have ‘some knowledge’ of AI, only 10% are well educated on the matter.

The study, by the Mozilla Foundation, shows that a lack of knowledge and understanding hasn’t stopped people from having high expectations, with the majority of respondents believing AI should be ‘way’ smarter than humans.

What does this mean for the insurance industry? AI needs not only to be logistically beneficial, but also to match or exceed the performance of a human. And in case that wasn’t enough, while people attach a strong sense of hope to AI, they have equal levels of concern too.

There is an automatic level of skepticism toward new technologies, and rightfully so. It is therefore the responsibility of the creators to build that confidence and trust. If the AI does what it says it does, this shouldn’t be too daunting a challenge.

Cutting through the noise

At the end of the day, not all AI models provide value. One of the biggest challenges insurance carriers have right now in deploying AI is cutting through all the buzzwords and deciding on the right solution for them. 

One school of thought is that an AI model must be interpretable, rather than just explainable. Explainable AI separates the model into the actual technology and a visual tool (e.g. heat maps) that tries to explain it. Separating the two hides the decision-making process behind the output, like trying to find Waldo when Waldo has been completely covered by a bus. It’s designed for you not to find him.

For damage appraisal, you want to know how the list of repairs has been prioritized. Is it ranked by price, the danger it poses to the driver, or the amount of time it will take to fix?
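
As a purely illustrative sketch (the repair operations and attributes below are hypothetical, not Tractable’s data model), here is how the same list of repairs ranks differently depending on the criterion, which is exactly why you need to know which one the AI is using:

```python
# Hypothetical repair operations for a damaged vehicle (illustrative only,
# not Tractable's actual data model).
repairs = [
    {"operation": "replace front bumper", "cost": 850, "safety_risk": 2, "labor_hours": 3.0},
    {"operation": "repair bonnet dent", "cost": 300, "safety_risk": 1, "labor_hours": 2.5},
    {"operation": "replace headlamp", "cost": 420, "safety_risk": 4, "labor_hours": 1.0},
]

# The same list, prioritized three different ways.
rankings = {
    "price": sorted(repairs, key=lambda r: r["cost"], reverse=True),
    "safety": sorted(repairs, key=lambda r: r["safety_risk"], reverse=True),
    "time": sorted(repairs, key=lambda r: r["labor_hours"], reverse=True),
}

for criterion, ranking in rankings.items():
    print(criterion, "->", [r["operation"] for r in ranking])
```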

Keeping algorithms hidden and gated, accessible only to a select few (i.e. the developers), makes it nearly impossible to actually test or investigate how the technology really works. While you can appreciate a certain level of secrecy, e.g. to stop competitors from replicating their success, it prioritizes the creators of the AI. The end user is pretty much left to fend for themselves.

Relying on systems that can make inaccurate decisions will cost a company and its customers greatly, not just financially but also in service and reputation. Carriers need to be able to trust that the solution they’re using is as accurate as possible and provides the best possible value.

What should insurers look for?

The goal for any AI solution should be to make it interpretable. You want to know the what and the why – i.e. what the damage is, and why the AI has come to that conclusion. It’s not about understanding the how, because that would require an in-depth understanding of machine learning, which most people don’t have.
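
To make the ‘what and the why’ concrete, here is a minimal sketch of what an interpretable appraisal result could look like. The structure and field names are hypothetical, assumed for illustration, and are not Tractable’s actual output format:

```python
# A hypothetical "what and why" appraisal result (illustrative only, not
# Tractable's actual output format).
assessment = {
    "what": {
        "part": "front bumper",
        "damage": "dent with paint scrape",
        "recommendation": "repair",
    },
    "why": {
        # Evidence the claims handler can check against the photos themselves,
        # rather than a heat map bolted onto a black box after the fact.
        "evidence": "dent visible in photos 2 and 4; no deformation of the panel behind it",
        "precedent": "similar dents on this panel type are typically repaired, not replaced",
        "confidence": 0.93,
    },
}

print(assessment["what"]["recommendation"], "because", assessment["why"]["evidence"])
```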

Transparency does not automatically equate to giving away a competitive edge. If anything, it does the exact opposite, by building a stronger, more trustworthy relationship with insurers.

As we’ve already discussed, you need to know that the technology is proven to be as accurate as a human assessor, or better. Think of the AI as an auto repairer who has repaired millions of vehicles, consistently to standard. It shouldn’t have large gaps in its coverage of car makes or models, and it must be able to identify when repair versus replacement is most appropriate.

What makes Tractable different?

Tractable uses deep learning to train its AI on over 100 million photos shared by customers and partners, developing algorithms to recognize general visual patterns. As the platform grows, Tractable’s AI is constantly improving. It’s learning from more and more images, applications and integrations. This means that the platform works universally, no matter the country. 
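
For readers curious about the general technique, the sketch below shows transfer learning in its simplest form: fine-tuning a pretrained image model on labeled damage photos. It is a minimal illustration of the approach, assuming hypothetical damage categories, and is not Tractable’s actual architecture, data or training pipeline.

```python
# Minimal transfer-learning sketch: fine-tune a pretrained image model to
# recognize visual damage patterns in photos. Illustrative only -- not
# Tractable's actual architecture, data or training pipeline.
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 5  # e.g. scratch, dent, crack, missing part, no damage (hypothetical labels)

# Start from a model pretrained on generic images, then swap in a new head
# for the damage categories.
model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

def train_step(images: torch.Tensor, labels: torch.Tensor) -> float:
    """One optimization step over a batch of labeled claim photos."""
    model.train()
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```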

By working with Tractable, a number of carriers have achieved up to a 2% overall accuracy improvement across deliverable claims. The robustness of Tractable’s AI shows in its ability to make correct decisions even on car models it has never seen before, such as those that are new or redesigned.

Just like a human can look at a vehicle they have never seen before and recognize that there is damage, without needing to compare it with an undamaged vehicle, so can Tractable’s AI. And it doesn’t just work in “perfect” studio conditions; it can make determinations based on images taken in the real world, meaning imperfect lighting and framing rather than photos taken by professional photographers or in staged environments.

Tractable is already being used by 21 of the world’s top 100 insurers and is live in 14 countries. This equates to over $1bn (and counting!) in auto accident claims processed, accelerating the recovery for over a million households. To learn more about how the technology works, book a demo with the team today.
