Deep Fakes – a threat to insurance companies?

Written by Melanie Hoppen | 10.02.2021

"Deep fakes" are one of the downsides of digitalization. These photos, audio and video recordings produced with the help of AI reflect a reality that does not exist. And that has consequences for the insurance industry.

In preschool and elementary school, children are taught that it is best to believe only what they have seen with their own eyes or heard directly. But what if what we see doesn't exist at all and is instead a computer-generated fake? So-called "deep fakes" can appear so real that humans can no longer recognize them as manipulations.

Spectacular case in Great Britain

With enough criminal intent and an AI-equipped computer system, credible and convincing forgeries can be created. A case from the UK made headlines in the fall of 2019: after receiving a call from his boss, an employee complied with the request and transferred over 200,000 euros to the specified account. The employee was puzzled, but because he recognized his manager's voice, he followed the instructions.

But the call was a "deep fake": the familiar voice had been produced synthetically. The resulting loss had to be settled under a fidelity insurance policy.

New forms of insurance fraud

Insurance fraud using fake evidence is not a new phenomenon. But while in the age of analogue photography it took considerable effort and expertise to forge evidence photos convincing enough not to arouse suspicion, today tools for image manipulation are part of every standard image editing program. Objects can be cut out and modified with just a few mouse clicks, and AI systems then refine the manipulations.

The relentless advance of technology is producing "deep fakes" in various guises and thus various forms of fraud, such as manipulated photos attached to a claim as proof of damage, or fabricated evidence of ownership of an allegedly stolen item.

Video recordings purporting to come from surveillance equipment and to serve as proof of damage are also conceivable. Or, as in the case described above, telephone calls and voice messages that prompt people to take actions that ultimately result in losses the insurer has to settle.

The arsenal of tools is readily available, and this is far from being just about the casual fraudster seeking settlement of a claim for alleged vandalism.

And let's not forget the reverse effect: genuine evidence can be called into question simply by pointing to the possibility of "deep fakes".

The world of risk management is thus becoming even more complex.

AI can unmask AI
 
The medium- and long-term loss potential that "deep fakes" pose for the insurance industry is likely to be significant. But there is no reason to be pessimistic, because companies are not helplessly at the mercy of these developments. AI can be used not only against insurance companies, but also by them. And, at least for now, "deep fakes" can be unmasked and manipulations detected.

Small changes in the modulation of a voice, inaudible to humans, become visible in a computer analysis. Nuances in the lighting conditions of a photo or video that are difficult for the human eye to perceive can be picked up by AI analytics.
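To make this concrete, here is a minimal sketch of one classic image-forensics technique, Error Level Analysis (ELA), which surfaces compression inconsistencies that are hard to spot by eye. It is an illustration of the general idea, not the specific detection method referred to above; the file name claim_photo.jpg is hypothetical, and real deep-fake detection systems combine many such signals with trained models.

```python
# Minimal sketch of Error Level Analysis (ELA): re-save a JPEG at a known
# quality and measure per-pixel differences. Regions edited after the
# original compression often show a different error level than the rest.
import io

from PIL import Image, ImageChops


def error_level_analysis(path: str, quality: int = 90) -> Image.Image:
    original = Image.open(path).convert("RGB")

    # Recompress the image in memory at a fixed JPEG quality.
    buffer = io.BytesIO()
    original.save(buffer, format="JPEG", quality=quality)
    buffer.seek(0)
    recompressed = Image.open(buffer)

    # Pixel-wise absolute difference between original and recompressed image.
    diff = ImageChops.difference(original, recompressed)

    # The differences are usually faint, so scale them up for visual review.
    extrema = diff.getextrema()
    max_diff = max(channel_max for _, channel_max in extrema) or 1
    return diff.point(lambda value: min(255, value * 255 // max_diff))


if __name__ == "__main__":
    # Hypothetical claim photo, used here for illustration only.
    error_level_analysis("claim_photo.jpg").save("claim_photo_ela.png")
```

Areas pasted into a photo after its original JPEG compression tend to recompress differently and therefore stand out in the scaled difference image, which is exactly the kind of subtle trace that automated analysis can flag for a human claims handler.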

"Deep fakes" should not be taken lightly by the insurance industry, but those who embrace the challenges and invest in AI technology to prevent fraud are putting themselves on equal footing with the criminals. In this way, insurance companies prevent "deep fakes" from becoming a nightmare for their business model.

Would you like to learn more about our AI solutions for fraud identification? Then contact our expert Karsten Schmitt directly or read more about the topic here.