Potential and risks: Large language models (LLMs) for insurers


Self-learning artificial intelligence (AI) models have made significant progress in recent years. One of these remarkable models is ChatGPT.

Based on the large language model (LLM) GPT-3.5, ChatGPT is characterized by its ability to understand natural language and generate human-like text. Among other things, it can be used to create content and answer complex questions. In this article, we'll look at how ChatGPT works, its areas of application, and its opportunities and risks in the insurance industry.

Artificial intelligence in LLMs

The machine learning behind language models (and thus behind the underlying LLM) belongs to a subfield of artificial intelligence that deals with the processing of natural language (natural language processing, or "NLP").
Large language models can understand, process, and generate natural language, and serve as a form of generative artificial intelligence for text-based content.

LLMs can be used for many different tasks, such as answering questions and summarizing, completing, translating, or generating text. True AI is self-learning: it improves with experience, continuously refining its decision rules with the help of training data. Unlike with rule-based software, the results of its calculations therefore cannot be predicted exactly.
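To make one of these tasks concrete, here is a minimal sketch of question answering in Python. It assumes the open-source Hugging Face transformers library; the model name and the insurance-flavored example text are illustrative assumptions, not a specific product setup.

```python
# A minimal sketch of one LLM task (question answering), assuming the
# Hugging Face "transformers" library is installed.
from transformers import pipeline

# Load a small, publicly available question-answering model
# (an example choice, not a production recommendation).
qa = pipeline("question-answering", model="distilbert-base-cased-distilled-squad")

result = qa(
    question="When does the policy cover water damage?",
    context=(
        "The household policy covers water damage caused by burst pipes "
        "from the start date of the contract. Flood damage requires the "
        "optional natural hazards add-on."
    ),
)
print(result["answer"])  # e.g. "from the start date of the contract"
```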

Large language models

LLMs use algorithms that represent words and punctuation marks (in their respective context) as numbers on which arithmetic operations can be performed, thereby capturing the complexity of language.
This information is stored in parameters. Since human language is extremely complex, a great many parameters are needed to describe it, hence the term "large" language models. These pre-trained language models can handle complex questions and therefore provide coherent answers.
LLMs are based on artificial neural networks. These networks use deep learning algorithms and are trained on vast amounts of text and other information. Today, LLMs can even solve tasks for which they were not explicitly trained. For example, they can now write program code in various programming languages.
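To make the notion of "parameters" tangible, the following sketch counts the weights of a small, publicly available model. The library and model choice are assumptions for illustration only.

```python
# A small illustration of what "parameters" means: counting the weights
# of a publicly available model. Assumes the Hugging Face "transformers"
# library (with PyTorch) is installed.
from transformers import AutoModel

model = AutoModel.from_pretrained("gpt2")  # the smallest GPT-2 variant
n_params = sum(p.numel() for p in model.parameters())
print(f"{n_params:,} parameters")  # roughly 124 million

# By comparison, GPT-3 has around 175 billion parameters, more than a
# thousand times as many, which is what makes these models "large".
```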

The functionality of LLMs

In addition to ChatGPT, there are many other artificial language models with widely varying purposes and areas of use. These models often acquire their knowledge and information from different sources and periods of time.

GPT-3, for example, is based on knowledge up to 2021 and was trained with around 175 billion parameters. GPT-4, on the other hand, can access current data from the Internet via browsing and is believed to be significantly larger, although OpenAI has not disclosed its exact parameter count.
This alone shows how great the differences between individual language models can be. Nevertheless, all "intelligent" language models share certain basic skills and functionalities.
They generate logical, grammatically correct, and correctly spelled text, and they take previous conversations with the user into account. However, a distinction must be made between basic and functional skills. Functional skills might include the creation of strategies, routines, or even imagination.
To date, functional skills have only been observed in isolated cases, mainly because so-called "weak AI" is currently in use.
Unlike "strong AI," "weak AI" is designed solely for specific tasks; it cannot reach (let alone surpass) human intelligence or apply its problem-solving skills to arbitrary problems.

To develop this level of AI (also known as "human-level AI"), hardware comparable to the human brain would be required. In 2016, the Center for Data Innovation provided some examples of this type of AI. "Strong AI" also includes the ability to write program code, which is already possible today. In just seven years, we have therefore seen certain "weak AI" make remarkable progress.
Among other things, AI can now create deceptively realistic images. Well-known platforms for such images include Midjourney, DALL-E, and ArtSmart.

Large language models can draw on vast amounts of information from their training data and, in certain circumstances, on current data from the Internet. This makes them a good starting point for research, for example, since targeted questions can yield extremely comprehensive answers. Nevertheless, automatically generated code and deceptively realistic images pose major risks for many financial and insurance service providers. The question therefore remains as to how these sectors can benefit from the new technology. And what are the limits of LLMs when it comes to insurance?

Opportunities and risks of using artificial language models in the insurance industry

Like other sectors, the insurance industry must be open to innovation and willing to experiment. In such a heavily regulated industry, however, it is often difficult to introduce new developments without having to worry about legal issues. The question of data protection is omnipresent and must always be taken into account when dealing with sensitive customer information. Particular attention must be paid to tools operated by companies based outside the European Union.
Even the internal use of ChatGPT can pose a problem if personal data is disclosed, however tempting it may be to have an intelligent language model write the perfect letter to a customer. This example can, however, also serve as an incentive for a company to integrate such language models into its own tools, in accordance, of course, with all data protection regulations. A first prototype is already in use at adesso; it helps not only to obtain knowledge from the Internet but also to quickly and easily find information on the intranet.
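The retrieval idea behind such an intranet assistant can be sketched in a few lines. This is an illustration only, not a description of the adesso prototype; the sentence-transformers library, the model, and the documents are assumptions.

```python
# A minimal sketch of semantic search over internal documents, assuming
# the open-source "sentence-transformers" library. This illustrates the
# retrieval idea only; it is not the adesso prototype.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # small example model

# Hypothetical intranet snippets (in practice, from a document store).
documents = [
    "Claims above 10,000 euros require a second reviewer.",
    "The travel expense form is located in the HR portal.",
    "Customer letters must be approved by the legal department.",
]
doc_embeddings = model.encode(documents, convert_to_tensor=True)

query = "Who has to sign off on large claims?"
query_embedding = model.encode(query, convert_to_tensor=True)

# Rank documents by cosine similarity and show the best match.
scores = util.cos_sim(query_embedding, doc_embeddings)[0]
best = scores.argmax().item()
print(documents[best])
```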
Nevertheless, restrictions also apply to such internal tools. If a "PfefferminziaGPT" is to be used in departments that handle sensitive information (e.g., claims processing), it must be trained on a massive amount of data. At the same time, specific datasets are required to develop the chatbot, and it can be difficult to comply with data protection regulations when using external training data. Internal data alone is usually not sufficient (compare the volume of training data used for ChatGPT).

Self-learning chatbots, on the other hand, can be a huge help for less demanding tasks. With AI, support requests and claims notifications can be pre-filtered and answered in advance, routed to the right department depending on the use case, and follow-up processes can be initiated. A simple chatbot can already reduce the volume of support requests by around 12%. And since more and more companies are opting for chatbots, the share of customer inquiries answered automatically is likely to keep growing. Finally, since OpenAI's groundbreaking release of ChatGPT at the turn of 2022/2023, more and more AI providers have been entering the market, the technology keeps getting better, and systems continue to be improved.
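How such pre-filtering and routing could work in principle is shown by the following sketch, which uses zero-shot classification from the Hugging Face transformers library. The model and the department labels are illustrative assumptions, not a production setup.

```python
# A sketch of routing incoming customer messages to departments using
# zero-shot classification. Library, model, and labels are illustrative.
from transformers import pipeline

classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

message = "My basement flooded last night and I need to report the damage."
departments = ["claims processing", "contract changes", "billing", "general support"]

result = classifier(message, candidate_labels=departments)
print(result["labels"][0])  # most likely department, e.g. "claims processing"
```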

Language models in the field of health insurance

In addition to the advantages described above, such language models can also help with research in the context of health insurance. Detailed diagnoses and medical reports can be summarized, and a search can be run to explain the illness in question to the employee in simple terms, along with any follow-up processes that should be initiated (e.g., obtaining a second opinion). It should be noted, however, that ChatGPT and the like can also create opportunities for fraud, as Christoph Dombrowski explains in this blog post.
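Purely as an illustration of the summarization step, and only on synthetic or anonymized text given the data protection constraints discussed above, a sketch could look like this. The library and model are assumptions.

```python
# A sketch of summarizing a (synthetic, anonymized) medical report.
# Assumes the Hugging Face "transformers" library; the model is an
# example choice. Real medical data would require strict data protection.
from transformers import pipeline

summarizer = pipeline("summarization", model="facebook/bart-large-cnn")

report = (
    "The patient presented with persistent lower back pain radiating into "
    "the left leg. MRI imaging showed a herniated disc at L4/L5 without "
    "nerve root compression. Conservative treatment with physiotherapy "
    "was recommended; a follow-up examination is scheduled in six weeks."
)
summary = summarizer(report, max_length=40, min_length=10, do_sample=False)
print(summary[0]["summary_text"])
```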

And when it comes to complex processes, one fact often stands in the way of all these possibilities: decisions, especially critical decisions with far-reaching consequences (such as a ruling on occupational disability), must be comprehensible and repeatable. In other words, a decision made today must also be reproducible tomorrow. Since the inner workings of an AI often resemble a "black box," such decisions are still quite difficult to automate. Nevertheless, AI can be very helpful for other related tasks. For example, a model can be used to check the plausibility of reported health issues. Decisions as to whether additional medical reports are required can then be made in a matter of seconds, often directly on the customer's premises.

However, even with protective measures in place, the models are not without risks, ranging from cyberattacks on the AI system itself to manipulation of the system. Such manipulation is not always malicious in intent; carelessness or an excessive leap of faith can also lead to undesirable results. AI output should therefore never be accepted without verification, as this can easily lead to false statements.

How can risks be addressed without resorting to restrictive limitations?

The question remains how these risks can be dealt with while still increasing the technology's potential. A single answer and solution for all risks is neither feasible nor realistic. Even if certain risk factors were limited, this would simultaneously reduce the potential benefits of the AI. Nevertheless, certain rules and precautions can help increase security without slowing down the digital revolution.
Many dangers can be avoided in advance through awareness-raising and training. Insurance companies should actively test all relevant digital options and weigh the risks against the opportunities. A rolling-wave process of continuous re-evaluation must also be set up for AI tools that are already in use.
Chatbots should not necessarily be excluded from day-to-day work. A survey by Bitkom found that 53% of schoolchildren have already used ChatGPT, and there is no doubt that such tools will only grow in popularity. They should be used in a controlled and well-thought-out manner and serve as a kind of sparring partner.

Want to learn more about modern insurance software? Feel free to contact our expert Karsten Schmitt, Senior Business Developer at adesso insurance solutions.

