Intelligent input management through the use of large language models


Artificial Intelligence (AI) is being used to an increasing degree in many areas of our lives. It generates images and paintings from text descriptions, writes poems and stories, and helps doctors compile medical histories. The modern machine learning technologies behind artificial intelligence have advanced greatly in recent years and have captured the attention of the general public with the release of ChatGPT, DALL-E 2 and other major AI systems.

Modern, robust machine learning technologies also offer insurance companies huge opportunities in all sectors. Unstructured input, e.g. in the form of letters, emails or email attachments, takes up a great deal of insurance companies’ time. The transformation of unstructured data into structured data is an important building block of digitisation, and the use of artificial intelligence in input management is a powerful lever in the digitisation strategy of insurance companies. In input management, artificial intelligence is used to categorise documents, extract relevant information from them and process the extracted data in a structured way. The result is a structured dataset which can then be processed on a case-by-case basis by an insurance company’s fully automated workflow engine across the entire application landscape. This requires a high degree of integration between input management and process management.
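
To illustrate the outcome of this transformation, the following minimal sketch shows what a structured record derived from a scanned doctor’s bill might look like; the category name, field names and sample values are invented for illustration and do not correspond to any specific system.

```python
# Illustrative only: the category and field names are assumptions,
# not a fixed schema used by any particular input management system.
unstructured_input = """
Dr. med. A. Example, Example Street 1, 12345 Sampletown
Invoice no. 2023-0815 dated 12/05/2023
Consultation ................ 60.00 EUR
"""

structured_output = {
    "document_category": "doctor_bill",          # result of the classification step
    "invoice_number": "2023-0815",               # extracted attribute
    "invoice_date": "2023-05-12",                # normalised to ISO format
    "line_items": [{"description": "Consultation", "amount_eur": 60.00}],
}
# A workflow engine could route this record to the matching claims process.
```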

Trust and flexibility form the basis for intelligent input management

In order to achieve consistent digitisation and automation, the two factors “trust” and “flexibility”, in addition to the corresponding integrations, are decisive for the successful use of artificial intelligence in input management. When transforming unstructured data into structured data, the highest possible accuracy in categorisation and data extraction is crucial. Inaccurate results inevitably lead not only to manual post-processing, which interrupts end-to-end automation, but also to a loss of trust in the new technologies.

New customer requests, document categories and use cases arise regularly at insurance companies and are often associated with considerable effort in rule-based input management applications: a large number of existing rules have to be adjusted manually, new rules implemented and comprehensive manual tests carried out with documents from the new categories. With modern machine learning technologies, new requirements can be implemented flexibly and quickly. A few documents per category are often sufficient to integrate a new requirement into the model; in some cases, initial training is possible with as few as 100 documents.

The large language models currently flourishing are particularly suitable for use in input management. Neural networks have been usable for image processing for some time: freely available models allow users to automatically distinguish a bridge from a tree without having to train a model themselves. Until recently, no comparable possibility existed for text processing. So-called large language models have now closed this gap. They offer numerous applications, from translation and categorisation to dialogue systems and text completion. As a result, potentially higher levels of accuracy can be achieved in input management than with rule-based applications.

What are language models?
Back to the beginning: the first statistical language models appeared in the late 1940s and 1950s. The paper A Mathematical Theory of Communication, published by Claude Shannon in 1948, describes an early stochastic language model based on a Markov chain. The development of neural networks over the subsequent decades forms the basis of the language models used today. With steadily increasing computing power and continual refinement of the models, performance in computational linguistics keeps improving. The US company OpenAI achieved a breakthrough when it made ChatGPT freely available on 30 November 2022; the technology has attracted enormous attention ever since.

Language models are trained on large quantities of text data from the Internet. They learn the probability distribution of word sequences: certain words occur more often than others in a given context. For example, a sentence that begins with “My dog can...” is more likely to continue with “...bark” than with “...phone”. Language models are able to capture the semantics of a sentence and thus complete it meaningfully. This requires suitable word embeddings, in which words that occur in similar contexts (e.g. fire and hot) are mapped to similar vectors.
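
What “similar vectors” means can be illustrated with a few lines of Python; the three-dimensional vectors below are invented purely for illustration, whereas real embeddings have hundreds of dimensions learned from data.

```python
import numpy as np

# Toy 3-dimensional embeddings, invented for illustration only.
embeddings = {
    "fire":  np.array([0.9, 0.8, 0.1]),
    "hot":   np.array([0.8, 0.9, 0.2]),
    "phone": np.array([0.1, 0.2, 0.9]),
}

def cosine_similarity(a, b):
    """Similarity of two word vectors: close to 1.0 = similar context, close to 0.0 = unrelated."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cosine_similarity(embeddings["fire"], embeddings["hot"]))    # high: similar contexts
print(cosine_similarity(embeddings["fire"], embeddings["phone"]))  # low: different contexts
```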

How do language models work?

But how can a model be taught contextual knowledge so that it “understands” connections? The key lies in algorithms inspired by how the human brain works.

The human brain has the ability to direct its awareness to certain tasks, people or things: attention. This ability means that information can be taken in, selected or ignored. Attention can be directed towards things or events that are currently relevant, while insignificant things are filtered out and therefore barely noticed, or not noticed at all. When driving a car, for example, you focus on the other road users, traffic lights and road signs; a quiet sound or a leaf falling from a tree hardly registers. The brain is thus constantly busy separating relevant information from irrelevant information.

So-called transformer models build on this mechanism. They were first introduced in the 2017 publication Attention Is All You Need. The core of the architecture is based on the encoder-decoder principle. The encoder consists of several identical layers, each containing a self-attention mechanism. This mechanism is responsible for relating each word in a text to the other words. An input sequence (such as a sentence or a text) is captured and encoded into a vector (encoding) that contains the semantic and syntactic relationships between its individual components. The decoder is required in order to make use of this vector, e.g. to translate from a source language into a target language: the vector created in the encoder is “decoded” and the relevant information is extracted from it. In the model, the decoder likewise consists of several identical layers, each of which is equipped with self-attention.
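
The following NumPy sketch shows the core of this mechanism, scaled dot-product self-attention, on random data; the matrix sizes are illustrative assumptions, and a real transformer stacks many such layers with multiple attention heads.

```python
import numpy as np

def self_attention(X, W_q, W_k, W_v):
    """Scaled dot-product self-attention as described in 'Attention Is All You Need'.

    X: (sequence_length, d_model) matrix of token embeddings.
    W_q, W_k, W_v: projection matrices (random here, learned in a real model).
    """
    Q, K, V = X @ W_q, X @ W_k, X @ W_v           # queries, keys, values
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)               # how strongly each token attends to every other token
    scores -= scores.max(axis=-1, keepdims=True)  # numerical stability for the softmax
    weights = np.exp(scores) / np.exp(scores).sum(axis=-1, keepdims=True)
    return weights @ V                            # context-aware representation of each token

# Toy example: a "sentence" of 4 tokens with embedding dimension 8.
rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))
W_q, W_k, W_v = (rng.normal(size=(8, 8)) for _ in range(3))
print(self_attention(X, W_q, W_k, W_v).shape)     # (4, 8): all tokens are processed in parallel
```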

A key feature of transformer models is that information is processed in parallel. This makes processing efficient and allows large quantities of text to be captured coherently. This contrasts with the LSTM (long short-term memory) architectures used previously, which process data sequentially.

Using language models with prompting

Communication with language models takes place via prompts formulated in natural language. These prompts contain instructions or queries. For example, a prompt might read: “Extract the clinician’s address from the bill”. The advantage is that anyone can use artificial intelligence with plain language; you do not need to be a machine learning expert or a software developer to use the technology.
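
As a rough sketch, such a prompt could be sent to a language model as follows; this assumes access to an OpenAI-compatible chat API, and the model name and document text are placeholders.

```python
from openai import OpenAI  # assumes an OpenAI-compatible chat API is available

client = OpenAI()  # reads the API key from the OPENAI_API_KEY environment variable

document_text = "..."  # placeholder: OCR text of the scanned bill

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name; any capable chat model could be used
    messages=[
        {"role": "system", "content": "You are an extraction assistant for insurance documents."},
        {"role": "user", "content": f"Extract the clinician's address from the bill:\n\n{document_text}"},
    ],
)
print(response.choices[0].message.content)  # e.g. the extracted address as plain text
```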

This presents experts with new challenges. A number of different inputs usually need to be tested to obtain the desired result from the model, and the instruction that yields the best result is not apparent “from the outside”. There are now automated approaches to creating and selecting prompts; however, there is no analytical way to determine the “optimal” prompt. To get the most out of language models, model-specific knowledge and extensive experience are required. A simple query typed into a chat console, as in ChatGPT, therefore does not reflect the reality that experts actually face. A further aspect is the evaluation of prompt quality: suitable KPIs must be defined that take into account the accuracy of the results as well as economic aspects.
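
A minimal sketch of how two prompt variants might be compared on a small labelled sample follows; the prompts, documents and predictions are invented for illustration, and in practice the predictions would come from model calls like the one sketched above.

```python
def exact_match_accuracy(predictions, gold_values):
    """Share of documents where the extracted value matches the expected value exactly."""
    hits = sum(p.strip().lower() == g.strip().lower() for p, g in zip(predictions, gold_values))
    return hits / len(gold_values)

# Toy comparison of two prompt variants on three labelled documents.
# The predictions are hard-coded purely for illustration.
gold_addresses = ["Example Street 1, 12345 Sampletown"] * 3
predictions_by_prompt = {
    "Extract the clinician's address from the bill.":
        ["Example Street 1, 12345 Sampletown", "Dr. A. Example", "Example Street 1, 12345 Sampletown"],
    "Return only the postal address of the treating clinician, nothing else.":
        ["Example Street 1, 12345 Sampletown"] * 3,
}

for prompt, predictions in predictions_by_prompt.items():
    print(f"{exact_match_accuracy(predictions, gold_addresses):.2f}  {prompt}")
```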

Evolution of AI in input management

Thanks to the “world knowledge” they contain and the ability to work with them in natural language, language models can be used in numerous areas and for a wide variety of applications. One topic already mentioned is their integration into the input management systems of insurance companies. Private health insurers in particular receive large quantities of documents every day, such as bills from doctors, alternative practitioners or hospitals, in the form of PDFs, photos, emails or letters. Processing this data in a structured manner requires a great deal of time and effort from operators. A system that requires no lengthy training and extracts data accurately even from large volumes of documents is able to process unstructured data efficiently and automatically. To implement this, a feasibility analysis with a small observation horizon is recommended as a first step. The following steps are necessary for this:

  1. Definition of document categories (e.g. bills from dentists, bills for treatments)
  2. Determination of extraction attributes (e.g. master data, diagnoses, invoice line items)
  3. Provision and processing of raw data
  4. Choice of a suitable language model and optional fine-tuning
  5. Choice of one or more suitable prompts for communicating with the model
  6. Evaluation of the model’s performance on an unseen dataset

The evaluation is based on defined KPIs. The accuracy of extraction and classification is often the main criterion; other criteria, such as processing time, are also commonly used. The results of the feasibility analysis provide a well-founded basis for decision-making: they make it possible to assess whether and how the use of a language model in intelligent input management can be scaled to additional document categories and use cases.
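
A simple evaluation along these lines could look like the following sketch; the KPI selection, function names and data format are assumptions made for illustration rather than a fixed methodology.

```python
import time

def evaluate(documents, classify, extract):
    """Compute simple KPIs on an unseen, labelled dataset.

    `documents` is a list of dicts with the raw text and the expected labels;
    `classify` and `extract` wrap the language model calls (e.g. the prompt sketch above).
    """
    correct_category = correct_fields = total_fields = 0
    start = time.perf_counter()
    for doc in documents:
        if classify(doc["text"]) == doc["expected_category"]:
            correct_category += 1
        extracted = extract(doc["text"])
        for field, expected in doc["expected_fields"].items():
            total_fields += 1
            correct_fields += int(extracted.get(field) == expected)
    seconds_per_doc = (time.perf_counter() - start) / len(documents)
    return {
        "classification_accuracy": correct_category / len(documents),
        "extraction_accuracy": correct_fields / total_fields,
        "avg_processing_time_s": seconds_per_doc,
    }
```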

Conclusion

Overall, embedding language models in input management systems offers insurance companies considerable advantages. On the one hand, detection accuracy increases thanks to the models’ inherent, deep understanding of language: the rate of fully automated background processing can be raised and post-processing by operators significantly reduced, which opens up substantial cost savings through faster processing times. On the other hand, since the model requires no time-consuming training, implementation and introduction are inexpensive and fast. This offers the flexibility to continuously adapt the application to the document categories and extraction attributes required in each case.

Want to find out more about automation and the use of AI by insurance companies? Contact our expert, Florian Petermann, Senior Business Developer.
