Digitalisation & Technology, 4 July 2025

How LLMs are changing search behaviour – and what this means for insurers

New Whitepaper: Insights from Luisa Schmolke, ERGO Innovation Lab


The way people search for information online is currently undergoing radical change. Where ranked lists of hits used to dominate (as in classic Google search), users now expect understandable, precise and context-related answers. This is made possible by large language models (LLMs) such as ChatGPT, Perplexity and Gemini. They no longer just provide links, but direct, linguistically formulated responses that take into account intent, context and follow-up questions. For companies, this is changing how digital visibility is created. A text by Luisa Schmolke, ERGO Innovation Lab, on the new white paper on AI-supported online search.

Classic search engines such as Google are essentially based on the evaluation of keywords, backlinks and structural ranking signals. These mechanisms have proven themselves over many years and continue to deliver reliable, comprehensive results, especially when users know what they are looking for. LLM-based systems such as ChatGPT, Perplexity and Gemini, on the other hand, take a different approach: they analyse the semantic content of a query, recognise contextual relationships and generate linguistically formulated responses. Instead of a list of hits, the result is a direct answer that is context-sensitive, conversational and tailored to the intent of the query. This architecture changes not only user expectations, but also the requirements for content and its visibility in the digital space.
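
To make this difference concrete, here is a minimal Python sketch contrasting naive keyword matching with embedding-based semantic matching. The documents, the query and the `all-MiniLM-L6-v2` sentence-embedding model are illustrative assumptions, not part of the study.

```python
# Sketch: keyword matching vs. semantic matching (illustrative data only).
from sentence_transformers import SentenceTransformer, util

documents = [
    "Annual travel insurance: conditions, premiums and cancellation",
    "Which travel cover suits families with young children abroad?",
    "Household contents insurance: terms and claims process",
]
query = "best travel insurance for a family holiday overseas"

def keyword_overlap(text: str, query: str) -> int:
    """Classic keyword view: count query words that literally appear in the text."""
    return len(set(text.lower().split()) & set(query.lower().split()))

# Semantic view: compare meaning via sentence embeddings, so differently
# worded but related texts can still end up close together.
model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed example model
doc_embeddings = model.encode(documents, convert_to_tensor=True)
query_embedding = model.encode(query, convert_to_tensor=True)
semantic_scores = util.cos_sim(query_embedding, doc_embeddings)[0]

for doc, score in zip(documents, semantic_scores):
    print(f"keyword_overlap={keyword_overlap(doc, query)}  "
          f"semantic={float(score):.2f}  {doc}")
```

The keyword score only rewards literal word overlap, while the embedding score can rank a reworded but relevant text highly – the shift described above.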

Information accessibility plays a central role in the insurance industry in particular. Products require explanation, and trust is essential. LLMs offer clear advantages in this environment: someone searching for ‘best travel insurance for families with young children’ receives a context-sensitive response tailored to typical needs such as overseas cover or family benefits. The switch from search query to dialogue is not only more convenient, but also more efficient: users can ask questions, compare rates and explore content in greater depth without switching between channels or tools. At the same time, the barrier to entry is lowered: legal terms can be explained and translated into everyday language.

These new possibilities lead to new requirements. The ERGO Innovation Lab has conducted a study in collaboration with Ecodynamics (to be published soon) that confirms four key hypotheses:

  1. Content must be machine-readable and technically accessible: Pages that are cleanly structured, responsive and fast are reliably recognised and processed by LLMs.
  2. Semantic coherence is crucial: Content that is logically linked, thematically consistent and structurally well thought out is more likely to be referenced.
  3. The source matters: Language models prefer content with clear authorship, trustworthy origins and verifiable evidence.
  4. Format logic plays a role: Content that is structured as dialogue, such as question-and-answer blocks, modular guides or decision trees, is better suited to the structure of LLMs, which are trained on precisely such data sets (a sketch of such a Q&A structure follows below this list).
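
The following sketch illustrates hypotheses 1 and 4: the same Q&A content expressed as machine-readable structured data. The questions, answers and URL are invented examples; schema.org's FAQPage vocabulary is used here as one common way to express such a structure, not as the study's prescribed format.

```python
# Sketch: packaging insurance content as machine-readable Q&A blocks.
import json

faq_blocks = [
    {
        "question": "What does travel insurance cover for families?",
        "answer": "Typical family tariffs cover medical treatment abroad, "
                  "trip cancellation and luggage; exact benefits depend on the tariff.",
    },
    {
        "question": "Can I cancel my policy before the trip?",
        "answer": "Most tariffs allow cancellation up to a defined number of days "
                  "before departure; check the policy conditions.",
    },
]

faq_jsonld = {
    "@context": "https://schema.org",
    "@type": "FAQPage",
    "mainEntity": [
        {
            "@type": "Question",
            "name": block["question"],
            "acceptedAnswer": {"@type": "Answer", "text": block["answer"]},
        }
        for block in faq_blocks
    ],
}

# Embedded in a page (e.g. as application/ld+json), this gives crawlers and
# LLM pipelines an unambiguous question-and-answer structure to reference.
print(json.dumps(faq_jsonld, indent=2))
```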

The relevance of these factors becomes particularly clear when comparing different providers. In open LLM search queries, i.e. queries without reference to a specific brand, the visibility share of broker platforms was around 36%. Traditional insurers only achieved 17%. This is not because their content is worse, but because it is less clearly structured and therefore more difficult for LLMs to interpret. Brokers often use clear language, modular comparison formats and structured decision trees – precisely the formats that LLMs process particularly efficiently. Many insurers rely on content-rich, brand-centric formats. To make these visible in LLM systems, they need additional, semantically clearer structures that the models can reference.
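
As an illustration of the decision-tree format mentioned above, here is a minimal Python sketch; the questions and recommendations are invented and do not reflect any real tariff logic.

```python
# Sketch: a "structured decision tree" content format (illustrative only).
decision_tree = {
    "question": "Are you travelling with children under 18?",
    "yes": {
        "question": "Is the trip outside Europe?",
        "yes": {"recommendation": "Family tariff with worldwide medical cover"},
        "no": {"recommendation": "Family tariff with Europe-only cover"},
    },
    "no": {"recommendation": "Single-traveller tariff"},
}

def walk(node: dict, answers: list) -> str:
    """Follow the given yes/no answers until a recommendation is reached."""
    for answer in answers:
        if "recommendation" in node:
            break
        node = node[answer]
    return node["recommendation"]

print(walk(decision_tree, ["yes", "no"]))  # -> family tariff, Europe-only cover
```

Because each branch is an explicit, self-contained unit, such a structure is easy for an LLM pipeline to quote or follow step by step – in contrast to the same logic buried in a long brand-centric text.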

What's more, not all LLM platforms work the same way. ChatGPT and You.com deliver a particularly large number of results, but also have a higher hallucination rate (around 9.7% for ChatGPT). Perplexity and Gemini rely more heavily on editorial filtering and prefer content with a high trust index. Uniform optimisation strategies do not work here. In future, visibility must be considered on a platform- and product-specific basis, from the indexing logic to the API (application programming interface). APIs allow structured, up-to-date content to be provided in a targeted way, making it easier for LLMs to find and use. Content without an API connection or with a poor structure is more difficult to access and is less likely to be taken into account in LLM search queries.
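
A hedged sketch of what such an API connection could look like, here using FastAPI; the endpoint path, fields and example data are assumptions for illustration, not an existing ERGO interface.

```python
# Sketch: exposing structured, current product content via an API so that
# LLM pipelines can retrieve the latest approved wording.
from datetime import date

from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class ProductSummary(BaseModel):
    product: str
    summary: str
    source_url: str
    last_updated: date

@app.get("/products/travel-insurance", response_model=ProductSummary)
def travel_insurance() -> ProductSummary:
    # In practice this would be served from a maintained content system.
    return ProductSummary(
        product="Travel insurance",
        summary="Covers medical treatment abroad, trip cancellation and luggage.",
        source_url="https://example.com/travel-insurance",
        last_updated=date(2025, 7, 1),
    )
```

The point is less the specific framework than the contract: a stable, structured, dated response that a retrieval pipeline can index and cite.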

Areas of action for companies

This results in clear areas of action for companies. The technical basis is a semantically clean, mobile-optimised and machine-readable content architecture. Content should be modularised, dialogue-capable and reusable, not only for humans but also for machines. Platforms such as Perplexity are already experimenting with integrated advertising formats and curated response sources.
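
One possible way to think about such modular, reusable content is sketched below; the field names (author, source URL, review date) are assumptions chosen to reflect the trust signals discussed above.

```python
# Sketch: a modular content block carrying explicit trust metadata.
from dataclasses import asdict, dataclass

@dataclass
class ContentModule:
    topic: str
    question: str
    answer: str
    author: str
    source_url: str
    last_reviewed: str  # ISO date

module = ContentModule(
    topic="travel insurance",
    question="Does the policy cover trip cancellation?",
    answer="Yes, if the chosen tariff includes cancellation cover; see the conditions.",
    author="Editorial team (example)",
    source_url="https://example.com/travel-insurance/faq",
    last_reviewed="2025-06-15",
)

# The same module can feed the website, a chatbot and an LLM-facing API.
print(asdict(module))
```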

And the next paradigm shift is already upon us: LLM-based systems act as digital agents. This means that they not only provide information, but also perform actions. Anyone who cannot be integrated in a machine-readable way will simply not be present in these interactions. APIs and structured interfaces are thus becoming strategic assets.
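
As a sketch of what machine-readable integration for agents can mean in practice, the example below describes a hypothetical quote tool in the JSON-schema style that several LLM providers use for function calling; the tool name, parameters and pricing are invented for illustration.

```python
# Sketch: a tool description an LLM agent could be given, plus a dummy backend.
get_quote_tool = {
    "name": "get_travel_insurance_quote",
    "description": "Return an indicative travel insurance premium.",
    "parameters": {
        "type": "object",
        "properties": {
            "destination": {"type": "string", "description": "Country or region"},
            "travellers": {"type": "integer", "minimum": 1},
            "trip_days": {"type": "integer", "minimum": 1},
        },
        "required": ["destination", "travellers", "trip_days"],
    },
}

def get_travel_insurance_quote(destination: str, travellers: int, trip_days: int) -> dict:
    """Dummy backend the platform would call once the model selects this tool."""
    premium = 4.50 * travellers * trip_days  # placeholder pricing, not a real tariff
    return {"destination": destination, "premium_eur": round(premium, 2)}

# An agent platform would pass `get_quote_tool` to the model and dispatch the
# model's tool call to the function above.
print(get_travel_insurance_quote("Japan", 2, 10))
```

A provider whose products are only described in free text cannot appear in such a tool call; one that exposes structured interfaces can.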

Conclusion

The conclusion of this development is clear: digital visibility is no longer a passive result, but a question of structure, clarity and trust. Those who continue to optimise solely according to the principles of classic SEO will lose relevance in AI-based search. Those who start thinking about content in an LLM-compatible way today are not only investing in visibility, but also in connectivity to a search logic that is currently being rewritten.

Text: Luisa Schmolke


Your opinion
If you would like to share your opinion on this topic with us, please send us a message to: radar@ergo.de
