
Generative AI and Large Language Models: A Snap Poll for the Celent Executive Panel

2023/05/20

Available Only for Members of the NA Celent Insurance Executive Panel

Abstract

Snap polls reflect questions posed by members of the Celent Executive Panel, a group of C-level executives in the insurance industry. This question came from a member looking for insights into how others are thinking about Large Language Models, Generative AI, and tools such as ChatGPT. This deck summarizes the responses to a Snap Poll conducted in May 2023; 36 insurers responded over the course of one week.

If you are an insurer and are interested in participating and receiving these snap polls, please email kcarnahan@celent.com to verify eligibility.

The question posed was:

Background:

This carrier is interested in understanding how others are thinking about GPT and other LLMs: whether they are using them in production yet or still developing them, how and where they are being used or where carriers are thinking about using them, and where carriers anticipate pitfalls.

Questions:

Are you currently using large language models in any production applications?

  • Yes, we had implemented them in production prior to the launch of ChatGPT.
  • Yes, we started using them in production since the launch of ChatGPT.
  • Yes, we use them through a 3rd party application.
  • No, we do not use LLMs in production but plan to by year end.
  • No, we do not use LLMs in production at this time and have no plans.
  • I don’t know.

Are you currently developing large language models in a test environment for future usage?

  • Yes, we were testing LLMs prior to the launch of ChatGPT.
  • Yes, we started testing LLMs since the launch of ChatGPT.
  • No, we are not currently testing LLMs but plan to by year end.
  • No, we do not plan to test LLMs at this time.
  • I don’t know.

If you are either testing or using LLMs in production, where are you testing/using them? (Select all that apply)

  • Customer Service.
  • Marketing.
  • Sales.
  • Claims.
  • Underwriting.
  • Other – (please identify).

Does your organization allow the use of ChatGPT or any other publicly available GPT models?

  • Yes, we allow the use throughout the organization.
  • Yes, we allow the use but only to specific groups within the organization.
  • Yes, we allow the use but only to certain groups for testing and research purposes. (Which groups? E.g., IT, UW, etc.)
  • No, we currently do not allow the use but plan to by year end.
  • No, we currently do not allow the use and have no plans to in 2023.
  • I don’t know.

Are you currently using LLMs for direct interaction with customers or providing support to internal service employees?

  • Yes, we use LLMs for direct customer interaction.
  • Yes, we use LLMs for providing support to internal service employees.
  • Yes, we use LLMs for both customer interaction and providing support for internal service employees.
  • No, we currently do not use LLMs for either but plan to in the future.
  • No, we currently do not use LLMs for either and have no plans to in 2023.
  • I don’t know.

There are known potential areas of risk for LLMs that use public and private data. Which areas are you most concerned with? (Please select all that apply.) Do you have any formal efforts to investigate and research these areas?

  • Ethics.
  • Bias.
  • Regulatory.
  • Intellectual Property Rights.
  • Nefarious (e.g., deep fakes, disinformation, social engineering, etc.).
  • All of the above.
  • I don’t know.

Have you personally used ChatGPT 3 or 4?

  • Yes, I have, and I am excited by the possibilities fueled by AI.
  • Yes, I have, and I am concerned about the future impact of AI.
  • Yes, I have, and I am both concerned and excited about the future impact of AI.
  • No, but I plan to soon.
  • No, and I have no plans to.