      ITC Vegas Recap - The Future of Nat Cat Management
      Precision, Partnerships, and a New Data Mindset
      19th November 2025

Last month, eight Celent analysts spent four days in Las Vegas attending ITC Vegas (formerly “InsureTech Connect”). It is one of the largest gatherings in the insurance and insurtech ecosystem, bringing together insurance carriers, technology providers, startups, brokers and agents, investors, and regulators. The conference drew over 9,000 attendees from across the insurance industry, hundreds of exhibitors and solution providers, and 14 stages of educational content covering topics such as Gen AI, customer acquisition and retention, and next-generation claims and underwriting. The event is as much a networking event as a conference, so Celent was very busy meeting with clients and prospects.

On October 14, Celent held its Pre-Conference Kickoff Summit: Disrupt, Decode, Deliver – The Future Hits Fast. We presented a half-day immersive session designed for senior professionals focused on strategy, transformation, and innovation. Six sessions covered topics such as the evolving insurtech ecosystem, catastrophe management, underwriting modernization, and the emerging impact of Generative AI, combining interactive panels with expert presentations.

Over the last few weeks we have been blogging about some of these sessions, and we will continue for a few more weeks, summarizing the presentations and panel discussions. I had the chance to host an excellent panel, with plenty of insights and food for thought. My impressions and takeaways from the panel are shared below; I hope you enjoy them as much as I did in person.

      In a world where “once‑in‑a‑century” events now arrive every few years, natural catastrophe (Nat Cat) management is under intense pressure to evolve. Insurers must respond faster, price more accurately, and manage risk proactively—while climate, exposure, and loss patterns all shift beneath their feet.

      At our panel, three specialists from very different backgrounds—a geologist, a meteorologist, and a data scientist—explored how advanced analytics, AI, and collaboration are reshaping Nat Cat management:

      • Andrew Notohamiprodjo, Head of Data Science at Delos Insurance, a wildfire‑focused MGA and Celent Model Insurer award recipient
      • Will Stikeleather, Meteorologist and North America Peril Advisory specialist at Guy Carpenter, a leading reinsurance broker
      • Helge Joergensen, Geologist and co‑founder of 7Analytics, winner of the Climate Tech Connect pitch competition

      The discussion surfaced a clear message: the future of Nat Cat management is granular, data‑driven, and collaborative—or it won’t work at all.


      1. Wildfire: One Peril, Many Micro‑Climates

      For Delos, which writes homeowners coverage across California, wildfire is the defining peril—and a moving target.

      Andrew’s first point was blunt: you can’t treat wildfire as a single, generic risk.

      “We’re finding a lot of success by not treating wildfire as this universal peril, but giving it the specificity it needs per location.”

      Northern, Central, and Southern California each behave differently in terms of fuel, wind patterns, terrain, and community layout. Delos’ approach is to:

      • Model wildfire at multiple scales—from foot‑level fuel and wind patterns to decadal climate trends
      • Respect local nuance—recognizing that a small fire in the “wrong” place can be more devastating than a huge one in remote terrain
      • Continuously reconcile old and new data sources—for example, translating from coarse U.S. Forest Service land classifications to ultra‑high‑resolution aerial vegetation imagery

      Even as AI and machine learning power ever more detailed models, Andrew stressed that analytics and domain expertise remain essential. Someone still has to understand what a new data source really means for risk and how to recalibrate models as technology changes.
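To make the data-reconciliation point concrete, here is a minimal sketch assuming a simple crosswalk from coarse land-cover classes to finer fuel categories, with a per-location override wherever high-resolution vegetation imagery is available. The class names and mapping below are hypothetical, not Delos' actual pipeline:

```python
# Minimal sketch (hypothetical class names, not Delos' actual pipeline):
# reconcile a coarse land-cover label with a high-resolution vegetation
# classification when the latter is available for a location.
from typing import Optional

# Crosswalk from coarse classes to a default fine-grained fuel category.
COARSE_TO_FUEL = {
    "forest": "timber_litter",
    "shrubland": "chaparral",
    "grassland": "short_grass",
    "developed": "urban_interface",
}

def fuel_category(coarse_class: str, high_res_class: Optional[str] = None) -> str:
    """Prefer the high-resolution label; otherwise fall back to the crosswalk."""
    if high_res_class is not None:
        return high_res_class
    return COARSE_TO_FUEL.get(coarse_class, "unclassified")

# A parcel tagged "shrubland" in the coarse layer, where aerial imagery
# instead shows irrigated landscaping around the structure.
print(fuel_category("shrubland"))                      # chaparral
print(fuel_category("shrubland", "irrigated_garden"))  # irrigated_garden
```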


      2. Geospatial Analytics: Resolution Is (Almost) Everything

      On the reinsurance advisory side, Will highlighted how geospatial analytics have leapt forward, driven by both better data and more of it.

A simple example: Guy Carpenter’s wildfire risk score for the U.S. has improved from 270-meter resolution in 2018 to 30 meters in 2025, a ninefold refinement in each linear dimension.
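As a quick back-of-the-envelope check (plain arithmetic, no proprietary data): nine times finer in each linear dimension means 81 times as many grid cells covering the same ground, with each new cell roughly the size of a single suburban lot:

```python
# Back-of-the-envelope: what a 270 m -> 30 m resolution jump means in practice.
old_res_m, new_res_m = 270, 30

linear_factor = old_res_m / new_res_m           # 9x finer in each direction
cell_factor = linear_factor ** 2                # 81x more cells per unit area

old_cell_area_ha = old_res_m ** 2 / 10_000      # ~7.3 hectares per cell
new_cell_area_ha = new_res_m ** 2 / 10_000      # 0.09 hectares (~0.22 acres)

print(f"{linear_factor:.0f}x finer linearly, {cell_factor:.0f}x more cells")
print(f"old cell ~{old_cell_area_ha:.1f} ha, new cell ~{new_cell_area_ha:.2f} ha")
```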

      Why that matters:

      • Wildfire losses are hyper‑local—often concentrated in the wildland–urban interface (WUI), where vegetation meets dense housing
      • Risk can change street to street or even structure to structure
      • Coarse grids blur out the features that most influence loss (e.g., slope, aspect, fuel type, proximity to WUI)

      At 30 meters, carriers can:

      • Identify micro‑pockets of extreme risk within otherwise moderate zones
      • Differentiate between two adjacent streets with very different exposure
      • Tailor underwriting, pricing, and mitigation incentives to the actual risk on the ground

      The same story applies beyond wildfire. As remote sensing, open data, and modeling power expand, the industry is moving from “county‑level heat map” thinking to true property‑level risk intelligence.


      3. Urban Flood: From 30 Meters to 3 Feet

      If 30 meters is good, urban flood increasingly demands even finer resolution.

      Helge’s company, 7Analytics, focuses on pluvial (surface water) flooding in cities, where rainfall, micro‑terrain, and the built environment interact in complex ways. Most portfolios dramatically understate this risk.

      In many books of business, up to 97% of policies may be tagged as “zero flood risk”—a number Helge says is clearly wrong and repeatedly disproved by claims.

      The culprit: terrain models that are too coarse and too smooth.

      7Analytics works with 3‑by‑3‑foot resolution and enhanced digital elevation models that incorporate:

      • Sidewalks
      • Speed bumps
      • Small drainage features and obstacles
      • Other micro‑structures that determine exactly where water will flow and pool

The difference is not academic. At 30-by-30-foot resolution, vertical uncertainty can be around 2 meters (~6½ feet). At 3 by 3 feet, it drops to roughly 7–8 inches (about 0.2 meters). That matters because:

      • Most urban flood damage occurs at depths below 1 meter (~3 feet)
      • River flooding damage often peaks around 2 meters (~6½ feet)

      If your model’s vertical error is on the same order as your damage‑driving water depth, you will miss a large share of real‑world loss.
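A stylized illustration of that point, using a made-up depth-damage curve rather than 7Analytics' numbers: with a roughly 2-meter vertical error, the damage estimate for a typically damaging 0.6-meter flood is essentially unconstrained, while a 0.2-meter error keeps it in a narrow band:

```python
# Stylized illustration (made-up depth-damage curve, not 7Analytics' numbers):
# how vertical error in the terrain model propagates into damage uncertainty.

def damage_ratio(depth_m: float) -> float:
    """Toy depth-damage curve: zero damage at or below 0 m, saturating at 2 m."""
    return max(0.0, min(1.0, depth_m / 2.0))

true_depth_m = 0.6  # a typical damaging urban flood depth (below 1 meter)

for label, error_m in [("coarse DEM, ~2 m vertical error", 2.0),
                       ("fine DEM, ~0.2 m vertical error", 0.2)]:
    low = damage_ratio(true_depth_m - error_m)
    high = damage_ratio(true_depth_m + error_m)
    print(f"{label}: damage estimate ranges from {low:.0%} to {high:.0%}")
```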

      The takeaway: for urban flood, hyper‑local terrain and land‑use detail is not a “nice to have”; it’s the core of the risk signal.


      4. Machine Learning: Teaching Models to “Read” the Landscape

      Both wildfire and flood modeling are now tapping machine learning to capture the relationship between physical features and actual loss experience.

      Helge described 7Analytics’ approach for flood:

      • Compute 600+ physical parameters around each building (geology, hydrology, geomorphology, catchment, wetness indices, etc.)
      • Train ML models on historical claims data, so the model learns which combinations of terrain and land‑use actually lead to loss
      • Predict probability of flooding at building level, even where there is no mapped river, no apparent sink, and no obvious source

      This enables detection of “hidden” risk where traditional hazard layers show nothing. For example, a small channel or slope upstream can funnel water toward a property that appears safe on simple maps.

      The principle carries across perils: let the model learn the terrain’s response to the hazard, not just the hazard itself.
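As a minimal sketch of this general pattern (synthetic data and hypothetical feature names, not 7Analytics' actual model), one could train a gradient-boosted classifier on a few per-building terrain features against historical claim outcomes and then score flood probability for a new building:

```python
# Minimal sketch of the general pattern (synthetic data, hypothetical feature
# names; not 7Analytics' actual model): train a classifier on per-building
# terrain features against historical flood claims, then score new buildings.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(42)
n = 5_000

# Stand-ins for a handful of the "600+ physical parameters" per building.
X = np.column_stack([
    rng.uniform(0, 30, n),   # local slope (degrees)
    rng.uniform(0, 5, n),    # depth of nearest terrain depression (m)
    rng.uniform(0, 50, n),   # upstream catchment area (ha)
    rng.uniform(0, 1, n),    # topographic wetness index (normalized)
])

# Synthetic claim labels loosely tied to catchment and wetness, for demo only.
logit = 0.05 * X[:, 2] + 2.0 * X[:, 3] - 0.1 * X[:, 0] - 3.0
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit))).astype(int)

model = GradientBoostingClassifier(n_estimators=200, max_depth=3)
model.fit(X, y)

# Score a building with no mapped river nearby but a large upstream catchment.
candidate = np.array([[5.0, 1.2, 40.0, 0.8]])
print(f"Estimated flood probability: {model.predict_proba(candidate)[0, 1]:.1%}")
```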

      Andrew described something similar for wildfire:

      • Foot‑level fuel and wind models
      • Species regrowth modeling—what grows back after mitigation or after a fire, and how that changes risk
      • High‑frequency fire detection data (down from hours to minutes)

      All of this is unified in Delos’ machine learning stack to answer a sequence of questions:

      1. Is fire possible here?
      2. If so, what does that fire look like (intensity, spread, ember behavior)?
      3. How often is it likely to occur, and how severe will it be?

      The common enabler: massively scalable ML systems that can ingest diverse, high‑resolution datasets and convert them into actionable risk metrics.
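That sequence of questions maps naturally onto a staged pipeline. The sketch below is purely schematic, with hypothetical inputs and thresholds rather than Delos' actual models:

```python
# Schematic sketch of the staged questions (hypothetical inputs and thresholds,
# not Delos' actual models).
from dataclasses import dataclass

@dataclass
class LocationInputs:
    fuel_load: float         # e.g., tons/acre from high-resolution vegetation data
    wind_speed_p95: float    # 95th-percentile wind speed (mph)
    ignition_density: float  # historical ignitions per km^2 per year

def assess(loc: LocationInputs) -> dict:
    # 1. Is fire possible here?
    if loc.fuel_load < 1.0 or loc.ignition_density == 0.0:
        return {"possible": False}
    # 2. If so, what does that fire look like? (toy intensity proxy)
    intensity = loc.fuel_load * loc.wind_speed_p95
    # 3. How often is it likely to occur, and how severe? (toy frequency proxy)
    annual_frequency = min(1.0, 0.1 * loc.ignition_density)
    return {"possible": True, "intensity": intensity,
            "annual_frequency": annual_frequency}

print(assess(LocationInputs(fuel_load=4.5, wind_speed_p95=35.0, ignition_density=0.6)))
```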


      5. Forecasting and Event Response: From Cones to Scenarios

      Better forecasting is not just about prettier meteorological maps; it’s about sharper decisions before and after events.

      Hurricanes: Smaller Cones, Smarter Readiness

      Will spotlighted the progress in numerical weather prediction and hurricane modeling. Over the last 20 years, the error in the National Hurricane Center’s 4‑ to 5‑day forecast cone has shrunk by 60–70%.

      That shift is transformative:

      • In 2005, Hurricane Katrina’s 4–5 day cone covered a huge swath from Florida’s Big Bend to east of New Orleans—making it extremely hard for insurers to pre‑position adjusters.
      • By contrast, recent storms like Milton and Helene have benefited from much tighter track forecasts, allowing far more targeted deployment of resources.

      Even more impactful is the use of similar stochastic events (SSEs)—tens of thousands of simulated hurricane tracks produced by cat model vendors.

      With improved forecasts, reinsurers like Guy Carpenter can:

      • Narrow down to a small subset of plausible SSEs a few days before landfall
      • Provide clients with a much clearer range of expected losses depending on whether the storm tracks slightly north or south

      Will gave the example of Hurricane Helene approaching Tampa:

      • A landfall north of Tampa Bay would have pushed massive storm surge into the metro area—much higher losses
      • The actual track south of Tampa Bay drove surge into less populated regions—lower overall losses

      Days before landfall, there was still path uncertainty. But being able to isolate just two primary scenarios gave carriers a far more useful planning window than they would’ve had a decade ago.
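A conceptual sketch of the SSE idea, using synthetic numbers rather than Guy Carpenter's methodology: filter a large stochastic event set down to tracks consistent with the forecast, then report the loss range for the surviving scenarios:

```python
# Conceptual sketch with synthetic numbers (not Guy Carpenter's methodology):
# keep only stochastic events whose landfall falls inside the forecast window,
# then summarize the loss range of the surviving scenarios.
import numpy as np

rng = np.random.default_rng(7)
n_events = 50_000

# Synthetic stochastic event set: landfall latitude and modeled portfolio loss.
landfall_lat = rng.uniform(24.0, 31.0, n_events)                 # Gulf coast span
portfolio_loss_musd = rng.lognormal(mean=3.0, sigma=1.0, size=n_events)

# A few days out, the forecast narrows plausible landfall to a small band.
forecast_band = (27.0, 28.5)
mask = (landfall_lat >= forecast_band[0]) & (landfall_lat <= forecast_band[1])
subset = portfolio_loss_musd[mask]

print(f"{mask.sum()} of {n_events} simulated events remain consistent with the forecast")
print(f"loss range (10th-90th percentile): ${np.percentile(subset, 10):.0f}M to "
      f"${np.percentile(subset, 90):.0f}M")
```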

      Flood and Hurricanes: Real‑Time Depth, Real‑World Actions

      On the flood side, Helge described real‑time modeling during Hurricane Beryl:

      • As the storm unexpectedly turned toward Houston roughly 28 hours before impact, 7Analytics’ models updated the probability and depth of flooding around key sites
      • For a hospital client, this meant early warning: when to close, how to staff, and how to prepare for isolation by surrounding floodwaters

      For insurers, similar tools can:

      • Trigger customer alerts before floodwaters arrive
      • Inform claims triage by estimating which properties likely have a few inches of water versus several feet
      • Support precise moratoria—pausing new business only where and when flood risk is imminent

      The message is clear: real‑time data, tied to high‑resolution physical models, lets carriers move from reactive claims handling to proactive risk management.
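As a small, hedged sketch of what claims triage on top of such output could look like (hypothetical depth thresholds, not any vendor's actual rules):

```python
# Hedged sketch: bucket insured properties by predicted flood depth so alerts
# and adjuster dispatch can be prioritized (hypothetical thresholds).
def triage(predicted_depth_m: float) -> str:
    if predicted_depth_m <= 0.05:
        return "monitor"          # likely dry; no action yet
    if predicted_depth_m <= 0.3:
        return "alert customer"   # a few inches: move contents up, sandbags
    return "priority dispatch"    # deeper water: expect structural claims

portfolio = {"policy_001": 0.02, "policy_002": 0.25, "policy_003": 1.10}
for policy_id, depth_m in portfolio.items():
    print(policy_id, "->", triage(depth_m))
```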


      6. Vulnerability: The “Third Side” of the Risk Triangle

      Everyone on the panel agreed: location matters—but so does what you build and how you maintain it.

      Will emphasized the full risk triangle:

      1. Hazard – wildfire, wind, flood, etc.
      2. Exposure – what’s in harm’s way
      3. Vulnerability – how likely that exposure is to be damaged

      Recent events have made vulnerability impossible to ignore:

• In the Southern California fires and the Lahaina (Maui) wildfire, some neighborhoods and even individual structures suffered dramatically less damage than others in the same hazard footprint.
      • The now‑famous “miracle house” in Lahaina survived largely because of vulnerability‑related choices: building features and surroundings that reduced ignition pathways.

      Cat models already support secondary modifiers—detailed vulnerability inputs such as:

      • Roof type (e.g., clay tile vs. wood shingles)
      • Presence of decks, combustible fencing, and attached structures
      • Defensible space—vegetation clearance and separation from nearby fuels
      • Highly flammable ornamentals (e.g., juniper, Italian cypress) versus more resistant landscaping

      The potential impact is substantial:

      • Robust mitigation can cut expected damage by 50–60% in some communities
      • Poor construction and landscaping can increase expected damage by 20–30%
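To see how such factors play out, here is some stylized arithmetic using illustrative numbers drawn from the ranges above, not actual cat-model secondary modifiers:

```python
# Stylized arithmetic (illustrative factors, not actual cat-model secondary
# modifiers): apply the adjustments quoted above to a base expected loss.
base_expected_loss = 1_000.0  # expected annual loss for an average structure (arbitrary units)

well_mitigated = base_expected_loss * (1 - 0.55)  # robust mitigation: ~50-60% reduction
poorly_built = base_expected_loss * (1 + 0.25)    # weak construction/landscaping: ~20-30% increase

print(f"Well-mitigated structure: {well_mitigated:.0f}")
print(f"Poorly built structure:   {poorly_built:.0f}")
print(f"Spread between them:      {poorly_built / well_mitigated:.1f}x")
```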

      Field observations after Hurricane Milton reinforced the point: Florida’s modern building code performed well, while older structures with weaker roofs bore the brunt of losses.

      As organizations like the Insurance Institute for Business & Home Safety (IBHS) publish standards for wildfire‑resilient or wind‑resilient construction, vulnerability is becoming a lever insurers can actually pull, not just something they measure post‑event.


      7. Lessons Learned: Location, Change, and Continuous Updating

      Asked about lessons from recent catastrophes, the panelists converged on a few themes.

      1. Location Granularity Is Non‑Negotiable

      • The “right” resolution is peril‑dependent: 30 meters may be enough for some wildfire use cases, but urban flood can require 3‑foot grids.
      • Risk can change with one block, one slope, one drainage change—and models need to keep up.

      2. Vulnerability Is as Important as Hazard

      • Codes, materials, landscaping, and community‑wide defensible space shape the loss outcome as much as hazard intensity.
      • Aging or poorly built stock consistently underperforms, even under the same hazard.

      3. Land Use and Terrain Change Faster Than Your Models

      • Upstream development, new infrastructure, grading, and drainage changes can significantly alter downstream flood behavior.
      • If models don’t capture and refresh land‑use and terrain changes, they quickly become obsolete.

      In short: static models for a dynamic world are a recipe for surprise losses.


      8. A Call to Action: Build Precision Together

      The panel closed with a forward‑looking challenge to the industry.

      Use Advanced Tools—and the People Who Understand Them

      Andrew stressed that simply having access to AI and high‑res data isn’t enough. Insurers must:

      • Partner with scientists and specialists in wildfire, hydrology, meteorology, and geoscience
      • Integrate these experts alongside traditional actuaries and modelers
      • Ensure new tools are used correctly and interpreted wisely

      Use Analytics to Keep Insurance Viable

      Will pointed out that sophisticated geospatial analytics are not just about better underwriting—they’re about keeping whole regions insurable.

      As private carriers retreat from high‑risk states like Florida and California, the ability to:

      • Accurately differentiate good risks from bad
      • Target mitigation and resilience investments
      • Quantify the impact of codes and community action

      …will determine whether private insurance can function at all in these markets.

      Share Data, Share Insight, Share the Work

      Helge’s final point was about collaboration:

      • Insurers hold the claims data and portfolio experience that reveal what really drives loss
      • Insurtechs and analytics firms often hold the technical capacity and innovation culture needed to convert raw data into next‑generation models

      Bridging that gap requires data sharing, knowledge sharing, and mutual trust. Many of the answers to Nat Cat challenges are already latent in existing data—what’s missing is the joint effort to unlock them.


      Conclusion: From “Cat Management” to Resilience Engineering

      The panel made one thing very clear: Nat Cat management is no longer just about buying reinsurance and running annual cat models. It’s becoming a discipline of continuous, hyper‑local, data‑rich resilience engineering.

      That future rests on three pillars:

      1. Precision – high‑resolution hazard, terrain, and vulnerability data, refreshed frequently
      2. Intelligence – machine learning and advanced analytics grounded in domain expertise
      3. Partnership – insurers, reinsurers, scientists, and insurtechs working together, sharing data and insight

      For carriers, MGAs, and risk managers, the next step is not simply asking, “Which tools should we buy?” but rather:

      • Who are the right partners to help us understand our perils at the level they demand?
      • How do we connect our internal data to these new models and insights?
      • And how can we use this intelligence not only to price risk—but to reduce it?

      That, increasingly, is what will separate the carriers who merely survive the next catastrophe cycle from those who shape a more resilient future.

      Look for other blogs and video links in this series over the next few weeks.

      Author
Juan Mazzini
Global Head of Celent
Details
Geographic Focus: Asia-Pacific, EMEA, LATAM, North America
Horizontal Topics: Artificial Intelligence, Artificial Intelligence - Generative AI e.g. ChatGPT, Data & Analytics, Emerging Technologies, Innovation, Societal Issues (e.g., Inclusion, ESG, Diversity)
Industry: Property & Casualty Insurance