NHERI Computational Symposium

May 28-29, 2026

Regional Risk and Resilience: Scalable simulation and uncertainty

Session 3B: Jarvis Auditorium, 1:00 pm. Chair: Milad Roohi

Soung Eil Houng


PhD Student UC Berkeley

Scalable Scenario-based Earthquake Risk Modeling via Linearized Ground-Motion–Fragility Coupling and Probabilistic PCA

Co-Author: Luis Ceferino (UC Berkeley)

Abstract: In scenario-based regional risk modeling, the traditional workflow follows a two-step procedure that first simulates spatially correlated ground motions and subsequently samples building damage states from lognormal fragility functions. In this procedure, the dimensionality grows with the number of assets and quickly becomes computationally prohibitive for large cities. To overcome this limitation, we introduce a scalable computational framework that (i) reformulates the ground-motion–fragility coupling through an exact linearization, thereby converting the traditional two-step workflow into a one-dimensional Gaussian simulation, and (ii) applies probabilistic principal component analysis to the linearized formulation, enabling the identification of low-dimensional latent variables for efficient simulation. We validate the proposed approach on a downtown San Francisco portfolio of 1,000 buildings, benchmarking its predicted damage against results from SimCenter R2D’s computational testbed under the same inputs. The most likely damage states match exactly, with a mean difference below 0.44 (on a 0–4 ordinal scale representing none to complete damage), confirming the framework’s robustness. The achievable dimensionality reduction depends primarily on the portfolio’s spatial extent rather than its building density, implying that, given a fixed spatial region, the computational burden remains nearly constant with portfolio size, unlike current approaches. In tests on downtown San Francisco (15,836 buildings) and the broader Bay Area, a single latent dimension and 20 latent dimensions, respectively, reproduce the benchmark loss distributions within a Kolmogorov–Smirnov distance below 0.02. The method reduces computational complexity from O(N^3) to O(N^2), where N denotes the number of buildings, yielding 110x faster simulation for a 30,000-building subset, with speedups growing linearly with portfolio size. The framework substantially lowers the computational barrier for high-resolution regional seismic risk assessment.
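To make the dimensionality-reduction idea concrete, here is a minimal NumPy sketch that applies probabilistic PCA to a spatial correlation model and samples correlated residuals from a low-dimensional latent space. It is illustrative only, not the authors' implementation (it omits their exact linearization step), and the portfolio size, correlation length, and fragility parameters are all assumed:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical portfolio: N sites in a compact downtown-scale region (km)
N, k, n_sims = 1000, 5, 1000
coords = rng.uniform(0, 5, size=(N, 2))

# Spatially correlated within-event residuals: exponential correlation model
d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
C = np.exp(-d / 10.0)                      # 10 km correlation length (assumed)

# Probabilistic PCA: keep k leading eigenpairs, lump the rest into isotropic noise
eigval, eigvec = np.linalg.eigh(C)
eigval, eigvec = eigval[::-1], eigvec[:, ::-1]   # descending order
sigma2 = eigval[k:].mean()                 # PPCA residual variance
W = eigvec[:, :k] * np.sqrt(np.maximum(eigval[:k] - sigma2, 0.0))

# Each realization needs only a k-dimensional latent draw plus iid noise,
# instead of an O(N^3) Cholesky factorization of the full N x N matrix
z = rng.standard_normal((k, n_sims))
residuals = W @ z + np.sqrt(sigma2) * rng.standard_normal((N, n_sims))

# Fold a lognormal fragility check into the same Gaussian space
ln_im = np.log(0.3) + 0.5 * residuals      # median IM and sigma_lnIM (assumed)
ln_theta, beta = np.log(0.4), 0.6          # fragility median and dispersion (assumed)
damaged = ln_im > ln_theta + beta * rng.standard_normal((N, n_sims))
print("mean damage probability:", damaged.mean())
```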

R2D

Jinyan Zhao


Postdoctoral Scholar California Institute of Technology

Reducing seismic risk assessment uncertainty with a physics-informed non-ergodic ground motion model: a case study on Los Angeles’ power transmission network

Co-Authors: Grigorios Lavrentiadis (University at Buffalo) and Domniki Asimaki (California Institute of Technology)

Abstract: Conventional seismic risk assessment relies on empirical ground‐motion models (GMMs) derived from global datasets under the ergodic assumption, which presumes that the relationship between ground-motion intensity and earthquake characteristics is identical across similar tectonic environments. This assumption neglects region-specific geological and geotechnical effects, thereby introducing substantial aleatory uncertainty. Recent studies have demonstrated that non-ergodic GMMs (NGMMs), developed using regionally specific data and Gaussian Process (GP) regression, can significantly reduce this uncertainty. However, existing NGMMs face two key challenges: (1) the limited availability of regional ground-motion recordings, which constrains the models’ effectiveness in uncertainty reduction, and (2) the prohibitive computational cost of conventional GP implementations when large datasets are available.

To address these challenges, this study develops an NGMM for the Los Angeles region using a large-scale dataset of physics-based ground-motion simulations from SCEC CyberShake. Developing the NGMM requires tuning GP hyperparameters and making conditional predictions over approximately 85 million data points, which necessitates scalable computational strategies. To achieve this, we employ the GPU-accelerated Python framework GPyTorch, together with a sparse conditional point selection strategy based on Kullback–Leibler divergence minimization.
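For readers unfamiliar with the tooling, a sparse variational GP in GPyTorch looks roughly like the sketch below; maximizing the variational ELBO amounts to minimizing a KL divergence, which is the same flavor of criterion as the sparse point selection mentioned above, though the authors' strategy and NGMM covariance structure are more elaborate than this toy model (all sizes and features here are assumptions):

```python
import torch
import gpytorch

class SparseGPModel(gpytorch.models.ApproximateGP):
    """Sparse variational GP: the training set is summarized by m inducing points."""
    def __init__(self, inducing_points):
        variational_distribution = gpytorch.variational.CholeskyVariationalDistribution(
            inducing_points.size(0))
        variational_strategy = gpytorch.variational.VariationalStrategy(
            self, inducing_points, variational_distribution,
            learn_inducing_locations=True)
        super().__init__(variational_strategy)
        self.mean_module = gpytorch.means.ConstantMean()
        self.covar_module = gpytorch.kernels.ScaleKernel(
            gpytorch.kernels.RBFKernel(ard_num_dims=3))

    def forward(self, x):
        return gpytorch.distributions.MultivariateNormal(
            self.mean_module(x), self.covar_module(x))

# Toy stand-in data: (site x, site y, magnitude) -> ground-motion residual
train_x = torch.rand(5000, 3)
train_y = torch.sin(6 * train_x[:, 0]) + 0.1 * torch.randn(5000)

model = SparseGPModel(inducing_points=train_x[:200].clone())
likelihood = gpytorch.likelihoods.GaussianLikelihood()
mll = gpytorch.mlls.VariationalELBO(likelihood, model, num_data=train_y.size(0))
optimizer = torch.optim.Adam([*model.parameters(), *likelihood.parameters()], lr=0.01)

model.train(); likelihood.train()
for _ in range(200):
    optimizer.zero_grad()
    loss = -mll(model(train_x), train_y)   # negative ELBO
    loss.backward()
    optimizer.step()
```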

Preliminary results demonstrate that the proposed NGMM delivers accurate ground-motion predictions with robust uncertainty quantification while achieving a substantial reduction in overall modeling uncertainty. The trained NGMM may serve as a surrogate model for the underlying physics-based simulator, and its practical utility is demonstrated through a system-level seismic risk assessment of the Los Angeles power transmission network; SimCenter’s Pelicun is employed for this assessment.

Pelicun, Data Depot, DesignSafe HPC

Chenhao Wu (GSC Member)


PhD Student University of California Los Angeles

Uncertainty Quantification of Surrogate Exposure Characterization in Regional Seismic Risk Assessment

Co-Author: Henry Burton (UCLA)

Abstract: Exposure in a regional risk assessment refers to the set of physical assets that can be affected by extreme events. Exposure characterization aims to assign properties to these assets (e.g., number of floors for a building, number of spans for a bridge) that are necessary to group individual assets into an appropriate fragility class. Such characterization has benefited from available datasets, such as the National Structure Inventory and the National Bridge Inventory. However, these public datasets often lack the detailed attributes required for high-resolution regional assessment. To augment them, manual inspections (either on-site or virtual) have been performed. The resulting inspection data, when combined with the public datasets, form a comprehensive dataset that serves as the basis for constructing surrogate models for large-scale exposure characterization. Because surrogate models are inherently imperfect, their misclassifications can introduce additional epistemic uncertainty and potential bias into downstream risk estimates. This study aims to: (1) collect California statewide bridge attributes through systematic virtual inspections, (2) construct a series of surrogate models with varying levels of predictive accuracy, and (3) assess the magnitude of the additional epistemic uncertainty and bias introduced through surrogate-based exposure characterization.
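As a rough sketch of how surrogate misclassification can be propagated into a downstream estimate (illustrative only; the data, class count, and loss ratios are invented, and the study's actual surrogates and fragility mapping will differ):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)

# Invented stand-in data: features from public inventories, labels from
# virtual inspections assigning each bridge to one of three fragility classes
X = rng.normal(size=(2000, 6))
y = (X[:, 0] + 0.5 * rng.normal(size=2000) > 0).astype(int) + (X[:, 1] > 1).astype(int)

# A small labeled (inspected) subset trains the surrogate; the rest is "unlabeled"
X_lab, X_unlab, y_lab, _ = train_test_split(X, y, test_size=0.75, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_lab, y_lab)
proba = clf.predict_proba(X_unlab)               # (n_bridges, n_classes)

# Toy class-conditional expected loss ratios (assumed, illustration only)
loss_ratio = np.array([0.05, 0.15, 0.40])

# Monte Carlo over plausible class assignments: the spread across draws reflects
# the extra epistemic uncertainty contributed by surrogate misclassification
draws = [loss_ratio[[rng.choice(len(loss_ratio), p=p) for p in proba]].mean()
         for _ in range(200)]
point = loss_ratio[proba.argmax(axis=1)].mean()  # deterministic assignment
print(f"deterministic {point:.4f} | MC {np.mean(draws):.4f} +/- {np.std(draws):.4f}")
```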

R2D, DesignSafe HPC

Raul Rincon


Postdoctoral Scholar Rice University

Bias-based evaluation of sub-model fidelity integration effects on multiscale infrastructure resilience estimates

Co-Author: Jamie E. Padgett (Rice University)

Abstract: Computational modeling has become a cornerstone for planning and designing the built environment under the potential (and uncertain) impacts of natural hazards. The increasing availability of computational resources has fueled a quest for highly detailed engineering models. While such models aim to better represent the problem’s granularity, structure-to-infrastructure system models require integrating several submodels across scales of analysis. We argue that, depending on the quantity of interest (QoI), highly detailed models can lead to outcomes that do not improve accuracy with respect to lower-fidelity abstractions, yet are more expensive. We propose a method that helps determine whether a multiscale model derived from lower-fidelity submodels can achieve a more efficient Monte Carlo estimate than using the Most Accurate But Expensive (MABE) model. The method yields a model comparison criterion based on the bias of the reduced-fidelity model (relative to the MABE model), thereby facilitating model selection. We assess the role of model resolution, modeling choices, and integration, all of which affect model fidelity, in terms of the bias they induce on QoIs, using transportation networks subjected to earthquakes. We leverage high-performance computing resources from the DesignSafe Cyberinfrastructure to conduct sensitivity analyses across multiple study cases. For some QoIs, the interactions of submodel fidelities across scales were found to overshadow the effort invested in attaining higher fidelity in lower-scale submodels. These findings are crucial for ensuring that models are reliable, cost-efficiently developed, and defined at a level of resolution that suits the intended purpose of the analysis.
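The flavor of such a comparison criterion can be shown with a back-of-the-envelope calculation: under a fixed compute budget, a Monte Carlo estimate's error combines model bias with sampling variance, so a biased-but-cheap model can beat the MABE model. A minimal sketch, with all costs, biases, and standard deviations assumed (not the paper's actual numbers or criterion):

```python
import numpy as np

def mc_rmse(bias, std, cost_per_run, budget):
    """RMSE of a Monte Carlo estimate of a QoI mean under a fixed compute budget.
    Total error combines the model's bias with sampling error, which shrinks
    with the number of runs the budget affords."""
    n = max(int(budget / cost_per_run), 1)
    return np.sqrt(bias**2 + std**2 / n)

budget = 1_000.0   # CPU-hours (assumed)
# MABE model: taken as unbiased by definition, but each run is expensive
rmse_mabe = mc_rmse(bias=0.0, std=0.30, cost_per_run=50.0, budget=budget)
# Reduced-fidelity multiscale model: cheap runs, but biased relative to MABE
rmse_low = mc_rmse(bias=0.02, std=0.35, cost_per_run=0.5, budget=budget)

print(f"MABE   RMSE: {rmse_mabe:.4f}  ({int(budget / 50.0)} runs)")
print(f"low-fi RMSE: {rmse_low:.4f}  ({int(budget / 0.5)} runs)")
# Here the cheaper model wins until its bias exceeds the MABE sampling error.
```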

Data Depot, DesignSafe HPC

Ahsan Kareem


Professor University of Notre Dame

High-fidelity wind simulation and visualization for real cities based on Google Maps

Co-Authors: Donglian Gu (University of Science and Technology Beijing) and Haoran Shao

Abstract: This study proposes an efficient end-to-end workflow for urban wind environment assessment, spanning Google Maps data extraction, OpenFOAM simulation, and interactive visualization on the Google Maps platform. Taking Shinjuku Ward, Tokyo, Japan, as an example, high-fidelity numerical models are constructed by extracting architectural geometry from Google Maps; the 3D models obtained this way meet the geometric accuracy requirements for urban-scale CFD simulation. Wind field simulations are then performed using the open-source computational fluid dynamics tool OpenFOAM. Finally, high-fidelity visualization of the CFD results as 2D heatmaps and 3D point clouds over the real geographic environment provides an efficient and intuitive technical pathway for assessing the urban wind environment.
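As a rough illustration of the visualization end of such a workflow (not the authors' code; the sampled-surface file name, its column layout, and all rendering choices are assumptions), a pedestrian-level wind-speed field sampled from OpenFOAM could be rasterized into an overlay-ready heatmap like this:

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.interpolate import griddata

# Hypothetical pedestrian-level plane exported by OpenFOAM's sampling function
# objects; columns assumed to be x, y, z, Ux, Uy, Uz (raw format)
pts = np.loadtxt("postProcessing/surfaces/U_z2m.xy")
x, y = pts[:, 0], pts[:, 1]
speed = np.linalg.norm(pts[:, 3:6], axis=1)      # wind speed magnitude

# Resample onto a regular grid so the image maps cleanly onto geographic bounds
xi = np.linspace(x.min(), x.max(), 512)
yi = np.linspace(y.min(), y.max(), 512)
zi = griddata((x, y), speed, (xi[None, :], yi[:, None]), method="linear")

# Render a borderless PNG; with its corner coordinates converted to lat/lon it
# can be draped as a semi-transparent ground overlay on the Google Maps platform
fig, ax = plt.subplots(figsize=(6, 6))
ax.imshow(zi, origin="lower", cmap="jet", alpha=0.6,
          extent=(x.min(), x.max(), y.min(), y.max()))
ax.set_axis_off()
fig.savefig("wind_heatmap.png", dpi=200, bbox_inches="tight",
            pad_inches=0, transparent=True)
```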

WE-UQ, Data Depot

Jikun Liu


PhD Student Texas A&M University

A Wide-and-Deep-Based Time Sequence Model for Predicting Power Outages Caused by Extreme Winter Storms

Co-Authors: Zhe Zhang (Texas A&M University), Jangjae Lee (University of Houston), Stephanie Paal (Texas A&M University), Diya Li (ESRI), and Yuhan Cheng (Meta USA)

Abstract: In February 2021, Winter Storm Uri caused widespread power outages across Texas, affecting over 5 million people and resulting in an estimated $190 billion in damages. To support outage resilience under extreme weather, this study introduces the Wide-and-Deep-Based Time Sequence Algorithm (WDTSA) for predicting power outage severity. The model combines a deep bidirectional LSTM pathway for time-lagged weather and outage-history features with a wide pathway for weakly temporal features, enabling synergistic integration of heterogeneous inputs. This dual-pathway design significantly outperforms standard baselines, achieving 0.99 accuracy at coarse resolution (K=3) and 0.84 at fine granularity (K=15), while using fewer parameters than expanded LSTM alternatives. Ablation and comparative analyses confirm that the performance gains arise from specialized feature routing and non-additive synergy between input groups, increasingly so under more complex classification tasks. County-level visualizations during Winter Storm Uri illustrate the model’s ability to anticipate outage progression, offering actionable forecasts for emergency planning. While the current validation focuses on extreme events and does not model spatial dependencies, the framework provides a compact and flexible foundation for resilient grid operations and targeted response to weather-induced disruptions.
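A minimal PyTorch sketch of the dual-pathway idea follows (illustrative only: the layer sizes, feature counts, and the additive logit fusion of the classic wide-and-deep design are assumptions, and the authors' WDTSA architecture may differ):

```python
import torch
import torch.nn as nn

class WideDeepOutageModel(nn.Module):
    """Wide-and-deep time-sequence classifier sketch (all names/sizes assumed).
    Deep path: BiLSTM over time-lagged weather and outage-history sequences.
    Wide path: linear layer over weakly temporal (near-static) features.
    """
    def __init__(self, seq_feats=8, static_feats=12, hidden=64, n_classes=15):
        super().__init__()
        self.lstm = nn.LSTM(seq_feats, hidden, num_layers=2,
                            batch_first=True, bidirectional=True)
        self.wide = nn.Linear(static_feats, n_classes)
        self.head = nn.Linear(2 * hidden, n_classes)

    def forward(self, seq, static):
        out, _ = self.lstm(seq)            # (batch, T, 2*hidden)
        deep = self.head(out[:, -1, :])    # last time step summarizes the sequence
        return deep + self.wide(static)    # fuse the two pathways' logits

model = WideDeepOutageModel()
logits = model(torch.randn(32, 24, 8), torch.randn(32, 12))  # 24 hourly lags
print(logits.shape)                        # torch.Size([32, 15])
```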