Session Abstracts:
Graduate Student Researcher
UCLA
Functional Recovery of Bridge Networks: Framework Development and Implementation
Co-Authors: Henry Burton (UCLA), Shanshan Chen (McGill University), Yazhou Xie (McGill University), Jamie Padgett (Rice University), Ádám Zsarnóczay (Stanford University), Michael Mieler (Arup), Ibbi Almufti (Arup), Selim Günay (Pacific Earthquake Engineering Research Center), and Vesna Terzić (California State University, Long Beach)
Abstract: While post-earthquake functional recovery assessment of buildings has made significant progress, advancements on this topic for bridges have been comparatively limited. This is partially due to the absence of a systematic and unified procedure, coupled with the lack of a well-organized set of data and information to support such assessments. This study addresses this research gap by proposing a methodology for evaluating bridge post-earthquake functional recovery that can be implemented at multiple scales (i.e., individual bridge and/or network). The framework comprises three modules that probabilistically evaluate the functionality state, impeding factor time delays, and repair or replacement durations. It is supplemented by expert opinion-based data and information from interviews with bridge engineers, engineering managers, and contractors. The framework is implemented on a bridge network composed of all state and local bridges in the city of Los Angeles. A large-scale simulation-based strategy is used to generate random samples for each constituent bridge, which serve as the basis for the regional recovery assessment. The proposed model supports the development of seismic retrofit programs and decisions related to post-earthquake inspections and repairs. The code and expert opinion-based database that implement the model will be integrated into the SimCenter research tool.
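For illustration, the sketch below outlines the kind of simulation-based sampling described in the abstract for a single bridge: a functionality state is sampled, and impeding-factor delays and repair durations are drawn and summed to obtain a recovery time. All function names, state definitions, and distribution parameters are hypothetical placeholders, not the authors' implementation or the SimCenter code.

    import numpy as np

    rng = np.random.default_rng(seed=1)

    # Hypothetical per-bridge inputs: probabilities of each functionality state
    # and lognormal parameters (median, dispersion) for delays and repair durations.
    functionality_states = ["open", "restricted", "closed"]
    state_probs = [0.70, 0.20, 0.10]
    impeding_params = {"restricted": (14.0, 0.5), "closed": (45.0, 0.6)}   # days
    repair_params   = {"restricted": (30.0, 0.4), "closed": (180.0, 0.5)}  # days

    def sample_recovery_time(n_samples=10_000):
        """Monte Carlo samples of total recovery time (days) for one bridge."""
        states = rng.choice(functionality_states, size=n_samples, p=state_probs)
        recovery = np.zeros(n_samples)
        for s in ("restricted", "closed"):
            mask = states == s
            med_d, beta_d = impeding_params[s]
            med_r, beta_r = repair_params[s]
            delay = rng.lognormal(np.log(med_d), beta_d, mask.sum())
            repair = rng.lognormal(np.log(med_r), beta_r, mask.sum())
            recovery[mask] = delay + repair
        return recovery

    print(np.percentile(sample_recovery_time(), [50, 90]))
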
Graduate Student Researcher
Stanford University
Enhanced Methods for Residential Building Stock Inventory Development for Seismic Assessment in California
Co-Authors: Gregory Deierlein (Stanford University) and Ádám Zsarnóczay (Stanford University)
Abstract: Detailed building inventories are required to support high-fidelity regional earthquake scenario assessments. As a critical input to any regional assessment, accurate building inventories can lead to robust predictions of structural performance, downtime, and human impact, whereas incorrect inventory information may introduce bias. Existing inventory information is limited in terms of both spatial and typological aggregation. This research focuses on the development and implementation of enhanced building inventory methods, with an emphasis on seismic vulnerabilities in single-family housing. First, an algorithm is developed to pair point-based data, such as the National Structures Inventory (obtained through the BRAILS tool), with building footprints. These methods aim to ground point-based inventory data in real building footprints, and in doing so reveal several key limitations of common data sources. The proposed method addresses several of these limitations by synthesizing multiple data sources, leveraging population information for better building estimates, and allowing for mixed-use building designations. Beyond leveraging existing data, this research also uses Large Vision-Language Models (VLMs), such as OpenAI’s GPT models, to detect specific seismic vulnerabilities from street-view imagery. Compared with the more commonly used Convolutional Neural Networks, VLMs do not require hand-labeled training data and offer improved generalizability. This study explores applications of these models for inventory development, including an assessment of the effect of building-specific context on their predictions. This inventory study is part of a larger effort to evaluate the regional risk and retrofit impact of seismically vulnerable housing, focusing on the California Earthquake Authority’s Brace & Bolt and Soft Story retrofit programs.
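The pairing of point-based records with building footprints could, in principle, be carried out with a spatial join; the sketch below illustrates one such approach using geopandas. File names, layer contents, and the handling of unmatched records are assumptions for illustration and do not represent the authors' algorithm.

    import geopandas as gpd

    # Hypothetical inputs: point-based inventory records and building footprint polygons.
    points = gpd.read_file("nsi_points.geojson")            # e.g., National Structures Inventory points
    footprints = gpd.read_file("building_footprints.geojson")

    # Assign each point to the footprint that contains it (after matching coordinate systems).
    points = points.to_crs(footprints.crs)
    paired = gpd.sjoin(points, footprints, how="inner", predicate="within")

    # Footprints with no paired point (and points with no containing footprint) become
    # candidates for the data-synthesis and imputation steps described in the abstract.
    unmatched_footprints = footprints.loc[~footprints.index.isin(paired["index_right"])]
    print(len(paired), "paired records;", len(unmatched_footprints), "footprints without point data")
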
Graduate Student Researcher
Utah State University
Comparison of Detailed and Simplified Seismic Loss Estimate Methodologies and their Sensitivity to Analysis Choices for a Steel Building Inventory
Co-Author: Mohsen Zaker Esteghamati (Utah State University)
Abstract: Numerous seismic loss methodologies exist in the literature, ranging from detailed methods, such as the FEMA P58 component-based and story loss function (SLF) approaches, to simplified methods, such as the HAZUS assembly-based approach. The present study benchmarks and systematically compares the FEMA P58 component-, SLF-, and HAZUS assembly-based methodologies using a consistent inventory of 621 steel moment-resisting buildings. A sensitivity analysis is conducted to evaluate the impact of various choices in the fragility and loss analyses on the estimated seismic loss, including: (1) the effect of different methods of constructing the relationship between engineering demand parameters (EDPs) and intensity measures (IMs) using linear, piecewise linear, and power law formulations, and (2) the impact of the residual drift value on demolition loss considering various deterministic median and standard deviation values of the demolition fragility functions. The results show that the median value of the total normalized expected annual loss (EAL) from the assembly-based approach is 0.7% lower and 23.5% higher than that of the component- and SLF-based methods, respectively. The standard deviation of the total normalized EAL is 3.4%, 1.2%, and 0.75% for the assembly-, component-, and SLF-based methods, respectively. The dispersion in the assembly-based method is significantly higher than in the two other methods, indicating possible inconsistent loss estimates for regional/portfolio assessment applications. Furthermore, in the assembly-based approach, the median value of the total normalized EAL decreases by 4.71% and 27.9% when using piecewise linear and power law EDP-IM formulations, respectively, compared to the linear EDP-IM model. In contrast, for the component-based approach, the median values of the total normalized EAL increase by 4.8% and 16.75% with piecewise linear and power law formulations, respectively, compared to the linear model. Similarly, the SLF-based method shows increases of 10.18% and 11.13% when using the piecewise linear and power law formulations, respectively.
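As context for the EDP-IM formulations compared in the study, the power law model EDP = a * IM^b is commonly fit as a linear regression in log-log space. The sketch below shows this fit on hypothetical cloud-analysis data; the numbers are illustrative only and are not taken from the study.

    import numpy as np

    # Hypothetical cloud-analysis data: spectral accelerations (IM) and peak story drifts (EDP).
    im  = np.array([0.10, 0.22, 0.35, 0.48, 0.61, 0.80, 1.05, 1.30])
    edp = np.array([0.002, 0.004, 0.007, 0.009, 0.013, 0.018, 0.026, 0.034])

    # Power-law model EDP = a * IM**b, fit as a linear regression in log-log space.
    b, ln_a = np.polyfit(np.log(im), np.log(edp), 1)
    a = np.exp(ln_a)

    # Record-to-record dispersion (standard deviation of the log residuals).
    beta = np.std(np.log(edp) - (ln_a + b * np.log(im)), ddof=2)

    print(f"EDP = {a:.4f} * IM^{b:.2f}, dispersion = {beta:.2f}")
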
Graduate Student Researcher
Oklahoma State University
Influence of Damage Modeling Uncertainty on Assessment of Structural Collapse under Dynamic Earthquake Loading
Co-Authors: Maha Kenawy (Oklahoma State University)
Abstract: Prediction of extreme limit states in civil structures subjected to earthquake loading is critical for performance-based seismic design (PBSD) but may be associated with substantial uncertainties due to the limitations of engineering modeling tools. In addition to the inherent randomness in structural and material properties (aleatory uncertainty) considered in PBSD, epistemic uncertainty due to the challenges of simulating strength and stiffness deterioration in structures can lead to biased estimates of damage states and collapse capacities, influencing decisions on structural safety and post-earthquake functionality. This study evaluates the influence of modeling uncertainty on the collapse assessment of reinforced concrete structural components under dynamic earthquake loading using models of different fidelities, including (1) a conventional lumped-plasticity frame model that uses nonlinear springs to represent several damage mechanisms, and (2) a new regularized distributed-plasticity frame model created by the second author, which utilizes a nonlocal damage technique to address strain singularities in representing the deterioration of concrete and steel. We create a framework that integrates our numerical models with the NHERI SimCenter tool quoFEM (Quantified Uncertainty with Optimization for the Finite Element Method) to study the effect of modeling uncertainty in structural collapse assessment arising from the constitutive parameters that control the post-peak response of the structural components. Analysis of hundreds of nonlinear dynamic simulations reveals significant bias in the estimated structural collapse capacity due to the choice of numerical modeling approach. We highlight the strengths and deficiencies of the different modeling strategies and identify promising approaches to reduce modeling uncertainty.
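As a conceptual illustration of the forward propagation of constitutive-parameter uncertainty that a tool such as quoFEM orchestrates, the sketch below samples hypothetical post-peak parameters and passes them through a placeholder collapse-capacity function. The parameter choices and the placeholder function are assumptions for illustration only, not the authors' models or the quoFEM API.

    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical distributions for constitutive parameters controlling post-peak response.
    n = 200
    concrete_crushing_strain = rng.lognormal(np.log(0.008), 0.3, n)
    steel_post_peak_slope    = rng.normal(-0.05, 0.015, n)

    def collapse_capacity(eps_cu, b_post):
        """Placeholder for a nonlinear dynamic collapse analysis driven by an external
        finite element model; in practice quoFEM would orchestrate this step."""
        return 1.2 + 40.0 * eps_cu + 2.0 * b_post + rng.normal(0.0, 0.05)

    sa_collapse = np.array([collapse_capacity(e, b)
                            for e, b in zip(concrete_crushing_strain, steel_post_peak_slope)])
    print("Median collapse Sa:", np.median(sa_collapse),
          "  Dispersion:", np.std(np.log(sa_collapse)))
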
Researcher
Stanford University
Modeling residual drifts: Recent improvements, challenges and opportunities
Co-Authors: Ioannis Vouvakis Manousakis (UC Berkeley) and Dimitrios Konstantinidis (UC Berkeley)
Abstract: Building replacement is the most severe possible outcome after an earthquake, and even small changes in its likelihood can significantly alter the seismic risk of a building. The two primary reasons for replacement are collapse and irreparable damage. While collapse is a substantial contributor only for existing buildings with certain vulnerable structural systems, irreparable damage is plausible for every building, even modern, code-conforming structures. The likelihood of irreparable damage is typically estimated by comparing residual displacements, specifically residual interstory drift ratios (RID), to an RID capacity defined based on engineering experience from historical seismic events.
Reliably capturing RID values has traditionally been challenging. We developed a model that characterizes RIDs using a Weibull distribution whose parameters are a function of the corresponding peak interstory drift ratio (PID). We also propose a global optimization strategy for robust calibration based on maximum likelihood estimation applied to censored raw simulation data.
In this talk, we first discuss the scenarios in which this new, more accurate characterization of the probabilistic RID distribution significantly impacts seismic performance assessment. We point out challenges in modeling RIDs due to the typically small simulation sample sizes and suggest simplified versions of our model that promise more robust results in such applications. Finally, we discuss open questions with the potential for impactful future work on modeling the dependence of RIDs on ground motions and structural system characteristics.
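A simplified version of the calibration idea described above, assuming a Weibull scale that varies linearly with PID, a fixed shape parameter, and left-censoring of very small RIDs, might look like the maximum likelihood sketch below; the data, parameterization, and optimizer are illustrative assumptions rather than the authors' published model.

    import numpy as np
    from scipy.optimize import minimize
    from scipy.stats import weibull_min

    # Hypothetical simulation data: peak (PID) and residual (RID) interstory drift ratios.
    rng = np.random.default_rng(3)
    pid = rng.uniform(0.005, 0.04, 300)
    rid = weibull_min(c=1.3, scale=0.15 * pid).rvs(random_state=rng)
    censor_limit = 1e-4                        # RIDs below this are treated as left-censored
    observed = rid > censor_limit

    def neg_log_likelihood(theta):
        """Simplified model: Weibull shape k and scale lam = a * PID,
        with left-censoring at censor_limit."""
        k, a = theta
        if k <= 0 or a <= 0:
            return np.inf
        lam = a * pid
        ll_obs = weibull_min.logpdf(rid[observed], c=k, scale=lam[observed]).sum()
        ll_cens = weibull_min.logcdf(censor_limit, c=k, scale=lam[~observed]).sum()
        return -(ll_obs + ll_cens)

    fit = minimize(neg_log_likelihood, x0=[1.0, 0.1], method="Nelder-Mead")
    print("Estimated shape and scale coefficient:", fit.x)
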
Researcher
UC Berkeley
Adaptive Gaussian Process Surrogate Modeling for Efficient Iterative Bayesian Calibration of Expensive Numerical Models
Co-Authors: Alexandros Taflanidis (University of Notre Dame), Sang-ri Yi (UC Berkeley), and Joel P. Conte (UC San Diego)
Abstract: Bayesian calibration of complex numerical models of engineering systems often entails a prohibitively high computational burden, presenting a significant challenge for accurate model calibration.
This study proposes an efficient iterative Bayesian calibration method by integrating adaptive Gaussian process (GP) surrogate modeling to approximate the likelihood function within a sequential Monte Carlo (MC) framework.
The method employs an iterative strategy to develop the GP surrogate. In each iteration, the current surrogate is used to sample from and approximate the posterior density, which is compared with the approximation of the posterior density in the previous iteration using quantitative convergence criteria. If the posterior densities are not sufficiently similar, the GP is refined using additional training data acquired through an adaptive design of experiments (DoE) approach that uses the weighted integrated mean square error. The weights are selected either to enhance the GP accuracy in the promising regions of the GP domain by strategically incorporating information from the intermediate auxiliary densities and the posterior probability density, or to facilitate broader exploration of the GP domain.
The calibration method also employs strategies to enhance the computational efficiency of both the DoE and sequential MC sampling.
Results from example static and dynamic Bayesian calibration problems (1) highlight the efficiency of the adaptive GP approach, (2) offer insights into optimal adaptation strategies, and (3) guide the selection of response quantities for GP implementation.
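A minimal sketch of the iterative idea, with a Gaussian process surrogate of the log-likelihood refined by adding training points where the posterior-weighted predictive uncertainty is largest, is given below. The one-dimensional test problem, the acquisition rule, and the convergence tolerance are simplified stand-ins for the weighted-IMSE DoE and sequential MC machinery described in the abstract.

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, ConstantKernel

    # Hypothetical 1-D calibration problem: observation y_obs compared to a model response.
    def expensive_log_likelihood(theta):
        y_obs, sigma = 2.0, 0.3
        y_model = np.sin(3.0 * theta) + theta      # stand-in for an expensive FE model
        return -0.5 * ((y_obs - y_model) / sigma) ** 2

    grid = np.linspace(-2, 4, 400)
    train_x = np.array([-2.0, 0.0, 2.0, 4.0])      # initial design of experiments
    prev_post = None

    for iteration in range(10):
        train_y = np.array([expensive_log_likelihood(t) for t in train_x])
        gp = GaussianProcessRegressor(ConstantKernel() * RBF(), normalize_y=True)
        gp.fit(train_x.reshape(-1, 1), train_y)

        mean, std = gp.predict(grid.reshape(-1, 1), return_std=True)
        post = np.exp(mean - mean.max())
        post /= post.sum()

        # Convergence check: compare successive posterior approximations.
        if prev_post is not None and 0.5 * np.abs(post - prev_post).sum() < 1e-3:
            break
        prev_post = post

        # Crude stand-in for the weighted-IMSE acquisition: add the point where the
        # GP uncertainty, weighted by the current posterior approximation, is largest.
        train_x = np.append(train_x, grid[np.argmax(std * post)])

    print("Posterior mode estimate:", grid[np.argmax(post)])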