Session Abstracts:
Session 7A
Research Associate
University of Texas at Austin
Presentation Title: Probabilistic machine learning approaches for efficient regional-scale seismic fragility and loss assessments of buildings
Co-Authors: Ertugrul Taciroglu; Peng-Yu Chen
Abstract: Performance-based seismic design (PBSD) has become a widely accepted design strategy in civil engineering practice and can effectively reduce damage to structures from earthquake shaking and other earthquake-induced hazards, such as landslides and liquefaction. Unlike traditional design methods, PBSD evaluates a building's design to control the probability of reaching different damage states, considering a realistic range of expected earthquake intensities in the region where the building is located. One crucial step in PBSD is fragility analysis, usually carried out via incremental dynamic analysis (IDA) or probabilistic seismic demand analysis (PSDA). Compared with IDA, PSDA does not require a large number of nonlinear time history analyses (NTHAs) or manual scaling of the ground motions, but it does require an assumed relationship, normally linear in log-log scale, between the engineering demand parameters (EDPs) and the intensity measures (IMs). This assumption can be inaccurate when severe structural nonlinearity occurs. Recently, we analyzed the seismic fragilities and probable losses of 16,030 buildings in Istanbul, Turkey, utilizing a suite of 57 broadband (8–12 Hz) physics-based ground motion simulations. In this study, we revisit that prior work using machine learning-based approaches instead. NGBoost, a gradient boosting-based model, and a Bayesian Neural Network (BNN) are adopted for probabilistic prediction of EDPs given the structural, site, and ground motion properties. The predicted EDPs and their uncertainties are then used to conduct the regional-scale fragility and loss assessments. Results indicate that the well-trained probabilistic machine learning models can accurately predict the structural responses and quantify their uncertainties over a wide range of ground motion intensities. The trained models can also serve as surrogates for rapid post-earthquake damage assessment in the Istanbul region.
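For illustration, a minimal sketch of the probabilistic EDP prediction and exceedance-probability step, using the open-source ngboost package; the feature matrix, target values, and damage-state threshold below are hypothetical stand-ins, not the Istanbul dataset:

    import numpy as np
    from scipy.stats import norm
    from ngboost import NGBRegressor
    from ngboost.distns import Normal

    # Hypothetical stand-in data: features = structural, site, and ground
    # motion properties; target = ln(EDP), e.g., log peak story drift ratio.
    rng = np.random.default_rng(0)
    X_train, y_train = rng.random((1000, 8)), rng.standard_normal(1000)
    X_new = rng.random((5, 8))

    # NGBoost returns a full predictive distribution, not just a point estimate.
    ngb = NGBRegressor(Dist=Normal, n_estimators=500, learning_rate=0.01)
    ngb.fit(X_train, y_train)
    dist = ngb.pred_dist(X_new)
    mu, sigma = dist.params['loc'], dist.params['scale']

    # Fragility-style exceedance probability for an assumed 2% drift limit.
    p_exceed = 1.0 - norm.cdf(np.log(0.02), loc=mu, scale=sigma)

The predictive standard deviation sigma carries the response uncertainty directly into the exceedance calculation, which is what distinguishes this approach from a deterministic surrogate.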
Associate Professor
University of California, Los Angeles
Presentation Title: Complete Reconstruction of Backbone Curves for use in Structural Macro-Element Models
Co-Authors: Sepehr Saeidi; Amir Hossein Asjodi; Kiarash M. Dolatshahi
Abstract: A key mission of the NHERI SimCenter is to promote high-fidelity simulations in natural hazard engineering. In this regard, macro-element models have proven especially useful for high-fidelity nonlinear structural response simulation in regional seismic risk assessments. This presentation will discuss a new approach that reconstructs the entire backbone curve of a macro-finite element model. The framework is demonstrated using unreinforced masonry (URM) walls but is applicable to other types of structural components. Machine learning (ML) is used to quantify the contributions of the various failure modes that ultimately govern the behavior of URM walls, and these contributions are then used to reconstruct the backbone curve. Using data from 330 backbone curves from URM experiments, an autoencoder combined with the K-means algorithm clusters the walls into four damage classes, each with its own characteristic backbone curve. A new hybridity index is also introduced based on the distance from the cluster centers, which quantifies the contribution of each failure mode to the ultimate cyclic behavior. An XGBoost model is then used to map the design and mechanical properties of the URM walls to the proposed hybridity index. Lastly, these indices are fed into a Random Forest model to predict the weights that quantify the contribution of each failure mode, which are in turn used to reconstruct the original backbone curve. The explicit consideration of the structural component failure mode via the characteristic backbone curves makes the proposed framework highly intuitive and interpretable, especially compared to previous ML-based approaches.
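A minimal sketch of the clustering and weighted-reconstruction idea, using scikit-learn on random placeholder curves; the autoencoder encoding, the XGBoost mapping, and the Random Forest weight predictor are omitted, and the inverse-distance form of the hybridity index is an assumption for illustration:

    import numpy as np
    from sklearn.cluster import KMeans

    # Hypothetical stand-in: each row is a backbone curve resampled onto a
    # fixed drift grid (the actual framework clusters autoencoder embeddings).
    rng = np.random.default_rng(0)
    curves = rng.random((330, 50))

    # Cluster the walls into four damage classes; the cluster centers act as
    # the characteristic backbone curve of each class.
    km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(curves)
    characteristic = km.cluster_centers_

    # Hybridity index: inverse-distance weights to the cluster centers
    # (assumed functional form), i.e., each failure mode's contribution.
    d = np.linalg.norm(curves[:, None, :] - characteristic[None, :, :], axis=2)
    w = 1.0 / (d + 1e-9)
    w /= w.sum(axis=1, keepdims=True)

    # Reconstruct each wall's backbone as a weighted mix of the four
    # characteristic curves.
    reconstructed = w @ characteristic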
Senior Associate
Degenkolb
Presentation Title: AI for ASCE 41 Life Safety Seismic Performance Evaluation
Co-Authors: Pearl Ranchal; Daniel Gaspar; Evan Prodo; Alex Chu
Abstract: Over the last two decades, Degenkolb Engineers has performed seismic evaluations of various types of buildings in California, USA, in accordance with the consensus standard ASCE/SEI 41, Seismic Evaluation and Retrofit of Existing Buildings. A database of around 3,000 building seismic evaluation projects was developed and used to train Artificial Intelligence (AI) models based on the Random Forest and Support Vector Machine (SVM) machine learning (ML) algorithms. The presentation will cover key training features, feature importance, validation approaches, and example applications to large building inventories.
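A minimal sketch of the model-training and feature-importance step with scikit-learn; the feature matrix below is a hypothetical stand-in for the proprietary Degenkolb database:

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.svm import SVC
    from sklearn.model_selection import cross_val_score

    # Hypothetical stand-in data: one row per evaluated building (e.g., era,
    # lateral system, stories, site class); label = life-safety pass/fail.
    rng = np.random.default_rng(0)
    X, y = rng.random((3000, 12)), rng.integers(0, 2, 3000)

    # Cross-validated comparison of the two model families named above.
    for name, model in [('RF', RandomForestClassifier(n_estimators=300, random_state=0)),
                        ('SVM', SVC(kernel='rbf'))]:
        print(name, cross_val_score(model, X, y, cv=5).mean())

    # Feature importance, one of the items discussed in the presentation.
    rf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X, y)
    print(rf.feature_importances_)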
PhD Student
Stanford University
Presentation Title: Surrogate Models of Highway Bridges for Regional-Scale Simulations of Transportation Networks
Co-Authors: Greg Deierlein; Kuanshi Zhong; Peter Lee
Abstract: Understanding the performance of a regional bridge stock is an integral part of planning for the design, retrofit, and post-earthquake recovery of transportation systems. This research focuses on developing machine learning surrogates of high-fidelity bridge performance simulations and implementing them in the R2D simulation platform. First, a comparative study is conducted of two surrogate models: Gaussian Process Regression and Probabilistic Learning on Manifolds. Each method is trained to predict various bridge response parameters from specific structural and ground motion parameters. The two methods are trained using analyses of different bridge designs with gridded ground motion characteristics and then validated against benchmark response data for specific bridge realizations under hazard-consistent sets of ground motions. The study also involves the integration of Caltrans fragility functions into Pelicun, which relate the structural response data to the standard damage states used in post-earthquake investigations. This study compares the accuracy of the predictions from each model, as well as ease of use, dimensionality, input and output distribution capabilities, and limitations of the models. The trained surrogate models are then implemented in R2D to assess their applicability in a regional earthquake scenario study. The models are evaluated for 12 archetype bridges under ground motions corresponding to the HayWired M7 earthquake scenario, simulated using R2D’s UCERF/OpenSHA functionality. Results of the surrogate models are validated through comparisons to a set of “ground truth” data created using site-specific OpenSees models in R2D’s CustomPy functionality.
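For the Gaussian Process Regression surrogate, a minimal scikit-learn sketch is shown below; the inputs, targets, and kernel choice are illustrative assumptions rather than the study's actual configuration, and Probabilistic Learning on Manifolds has no drop-in scikit-learn equivalent, so it is omitted:

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, WhiteKernel

    # Hypothetical stand-in design: columns = structural parameters (span,
    # column height, ...) plus ground motion intensity measures;
    # target = a bridge response parameter such as ln(column drift ratio).
    rng = np.random.default_rng(0)
    X, y = rng.random((200, 6)), rng.standard_normal(200)

    kernel = 1.0 * RBF(length_scale=np.ones(6)) + WhiteKernel()
    gpr = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, y)

    # The surrogate returns both a mean prediction and its uncertainty,
    # which downstream fragility evaluation (e.g., in Pelicun) can consume.
    mu, sigma = gpr.predict(rng.random((10, 6)), return_std=True)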
Associate Professor
Oregon State University
Presentation Title: Use of machine learning to identify mechanistic behavior of housing during the 2021 Marshall Fire
Co-Authors: Amy Metz; Abbie Liel
Abstract: On December 30, 2021, the Marshall Fire ignited in Boulder County, Colorado. Driven by hurricane-force wind gusts (129–160 kilometers per hour (kph)) and sustained winds of 80–96 kph, the small grass fire intensified and spread. A random forest (RF) model was employed to understand housing loss. Four models were developed with remotely collected housing loss data, considering subsets of the data for each impacted jurisdiction as well as the aggregated dataset. Many of the high-importance predictors of housing outcomes in the Marshall Fire aligned with previous studies. The study showed that an RF model can predict housing survivability in a wildfire more accurately than a logistic regression model, that housing arrangement within communities was influential in predicting home survivability for the Marshall Fire, and that aggregating data across jurisdictions can overlook jurisdiction-specific predictors of home survivability. Overall, none of the most impactful predictors of housing loss were within homeowner control; rather, they were a function of community layout and planning. In addition, all models used only pre-fire predictors, demonstrating the predictive capacity of the developed models. This presentation will summarize the important variables identified by the random forest analysis and what they mean for future mitigation practices in suburban communities.
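The RF-versus-logistic-regression comparison can be sketched as below, with random placeholder data standing in for the remotely collected housing loss dataset:

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score
    from sklearn.inspection import permutation_importance

    # Hypothetical pre-fire predictors (structure spacing, vegetation,
    # roof type, ...); label = home survived (1) or was destroyed (0).
    rng = np.random.default_rng(0)
    X, y = rng.random((1000, 10)), rng.integers(0, 2, 1000)

    rf = RandomForestClassifier(n_estimators=500, random_state=0)
    lr = LogisticRegression(max_iter=1000)
    print('RF:', cross_val_score(rf, X, y, cv=5).mean())
    print('LR:', cross_val_score(lr, X, y, cv=5).mean())

    # Permutation importance as one way to rank the influential predictors.
    rf.fit(X, y)
    imp = permutation_importance(rf, X, y, n_repeats=10, random_state=0)
    print(imp.importances_mean)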
PhD Student
Washington State University
Presentation Title: Machine-learning-enabled Dynamic Vegetation Mapping for Enhanced Wildfire Risk Assessment
Co-Author: Ji Yun Lee
Abstract: Wildfire risk has escalated considerably in recent years, posing a significant challenge to ecosystems and communities. The confluence of shifting climate patterns, evolving land management practices, and urban expansion has driven this surge in wildfire occurrences, with projections indicating a further exacerbation of this trend across multiple regions of the United States. While accurate wildfire risk assessment is crucial for mitigation, a persistent challenge arises from the predominantly static and outdated nature of assessment data, resulting in a notable gap between projected and actual risk. This discrepancy significantly hampers risk management efficacy, as it fails to capture the dynamic factors driving evolving wildfire threats. Recent advances in remote sensing technologies have made fine-scale dynamic data on vegetation and land cover available; despite their potential, such data remain largely untapped in wildfire risk assessment. This study bridges the gap by developing machine learning (ML) tools specifically tailored to harness satellite imagery for dynamic vegetation mapping. Using satellite-imagery-derived dynamic vegetation data, the ML tools (a) classify the range of vegetation density and (b) identify the vegetation species at every location, information necessary for accurate wildfire risk assessment. Central to their training is Harmonized Landsat Sentinel-2 imagery, chosen for its widespread accessibility and high temporal resolution, though its spatial resolution is comparatively coarse. This limitation is overcome through transfer learning, leveraging a temporal vision transformer and a masked autoencoder pre-trained by the IBM/NASA team. The proposed ML tools enable dynamic vegetation mapping, enhancing wildfire risk assessment accuracy while circumventing resource-intensive in-situ data collection.
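The transfer-learning step can be sketched in PyTorch as follows; a generic torchvision ViT backbone stands in for the IBM/NASA pre-trained geospatial model, and the class counts for the two task heads are illustrative assumptions:

    import torch
    import torch.nn as nn
    from torchvision.models import vit_b_16, ViT_B_16_Weights

    # Stand-in for the pre-trained temporal ViT / masked autoencoder:
    # a generic ImageNet-pretrained ViT backbone with its head removed.
    backbone = vit_b_16(weights=ViT_B_16_Weights.DEFAULT)
    backbone.heads = nn.Identity()          # expose the 768-d embedding
    for p in backbone.parameters():
        p.requires_grad = False             # freeze; train only the heads

    # Two lightweight task heads, matching the abstract's two tasks.
    density_head = nn.Linear(768, 4)        # (a) density ranges (assumed 4 bins)
    species_head = nn.Linear(768, 10)       # (b) species (assumed 10 classes)

    # Forward pass on a dummy image tile.
    x = torch.randn(1, 3, 224, 224)
    with torch.no_grad():
        feat = backbone(x)
    density_logits, species_logits = density_head(feat), species_head(feat)

Freezing the backbone and training only the heads is the simplest transfer-learning recipe; partial fine-tuning of the upper backbone layers is a common variant when labels permit.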
Postdoctoral Scholar
University of Michigan
Presentation Title: Zero-shot Building Attribute Extraction from Large-Scale Vision and Language Models
Co-Authors: Brian Wang; Frank McKenna; Stella X. Yu
Abstract: SimCenter’s current building recognition framework, BRAILS, uses supervised learning to extract building-relevant information from satellite and street-view images. However, each task module requires human-annotated data, which hinders scalability and robustness to regional variations and annotation imbalances. We propose a new zero-shot workflow for building attribute extraction that utilizes large-scale vision and language models without relying on external annotations. The proposed workflow contains key components of general-purpose image-level captioning, interest-region detection, and segment-level captioning, all based on vocabularies pertinent to structural and civil engineering. Consequently, our framework delivers a single general-purpose building attribute extraction model that is versatile across a variety of tasks, robust to regional image differences, and free of the need for human expert annotations.
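As one concrete instance of the zero-shot idea, the sketch below scores a street-view image against engineering-vocabulary prompts using the open-source CLIP model via Hugging Face transformers; the image path and prompt list are hypothetical, and this is an analog of the approach, not the actual BRAILS workflow:

    import torch
    from PIL import Image
    from transformers import CLIPModel, CLIPProcessor

    model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
    processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

    # Hypothetical structural-engineering vocabulary for one attribute.
    prompts = ["a photo of a wood-frame building",
               "a photo of a reinforced concrete building",
               "a photo of a steel-frame building",
               "a photo of a masonry building"]
    image = Image.open("street_view.jpg")   # hypothetical input image

    # No task-specific training: the pretrained model ranks the prompts.
    inputs = processor(text=prompts, images=image,
                       return_tensors="pt", padding=True)
    with torch.no_grad():
        probs = model(**inputs).logits_per_image.softmax(dim=-1)
    print(dict(zip(prompts, probs[0].tolist())))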
Professor
University of Southern California
Presentation Title: Machine-Learning Surrogates for Second-Order Corrections in Wave Models
Co-Author: Maile McCann
Abstract: Boussinesq-type wave models have the accuracy to resolve wave propagation in coastal zones, capturing nearshore dynamics that include both nonlinear and dispersive effects for relatively short waves. The accuracy of Boussinesq-type models over counterparts that utilize the nonlinear shallow water (NLSW) equations provides a clear advantage in studying nearshore processes. However, the greater computational expense of finding the Boussinesq solution compared with the NLSW solution hinders fast and/or real-time simulation using Boussinesq-type models.
To capture the dispersive effects in the Boussinesq solution while maintaining the computational efficiency of the NLSW solution, we propose a numerical scheme that solves the nonlinear shallow water equations using traditional methods but uses a data-driven machine learning (ML) surrogate to “add” frequency dispersion. The ML component corrects for the difference between the NLSW and Boussinesq solutions, effectively providing a close approximation to the Boussinesq solution while incurring only the computational expense of a nonlinear shallow water solution. This machine learning correction is a two-dimensional neural network trained on the per-time-step differences in the depth-integrated mass fluxes in both the x and y directions, as well as the difference in free surface elevation, between the NLSW and Boussinesq solutions for a variety of wave and bathymetry conditions.
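A minimal PyTorch sketch of such a correction network follows; the channel layout (mass fluxes hu and hv, free surface eta, and still-water depth as inputs) and the CNN architecture are assumptions for illustration, and nlsw_step is a hypothetical placeholder for an existing NLSW solver step:

    import torch
    import torch.nn as nn

    class DispersionCorrector(nn.Module):
        """CNN predicting per-time-step corrections (d_hu, d_hv, d_eta)
        from the NLSW state, trained on Boussinesq-minus-NLSW differences."""
        def __init__(self, width=32):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(4, width, 3, padding=1), nn.ReLU(),
                nn.Conv2d(width, width, 3, padding=1), nn.ReLU(),
                nn.Conv2d(width, 3, 3, padding=1),
            )

        def forward(self, state):  # state: (batch, 4, ny, nx) = (hu, hv, eta, depth)
            return self.net(state)

    # Hybrid time stepping (sketch): advance with the cheap NLSW solver,
    # then "add" dispersion via the learned correction.
    corrector = DispersionCorrector()
    state = torch.randn(1, 4, 64, 64)        # dummy fields
    with torch.no_grad():
        # state = nlsw_step(state)           # hypothetical solver call
        state[:, :3] += corrector(state)     # correct hu, hv, eta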
We plan to extend this concept to include ML-surrogate approximations of complex breaking models, generation of directional input spectra, and various other computationally expensive model components. Ultimately, this would permit large-scale and rapid phase-resolving computation of coastal short waves, as well as a “simple” approach to including frequency dispersion in existing NLSW solvers.
Assistant Professor
University of Notre Dame
Presentation Title: Scientific Machine Learning Enhanced Computational Fluid Dynamics
Abstract: The integration of scientific machine learning (SciML) with computational fluid dynamics (CFD) represents a promising frontier in the modeling and simulation of complex flow systems. Traditional CFD methods, grounded in partial differential equations (PDEs) and numerical discretization, have enjoyed decades of success but face formidable challenges in various practical scenarios, including solving inverse problems, quantifying uncertainty, and modeling systems with incomplete knowledge of their underlying physics. In this talk, we explore the intersection of data science and CFD, highlighting recent advances in SciML and the increasing availability of high-fidelity simulation and measurement data. While state-of-the-art machine learning techniques hold immense potential, they grapple with issues such as the demand for extensive data, generalizability, and interpretability. In response, we present our recent developments in SciML for computational physics. Our approach centers on the integration of PDE operators into neural architectures, enabling the fusion of prior physical knowledge, multi-resolution data, numerical techniques, and neural networks through differentiable programming. By combining the strengths of established physical principles with cutting-edge machine learning, we aim to address the aforementioned challenges in CFD. This framework promises to open a new era of understanding and modeling complex fluid systems, with far-reaching implications for science and engineering applications.
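One widely used instance of fusing PDE operators with neural networks through differentiable programming is a physics-informed residual loss; the minimal PyTorch sketch below computes the residual of the 1D viscous Burgers equation for an arbitrary network (the equation choice, network size, and viscosity are illustrative assumptions, not the talk's specific architecture):

    import torch
    import torch.nn as nn

    # Small MLP u_theta(x, t) approximating the PDE solution.
    model = nn.Sequential(nn.Linear(2, 64), nn.Tanh(),
                          nn.Linear(64, 64), nn.Tanh(),
                          nn.Linear(64, 1))

    def burgers_residual(x, t, nu=0.01):
        """Residual r = u_t + u*u_x - nu*u_xx via automatic differentiation."""
        x, t = x.requires_grad_(True), t.requires_grad_(True)
        u = model(torch.stack([x, t], dim=-1)).squeeze(-1)
        grad = lambda out, var: torch.autograd.grad(
            out, var, torch.ones_like(out), create_graph=True)[0]
        u_t, u_x = grad(u, t), grad(u, x)
        u_xx = grad(u_x, x)
        return u_t + u * u_x - nu * u_xx

    # Penalizing the residual at collocation points embeds the PDE operator
    # in the training loss alongside any data-misfit terms.
    x, t = torch.rand(256), torch.rand(256)
    loss_pde = burgers_residual(x, t).pow(2).mean()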
Associate Professor
University at Buffalo
Presentation Title: Optimizing Post-Hurricane Recovery of Interdependent Infrastructure Systems via Knowledge-Enhanced Deep Reinforcement Learning
Co-Author: Shaopeng Li
Abstract: Hurricanes, with their inherent multi-hazard nature (e.g., strong wind, high surge/waves, and heavy rainfall), can cause great damage to multiple types of infrastructure systems. Rapid functional restoration of hurricane-damaged infrastructure systems requires optimal management of limited repair resources during recovery, and the prioritization of repair tasks can be greatly impacted by interdependencies among these systems. The hurricane damage recovery of interdependent infrastructure systems is essentially a stochastic sequential decision problem and can be effectively formulated as a Markov decision process (MDP). This work approaches the dynamic optimization of the MDP with knowledge-enhanced deep reinforcement learning (RL). Specifically, the recovery policy (i.e., the mapping from the observed system status to the repair resource allocation action) is represented by a deep neural network (DNN), and the optimal weights of the DNN are obtained using RL. Domain knowledge is incorporated into the learning process through knowledge-guided exploration and a knowledge-shaped reward to enhance the training efficiency of the deep RL-based scheme. To demonstrate the performance of the proposed framework, a case study on optimizing the post-hurricane recovery of an interdependent traffic-electric power network is conducted. The simulation results highlight the enhanced training efficiency gained by incorporating domain knowledge into the deep RL algorithm and suggest the advantage of collaborative decision making over independent scheduling.
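The policy representation and the two knowledge hooks can be sketched in PyTorch as follows; the state encoding (per-component functionality in [0, 1]), the heuristic prioritization, and the shaping term are all illustrative assumptions, not the authors' exact formulation:

    import torch
    import torch.nn as nn

    class RecoveryPolicy(nn.Module):
        """DNN mapping observed system status to a distribution over which
        damaged component the repair resources are assigned to next."""
        def __init__(self, n_components, hidden=128):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(n_components, hidden), nn.ReLU(),
                nn.Linear(hidden, hidden), nn.ReLU(),
                nn.Linear(hidden, n_components))

        def forward(self, status):  # status[i] = functionality in [0, 1]
            logits = self.net(status)
            logits = logits.masked_fill(status >= 1.0, float('-inf'))  # skip intact
            return torch.distributions.Categorical(logits=logits)

    def knowledge_guided_action(policy, status, priority, eps=0.2):
        """Knowledge-guided exploration: with probability eps, follow a domain
        heuristic (e.g., repair the component with the most dependents first)
        instead of sampling uniformly at random."""
        if torch.rand(()) < eps:
            damaged = status < 1.0
            neg_inf = torch.full_like(priority, float('-inf'))
            return int(torch.argmax(torch.where(damaged, priority, neg_inf)))
        return int(policy(status).sample())

    def shaped_reward(base_reward, status, next_status, w=0.1):
        """Knowledge-shaped reward: base system-performance reward plus a
        bonus proportional to the functionality restored in this step."""
        return base_reward + w * (next_status - status).clamp(min=0).sum()

Both hooks leave the underlying MDP intact; they only bias exploration and densify the reward signal, which is why they can improve training efficiency without changing the optimal policy the RL algorithm converges to.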