Session Abstracts:
Session 7A
PhD Student
University of California, San Diego
Presentation Title: Hierarchical Bayesian Modeling and Updating Applied to Linear FE Model of the Geisel Library
Co-Author: Aakash Bangalore Satish
Abstract: Hierarchical models provide a means to combine information contained in datasets from multiple experiments repeated under similar conditions. In a hierarchical model, in addition to the assumption of a specific parametric model form to explain the observed physical behavior during an individual experiment, the variability in the observed response across experimental repetitions is modeled by assuming the existence of a parent distribution for the parameters explaining the physical behavior in each experiment. Estimation of the hierarchical model parameters then involves using the multiple experimental datasets to jointly estimate the values of the parameters of the physical model for each experiment and the parameters of the assumed form of the parent distribution (i.e., the hyperparameters). This work showcases a unique application of hierarchical Bayesian updating to a large and complex real-world operational structure using real measured data. The Geisel Library building at UC San Diego has been instrumented with 10 accelerometers on various levels, which measure the ambient vibration response of the structure. The modal properties (mode shapes and natural frequencies) of the structure are estimated using system identification algorithms applied to a subset of the measured accelerations corresponding to the first 10 minutes of every hour over a 100-hour span. The 100 sets of identified modal properties are used as data to infer the parameters of a detailed linear finite element model of the structure, in addition to the parameters of the probability distribution modeling their variability caused by changing ambient environmental conditions.
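To make the hierarchical structure concrete, the sketch below sets up a toy version of such a model in Python, assuming PyMC is available. The physics link (first-mode frequency proportional to the square root of a stiffness scaling parameter), the parameter names, and the synthetic data are illustrative assumptions, not the FE-model-based likelihood used in the actual study.

```python
# Minimal hierarchical Bayesian sketch; values and the physics link are
# hypothetical stand-ins for the study's FE-model-based formulation.
import numpy as np
import pymc as pm

rng = np.random.default_rng(0)
# Synthetic stand-in for 100 hourly identified first-mode frequencies (Hz).
freq_data = rng.normal(2.5, 0.05, size=100)

with pm.Model() as hier_model:
    # Hyperparameters: parent distribution of the per-hour stiffness scaling.
    mu_theta = pm.Normal("mu_theta", mu=1.0, sigma=0.2)
    sigma_theta = pm.HalfNormal("sigma_theta", sigma=0.1)

    # Per-experiment (per-hour) physical parameters drawn from the parent.
    theta = pm.Normal("theta", mu=mu_theta, sigma=sigma_theta, shape=100)

    # Illustrative physics link: frequency scales with sqrt(stiffness);
    # f0 is a nominal model frequency.
    f0 = 2.5
    sigma_obs = pm.HalfNormal("sigma_obs", sigma=0.05)
    pm.Normal("f_obs", mu=f0 * pm.math.sqrt(theta), sigma=sigma_obs,
              observed=freq_data)

    # Jointly samples the per-hour parameters and the hyperparameters.
    trace = pm.sample(1000, tune=1000, chains=2)
```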
PhD Student
University of California, Los Angeles
Presentation Title: Model misspecification in seismic code-prescriptive and risk-based assessments of CA bridges
Co-Author: Henry Burton
Abstract: The number of ground motions used in seismic analysis procedures determines the precision of the resulting parameter estimates. Probabilistic model misspecification has the potential to increase parameter estimation uncertainty and hence requires a greater number of ground motions to achieve a target precision. This study investigates the effect of the number of ground motions on estimation uncertainties in both code-prescriptive and risk-based assessments, with explicit consideration of model misspecification. Specifically, we employ the quasi-maximum likelihood estimation (QMLE) approach, which increases the robustness of the standard error estimates, thereby ensuring asymptotic consistency even if the probabilistic model is misspecified. Illustrative results reveal that misspecification errors are detected in the dispersion estimates of a critical bridge engineering demand parameter in the code-prescriptive assessment. In the risk-based assessment, model misspecification is detected in 30% of the considered cases. In the most extreme case, misspecification increases the estimation uncertainty of the mean annual frequency by as much as a factor of three, which substantially increases the required number of ground motions. Based on these findings, we advocate for the use of QMLE as a tool to detect and rectify the implications of model misspecification for estimation uncertainty in probabilistic seismic assessments.
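As a concrete illustration of the QMLE idea, the sketch below fits a lognormal working model to a synthetic engineering demand parameter (EDP) sample and computes robust "sandwich" standard errors, which remain valid even when the working model is misspecified. The data, sample size, and working model are assumptions for illustration, not the study's bridge analyses.

```python
# Hedged QMLE sketch with robust "sandwich" standard errors, assuming a
# lognormal working model for a bridge EDP; data are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(1)
edp = rng.lognormal(mean=-1.0, sigma=0.4, size=40)   # stand-in: n ground motions
x = np.log(edp)
n = len(x)

# MLE under the (possibly misspecified) normal model for log-EDP.
mu, sig = x.mean(), x.std()

# Per-observation scores of the log-likelihood, evaluated at the MLE.
s_mu = (x - mu) / sig**2
s_sig = -1.0 / sig + (x - mu) ** 2 / sig**3
scores = np.column_stack([s_mu, s_sig])

# A: average observed Hessian; B: average outer product of scores.
A = np.array([[-1.0 / sig**2, -2.0 * np.mean(x - mu) / sig**3],
              [-2.0 * np.mean(x - mu) / sig**3,
               1.0 / sig**2 - 3.0 * np.mean((x - mu) ** 2) / sig**4]])
B = scores.T @ scores / n

# Robust (sandwich) covariance; reduces to -inv(A)/n if the model is correct.
V_robust = np.linalg.inv(A) @ B @ np.linalg.inv(A) / n
print("robust SEs for (mu, sigma):", np.sqrt(np.diag(V_robust)))
```

Comparing the sandwich standard errors against the naive model-based ones is one simple way to flag misspecification in the dispersion estimate.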
PhD Student
University of California, San Diego
Presentation Title: DesignSafe Machine Learning Example Case for Regression Analysis
Co-Author: Gilberto Mosqueda
Abstract: A series of Jupyter Notebooks harnessing machine learning techniques for post-processing shake table data has been developed within DesignSafe. This suite of notebooks progressively employs increasingly intricate algorithms to establish the linkage between input signal sets and predictions of measured forces. The Notebooks are written in a modular fashion to be adaptable by users exploring machine learning applications in DesignSafe. The algorithms encompass linear regression, a Deep Neural Network (DNN), and a Physics-Informed Neural Network (PINN) for regression analysis. The linear regression algorithm is accompanied by toolsets that enable users to input data, construct desired relationships, and select optimal input sets based on mean squared error analysis against testing data. For the DNN algorithm, tools are provided that facilitate the visualization of network architecture and a systematic approach for refining hyperparameters. In the case of the PINN algorithm, the inclusion of physical constraints within the loss function enhances generalization and data-fitting reliability over the DNN. While both the linear regression and PINN models are underpinned by physics principles, they necessitate substantial adjustments when applied to different datasets. Nevertheless, the overarching training methodology remains transferable. The DNN algorithm, on the other hand, exhibits adaptability in generating effective solutions across diverse input-output regression scenarios, contingent on meticulous parameter tuning, albeit with some trade-offs in model transparency. The developed notebooks stand as notable exemplars for constructing highly accurate models while making effective use of DesignSafe resources.
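The fragment below sketches the flavor of the linear-regression step described above: fit candidate input subsets and keep the one with the lowest mean squared error on held-out test data. The channel names and data are placeholders, not the actual DesignSafe notebook code.

```python
# Minimal sketch of MSE-based input-set selection for linear regression;
# the candidate signals and target are hypothetical stand-ins.
from itertools import combinations

import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
# Placeholder shake-table channels (columns) and a measured force target.
X = rng.normal(size=(1000, 4))
y = 3.0 * X[:, 0] - 1.5 * X[:, 2] + 0.1 * rng.normal(size=1000)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Exhaustively score every input subset against the testing data.
best = None
for k in range(1, X.shape[1] + 1):
    for cols in combinations(range(X.shape[1]), k):
        model = LinearRegression().fit(X_tr[:, list(cols)], y_tr)
        mse = mean_squared_error(y_te, model.predict(X_te[:, list(cols)]))
        if best is None or mse < best[0]:
            best = (mse, cols)

print(f"best input set {best[1]} with test MSE {best[0]:.4f}")
```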
PhD Student
University of California, Berkeley
Presentation Title: A surrogate model for the prediction of the hysteresis behavior of reinforced concrete columns
Co-Author: Matthew DeJong
Abstract: A detailed understanding of the consequences of earthquakes in urban areas is needed for the definition of effective mitigation and recovery strategies. Relevant information on regional damage and losses can be obtained from high-resolution regional simulations. This approach employs physics-based models (commonly finite elements) to represent the behavior of each asset in a region. In earthquake simulations, each asset is subjected to multiple ground motions, and damage and losses are estimated following the Performance-Based Engineering methodology. Setting up all these finite element models requires both detailed information on each asset, which is often not fully available, and expert judgement for model development and calibration. This paper presents an alternative procedure for predicting the response of bridge piers. The approach combines a surrogate model with a modified Bouc-Wen model, utilizing data from an experimental reinforced concrete (RC) column database. Model class selection is performed on a set of 400 experimental hysteresis curves, covering multiple configurations, to define the best-fit Bouc-Wen parameters. A Gaussian Process surrogate model is developed using SimCenter’s quoFEM program to map a set of nondimensional physical parameters from the experimental tests into the parameter space of the Bouc-Wen model, providing a quick tool for predicting the hysteresis behavior of any RC column. The model is integrated into SimCenter’s Regional Resilience Determination tool. The results show that the model can predict the hysteresis behavior of columns outside the training data and can accurately represent the main behavioral components of the bridges under analysis.
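A minimal sketch of the surrogate step follows: a Gaussian process maps nondimensional column descriptors to fitted Bouc-Wen parameters. The input descriptors, output parameters, and data below are hypothetical placeholders (the paper builds the surrogate in quoFEM from the 400-test database).

```python
# Hedged sketch of the GP surrogate idea; all inputs/outputs are synthetic
# stand-ins for the nondimensional descriptors and best-fit Bouc-Wen parameters.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

rng = np.random.default_rng(3)
# Hypothetical inputs: axial load ratio, aspect ratio, reinforcement ratio.
X = rng.uniform([0.0, 2.0, 0.005], [0.3, 8.0, 0.04], size=(400, 3))
# Hypothetical outputs: two stand-in Bouc-Wen parameters per test.
Y = np.column_stack([0.1 + 0.5 * X[:, 0] + 0.02 * X[:, 1],
                     0.5 + 2.0 * X[:, 2]]) + 0.01 * rng.normal(size=(400, 2))

kernel = ConstantKernel(1.0) * RBF(length_scale=[0.1, 1.0, 0.01])
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, Y)

# Predict Bouc-Wen parameters (with uncertainty) for a new column configuration.
x_new = np.array([[0.15, 5.0, 0.02]])
mean, std = gp.predict(x_new, return_std=True)
print("predicted Bouc-Wen params:", mean, "+/-", std)
```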
Senior Developer, NHERI SimCenter
University of California, Berkeley
Presentation Title: Gaussian Process Surrogate-Aided Efficient Bayesian Posterior Sampling
Co-Authors: Sang-ri Yi; Joel P. Conte; Alexandros Taflanidis
Abstract: Bayesian inference using computational models of engineering systems entails sampling from the posterior probability distribution of the model parameters. However, evaluating the model a large number of times to draw samples can be computationally prohibitive. To overcome this, a Gaussian process-based approach is proposed to efficiently approximate the posterior probability density. The approach builds the approximation iteratively, assessing convergence at the end of each iteration using a weighted normalized root mean squared error. If convergence has not been achieved, new experiments are designed sequentially using a weighted integrated mean squared error criterion. The weights in the criterion are a function of the target posterior density; this choice of weights improves the accuracy and precision of the approximation in important regions of the parameter space while also exploring the parameter space adequately. The approximation is refined using the data from each new batch of sequentially designed experiments until the termination criteria for convergence to the target density are met. The method is demonstrated in an example application of updating the probability distribution of parameters governing the nonlinear response of a steel frame.
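The sketch below illustrates the iterative loop on a one-dimensional toy problem: fit a GP to the log-posterior, check a posterior-weighted normalized error, and add a design point where the weighted predictive variance is largest. The weighting and stopping rule here are simplified stand-ins for the paper's weighted IMSE criterion and convergence metric, not the actual algorithm.

```python
# Toy GP-aided posterior approximation loop; the design criterion and
# convergence check are simplified assumptions, not the paper's exact method.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def log_post(theta):                      # stand-in for an expensive model run
    return -0.5 * ((theta - 1.2) / 0.3) ** 2

theta_train = np.linspace(-2, 4, 5)
for it in range(10):
    y_train = log_post(theta_train)
    gp = GaussianProcessRegressor(kernel=RBF(1.0), normalize_y=True)
    gp.fit(theta_train[:, None], y_train)

    grid = np.linspace(-2, 4, 400)
    mean, std = gp.predict(grid[:, None], return_std=True)
    w = np.exp(mean - mean.max())         # weight errors by the approximate target
    nrmse = np.sqrt(np.mean(w * std**2)) / (y_train.max() - y_train.min())
    if nrmse < 1e-3:                      # convergence check each iteration
        break

    # Sequential design: add the point with largest posterior-weighted variance.
    theta_train = np.append(theta_train, grid[np.argmax(w * std**2)])

# The converged surrogate replaces the expensive model for posterior sampling.
approx_log_post = gp.predict(grid[:, None])
```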
Postdoctoral Scholar
University of Illinois at Urbana-Champaign
Presentation Title: Data-Physics Coupling Driven Multi-Scale Response Simulation Method for Shear Wall Structures
Co-Authors: Guanghao Zhai; Jinhui Yan
Abstract: Multi-scale structural simulation underpins many structural analysis and assessment tasks. Despite advances in modeling component behavior, high-fidelity finite element simulation remains challenging and is limited by the computational efficiency of large-scale refined models. Purely data-driven methods, however, generalize poorly, require large training datasets, and can violate physical laws. To combine the advantages of these two categories and pursue accurate and efficient multi-scale simulation, this study proposes a data-physics coupling driven multi-scale response simulation method for shear wall structures. A simplified OpenSees model is established to represent the overall behavior of the shear wall structure, and neural network-based surrogate models are designed and trained to reproduce the refined micro-scale behaviors of shear wall components (such as stress, strain, and crack maps). Physics rules are embedded into the neural networks to guarantee reliability. Case studies demonstrate that the proposed data-physics coupling driven simulation method can accurately and efficiently fulfill multi-scale simulation tasks, enabling simulations at larger scales or with more refined detail.
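As a generic illustration of embedding a physics rule into a surrogate's loss, the PyTorch sketch below adds a penalty on the gradient of the predicted field to a standard data-fit loss. The network, data, and penalty term are illustrative assumptions, not the authors' architecture or physics constraints.

```python
# Generic data-plus-physics loss pattern; the specific penalty here is a
# placeholder for the physics rules embedded in the paper's surrogates.
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(2, 64), nn.Tanh(), nn.Linear(64, 1))

# Placeholder pairs: macro-scale inputs -> micro-scale component response.
x = torch.rand(256, 2, requires_grad=True)
y = torch.rand(256, 1)

opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for epoch in range(200):
    pred = net(x)
    data_loss = nn.functional.mse_loss(pred, y)

    # Physics-style penalty: a residual on the gradient of the predicted
    # field (illustrative constraint only; real rules would encode, e.g.,
    # equilibrium or compatibility of the stress/strain fields).
    grads = torch.autograd.grad(pred.sum(), x, create_graph=True)[0]
    physics_loss = (grads ** 2).mean()

    loss = data_loss + 0.1 * physics_loss   # weighted data-physics coupling
    opt.zero_grad()
    loss.backward()
    opt.step()
```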