Earthquake Science Center Seminars

148 Episodes
Gaspard Farge, University of California, Santa Cruz
Tectonic tremor tracks the repeated slow rupture of certain major plate boundary faults. One of the most perplexing aspects of tremor activity is that some fault segments produce strongly periodic, spatially extensive tremor episodes, while others have more disorganized, asynchronous activity. Here we measure the size of segments that activate synchronously during tremor episodes and its relationship to regional earthquake rate on major plate boundaries. Tremor synchronization in space seems to be limited by the activity of small, nearby crustal and intraslab earthquakes. This observation can be explained by a competition between the self-synchronization of fault segments and perturbation by regional earthquakes. Our results imply previously unrecognized interactions across subduction systems, in which earthquake activity far from the fault influences whether it breaks in small or large segments.
Tina Dura, Virginia Tech
Climate-driven sea-level rise is increasing flood risks worldwide, but sudden land subsidence from great (>M8) earthquakes remains an overlooked factor. Along the Washington, Oregon, and northern California coasts, the next Cascadia subduction zone (CSZ) earthquake could cause 0.5-2 m of rapid subsidence, dramatically expanding floodplains and exposing communities to heightened flooding hazards.
This talk explores the coastal geologic methods used to estimate coseismic subsidence along the CSZ, and then quantifies potential floodplain expansion across 24 Cascadia estuaries under low (~0.5 m), medium (~1 m), and high (~2 m) earthquake-driven subsidence scenarios—both today and by 2100, when compounded by climate-driven sea-level rise. We will also explore the implications for residents, infrastructure, and decision-makers preparing for the intersection of seismic and climate hazards.
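As a rough illustration of the floodplain-expansion calculation described above, the sketch below counts grid cells that fall below a given flood elevation after subtracting a coseismic subsidence scenario. The DEM, flood level, and function names are hypothetical placeholders, not the study's actual workflow.

```python
import numpy as np

def flooded_area_km2(dem_m, flood_level_m, subsidence_m, cell_size_m=10.0):
    """Count cells that fall below a flood elevation after coseismic subsidence.

    dem_m        : 2D array of land-surface elevations (m, relative to a tidal datum)
    flood_level_m: elevation of the flood surface (e.g., 1% annual-chance water level)
    subsidence_m : uniform or gridded coseismic subsidence to subtract from the DEM
    """
    post_event = dem_m - subsidence_m          # land drops, water level stays
    flooded = post_event < flood_level_m       # cells now below the flood surface
    return flooded.sum() * (cell_size_m ** 2) / 1e6

# Toy example: a gently sloping coastal plain under 0.5 / 1 / 2 m subsidence scenarios
dem = np.tile(np.linspace(0.0, 5.0, 500), (500, 1))   # hypothetical DEM
for s in (0.5, 1.0, 2.0):
    print(s, "m subsidence ->", flooded_area_km2(dem, flood_level_m=1.5, subsidence_m=s), "km^2")
```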
Rick Aster, Colorado State University
The long-period seismic background microseism wavefield is a globally visible signal that is generated by the incessant forces of ocean waves upon the solid Earth and is excited via two distinct source processes. Extensive continuous digital seismic data archives enable the analysis of this signal across nearly four decades to assess trends and other features in global ocean wave energy. This seminar considers primary and secondary microseism intensity at periods between 4 and 20 s from 1988 through late 2024, using 73 stations spanning 82.5 deg. N to 89.9 deg. S latitude with >20 years of data and >75% data completeness from the NSF/USGS Global Seismographic Network, GEOSCOPE, and New China Digital networks. The primary microseism wavefield is excited at ocean wave periods through seafloor tractions induced by the dynamic pressures of traveling waves where bathymetric depths are less than about 300 m. The much stronger secondary wavefield is excited at half the ocean wave period through seafloor pressure variations generated by crossing seas. It is not restricted to shallower depths but is sensitive to acoustic resonance periods in the ocean water column. Acceleration power spectral densities are estimated using 50%-overlapping, 1-hr moving windows and are integrated in 2-s-wide period bands to produce band-passed seismic amplitude and energy time series. Nonphysical outliers, earthquake signals, and Fourier series seasonal variations (with a fundamental period of 365.2422 d) are removed. Secular period-dependent trends are then estimated using L1-norm residual-minimizing regression. Increasing microseism amplitude is observed across most of the Earth for both the primary and secondary microseism bands, with average median-normalized trends of +0.15 and +0.10 %/yr, respectively. Primary and secondary band microseism secular change rates relative to station medians correlate across global seismic stations at R=0.65 and have a regression slope of 1.04, with secondary trends being systematically lower by about 0.05 %/yr. Multiyear and geographically extensive seismic intensity variations show globally observable interannual climate index (e.g., El Niño–Southern Oscillation) influence on large-scale storm and ocean wave energy. Microseism intensity histories in 2-s period bands exhibit regional to global correlations that reflect ocean-basin-scale teleconnected ocean swell, long-range Rayleigh wave propagation, and the large-scale reach of climate variation. Global secular intensity increases in recent decades occur across the entire 4-20 s microseism band, with progressively greater intensification at longer periods, consistent with more frequent large-scale storm systems that generate ocean swell at the longest periods.
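A minimal sketch of the trend-estimation step is shown below, using a hypothetical band-limited amplitude series. In the actual analysis, seasonal terms are removed by fitting Fourier series with a 365.2422 d fundamental period and the L1 fit is applied to decades of data per station; the synthetic series and numbers here are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

def l1_trend_pct_per_yr(t_years, amplitude):
    """Fit amplitude ~ a + b*t by minimizing the sum of absolute residuals (L1 norm),
    then express the slope as a percentage of the median amplitude per year."""
    def cost(p):
        a, b = p
        return np.abs(amplitude - (a + b * t_years)).sum()
    b0, a0 = np.polyfit(t_years, amplitude, 1)        # least-squares starting point
    a, b = minimize(cost, x0=[a0, b0], method="Nelder-Mead").x
    return 100.0 * b / np.median(amplitude)

# Hypothetical monthly band-limited (e.g., 14-16 s) microseism amplitude series
t = np.arange(0, 36.0, 1 / 12.0)                      # years since 1988
seasonal = 0.2 * np.sin(2 * np.pi * t)                # annual cycle to be removed
amp = 1.0 + 0.0015 * t + seasonal + 0.05 * np.random.randn(t.size)
amp_deseasoned = amp - seasonal                       # in practice: fit & subtract Fourier terms
print(round(l1_trend_pct_per_yr(t, amp_deseasoned), 3), "%/yr")
```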
Chris Johnson, Los Alamos National Lab
Significant progress has been made in probing the state of an earthquake fault by applying machine learning to continuous seismic waveforms. The breakthroughs were originally obtained from laboratory shear experiments and numerical simulations of fault shear, then successfully extended to slow-slipping faults. Applying these machine learning models typically requires task-specific labeled data for training and tuning for experimental results or a region of interest, thus limiting generalization and robustness when they are broadly applied. Foundation models diverge from labeled-data training procedures and are widely used in natural language processing and computer vision. The primary difference is that these models learn a generalized representation of the data, allowing several downstream tasks to be performed in a unified framework. Here we apply the Wav2Vec 2.0 self-supervised framework for automatic speech recognition to continuous seismic signals emanating from a sequence of moderate magnitude earthquakes during the 2018 caldera collapse at Kilauea volcano on the island of Hawai'i. We pre-train the Wav2Vec 2.0 model using caldera seismic waveforms and augment the model architecture to predict contemporaneous surface displacement during the caldera collapse sequence, a proxy for fault displacement. We find the model's displacement predictions to be excellent. We then adapt the model for near-future prediction and find hints of predictive capability, but the results are not robust. The results demonstrate that earthquake faults emit seismic signatures in a similar manner to laboratory and numerically simulated faults, and that artificial intelligence models developed for encoding speech audio may have important applications in studying active fault zones.
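As a hedged sketch of the general idea (not the authors' exact architecture), the snippet below wraps the Hugging Face transformers implementation of Wav2Vec 2.0 with an illustrative mean-pooling regression head that maps a waveform window to a scalar displacement proxy. The configuration values and the head design are assumptions; in practice the encoder would first be pre-trained self-supervised on caldera waveforms.

```python
import torch
import torch.nn as nn
from transformers import Wav2Vec2Config, Wav2Vec2Model

class Wav2VecDisplacementRegressor(nn.Module):
    """Wav2Vec 2.0 encoder plus a small head that maps pooled features to a
    scalar surface-displacement estimate for the input waveform window."""
    def __init__(self, config: Wav2Vec2Config):
        super().__init__()
        self.encoder = Wav2Vec2Model(config)          # pre-trained weights would be loaded here
        self.head = nn.Sequential(
            nn.Linear(config.hidden_size, 128), nn.GELU(), nn.Linear(128, 1)
        )

    def forward(self, waveform):                       # waveform: (batch, samples)
        hidden = self.encoder(waveform).last_hidden_state   # (batch, frames, hidden)
        pooled = hidden.mean(dim=1)                    # average over time frames
        return self.head(pooled).squeeze(-1)           # (batch,) displacement proxy

config = Wav2Vec2Config(hidden_size=256, num_hidden_layers=4,
                        num_attention_heads=4, intermediate_size=1024)
model = Wav2VecDisplacementRegressor(config)
window = torch.randn(2, 16000)                         # two example waveform windows
print(model(window).shape)                             # torch.Size([2])
```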
Betsy Madden, San Jose State University
Seismic hazard assessments currently depend on fault slip rates, the rate at which offset accumulates over many earthquakes along individual faults, to determine the probability of earthquakes of a certain magnitude over a certain time period and potential ground motions. Geologic fault slip rates are estimated by a combination of field and laboratory techniques. Such data can be generated synthetically with mechanical models that capture slip rate variations along complex, three-dimensional fault networks. I will discuss opportunities provided by these synthetic data, as well as integration of the results with dynamic rupture models of individual earthquakes.
James Atterholt, USGS
Observations of broad-scale lithospheric structure and large earthquakes are often made with sparse measurements and are low in resolution. This makes interpretations of the processes that shape the lithosphere fuzzy and nonunique. Distributed Acoustic Sensing (DAS) is an emergent technique that transforms fiber-optic cables into ultra-dense arrays of strainmeters, yielding meter-scale resolution over tens of kilometers for long recording periods. Recently, new techniques have made it possible to probe fiber-measured earthquake wavefields for signatures of large-scale deformation and dynamic behavior. With fibers in the Eastern California Shear Zone and near the Mendocino Triple Junction, I use DAS arrays to measure a diversity of tectonic-scale phenomena. These include the length scale over which the Garlock Fault penetrates the mantle, the plumbing system of the Coso Volcanic Field at the crust-mantle boundary, the topographic roughness of the Cascadia Megathrust, and the time-dependent rupture velocity of the 2024 M7 Cape Mendocino earthquake. Dense measurements vastly improve the clarity with which we can view these processes, offering new insights into how the lithosphere evolves and what drives the behavior of large earthquakes.
Doron Morad, University of California, Santa Cruz
In natural fault surfaces, stresses are not evenly distributed due to variations in the contact population within the medium, causing frictional variations that are not easy to anticipate. These variations are crucial for understanding the kinematics and dynamics of frictional motion and can be attributed to both the intact material and granular media accommodating the principal slip zone. Here, I explore the effects of heterogeneous frictional environments using two different approaches: fracture dynamics on non-mobilized surfaces and granular systems on mobilized ones.
First, I will present a quantitative analysis of laboratory earthquakes on heterogeneous surfaces, combining laboratory-scale seismic measurements with high-speed imaging of the controlled dynamic ruptures that generated them. We generated variations in rupture properties by imposing sequences of controlled artificial barriers along the laboratory fault. We first demonstrate that direct measurements of imaged slip events correspond to established seismic analysis of acoustic signals; the seismograms correctly record the rupture moments and maximum moment rates. We then investigate the ruptures' early growth by comparing their measured seismogram velocities to their final size, and examine the laboratory conditions that allow final size to be predicted during early rupture growth. Owing to the higher initial elastic energies imposed prior to nucleation, larger events accelerate more rapidly at rupture onset for both heterogeneous and non-heterogeneous surfaces.
Second, I present a new Couette-style deformation cell designed to study stress localization in two-dimensional granular media under different flow regimes. This apparatus enables arbitrarily large deformations and spans four orders of magnitude in driving velocity, from sub-millimeter to meters per second. Using photoelasticity, we measure force distribution and localization within the granular medium. High-speed imaging captures data from a representative patch, including both lower and upper boundaries, allowing us to characterize local variations in stress and velocity. For the first time, we present experimental results demonstrating predictive local granular behavior based on particle velocities, velocity fluctuations, and friction, defined as the ratio of shear to normal stress (τ/σ_n). Our findings also reveal that stress patterns in the granular medium are velocity-dependent, with higher driving velocities leading to increased stress localization.
These two end-member cases of frictional sliding, one dominated by gouge, and the second by intact surfaces, highlight two fundamental aspects of friction dynamics. The spatial distribution of heterogeneity directly influences stress distribution and, consequently, the stability of the medium. With these experimental methods, we can now measure and even control these effects.
Cassie Hanagan, USGS
Advancing our understanding of earthquake processes inevitably pushes the bounds of data resolution in the spatial and temporal domains. This talk will step through a series of examples leveraging two relatively niche geodetic datasets for understanding portions of the earthquake cycle: (1) temporally dense and sensitive borehole strainmeter (BSM) data, and (2) spatially dense sub-pixel image correlation displacement data. More specifically, I will detail gap-filling benefits of these two datasets for different earthquakes.
BSMs respond to a frequency band of deformation that bridges the capabilities of more common GNSS stations and seismometers. As such, they are typically installed to capture deformation signals such as slow slip or transient creep. In practice they are also useful for measuring dynamic and static coseismic strains. This portion of the talk will focus on enhanced network capabilities for detecting both coseismic and postseismic deformation with a relatively new BSM array in the extensional Apennines of Italy, for events ranging from tens to thousands of km away. Then, we will transition to how these instruments can constrain spatiotemporally variable afterslip following the 2019 Mw7.1 Ridgecrest, California earthquake.
High spatial resolution displacements from sub-pixel image correlation serve as gap-filling datasets in another way – providing higher spatial resolution (~0.5 m) maps of the displacement fields than any other method to date, and patching areas where other methods fail to capture the full deformation magnitude or extent, such as where InSAR decorrelates. This portion of the talk will focus on new results that define expected displacement detection thresholds from high-resolution satellite optical imagery and, alternatively, from repeat lidar data. Examples will include synthetic and real case studies of discrete and diffuse deformation from earthquakes and fault creep.
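The sub-pixel correlation step can be illustrated with scikit-image's Fourier-upsampled phase cross-correlation; the image chips, imposed offset, and pixel size below are hypothetical stand-ins for co-registered pre- and post-event imagery.

```python
import numpy as np
from scipy.ndimage import shift as subpixel_shift
from skimage.registration import phase_cross_correlation

# Hypothetical pre- and post-event image chips (e.g., clipped from orthorectified optical scenes)
rng = np.random.default_rng(0)
pre = rng.random((128, 128))
post = subpixel_shift(pre, shift=(0.4, -1.3))   # impose a known sub-pixel offset

# Estimate the offset to 1/100 pixel by upsampled cross-correlation in the Fourier domain
offset, error, _ = phase_cross_correlation(pre, post, upsample_factor=100)
print(offset)   # ~[-0.4, 1.3] pixels; multiply by pixel size (e.g., 0.5 m) for displacement
```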
Evan Hirakawa, USGS
Northern California, specifically the San Francisco Bay Area, is a great place to study earthquake hazards and risk, due to its dense population centers surrounded by active faults, as well as complex geology that strongly influences earthquake ground motions. Computer simulations of seismic wave propagation, which can incorporate 3D models of subsurface properties and complex faulting behavior, are good tools for studying seismic hazard, but they require further development before their full potential can be realized; specifically, the 3D seismic velocity models need to be further developed in many places, and the simulated motions need to be validated with real, recorded data.
In this talk, I will summarize a few different research projects on these topics. First I will review recent efforts to improve the USGS San Francisco Bay region 3D seismic velocity model (SFCVM), the leading community velocity model in the area, and describe some of its interesting features. This will be followed by a preview of ongoing work from collaborators and some other promising avenues to explore, in hopes of further improving the model and stoking more community involvement. In the second part of the talk, I will switch gears and move farther north, to the Humboldt County area, where a recent M7 earthquake occurred offshore. I will show some preliminary modeling results, discuss the datasets available from this event, and describe some of the local geology and efforts to better understand subsurface structure.
Omar Issa, ResiQuant (Co-Founder)/Stanford University
A study by FEMA suggests that 20-40% of modern code-conforming buildings would be unfit for re-occupancy following a major earthquake (taking months or years to repair) and 15-20% would be rendered irreparable. The increasing human and economic exposure in seismically active regions emphasizes the urgent need to bridge the gap between national seismic design provisions (which do not consider time to recovery) and community resilience goals.
Recovery-based design has emerged as a new paradigm to address this gap by explicitly designing buildings to regain their basic intended functions within an acceptable time following an earthquake.
This shift is driven by the recognition that minimizing downtime is critical for supporting community resilience and reducing the socioeconomic impacts of earthquakes. This seminar presents engineering modeling frameworks and methods to support scalable assessment and optimization of recovery-based design, including:
1. Procedures for selecting and evaluating recovery-based performance objectives and for studying the efficacy of user-defined checking procedures.
2. A framework to rapidly optimize recovery-based design strategies based on user-defined performance objectives.
3. Building technology to support utilization of these approaches across geographies and industrial verticals.
Together, these contributions provide the technical underpinnings and industry-facing data requirements to perform broad, national-scale benefit-cost analysis (BCA) studies that can accelerate decision-making and engineering intuition as resilient design progresses in the coming years.
Martijn van den Ende, Université Côte d'Azur
For several years it has been suggested that Distributed Acoustic Sensing (DAS) could be a convenient, low-cost solution for Earthquake Early Warning (EEW). Several studies have investigated the potential of DAS in this context and demonstrated their methods using small local earthquakes. Unfortunately, DAS has a finite dynamic range that is easily exceeded in the near field of large earthquakes, which severely hampers any EEW effort. In this talk, I will present a detailed analysis of this dynamic range limitation and how it impacts EEW: where does it come from? What can we do when the dynamic range is exceeded? And is there still hope for DAS-based EEW systems?
Sara Beth Cebry, USGS
Fluid injection decreases effective normal stress on faults and can stimulate seismicity far from active tectonic regions. Based on earthquake nucleation models and measured stress levels, slip should be stable, aseismic, and limited to the fluid-pressurized region, contrary to observed increases in seismicity. To understand how fluid injection affects earthquake initiation, rupture, and termination, I used large-scale laboratory faults to experimentally link the effects of direct fluid injection to rupture behavior.
Comparison between the nucleation of dynamic events with and without fluid pressure showed that rapid fluid injection into a low permeability fault increases multi-scale stress/strength heterogeneities that can initiate seismic slip. Factors that increase the intensity of the heterogeneity, such as increased injection rate or background normal stress, promote the initiation of small seismic events that have the potential to “run away” and propagate beyond the fluid pressurized region.
Whether or not the seismic slip can “run away” depends on the background shear stress levels. When the fault was near critically stressed, dynamic slip initiated quickly after high fluid pressure levels were reached. The dynamic slip event propagated far beyond the fluid pressurized region. In comparison, when the fault was far from critically stressed, dynamic slip initiated hundreds of seconds after high injection pressures were reached and this event was limited in size by the region affected by fluid pressure.
We conclude that localized decreases in effective normal stress due to fluid pressure can initiate slip, sometimes seismic slip, but the background shear stress controls whether or not that slip grows into a large earthquake.
John Rekoske, University of California San Diego
Rapidly estimating the ground shaking produced by earthquakes in real time, and estimating shaking from potential future earthquakes, are important challenges in seismology. Numerical simulations of seismic wave propagation can be used to estimate ground motion; however, they require large amounts of computing power and are too slow for real-time problems, even with modern supercomputers. Our aim is to develop a method using both high-performance computing and machine learning techniques to obtain a close approximation of simulated seismic wavefields that can be evaluated rapidly. This approach integrates physics into the source- and site-specific ground motion estimates used for real-time applications (e.g., earthquake early warning) as well as many-source problems (e.g., probabilistic seismic hazard analysis). Specifically, I will focus this talk on data-driven reduced-order models (ROMs) based on the interpolated proper orthogonal decomposition method. I will discuss our work using ROMs to (1) instantaneously generate peak ground velocity maps and (2) rapidly generate three-component velocity seismograms for earthquakes in the greater Los Angeles area. The approach is flexible in that it can generate 3D elastodynamic Green's functions, which we can use to simulate seismograms for complex kinematic earthquake rupture models. Lastly, I will show how this approach can provide accurate, near-real-time wavefields that could be used to rapidly inform assessments of possible earthquake damage.
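A minimal sketch of an interpolated-POD reduced-order model is shown below, with random stand-ins for the simulation snapshots and source parameters; the mode count and the interpolator choice (radial basis functions) are illustrative assumptions rather than the study's actual configuration.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# Hypothetical snapshot matrix: each column is a flattened PGV map from one simulated source
n_sources, n_pixels = 200, 4096
source_params = np.random.rand(n_sources, 3)             # e.g., normalized (lon, lat, depth)
snapshots = np.random.rand(n_pixels, n_sources)           # stand-in for simulation output

# 1. Proper orthogonal decomposition: truncated SVD of the snapshot matrix
U, s, Vt = np.linalg.svd(snapshots, full_matrices=False)
r = 20                                                     # number of retained modes
basis = U[:, :r]                                           # spatial POD modes
coeffs = (np.diag(s[:r]) @ Vt[:r]).T                       # (n_sources, r) modal coefficients

# 2. Interpolate the modal coefficients over source parameters
interpolator = RBFInterpolator(source_params, coeffs)

# 3. Rapid evaluation for a new source: interpolate coefficients, expand on the basis
new_source = np.array([[0.3, 0.7, 0.5]])
pgv_map = basis @ interpolator(new_source)[0]              # reconstructed PGV map
print(pgv_map.shape)                                       # (4096,)
```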
Haiyang Kehoe, USGS
Seismograms contain information about an earthquake's source, its path through the Earth, and the local geologic conditions near a recording site. Ground shaking felt at Earth's surface is modified by each of these contributions: the spatiotemporal evolution of rupture, three-dimensional subsurface structure, and site conditions all have a substantial impact on the hazards experienced by exposed populations. In this talk, I highlight three studies that have improved our understanding of ground motion variability arising from source, path, and site effects. First, I describe the rupture process of the 2017 Mw 7.7 Komandorsky Islands earthquake, which reached supershear speeds following a rupture jump across a fault stepover, and demonstrate the enhanced hazard associated with supershear ruptures across Earth's complex transform fault boundaries. Second, I compare high-frequency wavefield simulations of Cascadia earthquakes using various tomography models of the Puget Sound region, Washington State, to highlight the role of basin structure in ground motion amplification. Third, I show horizontal-to-vertical spectral ratio maps of the continental United States and emphasize the continued importance of region-specific constraints on site characterization. While each study demonstrates progress toward understanding the individual roles of source, path, and site effects on damaging earthquake ground motions, together they underscore distinct challenges for improving seismic hazard models and their uncertainties.
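For the site-characterization piece, a bare-bones horizontal-to-vertical spectral ratio computation might look like the sketch below (quadratic-mean horizontal, no smoothing or window averaging, synthetic noise input); the actual maps involve much more careful windowing and stacking.

```python
import numpy as np

def hvsr(z, n, e, dt, nfft=None):
    """Horizontal-to-vertical spectral ratio from three-component records.

    z, n, e : vertical and horizontal component time series (same length, detrended)
    dt      : sample interval in seconds
    """
    nfft = nfft or len(z)
    freqs = np.fft.rfftfreq(nfft, dt)
    Z = np.abs(np.fft.rfft(z, nfft))
    H = np.sqrt(0.5 * (np.abs(np.fft.rfft(n, nfft))**2 + np.abs(np.fft.rfft(e, nfft))**2))
    return freqs, H / np.maximum(Z, 1e-12)

# Hypothetical 100-s noise window sampled at 100 Hz
dt, npts = 0.01, 10000
rng = np.random.default_rng(1)
f, ratio = hvsr(rng.standard_normal(npts), rng.standard_normal(npts),
                rng.standard_normal(npts), dt)
peak_f = f[1:][np.argmax(ratio[1:])]      # frequency of the HVSR peak (site resonance proxy)
```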
Tara Nye, USGS
Models of earthquake ground motion (both simulations and ground-motion models) can be likened to a puzzle with three primary pieces representing the earthquake source, site conditions, and source-to-site path. Early versions of these models were developed using average behavior of earthquakes across a variety of regions and tectonic environments. Although informative, such models do not capture the unique source, path, and site effects that are expected to have a significant influence on resulting ground motion. This talk highlights several approaches for improving modeling of ground motion by focusing efforts on the different pieces of the ground-motion puzzle. Segments of the talk include (1) constraining rupture parameters of rare tsunami earthquakes, (2) estimating site-specific high-frequency attenuation in the San Francisco Bay Area, and (3) investigating relationships between path effects and crustal properties in the San Francisco Bay Area. With continued refinement to models of ground motion, we can improve confidence and reduce uncertainty in seismic hazard and risk assessments.
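The site-specific high-frequency attenuation mentioned above can be illustrated with a standard spectral-decay (kappa) measurement; the band limits and the synthetic spectrum below are assumptions for illustration, not the study's actual processing.

```python
import numpy as np

def kappa_from_spectrum(freqs, amp_spectrum, f1=10.0, f2=35.0):
    """Estimate high-frequency attenuation (kappa, in s) from the slope of the
    log acceleration amplitude spectrum between f1 and f2 (Anderson & Hough style)."""
    band = (freqs >= f1) & (freqs <= f2)
    slope, _ = np.polyfit(freqs[band], np.log(amp_spectrum[band]), 1)
    return -slope / np.pi

# Hypothetical spectrum with kappa = 0.04 s
f = np.linspace(1, 50, 500)
spec = np.exp(-np.pi * 0.04 * f)
print(round(kappa_from_spectrum(f, spec), 3))   # ~0.04
```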
Rashid Shams, University of Southern California
Site response in sedimentary basins is partially governed by mechanisms associated with three-dimensional features. These include the generation of propagating surface waves by trapped and refracted seismic waves, focusing of seismic energy due to basin shape and size, and resonance of the entire basin sediment structure. These mechanisms are referred to as basin effects, and they lead to significant increases in the amplitude and duration of observed ground motions from earthquakes. Currently, ground motion models (GMMs) incorporate basin effects using the time-averaged shear-wave velocity in the upper 30 m (V_S30) and isosurface depths (depth to a particular shear-wave velocity horizon, z_x). This approach captures site response features associated with the basin, but the parameters are one-dimensional in nature and therefore limited in their description of lateral and other three-dimensional (3D) contributions. This work explores geometric features as predictive parameters in the development of region-specific models to improve the characterization of site response in sedimentary basins. We constrain basin shape using depth to sedimentary basement (depth to a particular shear-wave velocity horizon, i.e., z_1.5 and z_2.3) and depth to crystalline basement (z_cb), derived and validated through systematic examination of geological cross sections and Community Velocity Model (CVM) profiles over the Los Angeles Basin (LAB). From the finalized basin shape we then compute geometric parameters such as the standard deviation of z_cb, the standard deviation of the absolute difference between z_1.5 and z_cb, distance from the basin margin, and a spatial area of influence based on V_S30. Residual analysis is used to assess the ability of these geometric parameters to reduce bias and uncertainty in basin site response analysis.
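A hypothetical sketch of how such geometric parameters could be computed from gridded depth surfaces follows; the grids, basin-mask threshold, and window size are invented for illustration and are not the study's actual values.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt, uniform_filter

# Hypothetical gridded surfaces over the basin (1 km grid spacing, depths in m)
z15 = np.random.rand(200, 200) * 3000.0          # depth to the 1.5 km/s isosurface (z_1.5)
zcb = z15 + np.random.rand(200, 200) * 2000.0    # depth to crystalline basement (z_cb)
inside_basin = zcb > 500.0                        # crude basin mask from basement depth

def local_std(grid, win=5):
    """Moving-window standard deviation, a simple variability/roughness measure."""
    mean = uniform_filter(grid, win)
    meansq = uniform_filter(grid**2, win)
    return np.sqrt(np.maximum(meansq - mean**2, 0.0))

std_zcb = local_std(zcb)                                   # std of z_cb
std_diff = local_std(np.abs(z15 - zcb))                    # std of |z_1.5 - z_cb|
dist_margin_km = distance_transform_edt(inside_basin)      # distance from basin margin (grid units)
```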
Amy Williamson, University of California Berkeley
Alerts sent through earthquake early warning (EEW) programs provide precious seconds for those alerted to take simple protective actions to mitigate their seismic risk. Programs like ShakeAlert have been providing alerts for felt earthquakes across the west coast of the US for almost 5 years. Earthquakes are also one part of a multihazard system and can trigger secondary natural hazards such as tsunamis and landslides. However, to be effective and timely, EEW and tsunami forecast algorithms must rely on the small amount of data initially available, often of variable quality and without analyst input. This talk focuses on potential advances to EEW algorithms to better constrain earthquake location and magnitude in real time, providing improved alerts, particularly in network-sparse regions. Additionally, this talk highlights work using real-time data to generate rapid tsunami early warning forecasts, its feasibility, and the benefit of unifying earthquake and tsunami alerts into one cohesive public-facing alerting structure.
James Biemiller, USGS
An unresolved aspect of tsunami generation in great subduction earthquakes is the offshore competition between coseismic deformation mechanisms, such as shallow megathrust slip, slip on one or more splay faults, and off-fault plastic deformation. In this presentation, we first review results from data-constrained 3D dynamic rupture modeling of an active plate-boundary-scale low-angle normal fault, the Mai’iu fault, that show how stress, fault structure, and the strength and thickness of overlying sediments influence shallow coseismic deformation partitioning in an extensional setting. Similar modeling approaches can shed light on shallow coseismic deformation in contractional settings, such as the Cascadia subduction zone (CSZ). Along the northwestern margin of the U.S., robust paleoseismic proxies record multiple M>8 paleoearthquakes over the Holocene, despite limited modern interface seismicity. Additionally, growth strata in the outer wedge record Late Quaternary slip on active landward- and seaward-vergent splay faults inboard of prominent variably-vergent frontal thrusts at the deformation front. The relative importance of megathrust vs. splay fault slip in generating tsunami hazards along the Pacific Northwest coastline is relatively unconstrained. Here, we develop data-driven 3D dynamic rupture models of the CSZ to analyze structural controls on shallow rupture processes including slip partitioning across the frontal thrusts, splays, and underlying decollement. Initial simulations show that trench-approaching ruptures typically involve meter-scale slip on variably oriented preexisting planar splay faults. Splay slip reduces slip on the subduction interface in a shadowed zone updip of their intersection, with greater splay slip leading to stronger shadowing. We discuss two structural controls on splay faults’ coseismic slip tendency: their dip angle and vergence. Gently dipping splays host more slip than steeply dipping ones and seaward-vergent splays host more slip than landward-vergent ones. We attribute these effects to distinct static and dynamic mechanisms, respectively. Finally, we show initial results from simulations with newly mapped frontal thrust geometries from CASIE21 seismic reflection data and discuss future directions for our CSZ dynamic rupture modeling project.
Jaeseok Lee, Brown University
Field observations indicate that fault systems are structurally complex, yet fault slip behavior has predominantly been attributed to local fault plane properties, such as friction parameters and roughness. Although relatively unexplored, emerging observations highlight the importance of fault system geometry in the mechanics governing earthquake rupture processes. In this talk, I will discuss how the geometrical complexities of fault networks impact various aspects of fault slip behavior, based on the analysis of surface fault trace misalignment. We discover that surface fault traces in creeping regions tend to be simpler, whereas those in locked regions are more complex. Additionally, we find correlations between complex fault geometry and enhanced high-frequency seismic radiation. Our findings suggest the potential for a new framework in which earthquake rupture behavior is influenced by a combination of geometric factors and rheological yielding properties.
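A toy version of a trace-misalignment measure is sketched below, computing the angles between successive segments of a digitized fault trace; the actual metric used in the study may differ, and the trace coordinates here are invented.

```python
import numpy as np

def segment_misalignment(x, y):
    """Angles (degrees) between successive segments of a digitized fault trace:
    near zero for smooth traces, larger and more variable for complex ones."""
    azimuth = np.arctan2(np.diff(y), np.diff(x))
    dtheta = np.diff(np.unwrap(azimuth))
    return np.degrees(np.abs(dtheta))

# Hypothetical trace: a straight fault with a small bend
x = np.linspace(0, 10, 11)
y = np.concatenate([np.zeros(6), 0.2 * np.arange(5) + 0.2])
print(segment_misalignment(x, y).round(1))
```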
Thomas Lee, Harvard University
Since the first seismograms were recorded in the late 19th century, the seismological community has accumulated millions of ground motion records on both paper and film. While almost all analog seismic recording ended by the late 20th century, replaced by digital media, the still-extant archives of paper and film seismograms are invaluable for many ongoing scientific applications. This long-running record of ground motion is crucial for developing understanding of how both natural and anthropogenic events have changed the Earth and its processes throughout the last century.
Today, most of these records are housed in institutions with limited resources, which must prioritize certain objects or types of objects for preservation and access. For example, when seismologists today are forced to triage collections, the bulky paper records are often more at risk of deaccessioning than the more compact film copies. However, the alterations introduced in reformatting (i.e., paper to film), as well as the preservation requirements of the various records, are not often fully understood or appreciated. To make these decisions in an informed way, it is vital to know the stability of the recording media and the level of accuracy that can be obtained from these different records. For example, image distortion and the available color depth in paper and microfilm copies can result in discrepancies in derived time series, which could lead to significant errors in products such as earthquake magnitude and location.
We present lessons learned from recent experiences with modern archiving and processing of legacy seismic data. These include techniques for data rescue (including both scanning and conversion to time series), the importance of characterizing the full processing chain, and the importance of involving archivists and citizen science in preservation efforts.
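As a minimal sketch of the scan-to-time-series conversion step (real rescue workflows must also handle overlapping traces, timing marks, and drum curvature), one could locate the ink centroid in each image column; the image, sampling, and function names below are hypothetical.

```python
import numpy as np

def trace_to_timeseries(scan, dx_seconds_per_pixel, threshold=0.5):
    """Convert a scanned single-trace seismogram image (2D array, 0 = black ink,
    1 = white paper) to a time series by locating the ink centroid in each column."""
    ink = scan < threshold                        # boolean mask of the trace
    rows = np.arange(scan.shape[0])[:, None]
    with np.errstate(invalid="ignore"):
        centroid = (ink * rows).sum(axis=0) / ink.sum(axis=0)   # NaN where no ink (trace gaps)
    t = np.arange(scan.shape[1]) * dx_seconds_per_pixel
    return t, centroid                            # amplitude in pixels; scale by drum gain

# Hypothetical 1-bit scan: a sinusoidal trace drawn onto a white page
h, w = 200, 1000
img = np.ones((h, w))
rows = (100 + 40 * np.sin(2 * np.pi * np.arange(w) / 200)).astype(int)
img[rows, np.arange(w)] = 0.0
t, amp = trace_to_timeseries(img, dx_seconds_per_pixel=0.05)
```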