Abstract
Recent developments in earthquake forecasting models have demonstrated the need for a robust method for identifying which model components are most beneficial to understanding spatial patterns of seismicity. Borrowing from ecology, we use Log-Gaussian Cox process models to describe the spatially varying intensity of earthquake locations. These models are constructed from elements that may influence earthquake locations, including the underlying fault map and past seismicity models, together with a random field that accounts for any excess spatial variation the deterministic components cannot explain. Comparing alternative models of varying complexity, composed of different components, allows us to assess their performance and therefore identify which elements are most useful for describing the distribution of earthquake locations. We demonstrate the effectiveness of this approach using synthetic data and the earthquake and fault information available for California, including an application to the 2019 Ridgecrest sequence. We show the flexibility of this modelling approach and how it might be applied in areas without the same abundance of detailed information. We find results consistent with the existing literature on the performance of past seismicity models: slip rates are beneficial for describing the spatial locations of larger magnitude events, and strain rate maps can constrain the spatial limits of seismicity in California. We also demonstrate that maps of distance to the nearest fault can benefit spatial models of seismicity, even those that also include the primary fault geometry used to construct them.
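The core idea above can be sketched numerically: a Log-Gaussian Cox process has intensity λ(s) = exp(β₀ + β₁·z(s) + u(s)), where z(s) is a deterministic covariate (here, a hypothetical distance-to-fault map) and u(s) is a Gaussian random field. The following is a minimal, illustrative simulation only; the coefficients, the fault geometry, and the smoothed-noise stand-in for the SPDE/Matérn fields that inlabru actually fits are all assumptions, not the authors' model.

```python
import numpy as np

rng = np.random.default_rng(42)

# Grid over a unit-square study region (hypothetical)
n = 64
xs = np.linspace(0.0, 1.0, n)
X, Y = np.meshgrid(xs, xs)

# Deterministic covariate: distance to a hypothetical fault along y = 0.5
dist_to_fault = np.abs(Y - 0.5)

# Crude Gaussian random field: white noise smoothed with a separable
# Gaussian kernel (an illustrative stand-in for a Matern/SPDE field)
noise = rng.standard_normal((n, n))
kernel = np.exp(-0.5 * (np.arange(-8, 9) / 3.0) ** 2)
kernel /= kernel.sum()
field = np.apply_along_axis(lambda r: np.convolve(r, kernel, mode="same"), 0, noise)
field = np.apply_along_axis(lambda r: np.convolve(r, kernel, mode="same"), 1, field)

# Log-intensity: intercept + covariate effect + random field
beta0, beta1 = 3.0, -5.0  # hypothetical coefficients
log_lambda = beta0 + beta1 * dist_to_fault + field

# Expected event count = integral of the intensity over the region,
# approximated by summing over grid cells
cell_area = (1.0 / n) ** 2
expected_events = np.exp(log_lambda).sum() * cell_area

# Synthetic catalogue: independent Poisson counts per grid cell
counts = rng.poisson(np.exp(log_lambda) * cell_area)
```

Because β₁ is negative, the simulated intensity concentrates events near the fault trace, which is the qualitative behaviour the distance-to-fault covariates in the paper are designed to capture.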
Source: 'Data-driven optimization of seismicity models using diverse data sets: generation, evaluation and ranking using inlabru'.