Faults' hot streaks and slumps could change earthquake hazard assessments

On Monday, 23 Sept. 2019, at the GSA Annual Meeting in Phoenix, Seth Stein, the Deering Professor of Geological Sciences at Northwestern University, will present a new model that he and his co-authors believe better explains the complexity of the “supercycles” that have been observed in long-term earthquake records. “One way to think about this is that faults have hot streaks — earthquake clusters — as well as slumps — earthquake gaps — just like sports teams,” says Stein.

In the traditional concept of the seismic cycle, Stein explains, the likelihood of a large earthquake depends solely upon the amount of time that has elapsed since the most recent large tremor reset the system. In this simple case, he says, the fault has only a “short-term memory.”
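As a rough sketch of this short-term-memory picture (not the authors' actual formulation), the traditional model can be treated as a renewal process in which the chance of a large quake depends only on the time elapsed since the last one. The hazard function and every parameter value below are hypothetical, chosen purely for illustration.

```python
import math

def annual_quake_probability(years_since_last, mean_interval=200.0, shape=2.0):
    """Toy renewal ("short-term memory") model: the probability of a big
    quake in the coming year depends only on time since the last event.
    Weibull-style hazard; all numbers here are illustrative."""
    t = years_since_last
    # The hazard rate rises with elapsed time, so a quake grows more
    # likely the longer the fault has been quiet.
    hazard = (shape / mean_interval) * (t / mean_interval) ** (shape - 1)
    return 1.0 - math.exp(-hazard)  # probability over a one-year step

# The "clock reset": immediately after a big event, elapsed time drops
# to zero and the probability collapses, whatever came before.
for t in (10, 100, 200, 400):
    print(f"{t:3d} yr since last quake -> P(next year) ~ {annual_quake_probability(t):.4f}")
```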

“The only thing that matters,” says Stein, “is when the last big earthquake was. The clock is reset every time there’s a big event.”

But this model is not realistic, he argues. “We would never predict the performance of a sports team based on how they performed during their previous game,” says Stein. “The rest of the season is likely to be much more useful.”

Geologists sometimes see long-term patterns in paleoseismic records that the seismic-cycle model can’t explain. In these cases, says Stein, “Not all the accumulated strain has been released after one big earthquake, so these systems have what we call ‘long-term memories.’”

To get a sense of how a system with Long-Term Fault Memory would function, the researchers sampled 1,300-year windows, a span of time for which geologists might reasonably have a record available, from simulated 50,000-year paleoseismic records. The results indicate that earthquake recurrence intervals looked very different depending upon which 1,300-year window the scientists examined.
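A minimal sketch of that windowing exercise, using a stand-in catalog rather than the authors' simulations: the toy below builds a 50,000-year synthetic record by randomly mixing short and long recurrence intervals, then reports the intervals visible in a few 1,300-year observation windows. Every distribution and parameter here is an assumption made for illustration.

```python
import random

random.seed(0)

def toy_catalog(total_years=50_000):
    """Hypothetical stand-in for a simulated paleoseismic record:
    event times from a toy process that mixes short ("streak") and
    long ("slump") recurrence intervals at random."""
    times, t = [], 0.0
    while t < total_years:
        mean_gap = 80 if random.random() < 0.5 else 400
        t += random.expovariate(1 / mean_gap)
        times.append(t)
    return times

def window_intervals(times, start, length=1_300):
    """Recurrence intervals visible inside one observation window."""
    window = [t for t in times if start <= t < start + length]
    return [b - a for a, b in zip(window, window[1:])]

catalog = toy_catalog()
for start in (5_000, 20_000, 35_000):
    gaps = window_intervals(catalog, start)
    mean = sum(gaps) / len(gaps) if gaps else float("nan")
    print(f"window at {start:>6} yr: {len(gaps):2d} intervals, mean ~ {mean:5.1f} yr")
```

Even though the generating process never changes, the interval statistics differ from window to window, which is the effect the researchers describe.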

Because there are random elements involved, says Stein, there are windows in which the recurrence intervals appear periodic and others in which they look clustered. “But the fault hasn’t changed its properties,” he says. Eventually, the model predicts, the earthquakes will release much of the accumulated strain, at which point the system will reset and the fault’s “streak” will end.

According to this Long-Term Fault Memory model, the probability of an earthquake’s occurrence is controlled by the strain stored on the fault. This depends on two parameters: the rate at which strain accumulates along the fault, and how much strain is released after each big earthquake. “The usual earthquake-cycle model assumes that only the last quake matters,” says Stein, “whereas in the new model, earlier quakes have an effect, and this history influences the probability of an earthquake in the future.” After a big quake, he says, there can still be lots of strain left, so the fault will be on a hot streak. Eventually, however, most of the strain is released and the fault goes into a slump.
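One way to make that description concrete (a toy sketch under assumed numbers, not the authors' implementation): strain accumulates at a constant rate, the annual quake probability grows with the strain currently stored, and each quake releases only a random fraction of it. The strain left behind is the "long-term memory."

```python
import math
import random

random.seed(1)

RATE = 1.0      # strain accumulated per year (hypothetical units)
SCALE = 500.0   # strain level governing how fast probability rises

def simulate(years=20_000):
    """Toy Long-Term Fault Memory sketch: quake probability depends on
    stored strain, and each quake releases only part of that strain.
    All rates, scales, and distributions here are illustrative."""
    strain, event_years = 0.0, []
    for year in range(years):
        strain += RATE
        p_quake = 1.0 - math.exp(-(strain / SCALE) ** 2)  # grows with strain
        if random.random() < p_quake:
            # Incomplete release: leftover strain keeps the next quake
            # more likely than a full "clock reset" would allow (a hot
            # streak); once most strain is gone, the fault slumps.
            strain *= random.uniform(0.1, 0.5)
            event_years.append(year)
    return event_years

events = simulate()
gaps = [b - a for a, b in zip(events, events[1:])]
print(f"{len(events)} quakes; intervals from {min(gaps)} to {max(gaps)} yr")
```

Because the release fraction is random, runs of closely spaced quakes emerge while strain stays high, then long quiet stretches follow once it has largely drained, mirroring the streaks and slumps Stein describes.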

Ultimately, says Stein, the earthquake hazard depends on whether a fault is in a slump or a streak. “Depending on which of those assumptions you make,” he says, “you can get the earthquake probability much higher or much lower.”

Seismologists have not yet come up with a compelling way to determine whether a fault is — or is not — in a cluster. As a result, says Stein, “There’s a much larger uncertainty in estimates of the probability of an earthquake than people have been wanting to admit.”
