Wednesday, May 29, 2013

The frequency of strong tornadoes

Given the recent EF5 Moore tornado, the 2011 Joplin EF5, the 2011 Tuscaloosa EF4 and other strong tornadoes in the last few years, a lot of people have been commenting on whether or not we're seeing more violent tornadoes now than in the past.  A few have even tried ascribing recent violent weather to climate change.  I find this particularly amusing, especially considering that the last 12 months have been the least active 12-month period of tornado activity on record.  But what do the trends say--are we seeing strong or violent tornadoes more frequently?

A lot of people have looked at this particular question, including some excellent blog posts like those from  Jeff Masters.  A timely article in the Bulletin of the American Meteorological Society by Kunkel et al. shows that the frequency of strong (EF1+) tornado reports is not increasing, though weaker (EF0) tornado reports have increased.
Number of tornadoes per year (From Kunkel et al. 2013)
However, they note that these observations are complicated by many different factors, most notably the expansion of population areas over time and the increased prevalence of automated observations.  Because of these factors they looked at changes in the frequency of weather conditions that favor strong tornadoes instead.

Let's dive into these counts a little deeper to illustrate why it's difficult to pull any trend out of these numbers. I pulled a Patrick Marsh and grabbed the tornado count data from the SPC website and made the same sort of plot as the Kunkel et al. plot above, but for all of the different F/EF ratings.  On the left we see the counts of the total tornadoes reported each year in each strength category and on the right we see these counts as a fraction of the annual total.
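(For anyone who wants to reproduce these plots, the calculation is straightforward.  Below is a minimal sketch in Python, assuming the SPC tornado database has been saved locally as a CSV; the file name and the "yr"/"mag" column names are my assumptions and may differ from the actual file layout.)

```python
# A minimal sketch (not the exact script used for the plots above), assuming
# the SPC tornado database has been downloaded locally as "spc_tornadoes.csv"
# with a year column "yr" and an F/EF-rating column "mag" -- both names are
# assumptions and may not match the actual file.
import pandas as pd
import matplotlib.pyplot as plt

tor = pd.read_csv("spc_tornadoes.csv")
tor = tor[tor["mag"] >= 0]   # drop reports with unknown/unrated magnitude

# Rows: year, columns: F/EF rating, values: number of reports that year
counts = tor.groupby(["yr", "mag"]).size().unstack(fill_value=0)

# Each rating's share of that year's total reports
fractions = counts.div(counts.sum(axis=1), axis=0)

fig, (ax_counts, ax_frac) = plt.subplots(1, 2, figsize=(12, 5))
counts.plot(ax=ax_counts, title="Tornado reports per year by rating")
fractions.plot(ax=ax_frac, title="Fraction of annual reports by rating")
plt.tight_layout()
plt.show()
```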

In terms of raw numbers, the count of reported tornadoes has definitely been increasing over the last 53 years.  However, we can see that it really has been the F/EF0 reports driving that upward trend.  If we look at the percentage plot on the right, only the F/EF0 fraction shows an increasing trend--there is virtually no trend in the fraction of F/EF1, 4, or 5 reports, and slight downward trends in the fractions of F/EF2 and F/EF3 reports.  This is an interesting breakdown, particularly the divergence between the F/EF1 and F/EF2 reports.  That seems to be the breaking point--the total number of F/EF0 and F/EF1 tornadoes reported has been increasing while the other categories haven't shown a whole lot of change.

A lot of this increase in reports of weaker tornadoes has to do with population growth, the urbanization of more areas, and a general increase in public awareness.  Strong tornadoes (here, F/EF2 or greater) have always been more likely to be reported; their damage is far more obvious and more easily attributed to a tornado.  As we build up our urban areas and become more interconnected, we've become more sensitive to even small disruptions in our infrastructure, and as a result even damage caused by weak tornadoes gets reported.

I'm also curious about the sudden jump in the number of F/EF0 reports that began in the late 1980s and has continued to this day.  I wondered whether it coincided with the nationwide roll-out of Doppler radars in the late 80s and early 90s.  It turns out that a 2005 study by Simmons and Sutter looked at the impact of Doppler radar on tornado warnings.  They broke down, on a forecast-office-by-forecast-office basis, the number of tornado reports in each F-scale category before and after the WSR-88D implementation:
From Simmons and Sutter (2005)
We can use these numbers to plot the percent change in the number of tornado reports in each F-scale category before and after Doppler radar implementation.
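(The calculation itself is simple; a minimal sketch is below.  The report counts here are placeholders, not the actual Simmons and Sutter numbers--those would be summed across forecast offices from their table.)

```python
# A minimal sketch of the percent-change calculation. The counts below are
# placeholders, NOT the actual Simmons and Sutter (2005) values.
before_88d = {"F0": 1000, "F1": 800, "F2": 350, "F3": 120, "F4": 30, "F5": 4}
after_88d  = {"F0": 1500, "F1": 820, "F2": 340, "F3": 115, "F4": 28, "F5": 3}

for scale, n_before in before_88d.items():
    n_after = after_88d[scale]
    pct_change = 100.0 * (n_after - n_before) / n_before
    print(f"{scale}: {pct_change:+.1f}% change in reports after WSR-88D installation")
```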
The percentage of reports that were F0 tornadoes increased by about 11 percent--the only category that saw an increase.  I really think the Doppler radar implementation has played a significant role in increasing the number of weak tornadoes reported, as it allows meteorologists to see small-scale spinups that might otherwise have been lost in broader thunderstorm wind damage.  This improved detection of weaker tornadoes helps focus storm survey efforts, and more damage ends up attributed to weak tornadoes than we might have suspected in the past.

So, after that brief look: it's true that the number of tornadoes reported has been increasing over time, but that doesn't necessarily mean the total number of tornadoes has been increasing--we're just getting better at detecting weaker tornadoes, and our population is more sensitive to their effects.  Focusing on stronger tornadoes, there's not enough evidence to suggest that the number of reports is increasing.  In fact, as a fraction of total reports, their share has remained roughly steady, if not decreased slightly, as weaker tornado reports have become far more common.

Friday, May 24, 2013

A return--with some thoughts on forecasting severe thunderstorms

Hello again everyone!  It has been a year since my last blog post, and having finished a lot of work in that time I finally decided to resume posting again.  Hopefully I'll continue more steadily again from here on out.  I continue to get comments and feedback on my old posts (thanks to Google cataloging everything so effectively...) and I appreciate all of the thoughts and comments I've received.  Keep them coming!

Today I just want to offer a few thoughts or notes on forecasting severe thunderstorms with some examples surrounding the Moore tornado.

  • It has been well established by many people that the National Weather Service did an excellent job of warning the people of Moore before the storm hit--as much as 36 minutes of lead time by some estimates, well above the national average of around 10 minutes.  I happened to be in central Oklahoma last weekend (though I left on Sunday evening as a different set of tornadic storms was moving through), and it was my experience, just as when I was living down there, that the people of Oklahoma are extremely weather-literate and aware.  Everywhere I went, the people I encountered would remark that we were "in for some rough weather this afternoon" or tell me to "get that rental car back before a hailstorm moves in...or worse".  Everywhere TVs were tuned to the local news and the Weather Channel, and I overheard non-meteorologists talking about "moderate risks" and "mesoscale discussions".  This weather event was not completely unexpected, and I admire and credit the people of Oklahoma for taking such an active interest in their weather forecasts so that they can stay safe.
  • As someone interested in mesoscale numerical modeling, I follow the work being done by the Hazardous Weather Testbed Spring Experiment teams, who are evaluating our models' performance and our nowcasting tools for severe weather events.  They have multiple blogs, for example the GOES-R Proving Ground group and the Experimental Forecast Program.  These groups have some fascinating new tools and observations that show the forefront of our ability to predict severe convective weather.  Below are some things I noted from their tools surrounding the Moore event.
  • To illustrate just how hard it is for our models, even at high resolution, to predict when and where individual thunderstorms will strike, below is a comparison of two model runs from the NSSL 4km WRF, one starting at 00Z on May 20 and another at 12Z on the 20th.  Both are simulations of  what the radar composite would look like for the hours leading up to 20Z--the time when the tornado was entering Moore.  You can see that the 00Z model run (the left column) seemed to pick up more on storms in southwestern Oklahoma and some in central Oklahoma, but it completely missed the development further to the northeast along the cold front.  The 12Z run (middle column) really picked up on the development along the cold front, but kind of missed the storms in central Oklahoma.  The right column shows the actual observed reflectivity composite for comparison.  It's often been noted in convective events that the 12Z model guidance doesn't always provide a better forecast than the 00Z guidance, even though it was run later with more recent information.  This shows just how hard it is for our deterministic models to forecast storm development, even only a few hours in advance.
  • One way to work around the difficulties any single model has in predicting when and where storms will form is to use an ensemble--many model runs with slightly different initial conditions and/or different model formulations.  We can estimate the probability of a certain event occurring by seeing how many of the ensemble members produce that event (a short sketch of that calculation follows this list).  Below is an example of forecast probabilities of updraft helicity (i.e., rotating updrafts) exceeding various thresholds from the Storm Prediction Center's Storm-Scale Ensemble of Opportunity for a three-hour period including the time of the Moore tornado.  Not too bad--the ensemble suggests a high probability of rotating updrafts throughout central Oklahoma.

          Of course, some ensembles can also be misleading or still not capture things well.  Below is an example prediction from the CAPS ensemble of where a certain parameter (the Significant Tornado Parameter) will be greater than 3 (indicating a likelihood of strong tornadoes) during the Moore event.  The highest probabilities are indicated far to the northeast of Moore.  However, low probabilities are still present in the Moore area, indicating that by this parameter a significant tornado was still possible in this area...

  • There are also efforts underway to support a project called "Warn-on Forecast".  The idea is that once cumulus clouds start growing, we can first identify which particular clouds have the highest potential of growing upscale into stronger thunderstorms.  Tools are being developed to do this, including satellite products that estimate the rate of cloud growth by how fast the tops of the clouds are cooling, or lightning-based methods that will identify storms where lightning frequency is ramping up.  Once these growing storms have been identified, the plan would be to build a high-resolution ensemble of models centered on that storm.  We could then forecast the evolution of that storm and get uncertainty information from the different ensemble members.  This would let forecasters evaluate the ensemble to get probabilities for the storm producing large hail, strong winds or even tornadoes.  We could also get uncertainty in the path of the storm, allowing forecasters to make much more accurate warnings.  Furthermore, as our confidence in these forecasts grows, we could issue warnings for the storms before they even become severe, greatly increasing lead times.
  • Unfortunately an active, real-time warn-on forecast system like this is still a ways off, but some of the tools to support it do exist.  Cloud-top cooling from satellites and lightning mapping arrays are very real things and are actively being tested for their ability to identify storms that will grow.  Some groups are also working on using radar data to make high-resolution model analyses of the wind, temperature, pressure, and moisture fields surrounding these storms as they are developing.  This is the first step in trying to make these storm-scale ensembles.  Below is an example 1km analysis produced from radar data as the Moore tornado was developing.  Vorticity is contoured in black, and you can see that the analyzed wind fields have a lot of vorticity (rotation) in the right areas of these storms.
One problem with this warn-on-forecast methodology is that it is inherently limited--we can only identify when and where the growing storms will be after they have already formed and begun to grow.  Forecasting convective initiation--that is, predicting when and where storms will form before they exist--is still incredibly difficult (see the NSSL WRF comparison above for an example).  But we're working on that too...
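As promised above, here is a minimal sketch of the ensemble exceedance-probability idea: at each grid point, the probability is simply the fraction of members exceeding the chosen threshold.  The array and threshold here are made up for illustration; the real SSEO product works from each member's simulated updraft helicity over the forecast window and typically adds neighborhood/spatial smoothing on top of this.

```python
# A minimal sketch of ensemble exceedance probabilities (in the spirit of the
# SSEO updraft-helicity product shown above). The "uh" array is synthetic;
# a real product would hold each member's 3-hour maximum updraft helicity.
import numpy as np

n_members, ny, nx = 7, 120, 160
rng = np.random.default_rng(42)
uh = rng.gamma(shape=2.0, scale=15.0, size=(n_members, ny, nx))  # fake UH fields

threshold = 25.0                      # example threshold, m^2/s^2
prob = (uh > threshold).mean(axis=0)  # fraction of members exceeding it at each point

print("Maximum gridpoint probability:", prob.max())
```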

So, in brief, the meteorological community is working on ways to better predict severe convective events like what happened in Moore.  Forecasters already are doing an amazing job with the resources they have, but new tools are in the works to more accurately refine our forecasts to reduce false alarms and give people more lead time.  It's an exciting time to be researching in this field!