Friday, May 24, 2013

A return--with some thoughts on forecasting severe thunderstorms

Hello again everyone!  It has been a year since my last blog post, and having finished a lot of work in that time I finally decided to resume posting.  Hopefully I'll keep it up more steadily from here on out.  I continue to get comments and feedback on my old posts (thanks to Google cataloging everything so effectively...) and I appreciate all of them.  Keep them coming!

Today I just want to offer a few notes on forecasting severe thunderstorms, with some examples surrounding the Moore tornado.

  • It has been well established that the National Weather Service did an excellent job of warning the people of Moore before the storm hit--as much as 36 minutes of lead time by some estimates, well above the national average of around 10 minutes.  I happened to be in central Oklahoma last weekend (though I left on Sunday evening, when a different set of tornadic storms was moving through), and it was my experience, just as when I was living down there, that the people of Oklahoma are extremely weather literate and aware.  Everywhere I went, the people I encountered would remark that we were "in for some rough weather this afternoon" or tell me to "get that rental car back before a hailstorm moves in...or worse".  TVs everywhere were tuned to the local news and the Weather Channel, and I overheard non-meteorologists talking about "moderate risks" and "mesoscale discussions".  This event did not catch people off guard, and I admire and credit the people of Oklahoma for taking such an active interest in their weather forecasts so that they can stay safe.
  • As someone interested in mesoscale numerical modeling, I follow the work being done by the Hazardous Weather Testbed Spring Experiment teams, who evaluate our model performance and nowcasting tools for severe weather events.  They have multiple blogs, for example the GOES-R Proving Ground group or the Experimental Forecast Program.  These groups have some fascinating new tools and observations that show what's on the forefront of our ability to predict severe convective weather.  Below are some things I noted from their tools surrounding the Moore event.
  • To illustrate just how hard it is for our models, even at high resolution, to predict when and where individual thunderstorms will strike, below is a comparison of two runs of the NSSL 4-km WRF, one starting at 00Z on May 20 and another at 12Z on the 20th.  Both simulate what the radar composite would look like for the hours leading up to 20Z--the time when the tornado was entering Moore.  You can see that the 00Z run (the left column) picked up more on storms in southwestern and central Oklahoma, but it completely missed the development farther to the northeast along the cold front.  The 12Z run (middle column) really picked up on the development along the cold front, but largely missed the storms in central Oklahoma.  The right column shows the actual observed reflectivity composite for comparison.  It's often been noted in convective events that the 12Z guidance doesn't always provide a better forecast than the 00Z guidance, even though it was run later with more recent information.  This shows just how hard it is for our deterministic models to forecast storm development, even only a few hours in advance (for a toy example of scoring such forecasts objectively, see the reflectivity-scoring sketch after this list).
  • One way to work with the difficulties that any single model will have in predicting when and where storms will form is to use an ensemble--many model runs with slightly different initial conditions and/or different model formulations.  We can estimate the probability of an event occurring by counting how many of the ensemble members produce it (the exceedance-probability sketch after this list shows the calculation).  Below is an example of forecast probabilities of updraft helicity (i.e., rotating updrafts) exceeding various thresholds from the Storm Prediction Center's Storm-Scale Ensemble of Opportunity for a three-hour period including the time of the Moore tornado.  Not too bad--the ensemble suggests a high probability of rotating updrafts throughout central Oklahoma.

          Of course, ensembles can also be misleading or fail to capture things well.  Below is an example prediction from the CAPS ensemble of where a certain parameter (the Significant Tornado Parameter) will be greater than 3 (indicating a likelihood of strong tornadoes) during the Moore event; a sketch of one formulation of this parameter follows the list.  The highest probabilities are indicated far to the northeast of Moore.  However, low probabilities are still present in the Moore area, indicating that by this parameter a significant tornado was still possible there...

  • There are also efforts underway to support a project called "Warn-on-Forecast".  The idea is that once cumulus clouds start growing, we can first identify which particular clouds have the highest potential of growing upscale into stronger thunderstorms.  Tools are being developed to do this, including satellite products that estimate the rate of cloud growth by how fast the cloud tops are cooling (see the cloud-top cooling sketch after this list), and lightning-based methods that identify storms where lightning frequency is ramping up.  Once these growing storms have been identified, the plan is to build a high-resolution ensemble of models centered on each storm.  We could then forecast the evolution of that storm and get uncertainty information from the different ensemble members.  This would let forecasters derive probabilities of the storm producing large hail, strong winds, or even tornadoes.  We could also get uncertainty in the path of the storm, allowing forecasters to issue much more precise warnings.  Furthermore, as our confidence in these forecasts grows, we could issue warnings for storms before they even become severe, greatly increasing lead times.
  • Unfortunately, an active, real-time warn-on-forecast system like this is still a ways off, but some of the tools to support it do exist.  Cloud-top cooling estimates from satellites and lightning mapping arrays are very real and are actively being tested for their ability to identify storms that will grow.  Some groups are also working on using radar data to make high-resolution model analyses of the wind, temperature, pressure, and moisture fields surrounding these storms as they develop.  This is the first step toward building these storm-scale ensembles.  Below is an example 1-km analysis produced from radar data as the Moore tornado was developing.  Vorticity is contoured in black, and you can see that the analyzed wind fields have a lot of vorticity (rotation) in the right areas of these storms (the vorticity sketch at the end of this list shows how this quantity is computed from gridded winds).
One problem with this warn-on-forecast methodology is that it is inherently limited--we can only identify the growing storms after they have already formed and begun to grow.  Forecasting convective initiation--that is, predicting when and where storms will form in the first place--is still incredibly difficult (see the NSSL WRF comparison above for an example).  But we're working on that too...
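
As promised above, here's a minimal sketch of how a gridded reflectivity forecast like the NSSL-WRF runs can be scored against observations.  It computes the Critical Success Index (threat score) at a single dBZ threshold on synthetic arrays--no real model or radar data here, and the grid sizes and noise levels are made up purely for illustration.

```python
import numpy as np

def csi(forecast_dbz, observed_dbz, threshold=40.0):
    """Critical Success Index (threat score) for reflectivity exceeding a
    threshold: hits / (hits + misses + false alarms)."""
    fcst = forecast_dbz >= threshold
    obs = observed_dbz >= threshold
    hits = np.sum(fcst & obs)
    misses = np.sum(~fcst & obs)
    false_alarms = np.sum(fcst & ~obs)
    denom = hits + misses + false_alarms
    return hits / denom if denom > 0 else np.nan

# Synthetic 100x100 grids standing in for the 00Z and 12Z simulated
# composites and the observed mosaic (purely illustrative numbers).
rng = np.random.default_rng(0)
observed = rng.uniform(0.0, 60.0, (100, 100))
run_00z = observed + rng.normal(0.0, 10.0, observed.shape)
run_12z = observed + rng.normal(0.0, 8.0, observed.shape)

print("00Z CSI:", csi(run_00z, observed))
print("12Z CSI:", csi(run_12z, observed))
```

A grid-point score like this harshly penalizes a storm that is forecast perfectly but displaced by a few kilometers, which is one reason neighborhood-based scores (e.g., the fractions skill score) are popular for verifying convection-allowing models.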
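
The exceedance probabilities shown above are conceptually simple: at each grid point, count the fraction of ensemble members in which the field (here, updraft helicity) exceeds a threshold.  Here's a sketch with synthetic data standing in for storm-scale ensemble output; the member count and thresholds are illustrative choices, not the SSEO's actual configuration.

```python
import numpy as np

def exceedance_probability(member_fields, threshold):
    """Fraction of ensemble members meeting or exceeding `threshold`
    at each grid point."""
    member_fields = np.asarray(member_fields)  # shape: (n_members, ny, nx)
    return np.mean(member_fields >= threshold, axis=0)

# Synthetic stand-in for 2-5 km updraft helicity (m^2/s^2) from a
# 20-member ensemble over a 3-h window (not real data).
rng = np.random.default_rng(1)
uh = rng.gamma(shape=2.0, scale=30.0, size=(20, 120, 120))

for thresh in (25.0, 75.0, 150.0):
    prob = exceedance_probability(uh, thresh)
    print(f"P(UH >= {thresh:5.1f}) max over grid: {prob.max():.2f}")
```

In practice these raw probabilities are usually smoothed over a spatial neighborhood, so that a storm displaced by a grid box or two still contributes to the probability at a point.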
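
For reference, here is one common formulation of the fixed-layer Significant Tornado Parameter mentioned above.  I'm writing this from memory of the SPC definition, so treat the exact caps and coefficients as approximate (the operational version also has an effective-layer variant that includes a CIN term).

```python
def significant_tornado_parameter(sbcape, sblcl, srh1, shear6):
    """Fixed-layer Significant Tornado Parameter (approximate formulation).

    sbcape: surface-based CAPE (J/kg)
    sblcl:  surface-based LCL height (m AGL)
    srh1:   0-1 km storm-relative helicity (m^2/s^2)
    shear6: 0-6 km bulk wind difference (m/s)
    """
    cape_term = sbcape / 1500.0
    # LCL term: 1 for LCLs below 1000 m, 0 above 2000 m, linear between
    lcl_term = min(max((2000.0 - sblcl) / 1000.0, 0.0), 1.0)
    srh_term = srh1 / 150.0
    # Shear term: zeroed for weak shear, capped for very strong shear
    shear_term = 0.0 if shear6 < 12.5 else min(shear6 / 20.0, 1.5)
    return cape_term * lcl_term * srh_term * shear_term

# Hypothetical values for a volatile warm sector:
stp = significant_tornado_parameter(sbcape=3500, sblcl=800, srh1=300, shear6=25)
print(stp)  # ~5.8, well above the STP > 3 threshold discussed above
```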
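
The cloud-top cooling idea is also easy to sketch: difference two successive infrared brightness-temperature scans and flag pixels that are cooling rapidly, since a rapidly cooling cloud top means a cloud that is rapidly growing taller.  The -8 K per 15 minutes flag threshold and the synthetic fields below are illustrative choices, not the values used by any operational product.

```python
import numpy as np

def cooling_rate(tb_prev, tb_curr, dt_minutes):
    """IR brightness-temperature change normalized to K per 15 minutes;
    strongly negative values suggest vigorous updraft growth."""
    return (tb_curr - tb_prev) / dt_minutes * 15.0

# Synthetic brightness-temperature grids (K) 15 minutes apart
# (not real satellite data).
rng = np.random.default_rng(2)
tb0 = rng.uniform(210.0, 290.0, (80, 80))
tb1 = tb0 + rng.normal(0.0, 2.0, tb0.shape)
tb1[30:35, 40:45] -= 12.0  # plant one rapidly growing cell

rate = cooling_rate(tb0, tb1, dt_minutes=15.0)
flagged = rate <= -8.0
print("pixels flagged as rapidly cooling:", int(flagged.sum()))
```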
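
Finally, the vorticity contoured in analyses like the one described above is the vertical component of the curl of the horizontal wind, zeta = dv/dx - du/dy, which is straightforward to compute from gridded winds with centered differences.  A sketch on an idealized solid-body vortex standing in for an analyzed mesocyclone:

```python
import numpy as np

def vertical_vorticity(u, v, dx=1000.0, dy=1000.0):
    """Relative vertical vorticity (dv/dx - du/dy) on a uniform grid via
    centered differences; dx and dy in meters (a 1-km grid here)."""
    dvdx = np.gradient(v, dx, axis=1)  # x varies along axis 1
    dudy = np.gradient(u, dy, axis=0)  # y varies along axis 0
    return dvdx - dudy

# Idealized solid-body rotation: u = -omega*y, v = omega*x, for which the
# exact vorticity is 2*omega everywhere.
n = 41
coords = (np.arange(n) - n // 2) * 1000.0  # meters
X, Y = np.meshgrid(coords, coords)
omega = 5e-3  # angular velocity (s^-1), roughly mesocyclone strength
u = -omega * Y
v = omega * X

zeta = vertical_vorticity(u, v)
print("max analyzed vorticity (s^-1):", zeta.max())  # ~1e-2 = 2*omega
```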

So, in brief, the meteorological community is working on ways to better predict severe convective events like what happened in Moore.  Forecasters are already doing an amazing job with the resources they have, but new tools are in the works to refine our forecasts, reduce false alarms, and give people more lead time.  It's an exciting time to be doing research in this field!
