Tuesday, May 8, 2012

The "simplest" kind of weather model

When we talk about our weather models these days, we're talking about complex systems of equations coded into computer programs that are expensive and time-consuming to run.  There's so much detail in what goes on in the atmosphere that computing forecasts over the time period we're interested in, and at the detail we want, takes enormous amounts of memory and processing power.  Models that I show on this blog, like the GFS or the WRF model, contain hundreds of thousands of lines of code and can require supercomputers to run efficiently (particularly in the case of the GFS).  But do we really need all of this power?  If we're willing to settle for a bit less detail in our models and trust our meteorological instincts a bit more, just how well can we make a forecast?

Do we really need big supercomputers and fancy, complex codes to make decent weather predictions on a global scale?

By doing a lot of simplifying, we can actually get a fairly good idea of the general flow in the atmosphere from some very basic models.  Today I'm going to talk about what is probably the "simplest" model that can realistically model atmospheric flow around the globe--the barotropic model.

The barotropic model was the first kind of numerical weather model ever successfully implemented--it was based on work by Charney and his collaborators in the late 1940s.  You can see their original paper describing the work at this link.  So what goes into a barotropic model?

First, what does barotropic even mean?  Meteorologists use that word to describe an environment where density is a function of pressure alone.  This means that if we were to look at, say, the 500mb surface in a barotropic environment, the temperature (or density) on that 500mb surface would be everywhere exactly the same.  In other words, there are no temperature gradients on constant pressure surfaces.  Because of that, we don't have to worry about the effects of cooling or heating or temperature advection.  Immediately you'll notice that we run into problems--the entire atmospheric circulation is driven by heating from the sun, and yet we're ignoring heating?  How are we going to make things happen in this model?

Well, in some ways, we don't make things happen in this model--there are no features that add energy to the atmosphere in this model, or in other words there are no forcings.  In the barotropic model, since there are no external forcings, all we're really doing is taking the current state of the winds in the atmosphere and letting them blow until they blow themselves out.  The only thing we keep around is the fact that the earth is turning, so admittedly there is a Coriolis "force" present.  But there are no mountains causing lift, no ocean/land differences, and no solar heating.
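In equation form, everything the model does boils down to a single statement--absolute vorticity (the air's own spin plus the spin it picks up from the rotating earth) is conserved following the flow:

```latex
\frac{D}{Dt}\left(\zeta + f\right) = 0
\qquad\Longleftrightarrow\qquad
\frac{\partial \zeta}{\partial t} = -\vec{v} \cdot \nabla\left(\zeta + f\right)
```

Here ζ is the relative vorticity, f is the Coriolis parameter, and v is the wind.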

"Well this is silly," you might think.  "How can we make any kind of decent weather forecast without having any impact from the sun or the land or anything?"  It turns out that, on the large scale, atmospheric motions in the relatively short term (the first few days or so) are pretty well dominated by this barotropic motion--the simple continuation and evolution of the flow without external forcing.  This has been known since the work of Carl Rossby in the 1930s.  I'm going to show examples of how well this actually works here today.

Based on a description of a modern-day implementation of the barotropic model from Isaac Held and collaborators at Princeton, I coded up a "simple" global barotropic model in Python.  You can see animations of the output from the model on my webpage:

http://www.atmos.washington.edu/~lmadaus/research/barotropic

Though I warn you it's not updated that regularly, as my initialization data comes in several days late.  Still, it's there...and kind of fun to watch.

All this model does is predict the future wind, vorticity and height anomalies of the 500mb pressure surface.  That's it.  Here's how the barotropic model basically works:


  • We start with a global field of vorticity--that is, a measure of how much rotation there is in the atmosphere.  We're talking rotation on large scales--not small scale tornado-style rotation.  Remember in our barotropic assumptions above that we assumed that density or temperature was the exact same everywhere on a constant pressure surface.  If density is everywhere the same, we can't have any place where the winds are pushing air together to increase the density.  This means that we make an assumption of incompressibility which leads to an assumption of non-divergence--the winds everywhere can never be divergent or convergent.  If we make this assumption and we know the degree of rotation in the air (the vorticity), we can back out a wind field.  Here's an example of the global vorticity and non-divergent wind field I use to initialize my model:


Focus on the northern hemisphere in the image above.  Blue areas are areas with positive absolute vorticity--wind around them blows counterclockwise--think troughs or low centers.  Red areas are areas with negative absolute vorticity--wind around them blows clockwise--think ridges or high centers.  By going through and looking at how strong the vorticity gradients are, we can figure out how strong the winds should be, and we know what directions they're going based on the shape and orientation of these vorticity areas.  It's actually pretty simple to figure out if you know what you're doing.
  • Now that we have our wind field and vorticity field at the current time, we can use the wind field to advect the vorticity field.  That is, we know what the winds are right now, so we use the winds right now to push the vorticity values around for a certain amount of time.  In my model, I have a 15 minute time step, so I assume the winds are constant and use them to push along those blobs of vorticity for 15 minutes.
  • After doing this, I have a new vorticity field that has been advected around some.  However, I still have the old wind field.  Here, the model stops, takes that new vorticity field and computes the new wind field that corresponds with that new vorticity field, just like we did in the very first step. Now I have a new wind field that matches the new vorticity field.
  • From here, the model just repeats those same steps over--it takes these new winds and uses them to push the new vorticity around again for another 15 minutes.  Then we stop, recompute the wind field from the next vorticity field, then go on.  We can keep going for as long as we want...
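The whole loop above can be sketched in a few dozen lines of Python.  To be clear, this is just a toy version on a doubly periodic flat domain using FFTs (the real model works in spherical harmonics on the globe), so treat it as an illustration of the invert-then-advect cycle rather than my actual code:

```python
import numpy as np

def barotropic_step(zeta, dt, L=1.0e6):
    """One forward step of the barotropic vorticity equation on a
    doubly periodic square domain of side L (a toy sketch, not the
    spherical model described in the post)."""
    n = zeta.shape[0]
    k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)     # angular wavenumbers
    kx, ky = np.meshgrid(k, k, indexing="ij")
    k2 = kx**2 + ky**2
    k2[0, 0] = 1.0                                 # avoid divide-by-zero for the mean mode

    zhat = np.fft.fft2(zeta)
    psihat = -zhat / k2                            # invert del^2(psi) = zeta
    psihat[0, 0] = 0.0

    # Nondivergent winds from the streamfunction: u = -dpsi/dy, v = dpsi/dx
    u = np.real(np.fft.ifft2(-1j * ky * psihat))
    v = np.real(np.fft.ifft2( 1j * kx * psihat))

    # Vorticity gradients, then advect the vorticity with the current winds
    dzdx = np.real(np.fft.ifft2(1j * kx * zhat))
    dzdy = np.real(np.fft.ifft2(1j * ky * zhat))
    return zeta - dt * (u * dzdx + v * dzdy)

# Repeating the cycle is just repeated calls, e.g. with a 15-minute step:
# for step in range(n_steps):
#     zeta = barotropic_step(zeta, dt=900.0)
```

Each call does exactly the three steps above: invert the vorticity for the winds, use those winds to push the vorticity around for one time step, and hand the new vorticity back for the next pass.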
So you see, all the barotropic model does is use the winds to basically push vorticity around, which gives new winds that push the vorticity around some more, and so on.  Once again, to reiterate, this model has:
  • No temperature or density gradients
  • No fronts
  • No mountains or terrain
  • No oceans
  • No heat sources from below
  • No heat from the sun
  • No divergence or convergence
  • No rising or sinking motion
So how well does it do?

Well, here are some comparisons.  One thing we can also back out from the wind and vorticity fields is a field called the "streamfunction", which, for all we need to talk about here, is like a height anomaly map.  It highlights where there would be troughs and ridges on the 500mb map, were I actually forecasting the 500mb height.
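For reference, that streamfunction (call it ψ) is tied to the vorticity and the nondivergent winds by the relations below; the last one (geostrophic balance, with g gravity, f₀ a reference Coriolis parameter, and z' the height anomaly) is why it looks so much like a height anomaly map:

```latex
\nabla^2 \psi = \zeta, \qquad
u = -\frac{\partial \psi}{\partial y}, \qquad
v = \frac{\partial \psi}{\partial x}, \qquad
\psi \approx \frac{g}{f_0}\, z'
```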

On the left below is the 18 hour forecast of 500mb winds and that streamfunction/height anomaly field (the black solid and dashed contours).  On the right is the actual 500mb map from that time.
As always, you can click on the image to make it pop up bigger.  I've highlighted the major trough axes on both of the images just for reference.  You can see that, at least in terms of the placement of ridges and troughs, it's actually doing very well in the 18 hour forecast.  It has a deep trough over the western Gulf of Alaska, and the tightly-packed height contours on the actual 500mb map do imply that there are strong winds through the base of that trough, just like the barotropic model is predicting.  The model also does a good job forecasting the elongated trough over the western US and the compact upper-level low over northern Quebec.  It's a surprisingly good forecast for a model that has no real external forcing.

Let's go a bit further out into the forecast.  Here's a comparison of the 44 hour forecasts:
In many ways the forecast is still very good.  The Gulf of Alaska trough is in about the right place with strong winds right where they should be.  That little upper-level low over northern Quebec is also still being handled rather well.  However, we can start to see differences creeping in.  Notice that the elongated trough over the western US doesn't look quite right in the model.  The barotropic model has kept the trough much more compact and has it centered over the western plains whereas, in reality, the trough still extended quite far back off to the southwest.  Remember that in real life the Rocky Mountains stretch across the western US, but the barotropic model has no concept of mountains or topography.  So, any impact the high terrain would have on the flow is totally beyond anything the barotropic model would predict...

Finally, the 66 hour forecast comparison:

Now, at almost three days out, things are beginning to fall apart.  The barotropic model still has that northern Quebec low in about the right position with a wind maximum in the right place, so at least that's good.  But now that Gulf of Alaska trough isn't in the right spot--the barotropic model has moved the trough too fast to the east, placing the trough axis over the coast whereas in reality the trough stayed cut off and sort of hung back in the central Gulf of Alaska.  That western US trough is almost totally absent from the barotropic model--it kept its more compact, lower-amplitude trough and moved it slightly east, centering it over the Mississippi valley.  In reality, the trough stayed elongated to the west and even formed a cutoff low off the coast of California--something not at all present in the barotropic model.

So we can see that, at least for the first day or two, the barotropic model with no real forcing actually does a fairly good job at predicting the major upper air pattern--not that bad for a very simplistic model. However, since there is no vertical motion, no divergence, no heating, etc., the barotropic model really cannot capture the formation and development of new troughs very well.  That cut-off low that formed by 66 hours is a prime example--the forcings that led to that cut-off low being formed were just not present in the barotropic model, so it missed it.

So we'll stick with our far more complicated global models that actually include all those things we ignored for now.  They may be a lot more expensive to run, but they're a bit more reliable beyond 48 hours.  As an interesting comparison, looking back at the history of those first barotropic numerical weather models, the first 24-hour barotropic forecasts, run on the ENIAC computer in 1950, took about 24 hours to complete--not much of a forecast if you don't get it until right when it's happening!  By comparison, my barotropic model integrates out to 120 hours in about 20 seconds on my desktop.  Amazing how far computers have come...

Tuesday, May 1, 2012

Using Doppler, dual-pol radar to interrogate storms

Last night there were some pretty impressive-looking storms that moved across northern Oklahoma, dropping a few tornadoes along the way.  I pulled a few screenshots of radar images from these storms to look at a lot of the helpful and hazardous things that our radar system gives us.

First, one of the primary advantages of having a Doppler radar--that is, a radar that uses the Doppler effect to measure wind velocity--is that it lets us more accurately see areas of rotation within thunderstorms.  Before we had Doppler radars (and we just had regular radars), all we had was reflectivity to look at.  Sometimes we got lucky and tornadic storms would produce a well-defined "hook echo" that let us know where a tornado was likely to be.  Here's an image (borrowed from Cliff Mass's blog, where he borrowed it from someone else) of an old-style, 1960s era radar image:

This works out great if you have a storm that has a well-defined hook echo structure.  But we don't always see clear hook echoes in tornadic storms.  Take a reflectivity image from last night, for example.


There are tornado warnings out for these storms and storm spotters reported a tornado on the ground with at least one of them at this time.  But where is the tornado?  There are a few candidate locations that look a little like hook echoes, but nothing too clear.

This is where being able to see Doppler wind velocities is really helpful.  Here's the base velocity image from this time:
I've circled the areas where there are nice velocity couplets that clearly show locations of strong circulation where there could potentially be tornadoes.  If you want to know a bit more about how to look for rotation in these radar velocity images, I wrote a blog post some time back about it that can be found here.

So getting Doppler radars was a huge boon to our ability to forecast and warn for tornadoes.  In fact, a lot of research suggests that adding Doppler capabilities to our radars is the single most important advance we made to increase our warning lead times and probability of detection of tornadoes.

But now we've added even more capabilities to our radars--this whole dual-polarization business.  What can dual-pol show us about these thunderstorms?

If you go through the training material for dual-pol, there are a lot of nice, relatively clean examples that are given that show you how to differentiate between things like rain, hail and even tornadic debris using dual-pol products.  However, in my experience, real-world interpretation of these products is often not nearly so clear-cut.  There are some things that dual-pol nails every time and other things that it doesn't.  Let's start looking at the dual-pol images from the time I showed above.

We'll start with differential reflectivity.  In dual-pol parlance, this is simply the difference between the strength of the horizontally-oriented radar return and the vertically-oriented radar return (eventually I'll write a more detailed blog post about all of this).  This means that the bigger and fatter objects (like, big raindrops which tend to flatten out as they fall) will have higher ZDR (differential reflectivity) values.  Nearly spherical objects or objects that are tumbling as they fall (like hail) will tend to have much lower, near zero ZDR values.  If you have a lot of different objects in an area (like, for instance, if there is tornadic debris being thrown around) we'd expect to see very noisy ZDR values.  Here's my annotations on the ZDR radar image for the time above:
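(In equation form, ZDR is just the logarithmic ratio of the horizontally- and vertically-polarized returned power, so equal returns give 0 dB:)

```latex
Z_{DR} = 10 \log_{10}\!\left(\frac{Z_H}{Z_V}\right)\ \text{dB}
```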

There's a lot going on in that image.  Let's start with my top comment.  If we compare this to the reflectivity image above, you'll notice that in an area where there's a lot of high reflectivity to the north of the potential tornado, we see low, zero, or nonexistent ZDR.  This can happen if the radar beam is traveling through so much material (so much rain or hail) that too much of the beam gets absorbed to make reasonable inferences about what we're actually getting back from the radar beam.  There's just too little left to work with.  This is a phenomenon called attenuation--the absorption of the radar beam by some material that decreases the radar beam's power and resulting sensitivity.  I'll get back to that again later.  For now, it's somewhat disappointing that this northern part of the storm seems to be missing a lot of signal.

Just south of that, closer to the core of the storm, we have a patch of higher ZDR--on the order of 6-7 dB.  We know that higher ZDR values tend to correspond to big, flat objects like big raindrops.  So we could probably infer that there was a very heavy downpour going on in the middle of the storm.

But what about the area right near the potential tornado?  The ZDR values are somewhat weaker there, though still slightly positive.  We know that near-zero ZDR values are typical of hail, so maybe there is hail there?  They also look a little noisy, and the base velocity image suggests there could be a tornado there.  Perhaps this is a debris signature?  It's really difficult to tell.

Let's try looking at another dual-pol image for help--the correlation coefficient.  Correlation coefficient basically tells us whether everything the radar beam is bouncing off of in that area of space is the same kind of thing.  If we have high correlation coefficient, we're probably looking at all rain or all hail.  Lower correlation coefficients mean more of a mix of different precipitation types.  Really low correlation coefficients mean that what the radar beam is bouncing off of is probably not normal precipitation and is instead something like debris, bugs, dust...

Here's the correlation coefficient image from the same time.

Let's start with that area to the north of the core of the storm again, where we saw the low to nonexistent ZDR before.  We see here that that area is full of very noisy, lower correlation coefficients.  This could mean that we're seeing a lot of objects of different sizes and shapes floating around in that area, but another way to get low, noisy correlation coefficients is for there to be a very weak radar signal.  This is consistent with my earlier statement that the radar beam is probably being somewhat attenuated in that area.

Looking back down toward the area with the potential tornado in it, we see that the correlation coefficient is still quite high in that area.  This isn't very consistent with debris, as tornadic debris is all different shapes and sizes and should have a lower correlation coefficient.  So, high reflectivity with lower ZDR and high correlation coefficient makes me lean more toward hail being seen in this area.  But, it's very difficult to tell.  Even though we get so much more information from these dual-pol products, it still can be very difficult to interpret.
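If you wanted to boil the kind of reasoning I just walked through down to code, it might look something like the little sketch below.  To be clear, these threshold numbers are made up for illustration--they are not operational values, and real interpretation depends on range, beam height, and plenty of other context:

```python
def rough_echo_guess(refl_dbz, zdr_db, cc):
    """Toy decision rules mimicking the reasoning in this post.

    refl_dbz: reflectivity (dBZ), zdr_db: differential reflectivity (dB),
    cc: correlation coefficient (0-1).  Thresholds are illustrative only.
    """
    if cc < 0.80:
        # Wildly mixed shapes and sizes: debris, insects, dust,
        # or simply a very weak (attenuated) signal
        return "non-meteorological (debris, insects, dust?)"
    if refl_dbz > 50 and abs(zdr_db) < 1.0:
        # Strong return but near-zero ZDR: tumbling, roundish targets
        return "possible hail (strong return, near-zero ZDR)"
    if zdr_db > 4.0:
        # Big, flattened raindrops
        return "big raindrops / heavy rain"
    return "ordinary precipitation"
```

High reflectivity with near-zero ZDR and high correlation coefficient lands in the hail bucket--which is exactly the call I leaned toward above.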

Let's look at a slightly later time when the storms had merged more together.  Our old friend reflectivity can show us a lot about the storm structure:

One of the striking features of this reflectivity image is the clear outflow boundary seen on reflectivity.  This is a narrow band of slightly higher returns that's making an arc shape out in front of the main storms.  An outflow boundary is the leading edge of the cool, downdraft air that accompanied the rain falling out of the thunderstorms.  As this cool air falls to the ground with the rain, it spreads out and races along the ground out away from the storm.  Those higher returns along the leading edge of the boundary are not actually caused by rain--they're caused by dust, dirt, insects and birds that are caught along the leading edge of the advancing air.  Want proof of this?  We only have to turn to the correlation coefficient image from the same time:
Remember that hydrometeors like rain and hail tend to be rather highly correlated--the targets are similar in size and shape everywhere.  Even in mixed precipitation the correlation only drops a small amount.  But other things that are not so evenly shaped--things like dust, birds, insects--those show up as very low correlations.  You can see in the correlation coefficient image above that the outflow boundary is completely missing--the correlations of those returns are so low that they're actually off the color chart.  In fact, a lot of the clutter off to the west of the storms also basically disappears in correlation coefficient.  This highlights one of the best uses of correlation coefficient--differentiating between what's actually precipitation and what's not.

Here's a look at the velocity image from this time.  Note the strong velocity couplets where there are potential tornadoes.  I've drawn in arrows to help show where the radar says the air is moving--remember green is toward the radar and red is away from the radar.  You can also see how the air behind the outflow boundary is moving to the south, out away from the storms.  There's also strong convergence right along the outflow boundary.

One final thing to look at here.  Sometimes when a radar gets hit with a particularly heavy downpour, the dome around the radar gets coated with a thin layer of water.  As the radar beam tries to pass through the radar dome, some of it gets absorbed by that layer of water, creating more attenuation.  Since the radar beam suddenly loses power as soon as it crosses through the layer of rain, it causes the beam's sensitivity to drop--we have less power going out, so there's even less power that can be reflected back.  This can make radar echoes suddenly appear weaker as a heavy downpour moves over the radar.  This happened last night.  Here's a reflectivity image from right before a downpour hit the radar:

Now watch what happens to the radar returns in the circled area in the next radar scan when there is a downpour right over the radar:

The strength of the return suddenly drops off!  This actually happens quite frequently, and we often call it the "wet radome effect".  Something to watch for if you suddenly see the radar echoes all weaken dramatically.