Sunday, October 16, 2016

A warm, mixed-out day in Colorado

As I mentioned in my last blog, I've recently moved to Boulder, Colorado.  While I'll continue to post about interesting weather tidbits that show up around the world (like last Friday/Saturday's storm in Seattle, which was actually rather well-forecast, even though the winds didn't reach the worst-case scenarios being advertised; see Cliff's blog and NWS Portland's discussion for more), I'll probably give my posts a more Colorado-centric tilt.  This presents a great opportunity to explore some weather topics I haven't covered in the past, given Colorado's unique geography and associated weather variability.

I also want to get back to explaining some of the reasoning behind the weather as I understand it.  Not only does this help me really think about why the weather does what it does, but hopefully it gives you a glimpse at how I understand the weather to work.  This might be different from your perceptions, or it may be something you never thought about before, or something you understand with absolute clarity and can point out where I've messed up.  I'm trying to keep this at an in-between level, where enthusiastic meteorological novices may understand some of it, but all my professional meteorology friends may still find something interesting in reading it.  Hopefully this will be useful.  Let's see how it goes...

After a very cool morning, we ended up with a very warm day in Boulder: the normal high this time of year is 65 Fahrenheit, but my weather station shows we got up to almost 82 degrees today.  Extraordinary warmth with a lot of gusty winds (something my poorly-sited weather station does not capture well...).  What was most interesting (to me) was how the temperatures evolved throughout the day.  Here's the time series of what the temperatures looked like today at my weather station:

Sunrise was around 7:13 AM this morning, and you can see that the temperatures started rapidly rising after that time.  In fact, it rose from near 43 degrees up to 79 in about 3 hours...a 36 degree temperature change!  But then a curious thing happened: the temperature stopped rising so rapidly, despite it only being mid-morning.  In fact, over the entire rest of the morning and afternoon, it only got three degrees warmer.  This, despite the fact that we had nearly full sunshine all day.  Why didn't it keep warming up?

To answer that, we have to look at the profile of temperature in the atmosphere above.  Here's this morning's 12Z sounding from Boulder/Denver (from the University of Wyoming site):
We can clearly see that there was a very sharp inversion near the surface: a layer where the atmospheric temperature (right line) increases rapidly with height.  This means that the cooling we had overnight was confined to very close to the surface, and very warm air was only a short distance above the ground.

This explains why the temperature rose so quickly as soon as the sun came out.  As the sun starts heating the land, it causes mixing to occur---turbulent motions in the air near the surface.  These turbulent motions move air up and down.  In this case, not only was the warming land surface heating the air from below, but this mixing motion was also bringing down much warmer air from just above the surface.  No wonder the temperature quickly warmed up!

But why did it stop warming?  As the temperature near the surface continues to warm, the atmosphere becomes less and less stable.  Think of the stability of the air as the resistance of the air to vertical motion.  As we heat up the air near the surface, it becomes more likely that that air will be warmer than its surroundings and want to rise.  That rising air carries away heat with it; this is called convection.

As the surface continues to heat up, it heats the air next to the surface and that air is able to rise farther and farther up into the atmosphere---the stability decreases.  That rising air is also carrying away that heat with it.  As the convecting air currents get deeper and deeper, they're soon able to convect heat away from the surface as effectively as the surface is warming the air above it.  This makes the temperature at the surface stop rising.

It turns out we have a special name for an atmospheric temperature profile where this can happen: an adiabatic lapse rate.  Remember a lapse rate is the rate at which temperature changes with height.  This adiabatic lapse rate is a very specific change in the atmospheric temperature with height.  Let's jump ahead and look at this evening's sounding from Boulder/Denver:

Adiabatic lapse rates are so important to a lot of meteorological thermodynamics that we actually print them on most of our weather balloon sounding diagrams for reference.  There are two kinds of adiabatic lapse rates: dry and moist.  Here the atmosphere is quite dry (the dewpoint line (left line) is well-removed from the temperature line) so we are most concerned with the dry adiabatic lapse rates.  These are all of the green lines on the sounding diagram above; I highlighted two of them for reference.  If the atmospheric temperature profile is dry adiabatic, then it would be parallel to these green lines.  You can see that this is basically the case over a very deep layer of the lower atmosphere: from the surface (at about 825 hPa) all the way up to 500 hPa.  This dry adiabatic lapse rate tells us that the atmosphere has mixed out.
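One way to make the "parallel to the green lines" test concrete: along a dry adiabat, the potential temperature (the temperature a parcel would have if brought dry-adiabatically to 1000 hPa) is constant with height.  Here's a minimal Python sketch of that check, using made-up sounding values rather than the actual Boulder data:

```python
# Sketch: a layer is close to dry adiabatic if its potential temperature is
# nearly constant with height.  The levels below are illustrative, not the
# real Boulder sounding.

R_OVER_CP = 287.0 / 1004.0  # Rd / cp for dry air

def potential_temperature(t_k, p_hpa, p0_hpa=1000.0):
    """Poisson's equation: the temperature a parcel would have if brought
    dry-adiabatically to the reference pressure p0."""
    return t_k * (p0_hpa / p_hpa) ** R_OVER_CP

# Hypothetical (pressure in hPa, temperature in K) pairs for a mixed afternoon profile
levels = [(825.0, 296.0), (700.0, 283.0), (600.0, 271.5), (500.0, 258.0)]

thetas = [potential_temperature(t, p) for p, t in levels]
spread = max(thetas) - min(thetas)

# A spread of only a kelvin or two across a 300+ hPa layer means the layer
# is close to dry adiabatic, i.e. "mixed out".
print([round(th, 1) for th in thetas], round(spread, 1))
```

If you apply the same check to the morning sounding, the strong surface inversion shows up as a sharp jump in potential temperature right near the ground.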

Here's another way to think about it: the depth over which the atmosphere is dry adiabatic is the depth over which those vertical motions (the air rising and sinking) can easily occur.  This means that any time the air near the surface tries to warm up in this environment, these vertical motions can convect that heat away all the way up through 500 hPa, which is some 4.4 km above the surface.  That makes it very difficult for the surface to keep warming up, since essentially for that to occur, you need the entire dry adiabatic part of the atmosphere to warm up too.  It's one thing to add enough energy to just heat up a shallow layer 100 m deep or so.  It's another whole ball game to add enough energy to heat up the bottom 4.4 km of the atmosphere.  The sun just couldn't keep up! So as soon as that shallow inversion had been heated through this morning, our lapse rate became dry adiabatic (and later, even a little superadiabatic!) down to the surface.  And just like that our warming slowed way, way down since we had to warm not just 100 m, but a full 4.4 km of air.
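The depth figure for that mixed layer can be sanity-checked with the hypsometric equation.  This is a rough sketch where the 280 K layer-mean temperature is my own ballpark assumption, so it lands near 4 km rather than reproducing the exact number:

```python
import math

# Rough hypsometric estimate of the depth of the 825 -> 500 hPa mixed layer.
# The 280 K layer-mean (virtual) temperature is an illustrative guess.
Rd = 287.0   # gas constant for dry air, J kg^-1 K^-1
g = 9.81     # gravitational acceleration, m s^-2

def thickness_m(p_bottom_hpa, p_top_hpa, mean_temp_k):
    """Hypsometric equation: geometric thickness of the layer between two pressures."""
    return (Rd * mean_temp_k / g) * math.log(p_bottom_hpa / p_top_hpa)

depth = thickness_m(825.0, 500.0, 280.0)
print(round(depth / 1000.0, 1), "km")  # roughly 4 km deep
```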

Another thing to mention is that with all of this vertical air motion going on (both up AND down), it makes for some very gusty winds.  As soon as that dry adiabatic lapse rate started becoming established around 10 AM, winds throughout the day ramped up and were gusting to 25-30 mph at the NCAR Foothills lab.
 There's another effect that can also occur called momentum transport, where the momentum from stronger winds aloft (notice on the sounding above there are 50 knot winds at 500 hPa) can be "dragged down" by these vertical motions to the surface.  All in all it made for a very gusty day.

The NWS forecast for tomorrow has a red flag warning for fire weather danger, but temperatures are a bit cooler:
The gusty winds are a hint that we're probably going to end up mixing-out again in the afternoon.  But why the cooler temperatures?  Let's check the 700 hPa temperatures now and the forecast for tomorrow afternoon from the GFS (as plotted by Pivotal Weather).  Here's this evening's 700 hPa analysis:
The temperatures at 700 hPa are about 15 Celsius over Boulder this evening.  Here's the forecast for tomorrow afternoon (18Z):
You'll notice that a pool of colder temperatures to the west (associated with a very broad, approaching upper-level trough) is sliding eastward.  The 700 hPa temperatures are forecast to be closer to 10 Celsius tomorrow afternoon.  If the air gets colder aloft, then the surface temperatures won't have to warm up as much before the lapse rate hits that dry adiabatic threshold and the near-surface warming slows way down.  Thus, our high temperature forecast for tomorrow is lower than for today, even with the same basic process at work.
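Here's a back-of-envelope version of that reasoning in Python: once the boundary layer is mixed out down from 700 hPa, the surface temperature is roughly the 700 hPa temperature plus dry adiabatic warming over the depth in between.  The 1.4 km depth (700 hPa height minus Boulder's elevation) is my own rough assumption, not a number from the forecast:

```python
# Back-of-envelope "mixed-out high": surface temperature once the layer below
# 700 hPa is dry adiabatic.  The 1.4 km depth is an assumed value for Boulder.

DRY_ADIABATIC = 9.8  # K of warming per km of dry-adiabatic descent

def mixed_out_high_f(t700_c, depth_km=1.4):
    """Estimate the mixed-out surface high (deg F) from the 700 hPa temperature (deg C)."""
    t_sfc_c = t700_c + DRY_ADIABATIC * depth_km
    return t_sfc_c * 9.0 / 5.0 + 32.0

print(round(mixed_out_high_f(15.0)))  # today's ~15 C at 700 hPa
print(round(mixed_out_high_f(10.0)))  # tomorrow's forecast ~10 C at 700 hPa
```

With ~15 C aloft this lands in the low-to-mid 80s F, close to what we observed today, and the 5 C cooling aloft knocks about 9 F off the estimate for tomorrow.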

Another thing you'll notice is that there's a sharp gradient in those 700 hPa temperatures.  You might even think of this as a "front" in the air aloft.  Strong temperature gradients at lower levels promote strong winds above (through something called the thermal wind).  We can check the wind forecasts at a higher altitude, say 500 hPa:

Sure enough, there is a jet at 500 hPa right over Colorado.  We now have everything we need to explain the forecast for tomorrow:

  1. Colder air moving in aloft means we will mix out sooner tomorrow; this will limit how warm we can get and make the high temperature somewhat cooler than today.
  2. The temperature gradient along the leading edge of that cooler air is associated with a jet streak of strong winds moving in as well.  Our mixed-out atmosphere will likely mix down some of that momentum to the surface, making the winds likely even gustier than they were today.
So there's a bit of fun atmospheric thermodynamic diagnosis going into tomorrow's forecast.  Hope you enjoyed it.  I know my bike ride back from work is going to be a struggle against the wind...

Thursday, October 13, 2016

The scary, potentially historic set of windstorms expected in Seattle over the next few days

I've been quiet on the blogging front for the past several months as I've been transitioning out of graduate school life and into the postdoctoral world.  I've moved from the University of Washington out to NCAR in Boulder, Colorado, which will give me an entirely new wealth of weather phenomena to talk about.  But, I could not resist putting up a blog about the forecasted major wind events expected to impact the Pacific Northwest this weekend...

This looks like it's going to be a significant series of events throughout the Pacific Northwest.  We can start by reviewing the large-scale synoptic pattern.  Here's this morning's GFS-WRF model analysis at 500 hPa over the region from my OLYMPEX model page:
We can see the multi-lobed threat heading towards the Pacific Northwest.  There is broad-scale troughing across the entire northern Pacific.  Embedded in that longwave trough are what we call "shortwave" troughs--little wiggles along the edges of the main trough.  These wiggles may be little, but they represent enough instability aloft to generate some powerful storms.  Note that one lobe has already moved up into northern Vancouver Island and the Canadian coast.  That was associated with the rain and front from last night in the area.  The next shortwave is still out to sea, but rapidly approaching the coast.  By late this evening, it will have deepened considerably off the coast:
This is going to be followed by another shortwave that is so rapidly-developing that you can't even clearly see it yet in the above image.  By late Friday evening, it's starting to show up---you can see a little ripple in the central Pacific associated with a little maximum in the colored vorticity field:
This trough is associated with the remnants of Typhoon Songda, as Cliff Mass discusses on his blog.  This rapidly deepens throughout the night and into Saturday morning, reaching the coast by Saturday evening:
At the surface, these deepening troughs lead to strong cyclones with powerful winds (and a lot of rain too!).  The local WRF model currently (as of the latest 12Z run, but this could change!!!) brings 25-30 knot winds to Seattle with the Friday storm:
But over 50 knot winds to Seattle with the Saturday storm:
This morning's WRF run, as seen above, takes the Saturday storm on an almost perfect track to produce a major windstorm in Seattle.  This is a scary possibility, and it's concerning that this is what the latest model run is showing.  But should we believe just one model?

In modern weather forecasting, we like to look at our forecasts in terms of probabilities, as there remain great uncertainties in our forecasts.  We often do this by considering not just one model forecast like I showed above, but several, often in the context of what we call "ensemble" forecasts.  One ensemble used is the Global Ensemble Forecast System (GEFS), run by the US National Weather Service.  We can look at these ensemble forecasts and compare them to previous ensemble forecasts to see just how unusual or significant a particular forecast may be.  NOAA has an (experimental?) product called the "Ensemble Situational Awareness" table.  You can go through time and different forecast variables over a particular region and at a glance see if there is anything significant going on in the forecast.  Here's the table from last night's GEFS run over the Pacific Northwest:

Each row is a different forecast time and each column is a different forecast variable (e.g., SLP is sea-level pressure, WSP is wind speed). See all those "MAX" and "MIN" values?  That means that somewhere in the Pacific Northwest the forecast at that time, for that variable, is greater than (or less than) any ensemble member's forecast ever made for that location.  This is a long period of extremes coming up for the Pacific Northwest and hints at the potentially historic nature of this storm.

Let's dive more into the ensemble forecasts.  Here's an image from Brian Colle's extratropical cyclone tracking page showing the forecasted positions of the Saturday low pressure center from all the members in two ensemble systems (SREF and GEFS).
Each string of black dots connected by grey lines shows a single ensemble member's forecast path of the low pressure center (with positions every 6 hours) from one of those two systems.  The red dots are where the ensemble members have the center of the low at 0000 UTC 16 October 2016 (Saturday evening, local time).  The colored swath represents the probability of there being a low pressure center in that location (remember, model gridpoints are somewhat coarse...half a degree of latitude/longitude for the GEFS, for example) within 24 hours of that time (I apologize for the lack of a color bar here...the dark greens are about 60% probability).
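For the curious, a track-probability swath like that can be built in a very simple way: at each map point, count the fraction of ensemble members whose low center passes within some radius.  Here's a toy Python sketch; the member positions and the 200 km radius are made up for illustration:

```python
import math

# Toy version of an ensemble track-probability calculation: the probability at
# a point is the fraction of members whose low center comes within some radius.

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in km between two lat/lon points."""
    r = 6371.0  # Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlam / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# Invented low-center positions from five hypothetical ensemble members
members = [(49.5, -126.0), (50.1, -127.2), (48.2, -124.8), (49.9, -126.5), (47.8, -124.0)]
point = (48.4, -124.7)  # a point near the Strait of Juan de Fuca

within = sum(1 for lat, lon in members if haversine_km(lat, lon, *point) <= 200.0)
print(f"{100.0 * within / len(members):.0f}% of members within 200 km")
```

A real product smooths this over a grid and over a time window, but the underlying counting idea is the same.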

You can see that the main swath of probabilities goes into central to northern Vancouver Island; this seems to be the most likely track if we consider all our model forecasts together.  Such a path would really bring strong winds to the Strait of Juan de Fuca and surrounding areas of Vancouver Island.  However, it would not be as bad over Seattle as the WRF run we saw above is suggesting.  But even in the ensembles there is still a possibility it could cross the northern Olympic Peninsula...there are still respectable probabilities (20-30% in the blues) that the low could move over the northwestern Olympic Peninsula and into southern Vancouver Island.  There is also still a lot of uncertainty in the timing of the low; it's not even well-analyzed yet over the Pacific.

As noted by Cliff Mass in his blog, it's the path with the low coming closer to the Washington coast and across the northern Olympic Peninsula that tends to be the "worst case" for the Puget Sound region, as it sets up a strong north-south pressure gradient in the channel between the Olympic Mountains to the west and the Cascades to the east.  This leads to the strongest wind events in the central Puget Sound region.

Speaking of strongest wind events, I was curious to see what the CIPS Analog system was saying for this storm.  This is a somewhat different method of forecasting from the raw numerical models we typically digest.  In analog forecasting, we accept the idea that numerical weather models are often wrong, but assume that, given similar weather situations, they are wrong in the same way every time.  So how does that help us?

To make an analog forecast, we take a model (here, the GFS) and look at its forecast for a certain time (I'm showing below the 72 hour forecast from last evening's 0000 UTC run, so this is the forecast valid on Saturday evening local time).  We then go back in the records of all of the GFS forecasts (or reforecasts when the system is updated) ever made over the years and find all of the 72 hour forecasts that look most similar to the current one.  We then look at what actually happened during those events and use that to make a guess as to what will actually happen this time.  Remember, the key to analog forecasting is the assumption that, given similar weather scenarios, the models will be wrong in the same ways.
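The core of the analog search is just a pattern-matching step: score every archived forecast map against the current one and keep the closest matches.  Here's a toy Python sketch of that step, with tiny invented grids standing in for the real sea-level pressure fields (the archive dates are illustrative, not CIPS output):

```python
# Toy analog selection: rank archived forecast maps by their root-mean-square
# difference from today's forecast map.  The "maps" are tiny invented grids of
# SLP anomalies (hPa), flattened into lists.

def rms_diff(a, b):
    """Root-mean-square difference between two equal-length flattened fields."""
    return (sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)) ** 0.5

current = [-12.0, -8.0, -2.0, 3.0]  # today's forecast pattern

archive = {
    "1981-11-14": [-11.0, -9.0, -1.0, 2.0],   # deep offshore low: close match
    "1995-12-12": [-3.0, 0.0, 4.0, 6.0],      # weak pattern: poor match
    "2006-12-14": [-10.0, -7.5, -2.5, 3.5],   # another deep low: close match
}

ranked = sorted(archive, key=lambda d: rms_diff(archive[d], current))
print(ranked)  # best analogs first
```

The real system does this over full hemispheric-scale grids and several variables at once, but ranking by a distance metric is the essential move; what verified on the best-matching dates then informs the forecast.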

Anyhow, as part of this analog forecasting process, we can look at what historical events the system thought were "most similar" to the current one being forecast.  Remember, this is limited to the time periods when we actually had GFS forecasts, so nothing from before the 1980s is included.  Here are the top 15 analogs, showing the sea-level pressure forecasts.

You can see that most of them have some sort of low off the northwest coast...a good sign that the analog is working.  I tried cross-referencing these dates with the list of significant Pacific Northwest windstorms maintained by Wolf Read.  I was actually surprised to see that most of these dates did not correspond to particularly noteworthy windstorms (at least, among those he has documented).

Interestingly, the one big match I found was the "Two windstorms in three days: November 13-15, 1981" event, which sounds very similar to the current threat of dual windstorms in only a few days.  Here, from Wolf Read's page, is the track of the first (stronger) storm in that November 1981 event:

You'll note that it made landfall over central Vancouver Island, similar to our current most-likely forecast swath.  However, this Nov 1981 storm had a much more south-north oriented track, which elongated the area of the coast that was affected by this storm and prolonged its effects.  The current forecasted path for the Saturday storm is a little more west-east, which should decrease the amount of time exposed to high winds.  The Nov 1981 storms did cause 12 deaths and "tens of millions" in damage, according to Wolf Read's summary.  Also of note, the 520 floating bridge experienced some $300,000 of damage (in 1981 dollars) after taking waves driven by 75 mph winds on Lake Washington.  This goes to show that even if lows don't exactly take the "classic" path for severe windstorms in central Puget Sound, Seattle can still see damaging events.

As the storms approach, our model solutions should converge on more likely forecast tracks with better estimates of the potential for wind damage.  People from Vancouver Island down through Portland need to be on the lookout and make preparations for this storm.  Be sure to frequently check your local National Weather Service office in Seattle or Portland, or Environment Canada, for the latest warnings and advisories.  Stay safe!

Wednesday, January 6, 2016

Temperature forecast skill sinks in Seattle

Today I was surprised to see that the NWS had forecast a high of 41 Fahrenheit this afternoon here in Seattle, but the temperature had gotten up to 51F!

They aren't the only ones who have been struggling with the temperature forecasts recently.  Let's look at some of the forecast performance.

Here is an image showing the last 30 days of high temperature forecasts for Sea-Tac Airport (KSEA) from the GFS Model Output Statistics (MOS) forecast. In the top panel, the blue line shows the 1-day forecast high temperature and the black line shows what actually occurred.  The gray dashed line in the background shows the climatological normal high temperature for each day. The bottom bar chart shows the error in the forecast each day.

You can see that Seattle's climatology (the gray dashed line) "bottomed out" around December 21st.  Our coldest high temperatures of the year are now behind us, on average.  You'll note that before then (or rather, before about Dec. 15), the GFS forecasts were actually doing fairly well.  There were several days with the high temperature forecast perfect or only within 1 degree of what actually occurred.

However, since then (over the past two or three weeks), the temperature skill has gone crazy.  There's no net bias in the forecast (the GFS hasn't been consistently too cold or too warm) but there have been a lot of days with errors greater than five degrees.  The day-to-day differences in high temperature have also been quite large, and it seems like the model just isn't capturing these swings well.
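The distinction between "no net bias" and "lots of big misses" is easy to quantify: bias is the mean of the signed errors, while mean absolute error ignores the sign.  A quick sketch with invented forecast/observation pairs (not the actual KSEA numbers):

```python
# Bias vs. mean absolute error: a forecast can have near-zero bias (misses
# cancel out) while still missing badly on individual days.  Invented data.

forecast = [41, 38, 45, 50, 37, 44, 52, 40]  # forecast highs, deg F
observed = [51, 33, 39, 55, 31, 49, 46, 45]  # observed highs, deg F

errors = [f - o for f, o in zip(forecast, observed)]
bias = sum(errors) / len(errors)                 # mean signed error
mae = sum(abs(e) for e in errors) / len(errors)  # mean absolute error

print(f"bias = {bias:+.1f} F, MAE = {mae:.1f} F")
```

Numbers like these (near-zero bias, MAE well above 5 F) are exactly the pattern in the recent GFS verification plot above.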

The NAM model hasn't been a whole lot better:

There have been some misses on the low temperatures, but not as bad.  Here's the same plot, but for the GFS low temperature forecast.

Still a few big misses, but not as bad as the high temperatures.

We can compare different model forecasts by looking at their skill scores, which compare the performance of the model against some standard baseline forecast.  Two common baselines used are climatology and persistence.  A climatology forecast basically follows that dashed grey line in the plots: it just assumes you forecast the average value for that day, every day.  A persistence forecast assumes that whatever happens today will happen again tomorrow.  So if today's high temperature was 51 degrees, our forecast for tomorrow would be 51 degrees again.  We would expect that a good weather forecasting system or service would be able to beat either of these baselines; otherwise it's not adding any value to the forecast.

So skill scores compare the errors from forecasts against these baselines.  A negative skill score means that the forecast system does worse than the baseline overall (bad!).  A zero skill score means the forecast system does exactly the same as the baseline, and a positive score means it does better.  A skill score of 1 means the forecast is perfect.
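As a concrete sketch, here's a skill score in a common form, SS = 1 - MSE_forecast / MSE_baseline, computed against both baselines for a week of invented high temperatures (the plotted scores may use a different error measure, but the idea is the same):

```python
# Skill score sketch: SS = 1 - MSE_forecast / MSE_baseline.
# SS > 0 beats the baseline; SS = 1 is a perfect forecast.  Invented data.

obs      = [51, 44, 39, 48, 42, 50, 45]  # observed highs, deg F
forecast = [49, 45, 41, 46, 44, 48, 46]  # model forecast highs, deg F

def mse(pred, truth):
    """Mean squared error of a forecast series against observations."""
    return sum((p - t) ** 2 for p, t in zip(pred, truth)) / len(truth)

climatology = [46] * len(obs)            # forecast the average every day
persistence = [obs[0]] + obs[:-1]        # yesterday's value (padded for day one)

def skill(pred, baseline, truth):
    return 1.0 - mse(pred, truth) / mse(baseline, truth)

print(f"skill vs climatology: {skill(forecast, climatology, obs):.2f}")
print(f"skill vs persistence: {skill(forecast, persistence, obs):.2f}")
```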

Below are the skill scores for the GFS, NAM, and several other weather forecast producers like Accuweather (ACUWX), the Weather Service (NWS) and Weather Underground/The Weather Channel (WUTWC) in their high and low temperature forecasts at KSEA over the last 15 days. Blue is for low temperature, red is for high temperature.  The lighter bars are skill measured against a persistence forecast and the darker bars are skill measured against a climatology forecast. 

You can see that these scores are all positive: we're doing better than our baselines.  But not by much!  In fact, the GFS has been almost exactly the same as persistence, and only marginally better than climatology.  The other forecast sources also have skill scores generally below 0.5, when typically here in Seattle these scores are closer to the 0.7-0.8 range.

So why have we been struggling so much with our high temperature forecasts?  For one, we've been stuck in a "split-flow" regime for the last few weeks, with storms either being directed to our north or down south into Oregon and California.  It's actually times like these when we don't have a strong synoptic-scale weather signal that our models struggle with the most, for it's on those days where local idiosyncrasies really start impacting our weather.

In particular, we've had a number of clear nights which have set up low-level cold inversions and fog overnight.  Our models tend to have a difficult time representing these low-level inversions, particularly when it comes to mixing them out during the day.  As a result, sometimes they keep the cold and fog around for too long and sometimes they erode it away too quickly.  This has huge implications for the high temperature forecasts.

In addition, on January 3-4, we had strong easterly flow, which dried out our air but didn't actually raise the temperatures that much.  The GFS bit on the idea that downslope warming with these easterly winds coming off the Cascades would bring up our high temperatures, but that ended up not happening.

So there's a lot of little things going on that have contributed to lower skill in our forecasts.  With more ridging and dry weather expected in Seattle over the next few days, we may have to live with low predictability for a little while...