As many people are now aware, this weekend looks to have some potent severe weather in the southern Plains. The SPC has a day three slight risk for a wide area of Nebraska, Kansas, Oklahoma and Texas on Saturday:
The model forecast fields look potent: a strong upper-level trough digging across the central Rockies with a powerful jet streak entering the base of the trough on Saturday evening. Divergence downstream of the trough axis looks to provide broad lifting support across a very moist warm sector, in addition to enhancing strong lee cyclogenesis in eastern Colorado. Strong directional wind shear is also forecast, with southerly flow at 850 hPa veering to nearly westerly flow at 500 hPa. These are all classic ingredients for severe weather. Here is the NAM 60-hour forecast from this morning's run:
We're looking at surface pressure and precip in the upper-left panel, 850 hPa heights and winds in the upper right, 500 hPa heights and vorticity in the lower left, and 300 hPa heights and winds in the lower right.
One method that I've occasionally mentioned for predicting the eventual outcome of a particular forecast is to look for analogs in the historical record. That is, go back in time to when the atmosphere was in a similar state (or, better yet, when the model forecast a similar state) and see exactly what ended up happening. There are groups that do this sort of thing in real time. One is the Cooperative Institute for Precipitation Systems (CIPS) group at Saint Louis University. With every model forecast, they go back through the North American Regional Reanalysis archives and try to find days in the past when the pattern was similar to the forecast pattern. We can then look at what actually happened on those days to get a sense of what may happen now.
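The core of any analog search is a similarity score between a forecast map and each map in a historical archive. Here's a minimal sketch of the idea in Python, using a centered pattern correlation as the score. This is only an illustration of the technique, not CIPS's actual fields, domain, or scoring method.

```python
# Minimal sketch of an analog search: rank historical dates by how well
# their anomaly pattern matches a forecast anomaly pattern.
import numpy as np

def pattern_correlation(field_a, field_b):
    """Centered pattern correlation between two 2-D grids."""
    a = field_a - field_a.mean()
    b = field_b - field_b.mean()
    return (a * b).sum() / np.sqrt((a**2).sum() * (b**2).sum())

def top_analogs(forecast, archive, dates, n=3):
    """Return the n historical dates whose fields best match the forecast.

    forecast : 2-D array (lat x lon) of the forecast field
    archive  : 3-D array (time x lat x lon) of reanalysis fields
    dates    : list of date labels, same length as archive's time axis
    """
    scores = [pattern_correlation(forecast, day) for day in archive]
    best = np.argsort(scores)[::-1][:n]  # highest correlation first
    return [(dates[i], scores[i]) for i in best]
```

A real system would combine scores from several variables; as noted later in this post, CIPS breaks its matches out by individual fields like dewpoint and upper-level winds.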
Take this Saturday, for instance. If we go to the CIPS southern plains sector, we see that their number one analog for the NAM forecast for that evening is 00Z, April 26, 1984. Here's the same four-panel plot as above, but showing what happened at that time:
You can see a lot of similarities in these patterns! Both had similar strong lee cyclogenesis at the surface in Colorado underneath a broad upper-level trough over the intermountain west. It makes sense that these are a close match. So what ended up happening on April 25-26, 1984? Here are the severe weather reports from that day:
Five tornado reports, mostly in Nebraska. Lots of hail reports in Nebraska with a few scattered elsewhere. At least we know this pattern is capable of severe weather.
Let's look at the second-best analog identified by CIPS: 00Z, April 10, 2011. Here's the pattern:
Still similar, though the differences are more apparent. The surface low in eastern Colorado is not as intense. The upper-level trough is also much narrower and more positively tilted. Still the same "overall" pattern, but there are noticeable differences. What happened on this day?
There were still hail reports in eastern Nebraska with tornado reports in western Iowa. There's another whole area of wind and hail reports (with a few isolated tornado reports) across the southern Appalachians. It seems that the subtle shortwave in the upper-level analysis over West Virginia and Virginia may have played a bigger role than one might have thought.
Finally, the third closest analog---00Z, May 4, 1999. That's the evening of May 3, 1999. You know what's coming. Here's the pattern:
Again, there are definitely differences. The 500 hPa trough for this event isn't as sharply defined as in the current NAM forecast. There's also a jet streak across the central Midwest in the current forecast that was not there on May 3, 1999. These differences are pretty important. With stronger and sharper large-scale dynamics forecast for this weekend, I feel like the severe weather may organize upscale more quickly than it did on May 3, 1999. Such strong synoptic-scale dynamics can work against maintaining discrete storms for very long. Anyhow, here are the storm reports from May 3-4, 1999:
This was the day of the first Moore F5 tornado. Lots of severe weather throughout Oklahoma and, generally, in the same corridor that the SPC has outlined.
The CIPS site also breaks out the analogs by individual variables and how closely they "match" the forecast pattern. For instance, if we're interested in moisture return, the May 3-4 event has the highest correlation of the three in the surface dewpoint field. The 1984 event, however, has higher correlations for the overall upper-level patterns and winds. So it's a mix and match sort of deal...
How much can we trust these analogs to tell us what's going to happen? You can see that in just the top three closest events there are already notable differences between the patterns on those days and the current NAM forecast. Plus, as each new forecast run comes out, the closest analogs change. We also have only a limited record (some 40-50 years) to search for close matches. Furthermore, the top three analogs alone identify very different regions as areas of high impact. What we CAN say is that this sort of pattern definitely is conducive to severe weather, and in roughly the area the SPC has outlined. Beyond that, you still need your expert forecasting skills to narrow down the exact areas favorable for severe weather with this particular setup.
Wednesday, February 26, 2014
Finally some rain for California
As most people are probably aware by now, California is in the midst of a drought of historical significance. The past several years have seen below-normal rainfall, and combined with a particularly dry winter so far, water levels are extremely low. Much of the West has been suffering from below-normal precipitation. Here's the estimate of the percentage of normal snowpack at mountain sites across the West from the National Water and Climate Center at the beginning of February:
Several storms have moved through the Pacific Northwest in February, bringing the snowpack up here near Seattle to near normal. Much of the Rockies is also doing rather well. But from Oregon down through California the snowpack remains at 25-50% of normal. It's also pretty low in southern Utah and in the high mountains of Arizona and New Mexico. For California this is critical: communities depend on the mountain snowpack for their water throughout the year. Here's a summary of current reservoir levels throughout California. Most are running at only about half or less of their average values for this time of year. Without snowmelt to replenish them, it's going to be a rough year...
To illustrate how far below normal the precipitation this year has been, we can look at precipitation since September for several cities in the California central valley. Here's Sacramento:
The top panel is the daily temperature range and the bottom panel is the total precipitation so far. The upper curve is the normal precipitation and the brighter green below is the actual precipitation. Normally by this point in the winter Sacramento has seen almost 15" of rain. So far this year, only about 5". It's a similar story further south in Fresno:
They already don't receive much rain (only 6" on average by this point), but they had received only 1.36" by the beginning of February (not sure why the graph hasn't been updated since then). One thing you may note on these graphs is that single events can bring a lot of rain. For instance, at the start of February Sacramento's seasonal total was only around 2 inches; they gained another three inches over the course of just a day or two. So large systems can bring lots of rain to California, and it looks like that's what we're about to see.
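For anyone curious, the percent-of-normal arithmetic behind these graphs is simple to reproduce. Here's a small sketch with made-up daily values standing in for real station data; the numbers are chosen so the normal reaches roughly 15 inches, like Sacramento's.

```python
# Sketch: water-year-to-date accumulation vs. a daily climatological normal.
# The daily series here are synthetic stand-ins for real station records.
import numpy as np

rng = np.random.default_rng(0)
days = 150                                    # Sep 1 through late January
daily_normal = np.full(days, 0.10)            # ~15" of normal by midwinter
daily_precip = rng.gamma(0.2, 0.5, days) * (rng.random(days) < 0.25)

cum_obs = np.cumsum(daily_precip)             # accumulated observed precip
cum_norm = np.cumsum(daily_normal)            # accumulated normal precip
pct_of_normal = 100.0 * cum_obs[-1] / cum_norm[-1]
print(f"Season to date: {cum_obs[-1]:.1f} in. "
      f"({pct_of_normal:.0f}% of the {cum_norm[-1]:.1f} in. normal)")
```

Sacramento's roughly 5" against a nearly 15" normal works out to about a third of normal by this same arithmetic.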
Starting tonight, two low pressure centers are forecast to move toward the California coast through the end of the week. Here's the ECMWF forecast of surface pressure and 3-hour accumulated precipitation for late this evening (from Weather Underground):
Note the first low bringing substantial rain to the central California coast and more precipitation to the Sierras. The snowpack will be growing... You can see the next low spinning further off the coast. This is forecast to move in on Friday:
This low looks to bring even more precipitation. These are the 12-hour precipitation totals (I believe... Weather Underground isn't exactly clear about this) and you can see some heavy precipitation all along the coast. Here are the precipitation totals forecast for overnight Friday into Saturday:
You'll notice the incredible enhancement of the precipitation along the coastal mountain ranges: forecasts of 2"+ along some of the crests in the Big Sur region. I don't know how helpful that will be, exactly, as most of California's water storage is fed by the Sierras rather than the coastal mountains. I'd also be concerned about flash flooding with such extraordinary amounts of rain. However, the Sierras also look to get a fair amount of precipitation, so the snowpack will continue to deepen.
Even with this rain, California will still need a very wet spring to recover from the dryness of the winter to date and the cumulative effects of previous dry years. I suspect that if water supplies remain low and more cuts are made, this may turn into the next hot-button "is this evidence of climate change?" issue. We still cannot separate the effects of climate change on any single event, even a multi-year event, as this is still within the limits of natural variability. So people will ask the question, but we really have no answer yet...
Friday, January 31, 2014
Questioning the Bering Sea Rule: Part 2
In my last blog post I used the ERA-Interim reanalysis to look at a long-range forecasting technique called the "Bering Sea Rule". This has been used by many bloggers to attempt to forecast the general weather for various parts of the eastern US several weeks in advance. In its general form, it claims that whatever happens (weatherwise) in the Bering Sea will happen in the eastern US 2.5-3 weeks later. However, when I tried to correlate the storminess of the Bering Sea over a 30-year period with the storminess of the eastern US at various lead times, I couldn't find any strong correlations at all in the 2.5-3 week (17-23 day) window purported by the Bering Sea Rule. There were several other interesting correlations related to well-known, larger-scale circulation patterns, but nothing special about that particular time window.
I heard back from several people after publishing part 1 of this blog with comments about my findings. Apparently there are other groups trying to validate this rule numerically; I'm really looking forward to seeing what those researchers come up with. There are a few additional things, prompted by the comments I received, that I wanted to quickly test. These include:
- Instead of looking at storminess, compare temperature anomalies in the Bering Sea to temperature anomalies over the eastern US.
- The idea that the periodicity of the Bering Sea Rule changes over time: each year, the lag between what happens in the Bering Sea and what happens in the eastern US is different, not always the 2.5-3 week lead time.
Temperature anomalies are pretty easy to correlate. Over the 30-year period the pattern is pretty simple:
Strong anti-correlations at short lag times (less than 10 days), implying that if it's warmer than normal in the Bering Sea, it's more likely colder than normal in the eastern US, and vice versa. For large standing-wave patterns this makes a fair bit of sense. However, beyond 10 days there are only extremely weak positive correlations and, again, there is nothing special about the 2.5-3 week lead time.
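For reference, the lagged correlations used throughout these posts boil down to shifting one regional anomaly series against the other and computing a Pearson correlation at each shift. A minimal sketch, with random series standing in for the actual ERA-Interim regional anomalies:

```python
# Sketch of a lagged correlation: correlate a Bering Sea anomaly series
# with an eastern-US anomaly series shifted forward by 'lag' days.
import numpy as np

def lagged_correlation(bering, east_us, lag):
    """Pearson correlation of bering[t] with east_us[t + lag]."""
    if lag > 0:
        x, y = bering[:-lag], east_us[lag:]
    else:
        x, y = bering, east_us
    return np.corrcoef(x, y)[0, 1]

rng = np.random.default_rng(1)
bering = rng.standard_normal(10950)      # ~30 years of daily anomalies
east_us = rng.standard_normal(10950)     # synthetic stand-ins
curve = [lagged_correlation(bering, east_us, lag) for lag in range(1, 61)]
```

With the random stand-ins here, `curve` hovers near zero at every lag; the question in these posts is whether the real series do any better in the 17-23 day window.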
What about this idea that the periodicity of this pattern changes every year? We can test this by going back to one of our storminess metrics (say, mean SLP in each region) and doing these lagged correlations for one year at a time. Since we expect the strongest teleconnections to phenomena like the Madden-Julian Oscillation during the winter half of the year, I'll limit the time periods to September through March. Let's look at the decade from 2001-2010 and compare the lagged-correlation pattern each year. I'm including small versions of these plots inline; click them to get a larger view.
The correlation pattern changes pretty drastically from year to year. When we look at these shorter time periods, much stronger correlation magnitudes emerge. In some years there is a particular lead time with much stronger correlations than at other lead times (for instance, 35-40 days in 2002-2003, 5-10 days in 2004-2005, 45-65 days in 2005-2006, or 17-23 and 48-52 days in 2006-2007). But in other years there are numerous peaks of similar magnitude at various lead times. Furthermore, none of these correlation magnitudes ever gets stronger than 0.6. Also, in many cases the largest-magnitude correlation is actually of the opposite sign (for instance, 2000-2001, 2001-2002, 2005-2006, 2007-2008).
So what does this all mean? Looking at individual years, there are actually stronger correlations than I expected to find. Within a particular year, certain wave patterns probably do repeat a few times at reasonably regular intervals, particularly with large-scale blocking patterns setting up and whatnot. But, at least with respect to the Bering Sea Rule, in the vast majority of years it is very hard (using this metric) to identify one particular time period of enhanced predictability. Even if you could agree on multiple time periods where this might apply, the magnitudes of these correlations are still not very high. Even in particularly good years, you could hand-wavingly interpret the maximum 0.6 correlation at some lead times to indicate that, at best, 60% of the time when there is a low in the Bering Sea, there will be a low in the eastern US a certain number of days later. And even then, these correlations are strong over a range of days, usually 5-10 days long. Given the frequency of troughs moving through the eastern US in a normal winter, it seems rather likely that there will be a low-pressure center sometime during a week-long period several weeks from now anyway. There is also the problem of some of the strongest correlations being negative, implying that the opposite of what happens in the Bering Sea would actually be a better prediction.
So that was just some follow-up on a few more Bering Sea Rule ideas. There is ongoing research into larger-scale patterns of predictability, and those groups will do a far better and more thorough job than anything I could do here. It would be interesting to try to refine the Bering Sea Rule further: maybe it only applies to the strongest storms or the biggest cold-snap events? At present, though, it seems difficult to quantify the Bering Sea Rule as it stands (certainly not with the 2.5-3 week window that seems to be widely used).
[Thumbnails: lagged-correlation plots for each winter, 2000-2001 through 2009-2010; click for larger versions]
Monday, January 27, 2014
Questioning the Bering Sea Rule: Part 1
Minor correction: in the first version of this post, I described some of the bloggers who initially developed this rule as employees of Accuweather. I've been informed that this is not actually the case, so I removed the Accuweather forecaster references.
With all these meteorologists now posting forecasts on the internet, I've been impressed with how thorough and creative several of these individuals are at presenting weather information in new, detailed, and informative ways. Every so often, though, something comes up that just bothers me to no end until I've dug into it some more. This (somewhat long) blog post will describe one of these forecasting curiosities: something termed the "Bering Sea Rule".
There is a small group of forecasters out there who hold great faith in this rule for issuing general weather predictions for the eastern US 2.5-3 weeks in advance, pushing the very edge of the limit of predictability for specific atmospheric patterns (predicting the mean state of the atmosphere has much longer limits... but that's another topic). The origin of this theory seems to be a poster on an Accuweather blog, who posted a brief description at this link. Other blogging forecasters also use this theory to issue their own multi-week/monthly forecasts. Some examples are this blog, this blog and this blog. Is it a coincidence that Accuweather recently decided to start issuing 45-day forecasts that have virtually no skill? I have no idea...
Before continuing, I want to get out of the way that many of these blogs are filled with good analysis of several of our long-range weather diagnostics. I'm not criticizing these blog posters, I'm just curious about this Bering Sea Rule and if it has any validity. So that's what I'm going to investigate here.
Let's start with what the "Bering Sea Rule" actually states. The original description in that first link describes it as a casual observation by one forecaster that the blog poster later "correlated":
"...after some monster storms of 1950 and 1974 in the Bering Sea, that within 3 weeks of those storms we saw monster storms for the East...I have amassed multiple post where I have correlated the above to a pattern..."
That's about it; there are no strong specifics as to what the rule actually is. Some of the other blogs I linked to above offer the following descriptions:
- "Bering Sea Rule (BSR): ...The basis of this theory is that whatever is occurring in the Bering Sea can be expected in the eastern CONUS within 2.5-3 weeks of its occurrence in the Bering Sea. Simple as that." (source here)
- "The Bering Sea Rule states that by watching storm systems make their way across the Bering Sea, one can identify where and when a storm system will show up in the United States. The general timeframe is 17-21 days after a storm appears in the Bering Sea, a storm will appear in the United States." (source here)
Other claims throughout the blogs I listed describe how the Bering Sea Rule is "independent of season" and another blog (located here) tries to use this rule (or a modified form of it) to correlate western Bering Sea/Kamchatka pressure to mean temperatures in the Midwest.
So is there anything to this rule? Can we use what's happening in the Bering Sea to predict storminess and/or temperatures in the eastern US three weeks later? I decided to do some correlations to see what I could find out.
To examine this, I'm using the European Centre for Medium-Range Weather Forecasts' Interim Reanalysis (ERA-Interim). What does this "reanalysis" give us? Basically, they go back and do a detailed data assimilation of all available observations every six hours to get a "best guess" at the full state of the atmosphere at each time. I'm going to use these reanalyses all the way from 1979-2010; that's over 30 years of atmospheric states to look at. If there's a pattern, we should be able to find it.
Next step--what regions are we going to use for the Bering Sea and the US? One of the rule definitions above describes the US part as the "eastern CONUS", so I'm going to look at the US east of the Rockies. For the Bering Sea, rather than the western Bering Sea/Kamchatka region defined by one of the blogs, I'll use the entire Bering Sea as the area of interest. This seems to be consistent with what most people are doing. Here's a map outlining the two regions:
Ok. We have a source of weather "data" (the reanalyses) and regions to focus on. In theory, if there is any predictive power here, then the "storminess" of the Bering Sea should correlate positively with the "storminess" of the eastern US ~17-23 days later. How do we define "storminess", though? I'll try two possible ways (or "metrics"):
- We can compute the average mean-sea-level pressure (MSLP) over the entire region (either Bering Sea or eastern US). When the average MSLP is lower, it's more stormy. When the average MSLP is higher, it's calmer.
- We could instead look at the variance of MSLP across the entire region. This is a rough measure of the range of pressures spanned in the region. If the entire area had exactly the same pressure (nothing interesting going on), the variance would be zero. If a deep low is moving in (getting stormier), we would expect the variance to be higher. This measure is often used in climate studies to identify storm tracks. (Both metrics are sketched in code just below.)
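Here's a quick sketch of both metrics for a single analysis time. The cosine-of-latitude weighting is my own choice (so that smaller high-latitude grid cells don't dominate the average); the actual calculation behind the plots may have handled this differently.

```python
# Sketch of the two "storminess" metrics over a lat/lon box. 'mslp' is a
# 2-D grid (lat x lon) of mean-sea-level pressure in hPa for one analysis
# time, already subset to the region of interest.
import numpy as np

def storminess_metrics(mslp, lats):
    """Return (area-mean MSLP, spatial variance of MSLP) for one time.

    Grid cells are weighted by cos(latitude), since high-latitude cells
    cover less area on a regular lat/lon grid.
    """
    w = np.cos(np.deg2rad(lats))[:, None] * np.ones_like(mslp, dtype=float)
    w /= w.sum()
    mean_p = (w * mslp).sum()                 # metric 1: area-mean pressure
    var_p = (w * (mslp - mean_p) ** 2).sum()  # metric 2: spatial variance
    return mean_p, var_p
```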
On top of this, some of the bloggers using the Bering Sea Rule look at 500 hPa heights instead of MSLP... so we can also try the same two metrics using 500 hPa heights. Finally, there was also that one blogger who was using the rule to track the mean temperature of the Midwest; I'll try correlating with the mean temperature in the eastern US region in part 2 of this blog post some time later.
Some final notes about the methodology. When computing these indices, I am going to remove the annual cycle by first computing, for each metric, the average value for every calendar day over the entire 30-year climatology. Each metric is then reduced to a deviation from the mean value for that day. For instance, instead of using the average MSLP over the Bering Sea for January 5, 1984, I will use the difference between that value and the average MSLP over the Bering Sea for every January 5th from 1979-2010. This removes the seasonal cycle of storminess. Secondly, 6-hourly data is a bit much for this kind of work; it's probably going to be very noisy. Because of this, I'll smooth the timeseries a bit by taking running averages over windows of various lengths. It's unclear what window length is best: should I average over the past day? Over the past week? Since it's unclear, I'll try a variety of averaging windows and see what we get.
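In code, that preprocessing amounts to two small steps: subtract the multi-year mean for each calendar day, then apply a running mean. A sketch of how this might look (my own implementation, not the exact one behind these plots):

```python
# Sketch of the preprocessing: remove the day-of-year climatology, then
# smooth with a running mean. 'series' is a daily metric (e.g., Bering Sea
# area-mean MSLP) and 'doy' gives each sample's day of year (1-366).
import numpy as np

def anomalies(series, doy):
    """Subtract the multi-year mean for each calendar day."""
    out = np.empty_like(series, dtype=float)
    for d in np.unique(doy):
        mask = doy == d
        out[mask] = series[mask] - series[mask].mean()
    return out

def running_mean(x, window):
    """Running mean: each output is the average of `window` consecutive
    samples (aligned so it represents the mean over the preceding window)."""
    kernel = np.ones(window) / window
    return np.convolve(x, kernel, mode="valid")
```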
All right. Time for some results. Let's start by correlating the mean Bering Sea pressure with the mean eastern US pressure at various lags (from 1-60 days later). We're also going to use averaging windows ranging from correlating daily averages (1 day) to monthly averages (31 days). Each colored line represents a different averaging window length.
So what do we see? Notice that as our averaging window gets longer (the different colored lines), the lag correlations become smoother and the correlation coefficients increase. We reach a peak correlation of ~0.20 for long running-mean windows (31 days) at a 7-day lead time. This means that a "stormier"-than-normal Bering Sea over the last month is weakly correlated with a stormier-than-normal eastern US over the last month too, but about one week later. Similar peaks (at 5-7 days of lead time) exist for shorter averaging windows. For instance, if we just take the average MSLP over the past day in the Bering Sea, lower-than-normal pressures over the past day very weakly correlate (~0.13) with lower-than-normal pressures over a one-day period in the eastern US ~5 days later. Not exactly the result we'd want under the Bering Sea Rule. In fact, this corresponds reasonably well to the time it takes shortwaves in the mean mid-latitude westerly flow to propagate from the Bering Sea to the eastern US. I've highlighted the window (17-23 days) when the Bering Sea Rule claims enhanced predictability with the green shaded region; there's no peak in correlation there. So this doesn't look so good.
Just for comparison, let's look at the self-correlation of the eastern US average mean-sea-level pressure over the same period. This tests the predictive power of the storminess in the eastern US right now for telling us how stormy it will be in the future. Here's what that looks like:
For zero lead time, the correlations are all one (I actually didn't compute zero lag; I started with a 1-day lag, so the curves don't all go up to one on the left, but they would if I did). This is what we expect: a timeseries is perfectly correlated with itself. But what about when we start lagging it? If we look at the same lines as in the previous plots with the highest correlations (31-day averaging at a 7-day lead time), correlations are still at best ~0.2. Those are the same as the correlations to the Bering Sea, implying that you could predict the monthly-average eastern US "storminess" 7 days from now just as well by looking at how stormy the eastern US has been over the last 31 days as the Bering Sea Rule could.
For short averaging lengths (so, lower-than-normal pressure over the last day or week (1-11 days)) there actually is a slightly higher correlation to the Bering Sea at 10-20 days of lead time, but we're talking really small correlations here, 0.07 or less. In our 17-23 day Bering Sea Rule window, the self-correlations rebound slightly, once again indicating that for that timeframe you could predict the mean pressure in the eastern US just as well by using the recent pressure in the eastern US as by using the pressure in the Bering Sea.
Let's try looking at 500 hPa heights instead: correlate the mean 500 hPa height deviations over the Bering Sea with the mean 500 hPa height deviations over the eastern US. Here things start to look somewhat interesting...
There's a nice sinusoidal pattern going on here, with a period of around 34 days. Note that the magnitude of the correlation is at most 0.05; those are really, really small correlations. Furthermore, it just so happens that the window indicated by the Bering Sea Rule, that 17-23 day window, is actually one of the times with the smallest predictability. The correlations there are around zero regardless of the averaging window. This implies that mean 500 hPa heights have NO predictive skill at all between the Bering Sea and the eastern US at the lead times indicated by the Bering Sea Rule.
SIDEBAR: It's interesting to note this odd 30-40 day periodicity in these correlations. I talked about this with my friend/colleague Angel Adames here at the University of Washington, and we agreed that it might be symptomatic of a wavetrain generated by the Madden-Julian Oscillation (MJO). This oscillation is a tropical phenomenon, a large-scale convective enhancement that circles the globe at the equator every 30-40 days on average with variations in strength. As this feature propagates around the globe, it can cause fluctuations in jet stream strength and position, which in turn can lead to synoptic-scale wave patterns that propagate through the mid-latitudes. Angel has composited the average wintertime effect of the MJO on upper-level heights over the entire 30-40 day MJO cycle; an animation of this is at this link. You can see that the MJO induces a series of troughs and ridges that strengthen first over Alaska and the Bering Sea and later over the eastern US. This underlying cycle might explain the strong sinusoidal pattern in the lag correlations. But again, these correlations are so weak (0.05 in magnitude at best) that this is not a major influence on what actually happens. But it's interesting...
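(Continuing the sidebar for a moment: one rough way to put a number on that apparent periodicity is to take the lag-correlation curve itself and find its dominant spectral period with an FFT. With only a 60-day lag window the frequency resolution is coarse, so treat the answer as approximate. A sketch, with a synthetic curve standing in for the plotted one:)

```python
# Estimate the dominant period (in days) of a lag-correlation curve.
import numpy as np

def dominant_period(curve):
    """Return the period of the strongest spectral peak."""
    x = np.asarray(curve, dtype=float)
    x = x - x.mean()
    power = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(x.size, d=1.0)   # cycles per day of lag
    k = power[1:].argmax() + 1               # skip the zero-frequency bin
    return 1.0 / freqs[k]

# Synthetic stand-in: a ~34-day sinusoid like the plotted curve.
lags = np.arange(1, 61)
curve = 0.05 * np.sin(2 * np.pi * lags / 34.0)
print(dominant_period(curve))   # ~30 days, given the 60-day window's resolution
```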
Let's switch to looking at variances now as another measure of "storminess". Here's the correlation of the variance in Bering Sea mean-sea-level pressure to the variance in eastern US mean-sea-level pressure.
Again... not the strongest signal... at any lead time. If anything, at lead times beyond ~5 days the variance in MSLP in the Bering Sea is anticorrelated with the variance in MSLP in the eastern US, meaning a particularly "stormy" period in the Bering Sea actually points, if anything, to less "stormy" conditions 7-30 days later in the eastern US. How about 500 hPa variance?
An interesting pattern here: the correlations at longer (20-50 day) lead times are actually a little larger in magnitude (around -0.13) than at shorter lead times. Still really small, and not significant. Furthermore, at these longer lead times the signals are again anticorrelated, if anything. What about the 2.5-3 week window purported by the Bering Sea Rule? As we saw in the correlations of mean 500 hPa heights, this is actually a time when the correlation passes through zero; no connection at all is evident.
So what does all this mean? Looking at 30 years of correlations at various lead times, with various averaging windows, and measuring storminess as both the mean and the variance of MSLP and 500 hPa heights over the Bering Sea and the eastern US, I can't find any good evidence at all of a particular connection between storminess in the Bering Sea now and storminess in the eastern US 2.5-3 weeks later. If anything, that lead time falls in one of the worst windows for predictability by these metrics. If we looked at 1-2 weeks of lead time, maybe we'd have a slightly better case for very mild predictability based on the 500 hPa mean heights and variance. But those are still very small correlations...
That wraps up part 1 on this topic. In a later blog post, I'm going to investigate this again in a little more detail. I haven't shown the correlations to eastern US temperature at all, which is another quantity that some have tried to predict using the Bering Sea Rule. Additionally, I noted that most of this Bering Sea Rule activity began last autumn and has continued through the beginning of this year. The last several months have been marked by an extraordinarily persistent pattern of ridging over the eastern Pacific/western US and troughing over the eastern US. Perhaps the rule has more merit when the large-scale flow is persistently in that pattern? We'll test that one too.
Also, I welcome any and all comments on this! Particularly for some suggestions of better metrics to check or other ways to approach this.
Wednesday, January 15, 2014
Dark days for Seattle
As some of you know, I maintain my own weather station in Seattle. I post the observations online to Weather Underground and the Citizen Weather Observer Program for anyone who is interested. But mostly, the station is there for my own personal entertainment and use. Here's what it looks like. It's very poorly positioned, on top of a fence and close to the side of my building, but it's the only place I have...
Yesterday evening I went to check what the temperature had fallen to outside when I noticed that most of the observations on the display were blank. This was puzzling...but looking at the Weather Underground archives it appeared that observations had stopped some time yesterday morning. Here's their time series from my station for yesterday:
You'll notice that the temperature and wind direction flatline several times overnight before going out completely around 9 AM. The barometric pressure observations, however, continued to come in. It was puzzling until I recalled what the weather had been like for the past several days: overcast and dark. We have been stuck under clouds for quite some time, and because the outdoor sensor is solar powered, it had been too long since it had a good recharge and it stopped transmitting. There is a backup battery in the unit that keeps the station running for a while, but that had apparently run out. I'll need to replace that battery...
So how dark has it been? We can turn to another weather station for that answer--the weather station on top of the roof of the Atmospheric Sciences building at the University of Washington. This weather station has a solar radiation sensor that reports how much shortwave radiation from the sun is reaching the station. We can add up how much radiation the station received each day to get an idea of how sunny it has been. Here's a plot of the total amount of solar radiation received each day since the beginning of the year.
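The daily totals are just the flux observations integrated over each day. A sketch of that bookkeeping, with a synthetic minute-by-minute flux series standing in for the rooftop station's actual records:

```python
# Sketch: integrate solar-flux observations into daily totals.
import numpy as np
import pandas as pd

# Synthetic minute-by-minute shortwave flux (W/m^2) for two weeks.
times = pd.date_range("2014-01-01", periods=14 * 24 * 60, freq="min")
flux_wm2 = np.clip(np.sin(np.linspace(0, 14 * 2 * np.pi, times.size)),
                   0, None) * 400

obs = pd.Series(flux_wm2, index=times)
joules = obs * 60.0                              # W/m^2 x 60 s = J/m^2
daily_kj = joules.resample("D").sum() / 1000.0   # kJ/m^2 per day
print(daily_kj.head())
```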
Except for January 2nd, the first week or so of the new year was actually very sunny; on January 4th and 5th skies were mostly clear, and you can see that the station received nearly 5000 kilojoules per square meter on those days. Since January 7th, though, we've been receiving far less, only around 1000 kJ per square meter, around 1/5 of our maximum possible solar radiation. No wonder my weather station finally gave up...
Clouds have finally cleared out today, though, and solar radiation is up quite a bit. Sure enough, this morning my weather station magically kicked back on as it finally had enough power to operate...
Let's hope for more sunny days to come...
Monday, January 6, 2014
An analog context for the cold wave
Much of the country is experiencing some of the coldest weather it has seen in many years this week, with temperatures struggling to get above zero Fahrenheit and wind chills in the frigid -50 to -60 degree range in some places. Here's this morning's surface temperature analysis:
It's a somewhat complicated plot, but I like it because the contour colors nicely divide the above-freezing (red) and below-freezing (blue) temperatures, giving an idea of the extent of the cold air. You can see the strong cold front that has been pushing through the eastern United States today as it trails behind a low pressure center that was over southern Ontario this morning. Frigidly cold temperatures. This air has been travelling rapidly down from the Arctic over the past several days. Here's a plot showing the air trajectories over the past three days for parcels of air in the lowest 1 km of the atmosphere. You can see that they have travelled all the way from the Arctic Ocean down to the Midwest.
One notable feature of this air transport is that the air has been relatively slow to warm as it plunged south. The bottom panel of that plot shows the temperature of this air (in Kelvin) as it moved along (read from right to left, with the left side being last night). Some warming has come from the air descending a bit, but in terms of sensible temperature it has warmed only maybe 10 F (difficult to estimate) over its entire journey.
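As a rough sanity check on that number: descending air compresses and warms dry-adiabatically at about 9.8 K per kilometer. If these parcels sank roughly half a kilometer (my assumption; the plot doesn't label the exact descent), we'd expect

```latex
\Delta T \approx \Gamma_d \,\Delta z
         = \left(9.8\ \mathrm{K\,km^{-1}}\right)\left(0.5\ \mathrm{km}\right)
         \approx 5\ \mathrm{K} \approx 9\ ^{\circ}\mathrm{F},
```

which is consistent with the modest warming seen along the trajectories.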
Part of the reason the air stayed so cold is that it traveled over an area that is now entirely covered by snow, keeping the low levels of the atmosphere relatively cold. Here's the latest snow depth estimate map from the National Operational Hydrologic Remote Sensing Center (NOHRSC). You can rest assured that most of northern Canada has a solid snowpack too. All of this snow worked like a kind of refrigerator, keeping that cold air cold as it moved south.
We can see this cold air mass nicely outlined by looking at temperatures above the surface. Just above the surface, at 850 mb, this morning's NAM analysis shows -30 to -35 Celsius temperatures over the upper Midwest:
When did we last have temperatures this cold? One ever-developing tool in meteorology is the ability to search for analogs: example patterns of atmospheric conditions in the past that are very similar to what we are experiencing now or expecting in the forecast. The cold wave of January 1994 is a good analog for our current outbreak, though that cold wave was a bit longer in duration. Here's an example of the 850 mb temperatures from the middle of that event:
A similar pool of very cold air over the upper Midwest. That's -36 degrees Celsius over northern Wisconsin and Minnesota. Wikipedia's brief page about this event has the following nuggets of trivia:
- Chicago got down to -21 Fahrenheit with wind chills down to -55
- Major snowfalls across the eastern half of the US
- Reagan National Airport in DC had a record low high temperature (for the 20th Century) of 8 Fahrenheit
- Pittsburgh got down to a record low -22 Fahrenheit
Do these headlines sound at all familiar? Maybe it's because we're seeing something similar again:
- Chicago got down to -17 Fahrenheit this morning with wind chills down to -42 Fahrenheit
- The 11 inches of snow that has fallen in Chicago is the most in a single event since 2011
The bulk of the coldest air is just moving into the eastern part of the US right now. Given how similar this event seems to be to the 1994 case, we might, by analog, expect similarly chilly temperatures for DC and Pittsburgh later. It doesn't look like we'll get quite as cold (DC is forecast to have a high of 17 tomorrow and Pittsburgh is only forecast to fall to -10), but still, this gives us an idea of what to expect.
Another side of this kind of analog forecasting is that we can use it to try to describe the more abstract effects of weather on lives and property. Describing the societal impacts of weather is something the meteorological community still struggles to do well. We can give you all the numbers you want for how cold it will get or how strong the winds will blow, but what does that mean for people's health and safety? We've developed some systems to try to address this; the Saffir-Simpson Hurricane Scale, for instance, relates the wind speed of a hurricane to the type of damage we might expect from such a storm. But we don't have these kinds of scales for everything, and they are not perfect.
By looking at past analogs, we can get an idea of what we could expect based on what happened last time. For instance, during the cold of January 1994, over 100 people died due to consequences of the cold weather. Quoting from that Wikipedia article, United cancelled over half its flights, pipe explosions due to freezing cut off water to thousands of homes, Chicago schools closed, and hundreds of drivers in Chicago could not start their cars due to the cold.
What do we have right now? O'Hare is backed up four days on outbound flights due to cancellations from the snow and cold, Chicago's schools are closed... we'll have to wait to see how many people suffer from exposure or lose their utilities. My point is that by turning to past experiences through analog forecasting, we can get a hint of the kinds of consequences of the weather that our models cannot predict. This is a powerful forecasting tool that is only beginning to be exploited...