Update on HSLC Environmental Climatology WAF Article

As noted in the recent HSLC conference call, Matt and Keith plan to submit an article to AMS’s Weather and Forecasting within the next two weeks. This article will focus on a nationwide climatology of HSLC significant severe weather in addition to the parameter-based work detailed through conference calls and blog posts over the last several months.

While writing the article, the methods used to develop the SHERB were revisited. In our initial development, we utilized HSLC event and null datasets from our CSTAR region. Both datasets used archived SPC Mesoanalysis (aka Surface Objective Analysis, or SFCOA) data; however, the events database took SFCOA data from the nearest grid point at the preceding hour, while the null database points were spatially interpolated. To alleviate these differences, the development methods were repeated using a new null dataset provided by the SPC that utilized the nearest grid point at the preceding hour, consistent with the events dataset.

After re-calculating skill scores for individual environmental parameters, it was found that the two lapse rates used in the SHERB formulation (the 0-3 km, or low-level – LLLR, and 700-500 mb – LR75) were the two conditionally most skillful parameters, and using a product of these lapse rates considerably improved the skill at discriminating between HSLC significant severe reports and nulls over conventional composite parameters. However, the third conditionally most skillful parameter was somewhat less obvious, as multiple wind and shear parameters combined with the lapse rates exhibited skill. This was further compounded by the fairly small sample size when attempting to determine the third conditionally most skillful parameter, which led to differing results between our TSS tests and subsequent Monte Carlo simulations. As a result, we tested multiple formulations of the “SHERB” (defined in the article as the product between the LLLR, LR75, and a wind/shear parameter; see Table 1) across our development dataset and verification dataset.
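The TSS tests described above amount to sweeping a decision threshold across each candidate parameter and scoring the resulting yes/no forecasts against the event/null labels, where TSS = POD − POFD. Here is a minimal sketch of that procedure (the function name and the toy values are illustrative, not from our datasets):

```python
import numpy as np

def max_tss(values, is_event):
    """Sweep candidate thresholds over a parameter and return the maximum
    true skill statistic (TSS = POD - POFD) and the threshold achieving it."""
    values = np.asarray(values, dtype=float)
    is_event = np.asarray(is_event, dtype=bool)
    best_tss, best_thresh = -1.0, None
    for thresh in np.unique(values):
        forecast = values >= thresh  # forecast "yes" at or above the threshold
        hits = np.sum(forecast & is_event)
        misses = np.sum(~forecast & is_event)
        false_alarms = np.sum(forecast & ~is_event)
        correct_nulls = np.sum(~forecast & ~is_event)
        pod = hits / (hits + misses) if hits + misses else 0.0
        pofd = false_alarms / (false_alarms + correct_nulls) if false_alarms + correct_nulls else 0.0
        if pod - pofd > best_tss:
            best_tss, best_thresh = pod - pofd, thresh
    return best_tss, best_thresh

# Toy example: events tend to have larger parameter values than nulls.
vals = [0.4, 0.6, 0.9, 1.1, 1.3, 1.5]
events = [False, False, False, True, True, True]
tss, thresh = max_tss(vals, events)
# Perfect separation at a threshold of 1.1 gives TSS = 1.0
```

A Monte Carlo test then repeats this after randomly shuffling the event/null labels, to gauge whether a parameter's apparent skill edge could arise by chance in a small sample.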

Within the regional development dataset, multiple versions of the SHERB were more skillful than conventional composite parameters at discriminating between HSLC significant severe reports and nulls (Table 2). In particular, the SHERB6 (using the 6 km wind magnitude) and SHERBS6 (using the 0-6 km shear magnitude) were especially skillful. When looking at just HSLC significant tornadoes against nulls, the SHERBE and the original SHERB (i.e., SHERBS3) stood out as best performers (Table 3). These trends continued when investigating nationwide skill, as shown in Table 4. Also note that the skill of the SHERB6 and SHERBS6 diminished in the nationwide verification dataset, suggesting their skill is conditional.

Here is a summary of some findings from skill and climatology comparisons over multiple subsets:

  • In the winter (DJF), deep-layer shear and winds appear to be most skillful in conjunction with LR75 and LLLR, suggesting that in highly dynamic, strongly forced environments, system propagation speed and momentum transfer are important discriminators between events (particularly significant winds) and nulls. Lapse rates are crucial in facilitating this momentum transfer.
  • Regionally, the typical “regime” of HSLC significant severe weather varies: across our CSTAR region, surface-based, low LCL cases are most common; across the Plains and Midwest, elevated cases are most common, and in the far western U.S., high-based, dry boundary layer cases are common. All of these are possible given our definition of HSLC environments.
  • As a result, the most skillful parameter varies from region to region. The SHERBE is particularly skillful in these “CSTAR-style” regimes and elevated regimes, but using SRH rather than shear or winds seems to have greater utility in high-based cases.
  • However, nationwide (and in our region), the SHERBS3 (i.e., original SHERB) and SHERBE are the most appropriate for widespread use due to their relatively consistent skill and optimal thresholds when compared to conventional parameters and other SHERB formulations.

Ultimately, there seems to be no one parameter to encompass all potential HSLC hazards, which is what should be expected. After all, there is no magic bullet parameter (which is good for job security!). However, the SHERBS3 and SHERBE can continue to be used with confidence as guidance tools in HSLC environments when convection is anticipated.

After the new tests, there are slight adjustments to the parameters’ normalization values. The LR75 term is now normalized by 5.6 K/km, while the shear terms are normalized by 26 m/s (SHERBS3/SHERB) and 27 m/s (SHERBE). If there is interest in testing the other SHERB formulations, let us know, and we can post the normalization values for the other wind/shear parameters.
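Putting the normalization values together with the formulation defined earlier (a product of the normalized LLLR, LR75, and a wind/shear term), the SHERB can be sketched as below. The LLLR normalization of 5.2 K/km comes from Keith's reply in the comments; the function names are just for illustration:

```python
def sherb(lllr, lr75, shear, shear_norm):
    """Generic SHERB: product of the normalized 0-3 km lapse rate (K/km),
    700-500 mb lapse rate (K/km), and a wind/shear term (m/s)."""
    return (lllr / 5.2) * (lr75 / 5.6) * (shear / shear_norm)

def sherbs3(lllr, lr75, shear_0_3km):
    # Original SHERB: 0-3 km shear magnitude, normalized by 26 m/s
    return sherb(lllr, lr75, shear_0_3km, 26.0)

def sherbe(lllr, lr75, effective_shear):
    # Effective-shear version, normalized by 27 m/s
    return sherb(lllr, lr75, effective_shear, 27.0)

# At exactly the normalization values, each version evaluates to 1.0,
# consistent with the optimal thresholds near 1.0 in Tables 2 and 3.
print(sherbs3(5.2, 5.6, 26.0))  # -> 1.0
```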

 

Table 1. Wind and shear magnitude parameters exhibiting skill as the third conditionally most skillful parameter in the development dataset using TSS tests or Monte Carlo simulations.

Parameter Label When Combined With Lapse Rates
1 km wind magnitude SHERB1
3 km wind magnitude SHERB3
6 km wind magnitude SHERB6
Cloud-bearing layer mean wind magnitude SHERBC
0-1 km shear magnitude SHERBS1
0-3 km shear magnitude SHERBS3
0-6 km shear magnitude SHERBS6
Effective shear magnitude SHERBE
0-1 km storm relative helicity SHERBH1
0-3 km storm relative helicity SHERBH3

 

Table 2. Maximum true skill statistic (TSS), optimal threshold, and integrated area under the ROC curve (AUC) for given composite parameters at discriminating between HSLC significant severe reports and nulls within the development dataset. Composite parameters include the Craven-Brooks Significant Severe Parameter, the Energy Helicity Index (EHI), the Supercell Composite Parameter (SCP), the Significant Tornado Parameter (STP), and the Vorticity Generation Parameter (VGP).

Parameter Maximum TSS Optimal Threshold AUC
Craven-Brooks 0.252 3000 0.659
EHI 0.323 0.23 0.681
SCP 0.175 5.04 0.582
STP 0.332 0.19 0.662
VGP 0.331 0.07 0.660
SHERB1 0.368 1.00 0.722
SHERB3 0.484 1.00 0.793
SHERB6 0.574 1.00 0.806
SHERBC 0.500 1.01 0.796
SHERBS1 0.370 0.99 0.723
SHERBS3 0.499 1.01 0.788
SHERBS6 0.513 0.99 0.805
SHERBE 0.482 1.00 0.725
SHERBH1 0.318 0.97 0.677
SHERBH3 0.322 0.98 0.678

 

Table 3. As in Table 2, but for HSLC significant tornado reports and nulls.

Parameter Maximum TSS Optimal Threshold AUC
Craven-Brooks 0.391 4500 0.733
EHI 0.489 0.23 0.772
SCP 0.297 0.63 0.668
STP 0.501 0.25 0.764
VGP 0.442 0.07 0.746
SHERB1 0.399 1.00 0.736
SHERB3 0.481 0.98 0.795
SHERB6 0.531 1.00 0.774
SHERBC 0.442 0.87 0.775
SHERBS1 0.428 0.86 0.752
SHERBS3 0.539 1.03 0.808
SHERBS6 0.531 1.05 0.789
SHERBE 0.588 1.01 0.832
SHERBH1 0.357 1.03 0.723
SHERBH3 0.378 0.98 0.729

 

Table 4. Maximum true skill statistic (TSS) using any threshold for (second column) all HSLC significant severe reports against nulls, (third column) HSLC significant tornadoes against nulls, (fourth column) HSLC significant winds against nulls, and (fifth column) HSLC significant hail reports against nulls within the nationwide verification dataset.

Parameter All Tornadoes Wind Hail
Craven-Brooks 0.248 0.432 0.202 0.327
EHI 0.301 0.532 0.244 0.355
SCP 0.272 0.440 0.203 0.377
STP 0.227 0.583 0.189 0.177
VGP 0.206 0.494 0.156 0.194
SHERB3 0.221 0.561 0.208 0.114
SHERB6 0.196 0.412 0.177 0.142
SHERBC 0.213 0.517 0.204 0.109
SHERBS3 0.243 0.540 0.216 0.194
SHERBS6 0.204 0.393 0.171 0.195
SHERBE 0.284 0.500 0.212 0.366
This entry was posted in CSTAR, High Shear Low Cape Severe Wx.

6 Responses to Update on HSLC Environmental Climatology WAF Article

  1. nwscwamsley says:

    Well done !!

  2. Jonathan Blaes @ WFO RAH says:

    Keith and the gang,

Very nice work here. There is a lot to process. The SHERB6 seems to be optimal for all significant severe, with the SHERBE most optimal for the tornado cases.

    Can you remind me what the normalizer is for the 0-3km lapse rate (LLLR)?

Any thoughts on why the SRH was not a particularly good discriminator for the tornado events compared to the other formulations? This suggests that other processes may be at play.

    Can you describe how critical the wind/shear parameters are to the success of the various SHERB formulations?

    It will be interesting to see if the case studies provide additional insight into the roles of the lapse rates on the HSLC significant events.

  3. Keith Sherburn says:

    JB,

    Thanks for the comment. I’ll address all of your comments/questions in turn.

    Yes, in terms of the development dataset, both the SHERB6 and the SHERBS6 show remarkable skill. I think this corresponds to the large number of cool season significant wind events in the development dataset, which is where the deep-layer shear parameters really show substantial skill. Note that the SHERBC (using the mean cloud-bearing layer wind; in general, also a deep layer) also does quite well in these events. The SHERBE and SHERBS3 are both skillful when examining just tornado cases, and the SHERBE especially stands out with hail cases.

    The normalization value for the LLLR is 5.2 K/km.

The SRH struggles are an interesting topic. The formulations with the SRH struggled primarily in our development dataset compared to the verification dataset; in fact, on the nationwide scale, the SRH versions do quite well (particularly outside of the winter months). They are among the best-performing composite parameters during the spring and for high-based (i.e., LCL > 1000 m) cases. So why do they struggle in our development dataset? I suppose it is important to recall that SRH is based on some estimate of storm motion assuming a right-moving supercell. As Jason’s work has shown, though the majority of HSLC tornadoes in our development dataset occurred with some sort of supercell (discrete or embedded within lines/clusters), a substantial fraction are from non-supercells. Perhaps in these non-supercell cases (or even some of the supercell cases, given the non-uniformity of right-movers’ motions), the SRH is not a useful metric. However, I am interested in hearing some other impressions of this decreased skill.

    I’m not sure I completely understand your next question, but I think you’re asking, “How much does the skill of the SHERB depend on which wind/shear parameter you use?” I would say that, depending on the situation, it could vary a lot. A product of the lapse rates themselves, without using any shear/wind parameters, is quite skillful (maximum TSS of 0.462 in the development dataset). But depending on the regime, season, region, etc., several of the wind parameters, when combined with these lapse rates, can further augment the skill. Based on the nationwide verification dataset and their increased performance in tornado cases, we chose to suggest the use of the 0-3 km shear magnitude and effective shear magnitude in the article. I think that the wind or shear component used could “make or break” the SHERB, depending on the situation. It may require further testing to really gauge which situations require which wind/shear parameter. Let me know if this wasn’t the question you were trying to ask…

    I agree; I look forward to the case study results. It appears that even marginal lapse rates, when coupled with just the right amount of shear, can mean the difference between a significant severe and non-severe event. Hopefully we begin to understand this dependency a little better once the case studies are completed.

  4. justingsp says:

JB/Keith, the fact that SRH isn’t as skillful as shear magnitude in terms of identifying significant HSLC environments is certainly consistent as far as the QLCS end of the spectrum is concerned. Trapp and Weisman (2003) and Atkins and St. Laurent (2009) described different processes for mesovortex genesis within QLCSs, and neither of them had much to do with tilting of streamwise vorticity. One of our case studies (largely a broken-S/QLCS event) will show that environmental SRH was barely (if at all) favorable for tornadoes, especially considering the weak instability, yet multiple tornadoes occurred. (Granted, they were all quite weak.)

  5. justingsp says:

One thing I forgot to mention: another thing that I’ll be showing in BOTH of my case studies is that the Bunkers method was not the best means of estimating storm motion. The old 30/75 rule actually did a better (not perfect, but better) job of estimating the observed storm motion in both cases. This resulted in not-insignificant differences between SRH from my modified soundings and the SPC mesoanalysis.

  6. Keith Sherburn says:

    Justin,

    Good points. You’re right–most of the QLCS mesovortex work cites crosswise, rather than streamwise, vorticity as the primary contributor. Also, I’ve heard a lot about the Bunkers method of storm motion being inaccurate at times. I look forward to seeing your case study comparisons.
