HSLC Poster at the AMS Student Conference

Earlier this month, NC State student Keith Sherburn presented a poster documenting HSLC convection at the 12th Annual AMS Student Conference during the AMS Annual Meeting in Austin, TX. The poster addressed two primary themes: the first half focused on a general nationwide climatology of HSLC significant severe weather, while the second half discussed ongoing research to improve the forecasting of HSLC significant severe events. The poster garnered a lot of attention, both from students—including Ashley Athey from Virginia Tech, who had helped identify some of the HSLC events for our development dataset—and non-students, such as Russ Schneider from SPC and Jeff Waldstreicher from Eastern Region Headquarters.

A PDF of the poster is attached to this blog entry for those interested. The poster does include multiple figures previously undocumented in CSTAR material, as it focuses on the national climatology of HSLC events and skill of composite parameters rather than just investigating our CSTAR CWAs.

Notably, in the climatology, some of the CSTAR CWAs saw very few HSLC significant severe reports between 2006 and 2011, with MHX and RAH, for example, averaging only one per year. The maximum in HSLC significant severe reports was in the JAN CWA, with a whopping 181 significant severe reports meeting our HSLC criteria. However, there is a pretty clear non-meteorological signal in some locations, such as JAN, PAH, and ILX, where prominent “hot” or “cold” spots in HSLC significant severe reports are likely attributable to different warning verification methods. On the other hand, the transition from HSLC tornadoes and winds in the Southeast, Mid-Atlantic, and Mississippi Valley to a primarily wind and hail threat in the Plains is likely meteorological.
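For readers unfamiliar with how reports are admitted to this climatology, a minimal sketch of the filtering step is below. The thresholds (SBCAPE ≤ 500 J/kg, 0–6 km bulk shear ≥ 35 kt) are the HSLC cut-offs used in this study, as described in the comments; the field names and sample values are hypothetical.

```python
# Sketch: filtering significant severe reports to those meeting the HSLC
# criteria (SBCAPE <= 500 J/kg and 0-6 km bulk shear >= 35 kt).
# Field names and the example reports below are hypothetical.

def is_hslc(report):
    """Return True if a report's environment meets the HSLC criteria."""
    return report["sbcape_jkg"] <= 500.0 and report["shear06_kt"] >= 35.0

reports = [
    {"type": "sigwind", "sbcape_jkg": 350.0,  "shear06_kt": 42.0},  # HSLC
    {"type": "sigtor",  "sbcape_jkg": 1800.0, "shear06_kt": 55.0},  # too much CAPE
    {"type": "sighail", "sbcape_jkg": 450.0,  "shear06_kt": 28.0},  # too little shear
]

hslc_reports = [r for r in reports if is_hslc(r)]
print(len(hslc_reports))  # 1
```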

Through the annual cycles, it becomes apparent that many regions contribute only a small fraction of the total U.S. HSLC significant severe reports. In the cool months, the majority of HSLC significant severe reports occur in the Southeast, Mid-Atlantic, and Mississippi Valley. This maximum shifts to the Plains and Midwest in the summer. An interesting topic discussed during the poster session was how many of these summertime events could be nocturnal MCSs, which would likely have low to nonexistent SBCAPE but plentiful MUCAPE and sufficient deep-layer shear to meet our criteria. The annual cycle including nulls also indicates that the relative frequency of nulls increases in the winter, suggesting a decrease in warning skill during that season. Diurnally, the main message is that these events, as documented previously, can occur at any time of the day, though they are relatively more common during the afternoon and evening. I plan to make an additional plot of the diurnal cycle using local hour, rather than UTC, since the national reports encompass four time zones.

In the last panel, the plot on the upper right indicates the maximum skill for each of the given composite parameters at discriminating between HSLC significant severe reports and nulls. Clearly, there is much regional variability when it comes to the skill of all parameters, including the SHERB and SHERBE. Regardless, in the regions encompassing our CSTAR CWAs (7 and 8), SHERB and SHERBE clearly outperform the other composite parameters. This indicates two things: 1) The HSLC environment in the Southeast and Mid-Atlantic appears to be unique compared to the rest of the country, and 2) More work must be done to improve the skill of our parameters for use in other regions if they are to be accepted nationally.
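The "maximum skill" in that panel is measured with the true skill statistic (TSS), as discussed in the comments below: for each candidate threshold of a parameter, TSS = POD − POFD is computed from the hits/misses/false alarms/correct nulls it produces, and the maximum over thresholds is plotted. A minimal sketch (the counts are made up for illustration):

```python
# Sketch: the true skill statistic (TSS) used to rank composite parameters.
# TSS = POD - POFD, computed from a 2x2 contingency table; a parameter's
# "maximum skill" is its TSS at the threshold that maximizes it.

def tss(hits, misses, false_alarms, correct_nulls):
    pod = hits / (hits + misses)                          # probability of detection
    pofd = false_alarms / (false_alarms + correct_nulls)  # probability of false detection
    return pod - pofd

# Made-up counts for illustration:
print(round(tss(hits=40, misses=10, false_alarms=30, correct_nulls=120), 3))  # 0.6
```

TSS ranges from −1 to +1, with 0 indicating no skill relative to random forecasts, which is why it is a natural yardstick for comparing parameters across regions with very different event frequencies.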

One potential failure point of the SHERB/SHERBE in other areas (or even in parts of our CSTAR region) is the use of 0-3 km and 700-500 mb lapse rates, as these will overlap as elevation increases. A possible solution is to use 0-3 km and 3-6 km lapse rates; unfortunately, 3-6 km lapse rates are not included in our SPC relational databases, so we were not able to test their utility.
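A quick standard-atmosphere calculation illustrates the overlap problem. In the ISA, 700 mb sits near 3.0 km MSL and 500 mb near 5.6 km MSL, so for a station much above roughly 1 km elevation the 0–3 km AGL layer begins to intrude into the 700–500 mb layer. The sketch below uses the approximate ISA pressure-height relation; it is illustrative only, not how the SPC database computes heights.

```python
# Sketch: why the 0-3 km AGL and 700-500 mb lapse-rate layers overlap at high
# elevation, using an approximate standard-atmosphere pressure-height relation.

def isa_height_m(p_hpa, p0=1013.25):
    """Approximate ISA geopotential height (m MSL) of a pressure level."""
    return 44330.77 * (1.0 - (p_hpa / p0) ** 0.190263)

def overlap_m(station_elev_m):
    """Depth (m) shared by the 0-3 km AGL layer and the 700-500 mb layer."""
    agl_bot, agl_top = station_elev_m, station_elev_m + 3000.0
    p_bot, p_top = isa_height_m(700.0), isa_height_m(500.0)  # ~3.0 and ~5.6 km MSL
    return max(0.0, min(agl_top, p_top) - max(agl_bot, p_bot))

print(round(overlap_m(0.0)))     # near sea level: essentially no overlap
print(round(overlap_m(1600.0)))  # a High Plains station: over 1.5 km of overlap
```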

Also, through recent analysis, it appears that many of the SHERB false alarms in the Plains occur when LCLs and 0-3 km lapse rates are both high (i.e., the old dry boundary layer problem that has been discussed previously). The attached figure shows cumulative distribution functions (CDFs) of 0-3 km lapse rate (LLLR) and surface LCL height (SLCH) for SHERB or SHERBE false alarms (FA), correct nulls (CN), hits, and misses for region 9 (see map in poster, or more generally, the Southern Plains). For the SHERB FA, note that a considerable fraction of the total distribution had high LLLRs and high SLCHs (e.g., over 60% of the FA had SLCHs above 1500 m, while over half had LLLRs above 8 K/km). Thus, we may be able to put a limit on the contribution of the 0-3 km LR term or alternatively add an LCL fade out in order to improve the skill in that region.
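One hypothetical form such an LCL fade-out could take is a linear weight that leaves the parameter untouched at low LCL heights and zeroes it out once the boundary layer is deep and dry. The bounds below (1000 m and 2000 m) are placeholders loosely motivated by the 1500 m figure in the CDFs above; this is an untested illustration, not part of the published SHERB formulation.

```python
# Sketch: a hypothetical LCL "fade-out" multiplier for the SHERB, motivated by
# the false-alarm distributions (many FA had SLCH > 1500 m). The 1000 m and
# 2000 m bounds are placeholder assumptions, not tested values.

def lcl_fadeout(slch_m, full_weight_m=1000.0, zero_weight_m=2000.0):
    """Linearly fade a weight from 1 (low LCL) to 0 (high, dry boundary layer)."""
    if slch_m <= full_weight_m:
        return 1.0
    if slch_m >= zero_weight_m:
        return 0.0
    return (zero_weight_m - slch_m) / (zero_weight_m - full_weight_m)

for slch in (500.0, 1500.0, 2500.0):
    print(slch, lcl_fadeout(slch))  # weights of 1.0, 0.5, and 0.0
```

Multiplying the SHERB by such a weight would leave the cool-season, moist-boundary-layer cases that motivated the parameter unchanged while suppressing the high-LCL Plains false alarms.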

If anyone has any comments, suggestions, or questions, please share.



This entry was posted in CIMMSE, Convection, CSTAR, High Shear Low Cape Severe Wx.

3 Responses to HSLC Poster at the AMS Student Conference

  1. Jonathan Blaes @ WFO RAH says:


    Thanks for sharing the poster and providing the update. Can you remind me how a significant severe HSLC event is defined in your study? The limited number of events for RAH, MHX, and even AKQ still surprises me, along with the maxima located at RNK and GSP. The limited dataset might be a source of some of this.

    It is encouraging though that the SHERB performs well in sectors 7 and 8, but I guess that would not be surprising since the choice of parameters in the SHERB was based on events/nulls in the CSTAR domain, right?

    One other thing, I still find the true skill statistic charts very interesting, especially the indication of the performance of various composite parameters for the significant HSLC events. The different maxima of these parameters appear skillful, just at different values from what we may typically associate with other types of severe weather.

  2. Keith Sherburn says:


    Thanks for the response.

    When we were developing our parameter and testing its skill for our CSTAR domain, we were using what we defined as HSLC “events” — where over half of the severe reports for a given event had to occur within an HSLC environment in order to be kept in the dataset. However, in this climatology, an HSLC significant severe report uses the strict 500 J/kg SBCAPE and 35 kt 0-6 km shear cut-off. In other words, the reports shown on the maps (which are also the reports used for testing the parameters in the charts) were significant tornado, wind, or hail reports corresponding to a data point with less than or equal to 500 J/kg of SBCAPE and greater than or equal to 35 kt of 0-6 km shear.

    I was very surprised, as well, to see the lack of significant HSLC reports in the dataset for some of our CWAs. It seems that the number of significant tornadoes in the CWAs east of the Appalachians is fairly consistent, but the number of significant hail and significant wind reports varies considerably. I’m curious to hear interpretations of this feature in the climatology.

    You’re right; the SHERB was designed based on HSLC “events” in the CSTAR domain. However, I also recently tested the skill of the SHERB/SHERBE without including our CSTAR domain (i.e., just removing all of the reports and nulls from our 11 CWAs from the TSS calculations). Without our CSTAR domain, the SHERB skill in region 7 actually *increased*, while the skill in region 8 remained about the same. The SHERBE skill in both regions decreased slightly, but both the SHERB and SHERBE still outperformed all other composite parameters in regions 7 and 8. Thus, the skill is not just a consequence of utilizing our domain to develop the parameter.

    I agree regarding the TSS charts. For instance, the STP still seems to be quite skillful for HSLC tornadoes — more skillful than any other parameter nationwide. However, its optimal value is around 0.25. Is that really useful for forecasters? Perhaps with adequate situational awareness.

  3. justingsp says:

    “I’m curious to hear interpretations of this feature in the climatology.”

    Hey Keith, my apologies for not responding earlier, but I have a pretty straightforward answer to this query, at least in terms of downburst wind. Essentially, there is no guidance for WFOs to assign a maximum gust speed to a downburst event in Storm Data. We are required by directive to assign something, but how it’s done likely varies widely from WFO to WFO. We almost never have actual wind speed data to verify a warning/event, so it is usually done based upon impact. Since the vast majority of our events are limited to downed trees/power lines, assigning a gust speed to these events for Storm Data purposes becomes very arbitrary, as non-meteorological conditions come into play (i.e., what were the soil conditions? were the trees healthy? how big were they? were leaves on/off the trees? etc.). With that in mind, some WFOs may assign 65 kt or greater to an event fairly frequently, while others may never assign a wind speed that high, barring actual data. After about 20 years of looking at Storm Data, I’ve learned to view quirks in the record as being a result of imperfect data.
