Is It Useful to Filter SHERB Values?

A system producing another round of elevated SHERB values will cross the western Carolinas and northeast Georgia Monday afternoon and evening. We are noticing, by inspection, that the model SHERB grids are consistently highlighting significant (1.0+) values on days that require more attention for possible HSLC events. We are also noting, as has been mentioned in other posts, that high SHERB values are frequently evident across the entire grid even though only a part of the forecast area will be at risk for HSLC events.

Below is the SHERB grid output from the 00 UTC 16 December 2012 GFS valid at 18 UTC, Monday, 17 December 2012.  (SHERBet color table courtesy of Pat Moore.)  You will note that rather robust SHERB values populate nearly the entire grid – including northern mountain and foothill locations that are not expected to be very unstable at the valid time.

SHERB computed from 00 UTC 16 Dec. 2012 GFS valid at 18 UTC 17 Dec. 2012

Below is the GFS surface based CAPE field – showing weak instability wrapping mainly around the southern periphery of the forecast area at 18 UTC.

GFS surface-based CAPE from 00 UTC 16 Dec. 2012 run valid at 18 UTC 17 Dec. 2012

Might it be instructive to create a version of the SHERB grid that filters by instability in order to more effectively visualize the main HSLC threat area?  Suppose we force SHERB to zero in locations where surface-based CAPE is zero, allow the full SHERB values in locations where “sufficient” CAPE exists for convection, and scale linearly in between based on the ratio of CAPE to the chosen threshold.

I’ve done just that in the figure below – selecting a minimum CAPE value of 300 J/kg for the full SHERB region.  Areas with zero CAPE are assigned zero in the SHERB grid, and the remainder is scaled in the ratio of CAPE/300.  A fairly simple adjustment to the SHERB smart tool was needed to accomplish this.
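The adjustment described above amounts to a simple element-wise scaling of the grid. Here is a minimal NumPy sketch of that logic; the function name and the `cape_full` parameter are illustrative, not the actual GFE smart tool code:

```python
import numpy as np

def filter_sherb_by_cape(sherb, sbcape, cape_full=300.0):
    """Scale SHERB by surface-based CAPE.

    SHERB is zeroed where SBCAPE is zero, kept at full value where
    SBCAPE >= cape_full (J/kg), and scaled linearly (CAPE/cape_full)
    in between -- mirroring the adjustment described above.
    """
    scale = np.clip(np.asarray(sbcape, dtype=float) / cape_full, 0.0, 1.0)
    return np.asarray(sherb, dtype=float) * scale

# Example: three grid points with SBCAPE of 0, 150, and 600 J/kg
sherb = np.array([1.2, 1.2, 1.2])
cape = np.array([0.0, 150.0, 600.0])
print(filter_sherb_by_cape(sherb, cape))  # [0.  0.6 1.2]
```

The `np.clip` call is what produces the linear ramp between the zero-CAPE and full-SHERB regions.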

GFS SHERB values (as above) filtered based on CAPE.

In actuality, positive surface-based CAPE values from both the NAM and GFS do eventually spread across much of the foothills and piedmont between the hours of 21 UTC and 03 UTC Monday evening, but the 18 UTC example nicely illustrates the point.

The advantage of using filtered SHERB values is that it helps the forecaster focus on both the timing and location of the higher-threat areas, and a sequence of filtered SHERB grids can be animated to watch the progression of those regions across the forecast area.  The disadvantage is that model CAPE values are notoriously sensitive to surface dewpoints (which is one of the reasons for looking at the SHERB terms to begin with).

Is this useful?

Harry G.

5 Responses to Is It Useful to Filter SHERB Values?

  1. Hi, Harry — I think you’re on a good track here. I’m not as familiar with the details of the SHERB parameter development as others are, but this would seem to be a step in the right direction toward refining the threat area and minimizing the FAR of the SHERB alone. I look forward to seeing what others have to say. Great work! -Gail

  2. mdparker says:

    Hi All,

    This is a tricky business. SHERB was not designed to diagnose the probability that convection will develop. Its meaning is more akin to: “given a storm, what is the likelihood of a sig severe report?”. This is an important distinction, because it means that forecasters need to use other tools in their forecast process for assessing the chances for *any* convection, and *then* look at the SHERB. One of the ways we have taken to looking at it here is by plotting a chart of MUCAPE *alongside* SHERB, so that the two pieces of information complement one another. See, for example:

    But, it’s reasonable to ask: why doesn’t SHERB=0 when CAPE vanishes? Some time ago, we asked the question: how confident are you that a model can reliably discriminate between CAPE=0 J/kg and CAPE=100 J/kg in marginal environments? How about 100 vs 300 J/kg? Any sort of “fade out” driven by CAPE will necessarily be at the mercy of the model CAPE prog, and I’m not sure we want a situational awareness parameter that goes to 0 if the model underestimates the dewpoint by a degree or two and consequently undershoots CAPE by 200 J/kg. We aren’t even sure how well the mesoanalysis (*analysis!*) does with CAPE in these situations.

    That said, I think Keith found that CAPE does indeed have some skill as a parameter in these situations. And, it would be easy enough to run the skill scores for some kind of modified SHERB just to see how it does. In a sense, this is what SHERBE (SHERB using the effective layer bulk shear) does, because if CAPE=0 J/kg then the effective bulk shear is also =0 by definition (and also for small CAPE situations, the effective layer tends to be rather shallow and so the effective bulk shear is correspondingly lower). I believe Keith has found that SHERB *does better* than SHERBE (i.e. the CAPE-independent formulation out-performs the low/no-CAPE-penalty one) for all sig reports, but not for tornadoes only. Keith can clarify if I am mis-remembering the facts here, though. But, again we have the caveat: the hits and nulls in our study are *all* convective cases. In other words, we are not assessing whether storms will develop. Storms exist on both the hit and null events, and we are trying to separate the sig severe storms from the non-severe.
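    As a sketch of what “running the skill scores” for a modified SHERB might look like, below is the true skill statistic (TSS) evaluated for a yes/no forecast at a parameter threshold. The function names, the 1.0 threshold default, and the toy data are all illustrative; this is not the study’s actual verification code:

```python
def true_skill_statistic(hits, misses, false_alarms, correct_nulls):
    """TSS = POD - POFD; assumes at least one event and one null in the sample."""
    pod = hits / (hits + misses)
    pofd = false_alarms / (false_alarms + correct_nulls)
    return pod - pofd

def score_threshold(values, is_event, threshold=1.0):
    """Build the 2x2 contingency table for a parameter threshold, return TSS."""
    pairs = list(zip(values, is_event))
    hits = sum(1 for v, e in pairs if v >= threshold and e)
    misses = sum(1 for v, e in pairs if v < threshold and e)
    fas = sum(1 for v, e in pairs if v >= threshold and not e)
    cns = sum(1 for v, e in pairs if v < threshold and not e)
    return true_skill_statistic(hits, misses, fas, cns)

# Toy example: two sig-severe events and two nulls, SHERB threshold of 1.0
print(score_threshold([1.2, 1.1, 0.5, 0.3], [True, True, False, False]))  # 1.0
```

    The same routine could be run on the CAPE-filtered values to compare skill directly against the unmodified SHERB.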

    I hope this information helps. Perhaps there is a creative way to get all of the information you need by contouring one field and shading another in your software?

    All the best,

  3. Keith Sherburn says:


    I think it’s great that the products of our research are being examined real-time. It certainly gives us another set of eyes to identify potential issues with the SHERB, but it also gives us an opportunity to clarify some of our methods and thought processes when developing the parameter.

    As Matt mentioned, the SHERB (and the SHERBE, for that matter) was not designed to explicitly forecast the occurrence of convection. Those of you in attendance on past HSLC conference calls or the CSTAR workshop may recall that my introduction of the SHERB/E included the idea of a decision tree, and the foundation (or roots, I suppose) of that tree was the fact that convection had to be expected in order to diagnostically utilize the SHERB/E. The data points in our statistical analysis were either significant severe reports or unverified warnings; thus, all of our data points were associated with (at least) potentially severe convection. The idea of developing the parameter was to discriminate between convection that was likely to remain sub-severe vs. convection that had the potential to produce significant severe weather. Thus, I still think it should be a two step process:

    1) Is convection forecast?
    2a) If yes, check the SHERB values to determine if convection will be potentially significantly severe.
    2b) If no, then you may want to check the SHERB values just in case unexpected convection develops, but that is ultimately at the user’s discretion.
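    The two-step process above could be encoded as a simple guard; the sketch below is hypothetical (function name and wording are mine, not an operational tool), just to make the ordering of the checks explicit:

```python
def hslc_guidance(convection_expected, sherb, sig_threshold=1.0):
    """Two-step HSLC check mirroring the decision tree described above."""
    if not convection_expected:
        # Step 2b: SHERB is only an optional check for unexpected storms
        return "no convection forecast: SHERB check optional (unexpected storms)"
    if sherb >= sig_threshold:
        # Step 2a: convection expected AND parameter at/above threshold
        return "convection forecast and SHERB at/above threshold: sig severe possible"
    return "convection forecast, but SHERB below threshold"
```

    The point is that the SHERB value is only interpreted after the convection question is answered, never in place of it.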

    Also, it is ultimately up to the user to decide whether a SHERB >= 1 when the answer to the first question is “no” is actually a false alarm. An alternative method to using CAPE would be only shading SHERB values where thunderstorm wording is used in the grids (although, not all of these events have thunder, either, so I’m not sure how this has been handled in the past in grids). Further, the SHERBE could be shaded instead–or additionally–if effective shear magnitude is available on GFE/AWIPS (if I recall, it is). As mentioned above, the SHERBE will naturally fade out as MUCAPE fades due to the CAPE dependence inherent to effective shear. It could be useful to utilize both the SHERBE and SHERB in tandem to increase confidence about potential threats. From the statistics in our development dataset, it seems that the SHERBE is a bit more robust when picking out the significant tornadoes from the nulls, while the SHERB is more skillful at discriminating between significant winds and nulls. However, their overall skill seems to be pretty comparable (in our development dataset, the SHERB was slightly more skillful overall in our CSTAR domain; in the verification dataset, the SHERBE is more skillful).

    Ultimately, I think it’s up to the offices to decide how exactly they want to implement the SHERB/E. In my opinion, it would be beneficial to use both parameters for now in order to gauge when they perform well and when they don’t, given that we have not yet utilized either parameter in a real-time manner.

  4. gerapetritis says:

    Thanks, Keith, Matt, and Gail for the comments. For my example, I really didn’t want to use the SHERB grid to hold the output of this “SHERB-modified-by-CAPE” composite, because I realize it is then no longer SHERB as defined. But, alas, there was too much IT overhead involved in creating a new grid on the fly while working a near-term forecast desk, so I just created the composite directly in the SHERB grid.

    I think what the forecasters are ultimately looking for is some kind of “Probability of HSLC Severe Convection” guidance, perhaps akin to what SPC is doing with the experimental calibrated severe thunderstorm guidance post-processed from the SREF for general severe convection. The best way to visualize the threat may be to have a suite of gridded guidance, possibly displaying SHERB, SHERBE, various model CAPE and/or MUCAPE fields, along with forecaster-created or derived probability of thunderstorm (PoT) or probability of convection (PoC) grids. A composite of these could then be created. Whether we would call it a true “probability” or just some sort of “threat” level grid would need to be determined. Eventually, we’d like to do something similar with Matt Eastin’s (UNCC) Tropical Cyclone Tornado Parameter (TCTP), the more traditional Supercell Composite Parameter (SCP), and perhaps some as-yet-undevised severe pulse storm prediction method in order to have a full guidance suite for all modes of severe convection.

    Harry G.

  5. Keith Sherburn says:

    Harry (et al.),

    I think the Probability of HSLC Severe Convection idea is a good one, and I think the SHERB/SHERBE would be an ideal final component to that sort of probabilistic forecast. I’ve been observing the ongoing HSLC severe episode in the southeast, and based on SPC Mesoanalysis and observed soundings, it seems that the SHERBE is doing an admirable job with this system. The SHERB, on the other hand, as with the last system, has its maxima offset from the main area of convection. More than likely, there will be some occasions where the SHERB does well and others where the SHERBE is superior, but from the past couple of systems, it seems like the SHERBE may be the way to go in the future. Of course, neither parameter will do well *every* time. I think there are a lot of factors not incorporated into the parameters that could lead to sensitivities, such as moisture content, undetected capping inversions, and synoptic/mesoscale forcing, to name a few.

    Also, for what it’s worth, it seems that the SPC Mesoanalysis (and thus, the RAP) was poorly diagnosing mid-level lapse rates this morning, based on comparisons with the BMX sounding. The observed 700-500 mb lapse rate was 7.2 C/km, while the RAP had values < 6 C/km. This would lead to pretty substantial underestimation of SHERB/E values.
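    For reference, the layer lapse rate follows directly from the temperature difference and layer thickness. The heights in the sketch below are hypothetical values chosen only to reproduce the observed 7.2 C/km figure; they are not the actual BMX sounding data:

```python
def lapse_rate(t_lower_c, t_upper_c, z_lower_m, z_upper_m):
    """Environmental lapse rate (C/km) between a lower and upper level."""
    return (t_lower_c - t_upper_c) * 1000.0 / (z_upper_m - z_lower_m)

# Hypothetical 700/500 mb temperatures (-2 C, -20 C) and heights
# (3000 m, 5500 m) that yield the observed 7.2 C/km value
print(lapse_rate(-2.0, -20.0, 3000.0, 5500.0))  # 7.2
```

    With the same thickness, a RAP-like lapse rate under 6 C/km would correspond to a temperature difference several degrees smaller through the layer.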

