Recent Conference Paper on QLCS Tornadoes

I was reading an interesting and topical conference paper today, “Tornado warning services for misoscale circulations in quasilinear convective systems,” by Kevin Scharfenberg and collaborators, presented at the 25th Conference on Severe Local Storms this past October. Link below…

In the article, “the authors question whether tornado warnings are appropriate for misoscale circulations due to the large percentage that will quickly dissipate, the unpredictable nature of the few that will strengthen, and the expected brief remaining life span of any detected strong misocyclones relative to the warning dissemination cycle.” It is a question that has relevance to our project. I suspect that many of the tornadoes that we see in HSLC environments, especially the ones associated with features such as the “Broken-S”, are related to these “misocyclones” that grow in size and strength enough to produce damage.

I think we can all agree that tornadoes in HSLC environments strain the limits of detectability and predictability, yet we are constrained by a performance-management system that scores a warning for a weak EF-0 tornado the same as one for a devastating EF-4. Until the computation of tornado warning verification statistics changes to account for the strength and duration of the tornado (and there is no indication it will), we need to put forth our best effort to detect and warn for the “misocyclone” tornadoes. Whether or not we meet our GPRA goals depends on it.

I am not trying to debate the merits of our current system of performance management. What I am suggesting is that perhaps one of the outcomes from the CSTAR project could be some measure of the predictability of tornadoes in HSLC environments with enough lead time to meet the GPRA goal. Perhaps an attempt could be made to quantify or draw the line between the tornadoes that are large enough to detect and warn with enough lead time versus the ones that aren’t. Can we define the “State of the Science” for predictability of HSLC tornadoes in the Southeast? Once that question is answered, it can enter into a discussion about policy.

Perhaps we could discuss this during our next conference call, which is set for Thursday 24 March at 10 am.


About nws-pat moore

B.S. Meteorology, State University of New York - College at Oneonta (1987); M.S. Meteorology, The Florida State University (1996); National Weather Service (12/3/90 to present), stationed at GSP since 8/16/98.
This entry was posted in CSTAR, High Shear Low Cape Severe Wx.

3 Responses to Recent Conference Paper on QLCS Tornadoes

  1. Steve Nelson says:


    Great post. Yes, I’ve read this, as have a few others in the group, I imagine. Quite controversial. Many disagree with the premise that damaging misocyclones are not tornadoes. Someone hit by a misocyclone producing EF2 or EF3 damage probably doesn’t care what the circulation was called; they wanted to be prepared for tornado-like damage!

    Rather than debate this here, let me review what we found with warning metrics. Of 30 EF2+ tornadoes studied from 2008-2010 over MS, AL, TN, and GA, the 8 (27%) QLCS tornadoes had an average lead time of 4.5 minutes and 75% PEW (percent event warned). The 22 supercell tornadoes had an average lead time of 17 minutes and 95% PEW. The study was expanded to include fall 2007-spring 2008 storms, with similar results. A student this semester is looking at FAR stats for both types. These are challenging storms to warn for indeed.
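For anyone curious how figures like these are tallied, here is a minimal sketch of the PEW and lead-time computation. The event records below are hypothetical placeholders to show the mechanics, not the study's actual data:

```python
# Sketch of tornado-warning verification metrics (PEW and mean lead time).
# Each record is (storm_mode, was_warned, lead_time_minutes); unwarned
# events carry zero lead time. These records are hypothetical examples.
events = [
    ("QLCS", True, 6), ("QLCS", True, 3), ("QLCS", False, 0),
    ("supercell", True, 18), ("supercell", True, 15), ("supercell", True, 20),
]

def verify(mode):
    """Return (percent of events warned, mean lead time in minutes) for a storm mode."""
    subset = [e for e in events if e[0] == mode]
    warned = [e for e in subset if e[1]]
    pew = 100.0 * len(warned) / len(subset)
    # Mean lead time here averages over all events, counting unwarned
    # events as zero lead time (one common convention).
    mean_lead = sum(e[2] for e in subset) / len(subset)
    return pew, mean_lead

for mode in ("QLCS", "supercell"):
    pew, lead = verify(mode)
    print(f"{mode}: PEW = {pew:.0f}%, mean lead time = {lead:.1f} min")
```

Whether unwarned events count as zero lead time or are excluded from the average is a convention that changes the numbers noticeably for low-PEW event types like QLCS tornadoes, so it is worth stating explicitly in any study.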

    I think it’s a great idea to define the predictability of HSLC storm events, but I caution that we should not assume that QLCS or misocyclone tornadoes are predominantly weak, as I think the authors of the study above hint. Trapp et al. (2005) found that the relative frequencies of QLCS tornado intensity are the same as for supercells until EF3 or greater, at which point the frequency still does not rapidly fall to zero. The Feb 26, 2008 Carroll County QLCS tornado wiped a house on a cinder-block foundation, and all the trees around it, completely clean. Tim Marshall indicated the damage could have been EF4. I can post the presentation if anyone is interested.

    Anyway, a worthy goal for our group. I think it should be discussed.


  2. nwscwamsley says:

    Until we can get quicker low-level scan returns into AWIPS, I think it is going to be hard to identify signatures in real time that distinguish a large tornado from a smaller one. With the technology we have now, the volume scan may not slice the signature (couplet) until its weakening stage, since most tornado paths are less than 2 miles long. I do not agree with issuing tornado warnings on every S-shaped signature; use your judgment (especially if there is high reflectivity at higher levels near the break in the line). Anytime I see an S-shaped signature I think of a severe thunderstorm warning before a tornado warning, since straight-line winds may be stronger and more widespread than the actual tornado. Personally, I do not mind taking a hit when I know I am providing the best service to my area on what the main threat was (damaging winds). Parameters are usually key, but I never hesitate to have a tornado polygon ready after a severe thunderstorm warning is issued, in case a spin-up starts to form on a narrower scale.

  3. Matt Parker says:

    The bottom line to me is that there are many studies (just search the AMS journals for the names Trapp, Atkins, Wakimoto, Przybylinski) showing that the tracks of the most intense damage from QLCSs exactly underlie the tracks of the embedded mesovortices. In many cases, the damage is at least EF1 even if a tornado is not confirmed. Jared Guyer’s SLS presentation also showed that in high-CAPE environments (CAPE > 500 J/kg) 11% of observed tornadoes overall were significant (EF2+), whereas in low-CAPE environments (CAPE ≤ 500 J/kg) 8% of observed tornadoes overall were significant. I guess I don't have a dog in the fight when it comes to whether the "correct" warning type here is a severe or a tor, but I think it would be irresponsible to downplay the threat from QLCS mesovortices.

    It's fair to acknowledge that many of these features are short-lived, but I think Ron Przybylinski has shown that the use of some basic thresholds for depth and delta-V provide pretty good separation between the high-end threats and the nulls. Maybe such criteria could be used in making the decision about whether to "escalate" from a severe warning to a tor? I guess this really gets back to Pat's question: can we clearly define the "State of the Science" that undergirds the current S.E. U.S. QLCS/HSLC warning procedures?
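The escalation idea above could be sketched as a simple two-criterion check. The threshold values below are placeholders for illustration only, not Przybylinski's published criteria:

```python
# Illustrative sketch of a threshold-based "escalate to tornado warning"
# decision for a QLCS mesovortex, in the spirit of the depth and delta-V
# criteria mentioned above. Both thresholds are hypothetical placeholders,
# NOT published operational values.
DEPTH_THRESHOLD_KM = 3.0     # hypothetical minimum circulation depth
DELTA_V_THRESHOLD_MS = 20.0  # hypothetical minimum rotational velocity difference

def escalate_to_tornado_warning(depth_km, delta_v_ms):
    """Return True only if the mesovortex meets both escalation criteria."""
    return depth_km >= DEPTH_THRESHOLD_KM and delta_v_ms >= DELTA_V_THRESHOLD_MS

# A deep, strong circulation meets both criteria; a shallow or weak one does not.
print(escalate_to_tornado_warning(4.2, 25.0))  # True
print(escalate_to_tornado_warning(1.5, 25.0))  # False (too shallow)
```

Requiring both criteria rather than either is the conservative choice for escalation, since the point is to separate the high-end threats from the many short-lived nulls.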
