On October 5, I traveled to Norman, OK and presented an update of my HSLC environmental parameter climatology to Steve Weiss, Dr. Israel Jirak, and Andy Dean from the Storm Prediction Center. They provided feedback on my project and made recommendations for me going forward, including both methodology suggestions and theoretical considerations. One of their main concerns involved the formulations of the SHERB and SHERBE and their lack of a moisture parameter. Given that the parameters were designed for all significant HSLC events, not just HSLC tornadoes, I feel that it is a justifiable exclusion at this time. However, when we begin to focus our efforts on discriminating between significant tornadoes and significant winds, a moisture parameter will be critical.
Andy from the SPC also recently provided me with data for all significant events across the U.S. from 2006 through 2011 in addition to data for all nulls (defined as a severe thunderstorm or tornado warning issued during a convective day in which no severe reports were gathered in the respective CWA) across the U.S. between Oct. 2006 and Dec. 2011. This has provided me with a test dataset for the entire U.S. across all environments, including HSLC.
So far, I have focused on evaluating the performance of the SHERB and SHERBE using the new verification dataset. There are some noteworthy differences between the two datasets, and between the methods we are using in testing versus those we used when developing the parameters:
- For our development null dataset, we used spatial interpolation within GEMPAK to gather archived SPC Mesoanalysis data for the previous hour. In this verification dataset, the SPC provided us with the data for the nearest grid point, which is consistent with the significant-reports database. However, previous testing showed this difference to be inconsequential on average; additional tests will be conducted to verify that this remains the case.
- When developing the SHERB and SHERBE, we used only one report per CWA per hour (ORPC), while in testing, we have so far used all significant reports and all nulls. The latter method should reveal whether the parameters are weighted toward widespread significant events or whether they are more useful for picking out isolated significant events. The original method may provide more utility for the SHERB and SHERBE, as I would guess that most other diagnostic parameters should light up when a widespread significant event is expected. However, this may not be the case; if anyone has comments on this, I would be happy to read them. Regardless, we plan to test with the ORPC method as well, to determine whether this difference contributed to some discrepancies we have identified in our results.
- With our development dataset, we considered HSLC “events”: we went through each event CWA by CWA and assessed how many reports met our HSLC criteria. If over half of the reports met the criteria of <= 500 J/kg of SBCAPE and >= 35 kts of 0-6 km shear, the reports for that day were included in our dataset (and then subjected to the ORPC filter). In testing, we have instead used a strict cutoff at our HSLC criteria; in other words, every HSLC report must be associated with a data point having <= 500 J/kg of SBCAPE and >= 35 kts of 0-6 km shear. We did this because the new dataset lacks the non-significant severe reports, though the event-based method could still be employed using only the significant severe reports.
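The filtering steps described in the bullets above can be sketched roughly as follows. This is an illustrative sketch, not our actual processing code: the DataFrame and its column names (`sbcape`, `shear_kt`, `cwa`, `valid_time`) are hypothetical stand-ins for the report database.

```python
import pandas as pd

def filter_hslc_strict(reports: pd.DataFrame) -> pd.DataFrame:
    """Strict cutoff used in testing: keep only reports whose associated
    environment meets both HSLC criteria (SBCAPE <= 500 J/kg, shear >= 35 kt)."""
    return reports[(reports["sbcape"] <= 500) & (reports["shear_kt"] >= 35)]

def filter_orpc(reports: pd.DataFrame) -> pd.DataFrame:
    """ORPC filter used in development: keep one report per CWA per hour."""
    hourly = reports.assign(hour=reports["valid_time"].dt.floor("h"))
    return hourly.drop_duplicates(subset=["cwa", "hour"]).drop(columns="hour")

def is_hslc_event(day_reports: pd.DataFrame) -> bool:
    """Event-based classification used in development: a day's reports for a
    CWA count as an HSLC event if over half of them meet the HSLC criteria."""
    return len(filter_hslc_strict(day_reports)) > len(day_reports) / 2
```

The two filters make the trade-off concrete: the strict cutoff discards individual reports outside the HSLC envelope, while the ORPC filter discards duplicates within a CWA-hour regardless of environment.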
Given that we received the data just last week, our analyses have been limited. It does appear that, even with the differing methods described above, the SHERB and SHERBE outperform existing composite parameters in our CSTAR domain in discriminating all significant events from nulls (see below). However, they do not perform as well when discriminating significant tornadoes alone from nulls. We plan to investigate thoroughly why this is the case, and whether it can be attributed to the differences in datasets and methods or to other issues that need to be addressed.
Further, we have started to examine regional comparisons of skill between our parameters and other composite parameters. So far, we have noted that the SHERB and/or SHERBE outperform all other composite parameters in discriminating significant HSLC severe events from nulls across a substantial portion of the U.S. The plot below shows the best-performing parameter for each of our 11 subjectively defined regions. We plan to investigate in more detail why the SHERB and/or SHERBE struggle in some regions following SLS.
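For readers unfamiliar with how one parameter can be said to "outperform" another at discriminating events from nulls, the comparison can be sketched with a simple threshold-based skill score. The post does not specify which metric we use, so the True Skill Statistic (TSS, hit rate minus false alarm rate) below is purely an illustrative choice, and the sample values are made up.

```python
def true_skill_statistic(event_vals, null_vals, threshold):
    """TSS for forecasting 'event' whenever the parameter >= threshold:
    probability of detection minus probability of false detection."""
    hits = sum(v >= threshold for v in event_vals)
    false_alarms = sum(v >= threshold for v in null_vals)
    pod = hits / len(event_vals)
    pofd = false_alarms / len(null_vals)
    return pod - pofd

def best_tss(event_vals, null_vals):
    """Skill of a parameter: its TSS at the best possible threshold.
    Comparing this value across parameters ranks their discrimination."""
    thresholds = sorted(set(event_vals) | set(null_vals))
    return max(true_skill_statistic(event_vals, null_vals, t)
               for t in thresholds)
```

Ranking parameters by `best_tss` within each region is one plausible way to produce a "best performing parameter per region" map like the one referenced above.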
Over the next several weeks, we intend to exhaust the potential of the new dataset provided to us by the SPC. First, however, we must identify the primary cause of the differences between the results we are getting with the new dataset and what we found with the development dataset: which of the above possibilities contributes most significantly to these differences, and which results are more representative of the problem we are trying to address? Following this step, we will compile a climatology of HSLC events across the entire U.S., focusing on regional, diurnal, and annual trends. Then, we will determine whether we can further improve the parameters we have developed through modifications to the formulation or alternate combinations of parameters, with a focus on improving skill in our region. Finally, once we are convinced that our parameter-based work is sufficiently thorough, we will transition to an idealized simulation framework in order to address lingering questions regarding the convective-scale features of HSLC events.
If anyone has any comments, suggestions, or questions, feel free to let me know.