Is our over-reliance on numerical weather models making us worse forecasters?
It seems there is a new profession in the field of weather forecasting. I like to call it modelology, which is far different from meteorology. A disturbing trend seems to be infecting our field and, in my opinion, is making us worse at forecasting the weather.
Listen, I love numerical weather guidance, and I know the models have come a long way. They are getting better in accuracy, reliability, and, more importantly, resolution. Though I would argue the resolution is improving faster than the accuracy, and therein lies the problem: if the data going in is the same but the resolution is higher, you actually have the potential for bigger errors.
I love models; they are essential tools for my job. Like a lot of weather geeks, I stay up until 1:00 a.m. EST to see the 0Z run of the ECMWF when a big storm is coming. Just like tools in any profession, though, they need to be used correctly by a skilled craftsman to make a forecast. They aren’t forecasts in and of themselves, which is why they are called “guidance.” If you are a qualified carpenter with a good saw, a hammer, and glue, you can make some beautiful things, but those tools don’t build that nice thing themselves, and just like carpentry, this skill takes time and practice. You don’t need a degree either, though it helps. Forecasting is a skill that takes time, practice, and, honestly, lots of failures to get better at. That’s why even a non-degreed forecaster can become more skilled at forecasting than a Ph.D. in meteorology who doesn’t practice that skill on a daily basis.
With models becoming better, more easily available, and prettier to look at thanks to the bevy of model sites that have cropped up, forecasters, both amateur and professional, have strayed away from the skill of actually forecasting the weather and become glorified model regurgitators. The model advancements have been great, but it’s like my carpenter friend who now uses an advanced power saw and a nail gun: he still must combine his skills with these new tools to build his final product. The tools lying around with some wood don’t make a dang thing. I am hearing and seeing way too much of “well, this model sucked” or “the models missed this.” Hey, remember, they are models; they aren’t forecasts. You issue a forecast, not the model.
The GPS problem
Like other advancements in technology, you still need to be involved. I’m sure you have read or heard a story of some knucklehead driving into a lake or down an abandoned fire road because their GPS navigation system told them to. I mean, at some point, when your GPS tells you to turn right and there’s a wall there, you stop. Right? Just this week I saw people going crazy over one single deterministic run of the ECMWF. No other piece of guidance was supporting that track or location. A whole bunch of people drove right into a lake, or, in this case, the Atlantic Ocean. Why do we do this when meteorological common sense says otherwise?
ECMWF Meme
Another issue we have is the prevailing attitude that one model is superior to the others. Yes, we all know that over time the ECMWF has performed better, especially in the mid and long range, than just about any other model. Does it always do better, though? No, it often fails as miserably as the GFS, yet many have acquired a bias toward one model. We remember only its “good” runs and forget the bad ones. Now we are seeing people jump on an ECMWF solution at seven days in a way they never would with a GFS or CMC run. Remember Hurricane Joaquin? The ECMWF eventually got it right sooner, but we forget that at the beginning it struggled with the storm just like the GFS did. Oh yeah, and that run where the GFS nailed it first, before the ECMWF? No one talks about that for some reason.
So how do we fix this?
- Listen, I love where our science, technology, and modeling are going. Both the GFS and the ECMWF are going to get huge upgrades that go operational soon. Just remember to know where this data comes from. Stop looking strictly at derived snowfall maps; dive into the model’s QPF and look at the thicknesses, profiles, and mandatory-level temperatures. You’ll often find it’s not all snow, it’s a mixed bag, and it rarely if ever falls with a 10:1 snow-to-liquid ratio (a quick sketch of this follows the list).
- Keep a spreadsheet and verify your forecasts. This isn’t to brag when you are right; this is to figure out why you were wrong. I’ve done this for years, and it’s an eye-opener how quickly you start recognizing the biases in your forecasts (see the verification sketch below). Remember, a busted forecast is only bad if you don’t learn from it.
- Take your emotions out of a forecast. This is hard as a weather geek, really hard! If you like weather, don’t let it cloud or skew your forecast toward something you want versus what is realistic.
- Forecast what is most likely over what could happen. Now, I’m not saying don’t lay out the possibilities, but if you do, lay them all out. That means if you say there could be a foot of snow, make sure you also say it’s equally possible we get zero!
- It’s okay to say we don’t really know. Tell people how confident or not confident you are in your forecast. This is where modeling is huge: it can give you confidence or take it away.
- Always look at ensembles; they help you spot the outliers and forecast what’s likely versus the extremes (see the ensemble sketch below).
- Local knowledge and experience combined with climatology help tremendously. Analogs are great: has anything like this ever happened before?
- Remember, in a warming world the models may not be able to correctly simulate extreme events.
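To make the snowfall-map point concrete, here is a minimal sketch in Python. The QPF value, temperature, and ratio thresholds are all invented for illustration; they aren’t pulled from any model or from my own workflow. The point is simply that multiplying QPF by a flat 10:1 ratio ignores the temperature profile entirely.

```python
# Hypothetical sketch: why a flat 10:1 "derived snowfall" map can mislead.
# All numbers below (QPF, temperature, ratio thresholds) are made up for illustration.

def snow_to_liquid_ratio(temp_c: float) -> float:
    """Rough, assumed snow-to-liquid ratio from a low-level temperature.
    A real forecaster would look at the full profile, not one number."""
    if temp_c > 1.0:      # too warm: rain or a mix, little accumulation
        return 0.0
    if temp_c > -2.0:     # near freezing: wet, dense snow
        return 7.0
    if temp_c > -8.0:     # the "classic" range often near 10:1
        return 10.0
    return 15.0           # cold, fluffy snow

qpf_inches = 1.2          # assumed model liquid-equivalent precipitation
temp_c = 1.5              # assumed low-level temperature

naive_snow = qpf_inches * 10.0                        # what a flat 10:1 map shows
adjusted_snow = qpf_inches * snow_to_liquid_ratio(temp_c)

print(f"10:1 map says {naive_snow:.1f} in of snow")
print(f"Temperature-aware estimate: {adjusted_snow:.1f} in")
```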
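In the same spirit, here is what that verification spreadsheet boils down to, again with made-up numbers: log each forecast against what actually happened, then look at the mean error. A consistently positive or negative number is exactly the kind of bias that jumps out after a season of entries.

```python
# Hypothetical verification log: (forecast snowfall, observed snowfall) in inches.
# The values are invented; the idea is simply to compute your own bias over time.
records = [
    (6.0, 3.5),
    (4.0, 4.5),
    (8.0, 5.0),
    (2.0, 2.0),
    (5.0, 3.0),
]

errors = [fcst - obs for fcst, obs in records]
bias = sum(errors) / len(errors)                   # mean error: positive = over-forecast
mae = sum(abs(e) for e in errors) / len(errors)    # mean absolute error

print(f"Bias: {bias:+.1f} in  (positive means you over-forecast)")
print(f"MAE:  {mae:.1f} in")
```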
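And for the ensemble point, a tiny sketch with invented member values shows why the ensemble mean or median is a better “most likely” number than the single loudest run.

```python
# Hypothetical ensemble of 10 snowfall members (inches) for one point; invented numbers.
members = [2, 3, 3, 4, 4, 5, 5, 6, 7, 18]   # one extreme outlier at 18"

members.sort()
mean = sum(members) / len(members)
median = members[len(members) // 2]

print(f"Loudest single run: {max(members)} in")               # what the hype map shows
print(f"Ensemble mean: {mean:.1f} in, median: {median} in")   # the more likely outcome
```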
More Tips:
@wxbrad I’ll add ” Know your extremes” snowfall for example what’s a typical regional snow ratio / pwat values and what’s high / low.
— Tom Coomes (@TomCoomes) January 27, 2016
After posting this blog, I received this really interesting paper abstract. In Canada, they performed an experiment to see whether forecasters could forecast without models. The results were shocking.
Project Phoenix – Optimizing the machine-person mix in high-impact weather forecasting
Patrick J. McCarthy, MSC, Winnipeg, MB, Canada; and W. Purcell and D. Ball
In the numerical model-dominated weather services of the world, do meteorologists still add value in weather forecasting? Project Phoenix was an innovative experiment designed to test this question. Conducted early in 2001 at the Prairie Storm Prediction Centre (PSPC) in Winnipeg, Manitoba, a team of three forecasters prepared noon and late afternoon public weather forecasts for the Canadian Provinces of Alberta, Saskatchewan and Manitoba. This Phoenix team began with the text forecasts produced by SCRIBE, Canada’s automated model-based forecast production software. From this starting point, the meteorologists produced their weather forecasts using only their analysis, diagnosis and prognosis skills by using real data, such as radar, satellite imagery, surface observations and upper air soundings. The Phoenix forecasters had no access to any numerical weather prediction data beyond the original SCRIBE text forecasts. The goal of the two-week experiment was to discover if short-term forecasting techniques applied by meteorologists could achieve a significant improvement over the automated SCRIBE product, particularly in terms of high-impact weather (HIW). The performance of the SCRIBE, Phoenix, and the official forecasts was verified against a comprehensive evaluation system.
The Phoenix meteorologists improved the forecasts beyond expectations during the two-week test. In a curious twist, the Phoenix forecasts also were significantly better than the official versions prepared by the PSPC. The PSPC repeated the experiment a few months later. The results confirmed the first test and Project Phoenix became a formal training program for all PSPC meteorologists. Over a dozen more groups completed a one-week version, including one limited to senior personnel and another team comprised of recent graduates. Every Phoenix group posted major improvements over the automated forecast products. As well, all the Phoenix teams achieved at least a slight improvement over the PSPC official forecasts. This training system has now been adopted by the Meteorological Service of Canada to position forecasters for HIW forecasting roles in the years ahead. This presentation examines Project Phoenix, the verification system, and offers an explanation for the results.
Link to paper: https://ams.confex.com/ams/22WAF18NWP/techprogram/paper_122657.htm