Was the Imperial College model robust?
There have been a lot of articles that are very critical of the code. I'm not a coder, so I can't judge.
Matt Ridley writes:
It has become commonplace among financial forecasters, the Treasury, climate scientists, and epidemiologists to cite the output of mathematical models as if it was “evidence”. The proper use of models is to test theories of complex systems against facts. If instead we are going to use models for forecasting and policy, we must be able to check that they are accurate, particularly when they drive life and death decisions. This has not been the case with the Imperial College model.
This is key. A model is merely a set of assumptions. When the Government says our decisions have saved 140,000 jobs, all they mean is that a model projects that figure. How often do we go back and check those projections against what really happened?
It is not as if Ferguson’s track record is good. In 2001 the Imperial College team’s modelling led to the culling of 6 million livestock and was criticised by epidemiological experts as severely flawed. In various years in the early 2000s Ferguson predicted up to 136,000 deaths from mad cow disease, 200 million from bird flu and 65,000 from swine flu. The final death toll in each case was in the hundreds.
A history of projecting huge death tolls that never materialised.
In this case, when a Swedish team applied the modified model that Imperial put into the public domain to Sweden’s strategy, it predicted 40,000 deaths by May 1 – 15 times too high.
Sweden is a fascinating country to study. At this stage their death rate is relatively high compared with other European countries (though lower than the UK, France, Spain, Italy and Belgium), but their decision not to introduce a lockdown lets us see how accurate the model's no-lockdown scenario was. Their actual death toll is about 1/15th of what the model projected.
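To make that comparison concrete, here is a minimal sketch in Python of the kind of back-check against outcomes that the article is asking for. The figures are simply the ones quoted above (a projection of 40,000 deaths for Sweden by 1 May, and the roughly 2,700 recorded deaths implied by "15 times too high"); the function name and numbers are illustrative placeholders, not output from the Imperial model itself.

```python
# Minimal sketch of "checking back against what really happened":
# compare a model's projected death toll with the recorded outcome.

def overshoot_factor(projected: float, observed: float) -> float:
    """How many times larger the projection was than the outcome."""
    return projected / observed

# Figures quoted in the passage above (illustrative placeholders, not
# output from the Imperial model itself): 40,000 projected for Sweden
# by 1 May; "15 times too high" implies roughly 2,700 recorded deaths.
projected_deaths = 40_000
observed_deaths = 2_700

print(f"Projection overshot reality by a factor of "
      f"{overshoot_factor(projected_deaths, observed_deaths):.1f}")
# prints 14.8, i.e. an overshoot of roughly fifteen-fold
```

The arithmetic is trivial; the point is the habit. Every headline projection used to justify policy could be logged and scored against the eventual outcome in exactly this way.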
That doesn’t mean the Swedish approach is the best one. But it does mean we should remember models are often wrong.