When Code Goes Wong

Many people across the globe are currently interested in the quality of code and, even more, the quality of results that arise from code. In particular, in the UK, there is Neil Ferguson’s epidemiological code, which seemed to influence the government’s decision to impose a lockdown, and, on this blog at least, the code that makes up climate models, both GCMs and IAMs. (That’s General Circulation Models and Integrated Assessment Models to their friends.)

This post then is about When Code Goes Wong. Here’s an amusing example.

[Chart RfsGit25: code size in bytes across successive Git commits]

That’s the code size in bytes across a succession of Git commits I made between the 15th and 19th of this month, doing some ‘real work’ unrelated to Covid-19 or climate. It clearly shows the build phase, where the code size increases, followed by the refactoring phase, where the opposite tends to happen (though not always). And in my refactoring I made a couple of stupid errors. My Code Went Wong. That’s the risk that many coders, like Professor Ferguson, are aware of and wish to avoid, and so, by shying away from refactoring, they let code bloat and fragility grow, with little or no regard for the principle of DRY (Don’t Repeat Yourself) or the even more fundamental one of removing ‘dead code’ that will never be used.
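For anyone who wants to reproduce that kind of chart, here’s a minimal sketch of how the per-commit byte counts can be pulled out of Git. This isn’t the script I used, just one way of doing it; the file name and date range below are placeholders, not the actual project.

```python
# A minimal sketch of tracking code size in bytes across Git commits.
# The file name and dates below are placeholders for illustration only.
import subprocess

def blob_bytes_at(commit: str, path: str) -> int:
    """Size in bytes of `path` as it existed at `commit`."""
    out = subprocess.run(
        ["git", "cat-file", "-s", f"{commit}:{path}"],
        capture_output=True, text=True, check=True,
    )
    return int(out.stdout.strip())

def sizes_over_history(path: str, since: str, until: str) -> list:
    """(commit, size-in-bytes) pairs for `path`, oldest commit first."""
    log = subprocess.run(
        ["git", "log", "--reverse", "--format=%h",
         f"--since={since}", f"--until={until}", "--", path],
        capture_output=True, text=True, check=True,
    )
    return [(c, blob_bytes_at(c, path)) for c in log.stdout.split()]

if __name__ == "__main__":
    # Hypothetical file name; substitute whatever you are tracking.
    for commit, size in sizes_over_history("model.py", "2020-05-15", "2020-05-19"):
        print(f"{commit}  {size:>8} bytes")
```

Summing several files rather than one, or counting lines instead of bytes, is a trivial change; the shape of the build-then-refactor curve is the point.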

However, wrong results are not merely the result of refactoring gone wrong. Commenter Jit summarised the wider situation very well, I thought, on the Ferguson model, on our earlier open thread:

“Re: the code, well this and the model it built was always going to be trash. I said as much on an earlier thread. If you compound enough unknowns with the unpredictability of human behaviour in a model that has to be spatially structured to be worth anything… you inevitably end up with nothing resembling the real world. I begin to doubt models as soon as they rise above one dimension. Even in the exponential phase, simplest case, if dN/dt = rN then you have two fat unknowns to generate a third. Now add spatial structure and modelled behaviour to make r the mean value of every viral population (i.e. infected person) and you would be lucky to get anything resembling a realistic value, let alone a true value. I wonder how many input parameters it takes? Does it consider the time of year, ambient temperature etc?

No doubt if code has been built over years it is going to be unwieldy with bits bolted on here and there and lurkers that no longer get called.

Nevertheless, it says more about those who show cyber deference than the code’s originators.”

It was always going to be trash because a) the assumptions were wrong (a point underlined later by John Ridgway) and b) too much deference was shown to the apparent answers given. The age-old principle we’ve all heard of, and made our contributions to in our turn: Garbage In, Garbage Out.
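To see how quickly Jit’s ‘two fat unknowns’ compound, here’s a toy calculation for the simplest exponential case he mentions, dN/dt = rN, whose solution is N(t) = N0·e^(rt). The numbers are purely illustrative and not drawn from any real model:

```python
# Toy illustration of how uncertainty in r compounds in dN/dt = r*N,
# whose solution is N(t) = N0 * exp(r*t). All values here are made up.
import math

N0 = 100   # assumed initial number of infections (illustrative)
t = 30     # days ahead to project
for r in (0.15, 0.20, 0.25):   # three plausible-looking daily growth rates
    N = N0 * math.exp(r * t)
    print(f"r = {r:.2f}  ->  N after {t} days ≈ {N:,.0f}")

# Roughly:
#   r = 0.15  ->  N after 30 days ≈ 9,002
#   r = 0.20  ->  N after 30 days ≈ 40,343
#   r = 0.25  ->  N after 30 days ≈ 180,804
```

A factor-of-twenty spread in the projection from a modest spread in a single parameter, before any spatial structure or modelled behaviour is bolted on.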

All the same, having unreadable code doesn’t help. This thread is going to be about just such issues and could get quite long.

But I interrupt the fascination with an advert. This afternoon at 5pm Christopher Essex is talking via Zoom for the GWPF on “Mathematical Models and Their Role in Government Policy”. Professor Essex has deeply influenced me on the subject of GCMs. I’ll be there. And once I’ve taken in what he and the others have to say, I’ll return to this post.

SO, TO BE CONTINUED

(but feel free to comment, as from now. Thank you.)

via Climate Scepticism

https://ift.tt/2XEycFc

May 28, 2020 at 03:27AM
