Ancient adage: show me some software with no bugs and I'll show you some software that doesn't do anything. Sadly, in the world of gigabit/sec downloads, the end user is now responsible for the beta testing. Rolling out a fix is too easy. When I wrote software in the early 80s, rolling out a fix meant getting on a plane with a 256MB mass storage module and delivering it by hand. I think it's only recently that the same approach for car firmware has become feasible with the infamous "over-the-air upgrade" so beloved of Mr Musk. Fine for sat nav data, less great for fly-by-wire systems. The other problem is that what most people call "code" these days actually isn't - it's a sequence of calls to pre-written routines that do the actual work and which are themselves full of bugs, bugs which are on the whole invisible because the software is proprietary.
The first "test" of anything my team produced in the 80s was a code inspection. I'm also happy to say that the application would have worked through Y2K (even if the underlying operating system wouldn't have). Now we have governments that pass laws demanding back doors into encrypted comms, and we blindly accept it.
The libraries of pre-written routines have source code somewhere. From my point of view, it is part of the code. It's easy to believe those routines must have been thoroughly tested by whoever provides them, and hard to get time or budget to check whether they do what they are supposed to do. And I agree, as it has become so easy to push updates out for software, an attitude seems to have set in that I'd describe as "Well, if we didn't get it quite right, we'll tweak it tomorrow." When it was hard to update software, we worked hard to get it right before we sent it out.
Many years ago I ran into an astounding statistic: In a software program of 12 lines, the probability of at least one bug is 75%. (That's when it has first been written.) Now we depend on programs with thousands to millions of lines of source code. It's really hard to test all the modules, and test how they interact, properly and fully.
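That statistic can be sanity-checked with a bit of arithmetic. Under the simplifying (and admittedly crude) assumption that every line independently has the same chance of containing a bug when first written, "75% for 12 lines" implies a per-line bug rate of roughly one in nine, and the odds for a large program become a near-certainty. A small sketch, with hypothetical helper names:

```python
# Back-of-the-envelope check of the "12 lines -> 75% chance of a bug" statistic.
# Simplifying assumption: each freshly written line independently has the same
# probability p of containing a bug (real bugs cluster, so this is only a sketch).

def per_line_bug_probability(n_lines: int, p_at_least_one: float) -> float:
    """Solve 1 - (1 - p)**n_lines = p_at_least_one for the per-line rate p."""
    return 1 - (1 - p_at_least_one) ** (1 / n_lines)

def p_at_least_one_bug(n_lines: int, p_per_line: float) -> float:
    """Probability that a program of n_lines contains at least one bug."""
    return 1 - (1 - p_per_line) ** n_lines

p = per_line_bug_probability(12, 0.75)   # roughly 0.109, i.e. about 1 line in 9
print(f"implied per-line bug rate: {p:.3f}")
print(f"chance of a bug in 1,000 fresh lines: {p_at_least_one_bug(1000, p):.6f}")
```

At that implied rate, a thousand-line program is essentially guaranteed to ship with bugs on first writing, which is why the later testing and inspection matter so much.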
You and I go pretty far back with this stuff. A lot has changed. But the fundamentals of how to do it right haven't changed.