The critical importance of testing
26 May 2015, Tom Scales
It should go without saying, but good testing is absolutely critical to producing technology that benefits your organization. As a former insurer CIO, I can attest that the group most often squeezed to meet a deliverable date is the quality assurance team, which carries the incredible responsibility of ensuring accuracy. Perhaps our organization was unique, but having also spent years at vendors doing implementations, I have seen it happen over and over: projects run late, and rather than adjusting the date or the project scope, testing gets squeezed, often with unintended consequences.

The corollary to poor testing is “Day 2.” This is code in the IT community for “we didn’t get it done, but we’ll let someone else worry about it later.” The problem is compounded in the life insurance industry, where “later” can sometimes be measured in years. That means the team that built the code may be long gone by the time Day 2 rolls around, or worse, the fact that there are open items will be lost in the history and lore of the organization.

The reason I bring this topic up is an article I read recently that dramatically shows the challenges of inadequate testing. For the nerds amongst our readership, one of the classic failures of programming is an overflow. For the non-nerds, this just means you have a counter in your program that isn’t big enough. Rather than continuing to count up, it wraps back around and starts over. The results can be major, and it is an error that might not be discovered for years; remember, you’re counting something, so reaching the limit can take a long time.

The latest program to experience an overflow? The Boeing 787 Dreamliner. It seems that you need to reboot the plane every 248 days or risk it falling from the sky. 248 days is, measured differently, 2^31 hundredths of a second. Now I have no idea what they’re counting, but apparently it is pretty important. Should testing have caught this? Of course. Did it? Apparently not.
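For the curious, the arithmetic behind that 248-day figure can be sketched in a few lines. This is a hypothetical illustration (the `tick` helper is mine, not Boeing's code): a signed 32-bit counter tops out at 2^31 − 1, and a counter incremented 100 times per second reaches that limit in roughly 248 days, at which point it wraps to a large negative number.

```python
# Hypothetical sketch of a signed 32-bit counter overflow.
MAX_INT32 = 2**31 - 1  # largest value a signed 32-bit counter can hold

def tick(counter):
    """Advance a counter that wraps the way a signed 32-bit integer does."""
    counter += 1
    if counter > MAX_INT32:
        counter -= 2**32  # wrap around to -2**31
    return counter

# How long until a hundredths-of-a-second counter overflows?
ticks_to_overflow = 2**31            # increments before the wrap
seconds = ticks_to_overflow / 100    # 100 ticks per second
days = seconds / 86400               # 86,400 seconds per day
print(int(days))                     # 248

# The wrap itself: one more tick past the maximum goes negative.
print(tick(MAX_INT32))               # -2147483648
```

Note that the counter does not merely reset to zero; it jumps to a huge negative value, which is exactly the kind of input downstream code was never tested against.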
So the next time you want to cut testing short, remember this post and ask yourself, “should we?” You may not have a plane falling from the sky, but you could have a hidden calculation error that costs your company. Take a small error and multiply it over time, and it quickly becomes a very, very large one: a potentially catastrophic financial error.