Finding Failure
We celebrate those who seek out failure because they, above all other people, know that the only way to trust that something works is to know all the ways it won’t.
How do you feel when you hear the word “failure”? As a tester focused on developing quality hardware and software that meets expectations, you probably can’t imagine anything worse. It’s up to you to make the call when enough testing has been done for a product, which is never an easy decision. Fortunately, software development models have evolved in recent years, allowing quality assurance (QA) and testing processes to evolve with them and helping to prevent the product failure fiascos that you work so hard to avoid.
In recent decades, NI has watched the traditional waterfall software development process become much less feasible. The linear, highly structured approach grew clunky, exhausting, even anxiety-inducing, offering little visibility or regular cross-team communication even as clients’ production needs and quality demands grew exponentially. With the pace of technological production skyrocketing alongside the demand for reliable, safe products, it has never been more important to understand, with confidence, all the possible ways something can fail.
The waterfall method focused on assessing failure at the end, which is the opposite of what testers need to do today. The rise of automated software testing solutions, combined with continuous delivery development approaches, allows testers and engineers to reimagine the software development and QA process.
The old assumption that all needs can and must be gathered up front during the requirements phase no longer works for testers, especially as time-to-market pressures continue to escalate. The industry has evolved to adopt new methods for developing and testing software, answering the question many at NI are extremely familiar with: How do you trust something works if you don’t know all the ways in which it doesn’t?
Continuous delivery is a much more modern, agile, and integrated approach to testing and development. It bakes failure analysis into all quality and design work in tandem with code creation, using a battery of tests to gather and share constant feedback that keeps quality high. This continuous approach to QA delivers value in two main ways.
The first is making sure software does what the client expects—often referred to by developers as the “happy path.” The second is looking at all the ways the outcome can go wrong—unofficially referred to as the “sad path.” NI celebrates the exploration of failure while developing resilient, successful software that works. It’s less sad paths versus happy paths and more sad and happy paths working together to surface greater opportunities.
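To make the two paths concrete, here is a minimal sketch in Python of what covering both looks like in practice. The `parse_voltage` helper and its tests are purely hypothetical examples, not NI code: one test exercises the happy path, and two exercise sad paths.

```python
def parse_voltage(text: str) -> float:
    """Parse a voltage reading such as '3.3 V' into a float."""
    value, unit = text.strip().split()
    if unit != "V":
        raise ValueError(f"unexpected unit: {unit!r}")
    return float(value)


def test_happy_path():
    # The behavior the client expects: well-formed input parses cleanly.
    assert parse_voltage("3.3 V") == 3.3


def test_sad_path_wrong_unit():
    # One way it can go wrong: the reading is in the wrong unit.
    try:
        parse_voltage("3.3 mA")
    except ValueError:
        return
    raise AssertionError("wrong unit should be rejected")


def test_sad_path_malformed():
    # Another way: the input is not a reading at all.
    try:
        parse_voltage("garbage")
    except ValueError:
        return
    raise AssertionError("malformed input should be rejected")


if __name__ == "__main__":
    test_happy_path()
    test_sad_path_wrong_unit()
    test_sad_path_malformed()
```

Notice that the sad-path tests outnumber the happy-path test: there is usually one way for software to work and many ways for it to fail.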
Having been singularly focused on the happy path for so long, NI knows the cost of leaving detrimental software bugs unaddressed during coding, and the negative impact that waiting until the end to find failures can have on software performance.
Now when something goes wrong, it’s because the failure is being tested, fixed, and improved upon—not because customers are finding it. Many argue that a core component of innovation is failure, which drives iteration on new concepts and new ideas. We view the ultimate test for a test as having the path fail first. You can’t get this if all tests are conducted at the end. This type of method requires a strong test infrastructure built with the greatest complications in mind.
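The “fail first” principle above can be sketched in code: before trusting a test, run it against a deliberately broken implementation and confirm it catches the bug. The `clamp` function, its seeded-bug variant, and the checker below are hypothetical illustrations, not NI tooling.

```python
def clamp(x: float, lo: float, hi: float) -> float:
    """Correct implementation: limit x to the range [lo, hi]."""
    return max(lo, min(x, hi))


def clamp_broken(x: float, lo: float, hi: float) -> float:
    """Deliberately seeded bug: ignores the upper bound."""
    return max(lo, x)


def check_clamp(impl) -> bool:
    """The test under scrutiny; returns True only if impl passes."""
    return (impl(5, 0, 10) == 5
            and impl(15, 0, 10) == 10
            and impl(-3, 0, 10) == 0)


# A test earns trust only if it fails on the broken version
# and then passes on the real one.
assert check_clamp(clamp_broken) is False  # red first: the seeded bug is caught
assert check_clamp(clamp) is True          # then green: the correct code passes
```

A test that never fails proves nothing; seeing it go red against a known defect is what demonstrates it is actually guarding the behavior you care about.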
Working alongside automated systems, testers can help our largest customers with critical systems meet the higher standards of continuous delivery. These use cases also reassure the customers who remain hesitant to embrace continuous software development: it’s hard to ignore better returns on investment, built on stronger skills and structures for finding failure.
NI knows how challenging it can be when other product lines do not use modern development methods. But outdated approaches also reveal opportunities from which even more teams can benefit if we all embrace QA testing within R&D.
By adopting a modern, agile method to testing and development, teams often benefit even further by leveraging continuous feedback cycles within an environment where teammates are constantly learning from each other’s mistakes.
We can all agree that no product is 100 percent bug free, and it’s impossible to prove any suite of tests offers a 100 percent guarantee of functionality. With this in mind, we’re constantly looking for ways to guide investments that improve our defenses against defects. This doesn’t just change the way software is developed; it puts us in constant practice balancing speed and precision with each issue, since faster automated tests tend to be less comprehensive while slower, manual tests remain more thorough.
This also differs by industry. Take, for example, the approach to QA testing in gaming. Most gaming companies maintain a large QA testing organization because testing there is a group of people playing a game. The industry tends to subscribe to the waterfall method of developing and testing, with automated testing remaining underrepresented. Games are also played to gain qualitative, experiential feedback—a very human-driven effort that is nearly the exact opposite of what manufacturers can do with a modern, automated, nimble approach to testing. Interestingly, this differs again from testing advanced driver assistance systems (ADAS) technologies: the new radars, cameras, and sensors that keep automobiles safe.
For this particular technology suite, NI's adaptable QA approach to ADAS can integrate all the types of tests needed to design the vehicles of tomorrow, today. Automotive engineers can now use one test solution for ADAS and autonomous systems, from characterization to verification and validation on the production floor.
The value in this kind of testing approach is gaining momentum. Rather than chasing perfection, it means knowing where to draw the line, treating each failure as a continuous opportunity to get better at protecting against the flaws you find. Using an automated test suite to address nagging doubts provides an important layer of accountability before software is ready for the world to consume at scale.
Knowing where to draw the line on all the failures you can fix is one thing; filling your confidence gap with experience and training is another. Many QA testers don’t instinctively believe in their own abilities. You can’t be confident in results if you don’t know what’s being tested.
The next generation of testers care about remaining courageous in the face of failure as they address the challenges the world faces today, from climate change to inequality. At NI, we know we’re part of diverse, interconnected ecosystems—our code, our company, and our planet—that must all work together to thrive. When one ecosystem suffers, we all suffer, making it imperative to squash nagging doubts once and for all by embracing the confidence to fully bridge the current testing divide.