Although iterative development is well established as a valuable way of working, organizations in regulated sectors (like automotive and healthcare) still struggle to reap its full benefits. ICT Improve has more than a decade’s worth of experience in applying iterative development in regulated environments. In this insight, our experts share the most common issues we have faced and the solutions that helped us overcome them, so that you can become more efficient and effective, the same way our clients did. The subject of today: eliminating waste caused by treating all requirements as equal.

Anyone who has ever worked in a heavily regulated environment knows the pain that comes with the level of scrutiny placed on every move, and how those rules can sometimes turn draconian – harming our ability to do our job rather than helping us build safe and effective products. One area where we have seen organizations throw the proverbial baby out with the bathwater is in how requirements are written and handled.

To clarify what we mean, we will walk through some examples we have encountered in the real world. These examples might seem trivial, but they led to significant time being wasted on verification (and documentation) because the organization never properly considered why these requirements were defined in the first place. As it turns out, not all requirements are created equal…

“The software should be developed in compliance with IEC 62304”

Our first example is about an organization that included in its system requirements that the software should be developed in compliance with IEC 62304. At face value, nothing strange for a medical device, since IEC 62304 is the standard governing the software life cycle for medical device software.

Until you realize that the requirement is defined in a system requirements document, while the standard applies to the entire development process. Verification of system requirements happens during the verification phase, but that phase is not the end of the development process – so at the moment of verification, compliance with the standard cannot yet be fully demonstrated. The result of this mismatch was a lot of effort spent between QA and verification discussing how to create the evidence needed for its verification. At its worst, the idea was to raise a defect describing the gap during system verification, leave it open with a formally documented explanation of why its failed test did not block the transition to the next development phase, and “fix” the open defect at the end of development by writing an addendum to the test report providing the remaining verification evidence…

“The color of the system should be green RAL 6019”

Another real-world example was a system requirement stating that the color of the system should be green RAL 6019.

Even though the requirement is SMART (specific, measurable, achievable, relevant, time-bound), it still managed to trigger a lot of discussion. Some people advocated verifying the device’s BOM to see which paint was used. Others argued that this wouldn’t be enough to prove the system met the requirement, because applying a certain paint doesn’t necessarily mean the finished system has that color. This cascaded into a discussion on measurement precision, the impact of ambient conditions (lighting), and whether we shouldn’t just hire a professional lab to measure it. Labs are expensive and time-consuming, though, and how would we handle the logistics? At this point, some of us started to wonder: aren’t we making things way harder than they need to be? Which – you guessed it – led to even more discussion…

“Serviceable part X should have a lifespan of N years”

Our final example concerns a serviceable part required to have a lifespan of a certain number of years. Again: a type of requirement that many of us with experience in hardware development expect to see – we all want hardware that requires limited maintenance, and we want that maintenance to be predictable. So it makes sense to write something about the reliability of those parts.

Even if we ignore the issue with the phrasing (N years with what probability, at what confidence?), proving any kind of reliability figure takes considerable effort. Even when we throw statistical significance out the window and use a sample size of 1, the time needed to apply the specified load (N years) eclipses the time available for formal design verification. So now what? A failed test, a defect, and additional documentation on why failing this requirement does not block progress? This one can’t be retroactively fixed when we reach the end of development, so should we then just accept a failed requirement in our formal submission? That sounds like a terrible plan, but what is the alternative?
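To get a feel for why this is so painful, consider the common zero-failure “success run” demonstration: the confidence C obtained from n units that all survive the full specified lifespan satisfies C = 1 − R^n, where R is the reliability to be demonstrated. A minimal sketch of that calculation follows; the reliability and confidence targets are illustrative assumptions on our part, not figures from the actual project:

```python
import math

def success_run_sample_size(reliability: float, confidence: float) -> int:
    """Zero-failure ("success run") demonstration: C = 1 - R**n,
    so n = ln(1 - C) / ln(R), rounded up. All n units must survive
    the full specified lifespan (N years) without a single failure."""
    return math.ceil(math.log(1.0 - confidence) / math.log(reliability))

# Illustrative target: demonstrate 95% reliability at 90% confidence.
print(success_run_sample_size(0.95, 0.90))  # -> 45 units, each on test for N years
```

Even this modest target demands dozens of units, each occupying a test bench for the full lifespan unless accelerated testing can be justified – which illustrates why formally verifying such a requirement within the design verification window is unrealistic.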

Not all requirements are created equal

The given examples are practical demonstrations of what happens when creating requirements without properly considering context. Rather than tailoring the way we treat requirements based on their purpose and context, we treat all of them as equals: the same level of scrutiny, at the same development phase, verified by the same people.

In reality, the purpose of requirements can vary heavily, and each purpose demands a different approach and level of scrutiny. Based on our experience, we suggest creating three explicit categories of requirements, each with its own distinct characteristics (a minimal code sketch of this categorization follows the list):

  • Product requirements 
    Requirements that define the intended use of a product or support the evaluation of whether the product is safe and effective in its use. In other words, they describe aspects of the product that are relevant for formal documentation. Product requirements are part of the formal documentation, and their verification and validation coverage is mandatory. Product requirements are formally treated ‘all the way’ (full change control, full tracing, full verification and validation).
  • Process requirements 
    Requirements that are enforced by regulatory guidelines and relate to processes that apply anywhere in the life cycle of the product. In other words, they describe aspects of the process that are relevant to the creation and maintenance of formal documentation. Process requirements are part of the formal documentation, but are verified and validated outside the scope of design conformance (or testing). Instead, their compliance is demonstrated by Quality Assurance and/or the Regulatory Affairs department.
  • Business requirements
    Requirements that describe a characteristic or capability that does not meet the definition of a product requirement and – if omitted – could result in development creating an economically unviable product. Business requirements are optional for formal documentation, but available for review when requested. Verification and validation coverage is encouraged, but optional (thus formally not required).
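To make the distinction concrete, here is a minimal sketch of how this categorization could be captured in a requirements-management script. The category names come from this article; the identifiers, field names, and the formal_vv_required rule are our own illustration, not a prescribed schema:

```python
from dataclasses import dataclass
from enum import Enum, auto

class Category(Enum):
    PRODUCT = auto()   # formal 'all the way': change control, tracing, full V&V
    PROCESS = auto()   # formal, but compliance shown by QA / Regulatory Affairs
    BUSINESS = auto()  # optional for formal documentation, available on request

@dataclass
class Requirement:
    id: str
    text: str
    category: Category

    @property
    def formal_vv_required(self) -> bool:
        # Only product requirements need V&V evidence in the formal documentation.
        return self.category is Category.PRODUCT

# The three examples from this article, re-categorized (IDs are made up):
reqs = [
    Requirement("SYS-001", "Developed in compliance with IEC 62304", Category.PROCESS),
    Requirement("SYS-042", "System color is green RAL 6019", Category.BUSINESS),
    Requirement("SYS-077", "Serviceable part X lasts N years", Category.BUSINESS),
]
formal_scope = [r for r in reqs if r.formal_vv_required]  # empty: none needs formal V&V
```

Filtering the requirement set this way is exactly what shrinks the formal verification scope – in the customer case mentioned in the conclusion, from ca. 200 to ca. 60 requirements.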

With these categories in mind, let’s revisit our examples to see what would have changed if we had applied them from the start and adjusted our approach accordingly:

The example of meeting a certain regulatory standard fits the category of a process requirement. Rather than trying to force a square peg into a round hole, the requirement should have been removed from the system requirements and moved to the appropriate QA & regulatory document(s). This removes the need for additional documentation and ensures that verification is left to the QA & regulatory experts.

The example of the device’s color requires answering an additional question: what need is being met by making the device a certain color? In our example, it turned out to be about branding: the color was important because it matched the company’s color scheme. That makes it a business requirement, rendering moot the entire basis on which things got complicated. Since the safety and effectiveness of the device don’t depend on the color, a simple BOM check, or even a subjective visual check by the business, would be enough. Additionally, the test doesn’t need to be included in the (formal) verification evidence, so very minimal effort is required.

The lifespan of the serviceable part is also a business requirement, for similar reasons. Sure, a device has to be reliable enough to be safe and effective, but that is implicitly covered during validation. It is more fitting to see this requirement as a way to ensure economic viability. By classifying it as a business requirement, the in-depth verification can occur after market introduction, making the requirement more of an intent to be taken into account during development – one that can be verified through more subjective means, outside the scrutiny of regulatory bodies.

Conclusion

As you can see, applying this categorization of requirements could easily have reduced the effort spent on these requirements, meaning lower costs and faster delivery. As a bonus, you will have a better overview of your requirements, and the submission process will go much more smoothly. As stated in the introduction, one could consider all of this obvious and trivial, but in practice it isn’t.

As said: these are all real-world challenges we have seen at customers. Regardless of your opinion on how the requirements were initially handled, it was the introduction of these categories that helped steer the conversations within those organizations in the right direction. And the amount of effort saved is nothing to sneeze at. For example: at one customer, this categorization of requirements reduced the number of requirements in the scope of formal verification and submission (i.e. the requirements that take the most effort and are scrutinized the hardest) from ca. 200 to ca. 60. As it turns out, sometimes equality just isn’t the answer.

Written by Johan van Berkel and Patrick Duisters

More information?

Please contact Pieter Withaar
