
Creating Confidence in the Clouds of Doubt

How electronics systems designers, and others, can think about risk, the costs of risk mitigation, and some simple approaches to lowering risk in a dynamic world of unknowns.


As I was creating a customer proposal in which reducing or mitigating risk in the resulting semiconductors was a key deliverable, it struck me that “confidence” is the subjective state of mind that results from risk mitigation. It is purely subjective because each person weighs risk differently and in an inherently unmeasurable way. At the end of the day, most discussions of risk and confidence in a developer’s mind come down to “Can I say the risk is low enough to say I am done?” or, more simply, “Is my confidence good enough?” All complex products and processes, especially electronics-based systems, have vast and almost uncountable sources of risk to the system working reliably, securely, and without surprises or outside intervention throughout its production life. At the same time, money and time cannot be wasted in creating a secure and reliable system: over-engineering will kill the budgets and schedules of otherwise very useful systems. So this question of what is “Good Enough” seemed worth exploring.

The Need for Confidence: Avoidance of Risk

But first, let’s dwell on the flip side of confidence: risk. Much has been written about risk, and while questions of confidence and risk apply widely, my concerns here relate to the functionality and security of an electronic device or system. Even the simplest electronics-based system is an amalgamation of hundreds of suppliers’ products and services when traced from raw materials to finished product with software and documentation. Risks to these products fall into the following categories:

  • Functionality: Will the system perform as expected? Will it perform additional unwanted functions, or fail to function over time?
  • Infiltration: Has an unauthorized party controlled any part of the design and development, or can it gain access to or control of the system once operational?
  • Exfiltration: Has any proprietary information or critical data about the system’s operation been stolen, or can it be discovered through analysis or post-processing?
  • Availability: Can any sole-source supplier be compromised or inactivated such that the supply chain is interrupted or corrupted?

Recognizing that risks include both human error and nefarious acts undertaken for economic or state-sponsored reasons, those responsible for the security of a semiconductor or electronics-based system have a lot to be concerned about before they reach confidence that is “good enough.” Concern about what is happening in the cyberspace around a supplier’s internet connection is one example of how deep this rabbit hole of worry and risk can go.

Getting to “Good Enough”

In my initial thoughts about this question of “Good Enough” as it relates to cost (time and money being interchangeable in the abstract), I realized that you can never reach 100% confidence no matter how much time or money is expended; approaching 100% confidence asymptotically is the best you can expect, passing “Good Enough” somewhere along the curve.

[Figure: the cost of confidence, an abstract view]
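This asymptotic picture can be sketched in a few lines of Python. This is my own illustration, not a model from the article: the exponential curve shape, the cost scale, and the “good enough” targets are all assumed numbers chosen to make the shape visible.

```python
import math

# Hypothetical model: confidence approaches 100% asymptotically as
# expenditure grows, e.g. C(x) = 1 - e^(-x/tau). The curve shape and
# the cost scale tau are assumptions for illustration only.
TAU = 50.0  # hypothetical "cost scale" of the mitigation effort

def confidence(cost, tau=TAU):
    """Fraction of full confidence reached after spending `cost`."""
    return 1.0 - math.exp(-cost / tau)

def cost_to_reach(target, tau=TAU):
    """Cost needed to reach a 'good enough' confidence level (target < 1)."""
    return -tau * math.log(1.0 - target)

# Confidence never reaches 1.0, and each extra "nine" costs as much
# again: going from 0 to 90% costs the same as going from 90% to 99%.
print(round(cost_to_reach(0.90), 1))  # ~115.1
print(round(cost_to_reach(0.99), 1))  # ~230.3
```

The takeaway matches the curve above: wherever the “Good Enough” line sits, the remaining distance to certainty is infinitely expensive, so the line must be crossed somewhere finite.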

A second truth seemed logical: for the most part, developers can gain significant confidence “right out of the box” because of the correct-by-construction practices, security systems, and supply-chain management oversight inherent in being a successful product-development company with happy customers. That last piece, “happy customers,” presumes but does not fully ensure that all aspects of a prior product have proven secure and reliable to date. Designers and developers know full well that backdoors can remain hidden and that new features may create new risks. We just need to get to “good enough.”

A Closer Look at Improving Confidence

A little further thought about the specific ways confidence is gained led to another realization: each bit of new confidence is paid for incrementally but realized only after the full expenditure of time and money has been made. There is no partial gain in confidence for a partial effort in the middle of a mitigation, security patch, or verification effort. You have to complete the step to gain the additional confidence; you have to tighten the nut on the bolt before you get the benefit of the bolt. So a less abstract view emerges:

[Figure: the cost of confidence, a less abstract view]
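The “no partial credit” idea above can be modeled as a staircase rather than a smooth curve. Again this is my own sketch with made-up step costs and gains; the point is only that a gain is banked when a step is fully funded, and a half-tightened nut holds nothing.

```python
# Hypothetical mitigation steps: (cost, confidence gained on completion).
# The specific steps and numbers are illustrative assumptions.
STEPS = [
    (10, 0.40),  # e.g. reuse of proven, correct-by-construction IP
    (25, 0.30),  # e.g. completing a functional verification regression
    (40, 0.15),  # e.g. finishing a supply-chain audit
]

def confidence_at(budget, steps=STEPS):
    """Total confidence after spending `budget`, completing steps in order."""
    conf, spent = 0.0, 0
    for cost, gain in steps:
        if spent + cost > budget:
            break          # step not finished: no partial gain
        spent += cost
        conf += gain
    return conf

print(confidence_at(30))            # 0.4 -- second step only partly funded
print(round(confidence_at(35), 2))  # 0.7 -- second step just completed
```

Note how spending 30 units buys no more confidence than spending 10: the extra 20 is sunk into an unfinished step until the remaining 5 completes it.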

Another truth confronting the developer is that new levels of confidence, especially when deploying state-of-the-art or state-of-the-practice complex systems, often come in increasingly costly steps. New tools and new methodologies must be explored, learned, and adopted to climb the curve toward “good enough,” and anything new inevitably takes more time and money than proven, already-incorporated mitigations. While it is true that very simple things can sometimes create big steps in confidence, in my experience these simple and effective measures are the exception rather than the rule.

Time Changes Everything

Of course, this picture cannot stay static, because over time new threats emerge and new complexities open new vulnerabilities. For example, a new device or component may come with exciting features that your customers will love or that will enhance your system’s mission, but it may also come with a new supply chain or a new processor with unforeseen cyber risks. Meanwhile, adversaries in the form of competitors, criminals, and nation-states are constantly probing for product weaknesses up and down the supply chain and at every stage of product development and product lifecycle. Constant vigilance is the price of a dynamic market and a dynamic threat space.

[Figure: the cost of confidence, the effects of time]

Over time, the cost of gaining confidence drops as a benefit of experience: the paths have been traversed and the process can be automated for future efforts. But perversely, over that same time in which you are gaining experience, new threats emerge, and the “good enough” line pulls away from your outstretched hand before you can reach it.
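That moving target can be put in toy-model form: an experience curve pulls per-cycle cost down while threat creep pushes the “good enough” bar up. The learning rate and creep rate below are invented numbers, purely to illustrate the two opposing trends.

```python
# Toy model of the moving target. LEARNING and THREAT_CREEP are
# assumed values, not data: each product cycle costs 80% of the last,
# while the "good enough" bar rises one point per cycle.
LEARNING = 0.80      # experience-curve factor per cycle
THREAT_CREEP = 0.01  # rise in the "good enough" threshold per cycle

def cycle_outlook(base_cost=100.0, base_target=0.90, cycles=5):
    """Return (cycle, cost, good-enough target) for each product cycle."""
    rows = []
    for n in range(cycles):
        cost = round(base_cost * LEARNING ** n, 1)
        target = min(base_target + THREAT_CREEP * n, 0.99)
        rows.append((n, cost, target))
    return rows

for n, cost, target in cycle_outlook():
    print(f"cycle {n}: cost {cost}, good-enough bar {target:.2f}")
```

Whether the falling cost curve ever catches the rising bar depends entirely on which rate wins, which is exactly why constant vigilance, rather than a one-time investment, is the price of admission.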

The Clouds of Doubt

A final realization, and complicating factor, is that all of this effort to get to “good enough” occurs, especially in gaining the last critical bits of confidence, in what I am calling “The Clouds of Doubt.” We do not really know what is good enough, and we can only make educated guesses at the failure modes, risk profiles, and threat vectors. So, as illustrated below, it is a moving target in the dark. Sometimes the clouds part and you can chart the path to “good enough” confidence, but mostly you have to assume the unknown will stay unknown.

[Figure: the cost of confidence, within the clouds of doubt]

Navigating the Clouds of Doubt to “Good Enough”

Now that the complexity of the situation is represented about as far as it can be at this level of conceptual abstraction, I can turn to the question that needs to be asked: what generic approaches can be employed to operate within the clouds of doubt and efficiently get to “Good Enough”? It is a reasonable assumption (and is illustrated above) that gaining the last bit of confidence is the costliest and least understood step, especially when the confidence-creation process is new and unfamiliar. The answers, in part, include the following simple concepts:

  • Find information in the dark by “feeling around the edges” of knowledge and threat space

An unknown starts out as “unimaginable,” so imagining the motivations and opportunities of threat actors, and the kinds of exploitable lapses that come from human nature, is a good place to start the navigation. Errors are also well understood to be more frequent in the gaps: between functions, between companies, between standards, and, more fundamentally, between people. Investigating these gaps is likely to be fruitful.

  • Create a “web” of alternate and partial solutions to close portions of the gap

We can all imagine solutions that get halfway or more toward effectiveness. Closing the door to the outside in a room with drafty windows is not good enough, but it is a start. Combining partial solutions that attack the problem from different angles is likely to cover more ground than expensively extending a solution that covers one aspect well but is largely inappropriate for the others.

  • Prepare for “fast follower” leverage of other industries when possible

Many, though not all, problems are faced in parallel industries: insurance may be able to learn from banking, and lessons learned in the medical industry could be leveraged in defense. Leading research and best practices from academia and from corporate and government labs may build confidence and lower risk in unforeseen ways. Interestingly, as each industry specializes, dives deeper into its own arcane issues, and creates solutions for those problems, it becomes less able to see the parallel uses in other industries; its practitioners are simply not paid to look.

  • A phased and collaborative approach allows deliberative steps within the cloud of doubt

A “Phased Approach” implies initial steps that include significant planning and calibration of cost versus benefit. When dealing with unknowns, open-ended trials and explorations can be costly while making little progress toward “good enough.” Collaboration also ensures that one person’s blind spots can be illuminated by another’s insight; the synergy of collaboration in areas of unknown risk and clouds of doubt should not be underestimated.

In conclusion, this exploration of the process of creating confidence started as a simple attempt to understand the issues in mitigating risk within highly integrated electronics and the systems in which they are deployed. But the process elicited a more generic treatment that may have applicability elsewhere, especially in a world where risk rises in proportion to complexity.

Intrinsix designs and develops leading-edge microelectronics for high-value applications in consumer, commercial, and defense markets. Semiconductors must be trusted, and electronics customers need confidence in those electronics. Intrinsix is accredited as a Trusted Supplier by the DMEA, and the author is a member of the Trusted Supplier Steering Group. Through that association, he has contributed significantly to the creation of the GUIDEBOOK on TRUST as well as an NDIA-sponsored Joint Working Group white paper on Future Microelectronics.
