Brian Candler
Jun 26, 2021


I think that "total number of security vulnerabilities" is not a useful metric. Suppose I ask a different question: "How many critical security vulnerabilities is it acceptable to have when deploying your application?" For me, the answer is "zero". By that measure, both of the methodologies you've presented fail, and neither should be used in production. (I assume we agree that a "critical security vulnerability" means one with a high likelihood of being remotely exploited, leading to loss of integrity of your systems and/or exfiltration of confidential data.)

Can you even infer that Alpine, with its 4 critical vulnerabilities, is in some way "better" than Distroless with its 5? Not necessarily: that would hold only if Alpine's 4 were a proper subset of Distroless's 5, i.e. Alpine simply crosses one off the list. If Alpine instead crosses 2 off the list but adds 1 new one, that new one could be the one that gets exploited.
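
To make the subset argument concrete, here is a minimal sketch in Python. The CVE IDs and the two images are entirely made up (they are not taken from any real scan of Alpine or Distroless); the only point is the set comparison.

```python
# Hypothetical scan results: 4 critical findings for image A, 5 for image B.
# The CVE IDs are invented purely to illustrate the set logic; they are not
# taken from any real scan of Alpine or Distroless.
image_a = {"CVE-X-0001", "CVE-X-0002", "CVE-X-0003", "CVE-X-0009"}
image_b = {"CVE-X-0001", "CVE-X-0002", "CVE-X-0003", "CVE-X-0004", "CVE-X-0005"}

# A lower count only means "strictly better" if A's findings are a proper
# subset of B's, i.e. A removes problems without adding any new ones.
print(len(image_a) < len(image_b))  # True: the count alone says A "wins"
print(image_a < image_b)            # False: A is not a subset of B
print(image_a - image_b)            # {'CVE-X-0009'}: the extra one A brings
```

Here the totals favour image A, yet A introduces a finding that B doesn't have, and that one could be the one that gets exploited.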

Alternatively, you could argue that security scanners are flawed and generate lots of false positives - for example, highlighting issues in parts of libraries which your application doesn't use. If you take that point of view, then you must also accept that the raw counts are useless: you would have to investigate each flagged vulnerability one by one to determine whether it is valid in your use case.
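
To illustrate what that triage involves, here is a minimal sketch. The report structure, package names, and the in-use package list are all invented for illustration; real scanners have their own report formats, and working out which packages an application actually exercises is itself non-trivial.

```python
# Hypothetical triage: narrow a scanner's findings to packages the
# application actually uses, then review each survivor by hand.
# Everything below (report shape, package names) is invented.
findings = [
    {"id": "CVE-X-0101", "package": "libfoo", "severity": "CRITICAL"},
    {"id": "CVE-X-0102", "package": "libbar", "severity": "CRITICAL"},
    {"id": "CVE-X-0103", "package": "libbaz", "severity": "HIGH"},
]

packages_in_use = {"libfoo"}  # determined by inspecting the application itself

needs_review = [
    f for f in findings
    if f["severity"] == "CRITICAL" and f["package"] in packages_in_use
]

for f in needs_review:
    # Each remaining finding still needs a human decision: is the vulnerable
    # code path actually reachable in this deployment?
    print(f["id"], f["package"])
```

The count of findings in the raw report tells you nothing about how many of those per-finding decisions come out as "exploitable here".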
