Since dependency vulnerability scanning has been suggested by colleagues as something that could help with security/standards compliance and addressing legacy software, I’ve been looking into a few services to see how useful the available options could be.

NPM Audit

Later versions of NPM perform dependency scanning by default as part of the install command, and the output of npm audit shows a few lines of relevant information about each package with a reported vulnerability – its severity rating, a brief description of the vulnerability and whether an update is available.
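If the full report is too noisy, it can be narrowed down. For example, the --audit-level flag filters the report by severity, and a dry run shows what npm audit fix would change without touching anything (exact behaviour varies a little between NPM versions):

npm audit --audit-level=high
npm audit fix --dry-run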

Initially this seems very useful. Software dependencies will have defects and security vulnerabilities, and it makes sense to have an automated tool that alerts us to them. However, in a medium-sized application, this one flagged hundreds of packages as having vulnerabilities, and a handful of those were marked as ‘critical’. They warranted a deeper look. Some of the reports had no information explaining why a given severity rating was assigned, and no actual research behind them, and this is my main criticism of the dependency scanning services I’ve looked at. Why, exactly, is a given vulnerability marked as ‘critical’? Is it even exploitable in the context of how the dependency is being used? How can we know someone hasn’t merely copied and pasted the output of a scanning tool and arbitrarily assigned a metric or two? Are some of the results false positives?

A few of us had been thinking that, if our organisation did make serious use of this, several people would need to be tasked with researching the reports and making informed decisions about whether to prioritise upgrading certain dependencies. We have already run into situations where functions in one package version were deprecated and replaced by something else in a later version, making it necessary to rewrite some of our code.

Scanning in Visual Studio and the Package Manager Console

The NuGet Package Manager provides very useful information, under the Updates tab in the NuGet window, about whether the installed dependencies for a project can and should be updated. To check those dependencies against known vulnerabilities, the following command can be run in the Package Manager Console:

dotnet list package --vulnerable

This will use api.nuget.org to compare the installed package references with known vulnerabilities in the GitHub Advisory Database. The output isn’t too different from what npm audit gives us.

Project `MyProject.Api` has the following vulnerable packages

   [net6.0]:
   Top-level Package           Requested   Resolved   Severity   Advisory URL
   > System.Data.SqlClient     4.8.3       4.8.3      Moderate   https://github.com/advisories/GHSA-8g2p-5pqh-5jmc

Obviously, most .NET applications use nowhere near the same number of dependencies as Node.js/React ones, and the vulnerability reports I’ve seen so far are more actionable. Plus, when I take on a new project, I make a point of updating package and framework versions.
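The same .NET CLI command has a few related flags that are worth knowing about – --outdated lists available updates, --deprecated lists packages their authors have marked as deprecated, and --include-transitive extends the vulnerability check beyond top-level package references:

dotnet list package --outdated
dotnet list package --deprecated
dotnet list package --vulnerable --include-transitive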

Dependabot

A feature of GitHub, Dependabot performs a very similar function to the local scanning methods, but I think it’s more useful for a DevOps team that wants to use dependency scanning as a pre-deployment check.

In a project, go to the Settings tab. Under the ‘Code security and analysis’ section, there will be options for Dependabot.

Here the following can be enabled:

  • Dependabot alerts
  • Dependabot security updates
  • Dependabot version updates

If we just want Dependabot to scan for vulnerabilities and notify us of them, without making changes to the code, enable only the first option. Any alerts can be viewed under the project’s Security tab, in the ‘Vulnerability alerts’ section.
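The third option, version updates, needs a configuration file at .github/dependabot.yml before it does anything. A minimal sketch – the ecosystem and schedule here are placeholders to adapt, not our actual configuration:

version: 2
updates:
  - package-ecosystem: "npm"   # or "nuget", "docker", etc.
    directory: "/"
    schedule:
      interval: "weekly"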

Under the Security tab, it’s also possible to set up code scanning, and GitHub provides the CodeQL Analysis tool. Setting this up involves creating a workflow .yml file in the project’s .github/workflows directory, and adding whichever configuration is appropriate for the project type.
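As a rough sketch of what that file might contain – the branch names and language here are assumptions for illustration, and GitHub’s own starter workflow is more complete:

name: CodeQL
on:
  push:
    branches: [ main ]
  pull_request:
    branches: [ main ]
jobs:
  analyze:
    runs-on: ubuntu-latest
    permissions:
      security-events: write   # required to upload results to the Security tab
      contents: read
    steps:
      - uses: actions/checkout@v4
      - uses: github/codeql-action/init@v3
        with:
          languages: csharp    # assumption: a .NET project
      - uses: github/codeql-action/autobuild@v3
      - uses: github/codeql-action/analyze@v3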

MEND SCA

Formerly known as WhiteSource, MEND is the solution we decided on, as we develop with a variety of frameworks, languages and platforms. The MEND dashboard – which is accessible to everyone in the development and DevOps teams – provides a single point for monitoring all of this.

(There appear to be a lot of ‘critical’ vulnerabilities, but the products being scanned were all updated within the last few months)

Each entry under the Library column links to information about the module/library, and a description of the vulnerability reported for it. Here we also find links to the CVEs. We might even get CVSS Base Score Metrics that could help us assess the risk of exploitation. The data we get from MEND is highly specific and granular, but I still came away with the belief that it’s only useful for getting a general idea of the state of the software we’re supporting.
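For illustration, a CVSS v3.1 base vector encodes how an attack works rather than just a single number – the example below is a generic worst-case vector, not one taken from MEND’s output:

CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H   (Base Score 9.8, Critical)

Metrics such as Attack Vector (AV:N, reachable over the network) and Privileges Required (PR:N, none) are exactly the kind of context that helps answer whether a flagged vulnerability is actually exploitable in situ.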

With this in mind, the Security Trends charts are better used for getting an idea of how quickly technical debt and legacy packages are accumulating across everything we support.

One of the more interesting things I noticed about MEND is that it also scans the base images of Docker containers, and all the packages within them. That’s something that might otherwise be overlooked.
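In Dockerfile terms, this means everything pulled in by the FROM line is in scope – for example (a generic base image, not one of ours):

FROM mcr.microsoft.com/dotnet/aspnet:6.0
# The OS packages and runtime libraries in this base layer get scanned too,
# not just the application's own package references.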

Is dependency scanning that useful?

All the solutions I’ve looked at are variations on the same thing: they read package manifests (e.g. package.json or a Visual Studio project file), look up each dependency in a vulnerability database and present the results in their own way.
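To make that concrete, here’s a minimal sketch of the core loop in TypeScript, using the public OSV.dev query API as the vulnerability database. The choice of OSV and the naive handling of semver ranges are my own assumptions – a real scanner resolves exact versions from a lockfile:

// Conceptual sketch of a dependency scanner: read a manifest, query a
// vulnerability database for each dependency, print the findings.
// Requires Node 18+ for the global fetch API.
import { readFileSync } from "fs";

interface OsvVuln { id: string; summary?: string; }
interface OsvResponse { vulns?: OsvVuln[]; }

async function scan(manifestPath: string): Promise<void> {
  const manifest = JSON.parse(readFileSync(manifestPath, "utf8"));
  const deps: Record<string, string> = manifest.dependencies ?? {};
  for (const [name, range] of Object.entries(deps)) {
    // Naively strip range operators like "^" or "~"; a real scanner
    // would take the resolved version from the lockfile instead.
    const version = range.replace(/^[^\d]*/, "");
    const res = await fetch("https://api.osv.dev/v1/query", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ package: { name, ecosystem: "npm" }, version }),
    });
    const data = (await res.json()) as OsvResponse;
    for (const v of data.vulns ?? []) {
      console.log(`${name}@${version}: ${v.id} – ${v.summary ?? "(no summary)"}`);
    }
  }
}

scan("package.json").catch(console.error);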

The main criticism I have is that the scanning isn’t directly useful for enhancing the security of applications, largely because the metrics won’t accurately reflect how exploitable the applications are. After all, there’s only one reference to this in the current OWASP Top 10 (A06:2021 – Vulnerable and Outdated Components).

Higher-level security testing of the application itself, with the right framework, would give us a more accurate idea of how exploitable it is, and would cover other vulnerabilities related to how the software is put together and configured.

What is it useful for, then? I believe dependency scanning is more appropriate as a tool for addressing technical debt. It can be used to encourage developers to keep the packages they use reasonably updated, and to help prevent a situation in which a huge number of legacy components has accumulated.

The idea was tentatively suggested that we could use these systems – MEND in particular – for continual monitoring and patching. We are already upgrading, patching, documenting and supporting a large number of applications and services, with each upgrade involving a necessarily complicated process to mitigate the risk of disrupting anything critical that happens to depend on it. Continually resolving everything flagged by the scanners and releasing updated versions of the software wouldn’t be realistic unless there was an additional team dedicated to just that. Even then, that team would be hammered with alerts, so we would need a way of filtering out the ones that aren’t useful.

In my opinion, dependency scanners should be treated as tools for software engineers with the expertise to make informed decisions about how the results should be acted on, and as a source of information about the general state of the software and technical debt.