Every day that goes by, the SolarWinds supply chain security failure gets bigger and bigger. Microsoft, itself a victim, reports that 40 of its customers installed trojanized versions of Orion, SolarWinds' proprietary network monitoring product. Victims include the U.S. Department of Energy and the National Nuclear Security Administration, several state governments, and many others.
How bad is it? The Cybersecurity and Infrastructure Security Agency (CISA) said the hacks posed a "grave risk" to US governments at all levels. That’s how bad.
What really caught my attention, though, is that SolarWinds has been anti-open source for years. Cloud native computing, from Docker and Kubernetes to the last little program on the Cloud Native Computing Foundation’s (CNCF) Cloud Native Interactive Landscape, is open source.
Ironically, SolarWinds dismissed open source software as untrustworthy because anyone can infect it with malicious code. A SolarWinds writer claimed that security "risk is far less when it comes to proprietary software. Due to the nature of open source software allowing anyone to update the code, the risk of downloading malicious code is much higher." One member of the SolarWinds community referred to using open source software as "eating from a dirty fork": "When you reach in the drawer for a clean fork, you could be pulling out a dirty utensil." The SolarWinds writer agreed that the analogy was "right on the money."
SolarWinds followed this up in another blog post by claiming that the very foundations of cloud native computing, containers and container orchestration, aren’t trustworthy either. Omar Rafik, SolarWinds Senior Manager of Federal Sales Engineering, wrote that "containers are designed in a way that hampers visibility" and that "Visibility becomes particularly problematic when using an orchestration tool like Docker Swarm or Kubernetes to manage connections between different containers because it can be difficult to tell what is happening."
Trust us, we already know security is a challenge in cloud native computing. We work on locking down cloud native computing every day.
But open source isn’t what’s inherently insecure here. Proprietary software, a black box where you can never know what’s really going on, is now, always has been, and always will be the bigger security problem.
I would no more trust anything mission-critical to proprietary software than I would drive a car at night without lights or a fastened seat belt. That’s why I’m writing this on Linux Mint with LibreOffice rather than Windows and Microsoft Word. That’s why the internet, cloud native computing, and the cloud, yes, even Microsoft Azure, run on Linux and open source.
Now, there’s nothing magical about open source software. People who assume that a miracle happens when you use open source and you’re somehow perfectly safe — I’m looking at you Equifax — deserve what they get when they don’t keep their software up to date. In that case, it was Apache Struts.
And, in still another infamous case, a missed check on a length variable in OpenSSL led to the Heartbleed security hole. I called it open source’s greatest failure to date. I wasn’t wrong.
So, why with all that history am I saying open source software is inherently more secure? Because it is.
A fundamental open source principle is that bringing many eyeballs to a program catches more errors. That doesn’t mean all errors are caught, just far more than a single proprietary company’s developers will catch on their own.
The corollary comes from Eric S. Raymond, one of open source’s founders, who famously said, "Given enough eyeballs, all bugs are shallow." He called it "Linus’s Law." It has worked well. Just consider the sheer number of serious Windows bugs (does a month go by without one?) compared to those in Linux.
There are many ways to find those open source mistakes. You can, of course, do it yourself; the code, after all, is open. Not sure what’s new in your software supply chain’s programs? You can use Red Hat’s Release Monitoring service or Repology. The nvchecker program is also useful. Or you can turn to a third-party code analysis tool such as Synopsys’ Black Duck or Sonatype’s Nexus Lifecycle.
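To give a flavor of what that kind of monitoring looks like, here is a minimal, hypothetical nvchecker configuration (nvchecker 2.x reads TOML; the two packages watched here are illustrative, not a recommendation):

```toml
# Hypothetical nvchecker 2.x config, e.g. saved as nvchecker.toml.
# Each table names a dependency to watch and where its releases appear.

[__config__]
oldver = "old_ver.json"   # last-seen upstream versions
newver = "new_ver.json"   # newly discovered versions

[openssl]
source = "github"
github = "openssl/openssl"
use_max_tag = true        # track release tags, not commits

[curl]
source = "github"
github = "curl/curl"
use_max_tag = true
```

Running nvchecker against this file reports any watched dependency whose upstream has published a version newer than the one recorded, which is exactly the kind of supply chain visibility proprietary vendors can’t offer for their black boxes.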
The Linux Foundation has also been working on armoring the open source software supply chain with the Open Source Security Foundation (OpenSSF). This cross-industry group brings together open source leaders to build a broader security community. It combines efforts from the Core Infrastructure Initiative (CII) and GitHub’s Open Source Security Coalition with security-savvy companies such as GitHub, GitLab, Google, IBM, Microsoft, NCC Group, the OWASP Foundation, Red Hat, and VMware.
The goal of OpenSSF, according to Mark Russinovich, Microsoft Azure’s CTO, is to help developers better understand the security threats that exist in the open source software ecosystem and how those threats affect specific open source projects.
To help harden open source software, the foundation has four goals: 1) help developers spot security problems; 2) provide the best security tools for open source developers; 3) give them best-practice recommendations; and 4) create an open source software ecosystem where the time to fix a vulnerability and deploy that fix across the ecosystem is measured in minutes, not months.
In short, proprietary software companies like SolarWinds are still making huge security blunders, which stay hidden from users until the damage is done. Meanwhile, open source programmers and their allies keep making their programs ever more secure, in the open, so that everyone benefits.
The Linux Foundation, CNCF, GitLab, Red Hat, Sonatype, Synopsys, and VMware are sponsors of The New Stack.
Feature image by Ryan McGuire via Pixabay.