Ransomware attacks on government IT systems: One reason we need better software architecture

This story comes from a May 22, 2019 report by Tim Pool of Subverse. There has been a spate of ransomware attacks on municipal governments, and in one case, on the State of Colorado's Department of Transportation (CDOT).

I’ve done a little research, looking for details about how this happened. The best answer I’ve found is that the governments that got infected didn’t keep their software up to date with the latest versions and patches, and didn’t maintain good network security. That seems to be the way these ransomware attacks usually happen. In the case of the City of Baltimore, however, the attack method was different from those in the past. The infection appears to have been installed intentionally on the government’s computers by someone, or some group, who got into the systems by some means, perhaps through a remote desktop capability that was left unsecured. It didn’t come in through an infected website, or through a viral e-mail used as bait.

Baltimore’s out-of-date and underfunded IT system was ripe for ransomware attack – Baltimore Brew

Some time back, I got a tip from Alan Kay to look at Mark S. Miller‘s work (no relation) on creating secure software infrastructure, using object capabilities, in an Actor sort of way. It was educational, largely because he said that in the 1960s and ’70s, there were two schools of thought on how to design a secure system architecture. One was using access control lists (ACLs), based around user roles. The other was based around what’s been called “capabilities” (or “cap”). The access control list model is what almost every computer system now in existence uses, and it is the primary cause of the security vulnerabilities we’ve been dealing with for three decades.

He said the problem with ACLs is that whenever you run a piece of software logged in using a certain user credential, that software has all the power and authority that you have. It doesn’t matter if you ran a piece of software by mistake, or someone is impersonating you. The software has as much run of the system as you do. If it manages to elevate its privileges through a hack, making itself appear to the system as the “root” user (or “Admin”), then it has complete run of the system, no holds barred.

The capability model doesn’t allow this. To use an analogy, it sandboxes software based on the capabilities of the component that spawned it, not on the user’s credentials. Capabilities are defined for each object and component in the system, based on what resources it needs. If one component spawns another, the spawned component may define some capabilities of its own, but otherwise it only receives the capabilities that the spawning component chooses to grant, and the spawning component can only grant capabilities from its own repertoire. A user can grant software additional capabilities, if they desire.
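To make the idea concrete, here is a minimal sketch in plain JavaScript of how a capability can be attenuated when one component hands it to another. The names (`makeFile`, `appendOnly`) are illustrative, not part of E or Caja; a "capability" here is just an object reference, and a component can only use what it was explicitly handed.

```javascript
// Full-powered capability: whoever holds this reference can read and append.
function makeFile() {
  let contents = "";
  return {
    read: () => contents,
    append: (text) => { contents += text; },
  };
}

// Attenuation: derive a weaker capability from a stronger one.
// The derived object exposes only a subset of the parent's repertoire.
function appendOnly(file) {
  return { append: (text) => file.append(text) };
}

const file = makeFile();          // the parent holds the full capability
const logger = appendOnly(file);  // a child component is handed append-only

logger.append("entry 1\n");
console.log(file.read());         // the parent can still read what was appended
console.log("read" in logger);    // false -- the child has no way to read
```

The child component never sees `file` itself, so there is no privilege for it to escalate: the only authority it has is the `append` function it was given.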

The strength of this model is that there is no avenue for a piece of software to suddenly gain access to all of the system’s resources just because it fooled you into running it, or because someone stole your credentials and installed it. Even if an impersonator gains access to the system, they can’t rampage through its most sensitive resources, because access isn’t granted by user role.

Part of the security infrastructure of object capabilities is that objects are not globally accessible. Following the Actor model, an object reference can only be obtained through “introduction,” via an interface.
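A hypothetical sketch of what "introduction" means in practice: Alice can talk to Bob only after some party that already holds references to both hands Alice a reference to Bob. There is no global registry to look Bob up in. (The names and functions here are illustrative, not from any real library.)

```javascript
function makeBob() {
  return { greet: (name) => `Bob greets ${name}` };
}

function makeAlice() {
  let bob = null;  // Alice starts with no reference to Bob
  return {
    meet: (other) => { bob = other; },  // the "introduction"
    talk: () => (bob ? bob.greet("Alice") : "Alice knows no one"),
  };
}

const alice = makeAlice();
console.log(alice.talk());   // "Alice knows no one"

const bob = makeBob();
alice.meet(bob);             // a third party introduces them
console.log(alice.talk());   // "Bob greets Alice"
```

Since references only flow along existing references, connectivity itself becomes the access-control mechanism: if nobody introduces you to a sensitive object, you simply cannot name it.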

I’m including a couple of videos from Miller to illustrate this principle, since he created a couple of programming languages (and runtimes), called E and Caja, that apply this capability model on Windows and to JavaScript/EcmaScript, respectively.

The first video demonstrates software written in E, from 2002.

This next one demonstrates a web app written in Caja, from 2011. A major topic of discussion here is using object capabilities in a distributed environment, so cryptographic keys are an important part of it, providing secret identifiers between objects that restrict access to sensitive system resources.

In the Caja demo, he showed that an object unknown to the system can be brought into it and allowed to run, without breaking the object’s operation and without threatening the system’s security. Again, this is because of the architecture’s deep sandboxing. It’s also because of the object-oriented nature of the system: everything is late-bound, and everything executes through interfaces. Objects implementing the same interfaces as sensitive system resources can emulate secure, sandboxed versions of those resources.
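Because everything goes through interfaces, a sandboxed stand-in can be substituted for a sensitive resource, and untrusted code receiving the stand-in can't tell the difference. A minimal sketch of this idea, with illustrative names of my own (`makeRealStore`, `makeSandboxedStore`), assuming a simple key-value resource:

```javascript
// The "sensitive" resource.
function makeRealStore() {
  const data = new Map();
  return {
    get: (k) => data.get(k),
    set: (k, v) => data.set(k, v),
  };
}

// A sandboxed emulation with the same interface: writes go to a private
// overlay, so untrusted code can run freely without touching real state.
function makeSandboxedStore(real) {
  const overlay = new Map();
  return {
    get: (k) => (overlay.has(k) ? overlay.get(k) : real.get(k)),
    set: (k, v) => overlay.set(k, v),
  };
}

const real = makeRealStore();
real.set("secret", 42);

const sandbox = makeSandboxedStore(real);
sandbox.set("secret", 0);            // untrusted code "overwrites" the value
console.log(sandbox.get("secret"));  // 0 -- the sandbox sees its own view
console.log(real.get("secret"));     // 42 -- the real state is untouched
```

The untrusted code runs against the same `get`/`set` interface it expected, which is why late binding and interface-based execution make this kind of deep sandboxing possible.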

Miller has approached capabilities from a standpoint of pragmatism: “the operating system ship has sailed.” We’re stuck with the access control list model as our OS foundation for running software, because too much depends on it. We have a very long legacy of software that uses it, and it’s not practical to just pitch it and start over. He’s talked about “tunneling” from ACLs to capabilities as a gradualist path to a better, more secure model for designing software. This still leaves some security vulnerabilities open for attack in his system, because he hasn’t rebuilt the entire operating system from scratch. He created a layer (the runtimes) on top of which his own software runs, so that it can operate this way without being as vulnerable to attack as the other software on the system.

This pragmatic approach has its virtues, but it would not have prevented the attacks I’m talking about above, because they used system resources that are built into the operating system. What these attacks highlighted for me is that the system architecture we’re using does not provide adequate security for operating on a network. Sure, you can keep patching and upgrading your system software to stay ahead of cyber threats, but fall behind on that without compensating for the security model’s inherent weaknesses, and you put a large chunk of your IT investment at risk.

As I am not terribly familiar with the capability model yet, this is as much as I’ll say about it. I think Miller’s demos of it were very impressive. What this is meant to get across is the need for better software architecture, not in the sense of using something like design patterns, but a different system of semantics that is surfaced through one or more programming interfaces designed to use it.

The architecture Miller demonstrated is object-oriented, and I mean that in the real sense of the term, not in the sense of Java, C++, C#, etc., but in the sense, as I said earlier, of the Actor model, which carries on some fundamental features that were invented in Smalltalk, and adds some more. The implementations he created seem to be synchronous programming models (unlike the Actor model), using the operating system’s multitasking facility to run things concurrently.

It would behoove government and industry to fund basic research into secure system software architecture for the very reason that our governments now depend so much on computers running securely. What Miller demonstrated is one idea for addressing that, but it’s as old as a lot of other ideas in computer science, and it no doubt has some weaknesses that could be overcome by a better model. Though, reading Miller’s commentary, he said he’s biased in favor of thinking that the capability model is the best. He hasn’t seen anything better.
