Encryption is the Foundation of the New Data Center
May 24, 2016
Shah Sheikh

For decades, encryption was an arcane art. It was slow, clunky, and highly complex, and as a result the vast majority of data in the data center still resides on storage systems in the clear. Sensitive data has historically been protected instead by IP segmentation and firewalls with IPS modules.


As workloads in the corporate data center begin to migrate to the public cloud, the need to encrypt data in motion and at rest becomes foundational. In the public cloud, it is much harder to rely on the traditional approaches of wrapping select data with firewalls and IPS systems. At the same time, it is much easier to post a heap of sensitive data to an object store such as Amazon S3 and inadvertently leave it open to the unwashed Internet. Customer-controlled encryption is becoming a necessity for the enterprise hybrid cloud.

But IT security is subject to a fundamental law: “If it slows users down, they will turn it off.”

Historically, encryption has been a prime offender against this law. Consider email. Sending encrypted email makes sense for many reasons: users can recall messages sent in error, and sensitive data can be controlled and blocked from forwarding. Businesses are often run on email, and its contents are extremely sensitive, so protecting it just makes sense. But because PKI encryption has such a wonky impact on the user experience, less than 1 percent of all email sent is actually encrypted at the message level.

To work at scale, an encryption system needs certain attributes to avoid violating the fundamental law about speed. The first attribute that encryption needs is transparency.

A transparent encryption system does not require an agent in the OS, and does not break basic data center operations such as snapshotting or cloning a data volume. In addition, it cannot have a meaningful performance penalty. If turning crypto on cuts performance in half, I can state from experience that it will get turned off.

Fortunately, Intel has done a lot to help improve performance. Intel has created very advanced crypto acceleration in the CPU with a set of instructions called AES-NI. Applications that use AES-NI can run encrypt/decrypt operations of AES-256 (very strong encryption) at line rates with just a single-digit performance penalty — a penalty that can be tough to perceive in the public cloud.

More and more infrastructure platforms will offer built-in, always-on encryption that works without getting in the user’s way. Interestingly, as the encrypt/decrypt functions become highly efficient, the more challenging part of encryption is managing the keys. Infrastructure providers — cloud providers or software vendors such as VMware — will need to offer fully automated key management services to keep track of thousands of keys and have everything work together seamlessly.
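Conceptually, an automated key-management service maps each protected object (a volume, a server image, a network flow) to its own key and handles generation and lookup without user involvement. A minimal sketch in Python of that bookkeeping pattern; the names and structure here are illustrative, not any vendor's actual API:

```python
import secrets

class KeyManager:
    """Toy in-memory key-management service: one 256-bit key per object.

    A real KMS would keep keys in a hardware-backed store, rotate them,
    and audit every release; this sketch only shows the automation idea:
    callers never create or track keys themselves.
    """

    def __init__(self):
        self._keys = {}  # object id -> 32-byte AES-256 key

    def key_for(self, object_id: str) -> bytes:
        # Generate a fresh random key on first use, then return the same
        # key on every later request for that object.
        if object_id not in self._keys:
            self._keys[object_id] = secrets.token_bytes(32)
        return self._keys[object_id]

kms = KeyManager()
k1 = kms.key_for("volume-001")
k2 = kms.key_for("volume-002")
assert k1 == kms.key_for("volume-001")  # stable per object
assert k1 != k2                         # unique per object
```

With thousands of volumes and servers, this per-object mapping is exactly what must be fully automated for encryption to stay invisible to users.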

A precedent exists for this seamless integration of encryption — as user endpoints became increasingly mobile, built-in disk encryption became a necessity. Today the vast majority of laptops, and even mobile devices such as the iPhone, have built-in encryption that is fully transparent, has no perceivable performance impact, and is always on.

As we move to a world where encryption in the data center works seamlessly and at scale, it will change some fundamental assumptions about how a data center operates. First, if the encryption system is implemented in software and can span multiple hybrid clouds, it allows the IT team to think about clouds simply as pools of capacity. One pool is the on-premises private cloud, another pool is a large provider such as AWS, and additional capacity pools can include Google or Microsoft Azure. This model frees the IT team to pick the right pool of capacity for the right workload based on the “-ilities” — that is, scalability, availability, reliability and, of course, discountability.

The second and less obvious transformational aspect of ubiquitous infrastructure encryption is the role it can play in enforcing micro-segmentation and access control. In this always-encrypted data center that we imagine, a cryptographic key must be released in order to boot a new server, attach a data volume to a server or allow one server to communicate with another. If an access control policy were integrated with the key management system, complex access control policies could be implemented quite simply.

Historically, access control policies have been implemented using IP address segmentation, even though the policies themselves are often simple statements such as, “This disk holds source code, so it should be accessed only by a build server and only by users in the LDAP group called Developers.”

But trying to pin down the IP addresses of the allowed storage system, the build server, and increasingly mobile users can cause that one-sentence policy to balloon into thousands of IP-based firewall rules. If the same policy were integrated into the key management system, however, it could be checked each time a request arrives to access the storage system that holds the source code, and the decryption key released only when the check passes.
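The build-server policy above can be expressed directly as a key-release check. A hedged sketch, again with invented names and a toy in-memory policy table rather than a real KMS or LDAP integration:

```python
import secrets

# Hypothetical policy table: object id -> required server role and LDAP group.
POLICIES = {
    "src-volume": {"server_role": "build-server", "ldap_group": "Developers"},
}

# Keys would live in the key-management service; a dict stands in here.
KEYS = {"src-volume": secrets.token_bytes(32)}

def release_key(object_id: str, server_role: str, user_groups: set) -> bytes:
    """Release a decryption key only if the access request satisfies policy.

    Enforcement happens at key release, so the one-sentence rule follows
    the data wherever the volume or server moves; no per-address firewall
    rules need to be generated or maintained.
    """
    policy = POLICIES[object_id]
    if server_role != policy["server_role"]:
        raise PermissionError("wrong server role")
    if policy["ldap_group"] not in user_groups:
        raise PermissionError("user not in required LDAP group")
    return KEYS[object_id]

# A build server acting for a developer obtains the key; any other
# combination is denied before a single block of ciphertext can be read.
key = release_key("src-volume", "build-server", {"Developers", "Staff"})
```

Note that the policy references roles and groups, not addresses, which is why it survives a move from one network segment or cloud to another.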

This idea of key release as a point of policy enforcement is profound. It means that assets such as servers and data sets can move around — from one network segment to another, or from a private cloud to a public cloud — and the access control policy moves with them. This type of fluidity and robustness is necessary for the enterprise to truly embrace hybrid clouds in a production setting.

The data center of the future will be defined entirely in software. It will be dynamic and portable, spanning premises-based private clouds and hyperscale public clouds. It will provide businesses with the agility they need to respond to rapidly changing market conditions, as well as to innovate rapidly. A software-based encryption solution will be the foundation of this new data center architecture. The role and importance of such an encryption layer is only just beginning to be realized.

Source | ComputerWorld