There’s no doubt that the cloud can improve certain aspects of security. After all, clouds have enormous economies of scale that provide customers with dedicated security teams and technologies that aren’t feasible for the vast majority of organizations. That’s the good news. The bad news happens when customers don’t properly configure and secure their own workloads and buckets in a cloud environment.
Consider the recent Capital One data breach, where a hacker exploited a misconfigured cloud firewall to gain access to data on 100 million card customers and applicants in one of the largest breaches in history. There are thousands of potential cloud misconfiguration mistakes such as the one that felled Capital One. But, as with many things, most misconfiguration errors can be classified into a handful of distinct categories. Here are five of the most common areas where cloud misconfiguration attacks happen.
Mistake 1: Storage Access
When it comes to storage buckets, many cloud users think that “authenticated users” covers only those who are already authenticated within their organization or the relevant application. This is, unfortunately, not the case. “Authenticated users” refers to anyone with Amazon Web Services (AWS) authentication, which is effectively any AWS customer. Because of this misunderstanding, and the resulting misconfiguration of the control settings, storage objects can end up fully exposed to public access. Be especially careful when setting storage object access to ensure that only those within your organization who need access have it.
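The distinction can be made concrete with a small check. The sketch below, a hypothetical example, inspects S3-style ACL grants (shaped like the response from boto3's `get_bucket_acl()`) and flags any grant made to the AllUsers or AuthenticatedUsers grantee groups; the two group URIs are the canonical ones AWS uses, while the bucket grants themselves are invented for illustration:

```python
# Canonical URIs AWS uses for the two "everyone" grantee groups.
ALL_USERS = "http://acs.amazonaws.com/groups/global/AllUsers"
AUTHENTICATED_USERS = "http://acs.amazonaws.com/groups/global/AuthenticatedUsers"

def public_grants(grants):
    """Return (URI, permission) pairs for grants that expose a bucket
    to every AWS customer or to the whole world."""
    risky = []
    for grant in grants:
        grantee = grant.get("Grantee", {})
        if grantee.get("Type") == "Group" and grantee.get("URI") in (
            ALL_USERS,
            AUTHENTICATED_USERS,
        ):
            risky.append((grantee["URI"], grant.get("Permission")))
    return risky

# Hypothetical ACL: the owner plus a mistaken READ grant to
# "Authenticated Users" -- i.e., to every AWS account, not just yours.
grants = [
    {"Grantee": {"Type": "CanonicalUser", "ID": "owner-id"}, "Permission": "FULL_CONTROL"},
    {"Grantee": {"Type": "Group", "URI": AUTHENTICATED_USERS}, "Permission": "READ"},
]
print(public_grants(grants))  # flags the AuthenticatedUsers READ grant
```

A pass like this run against every bucket's ACL makes the "authenticated users means everyone" trap visible before an attacker finds it.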
Mistake 2: “Secrets” Management
This configuration mistake can be especially damaging to an organization. It’s critical to ensure that secrets such as passwords, API keys, admin credentials and encryption keys are secured. I’ve seen them openly available in badly configured cloud buckets, compromised servers, open GitHub repositories, and even in HTML code. It’s the equivalent of leaving the key to your home’s deadbolt taped to the front door.
The solution is to maintain an inventory of all secrets that you use in the cloud, and regularly check to see how each is secured. Otherwise, malicious actors could easily access all your data. Worse, they can take control of your cloud resources to do irreparable damage. Equally important is the use of a secrets management system. Services such as AWS Secrets Manager, AWS Systems Manager Parameter Store, Azure Key Vault, and HashiCorp Vault are some examples of robust and scalable secrets management tools.
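Part of that regular check can be automated with a simple scanner. The sketch below is a heuristic, not a complete tool: the `AKIA` prefix for AWS access key IDs is real, but the other patterns are deliberately simplified, and the sample text is invented:

```python
import re

# Heuristic patterns for common secret formats. Real scanners
# (git-secrets, truffleHog, etc.) use far larger rule sets.
SECRET_PATTERNS = {
    "aws_access_key_id": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "private_key_block": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
    "hardcoded_password": re.compile(r"(?i)\bpassword\s*[=:]\s*['\"][^'\"]+['\"]"),
}

def scan_for_secrets(text):
    """Return (pattern_name, matched_text) pairs for likely secrets."""
    hits = []
    for name, pattern in SECRET_PATTERNS.items():
        for match in pattern.finditer(text):
            hits.append((name, match.group(0)))
    return hits

# Invented sample of the kind of thing that ends up in repositories.
sample = 'aws_key = "AKIAABCDEFGHIJKLMNOP"\npassword = "hunter2"\n'
for name, value in scan_for_secrets(sample):
    print("possible secret:", name, value)
```

Running a scan like this over repositories and configuration files before they are pushed catches the "key taped to the front door" cases early; the secrets that must exist should live in one of the management services named above, not in code.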
Mistake 3: Disabled Logging and Monitoring
It’s surprising how many organizations never enable, configure, or even review the logs and telemetry data that public clouds provide, which in many cases is remarkably detailed. Someone on your enterprise cloud team should have the responsibility for regularly reviewing this data and flagging security-related events.
This advice isn’t limited to infrastructure-as-a-service public clouds. Storage-as-a-service vendors often provide similar information, which again, needs to be regularly reviewed. An update bulletin or maintenance alert could have serious security implications for your organization, but it won’t do you any good if no one is paying attention.
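That regular review can start as something very small. The sketch below filters CloudTrail-style event records against a watchlist of security-relevant event names; the watchlist entries are real CloudTrail event names, but the records and the choice of what to watch are illustrative assumptions you would tune to your own environment:

```python
# Event names worth a human's attention; extend for the services you run.
SECURITY_EVENTS = {
    "StopLogging",                    # someone disabled the audit trail
    "DeleteTrail",                    # someone deleted the audit trail
    "PutBucketAcl",                   # bucket permissions changed
    "AuthorizeSecurityGroupIngress",  # a firewall rule was opened
}

def flag_events(events):
    """Return the events whose name is on the security watchlist."""
    return [e for e in events if e.get("eventName") in SECURITY_EVENTS]

# Hypothetical event records in CloudTrail's general shape.
events = [
    {"eventName": "DescribeInstances", "userIdentity": "alice"},
    {"eventName": "StopLogging", "userIdentity": "mallory"},
]
for event in flag_events(events):
    print("review:", event["eventName"], "by", event["userIdentity"])
```

Even a crude filter like this beats logs that nobody reads; the important step is assigning someone to act on what it surfaces.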
Mistake 4: Overly Permissive Access to Hosts, Containers and Virtual Machines
Would you directly connect a physical or virtual server in your data center to the Internet without a filter or firewall to protect it? Of course not. But people do almost exactly that in the cloud all the time. A few examples we’ve recently seen include:
- etcd (port 2379) for Kubernetes clusters exposed to the public Internet
- Legacy ports and protocols such as FTP enabled on cloud hosts
- Legacy protocols such as rsh, rexec, and telnet on physical servers that were virtualized and migrated to the cloud
Make sure you secure important ports and disable — or at the very least lock down — older, insecure protocols in the cloud, just as you would in your on-premises data center.
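An audit for exactly these exposures can be scripted. The sketch below checks security-group-style ingress rules, shaped loosely after an EC2 security group's `IpPermissions` entries, for the risky ports named above being open to the whole Internet; the rules themselves are hypothetical:

```python
# Ports from the examples above, mapped to the service that uses them.
RISKY_PORTS = {
    21: "FTP",
    23: "telnet",
    512: "rexec",
    514: "rsh",
    2379: "etcd",
}

def exposed_rules(rules):
    """Return (port, service) for risky ports open to 0.0.0.0/0."""
    findings = []
    for rule in rules:
        open_to_world = any(
            r.get("CidrIp") == "0.0.0.0/0" for r in rule.get("IpRanges", [])
        )
        port = rule.get("FromPort")
        if open_to_world and port in RISKY_PORTS:
            findings.append((port, RISKY_PORTS[port]))
    return findings

# Hypothetical rules: HTTPS open to the world is expected; etcd is not.
rules = [
    {"FromPort": 443, "IpRanges": [{"CidrIp": "0.0.0.0/0"}]},
    {"FromPort": 2379, "IpRanges": [{"CidrIp": "0.0.0.0/0"}]},
]
print(exposed_rules(rules))  # [(2379, 'etcd')]
```

The same idea applies to any cloud's firewall rules: enumerate them, compare against a list of ports that should never face the Internet, and treat every hit as an incident.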
Mistake 5: Lack of Validation
This final cloud mistake is a meta-issue: We often see organizations fail to create and implement systems that identify misconfigurations as they occur. Whether it’s an internal resource or an outside auditor, someone must be responsible for regularly verifying that services and permissions are properly configured and applied. Because mistakes are inevitable as the environment changes, put this verification on a fixed schedule and back it with a rigorous process for periodically auditing cloud configurations. Otherwise, you risk leaving a security flaw in place that malicious actors can exploit.
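The core of such a validation pass is a drift check: compare the settings you observe against an approved baseline and report every mismatch. A minimal sketch, with both the baseline and the observed settings as hypothetical examples:

```python
def config_drift(baseline, observed):
    """Return {setting: (expected, actual)} for every mismatch."""
    drift = {}
    for key, expected in baseline.items():
        actual = observed.get(key)
        if actual != expected:
            drift[key] = (expected, actual)
    return drift

# Hypothetical baseline and observed state for one account.
baseline = {"bucket_public_access_blocked": True, "audit_logging_enabled": True}
observed = {"bucket_public_access_blocked": True, "audit_logging_enabled": False}
print(config_drift(baseline, observed))  # {'audit_logging_enabled': (True, False)}
```

Run on a schedule, a check like this turns "someone should verify the configuration" into a concrete report that either comes back empty or names exactly what changed.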
The cloud has the potential to be a secure place for data and workloads, but only if cloud customers live up to their side of the shared responsibility model. By keeping these common mistakes in mind and setting up a system to catch them as soon as they happen, you can greatly reduce the risk to your digital assets in the cloud.
Peter Smith, Edgewise Founder and CEO, is a serial entrepreneur who built and deployed Harvard University’s first NAC system before it became a security category. Peter brings a security practitioner’s perspective to Edgewise with more than ten years of expertise as an …