Following the recent massive Capital One data breach, it’s clear that even some of the world’s largest and most respected companies operating in the cloud are still vulnerable to compliance and security failures. In this case, federal prosecutors charged a Seattle woman with stealing more than 100 million credit applications. As the details of the attack became public, Capital One’s AWS environment came under scrutiny.
It’s now accepted that the attack began with a misconfigured web application firewall. Ephemeral AWS credentials were extracted from the instance’s IAM role and used to exfiltrate data from under-restricted S3 buckets.
Though Capital One gave up a frightening amount of critical consumer data, it was also quick and accountable in its response. And it demonstrated a simple fact: the public cloud is far more secure than on-premises data centers, but it isn’t impenetrable.
In fact, McAfee research that surveyed 1,000 IT professionals found that companies miss 99 percent of their infrastructure-as-a-service (IaaS) misconfigurations. Those configurations only grow more complex as companies adopt multicloud environments and expand their cloud footprints, making it easier than ever for these errors to creep in.
Following the Capital One incident, AWS said it planned to do more to address security by scanning customer builds for misconfigurations, which raises an interesting question: in an increasingly complicated landscape of public cloud platforms, vendors and technologies underpinning enterprise applications, is it possible to consistently keep this type of misconfiguration from happening in the first place? Could an entire enterprise cloud deployment, for example, be pre-configured into compliance automatically?
One new compliance automation approach says yes, by erecting a conceptual “DevOps firewall”. Just as a network needs a firewall to keep malicious traffic out, enterprises can use this approach to ensure their organization never deploys insecure, noncompliant code to the public cloud.
A DevOps firewall approach means there are guardrails that simultaneously prevent breaches and ensure the environment is configured securely, both prelaunch and continuously post-deployment, so that a Capital One-style security pitfall doesn’t happen again.
Critical to a DevOps firewall are automation and a shift toward tackling compliance at every stage of migrating critical applications to the cloud. For the approach to succeed, your team must work in an automated environment that can monitor compliance in real time.
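As a concrete illustration of the “monitor compliance in real time” half of that requirement, the sketch below is one minimal approach, assuming Python with boto3 and an account where the S3 Public Access Block feature is the chosen control; in practice a job like this would run on a schedule and feed an alerting system rather than print to stdout.

```python
"""Minimal sketch of a continuous post-deployment check: flag any S3 bucket in
the account that does not have all four Public Access Block settings enabled.
Illustrative only; alerting and scheduling are left out."""
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

def buckets_missing_public_access_block():
    flagged = []
    for bucket in s3.list_buckets()["Buckets"]:
        name = bucket["Name"]
        try:
            cfg = s3.get_public_access_block(Bucket=name)["PublicAccessBlockConfiguration"]
            if not all(cfg.values()):
                flagged.append(name)
        except ClientError:
            # No Public Access Block configuration exists for this bucket at all.
            flagged.append(name)
    return flagged

if __name__ == "__main__":
    for name in buckets_missing_public_access_block():
        print(f"ALERT: bucket {name} is not fully protected against public access")
```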
Automation is also a critical part of the DevOps process through Continuous Integration/Continuous Deployment (CI/CD). Continuous testing forms part of the CI/CD model, running tests at set stages of the pipeline.
The DevOps firewall approach goes a step further: by defining infrastructure as code using vetted patterns, it uses the CI/CD pipeline to enforce secure deployments both pre-provisioning and post-provisioning.
An additional best practice is to drive compliance from a configurable, automated library of security and regulatory frameworks, eliminating the risk of deploying infrastructure that would fail compliance checks. In short, the DevOps firewall never allows a deployment unless the environment is constructed securely.
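As a rough illustration of what such a pre-provisioning gate could look like, the sketch below assumes infrastructure is defined in Terraform and that the plan has been exported with `terraform show -json plan.out > plan.json`; the attribute names checked (`acl`, `server_side_encryption_configuration`) vary by AWS provider version, so treat the rules as placeholders rather than a definitive policy set.

```python
#!/usr/bin/env python3
"""Minimal pre-provisioning gate sketch: fail the pipeline if the Terraform plan
contains S3 buckets that are publicly readable or unencrypted. Assumes the plan
was exported as JSON; rules and attribute names are illustrative placeholders."""
import json
import sys

PLAN_FILE = sys.argv[1] if len(sys.argv) > 1 else "plan.json"  # hypothetical path

def s3_violations(plan):
    problems = []
    for rc in plan.get("resource_changes", []):
        if rc.get("type") != "aws_s3_bucket":
            continue
        after = (rc.get("change") or {}).get("after") or {}
        # Rule 1: no public ACLs on any bucket.
        if after.get("acl") in ("public-read", "public-read-write"):
            problems.append(f"{rc['address']}: public ACL {after['acl']!r}")
        # Rule 2: server-side encryption must be declared.
        if not after.get("server_side_encryption_configuration"):
            problems.append(f"{rc['address']}: no server-side encryption configured")
    return problems

def main():
    with open(PLAN_FILE) as fh:
        plan = json.load(fh)
    problems = s3_violations(plan)
    for p in problems:
        print(f"COMPLIANCE FAILURE: {p}")
    return 1 if problems else 0  # a nonzero exit code blocks the deploy stage

if __name__ == "__main__":
    sys.exit(main())
```

Dedicated policy-as-code tooling (Open Policy Agent, AWS Config rules and similar) covers the same ground far more thoroughly; the point is simply that the check runs, and can fail the build, before anything is provisioned.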
The guardrails you set within a DevOps firewall can take many forms: restricting S3 bucket access to known IP ranges, monitoring for anomalous behavior, removing unused credentials. Most importantly, these tasks are automated and completed before any infrastructure is provisioned, with the end goal of keeping your cloud more secure.
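For example, restricting a bucket to known IP ranges can be expressed as a deny-by-default bucket policy. The sketch below is illustrative only, with a hypothetical bucket name and documentation-range CIDRs; it would need additional conditions (such as aws:sourceVpce) if legitimate traffic reaches the bucket through VPC endpoints.

```python
"""Illustrative only: deny all S3 access to a bucket unless the request comes
from a known IP range. Bucket name and CIDR blocks are placeholders."""
import json
import boto3

BUCKET = "example-waf-logs-bucket"                      # hypothetical bucket name
ALLOWED_CIDRS = ["203.0.113.0/24", "198.51.100.0/24"]   # documentation ranges

policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyOutsideKnownRanges",
        "Effect": "Deny",
        "Principal": "*",
        "Action": "s3:*",
        "Resource": [
            f"arn:aws:s3:::{BUCKET}",
            f"arn:aws:s3:::{BUCKET}/*",
        ],
        # Deny any request whose source IP is not in the allow list.
        "Condition": {"NotIpAddress": {"aws:SourceIp": ALLOWED_CIDRS}},
    }],
}

s3 = boto3.client("s3")
s3.put_bucket_policy(Bucket=BUCKET, Policy=json.dumps(policy))
```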
Looking back at the Capital One incident, a few things stand out that show where a DevOps firewall approach would have produced a different outcome:
- A single misconfigured firewall should not be able to cause such a vast security breach; failsafe measures should catch intruders. The lack of redundancy in the defenses suggests more systemic security issues.
- Broader security architecture reviews should have highlighted the excess S3 permissions and removed that access from the role.
- If needed, those permissions could have been limited to a bucket dedicated to WAF logging. A DevOps firewall approach would have flagged this during the compliance stage of migration and, on an ongoing basis, ensured that permissions stayed appropriately scoped.
- Why weren’t the S3 buckets, which were filled with highly sensitive information, restricted to known IP ranges only? These settings can be managed and continuously monitored with automated compliance tools under the DevOps firewall approach.
- Netflix recently released RepoKid, an open source tool that removes permissions that go unused; it could have stopped the attack before it happened (a sketch of the underlying idea follows this list).
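RepoKid itself analyzes IAM Access Advisor data and then rewrites role policies. The sketch below is not RepoKid, just a minimal illustration of the underlying idea using boto3: listing services a role is permitted to call but has never actually used. The role ARN is a placeholder, and pagination of the report is omitted for brevity.

```python
"""Minimal sketch of the idea behind RepoKid: use IAM Access Advisor to list
services a role can call but has never used. Role ARN is a placeholder."""
import time
import boto3

ROLE_ARN = "arn:aws:iam::123456789012:role/example-app-role"  # hypothetical

iam = boto3.client("iam")
job_id = iam.generate_service_last_accessed_details(Arn=ROLE_ARN)["JobId"]

# Poll until the Access Advisor report is ready.
while True:
    report = iam.get_service_last_accessed_details(JobId=job_id)
    if report["JobStatus"] != "IN_PROGRESS":
        break
    time.sleep(2)

unused = [
    svc["ServiceNamespace"]
    for svc in report.get("ServicesLastAccessed", [])
    if "LastAuthenticated" not in svc  # never used since tracking began
]
print("Granted but never used:", ", ".join(sorted(unused)) or "none")
```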
Put it all together and the answer is clear: with the right pre-provisioning and post-provisioning steps, automated compliance best practices and a fundamental organizational commitment to cybersecurity hygiene, events like the Capital One breach are avoidable, and the cloud remains as safe and effective a place for infrastructure as any enterprise could need it to be.
Help Net Security, Khareem Sudlow