Auditing AWS Environments

Introduction

Related to our new TROOPERS workshop “Jump-Starting Public Cloud Security”, this post describes some relevant components which need to be taken care of when constructing and auditing an Amazon Web Services (AWS) cloud environment. These include, among others, the general AWS account structure, Identity and Access Management (IAM), auditing and logging (CloudTrail and CloudWatch), Virtual Private Cloud (VPC) networks, as well as S3 buckets.

The AWS IAM service is responsible for identity and access management (surprise!). This includes managing user accounts, defining password policies, and – most importantly – creating, defining, and assigning groups and roles.

CloudTrail, also an AWS proprietary service, is responsible for collecting events and activities that occur within the account, whether triggered via the management console, the command line interface, or the AWS APIs and SDKs. AWS CloudWatch, on the other hand, collects all kinds of monitoring and logging data from AWS resources (e.g. your precious EC2 instances hosting business-critical services) and allows for setting up notifications, for example to detect malicious activities within the cloud environment early.

Amazon S3 is a simple storage solution which integrates with many AWS services. And yes, indeed: S3 is the underlying service that most of the news articles describing cloud breaches and data leaks are talking about (e.g. [1], [2], [3], [4], [5]).

Identity and Access Management (IAM)

One critical component to look at when designing and auditing an AWS environment is the general platform account structure. A central, organizational account should be defined, under which all separate project/business unit platform accounts are created. This allows for increased transparency, gives the security department intervention capabilities in case of emergency, and also enables dedicated account management structures (including separate rights & roles concepts). Further, cost quotas can be set up per platform account to limit financial damage from resource misuse by attackers (e.g. [6], [7], [8]). It should not go unnoted that the organizational platform account should not be used for any resource consumption (meaning, for example, creating/running EC2 instances) but only for platform account management.

Within a single project account, an account structure based on the separation-of-duties and least-privilege principles should be defined. One possible IAM structure could look like the following: first, an IAM master role, which is allowed to create groups and roles, and an IAM manager role, which is allowed to create new user accounts and assign users to pre-defined roles and/or groups, should be created; subsequently, user accounts and other technical accounts should be created and assigned to the respective roles and groups. Of course, there is the possibility to connect the IAM console to a central IAM, for example the company-internal AD instance. However, there are various caveats that need to be analyzed individually. Basically, it comes down to your risk appetite. But we think most security people will get the creeps when it comes to setting up a trust or even a sync from internal ADs to the cloud provider’s AD services. We’ll talk about this in a bit more detail in our training.
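To make the separation of duties concrete, the manager role’s permissions policy could look roughly like the following sketch. The role name “IAMManager” and the exact action list are our illustrative assumptions, not an official baseline; the idea is simply that the manager may handle users and group membership but is explicitly denied policy management, which stays with the master role.

```python
import json

# Hypothetical permissions policy for an "IAMManager" role: may create
# users and assign them to existing groups, but may NOT create or attach
# policies -- that capability stays with the "IAMMaster" role.
iam_manager_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "ManageUsersAndMembership",
            "Effect": "Allow",
            "Action": [
                "iam:CreateUser",
                "iam:DeleteUser",
                "iam:AddUserToGroup",
                "iam:RemoveUserFromGroup",
                "iam:ListUsers",
                "iam:ListGroups",
            ],
            "Resource": "*",
        },
        {
            "Sid": "NoPolicyManagement",
            "Effect": "Deny",
            "Action": [
                "iam:CreatePolicy",
                "iam:AttachUserPolicy",
                "iam:AttachGroupPolicy",
                "iam:AttachRolePolicy",
            ],
            "Resource": "*",
        },
    ],
}

print(json.dumps(iam_manager_policy, indent=2))
```

The explicit Deny statement is a belt-and-braces choice: even if a broader Allow gets attached to the same principal later, the Deny wins in IAM’s evaluation logic.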

Having designed an account structure including groups and roles, access to the accounts should of course be secured. For example, the root user of a distinct platform account should not have access keys defined and should not be used in daily operations. Also, multi-factor authentication (MFA) and proper password / access key policies should be defined for all accounts, especially those that have access to the management console (did we mention yet that governance is the key to success?!).
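An account-wide password policy is a quick win here. The sketch below shows the parameters as they could be passed to IAM’s UpdateAccountPasswordPolicy API (e.g. via boto3, shown commented out); the concrete values are illustrative assumptions on our part, not a mandated standard — pick numbers matching your own policy.

```python
# Illustrative account password policy; the parameter names match IAM's
# UpdateAccountPasswordPolicy API, the values are our assumptions.
password_policy = {
    "MinimumPasswordLength": 14,
    "RequireSymbols": True,
    "RequireNumbers": True,
    "RequireUppercaseCharacters": True,
    "RequireLowercaseCharacters": True,
    "AllowUsersToChangePassword": True,
    "MaxPasswordAge": 90,            # days until forced rotation
    "PasswordReusePrevention": 24,   # remember the last 24 passwords
}

# With credentials configured, this could be applied via:
# import boto3
# boto3.client("iam").update_account_password_policy(**password_policy)
```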

To follow the “Keep it simple, stupid” (KISS) principle, access policies should only be attached to roles or groups (to which the individual users are assigned), not directly to users. This enforces a well-defined role concept which can be managed more easily than per-user access policies. Also, fine-grained policies should be preferred over coarse administrative privileges (“*:*”) to follow the least-privilege principle (spot the difference from “the old world” ;-)).
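The difference between the two approaches is easiest to see side by side. Below, the coarse “allow everything” policy is contrasted with a fine-grained one scoped to read-only access on a single bucket; the bucket name “example-reports” is a placeholder of ours.

```python
import json

# The coarse administrative policy one should avoid handing out broadly:
admin_policy = {
    "Version": "2012-10-17",
    "Statement": [{"Effect": "Allow", "Action": "*", "Resource": "*"}],
}

# ...versus a least-privilege policy: read-only access to one
# hypothetical bucket ("example-reports" is a placeholder name).
readonly_reports_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:GetObject", "s3:ListBucket"],
        "Resource": [
            "arn:aws:s3:::example-reports",    # bucket itself (for ListBucket)
            "arn:aws:s3:::example-reports/*",  # objects inside (for GetObject)
        ],
    }],
}

print(json.dumps(readonly_reports_policy, indent=2))
```

Note that S3 needs both the bucket ARN and the object ARN in the resource list, because ListBucket acts on the bucket while GetObject acts on objects — a common stumbling block when tightening policies.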

Auditing and Logging

The next components to have a closer look at are CloudTrail and CloudWatch Logs, which allow account-wide logging and monitoring. CloudTrail should have at least one trail configured to log events from all regions, backed by a securely configured S3 bucket. Those logs should also be encrypted at rest using the AWS Key Management Service (KMS) with customer-managed Customer Master Keys (CMKs) to mitigate S3 bucket misconfigurations. To enable further processing and, for example, setting up security alerts, they should be integrated into CloudWatch Logs. One example of a security alert is that all unauthorized API calls should be monitored and generate alerts to detect malicious activity early. Such alerts should also be set up for failed console or MFA login attempts and for configuration changes in IAM, S3, or security groups (VPC).
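The “unauthorized API calls” alert mentioned above boils down to a CloudWatch Logs metric filter on the CloudTrail log group. A sketch of the filter definition follows; the log group name is a placeholder assumption, while the filter pattern is the commonly used expression matching AccessDenied and UnauthorizedOperation error codes in CloudTrail events.

```python
# Metric filter flagging unauthorized API calls in CloudTrail events.
# The log group name is a placeholder; the pattern matches the
# errorCode field CloudTrail writes on denied calls.
unauthorized_api_filter = {
    "logGroupName": "CloudTrail/DefaultLogGroup",  # assumed log group name
    "filterName": "UnauthorizedAPICalls",
    "filterPattern": '{ ($.errorCode = "*UnauthorizedOperation") || '
                     '($.errorCode = "AccessDenied*") }',
    "metricTransformations": [{
        "metricName": "UnauthorizedAPICalls",
        "metricNamespace": "CloudTrailMetrics",
        "metricValue": "1",  # each matching event counts as 1
    }],
}

# With credentials configured, this could be applied via:
# import boto3
# boto3.client("logs").put_metric_filter(**unauthorized_api_filter)
# ...and a CloudWatch alarm on the resulting metric then drives the
# actual notification (e.g. to an SNS topic).
```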

To enable additional auditing of configuration changes in AWS services, AWS Config should be enabled in all regions. This service captures a history of changes (for supported services) which enables security analysis, resource change tracking, and compliance auditing.
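Per region, enabling AWS Config comes down to a configuration recorder that records all supported resource types. The sketch below shows the recorder definition; the role ARN and account ID are placeholders of ours.

```python
# AWS Config recorder covering all supported resource types, including
# global ones such as IAM. Role ARN / account ID are placeholders.
config_recorder = {
    "name": "default",
    "roleARN": "arn:aws:iam::123456789012:role/aws-config-role",  # placeholder
    "recordingGroup": {
        "allSupported": True,                # record every supported type
        "includeGlobalResourceTypes": True,  # e.g. IAM, which is global
    },
}

# Repeated per region, with credentials configured:
# import boto3
# config = boto3.client("config")
# config.put_configuration_recorder(ConfigurationRecorder=config_recorder)
# config.start_configuration_recorder(ConfigurationRecorderName="default")
```

Remember that global services such as IAM only need includeGlobalResourceTypes in one region, otherwise every region records duplicate IAM change items.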

Virtual Private Cloud (VPC) / Networking

In general, all network setups should follow the least-privilege principle. Only allowed traffic should be whitelisted using security groups; other traffic should be denied by default. It is important that no security group exists allowing incoming traffic from an internet gateway to management interfaces like, for example, SSH (port 22) or VNC (port 5900), or to common database ports like MSSQL Server (port 1433) or MySQL (port 3306). This reduces the attack surface significantly. It is also recommended that administrative traffic to management interfaces or the web UI is limited to company IP ranges only.
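This check is easy to automate against the rule structures that EC2’s DescribeSecurityGroups returns. The helper below is a local audit sketch, not an official tool; the port set mirrors the examples from the paragraph above.

```python
# Ports from the paragraph above that should never be world-reachable.
RISKY_PORTS = {22, 5900, 1433, 3306}

def risky_rules(ip_permissions):
    """Return ingress rules (in the IpPermissions shape used by EC2's
    DescribeSecurityGroups) that expose a risky port to 0.0.0.0/0."""
    findings = []
    for rule in ip_permissions:
        world_open = any(r.get("CidrIp") == "0.0.0.0/0"
                         for r in rule.get("IpRanges", []))
        if not world_open:
            continue
        if rule.get("IpProtocol") == "-1":  # "-1" = all protocols/ports
            findings.append(rule)
            continue
        ports = set(range(rule.get("FromPort", 0),
                          rule.get("ToPort", -1) + 1))
        if ports & RISKY_PORTS:
            findings.append(rule)
    return findings
```

Fed with the IpPermissions lists of all security groups in an account (e.g. collected via boto3), this flags exactly the world-open management and database ports discussed above.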

Additionally, VPC Flow Logs can be enabled for VPCs where necessary to capture information about the IP traffic to and from network interfaces in the VPC. This can, for example, be used to detect anomalous traffic within the VPC.
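Enabling them per VPC is a single API call; the sketch below shows the parameters as they could be passed to EC2’s CreateFlowLogs. The VPC ID, log group name, and role ARN are placeholder assumptions.

```python
# Flow log parameters for one VPC; IDs and ARNs are placeholders.
flow_log_params = {
    "ResourceIds": ["vpc-0123456789abcdef0"],  # placeholder VPC ID
    "ResourceType": "VPC",
    "TrafficType": "ALL",  # capture accepted AND rejected traffic
    "LogGroupName": "vpc-flow-logs",
    "DeliverLogsPermissionArn":
        "arn:aws:iam::123456789012:role/flow-logs-role",  # placeholder
}

# With credentials configured:
# import boto3
# boto3.client("ec2").create_flow_logs(**flow_log_params)
```

"ALL" rather than "REJECT" is the interesting choice for anomaly detection: rejected packets show scanning attempts, but accepted traffic is what reveals a compromise in progress.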

S3 Buckets

S3 buckets in general should be encrypted using Server-Side Encryption (SSE), enforced via bucket policies. Also, the configuration of each S3 bucket should be reviewed regularly, and granting public read access should be forbidden.
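Enforcing SSE via a bucket policy typically means denying any PutObject request that arrives without the server-side-encryption header. A sketch (bucket name “example-data” is a placeholder; AES256 stands in for whichever algorithm, e.g. aws:kms, your setup mandates):

```python
import json

# Bucket policy rejecting unencrypted uploads: any PutObject without the
# expected x-amz-server-side-encryption header is denied.
# "example-data" and "AES256" are illustrative choices.
sse_bucket_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyUnencryptedUploads",
        "Effect": "Deny",
        "Principal": "*",
        "Action": "s3:PutObject",
        "Resource": "arn:aws:s3:::example-data/*",
        "Condition": {
            "StringNotEquals": {
                "s3:x-amz-server-side-encryption": "AES256"
            }
        },
    }],
}

print(json.dumps(sse_bucket_policy, indent=2))
```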

Conclusion

As seen in the examples above, setting up and auditing an AWS environment can be very complex and needs to be well documented. Depending on the criticality of the data processed in the cloud, processes need to be established to verify the overarching security objectives of confidentiality, integrity, and availability (CIA). Our two-day TROOPERS workshop “Jump-Starting Public Cloud Security” covers all relevant components which need to be audited in a public cloud environment, not limited to AWS. We hope to see you there!

Best regards,

Christoph Klaassen & Simon Lipke

[1] https://www.scmagazine.com/national-credit-federation-unsecured-aws-s3-bucket-leaks-credit-personal-data/article/710743/
[2] https://gizmodo.com/top-defense-contractor-left-sensitive-pentagon-files-on-1795669632
[3] https://www.bleepingcomputer.com/news/security/data-of-14-million-verizon-customers-exposed-in-server-snafu/
[4] https://gizmodo.com/thousands-of-job-applicants-citing-top-secret-us-govern-1798733354
[5] https://mackeepersecurity.com/post/auto-tracking-company-leaks-hundreds-of-thousands-of-records-online
[6] https://www.theregister.co.uk/2015/01/06/dev_blunder_shows_github_crawling_with_keyslurping_bots/
[7] https://securosis.com/blog/my-500-cloud-security-screwup
[8] https://www.theinquirer.net/inquirer/news/3027077/hackers-hijack-teslas-unsecured-aws-account-to-mine-cryptocurrency