CloudTrail Data Events

In today’s post, I will talk about a hacking investigation I recently took part in. We will look into what went wrong, what the attackers did, and how we can improve detection and prevention to manage such incidents better.

Investigation

After a monitoring system alerted us about suspicious activity on an AWS account, a colleague did a quick first analysis and we continued investigating the issue.

This is the time for quick action: Finding out if this is a real attack, cutting off access paths, investigating origins, lateral movement, and the impact on data and systems - and working out how to prevent a situation like this in the future.

First, listing the CloudTrail events is important to see if you have activity from one or even multiple attackers. Their IPs also indicate proficiency and threat level - but not their location. Whether IPs can be used to attribute attacks is a common discussion in the IT security community. But with the prevalence of hacked boxes being used as stepping stones, or cloaking via VPN/Tor, this seems to be a more or less dead topic anyway.
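
If you prefer scripting over clicking through the console, the 90-day CloudTrail event history can be walked with boto3. A minimal sketch - the time window is just an example:

    import boto3
    from datetime import datetime, timedelta

    cloudtrail = boto3.client("cloudtrail")

    # CloudTrail event history keeps the last 90 days of management events
    paginator = cloudtrail.get_paginator("lookup_events")
    start = datetime.utcnow() - timedelta(days=7)

    for page in paginator.paginate(StartTime=start):
        for event in page["Events"]:
            print(event["EventTime"], event.get("Username", "-"), event["EventName"])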

Our case was no exception: the offending IP turned out to belong to one of the popular VPN providers. This at least tells us that somebody knew what they were doing, instead of advertising a Vodafone/Verizon home connection.

Listing the CloudTrail actions performed by this IP showed scans for additional secrets, probably via scripting. In addition to SSM Parameter Store and Secrets Manager, there were also checks of CloudFormation parameters, EC2 user data, and launch templates. This helped narrow down the leaked data, telling the customer which API keys to rotate and which SaaS offerings might show signs of the compromise as well.
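
The LookupEvents API cannot filter by source IP on the server side, but every record carries the full event as JSON, so you can filter client-side. A sketch along these lines - the IP and the list of secret-reading calls are illustrative placeholders, not the real incident data:

    import json
    from datetime import datetime, timedelta

    import boto3

    cloudtrail = boto3.client("cloudtrail")
    SUSPECT_IP = "198.51.100.23"  # placeholder for the offending VPN address

    # API calls which read out typical secret locations
    SECRET_READS = {"GetParameter", "GetParametersByPath", "GetSecretValue",
                    "DescribeStacks", "DescribeInstanceAttribute",
                    "DescribeLaunchTemplateVersions"}

    paginator = cloudtrail.get_paginator("lookup_events")
    for page in paginator.paginate(StartTime=datetime.utcnow() - timedelta(days=7)):
        for event in page["Events"]:
            detail = json.loads(event["CloudTrailEvent"])  # full record as JSON
            if detail.get("sourceIPAddress") == SUSPECT_IP:
                flag = "!" if event["EventName"] in SECRET_READS else " "
                print(flag, event["EventTime"], event["EventName"])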

Next was checking for possible privilege escalation, lateral movement, and persistence in the system. Checking IAM roles (e.g. instance profiles) gives an insight into the ways the attacker could move inside the account (or even across accounts, if you have cross-account roles). Also, inspecting VPC Flow Logs helps greatly in finding unusual access within your VPC.
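
As a starting point, you can enumerate the instance profiles and the policies attached to their roles - this is exactly the privilege set an attacker on a compromised instance inherits. A sketch (pagination and inline policies omitted for brevity):

    import boto3

    iam = boto3.client("iam")

    # Show which managed policies each instance profile role carries
    for profile in iam.list_instance_profiles()["InstanceProfiles"]:
        for role in profile["Roles"]:
            attached = iam.list_attached_role_policies(RoleName=role["RoleName"])
            policies = [p["PolicyName"] for p in attached["AttachedPolicies"]]
            print(profile["InstanceProfileName"], role["RoleName"], policies)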

After establishing that privilege escalation was highly unlikely - we advised a complete reinstallation/restore regardless - we tried to figure out how the attack originally started.

It turned out that an application in the account was vulnerable to remote code injection (RCI) and had access credentials hardcoded in its application configuration files, which made them way too easy to exfiltrate. A surprise awaited us when checking the load balancer access logs: there was exactly one request from the VPN address to the backend. Now, it's pretty unlikely to be this lucky. Just accessing a system once and immediately finding the jackpot?

Using the powers of Amazon Athena, we started correlating the log files and soon found a cluster of high-speed accesses via multiple Tor addresses. While Tor traffic to this application was not unusual in any way, the logs showed a clear peak where traffic surged over a short timespan, with requests arriving within seconds of each other.
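
To give an idea of such a correlation: bucketing requests per client IP and minute makes scripted bursts stand out immediately. The sketch below assumes an Athena table named alb_logs, created as described in the AWS documentation on querying ALB access logs, plus a made-up results bucket:

    import boto3

    athena = boto3.client("athena")

    # Bucket requests per client IP and minute to make scan bursts visible
    QUERY = """
    SELECT client_ip,
           date_trunc('minute', from_iso8601_timestamp(time)) AS minute,
           count(*) AS requests
    FROM alb_logs
    GROUP BY 1, 2
    HAVING count(*) > 50
    ORDER BY requests DESC
    """

    athena.start_query_execution(
        QueryString=QUERY,
        QueryExecutionContext={"Database": "default"},
        ResultConfiguration={"OutputLocation": "s3://my-athena-results/"},
    )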

The timeline of the attack looks as follows:

  • sporadic Tor accesses to the application to check for good vectors
  • scripted scan of different injection methods over a short timespan
  • after getting a positive match, traffic dropped off
  • the attacker likely crafted a payload to extract as much info as possible
  • then switched to VPN for lower latency during manual actions
  • sending the prepared payload, extracting access keys
  • scans at the AWS API level for more secrets and available persistence methods

Surprisingly, the attack stopped abruptly: even though the attacker already had a good foothold, they never returned or persisted.

Of course, all access keys and API keys were quickly rotated, cross-account roles were checked, and systems were patched or rebuilt.
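
The rotation step itself is quickly scripted. A minimal sketch, assuming a hypothetical IAM user name - deactivating instead of deleting keeps the old key ID visible for the rest of the investigation:

    import boto3

    iam = boto3.client("iam")
    USER = "application-user"  # hypothetical compromised IAM user

    # Deactivate all existing keys first, then issue a fresh one
    for key in iam.list_access_keys(UserName=USER)["AccessKeyMetadata"]:
        iam.update_access_key(UserName=USER,
                              AccessKeyId=key["AccessKeyId"],
                              Status="Inactive")

    new_key = iam.create_access_key(UserName=USER)["AccessKey"]
    print("New access key ID:", new_key["AccessKeyId"])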

Summary

This attack showcased two very common problems: Unpatched applications and hardcoded user credentials.

But it also touched upon something that many AWS accounts suffer from during security incidents: Limited traceability. In this case, we established that the attacker had access to all S3 buckets in the account, but we could not check what had been downloaded from them. And in accounts that might contain customer data, invoices, or personal information, this is bad. EU-GDPR, anyone?

So why was it not possible to check for this?

CloudTrail Data Events

Everyone working on AWS should know CloudTrail - your friendly service which handles API-level logging of actions. If you have a problem, you can see who changed a certain resource, who added a user, or who stopped an instance.

But fewer people know that CloudTrail only monitors the management plane of services. After all, having every access to DynamoDB or S3 logged would immensely inflate log volume and provide little benefit in most cases. But in a security incident, you want to do a proper damage assessment and thoroughly check which data has been compromised.

Activating logs for Data Events in CloudTrail solves this problem. It is not active by default though, so you either have to remember it for each account or include it in your automated account setup. As it generates additional costs as well, you have to consider carefully if that big data system of yours really needs this level of logging at all.
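
Activation is a single call per trail. A sketch with boto3 - the trail name is a placeholder, and the arn:aws:s3 prefix selects all current and future buckets:

    import boto3

    cloudtrail = boto3.client("cloudtrail")

    # Log read and write data events for all S3 buckets on an existing trail
    cloudtrail.put_event_selectors(
        TrailName="my-trail",  # placeholder for your trail name
        EventSelectors=[{
            "ReadWriteType": "All",
            "IncludeManagementEvents": True,
            "DataResources": [{
                "Type": "AWS::S3::Object",
                "Values": ["arn:aws:s3"],  # narrow down with bucket ARNs if needed
            }],
        }],
    )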

My advice would be to activate at least S3 data events for your production and security-sensitive accounts. As testing and development accounts should not contain user data, they probably are not the right place to activate it (you use pseudonymisation techniques like tokenization, right?).

I would put those extensive logs into an S3 bucket/prefix with a shorter retention span to reduce cost (ideally in a separate cybersecurity/logging account). You can then use them for incident analysis or ingest them into a SIEM system for low-latency alerting on unusual access patterns.
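
The shorter retention span is just an S3 lifecycle rule. A sketch with assumed bucket and prefix names:

    import boto3

    s3 = boto3.client("s3")

    # Expire the high-volume data event logs after 90 days
    s3.put_bucket_lifecycle_configuration(
        Bucket="central-security-logs",  # assumed logging bucket
        LifecycleConfiguration={
            "Rules": [{
                "ID": "expire-data-events",
                "Filter": {"Prefix": "cloudtrail-data-events/"},  # assumed prefix
                "Status": "Enabled",
                "Expiration": {"Days": 90},
            }]
        },
    )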

Built into AWS, GuardDuty offers S3 Protection for this purpose. In accounts where GuardDuty was enabled before July 31st, 2020, the feature is not active by default - but in newer accounts it is. GuardDuty brings a list of S3-related findings, among them anomalous access patterns.
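
If your GuardDuty setup predates that date, enabling the feature is a single call. A sketch, assuming the usual one detector per account and region:

    import boto3

    guardduty = boto3.client("guardduty")

    # Enable S3 Protection on the existing detector in this account/region
    detector_id = guardduty.list_detectors()["DetectorIds"][0]
    guardduty.update_detector(
        DetectorId=detector_id,
        DataSources={"S3Logs": {"Enable": True}},
    )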

Countermeasures

Implement scanning for vulnerable applications in your environment - AWS offers tools like ECR image scanning or Amazon Inspector. If you want more options, you can also look at third-party solutions. And, if possible, add some means of centrally patching these problems early in the process.
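
For ECR, vulnerability scanning on push can be switched on per repository. A sketch that retrofits all existing repositories:

    import boto3

    ecr = boto3.client("ecr")

    # Turn on vulnerability scanning on push for every existing repository
    for page in ecr.get_paginator("describe_repositories").paginate():
        for repo in page["repositories"]:
            ecr.put_image_scanning_configuration(
                repositoryName=repo["repositoryName"],
                imageScanningConfiguration={"scanOnPush": True},
            )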

Some security tools will also notify you of AWS access keys on your systems. You can even catch those during development with tools like Yelp's detect-secrets included in your CI/CD pipelines.

Strongly discourage IAM users in your accounts. Use solutions like AWS SSO with mandatory MFA for your interactive users, combined with cross-account roles into the target accounts. If you cannot avoid IAM users for programmatic access, check if the programs can use EC2/ECS metadata-based credentials (instance profiles or task roles) instead.
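
A quick audit of which IAM users still carry active access keys gives you a migration worklist. A minimal sketch:

    import boto3

    iam = boto3.client("iam")

    # List IAM users with active access keys - candidates for migration
    # to SSO logins or role-based credentials
    for page in iam.get_paginator("list_users").paginate():
        for user in page["Users"]:
            keys = iam.list_access_keys(UserName=user["UserName"])["AccessKeyMetadata"]
            active = [k["AccessKeyId"] for k in keys if k["Status"] == "Active"]
            if active:
                print(user["UserName"], active)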

To get alerts on suspicious activity, use GuardDuty (with S3 Protection) or an IDS/SIEM system. You should also wire up your monitoring system to notify you of suspicious actions such as console logins.
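
Console logins, for example, can be forwarded via an EventBridge rule. A sketch with an assumed SNS topic ARN; note that sign-in events for the global console are delivered in us-east-1:

    import json

    import boto3

    events = boto3.client("events")

    # Match console sign-in events recorded by CloudTrail
    events.put_rule(
        Name="notify-console-logins",
        EventPattern=json.dumps({
            "detail-type": ["AWS Console Sign In via CloudTrail"],
        }),
    )
    events.put_targets(
        Rule="notify-console-logins",
        Targets=[{
            "Id": "sns",
            "Arn": "arn:aws:sns:us-east-1:123456789012:security-alerts",  # assumed topic
        }],
    )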

For forensics, set up centralized logging into an account which serves purely this purpose. Set up CloudTrail with Data Event logging, ALB access logs, and VPC Flow Logs. It is okay to store high-volume logs in a bucket with a shorter retention span, as long as this period is longer than the expected time to discover an attack. Alternatively, wire everything up with a SIEM or SOC-as-a-service offering.
