
whoAMI Attack Exploits AWS AMI Naming Flaw for RCE


By WIRE TOR - Ethical Hacking Services · Published 11 months ago · 3 min read

Cybersecurity researchers have uncovered a new type of name confusion attack called whoAMI, which enables threat actors to execute remote code within Amazon Web Services (AWS) accounts by exploiting vulnerabilities in the way Amazon Machine Images (AMIs) are retrieved.

According to a report by Datadog Security Labs researcher Seth Art, this attack could potentially compromise thousands of AWS accounts. The issue arises from software misconfigurations in both private and open-source repositories, making it a significant supply chain risk.

Understanding the whoAMI Attack

At its core, the whoAMI attack is a subset of supply chain attacks, where an attacker publishes a malicious AMI under a name that matches a legitimate AMI expected by the victim’s software. This misconfiguration occurs when software omits the "owners" attribute while searching for an AMI through the ec2:DescribeImages API.
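The difference between the vulnerable and the safe query shape can be sketched in Python. The parameter names ("Filters", "Owners") follow boto3's EC2 describe_images API; the helper function itself is illustrative, not part of any library, and the Ubuntu name pattern is just an example:

```python
# Sketch of the two query shapes for retrieving an AMI ID. Parameter
# names follow boto3's ec2.describe_images(); the helper is illustrative.

def describe_images_params(name_pattern, owners=None):
    """Build keyword arguments for ec2_client.describe_images(**params)."""
    params = {"Filters": [{"Name": "name", "Values": [name_pattern]}]}
    if owners:
        # Pinning results to known owner account IDs (or aliases such
        # as "amazon") is what defeats the whoAMI name confusion.
        params["Owners"] = owners
    return params

# Vulnerable: any AWS account can publish a public AMI with a matching name.
vulnerable = describe_images_params("ubuntu/images/hvm-ssd/ubuntu-jammy-*")

# Safe: results are restricted to a trusted owner account
# (099720109477 is Canonical's well-known AWS account ID).
safe = describe_images_params(
    "ubuntu/images/hvm-ssd/ubuntu-jammy-*", owners=["099720109477"]
)
```

Only the second call constrains who may have published the image; the first will happily match a public AMI uploaded by any account.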

Conditions for Exploitation

For the attack to succeed, the following conditions must be met when an AWS user retrieves an AMI ID via the API:

  • The request uses the name filter to identify an AMI.
  • The request fails to specify the owner, owner-alias, or owner-id parameters.
  • The request fetches the most recently created image matching the criteria (most_recent=true).

When these conditions align, an attacker can create a doppelgänger AMI with the same name as a legitimate one. If the victim’s infrastructure automatically selects the most recent AMI from the results, it could instantiate an EC2 instance using the attacker’s backdoored AMI instead of the intended one. This grants the attacker remote code execution (RCE) capabilities on the instance, opening the door for post-exploitation actions, including data theft and persistent access.
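The selection step above can be simulated with mock DescribeImages-style results; the image IDs, account IDs, and dates below are hypothetical:

```python
# Mock results shaped like ec2:DescribeImages output. The attacker's
# doppelgänger AMI shares the name but has a newer CreationDate, so it
# wins any "most recent" selection. (IDs and dates are hypothetical.)
images = [
    {"ImageId": "ami-legit", "Name": "ubuntu-jammy-2024",
     "OwnerId": "099720109477", "CreationDate": "2024-06-01T00:00:00.000Z"},
    {"ImageId": "ami-evil", "Name": "ubuntu-jammy-2024",
     "OwnerId": "111111111111", "CreationDate": "2024-09-01T00:00:00.000Z"},
]

def pick_most_recent(images, owners=None):
    """Mimic most_recent=true: the newest CreationDate wins.

    ISO 8601 timestamps sort correctly as plain strings."""
    if owners:
        images = [img for img in images if img["OwnerId"] in owners]
    return max(images, key=lambda img: img["CreationDate"])

# Without an owner filter, the attacker's newer image is selected;
# with one, only the legitimate publisher's image is considered.
print(pick_most_recent(images)["ImageId"])                           # ami-evil
print(pick_most_recent(images, owners=["099720109477"])["ImageId"])  # ami-legit
```

Because the attacker can always publish a fresher image, "take the most recent match" is an adversary-controlled tiebreaker unless the owner is pinned.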

Comparing whoAMI to Dependency Confusion

Seth Art describes the whoAMI attack as similar to dependency confusion attacks, except instead of software dependencies (e.g., pip or npm packages), the malicious resource in this case is an Amazon Machine Image (AMI). This allows attackers to exploit automated infrastructure provisioning processes.

Scope and Impact

Datadog Security Labs found that approximately 1% of monitored organizations were affected by the whoAMI attack. Publicly available examples of vulnerable code were identified in multiple programming languages and infrastructure-as-code (IaC) tools, including:

  • Python
  • Go
  • Java
  • Terraform
  • Pulumi
  • Bash shell scripts

This wide-ranging exposure underscores the attack’s potential to impact cloud infrastructure at scale.

Amazon’s Response and Mitigation Efforts

Following responsible disclosure on September 16, 2024, Amazon responded within three days, confirming that all AWS services were operating as designed. After an internal review, AWS stated it found no evidence that the attack technique had been exploited in the wild.

"Based on extensive log analysis and monitoring, our investigation confirmed that the technique described in this research has only been executed by the authorized researchers themselves, with no evidence of usage by any other parties," AWS said in its statement.

However, AWS acknowledged that customers who retrieve AMI IDs via ec2:DescribeImages without specifying the owner value remain at risk. To mitigate this, AWS introduced Allowed AMIs, a new account-wide security setting in December 2024 that allows customers to limit the discovery and usage of AMIs within their AWS accounts.

Industry Reactions and Security Enhancements

Several cloud security vendors and infrastructure-as-code (IaC) providers have taken steps to address this vulnerability:

  • HashiCorp Terraform: Starting with terraform-provider-aws version 5.77.0 (November 2024), Terraform warns users when most_recent=true is used without an owner filter.
  • Upcoming Terraform Changes: This warning will be upgraded to a blocking error in version 6.0.0.
  • Security Best Practices: AWS and security experts recommend that customers review their IAM policies and explicitly define AMI owners in their infrastructure automation scripts.

Mitigation and Best Practices for AWS Users

To protect against the whoAMI attack, AWS customers should take the following steps:

  • Explicitly specify AMI owners when searching for AMIs using the ec2:DescribeImages API.

  • Adopt Allowed AMIs to restrict AMI usage to trusted sources.
  • Monitor cloud infrastructure configurations for insecure automation patterns.
  • Upgrade to terraform-provider-aws version 5.77.0 or later so that insecure AMI lookups are flagged.
  • Use IAM permissions to limit the ability of untrusted users to deploy new AMIs.
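The monitoring step can be partially mechanized. Below is a minimal sketch (the function and its rules are illustrative, not an AWS or Datadog tool) that flags a describe_images parameter set matching the whoAMI preconditions: a name filter with no owner pinned:

```python
def ami_lookup_is_vulnerable(params):
    """Return True if a describe_images call filters by name but does
    not pin an owner, i.e. it matches the whoAMI preconditions.
    "owner-id" and "owner-alias" are real DescribeImages filter names."""
    filters = params.get("Filters", [])
    uses_name = any(f.get("Name") == "name" for f in filters)
    pins_owner = bool(params.get("Owners")) or any(
        f.get("Name") in ("owner-id", "owner-alias") for f in filters
    )
    return uses_name and not pins_owner

# Hypothetical lookups, before and after the fix.
risky = {"Filters": [{"Name": "name", "Values": ["my-base-image-*"]}]}
fixed = {"Filters": [{"Name": "name", "Values": ["my-base-image-*"]}],
         "Owners": ["123456789012"]}

print(ami_lookup_is_vulnerable(risky))  # True
print(ami_lookup_is_vulnerable(fixed))  # False
```

The same rule is essentially what the terraform-provider-aws 5.77.0 warning checks for in Terraform configurations.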

Conclusion

The whoAMI attack highlights the importance of secure infrastructure provisioning in cloud environments. While AWS services function as intended, misconfigurations in how organizations retrieve AMIs create a significant security risk.

With Amazon’s rapid response and the introduction of Allowed AMIs, organizations have new tools to mitigate this threat. However, it remains crucial for AWS users to adopt best practices, review automation scripts, and monitor for insecure cloud configurations to prevent future supply chain attacks.


