ERROR

ArcGIS Enterprise on Kubernetes pods are unable to access instance metadata in EKS

Last Published: October 29, 2025

Error Message

When running ArcGIS Enterprise on Kubernetes on Amazon Elastic Kubernetes Service (EKS) with node group-based IAM role assignment, errors can occur when pods fail to access instance metadata. These errors occur when a new organization is created, a Kubernetes version is upgraded, or workloads are migrated to a new node group. For example:

  1. Users are unable to deploy or undeploy load balancers using the ArcGIS Enterprise on Kubernetes deployment script. The AWS Load Balancer Controller pods are unable to start, and users may see the following error message in the AWS Load Balancer Controller pod logs:
Error:   
{"level":"error", "ts":"2025-10-07T14:30:45Z","logger":"setup","msg":"unable to initialize AWS cloud","error":"failed to get VPC ID: failed to fetch VPC ID from instance metadata: error in fetching vpc id through ec2 metadata: get mac metadata: operating error ec2imds: GetMetadata, canceled, context deadline exceeded"}
  2. Users are unable to create a new organization that uses S3 as the object store when attempting to configure IAM role authentication from the worker nodes to the object store. Users may see the following error message:
Error:   
Please check the connection information for provider 'AWS' and service 'AWS S3'. Possible causes may include invalid endpoint, credential, region, regionEndpointUrl, bucket or container, and permissions. Caused by: ArcGIS Enterprise expected to authenticate with the cloud provider using credentials provided by the environment (typically IAM role authentication). The most common cause for this error is that the instances and VMs in the cluster have not been granted an IAM role.

Cause

AWS has deprecated Amazon Linux 2 and replaced it with two new options: Amazon Linux 2023 and Bottlerocket. By default, both Amazon Linux 2023 and Bottlerocket require version 2 of the Instance Metadata Service (IMDS). IMDS allows applications that run on EC2 instances to introspect data on the underlying EC2 instance, including the node's IAM credentials. With IMDSv2, accessing the instance metadata requires a session token, which provides an additional layer of security over IMDSv1. However, on EKS-managed node groups that use Amazon Linux 2023 or Bottlerocket, the default network hop count limit for the response to the PUT request for the session token is set to 1. Because pod traffic to IMDS traverses an additional network hop through the node's container network, a hop limit of 1 causes the token response to be dropped before it reaches the pod. This setting prevents various ArcGIS Enterprise on Kubernetes workloads from introspecting the node's IAM credentials, in turn causing those workloads to fail when accessing AWS services that require IAM authentication.
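
To confirm whether the hop limit is the cause, the metadata options of a worker node's EC2 instance can be inspected with the AWS CLI; the instance ID below is a placeholder. If HttpTokens is "required" and HttpPutResponseHopLimit is 1, pods on that node cannot complete the IMDSv2 token exchange.

    aws ec2 describe-instances --instance-ids i-0123456789abcdef0 --query "Reservations[].Instances[].MetadataOptions"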

Solution or Workaround

When deploying ArcGIS Enterprise workloads onto worker nodes that use EKS-optimized Amazon Linux 2023 or Bottlerocket nodes, use a launch template that explicitly sets the HttpPutResponseHopLimit to 2. Additionally, ensure the launch template includes root device volume settings that allocate adequate disk space to the root volumes of the new worker nodes. Consult the ArcGIS Enterprise on Kubernetes system requirements for the recommended worker node disk size. Use this launch template when creating the new node groups.
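
As a sketch, a launch template that meets both requirements can be created with the AWS CLI as shown below; the template name, root device name, and volume size are placeholders and should be adjusted to match the cluster and the ArcGIS Enterprise on Kubernetes system requirements.

    aws ec2 create-launch-template \
      --launch-template-name arcgis-eks-al2023 \
      --launch-template-data '{
        "MetadataOptions": { "HttpTokens": "required", "HttpPutResponseHopLimit": 2 },
        "BlockDeviceMappings": [
          { "DeviceName": "/dev/xvda", "Ebs": { "VolumeSize": 100, "VolumeType": "gp3" } }
        ]
      }'

On an existing node, the same metadata setting can also be applied directly with aws ec2 modify-instance-metadata-options --instance-id <instanceId> --http-put-response-hop-limit 2 --http-tokens required. This is useful for verification, but the launch template is needed so that nodes created later by the node group retain the setting.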

If migrating from Amazon Linux 2 to a new node group based on Amazon Linux 2023 for ArcGIS Enterprise on Kubernetes, the cluster administrator can follow these high-level steps to verify that the new node group is functioning as expected.

  1. Create a backup of the ArcGIS Enterprise organization.
  2. Create a new node group based on Amazon Linux 2023 nodes, using the launch template approach described above, that matches the compute and memory capacity of the existing node group.
  3. Cordon and drain all nodes in the Amazon Linux 2 node group, one at a time, and allow all workloads to reach a ready state on the new nodes (see the example commands after this list).
    1. kubectl cordon <nodeName>
    2. kubectl drain <nodeName>
  4. If all workloads do not reach a ready state, attempt to log in to the organization and review the log messages.
  5. To roll back the changes:
    1. Cordon all Amazon Linux 2023 based nodes in the newly introduced node group
    2. Uncordon all Amazon Linux 2 based nodes
    3. Drain all Amazon Linux 2023 based nodes
    4. Scale the Amazon Linux 2023 based node group to zero (to save costs until a subsequent attempt)
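
For reference, the cordon, drain, and verification steps above typically map to commands like the following; the node name and namespace are placeholders, and the drain flags shown are the common ones for EKS worker nodes running DaemonSet pods and emptyDir volumes.

    kubectl cordon ip-10-0-1-23.ec2.internal
    kubectl drain ip-10-0-1-23.ec2.internal --ignore-daemonsets --delete-emptydir-data
    kubectl get pods -n arcgis -o wide

If the node group is managed with eksctl, it can be scaled to zero for a later attempt with eksctl scale nodegroup --cluster <clusterName> --name <nodegroupName> --nodes 0 --nodes-min 0; node groups created another way can be scaled from the EKS console or with aws eks update-nodegroup-config.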

If unable to migrate successfully to the new Amazon Linux 2023 instances, reach out to AWS Support or a trusted partner or consultant for additional guidance.

Article ID: 000038610

Software:
  • Enterprise
