AWS Data Transfer Hub | S3 transfer from US to China Bucket failed with error "InvalidAccessKeyId: The AWS Access Key Id you provided does not exist in our records" #167

Closed · rajatstg opened this issue Jan 30, 2025 · 10 comments

@rajatstg commented Jan 30, 2025

Hello developers,
I encountered the following issue in Data Transfer Hub and would appreciate your assistance in resolving it:

Actions performed:

  1. To create the DTH service, used "(Option 1) Launch the CloudFormation stack in AWS Regions", as described at https://docs.aws.amazon.com/solutions/latest/data-transfer-hub/step-1.-option-1-launch-the-stack-in-aws-regions.html.

  2. Using DTH, initiated a copy task for 1.6 TB of data from an S3 bucket in us-west-2 (Oregon) to an S3 bucket in eu-central-1 (Frankfurt), using the secret created in AWS Secrets Manager (Oregon region). The copy started successfully, as shown in the log below, and was stopped manually since it was only a test.

Note: For the secret used, the permissions for the IAM user (with access key/secret key, AKSK) are in place and were also verified with AWS Support.

Timestamp Message
2025-01-27 16:45:28 net.ipv4.tcp_congestion_control = bbr
2025-01-27 16:45:28 2025/01/27 11:15:28 Start running Finder job
2025-01-27 16:45:28 2025/01/27 11:15:28 Get secret Value of XXXX-secret from Secrets Manager
2025-01-27 16:45:28 2025/01/27 11:15:28 next tick is:2025-01-28 02:00:00 +0000 UTC
2025-01-27 16:45:28 2025/01/27 11:15:28 Queue DTH-S3EC2-0012f-S3TransferQueue-W0vzgpO1ZMIq has 0 not visible message(s) and 0 visable message(s)
2025-01-27 16:45:28 2025/01/27 11:15:28 Giant object merging Step Function arn:aws:states:us-west-2:XXXXXXXXXX:stateMachine:DTH-S3EC2-0012f-MultiPart-ControllerSM has 0 not competed task(s)
2025-01-27 16:45:28 2025/01/27 11:15:28 Prefix List File:
2025-01-27 16:45:28 2025/01/27 11:15:28 List common prefixes from / with depths 0
2025-01-27 16:45:28 2025/01/27 11:15:28 prefix:
2025-01-27 16:45:28 2025/01/27 11:15:28 Comparing within prefix /
2025-01-27 16:45:29 2025/01/27 11:15:28 Scanning in destination prefix /
2025-01-27 16:45:29 2025/01/27 11:15:29 Totally 2 objects in destination prefix /
2025-01-27 16:45:29 2025/01/27 11:15:29 Start comparing and sending...
2025-01-27 16:45:29 2025/01/27 11:15:29 Completed in prefix /, found 1 batches (3 objects) in total
2025-01-27 16:45:29 2025/01/27 11:15:29 Finder Job Completed in 532.363661ms, found 3 objects in total
2025-01-27 16:45:33 Exit Finder proccess, trying to set auto scaling group desiredCapacity to 0 to terminate instance after 60 seconds...

  3. Used DTH to copy the same 1.6 TB of data from the S3 bucket in us-west-2 (Oregon) to an S3 bucket in cn-north-1 (Beijing, China), using the same secret as in the previous step. The copy failed with the following error message.

Note: For the secret used, the permissions for the IAM user with AKSK are in place and were also verified with AWS Support, just in case.

Timestamp Message
2025-01-27 16:30:10 net.ipv4.tcp_congestion_control = bbr
2025-01-27 16:30:10 2025/01/27 11:00:10 Start running Finder job
2025-01-27 16:30:10 2025/01/27 11:00:10 Get secret Value of XXXXXXX-secret from Secrets Manager
2025-01-27 16:30:10 2025/01/27 11:00:10 next tick is:2025-01-28 02:00:00 +0000 UTC
2025-01-27 16:30:10 2025/01/27 11:00:10 Queue DTH-S3EC2-52de2-S3TransferQueue-FKipHAxXurf3 has 0 not visible message(s) and 0 visable message(s)
2025-01-27 16:30:10 2025/01/27 11:00:10 Giant object merging Step Function arn:aws:states:us-west-2:XXXXXXXXXXXX:stateMachine:DTH-S3EC2-52de2-MultiPart-ControllerSM has 0 not competed task(s)
2025-01-27 16:30:10 2025/01/27 11:00:10 Prefix List File:
2025-01-27 16:30:10 2025/01/27 11:00:10 List common prefixes from / with depths 0
2025-01-27 16:30:10 2025/01/27 11:00:10 prefix:
2025-01-27 16:30:10 2025/01/27 11:00:10 Comparing within prefix /
2025-01-27 16:30:11 2025/01/27 11:00:10 Scanning in destination prefix /
2025-01-27 16:30:11 2025/01/27 11:00:11 Unable to list objects in / - operation error S3: ListObjectsV2, https response error StatusCode: 403, RequestID: 2FJ2PA9FK47J5FQ9, HostID: mOJ5VG1p8GlCpjqz86lofaHIq/1ZrplohROyUm6DFQ6AfBRrBFoz8fixTjYQvDis01TxwEVC6dc=, api error InvalidAccessKeyId: The AWS Access Key Id you provided does not exist in our records.
2025-01-27 16:30:11 2025/01/27 11:00:11 S3> Unable to list object in / - operation error S3: ListObjectsV2, https response error StatusCode: 403, RequestID: 2FJ2PA9FK47J5FQ9, HostID: mOJ5VG1p8GlCpjqz86lofaHIq/1ZrplohROyUm6DFQ6AfBRrBFoz8fixTjYQvDis01TxwEVC6dc=, api error InvalidAccessKeyId: The AWS Access Key Id you provided does not exist in our records.
2025-01-27 16:30:11 2025/01/27 11:00:11 Error listing objects in destination bucket - operation error S3: ListObjectsV2, https response error StatusCode: 403, RequestID: 2FJ2PA9FK47J5FQ9, HostID: mOJ5VG1p8GlCpjqz86lofaHIq/1ZrplohROyUm6DFQ6AfBRrBFoz8fixTjYQvDis01TxwEVC6dc=, api error InvalidAccessKeyId: The AWS Access Key Id you provided does not exist in our records.
2025-01-27 16:30:15 Exit Finder proccess, trying to set auto scaling group desiredCapacity to 0 to terminate instance after 60 seconds...

Troubleshooting steps performed:

  1. Verified there is no bucket policy blocking access.

  2. Relevant IAM permissions are in place.

  3. Verified the secret and its format. Created the secret as key/value pairs per the format below, as given in the documentation (a CLI sketch for verifying the stored secret follows the JSON):

{
"access_key_id": "",
"secret_access_key": ""
}
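
For reference, a minimal sketch of how the stored secret could be checked against this format with the AWS CLI (the secret name here is a placeholder):

# Hypothetical check: fetch the secret value and confirm it contains the documented keys
aws secretsmanager get-secret-value \
    --secret-id XXXX-secret --region us-west-2 \
    --query SecretString --output text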

  4. Ran traceroute from a test EC2 instance launched in the VPC created by CloudFormation for the DTH service.

traceroute -T -p 443 s3.eu-central-1.amazonaws.com
traceroute -T -p 443 s3.cn-north-1.amazonaws.com.cn

The output of the above traceroutes is given below:

OREGON TO FRANKFURT:
[root@ip-10-0-0-xx ec2-user]# traceroute -T -p 443 s3.eu-central-1.amazonaws.com
traceroute to s3.eu-central-1.amazonaws.com (52.219.72.183), 30 hops max, 60 byte packets
1 * * *
2 240.4.228.4 (240.4.228.4) 18.147 ms 240.0.64.7 (240.0.64.7) 18.115 ms 240.0.64.6 (240.0.64.6) 18.101 ms
3 240.0.88.7 (240.0.88.7) 148.769 ms 240.0.88.5 (240.0.88.5) 150.517 ms 242.13.183.193 (242.13.183.193) 18.032 ms
4 240.0.88.18 (240.0.88.18) 148.386 ms 240.0.88.7 (240.0.88.7) 149.207 ms 240.0.88.31 (240.0.88.31) 146.910 ms
5 240.0.88.17 (240.0.88.17) 148.083 ms 240.0.88.27 (240.0.88.27) 145.504 ms 240.0.88.22 (240.0.88.22) 150.364 ms
6 240.0.88.23 (240.0.88.23) 150.396 ms 241.0.3.79 (241.0.3.79) 148.933 ms 240.0.88.31 (240.0.88.31) 148.894 ms
7 241.0.3.74 (241.0.3.74) 151.317 ms * *
8 * * *
9 * * *
10 s3.eu-central-1.amazonaws.com (52.219.72.183) 149.816 ms * 146.586 ms

OREGON TO BEIJING:
[root@ip-10-0-0-xx ec2-user]# traceroute -T -p 443 s3.cn-north-1.amazonaws.com.cn
traceroute to s3.cn-north-1.amazonaws.com.cn (54.222.52.121), 30 hops max, 60 byte packets
1 100.91.5.139 (100.91.5.139) 28.976 ms 100.91.5.135 (100.91.5.135) 29.288 ms 100.91.5.181 (100.91.5.181) 28.368 ms
2 100.100.4.65 (100.100.4.65) 29.636 ms 100.100.4.7 (100.100.4.7) 29.322 ms 100.100.4.117 (100.100.4.117) 28.295 ms
3 100.100.68.132 (100.100.68.132) 28.012 ms 100.100.92.4 (100.100.92.4) 29.472 ms 100.100.80.68 (100.100.80.68) 29.831 ms
4 100.100.64.137 (100.100.64.137) 29.549 ms 100.100.72.9 (100.100.72.9) 30.474 ms 100.100.76.73 (100.100.76.73) 28.725 ms
5 100.100.8.78 (100.100.8.78) 27.505 ms 100.100.8.68 (100.100.8.68) 30.410 ms 100.100.8.64 (100.100.8.64) 29.165 ms
6 219.158.43.5 (219.158.43.5) 29.734 ms 29.462 ms 29.113 ms
7 219.158.96.25 (219.158.96.25) 227.029 ms 227.004 ms 226.464 ms
8 219.158.10.45 (219.158.10.45) 220.282 ms 220.617 ms 220.629 ms
9 * * *
10 * * *
11 124.65.59.130 (124.65.59.130) 240.200 ms 238.251 ms 124.65.57.238 (124.65.57.238) 223.337 ms
12 bt-204-238.bta.net.cn (202.106.204.238) 237.278 ms 226.482 ms 226.363 ms
13 54.222.0.169 (54.222.0.169) 213.463 ms 54.222.0.167 (54.222.0.167) 202.532 ms 54.222.0.175 (54.222.0.175) 206.671 ms
14 * * 54.222.0.216 (54.222.0.216) 183.765 ms
15 * 54.222.0.129 (54.222.0.129) 184.433 ms *
16 * * *
17 * * *
18 * * *
19 * * *
20 * * *
21 * * *
22 s3.cn-north-1.amazonaws.com.cn (54.222.52.121) 235.953 ms 220.199 ms 221.184 ms

It was observed that the above traceroutes did not take the Direct Connect path, yet both destinations were still reachable.
The VPC created for DTH does not have a Direct Connect interface, so Direct Connect is not in the picture.
Direct Connect is listed as a requirement for Data Transfer Hub to communicate with the destination at https://docs.aws.amazon.com/solutions/latest/data-transfer-hub/solution-overview.html.
However, without Direct Connect the transfer worked for the Frankfurt region but not for the China region.

Hence, I need your assistance to fix this issue, or please let us know if we missed anything. Thank you.

@bassemwanis (Contributor)

Thank you for bringing this issue to our attention. While we investigate the issue, could you please confirm that you have reviewed the details on how to Transfer S3 object via Direct Connect? Additionally, could you please refer to the troubleshooting section of the implementation guide for potential solutions?

@rajatstg (Author) commented Feb 10, 2025

Regarding https://docs.aws.amazon.com/solutions/latest/data-transfer-hub/transfer-s3-object-via-direct-connect.html: of the four options below, options 2 and 3 are not feasible for us. We already tried option 1 without success, as DTH is deployed in the source bucket region, which is us-west-2. We have not tried option 4, as it says "In this scenario, DTH is deployed in the destination side (China) and within a VPC with public access", which is not our case; DTH is deployed in the US Oregon region.

  1. Create an Amazon S3 transfer task
  2. Create an Amazon ECR transfer task
  3. Transfer S3 object from Alibaba Cloud OSS
  4. Transfer S3 object via Direct Connect

The troubleshooting sections have already been referred to and followed, without success. I have worked with the AWS Support teams for S3, IAM, DX, and Solutions. With their confirmation, I have raised this issue.

It would be great if you could connect with me via Chime or Zoom so that I can show you the DTH page and initiate a transfer, both for your understanding and to find out whether I missed anything. If that works, please email me a meeting URL for a remote session during your morning hours, between 9:00 AM and 11:30 AM EST.

@bassemwanis (Contributor)

@rajatstg can you follow these steps and change the DTH deployment to be on the destination side?

@rajatstg (Author)

I followed the steps for deployment in the destination region. After logging in to the Authing console, I tried to create an OIDC user pool, which prompts for a phone number to receive a code via SMS (text message). As my phone number is outside China, it says "Failed to send international SMS, please contact administrator". Please advise, thank you.

@rajatstg (Author)

From https://github.com/awslabs/amazon-s3-data-replication-hub-plugin/blob/main/docs/DEPLOYMENT_EN.md, which has an out-of-date CloudFormation template, I see that it prompts for secrets from both the source account and the destination account during the launch of the stack. However, the stack failed with various issues; I got some help from CloudFormation support but could not progress further.

What I want to convey is that the DTH launched via the latest documentation (https://docs.aws.amazon.com/solutions/latest/data-transfer-hub/step-1.-option-1-launch-the-stack-in-aws-regions.html) does not prompt for destination account secrets; there is no field for that on the DTH task page. Please confirm whether that is the cause or something different.

(Screenshot of the DTH task creation page attached.)

@rajatstg (Author)

@bassemwanis The DTH deployment we have now does not ask for a destination secret. Maybe because of that missing value, it errors out stating an invalid AKSK. You could add the field to the CloudFormation template and provide me a custom template for now as a workaround.

Requesting your assistance as soon as possible, as we need to start moving data using the DTH service. Please treat this as high priority.

@bassemwanis (Contributor)

@rajatstg, you would need to create a user and attach the IAM policy for the source bucket only, as the solution is deployed in the destination account; for reference, see Set up credentials for Amazon S3.

Next, configure the credentials and store them in Secrets Manager; for reference, see Create an Amazon S3 Transfer Task.

If the access_key_id and secret_access_key are for the source bucket, READ access to the bucket is required. If they are for the destination bucket, READ and WRITE access to the bucket is required.
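
For illustration, a minimal sketch of what a READ-only policy for the source bucket could look like (bucket and policy names are placeholders; the exact action list should follow the DTH documentation):

# Hypothetical READ-only IAM policy for the source bucket
cat > dth-source-read-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:ListBucket", "s3:GetObject"],
      "Resource": [
        "arn:aws:s3:::my-source-bucket",
        "arn:aws:s3:::my-source-bucket/*"
      ]
    }
  ]
}
EOF
aws iam create-policy --policy-name dth-source-read \
    --policy-document file://dth-source-read-policy.json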

@rajatstg (Author)

@bassemwanis The IAM account with AKSK in the Oregon region has full privileges to access the source bucket. The secret for the source IAM account is present in Secrets Manager. Now I need to start a DTH task and provide the following:

  1. Source Account (US Oregon):

Source Bucket
Prefix
Region i.e. us-west-2
Is Bucket in this account? YES
Secret, i.e., the source IAM account secret, which has FULL permission ONLY to the source bucket in the Oregon region.

  2. Destination Account (which in this case is China):

Destination Bucket
Prefix
Region
Is Bucket in this account? NO (the destination China bucket is in a different account than the one where DTH was created)

  3. Entered the notification email and set the task to one-time transfer, leaving the rest as default.

With the above values entered, the task fails. Please correct me if I missed anything.

@bassemwanis (Contributor)

@rajatstg could you please follow the configurations below (a CLI sketch of these steps follows the list):

  • In Destination Account:

    • Create IAM policy for Destination bucket (ensure the policy grants both READ and WRITE access).
    • Create a user
    • Attach the IAM policy to the user created in the previous step.
  • In Source Account:

    • Deploy Data Transfer Hub (DTH)
    • Create credentials information in Secrets Manager, and add the user’s AccessKeyID and SecretAccessKey.
    • Create an Amazon S3 transfer task
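
A hedged CLI sketch of these steps (account IDs, user, policy, bucket, and secret names are all placeholders):

# Hypothetical walk-through of the steps above; all names are placeholders.
# In the destination (China) account: create a READ/WRITE policy for the
# destination bucket, create the user, attach the policy, and generate keys.
# The policy document would grant s3:ListBucket/GetObject/PutObject (and the
# multipart-upload actions) on arn:aws-cn:s3:::<destination-bucket>.
aws iam create-policy --policy-name dth-destination-rw \
    --policy-document file://dth-destination-rw-policy.json
aws iam create-user --user-name dth-destination-user
aws iam attach-user-policy --user-name dth-destination-user \
    --policy-arn arn:aws-cn:iam::<DEST_ACCOUNT_ID>:policy/dth-destination-rw
aws iam create-access-key --user-name dth-destination-user

# In the source (us-west-2) account: store that user's keys in Secrets Manager
# using the documented key/value format, then reference this secret in the DTH task.
aws secretsmanager create-secret --name dth-cn-destination-secret --region us-west-2 \
    --secret-string '{"access_key_id":"<CN_ACCESS_KEY_ID>","secret_access_key":"<CN_SECRET_ACCESS_KEY>"}'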

@rajatstg (Author)

Hi @bassemwanis, I used the AKSK of the China IAM account, which has the necessary permissions to access the destination bucket, to create the secret in the Oregon region, and that resolved the issue. Thank you very much for your help.
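
For anyone hitting the same InvalidAccessKeyId error: a quick sanity check of the China-partition credentials before storing them in the secret might look like this (bucket name and keys are placeholders):

# Hypothetical check: call the cn-north-1 S3 endpoint directly with the
# destination-account credentials that will go into the secret.
AWS_ACCESS_KEY_ID=<CN_ACCESS_KEY_ID> AWS_SECRET_ACCESS_KEY=<CN_SECRET_ACCESS_KEY> \
aws s3api list-objects-v2 --bucket my-destination-bucket \
    --region cn-north-1 --max-items 1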
