Docker to AWS ECR to a Running Container: A Developer's Battle with Authentication Errors and How I Finally Won

Docker to AWS ECR: When "Just Push It" Becomes a 4-Hour Debug Session
Imagine: You're having a perfectly normal Tuesday when your client drops this bomb on you: "Hey, we need you to push your Docker image to ECR. Here's an IP address, a PEM file, and some AWS credentials. Should be simple, right?"
Famous last words.
The Problem That Started This Adventure
Now, here's the thing that tutorials don't tell you: most AWS deployment guides assume you're working in a perfect world where you have IAM roles set up, proper CLI access configured, and maybe even some fancy CI/CD pipeline ready to go.
But reality? Reality is messier.
What do you do when you don't have an IAM role configured? What happens when you can't just run aws configure because you're working with temporary credentials that expire faster than milk in the desert? What if you don't have direct terminal access to your production environment?
This is exactly the situation I found myself in - no fancy IAM roles, no pre-configured CLI, just raw AWS credentials, an IP address, and the expectation that I'd figure it out. If you're in a similar boat - maybe working with a client's existing infrastructure, or dealing with temporary access, or just trying to deploy without the luxury of perfect DevOps setup - this guide is for you.
What followed was a journey through the nine circles of authentication hell, featuring guest appearances by expired session tokens, mysterious environment variables, and the dreaded "UnrecognizedClientException" error that haunts developers' dreams.
If you've ever stared at your terminal screen wondering why AWS is rejecting your perfectly valid credentials, or if you're about to embark on your first Docker-to-ECR deployment without the safety net of pre-configured IAM roles, grab a coffee. This is going to be a ride.
The Setup That Started It All
So there I was, confident in my Docker skills, ready to tackle what seemed like a straightforward task. My arsenal consisted of:
- An EC2 instance IP: 13.211.97.134 (I'll use this IP throughout; substitute your own. Sounds innocent enough, right?)
- A PEM file named example.pem (at least someone has a sense of humor)
- Some AWS credentials that looked legit
- My Docker application sitting pretty on my Windows machine
"How hard could this be?" I thought, channeling every developer who's ever said those cursed words before spending their entire day debugging.
Chapter 1: The "Easy" Beginning
Getting Cozy with Our EC2 Instance
First things first - I needed to get Docker running on this Ubuntu instance. This part actually went smoothly (I should have known this was the calm before the storm):
sudo apt update -y
sudo apt install docker.io -y
sudo systemctl start docker
sudo systemctl enable docker
sudo usermod -aG docker $USER
Here's something they don't tell you in the tutorials: after adding yourself to the docker group, you HAVE to log out and log back in. Not optional, not "maybe restart your terminal." You literally have to:
exit
ssh -i "example.pem" ubuntu@13.211.97.134
I learned this the hard way after spending 20 minutes wondering why I still needed sudo for every docker command. Sometimes the old ways are the only ways.
Installing AWS CLI: The Plot Thickens
Next up was getting AWS CLI v2 installed. This is where things started getting interesting:
sudo apt install unzip -y
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
unzip awscliv2.zip
sudo ./aws/install
Pro tip: Always verify your installation because Murphy's Law loves developers:
aws --version
If you see something like aws-cli/2.x.x, you're golden. If not, welcome to troubleshooting land - population: you.
Chapter 2: The Authentication Nightmare Begins
Credentials Configuration: Where Dreams Go to Die
This is where my confident Tuesday turned into a debugging nightmare. I had these shiny AWS credentials, so I confidently ran:
aws configure
And entered my Access Key ID, Secret Access Key, region (ap-southeast-2 in my case; choose yours), and output format (json). Everything looked perfect.
But then came the session token. Oh, the session token.
You see, I had temporary credentials, which meant I also needed to set:
export AWS_SESSION_TOKEN="that-ridiculously-long-string-of-characters"
This seemed fine until I realized that session tokens are like milk - they expire, and when they do, everything goes sour.
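If whoever hands you the credentials also tells you when they expire, a quick local check saves a round of cryptic errors later. Here's a minimal sketch, assuming a hypothetical EXPIRY timestamp and GNU date (the default on Ubuntu):

```shell
# Hypothetical expiry timestamp noted when the credentials were issued
EXPIRY="2024-01-01T12:00:00Z"

now=$(date -u +%s)
exp=$(date -u -d "$EXPIRY" +%s)

if [ "$now" -ge "$exp" ]; then
  echo "session token expired - request fresh credentials"
else
  echo "token still valid for $(( (exp - now) / 60 )) minutes"
fi
```

It won't catch credentials that were revoked early, but it rules out the single most common failure before you even touch the AWS CLI.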
The First Error: A Gentle Introduction to Pain
Time for the moment of truth - authenticating Docker with ECR:
aws ecr get-login-password --region <your-region> | docker login --username AWS --password-stdin <account-id>.dkr.ecr.<your-region>.amazonaws.com
And then it happened. The error message that would haunt me for the next hour:
An error occurred (UnrecognizedClientException) when calling the GetAuthorizationToken operation: The security token included in the request is invalid.
Error: Cannot perform an interactive login from a non TTY device
Two errors for the price of one! It was like AWS was showing off.
Chapter 3: Down the Rabbit Hole of Environment Variables
The Mystery of the Conflicting Credentials
Here's what nobody tells you about AWS credentials: they're like a jealous ex - environment variables will override your AWS CLI configuration every single time, even when you don't want them to.
I had to play detective and figure out what was going on:
env | grep AWS
And there they were - old environment variables sitting there like uninvited guests at a party, messing everything up.
The solution? Nuclear option:
unset AWS_ACCESS_KEY_ID
unset AWS_SECRET_ACCESS_KEY
unset AWS_SESSION_TOKEN
Then verify they're actually gone (because trust, but verify):
env | grep AWS
If you see nothing, congratulations! You've successfully ghosted your environment variables.
The Fresh Start Strategy
With the old variables banished to the shadow realm, I reconfigured everything from scratch:
aws configure set aws_access_key_id YOUR_ACTUAL_KEY
aws configure set aws_secret_access_key YOUR_ACTUAL_SECRET
aws configure set aws_session_token YOUR_ACTUAL_TOKEN
Then came the moment of truth:
aws sts get-caller-identity
When this command returns your account information instead of an error, it feels like winning the lottery. Small victories, people.
Chapter 4: The File Transfer Tango
Getting My App from Windows to EC2
While I was fighting authentication battles, I had another challenge: getting my application code onto the EC2 instance. From my Windows machine, I used:
scp -i "example.pem" -r "C:\path\to\your\app" ubuntu@<EC2-Public-IP>:/home/ubuntu/app
This worked like a charm (finally, something that worked on the first try). Then on the EC2 instance:
cd ~/app
ls -la # Just to make sure everything made it safely
Seeing your files sitting there, ready to be containerized, is like seeing an old friend in a foreign country.
Chapter 5: Docker Build and the Light at the End of the Tunnel
Building the Image (The Easy Part)
With my app files in place, building the Docker image was refreshingly straightforward:
docker build -t habib .
Watching Docker do its thing - downloading base images, running commands, building layers - it's almost therapeutic after hours of authentication battles.
The ECR Authentication: Take Two
With fresh credentials and cleared environment variables, I tried the ECR authentication again:
aws ecr get-login-password --region <your-region> | docker login --username AWS --password-stdin <account-id>.dkr.ecr.<your-region>.amazonaws.com
And... it worked! "Login Succeeded" appeared like a message from the heavens.
The Final Push (Literally)
Time to tag and push the image:
# Tag the local image with the ECR repository URI
docker tag <LOCAL_IMAGE_NAME>:<LOCAL_TAG> <ACCOUNT_ID>.dkr.ecr.<REGION>.amazonaws.com/<REPOSITORY_NAME>:<TAG>
# Push the image to ECR
docker push <ACCOUNT_ID>.dkr.ecr.<REGION>.amazonaws.com/<REPOSITORY_NAME>:<TAG>
For once, the upload progress bars actually meant something. Each layer pushing successfully felt like a small victory.
Victory Dance
Finally, the verification:
aws ecr list-images --repository-name habib
When you see your image listed there, with its fresh timestamp and proper tags, it's time for a little victory dance around your desk.
Chapter 6: The Final Frontier - Running the Container in Production
From ECR to Running Container: The Last Mile
But wait! Getting your image into ECR is only half the battle. Now comes the real test - actually running that container in production on your EC2 instance. This is where theory meets reality, and where you find out if your containerized application actually works in the wild.
After celebrating the successful push to ECR, I realized I still had one more crucial step: pulling the image back down to EC2 and running it as a container. This might seem redundant (didn't I just build it here?), but there's method to this madness.
Why Pull from ECR Instead of Using the Local Image?
You might be thinking, "Why not just run the local image I built?" Here's the thing - in production environments, you want to run the exact same image that's in your registry. This ensures:
- Consistency: The image you're running is identical to what you'd deploy elsewhere
- Verification: You're testing the full ECR workflow, not just the build process
- Best Practices: This mirrors what your CI/CD pipeline would do
The Moment of Truth: Pulling and Running
With my image safely stored in ECR, it was time for the final test:
# Pull the image from ECR
docker pull <account-id>.dkr.ecr.<your-region>.amazonaws.com/<repository-name>:latest
# Run the container
docker run -d -p 8080:8080 --name my-app <account-id>.dkr.ecr.<your-region>.amazonaws.com/<repository-name>:latest
That first docker pull command is always nerve-wracking. Will it authenticate properly? Will the image download? When you see those familiar progress bars showing each layer being pulled, it's like watching your baby take their first steps.
The Reality Check: Does It Actually Work?
Running the container is one thing, but does your application actually work? Time for the health check:
# Check if the container is running
docker ps
# Check the logs
docker logs my-app
# Test if the application responds
curl localhost:8080
When you see your application responding to requests, running inside a container that was built locally, pushed to AWS ECR, pulled back down, and executed - that's when you know you've completed the full circle. It's not just deployment; it's deployment with confidence.
What This Actually Proves
This final step validates your entire pipeline:
- Your Dockerfile works correctly
- Your application runs in a containerized environment
- ECR authentication and image storage work properly
- Your EC2 instance can successfully run containers from ECR
It's the difference between theory and practice, between "it should work" and "it definitely works."
The Production Reality
In a real production scenario, this is exactly what would happen:
- Developer builds and pushes to ECR (what we just did)
- Production server pulls from ECR and runs the container (what we're doing now)
- Application serves real traffic
By completing this full cycle, you've essentially performed a production deployment test. Your future self (and your operations team) will thank you for this thoroughness.
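The pull-and-run steps above can be wrapped in a small redeploy script. This is a sketch under assumptions (the image URI and container name are hypothetical placeholders); the DRY_RUN flag prints each command instead of executing it, which is handy for a first pass on a machine you don't fully trust yet:

```shell
# Redeploy sketch: pull the latest image from ECR and restart the container.
# All names below are hypothetical placeholders - substitute your own.
IMAGE="123456789012.dkr.ecr.ap-southeast-2.amazonaws.com/habib:latest"
NAME="my-app"
DRY_RUN="${DRY_RUN:-1}"   # default to printing commands only

run() {
  if [ "$DRY_RUN" = "1" ]; then
    echo "+ $*"            # show what would run
  else
    "$@"                   # actually run it
  fi
}

run docker pull "$IMAGE"
run docker rm -f "$NAME"   # drop the old container if one exists
run docker run -d -p 8080:8080 --name "$NAME" "$IMAGE"
```

Run it once with DRY_RUN=1 to eyeball the commands, then DRY_RUN=0 to deploy for real.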
The Error Hall of Fame: What Went Wrong and How I Fixed It
Let me save you some time by sharing the greatest hits from my error collection:
Error #1: "The Security Token Included in the Request is Invalid"
This is AWS's way of saying "I don't trust you right now." Usually happens when:
- Your session token expired (they do that every few hours, like clockwork)
- Environment variables are fighting with your AWS config
- You're using old credentials
The fix: Clear everything and start fresh:
unset AWS_ACCESS_KEY_ID
unset AWS_SECRET_ACCESS_KEY
unset AWS_SESSION_TOKEN
aws configure
Error #2: "Cannot Perform an Interactive Login from a Non TTY Device"
This cryptic message usually means your AWS authentication isn't working, even though it sounds like a terminal problem.
The fix: Make sure aws sts get-caller-identity works first. If it doesn't, fix that before trying Docker login.
Error #3: "InvalidClientTokenId" - The Classic
This is AWS's polite way of saying "Who are you and why are you talking to me?"
The fix: Check your credentials are correctly configured and not expired. Session tokens are the usual suspects here.
Error #4: Container Starts But Application Doesn't Respond
Sometimes your container runs perfectly, but your application is unreachable. This usually means:
- Port mapping issues (-p 8080:8080 maps EC2 port 8080 to container port 8080)
- Application binding to 127.0.0.1 instead of 0.0.0.0 inside the container
- Security group rules blocking the port on EC2
The fix: Check your Dockerfile's EXPOSE directive, ensure your app binds to 0.0.0.0, and verify EC2 security groups allow traffic on your chosen port.
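For the binding issue, the fix lives in your Dockerfile. Here's a hypothetical fragment (the Node command and flags are illustrative; your framework will have its own way to set the bind address), where the key detail is 0.0.0.0, not 127.0.0.1:

```dockerfile
# Hypothetical fragment - the important part is binding to 0.0.0.0
EXPOSE 8080
# Example for a Node app; adjust to your framework's bind-address flag
CMD ["node", "server.js", "--host", "0.0.0.0", "--port", "8080"]
```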
The Lessons Learned (The Hard Way)
1. Environment Variables Are Sneaky
Always check what environment variables you have set. They override everything else, and AWS doesn't tell you when this happens. It just fails mysteriously.
2. Session Tokens Expire
Unlike fine wine, session tokens don't get better with age. They expire, usually when you least expect it. Always have fresh credentials ready.
3. Test Each Step
Don't try to do everything at once. Test aws sts get-caller-identity before attempting ECR authentication. It's like checking if your car starts before planning a road trip.
4. Clear and Simple Troubleshooting
When things go wrong (and they will), start with the nuclear option: clear all AWS environment variables and reconfigure from scratch. It's faster than debugging mysterious conflicts.
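That nuclear option fits in two lines, and it's worth verifying the variables are really gone. The grep pattern below matches only the three credential variables, so unrelated AWS_ settings (like a default region) survive:

```shell
# Clear any credential variables that would shadow ~/.aws/credentials
unset AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY AWS_SESSION_TOKEN

# Verify: this should print "clean"
env | grep -E '^AWS_(ACCESS_KEY_ID|SECRET_ACCESS_KEY|SESSION_TOKEN)=' || echo "clean"
```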
5. The Full Circle Matters
Pushing to ECR isn't the end - running the container from ECR is. This validates your entire workflow and gives you confidence that your deployment process actually works.
The Victory Lap: What Actually Worked
Here's the magic sequence that finally worked:
- Clean slate approach: Clear all environment variables
- Fresh credentials: Configure AWS CLI with current, valid credentials
- Test authentication: Verify with aws sts get-caller-identity
- ECR authentication: Use the get-login-password command
- Build, tag, push: The standard Docker workflow
- Pull and run: Test the container from ECR on EC2
The whole process, once I figured out the credential dance, took about 15 minutes. The debugging that led to those 15 minutes? That took considerably longer.
What's Next? The Bigger Picture
Now that your image is not just sitting in ECR but actually running as a container in production, you've got options:
- Deploy to ECS: Let AWS manage your containers across multiple instances
- Use with EKS: Kubernetes in the cloud for orchestration at scale
- Set up CI/CD: Automate this whole process so you never have to do it manually again
- Image scanning: Make sure your containers are secure
- Load balancing: Route traffic across multiple container instances
- Auto-scaling: Let AWS scale your containers based on demand
But for now, take a moment to appreciate what you've accomplished. You've wrestled with AWS authentication and won. You've containerized an application, pushed it to a cloud registry, and successfully run it in production. You've completed the full DevOps circle from development to deployment.
The Real Talk Section
Here's what I wish someone had told me before I started this journey:
AWS authentication is confusing by design. There are IAM users, roles, temporary credentials, session tokens, and environment variables all dancing together in ways that aren't immediately obvious. Don't feel bad if it takes a while to understand.
Error messages are often misleading. "Cannot perform an interactive login" usually has nothing to do with your terminal and everything to do with expired credentials.
Documentation is great, but real-world scenarios are messy. The AWS docs assume you're starting with a clean slate and perfect credentials. Real life is messier.
Every developer has been where you are. That frustrated feeling when nothing works? We've all been there. The satisfying feeling when it finally works and you can curl your running application? We've all felt that too.
The full deployment cycle matters. It's not enough to push to ECR - you need to prove you can pull and run. That's where the rubber meets the road.
The Wrap-Up: From Chaos to Confidence
What started as a "simple" task - pushing a Docker image to ECR - turned into a masterclass in AWS authentication, environment variable management, the fine art of debugging cryptic error messages, and ultimately, running a production container from a cloud registry.
But here's the thing: now you know. The next time someone hands you an IP address, a PEM file, and some AWS credentials, you'll know about session tokens and environment variables and the mysterious ways they interact.
You'll know to test your AWS authentication before attempting ECR login. You'll know that "UnrecognizedClientException" usually means expired credentials, not a bug in the AWS CLI. You'll know that sometimes the best debugging strategy is to start over with a clean slate.
Most importantly, you'll know that the job isn't done when your image is in ECR - it's done when that image is running successfully as a container, serving requests, and proving that your entire deployment pipeline works from end to end.
You'll know that behind every "simple" deployment task is a web of authentication, networking, containerization, and configuration that can turn a 10-minute job into a 4-hour adventure. And that's okay. That's just software development being software development.
The next time you see that docker run command successfully starting your container, pulling from ECR, and your application responding to curl requests, you'll appreciate not just the technical achievement, but the journey of problem-solving, persistence, and learning that got you there.
And maybe, just maybe, the next time someone tells you to "just push it to ECR and run it," you'll smile knowingly and say, "Sure thing. Let me just check my session tokens first, and then we'll test the full deployment cycle."
Because that's what separates the experienced developers from the optimistic ones - we know to check the session tokens first, and we always test the full circle.
If you’d like to go beyond just reading and actually try this out yourself, I’ve also shared the full application I used for this practice on my GitHub. It’s a simple video app that transcodes videos – the same one I containerized, pushed to AWS ECR, and ran on EC2. You can clone the repo, follow along with the steps in this post, and get hands-on experience running the whole flow end to end.
Want to connect? If this debugging journey helped you out, or if you've got your own AWS authentication horror stories to share, find me on LinkedIn. I'm always up for swapping developer war stories over virtual coffee!
More technical deep-dives? Check out my other posts on my blog where I turn complicated technical problems into stories that actually make sense.


