☁️ My Personal Experience Deploying Systems on AWS
From a newbie who had never touched AWS to someone who can run production systems on it.

🧑‍💻 Introduction
I started working with AWS around 2020—when my team decided to move our backend system from physical on-prem servers to the cloud. At the time, everything was new: setting up EC2, load balancers, RDS, managing costs, and handling security.
After three years of "living with AWS", I’ve experienced almost everything: from small-scale deployment, managing production with tens of thousands of daily requests, to production crashes caused by misconfigurations—and yes, even burning money by forgetting to shut down services. In this article, I’ll share my entire AWS deployment journey—what works, what doesn’t, and key lessons learned from real experience.
🧱 Part 1: Preparation – Know Yourself, Know AWS
1.1 Define your needs
Before touching the AWS console, you must answer:
- What are you building? A web app, REST API, data pipeline, batch job?
- What’s the expected traffic?
- What database do you need? MySQL, PostgreSQL, NoSQL?
- Is auto-scaling important? Do you need low-latency?
- Does your team have DevOps experience?
My team was building a Spring Boot REST API, using MySQL, connected to a React frontend. So, we chose:
- EC2 to run the Spring Boot app
- RDS for MySQL
- S3 for file storage
- CloudWatch for logging
- Later, we added: Load Balancer, Auto Scaling, Route53, etc.
1.2 Get familiar with AWS services
Here are the AWS services I’ve used the most, grouped by function:
| Purpose | AWS Services | Role |
|---|---|---|
| Compute | EC2, Lambda, Elastic Beanstalk | Run applications |
| Storage | S3, EBS, EFS | Store files and volumes |
| Database | RDS, DynamoDB | Manage relational/NoSQL databases |
| Networking | VPC, Route 53, Load Balancer | Networking, DNS, routing |
| Monitoring | CloudWatch, X-Ray | Logging and system monitoring |
| CI/CD | CodePipeline, CodeDeploy | Automate deployments |
| Security | IAM, KMS, Security Groups | Access control and encryption |
Pro tip: Take advantage of AWS Free Tier when testing, but always set cost limits.
🚀 Part 2: Deployment – From Development to Production
2.1 The simplest way to deploy
At first, I deployed the traditional way:
- Build the Spring Boot app → create .jar file
- Launch an EC2 instance (t2.micro Free Tier)
- Install Java + MySQL client
- Copy .jar to EC2 via SCP or Git pull
- Run the app: java -jar app.jar
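The manual flow above boils down to a few shell commands. This is a sketch, not my exact script—the key path, host, and jar name are placeholders:

```bash
# Build the Spring Boot app locally (produces target/app.jar)
./mvnw clean package

# Copy the jar to the EC2 instance (key path and host are placeholders)
scp -i ~/.ssh/my-key.pem target/app.jar ec2-user@ec2-1-2-3-4.compute.amazonaws.com:~/app.jar

# SSH in and run it in the background so it survives logout
ssh -i ~/.ssh/my-key.pem ec2-user@ec2-1-2-3-4.compute.amazonaws.com \
  'nohup java -jar ~/app.jar > app.log 2>&1 &'
```

(`nohup` plus output redirection keeps the app alive after you disconnect—a systemd service is the cleaner long-term option.)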
Pros:
- Quick and easy to understand
- Full control
Cons:
- Hard to scale
- No CI/CD
- Security risks if misconfigured
2.2 A more professional approach
As the system grew, I switched to:
- Elastic Beanstalk: AWS automatically handles EC2, the Load Balancer, and Auto Scaling
- An external or private RDS connection
- CI/CD using GitHub Actions and the AWS CLI
Recommended architecture:
- EC2 (Auto Scaling Group)
- Application Load Balancer
- RDS (MySQL/PostgreSQL)
- S3 for file storage
- CloudWatch for logging and alerts
2.3 Connect domain and HTTPS
- Use Route 53 to register a domain or point an existing one to AWS
- Use ACM (AWS Certificate Manager) for free SSL
- Attach SSL to Load Balancer → auto HTTPS redirect
AWS also auto-renews the SSL certs. Super convenient.
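Roughly, the CLI equivalent of those steps looks like this (I did most of it in the console; the domain, ARNs, and names below are placeholders):

```bash
# Request a free certificate from ACM, validated via a DNS record in Route 53
aws acm request-certificate \
  --domain-name example.com \
  --validation-method DNS

# Attach the issued cert to the load balancer as an HTTPS listener
aws elbv2 create-listener \
  --load-balancer-arn <alb-arn> \
  --protocol HTTPS --port 443 \
  --certificates CertificateArn=<cert-arn> \
  --default-actions Type=forward,TargetGroupArn=<target-group-arn>
```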
💸 Part 3: Cost Optimization – Avoid Burning Your Wallet
My biggest rookie mistake was… forgetting to shut down EC2, which resulted in a $50/month bill for instances that weren't serving anyone 🥲
3.1 Cost-saving tips:
- Use the Free Tier: stay within the free limits for EC2 t2.micro, RDS, and Lambda
- Auto-stop idle EC2: use cron jobs or Lambda to stop instances when idle
- Turn off RDS when not in use: for test environments, snapshot → delete → restore when needed
- Use S3 Glacier for backups: cheap long-term storage (slow access, but low cost)
- Monitor via CloudWatch: track and alert on CPU, memory, and billing
- Set budgets & alerts: in AWS Billing, get an email when cost exceeds your limit
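For the auto-stop tip, a minimal version is just two cron entries on any always-on machine with the AWS CLI configured (the instance ID and schedule are placeholders):

```bash
# crontab -e: stop the dev instance every evening at 20:00,
# start it again at 08:00 on weekdays
0 20 * * *   aws ec2 stop-instances  --instance-ids i-0123456789abcdef0
0 8  * * 1-5 aws ec2 start-instances --instance-ids i-0123456789abcdef0
```

A scheduled Lambda (via an EventBridge rule) does the same job without needing an always-on box.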
🔐 Part 4: Security – Not Optional in the Cloud
Common mistakes:
- Opening port 22 (SSH) to the world → brute force attacks
- Making S3 buckets public → leaked data
- Allowing all IPs to access RDS → open to scanning attacks
My security practices:
- Create custom Security Groups per service
- Put RDS in a private subnet – only accessible by EC2
- Use IAM Roles instead of hardcoding access keys
- Enable Multi-Factor Authentication (MFA)
- Rotate access keys regularly
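As a sketch, locking SSH down to a known IP range instead of 0.0.0.0/0 is two CLI calls (the group ID and CIDR below are placeholders):

```bash
# Allow SSH only from a known office/VPN range, not the whole internet
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp --port 22 \
  --cidr 203.0.113.0/24

# Remove the dangerous open-to-the-world rule if it exists
aws ec2 revoke-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp --port 22 \
  --cidr 0.0.0.0/0
```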
🛠️ Part 5: CI/CD – Automating Deployment
My setup:
- GitHub Actions to build the .jar on every push
- The AWS CLI to:
  - Deploy to Beanstalk or upload to S3
  - Trigger Lambda
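The deploy step of the pipeline boils down to something like this (the bucket, application, environment, and version names are placeholders):

```bash
# Build and upload the artifact
./mvnw clean package
aws s3 cp target/app.jar s3://my-deploy-bucket/app-v42.jar

# Register it as a new Beanstalk application version and roll it out
aws elasticbeanstalk create-application-version \
  --application-name my-app \
  --version-label v42 \
  --source-bundle S3Bucket=my-deploy-bucket,S3Key=app-v42.jar

aws elasticbeanstalk update-environment \
  --environment-name my-app-prod \
  --version-label v42
```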
🧪 Part 6: Monitoring – Keep Your System Healthy
- CloudWatch Logs → view logs and debug issues
- CloudWatch Alarms → trigger alerts when CPU > 80% for 5 mins
- X-Ray → trace requests from ALB to Lambda or EC2
- CloudTrail → log all actions in your AWS account
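The CPU alarm mentioned above can be created from the CLI like so (the instance ID and SNS topic ARN are placeholders):

```bash
# Alert when average CPU stays above 80% for 5 minutes
aws cloudwatch put-metric-alarm \
  --alarm-name ec2-cpu-high \
  --namespace AWS/EC2 \
  --metric-name CPUUtilization \
  --dimensions Name=InstanceId,Value=i-0123456789abcdef0 \
  --statistic Average \
  --period 300 \
  --evaluation-periods 1 \
  --threshold 80 \
  --comparison-operator GreaterThanThreshold \
  --alarm-actions <sns-topic-arn>
```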
✅ Final Thoughts: Lessons Learned
Things I wish I knew earlier:
- AWS is powerful—but dangerous if unmanaged.
- Security and cost optimization are essential.
- Don't be afraid of managed services like Beanstalk and Lambda.
- Always set a budget alert from day one.
- Start small – scale later.
✍️ Conclusion
Deploying systems on AWS has been a huge learning journey. From a developer with zero cloud experience, I’ve learned how to set up production systems, manage costs, and secure infrastructure. AWS gives you a lot of flexibility—but it demands responsibility and continuous learning.
If you’re just starting with AWS, don’t be afraid to dive in. Test everything in a dev environment first, and once you grasp the fundamentals, AWS becomes a truly powerful tool in your arsenal.



