AWS/GCP/Azure Cost Optimization Best Practices

Your cloud bill keeps climbing every month, and honestly, you're not even sure why anymore. I get it; I've been in that exact spot, staring at a $47,000 monthly AWS bill, wondering where the hell all that money went. Here's what nobody tells you about cloud infrastructure: it's stupidly easy to waste money without realizing it.

You spin up some servers for a quick test on Friday afternoon, forget about them over the weekend, attach a few storage volumes here and there, and boom, you're burning through cash on services nobody's even using. Cloud cost optimization isn't really about just spending less money, though that's nice. It's more about spending smarter and making sure the money you're putting into cloud infrastructure is actually doing something useful for your business.

The tricky part? Cloud providers have made it incredibly simple to spend money but weirdly complicated to figure out where it's all going. When you're dealing with AWS, GCP, and Azure, each one with its own pricing quirks, discount programs, and weird gotchas, trying to figure out the best approach can make your head spin. But here's some good news that might actually help: most companies can slash their cloud costs by 20% to 40% just by fixing the basics and plugging the most obvious leaks.

Understanding Where Your Money Goes

Before you start trying to optimize anything, you need to actually see where your money's going. This sounds super obvious, right? But most companies have terrible visibility into their cloud spending. Sure, they know the total bill, but ask them which applications or teams are burning through the budget, and you get blank stares.

Start with proper tagging across everything in your cloud. Tag every single resource with at least these basics: what application it belongs to, what environment it is (production, staging, dev), which team owns it, and who's responsible for it. These tags become the backbone of actually understanding your spending patterns. AWS Cost Explorer, Azure Cost Management, and GCP's cost tools all need good tagging to tell you anything useful.
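The tag audit behind this can be sketched as a simple check. This is a toy sketch, not a real API call; the resource IDs and tag data are made up, and in practice you'd feed it resource lists pulled from your provider's inventory API:

```python
# Required tags mirroring the basics above: application, environment, team, owner.
REQUIRED_TAGS = {"application", "environment", "team", "owner"}

def missing_tags(resource_tags):
    """Return which required tags a resource is missing (case-insensitive)."""
    return REQUIRED_TAGS - {key.lower() for key in resource_tags}

# Hypothetical inventory: resource ID -> tags.
resources = {
    "i-0abc": {"application": "billing", "environment": "prod",
               "team": "payments", "owner": "alice@example.com"},
    "i-0def": {"application": "billing"},
}

untagged = {rid: sorted(missing_tags(tags))
            for rid, tags in resources.items() if missing_tags(tags)}
print(untagged)  # flags i-0def: missing environment, owner, team
```

Run something like this on a schedule and post the offenders to the owning team's channel; untagged resources are exactly the ones that turn into mystery line items later.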

Set up multiple billing alerts, not just one. Don't wait until you've blown your entire budget to get notified. Set alerts at 50%, 75%, and 90% so you have time to investigate what's happening before things spiral out of control. I've watched teams blow past their monthly budget by Wednesday morning because nobody was paying attention.
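The threshold logic is trivial, which is part of the point: there's no excuse not to have it. A minimal sketch (the spend and budget figures are illustrative):

```python
def crossed_thresholds(spend, budget, thresholds=(0.5, 0.75, 0.9)):
    """Return the alert thresholds the month-to-date spend has crossed."""
    return [t for t in thresholds if spend >= budget * t]

# $38,500 spent against a $47,000 monthly budget: past 50% and 75%, not yet 90%.
print(crossed_thresholds(38_500, 47_000))  # [0.5, 0.75]
```

The native budget tools (AWS Budgets, Azure budgets, GCP budget alerts) do exactly this for you; the sketch just shows why three thresholds beat one.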

The Prioritization Framework: What to Tackle First

Here's where most guides to cloud cost optimization best practices completely drop the ball: they throw a giant list of tactics at you but never tell you where to actually start. Reality check: some optimizations will save you ten times more money than others, and you should focus your limited time on the stuff that actually makes a difference, or bring in a cloud consulting company like Stack Overdrive.

High-Impact Optimizations (Start Here)

These changes typically save 15-40% of your total cloud bill without requiring a ton of work. If you're just getting started with optimization, spend about 80% of your energy here.

Right-size your compute instances

This is, hands down, the biggest money pit for most companies. Someone spins up a large instance because they're not totally sure what they need, and then it just sits there at that size forever. Pull up your cloud provider's recommendation tools, or even better, actually look at what your CPU and memory usage have been over the last month.

If your instances are consistently sitting below 40% utilization, downsize them. You'll usually save 30-50% per instance just from this. We worked with a company where right-sizing alone saved them $8,000 every month, way more than any other single thing they did.
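The decision rule above fits in a few lines. A sketch, assuming one instance size down roughly halves the price (true for most same-family sizes on the big three, but verify against your provider's price sheet); the instance cost is a made-up example:

```python
def rightsize(avg_util, monthly_cost, threshold=0.40, step_savings=0.50):
    """Flag an instance for downsizing if average utilization is below the
    threshold; estimate savings from stepping one instance size down."""
    if avg_util >= threshold:
        return None  # busy enough, leave it alone
    return monthly_cost * step_savings

# An instance costing ~$280/month idling at 25% CPU: ~$140/month back.
print(rightsize(0.25, 280))  # 140.0
print(rightsize(0.65, 280))  # None
```

The important part is using a month of real utilization data, not a single afternoon's snapshot, before you pull the trigger.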

Eliminate idle and unused resources

Unattached storage volumes, ancient snapshots nobody remembers taking, unused IP addresses, instances that are stopped but never coming back: this junk piles up faster than you'd think. Every quarter, do a cleanup: find anything that hasn't been touched in 90 days and delete it. Be ruthless. Set up some automation to flag stuff that looks abandoned.
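The 90-day check is simple enough to sketch. The volume IDs and timestamps here are invented; in practice the last-used data would come from your provider's metrics or activity logs:

```python
from datetime import datetime, timedelta

def stale_resources(resources, now, max_idle_days=90):
    """Return IDs of resources not touched within the idle window."""
    cutoff = now - timedelta(days=max_idle_days)
    return [rid for rid, last_used in resources.items() if last_used < cutoff]

now = datetime(2024, 6, 1)
volumes = {
    "vol-1": datetime(2024, 5, 20),   # used two weeks ago: keep
    "vol-2": datetime(2024, 1, 15),   # untouched ~4.5 months: flag
}
print(stale_resources(volumes, now))  # ['vol-2']
```

Flag first, delete second: give owners a week to object, then clean up whatever nobody claimed.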

Implement autoscaling

If you're not using autoscaling, you're basically burning money. Most applications don't need the same horsepower at 3 in the morning that they need at 2 in the afternoon when everyone's working. Autoscaling lets you match your capacity to what you actually need at that moment.

Buy reserved instances or savings plans for predictable workloads

For anything you know is going to run 24/7 for at least the next year, committed use discounts are a no-brainer. AWS Reserved Instances, Azure Reserved VM Instances, and GCP Committed Use Discounts typically save you 30-70% compared to on-demand pricing. Start with your production databases and your core application servers.
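A back-of-the-envelope sanity check makes the case concrete. The hourly rate and 40% discount below are made-up numbers; plug in your actual on-demand price and the discount your provider quotes:

```python
def commitment_savings(on_demand_hourly, discount, hours=24 * 365):
    """Annual savings from a 1-year commitment at a given discount,
    versus running the same instance on demand all year."""
    on_demand_annual = on_demand_hourly * hours
    return on_demand_annual * discount

# A $0.192/hr instance running 24/7 with a 40% committed-use discount:
print(round(commitment_savings(0.192, 0.40)))  # ~$673/year per instance
```

Multiply that by a fleet of always-on production boxes and the commitment pays for the half-hour of paperwork many times over. Just don't commit to anything you might right-size or retire within the term.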

Medium-Impact Optimizations (Do These Next)

These usually save 5-15% of your bill and need a bit more effort to get people to change how they do things.

Optimize storage tiers and lifecycle policies

Move data you rarely touch to cheaper storage. Use S3 Intelligent-Tiering on AWS, Cool or Archive tiers on Azure, Nearline or Coldline on GCP. Set up lifecycle policies that automatically move or delete old stuff. Most companies have literal terabytes of data just sitting in expensive hot storage that nobody's looked at in months.
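A toy model shows why tiering dominates the storage bill. The tier cutoffs mirror a typical lifecycle policy, and the per-GB prices below are hypothetical round numbers, not any provider's actual rates:

```python
def pick_tier(days_since_access):
    """Toy lifecycle rule: hot for 30 days, infrequent until 90, then archive."""
    if days_since_access < 30:
        return "hot"
    if days_since_access < 90:
        return "infrequent"
    return "archive"

# Hypothetical per-GB monthly prices by tier; check your provider's price sheet.
PRICE = {"hot": 0.023, "infrequent": 0.0125, "archive": 0.004}

def monthly_cost(objects_gb_by_age):
    """Sum monthly storage cost over (days_since_access, GB) buckets."""
    return sum(gb * PRICE[pick_tier(age)] for age, gb in objects_gb_by_age)

# 500 GB of fresh data plus 2 TB untouched for a year: the cold data
# costs less than the hot data despite being four times the size.
print(round(monthly_cost([(5, 500), (365, 2000)]), 2))  # 19.5
```

Lifecycle policies apply this rule automatically per object, which is the whole appeal: you set it once and stop paying hot-storage prices for cold data.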

Negotiate enterprise discount programs

If you’re spending more than $50K a month, you can probably negotiate better rates. AWS has Enterprise Discount Programs, GCP and Azure both have Enterprise Agreements. You typically get 5-15% off your whole bill in exchange for committing to spend a certain amount.

Use spot instances for fault-tolerant workloads

Spot instances on AWS, Preemptible VMs on GCP, and Spot VMs on Azure can be 60-90% cheaper than regular instances. The catch is they can get interrupted on short notice, so they’re perfect for batch jobs, CI/CD pipelines, data analysis, and other stuff that can handle being stopped and restarted.
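Interruptions aren't free, though: restarted jobs redo some work. A rough sketch of the net savings, where the discount and the rework fraction are assumptions you'd replace with your own numbers:

```python
def spot_savings(on_demand_cost, spot_discount, rework_fraction):
    """Net monthly savings from spot capacity, assuming interruptions
    force some fraction of the work to be redone."""
    spot_cost = on_demand_cost * (1 - spot_discount) * (1 + rework_fraction)
    return on_demand_cost - spot_cost

# $1,000/month of batch compute, 70% spot discount, 10% rework overhead:
print(round(spot_savings(1000, 0.70, 0.10)))  # 670
```

Even with a pessimistic rework overhead, the math strongly favors spot for anything checkpointable; it only breaks down for jobs that can't tolerate being killed mid-run.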

Low-Impact Optimizations (Nice to Have)

These save maybe 1-5% of your bill. Only bother with them if you've already knocked out the high and medium stuff, or if they're super easy for your specific setup.

Optimize data transfer costs

Try to keep traffic within the same region and availability zone when possible. Use CDNs for static files instead of serving them directly from your servers. The savings add up over time, but this is rarely your biggest problem.

Clean up old AMIs and container images

Old machine images and container images eat up storage space. Delete the ones you're no longer using. Usually saves a few hundred bucks a month unless you're creating images like crazy.

Review and cancel unused subscriptions

That monitoring tool you tried out six months ago and totally forgot about? Yeah, it's still billing you. Go through your SaaS subscriptions every quarter and cancel whatever you're not actually using.

Top 5 Hidden Cloud Cost Leaks

These are the sneaky costs that don't jump out at you in billing reports but can quietly drain thousands every month.

Non-Production Environments Running 24/7

Your dev and staging environments probably don’t need to be running at night or on weekends. Set up some automation to shut them down outside work hours.

We worked with a company that was dropping $15,000 monthly on staging environments that developers only used for maybe 20 hours a week. We automated shutdowns for evenings and weekends and got that down to $4,000.
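The arithmetic behind schedules like that is worth seeing once. A sketch assuming a weekdays-only, business-hours schedule and a rough 30-day month:

```python
def monthly_hours(weekday_start=8, weekday_end=20):
    """Hours per month on a weekdays-only business-hours schedule,
    versus always on (rough 30-day month)."""
    always_on = 30 * 24
    scheduled = (30 * 5 / 7) * (weekday_end - weekday_start)
    return scheduled, always_on

sched, full = monthly_hours()
print(f"{sched / full:.0%} of always-on hours")  # 36% of always-on hours
```

Running only about a third of the hours means paying about a third of the compute bill, which is the same order of reduction as the $15,000-to-$4,000 drop above.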

Data Transfer Between Regions

Moving data between regions costs money, and it sneaks up on you. If your application in us-east-1 is constantly grabbing data from a database sitting in us-west-2, you're getting charged for every gigabyte that travels. Keep related services in the same region whenever possible.

Over-Provisioned Databases

Database instances are expensive, and people always over-provision them because they’re scared of running out of capacity. A db.r5.4xlarge on AWS runs over $2,000 a month. If your database is only using 20% of its CPU and memory, downsize it. Most databases handle downsizing pretty smoothly with barely any downtime.

Logging Everything at Maximum Detail

Keeping super detailed logs in CloudWatch, Cloud Logging (formerly Stackdriver), or Azure Monitor gets pricey fast. Companies regularly burn $5,000+ every month just on logs. Take a hard look at your log retention. Do you really need 90 days of debug-level logs? Keep error logs longer, cut back on how long you keep info and debug logs, and filter out the stuff that doesn't matter.
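A crude model of the log bill makes the retention lever visible. The per-GB rates below are hypothetical placeholders, not any provider's published pricing:

```python
def log_cost(gb_per_day, retention_days, ingest_per_gb, store_per_gb_month):
    """Rough monthly log bill: a month of ingestion plus storage for
    everything still inside the retention window."""
    ingest = gb_per_day * 30 * ingest_per_gb
    storage = gb_per_day * retention_days * store_per_gb_month
    return ingest + storage

# 100 GB/day at hypothetical rates: $0.50/GB ingested, $0.03/GB-month stored.
print(round(log_cost(100, 90, 0.50, 0.03), 2))  # 1770.0 at 90-day retention
print(round(log_cost(100, 14, 0.50, 0.03), 2))  # 1542.0 after cutting to 14 days
```

Notice that ingestion dominates here: trimming retention helps, but filtering out debug-level noise before it's ever ingested helps more.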

Orphaned Load Balancers and Network Resources

Application Load Balancers cost $16-20 monthly on AWS, Classic Load Balancers cost $18, Network Load Balancers run $20-40. Companies pile these up from old projects and then forget they exist. Same deal with NAT Gateways ($32-45/month), unused elastic IPs ($3.60/month each), and VPN connections. Every quarter, audit your network stuff and delete anything orphaned.
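The audit itself is just an inventory walk and a sum. A sketch using single prices picked from the ranges above (treat them as illustrative; check current pricing):

```python
# Illustrative monthly prices drawn from the ranges quoted above.
MONTHLY_PRICE = {"alb": 18.0, "nat_gateway": 35.0, "elastic_ip": 3.60}

def orphan_cost(orphans):
    """Total monthly spend on a list of orphaned (resource_type, count) pairs."""
    return sum(MONTHLY_PRICE[kind] * count for kind, count in orphans)

# Three forgotten ALBs, two idle NAT gateways, ten unused elastic IPs:
print(round(orphan_cost([("alb", 3), ("nat_gateway", 2), ("elastic_ip", 10)]), 2))
# 160.0 per month for resources doing literally nothing
```

$160 a month sounds small until you remember it's pure waste, compounds across accounts, and takes ten minutes a quarter to eliminate.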

AWS Cost Optimization Specifics

AWS cost optimization has some tricks specific to that platform that are worth knowing about. AWS is the oldest cloud provider, so they've got the most features, but that also makes them the most complicated for managing costs.

Use AWS Compute Optimizer, which looks at how you're actually using things and tells you specific instance types to switch to. It's completely free and works better than you'd expect. Turn it on for all your accounts.

Try AWS Spot Fleet if you want more reliable spot instances. Instead of asking for individual spot instances, you request a fleet, and AWS automatically finds the cheapest available capacity across different instance types.

Turn on S3 Intelligent-Tiering for data lakes and archives. It automatically moves objects between storage tiers based on how often they are accessed. You pay a tiny monitoring fee but save way more on storage.

Look into AWS Graviton instances (ARM-based) if your workloads support them. They're usually 20-40% cheaper than equivalent Intel instances and often run just as fast or faster.

Multi-Cloud Cost Optimization Strategies

Running apps across multiple cloud providers makes things more complicated but also opens up some opportunities. Multi-cloud cost optimization needs a different approach than just optimizing one provider.

Centralized Cost Visibility

Get a multi-cloud cost management tool like CloudHealth, Cloudability, or Apptio. These give you one place to see costs across all your providers and let you actually compare things properly. The native tools from each cloud provider only show their own costs, making cross-cloud comparison basically impossible.

Workload Placement Strategy

Different cloud providers are better at different things and price services differently. Run compute-heavy workloads wherever compute is cheapest, and data-heavy workloads wherever storage and data transfer are cheapest. GCP tends to be cheaper for workloads that run constantly; AWS has the most variety in instance types; Azure usually has better deals if you're already using a lot of Microsoft products.

Avoid Data Transfer Between Clouds

The absolute fastest way to explode your multi-cloud budget is moving data between providers. AWS charges $0.09 per gigabyte to transfer data out, Azure charges $0.087/GB, GCP charges $0.12/GB. If your application on AWS needs to constantly pull data from something running on GCP, you’ll pay hundreds or thousands monthly just in transfer fees. Design your infrastructure to minimize data movement between clouds.
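Plugging the per-GB rates above into a quick calculation shows how fast this adds up. The 5 TB/month figure is just an example workload:

```python
# Egress prices per GB from the figures quoted above.
EGRESS_PER_GB = {"aws": 0.09, "azure": 0.087, "gcp": 0.12}

def cross_cloud_cost(source, gb_per_month):
    """Monthly egress bill for shipping data out of one provider."""
    return EGRESS_PER_GB[source] * gb_per_month

# Pulling 5 TB/month out of GCP into an application running on AWS:
print(round(cross_cloud_cost("gcp", 5 * 1024), 2))  # 614.4
```

That's over $600 a month for one data flow in one direction, and chatty cross-cloud architectures usually have many such flows. Colocating the consumer with the data almost always wins.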

Unified Tagging and Naming Conventions

Use consistent resource tags across all your providers so you can track costs by application or team, no matter where services are running. Each provider has their own weird conventions for tags, so make a document that maps everything out and make sure your team follows it.

Automation is Your Friend

Trying to optimize costs manually doesn't scale. You'll do a big cleanup, save some money, then six months later, costs have crept back up because nobody's watching anymore. Automate everything you possibly can.

Set up automation to:

  • Shut down dev and staging environments on a schedule
  • Delete unattached storage volumes that are older than 30 days
  • Flag instances with consistently low usage for someone to review
  • Automatically create snapshots and delete old data based on lifecycle rules
  • Alert teams when their spending goes over certain amounts
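The second bullet, for example, boils down to one filter. A sketch of the core check such a scheduled job would run; the volume records here are mocked, and a real version would fetch them from the provider's API before deleting anything:

```python
from datetime import datetime, timedelta

def volumes_to_delete(volumes, now, min_age_days=30):
    """Pick unattached volumes older than the cutoff: the core check a
    scheduled cleanup job (Lambda, Cloud Function, etc.) would run."""
    cutoff = now - timedelta(days=min_age_days)
    return [v["id"] for v in volumes
            if v["attached_to"] is None and v["created"] < cutoff]

now = datetime(2024, 6, 1)
volumes = [
    {"id": "vol-a", "attached_to": "i-123", "created": datetime(2023, 1, 1)},
    {"id": "vol-b", "attached_to": None, "created": datetime(2024, 1, 1)},
    {"id": "vol-c", "attached_to": None, "created": datetime(2024, 5, 25)},
]
print(volumes_to_delete(volumes, now))  # ['vol-b']
```

Wire the output to a snapshot-then-delete step rather than a straight delete, so a mistake costs you cheap snapshot storage instead of someone's data.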

AWS has Config Rules and Lambda for automation. GCP has Cloud Functions and Pub/Sub. Azure has Automation and Logic Apps. Use these to enforce your cost policies without needing humans to remember.

Building a Cost-Conscious Culture

Technology and automation only get you part of the way there. The real secret to sustained cost optimization is getting your teams to actually give a damn about costs.

Make costs visible to your engineering teams. Show them what their applications actually cost to run each month. Most developers have absolutely no clue. When they can see that their service costs $8,000 a month and the similar one next to it costs $2,000, they start asking good questions.
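With tagging in place, that visibility is one roll-up away. A sketch over mocked billing line items; real ones would come from your provider's cost export:

```python
from collections import defaultdict

def cost_by_tag(line_items, tag="team"):
    """Roll a flat list of billing line items up by a tag value."""
    totals = defaultdict(float)
    for item in line_items:
        totals[item["tags"].get(tag, "untagged")] += item["cost"]
    return dict(totals)

# Hypothetical month of line items.
bill = [
    {"cost": 6200.0, "tags": {"team": "payments"}},
    {"cost": 1800.0, "tags": {"team": "payments"}},
    {"cost": 2000.0, "tags": {"team": "search"}},
    {"cost": 450.0, "tags": {}},
]
print(cost_by_tag(bill))
# {'payments': 8000.0, 'search': 2000.0, 'untagged': 450.0}
```

The "untagged" bucket doubles as a health metric: if it's a big share of the bill, fix your tagging before you trust any of the per-team numbers.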

Implement FinOps practices: basically getting finance, engineering, and business teams to work together on cloud spending. Have monthly cost reviews where teams explain why spending changed and what they're doing to optimize.

Set budgets by team or application, not just one giant company-wide number. Give teams ownership of their own costs and the power to do something about it. When teams have some skin in the game, they make way better decisions.

Celebrate when people save money. When someone saves $5,000 a month by rightsizing a database, call it out publicly and make it a big deal. Make cost optimization part of your engineering culture, rather than just something the ops team worries about.

Conclusion

Real cloud cost optimization isn't about being cheap or finding the absolute lowest price for everything. It's about understanding where money goes, cutting out waste, and making smart decisions about where to spend. Start with the high-impact changes like rightsizing instances, getting rid of idle resources, and setting up autoscaling; these usually give you about 70% of your possible savings with maybe 30% of the work.

Fix the hidden leaks: dev environments that run all the time, data transfer between regions, and databases that are way bigger than they need to be. Once you've tackled the basics, move on to the medium-impact changes and the tricks specific to whichever providers you use. Whether you're all-in on AWS, spread across multiple clouds, or mostly on GCP or Azure, the basic ideas stay the same: get visibility into what's happening, automate enforcement of policies, be ruthless about prioritizing, and build a culture where engineers actually understand and care about infrastructure costs.

Most companies can sustainably cut their cloud spending by 30-40% without making things slower or less reliable; they just need to approach it systematically instead of randomly trying things they saw in a blog post somewhere.
