The bill came down. The costs went up. This is the story of how a SaaS company in the business intelligence space spent eighteen months engineering its way to a lower AWS invoice, and in doing so, quietly destroyed the productivity of its engineering team, delayed two product launches, and spent more in total than if it had done nothing.
The company, which I’ll leave unnamed because the people involved still work there and the lessons matter more than the attribution, had reached a scale where cloud costs were genuinely significant. Not ruinous, but noticeable on a CFO’s radar. Their platform processed and stored substantial volumes of customer data. Their AWS bill had grown alongside their customer base, which is exactly how it’s supposed to work. But the bill was large enough to attract attention, and attention turned into a mandate.
The engineering team was asked to cut cloud spend by 30 percent.
They did it. Over the course of a year and a half, through a combination of reserved instance purchases, workload migration to cheaper storage tiers, architectural changes to reduce data transfer costs, and a major push to right-size compute instances, they brought the monthly AWS bill down by roughly a third. The CFO was pleased. The mandate was met. The project was considered a success.
Except it wasn’t.
The engineering team that executed this work was the same team responsible for the company’s core product. They did not hire specialists. They did not bring in a FinOps consultancy. They redirected their best infrastructure engineers, and periodically pulled in application engineers to help with the architectural changes, for the better part of those eighteen months. Two product features that had been scoped and scheduled were pushed. One was eventually cut entirely when the market window closed. A third initiative, a new data connector that a handful of enterprise prospects were explicitly asking for, was deprioritized repeatedly until two of those prospects signed with a competitor.
When the company eventually tried to calculate the true cost of the optimization project, the numbers were uncomfortable. Engineer salaries allocated to the project. Opportunity cost on the delayed features. Estimated revenue from the lost enterprise deals. The total was a multiple of what they had saved on the cloud bill.
This is not a story about cloud optimization being bad. It’s a story about what gets left off the ledger.
The problem is structural. Cloud costs are visible, categorized, and reported monthly. The cost of an engineer’s attention is diffuse, invisible in any single line item, and almost never attributed to the project that consumed it. A CFO can see the AWS bill fall. Nobody shows them a slide that says “two engineers who could have built the connector that would have closed those enterprise deals were instead optimizing S3 storage tiers.”
This accounting asymmetry is why cloud optimization projects get greenlit with enthusiasm while the slower, harder work of product development gets deprioritized without anyone quite deciding to deprioritize it. The savings are concrete. The losses are hypothetical until they aren’t.
The math changes depending on where you are as a company. For a capital-efficient infrastructure business running on thin margins, serious optimization work can be genuinely necessary. Dropbox famously built its own infrastructure in 2015 and 2016, saving what the company estimated at nearly $75 million over two years. But Dropbox was storing data at a scale where the economics of ownership made sense, it was large enough to absorb the engineering cost, and it had the operational expertise to make the migration work sustainably. The decision was made with full awareness of what it would require.
Most companies are not Dropbox. Most companies doing aggressive cloud optimization are doing it because someone in finance noticed a large number and asked engineering to make it smaller.
The more honest framing is that cloud costs are a tax on growth. When your AWS bill rises because your customer base grew and your usage scaled with it, that is not a problem. That is the business working. The question worth asking is not “how do we lower this number?” but “what is the return on this spend, and is there a cheaper way to get the same return that doesn’t cost us in ways we’re not measuring?”
Some optimization is clearly worth it. Reserved instances and savings plans on predictable workloads are essentially free money, requiring minimal engineering time for meaningful savings. Eliminating genuinely wasteful spend (orphaned resources, oversized instances that were never right-sized after initial provisioning) pays for itself quickly. The rule of thumb that works in practice: if a competent engineer can identify and implement the saving in a day or two, do it. If it requires months of architectural work, run the full accounting before you commit.
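What “run the full accounting” means can be sketched in a few lines. The numbers below are hypothetical placeholders, not figures from the company in this story; the point is only that the comparison has more terms in it than the cloud bill.

```python
# Back-of-envelope accounting for a proposed optimization project.
# Every number used below is a made-up placeholder; substitute your own.

def optimization_net_value(
    monthly_savings: float,              # expected reduction in the cloud bill
    horizon_months: int,                 # how long those savings should persist
    engineer_months: float,              # fully loaded engineering effort required
    cost_per_engineer_month: float,
    displaced_revenue: float = 0.0,      # deals or launches the work pushes out
) -> float:
    """Projected net value: cloud savings minus everything left off the ledger."""
    savings = monthly_savings * horizon_months
    engineering_cost = engineer_months * cost_per_engineer_month
    return savings - engineering_cost - displaced_revenue

# A quick win: a couple of days of work, no product impact.
print(optimization_net_value(8_000, 24, 0.1, 20_000))
# -> 190000.0, clearly worth doing

# A months-long architectural push that displaces roadmap work.
print(optimization_net_value(40_000, 24, 30, 20_000, displaced_revenue=900_000))
# -> -540000.0, the savings never cover what the work crowds out
```

The second scenario is the shape of the story above: real, measurable savings, swamped by costs that never appear on the same spreadsheet.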
The BI company eventually internalized this. They hired a dedicated FinOps engineer whose sole job was infrastructure cost management, which removed the optimization burden from the product engineering team entirely. Their cloud bill crept back up as the company grew, but the ratio of cloud spend to revenue stayed stable, which is the number that actually matters. And they shipped the data connector, a year late, to customers who had mostly figured out alternatives.
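If the number that matters is cloud spend as a fraction of revenue rather than the absolute bill, the check is simple enough to sketch. The quarterly figures here are invented for illustration, not the company’s actual numbers.

```python
# Hypothetical quarterly figures: the absolute bill keeps rising,
# but cloud spend as a share of revenue stays roughly flat.
quarters = [
    ("Q1", 4_000_000, 180_000),   # (label, revenue, cloud spend)
    ("Q2", 4_600_000, 205_000),
    ("Q3", 5_300_000, 240_000),
    ("Q4", 6_100_000, 272_000),
]

for label, revenue, cloud_spend in quarters:
    print(f"{label}: cloud spend is {cloud_spend / revenue:.1%} of revenue")
```

Every quarter prints roughly the same percentage even though the bill itself grows by half, which is exactly the pattern a healthy, scaling business should show.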
The deeper lesson is one that applies beyond cloud costs. Whenever a cost is visible and a corresponding cost is invisible, organizations will optimize for the visible one at the expense of the invisible one, and call it discipline. Burn rate is the wrong number to watch for the same reason: it measures money leaving the account, not value accumulating in the product or the customer base.
The company paying less for cloud is probably spending more. The question is just whether anyone has built a dashboard that shows it.