Monitoring Stories
You cannot fix what you cannot see. These stories explore gaps in observability, alert fatigue, dashboards that lied, and the moments when visibility made the difference between a near-miss and a full outage.
8 stories
The Wrong Host Deleted Our Primary Postgres and Exposed Every Backup Assumption We Had
“We were a fast-growing Series B company running a hosted Git collaboration and CI platform for millions of users. In early 2017, our production database design was still painfully ...”
How a Storage Security Policy Broke VM Provisioning Across Azure and GitHub Worldwide
“I work on cloud control-plane infrastructure that provisions virtual machines, scale sets, Kubernetes nodes, and the supporting identity and extension systems around them. One of t...”
How a Database Permissions Change Doubled a Feature File and Took Down a Global CDN for Six Hours
“We run one of the largest edge networks in the world — millions of requests per second, across hundreds of data centers in over 100 countries. Our network sits between users and th...”
The Empty DNS Record That Took Down 70 AWS Services for 14 Hours
“We operate one of the largest cloud infrastructure platforms in the world, running hundreds of interdependent services across dozens of regions. Our DynamoDB service in us-east-1 —...”
Two Silent Consul Bugs That Took Down a Gaming Platform for 73 Hours
“We run a gaming platform with 50 million daily active players, 18,000+ servers, and 170,000 containers. Our entire infrastructure — service discovery, container orchestration, secr...”
The CronJob That Quietly Saturated Our Kubernetes Cluster
“This one started as a harmless cleanup task. We had moved our payments platform from Heroku to GKE about six months earlier and were still learning which safety rails Kubernetes gi...”
Black Friday, One Missing Index, and 53 Minutes of Checkout Pain
“I was the primary on-call engineer for a mid-size e-commerce company doing roughly 50k orders a day outside of peak season. Checkout lived in a Node.js monolith on ECS with Postgre...”
How We Built a Production-Grade AWS Infrastructure from Scratch in 6 Weeks — as a Team of Two
“We were 14 months into building a B2B document intelligence platform for legal teams. Our entire infrastructure was a single $48/mo DigitalOcean VPS — one box, manually SSHed into,...”