It’s a lazy Saturday morning, the sun is out, and you’re craving a big bowl of Honey Bunches of Oats. Only one problem: the milk expired yesterday. Gasp! Unsure what to do, you sniff the milk to see if it has gone bad, but you can’t really tell:
- What does milk normally smell like?
- What does spoiled milk smell like?
- Is this smell ok, or should it smell completely different?
Application metrics run into this exact same scenario. A few support tickets start coming in about a feature written over a year ago. It’s been working great the whole time, but this month the number of users doubled and it’s starting to slow down. Unfortunately, no metrics were added when the feature was first built. New metrics go in to start debugging, but we find ourselves asking:
- What should this feature’s metrics look like on a good day?
- This metric looks bad, but how do we know what bad is?
- Which metrics have changed since the feature launched a year ago, and which are the same?
When building new features and systems, there is one key question I make sure I can answer before considering the system complete:
How will I know when this system breaks?
It’s not a question of if: all systems eventually hit their limits as they grow. Smell the milk when it’s fresh, so you know when it spoils.
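To make that concrete, here’s a minimal sketch of what baking metrics in from day one might look like, using the Prometheus Python client. The metric names and the `export_report` function are hypothetical stand-ins for whatever your feature actually does.

```python
from prometheus_client import Counter, Histogram

# Hypothetical metrics for an imaginary "export" feature.
EXPORT_REQUESTS = Counter(
    "export_requests_total",
    "Number of times the export feature has been used",
)
EXPORT_DURATION = Histogram(
    "export_duration_seconds",
    "Time spent building an export",
)

def export_report(user_id: str) -> None:
    """The feature itself, instrumented on the day it ships."""
    EXPORT_REQUESTS.inc()
    with EXPORT_DURATION.time():  # records each call's duration
        ...  # the actual work goes here
```

With even this much in place at launch, a year of “good day” data is waiting when the support tickets arrive, and “slower than usual” becomes a comparison instead of a guess.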
Thank you to Brian Reath for confirming this is an odd post while still giving me the confidence to ship it.