When the Dashboard Becomes a Thermostat
Canonical URL: https://floriansonderegger.com/when-the-dashboard-becomes-a-thermostat.html
Published: 2026-03-04
Word Count: 1262
Retrieval Hints
- Author: Florian Sonderegger
- Language: English
- Document Type: Essay / analysis article
- Core Topics: media systems, culture, strategy, AI, organizational change
Or: Metrics do not measure culture. They set it.
It is 08:41 and someone has a slide open with three numbers, large enough to feel like a verdict.
Reach. Engagement. Conversion.
A quiet moment follows, the kind that is framed as "alignment" but feels like waiting for a weather report.
Then the familiar line: "So we should do more of what performs."
No one is being cynical. That is the uncomfortable part. This is what competent people say when the room has agreed that the numbers are the closest thing to reality we can share.
And yet I keep noticing the same secondary effect. The dashboard stops being a mirror and starts acting like a thermostat.
## The turn
It's tempting to think measurement is passive. We record what happened, we learn, we adjust.
In practice, measurement is an intervention.
The moment a metric becomes legible, repeated, and socially meaningful, it changes what people choose to do next.
That is why "being data-driven" so often produces a strange outcome: the work looks more optimized and feels less alive. Not because people got lazy. Because the thermostat was installed.
## What a metric really is
A metric is a compression. It takes a messy, human reality and turns it into a single legible signal.
That is useful. It is also violent in a small way, because you have to decide what gets included and what gets dropped.
Then you do a second thing, often without naming it. You attach consequence.
Maybe it is a bonus. Maybe it is praise in the all-hands. Maybe it is the implicit career narrative of "the person who can move the number."
The mechanism does not require malice. It only requires repetition and reward.
Once consequence enters, the metric becomes a thermostat with a setpoint. People are no longer just observing temperature. They are adjusting behavior to hit the target.
Which means the question is not "Is the number accurate?"
The question is "What behavior does this number make easiest?"
## The predictable failure mode
When a metric becomes a target, it tends to stop being a good measure. That is not a hot take. It is Goodhart's law, a systems property.
People adapt. They find shortcuts. They protect themselves. They learn which activities move the needle and which activities are invisible.
Over time, the system selects for what the metric can see. A few examples you have probably lived through:
- Content teams learn that frequency beats craft when the feedback loop is fast. The thermostat rewards output cadence, so cadence becomes identity.
- Marketing teams learn that short-term conversion beats long-term trust when quarterly reporting is the social reality. The thermostat rewards spikes, so you start designing spikes.
- Product teams learn that shipping beats coherence when "velocity" is the main story leadership hears. The thermostat rewards motion, so motion becomes the goal.
None of this requires anyone to say, "Let's be shallow."
It happens because the thermostat keeps running even when you are tired, even when you are trying to be thoughtful.
So the real risk is not "bad metrics."
It is unexamined consequence.
## The uncomfortable part for media
Media is especially sensitive because the product is a shared reality.
A newsroom does not just deliver information. It coordinates attention. It decides what is salient, what is normal, what is urgent, what is ignorable.
When the thermostat is tuned to the wrong setpoint, you do not just get worse stories. You get a different public conversation.
Two shifts show up fast:
- Judgment gets replaced by legibility. Editors and producers start preferring ideas that are easy to justify with numbers over ideas that are hard to justify but right in their bones.
- The feedback loop eats the editorial loop. If performance data arrives faster than meaningful audience understanding, you optimize for the instrument panel, not the person.
The tragedy is that this can look like professionalism. It can look like "being accountable."
But accountability to what, exactly?
If the thermostat is calibrated to the wrong notion of "good," your system will become excellent at being wrong.
## Switzerland reads this differently
Small markets feel thermostat effects earlier.
There is less room for waste, less tolerance for experiments that do not show impact, and more pressure to justify decisions across multiple stakeholder realities.
Add multilingual audiences, cross-border media gravity, and a political culture that treats legitimacy as something you continually re-earn, and the desire for legible proof becomes intense.
Swiss organizations are also unusually good at coordination. That is a strength. It keeps systems stable.
It also means measurement can become a substitute for disagreement. Numbers are a socially safe way to end a debate without naming values.
So the Swiss risk pattern is not "we chase hype."
It is "we quietly let the thermostat decide, because it reduces friction."
And then we wake up one day with a culture that feels optimized and oddly cautious.
## A measurement defense posture
If your world is complex, you cannot dashboard your way into certainty. You have to probe, learn, and adjust.
That is not anti-data. It is pro-reality.
Here is a posture I have found useful. Not rules. Tested heuristics.
1) Separate learning metrics from reward metrics. If a number is used to learn, protect it from consequence. The moment you attach reward, you distort it. That does not mean you never reward outcomes. It means you should not pretend the same metric can carry both truth and consequence without deforming. Ask: which metrics do we want to be honest, even when they look bad? Then remove punishment from them.
2) Make one value explicit that the dashboard cannot see. Every metric system has blind spots. Name one on purpose. In editorial work, it might be public value or civic clarity. In marketing, it might be trust or brand coherence. In product, it might be felt quality. Then operationalize it in a way that is not instantly reducible to a single number. A recurring review. A red-team critique. A rotating panel. A structured narrative debrief. This is the part that feels inefficient. It is also the part that keeps you from letting the thermostat flatten the room.
3) Treat metric shifts as hypotheses, not commands. When a number moves, do not jump straight to "do more of that." First ask: by what mechanism did it move? Was it distribution? Timing? Headline framing? A platform change? A one-off event? A cohort effect? A novelty spike? If you cannot articulate a mechanism, you do not have a strategy. You have superstition with charts.
4) Install a counter-metric that punishes gaming. If you have a primary KPI, assume it will be gamed. Not out of malice. Out of adaptation. So design a counter-pressure. Something that makes the easiest gaming path visible and costly. Examples: pair volume with complaint rate, pair click-through with return visits, pair conversion with churn, pair speed with defect rate. The exact pairing depends on your system, but the principle is stable. The thermostat needs a second sensor.
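The "second sensor" idea can be sketched as a tiny guardrail check. This is a minimal illustration, not a prescription: the metric names, the 5% tolerance, and the classification labels are all hypothetical, and the right pairing depends on your system.

```python
# Sketch of a counter-metric guardrail: a primary KPI only counts as a win
# when its paired counter-metric has not degraded past a tolerance.
# All metric names and thresholds below are hypothetical examples.

from dataclasses import dataclass


@dataclass
class MetricPair:
    primary: str             # the KPI the thermostat optimizes
    counter: str             # the sensor that makes gaming visible
    max_counter_drop: float  # tolerated relative degradation, e.g. 0.05 = 5%


def evaluate(pair: MetricPair, baseline: dict, current: dict) -> str:
    """Classify a KPI movement using its counter-metric."""
    primary_delta = (current[pair.primary] - baseline[pair.primary]) / baseline[pair.primary]
    counter_delta = (current[pair.counter] - baseline[pair.counter]) / baseline[pair.counter]
    if primary_delta > 0 and counter_delta < -pair.max_counter_drop:
        return "suspect"     # KPI up while the counter-metric fell: possible gaming
    if primary_delta > 0:
        return "healthy"
    return "flat_or_down"


# One pairing from the text: click-through paired with return visits.
pair = MetricPair("click_through", "return_visits", max_counter_drop=0.05)
print(evaluate(pair,
               baseline={"click_through": 0.040, "return_visits": 1000},
               current={"click_through": 0.055, "return_visits": 880}))  # suspect
```

The point of the sketch is the shape, not the numbers: the primary KPI never gets read alone, so the easiest gaming path (spike the KPI, erode the relationship) shows up as "suspect" instead of as a win.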
## What I'm really arguing for
Not fewer metrics. Better governance of consequence.
A metric is never just information. It is a behavioral design tool. It shapes attention, status, and what counts as good work in the day-to-day.
So the question for today is simple, and slightly uncomfortable:
If you had to justify your main KPI the way you justify a rule in a game, could you explain what behavior it rewards, what it accidentally punishes, and what kind of culture it will produce after a year?