The February 2020 Feeling, But for Work

Canonical URL: https://floriansonderegger.com/the-february-2020-feeling-but-for-work.html

Published: 2026-03-01

Word Count: 1232

Author: Florian Sonderegger

Think back to February 2020. Everything looked normal. Markets humming. Calendars full. School runs. Then within weeks, the world reorganised itself around a new constraint.

A viral AI essay recently tried to borrow that sensation as an analogy: we are in the "this is kind of overblown" phase, right before the curve stops being theoretical.

I dislike the COVID comparison on the merits. Pandemics have biological ceilings and social countermeasures, and the maths of spread is not the same as the adoption curve of a general-purpose technology. That critique has been made explicitly and it is correct.

And still, the analogy lands in the one place that matters.

Most people are not prepared for a category change in what "work" means.

What actually went viral

The core move of the essay is not forecasting. It is testimony.

The author's claim, stripped of rhetoric, is this: tasks that used to require specialised technical execution have shifted into a different interface. Describe the outcome in plain language, let the system build, then come back later to something closer to finished than draft.

That is a very specific kind of disruption. It is not "AI writes a bit of code." It is "AI absorbs chunks of a workflow end to end, including the fiddly middle parts that used to be where expertise lived."

You can argue about how representative the author is. You should. He runs an AI company, he benefits from attention, and testimonials are not evidence on their own.

But you cannot dismiss the second layer: thousands of practitioners have started describing the same sensation, in their own domains, with different incentives.

Not that AI is perfect. That the human bottleneck is shifting.

The uncomfortable metric: time horizon

The most useful piece of grounding in the whole debate is not vibes. It is measurement.

METR has been tracking a simple idea: measure tasks by how long they take a human expert, then ask how long a task an AI agent can complete with a meaningful success rate. Their result, across several years of data, is that the frontier time horizon has been growing exponentially, with a doubling time of around seven months.
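To make the arithmetic concrete, here is a minimal sketch of what exponential growth with a seven-month doubling time implies. The starting horizon and the specific numbers are illustrative assumptions, not METR's published figures:

```python
# Extrapolate an exponentially growing task time horizon.
# Assumptions (illustrative only): a 1-hour horizon today,
# doubling every 7 months, as a stand-in for the measured trend.

def horizon_after(months: float, start_hours: float = 1.0,
                  doubling_months: float = 7.0) -> float:
    """Time horizon in hours after `months` of exponential growth."""
    return start_hours * 2 ** (months / doubling_months)

for years in (1, 2, 3):
    print(f"after {years} year(s): ~{horizon_after(12 * years):.1f} hours")
```

Under these assumptions, a one-hour horizon passes a full workday in roughly two years. The point is the shape of the curve, not the exact figures.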

This matters because it moves the conversation away from "smartness" and toward "autonomy."

A system that reliably completes 10-minute tasks is a clever assistant.
A system that reliably completes 6-hour tasks is a colleague you can hand a workday to.
A system that reliably completes multi-day tasks is a project owner.

That transition is not smooth. It is lumpy. It arrives the way a workflow you used to dread suddenly becomes almost trivial.

The flywheel nobody wants to say out loud

There is a second, sharper claim circulating right now: AI is starting to meaningfully accelerate the building of next-generation AI.

This is not sci-fi phrasing. It is being stated plainly by the people running frontier labs. Dario Amodei's January 2026 essay describes a feedback loop where current systems accelerate the creation of future systems, and warns that the loop is already underway.

If you take that seriously, you do not need to believe in a clean "intelligence explosion" story. You only need to accept a more boring, more plausible version: faster iteration cycles compound.

Even if you assume the timeline is off by a factor of five, compounding still changes planning. A ten-year disruption is still disruption, just with more time to make mistakes.
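The factor-of-five point is plain arithmetic: stretching the doubling time five-fold slows the curve without flattening it. A toy comparison, with both paces assumed for illustration:

```python
import math

# Years needed for a 100-fold capability gain under two assumed doubling times.
doublings_needed = math.log2(100)  # ~6.64 doublings for a 100x gain

for label, doubling_months in (("claimed pace", 7), ("five times slower", 35)):
    years = doublings_needed * doubling_months / 12
    print(f"{label}: 100x in ~{years:.1f} years")
```

Roughly four years versus roughly twenty: the slower path still lands well inside a single career.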

The strongest counterargument is not "it won't happen"

The best scepticism I have seen is not denial. It is warning.

Gary Marcus' critique, for example, is basically: executives are over-reading current capabilities and will break organisations by automating too aggressively, too early, with systems that still hallucinate and fail at edge cases.

That critique is valuable because it points at a near-term risk that is already real: not mass displacement by AI, but mass unforced errors by leadership.

People will lose jobs in that scenario too, but through managerial overreach rather than technological inevitability.

So the debate is not doomer versus optimist.

It is competing failure modes.

Why Switzerland is a hard case, not a safe case

Switzerland is small, open, high-wage, high-trust, and heavily specialised. That combination produces a seductive story: we adapt. We upskill. We absorb shocks.

But Switzerland is also unusually exposed to white-collar productivity shocks because so much value is generated through coordination-heavy work: finance, insurance, pharma, precision industry, consulting, media, public administration.

If the autonomy time horizon keeps expanding, the first-order impact here will not look like robots taking jobs. It will look like the value of routine judgement collapsing.

Not the judgement we romanticise. The daily professional judgement embedded in:
- drafting, reviewing, routing, formatting, reconciling
- turning messy inputs into clean outputs
- translating between departments, stakeholders, and constraints
- preparing decision surfaces for leadership

Switzerland's strength is institutional maturity: apprenticeship pathways, professional standards, regulatory competence, negotiated coordination.

That is also the surface area that will be hit first.

The real preparedness gap

The viral essay's most important point is not "AI is bigger than COVID."

It is: the people closest to the tools are not making abstract predictions. They are describing what already changed in their own work, then projecting that change outward.

The preparedness gap is not knowledge. Most people have heard the claims.

The gap is contact.

Too many organisations are still treating AI as:
- a search box
- a copywriting helper
- a compliance headache
- a side project run by an innovation team

That posture made sense when the time horizon was short.

It becomes strategically negligent when autonomy expands.

A Swiss-ready posture that is neither hype nor paralysis

If I had to reduce this moment to an operating stance for Swiss organisations, it would be this:

1) Treat AI like workflow infrastructure, not a tool
Map workflows, not use cases. Identify handovers, failure points, bottlenecks, and "invisible labour." The wins are rarely where people expect them.

2) Build a measurement habit
Do not argue about feelings. Track cycle time, error rate, rework, and the number of tasks you can safely delegate end to end. Use the METR framing as a mental model: time horizon is the variable that changes the game.

3) Separate autonomy from authority
Let systems do work. Keep humans accountable. The most expensive mistake will be granting authority implicitly because the output looks confident.

4) Redesign roles early
The question is not "which jobs disappear." The question is "which parts of roles stop being scarce." In high-wage economies, that shift hits faster.

5) Assume the transition will be messy
Even the optimistic paths contain instability. Amodei's essay is explicit that the upside is enormous and that the downside, if mismanaged, is civilisational in scale.

Messy does not mean hopeless. It means you plan for turbulence instead of linear adoption.
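The measurement habit in point 2 can be sketched in a few lines. The log format, the numbers, and the horizons below are all illustrative assumptions:

```python
# A minimal delegation log:
# (minutes a human expert would need, did it succeed end to end?)
records = [
    (10, True), (10, True), (60, True), (60, False), (360, False),
]

def success_rate(records, max_minutes):
    """Share of delegated tasks at or below `max_minutes` that succeeded."""
    outcomes = [ok for minutes, ok in records if minutes <= max_minutes]
    return sum(outcomes) / len(outcomes)

for horizon in (10, 60, 360):
    print(f"tasks <= {horizon} min: {success_rate(records, horizon):.0%} success")
```

Tracking this over time turns "can we delegate more?" from a feeling into a trend line.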

The only honest conclusion

If you want a clean takeaway, there is none.

What we have is a fast-moving capability curve, early evidence that the autonomy time horizon is compounding, credible warnings from people building the systems, and credible warnings from sceptics that leaders will misuse them.

That combination does not produce certainty.

It produces obligation.

Not moral obligation in the abstract. Practical obligation to stop treating "AI" as an opinion and start treating it as a planning variable.

The February 2020 feeling is not that something terrible is guaranteed.

It is that the comfortable pace of change is no longer ours to choose.
          
