today's managers will decide what work becomes
plus, spring Supermanagers cohort closes tonight
Enrollment for the next cohort of Supermanagers closes today. If you’ve been on the fence, this is the last live cohort until fall. (And if you can’t make the session timing, the build guides are designed to work on your own time too.)
I love teaching this course because we’re at a historical inflection point for knowledge work. A range of futures are on the table — some scary, some better than we can imagine — and I genuinely believe today’s managers will play the biggest role in deciding which one we get.
For all the handwringing about AI destroying jobs, I’ve seen very few practical answers to: “what would a company where AI doesn’t destroy jobs actually look like?” This surprises me, because I don’t think the answer is mysterious. I’ve worked with plenty of teams where AI enriches human-centered work, even as it transforms the shape of that work.
Some of the models I've seen work:
Agents that coach humans through learning a new skill
Deliberately designed Skills that evaluate and give feedback on human-created work, raising the quality bar across a team
Agent agencies: a human briefs, the agent produces, the human reviews and approves across rounds of feedback
Agents that create the first draft of work before handing off to humans who are responsible for the last mile
Or the inverse: a human sets the structure and writes the first 10% to establish the standard, then AI extrapolates from there
These produce better work than humans or robots alone. They also create a virtuous loop: the system improves the human’s work, the human’s work improves the system, and so on.
None of this is guaranteed. The default I see is wildly unimaginative people panic-slashing headcount and hoping AI gets smart enough fast enough to magically sort out the damage.
I understand the technical and economic pressures pushing toward that path. But I don’t find it convincing. Why would people (outside of those who stand to gain financially from AI proliferation) go along with this? The political backlash is already starting. Additionally, as I’ve written, there is competitive opportunity in keeping humans in the loop, even where it’s not technically necessary. Plus, humans are uncannily good at inventing new work for ourselves; we have been for centuries.
We know how the CEOs will behave. We know what the labs' stated goals are. The big question mark is the people in the middle: how will the managers actually tasked with galvanizing their teams around AI do it? There are hundreds of decisions buried in that work, and it will be their job to figure it out.
I’m not suggesting managers are going to successfully collectively bargain with the tech billionaires. But practically, work happens in the details — and it is wild to me how few people have a point of view on the details. If you’ve ever built a product, you know it lives or dies not in the executive mandate but in the pixels. AI at work is a product, and every detail — what the agent does, what the human does, where the handoff happens, what the output looks like — is a pixel.
Today’s managers have a historic opportunity to shape what work becomes. Those who understand the technology well enough to think creatively and ambitiously about it will chart the course.
If this excites you, join us.
xoxo,
hils
P.S. Scholarships are still available if the price is a barrier. Apply here. I especially welcome applicants from nonprofits.


