please stop generating business slop
Sharpen your judgment in the age of AI by asking the magic questions
The next cohort of my Maven Course, How to Become a Supermanager with AI, starts June 9. Spots are filling quickly, so enroll today.
Also, the new WHOOP experience is here and I couldn’t be more proud of what we’ve built. I’ll write about it soon, but in the meantime, get a free month on me at join.whoop.com/hilary.
A common criticism of AI is that it is responsible for “AI slop”: generic LinkedIn posts littered with emojis and em-dashes, Facebook images of celebrities melting or whatever, news articles that are supposedly written in English but reading them makes you feel like you might be having a stroke, etc.
This is happening at work, too. Business slop! Documents full of plausible-sounding but superficial ideas, slide decks full of buzzwords devoid of any actual insight, strategy documents that are full of data but don’t have a clear point of view.
I hear people blame AI for this. Because yes, it has never been easier to generate outputs. It’s very easy to ask ChatGPT to do a task for you, and it will do it, regardless of whether there is any critical thinking at play. And if you have poor judgment, you might just rip that output and send it into the world, because look, the magic box did your work for you!
But that’s not inherently an AI problem. I use ChatGPT about a million times a day, but I would hope it’s also clear how much care and thought I put into my work. Those things are not mutually exclusive.
But increasingly, good judgment is harder to come by. Developing good judgment used to be straightforward. You honed it slowly, through careful, deliberate practice: doing often tedious work, making mistakes, learning, and iterating.
AI makes skipping that work easy, and the fast-paced expectations of modern work make slowing down to build judgment seem like an indulgence or waste of time. So the challenge is: How do you rapidly develop high-quality judgment in a world where slowing down isn’t an option, and the temptation to outsource your thinking is everywhere?
GOOD NEWS. I am here with a simple answer: change the way you ask questions. Notice how often you ask open-ended questions, and try to replace them with this framing: “I think the answer is _____. Do you agree?”
“Do you agree?” is a magic question. So is “Is that right?”
Framing your questions as assertions that can quickly be validated (or falsified!) with a yes or no answer creates rapid feedback loops that sharpen your judgment.
The sloppening
My husband recently played me this excerpt from the Terminally Online podcast, which I found illustrative of the judgment challenge. This is a conversation between Claire Fogarty (a producer) and Jon Lovett (Obama’s former speechwriter):
Claire: I really intentionally try to resist AI in my life. I do not use AI tools at all…I went to college when ChatGPT came out. I watched all my classmates literally forget how to write. I was encouraged to use it in some of my classes, and so I did. And then, I was like, why would anyone want to read something that no one wanted to write? That’s the thing I keep coming back to.
Jon: I just think, if you were somebody that cared about photography and you didn't learn Photoshop, like it or not, you were left behind. If you don't know how to write or think, AI will not make you smarter and it will not let you produce great work. It's a tool. And if you can't use it to produce something good, because you're not starting from a place of clear, crisp thinking, it may not do much for you. You'll just produce AI slop. Because, by the way, you would produce human slop. But if you are able to think clearly and creatively, it then becomes something that might help you be even better.
Claire: In college, I spent a year engaging with AI and understanding how it could be helpful. But I talk to a lot of my fellow producers here, and I’m so much earlier in my career than them, and my brain still needs to be learning and doing this. If you listen to [my podcast], I do not use a single AI process to do it. I read every single article…I need to read the whole Atlantic article, write the questions, write the summary, because the people listening are not reading it. And that’s what my job is. You know? That’s why I’m here.
I am sympathetic to Claire, who seems very bright and earnest. But the solution isn’t to reject AI outright. Effective use of AI starts with strong, foundational critical thinking. Without that, AI amplifies mediocrity.
Using AI can rob people, especially young people, of the opportunity to develop critical thinking skills and good judgment. But it does not have to.
To protect your thinking from becoming sloppy, and to use AI effectively, you need to remain sharp in two areas:
Synthesis: how you translate inputs into a clear point of view
Validation: how you test and refine that point of view
The simplest, most tactical way to do this is by reframing how you ask questions: shifting from vague, open-ended questions to specific, testable assertions.
Good question
Here’s how I encourage my team to turn open-ended questions into magic questions:
Open-ended: “What do you think I should do?”
Falsifiable assertion: “Given X constraint and Y goal, I plan to do Z. Is that right?”
Open-ended: “What could I have done differently?”
Falsifiable assertion: “If I had escalated earlier, we could have avoided the miss. Going forward I’ll raise similar risks immediately. Do you agree?”
Open-ended: “I don’t understand our strategy. Can you explain it?”
Falsifiable assertion: “It looks like our biggest opportunity is X because Y; we’re deprioritizing Z for these reasons. Is that right?”
When you ask open-ended questions, you’re outsourcing your thinking and asking someone else to figure things out for you. But by making specific assertions, you own the thinking, and clearly show your logic, even if it’s flawed. Exposing flaws in your thinking may feel vulnerable and scary, but it is the fastest way to learn and get to the right answer. I am constantly trying to identify flaws in my own thinking.
Open-ended questions have their place, especially if you are still in the “synthesis” phase, like if you’re doing user interviews or discovery research. But most people stay in that mode far too often; something like 90% of their questions are open-ended. You will be far better off if you aim for 20% open-ended questions and 80% validating assertions.
Don’t let being wrong discourage you
Initially, using this technique might feel uncomfortable. You’ll likely be wrong more than you are right.
When I encourage people to start operating this way, they often get discouraged: “My manager says I should come with solutions, but then she tells me my solutions are wrong, so what’s the point?”
Being wrong is exactly the point. You will feel like you are failing because your recommendations will get shot down. But every quick “no” or clarification you receive is a rapid learning moment that you can use to calibrate your instincts.
Over time, these repeated iterations compound dramatically. If you treat these interactions as a data-gathering exercise, paying attention to whether you are getting more “yeses” than “nos” over time and what is driving those “yeses,” you will start more consistently making clear, well-founded statements that people trust. But it will only happen if you are going into them with a clear, falsifiable point of view. If you do not do that, you will simply become more and more reliant on someone else giving you the answer.
Don’t be afraid of “wasted effort”
It takes time and effort to develop a point of view, especially when you are relatively junior or new to an organization. It is not always obvious that that time is well spent. Sometimes you’ll spend a bunch of time wrestling with your brain in a Google doc or slide deck only for the meeting you were preparing for to get cancelled. Plans shift, and your work goes unused. I have spent countless hours agonizing over the specific layout of some slides that never saw the light of day, and I would always kick myself, like “why on earth did I waste so much time working on that?”
Don’t beat yourself up when this happens. Time spent clarifying your thinking, even on things that don’t pan out, builds your judgment. Seemingly pointless exercises in clearly laying out your assumptions, rationales, and proposals are actually practice that allows you to do this type of work faster and more reliably in the future. It’s an investment in your judgment, and these investments will become increasingly important as AI makes the short-term work of building that judgment appear less “necessary.”
Start desloppening today
The deluge of AI-generated slop won’t stop anytime soon. But that doesn’t mean you’re doomed to generate it yourself if you “give in” to the pressures of using AI. By shifting from vague questions to clear, testable assertions, you create tight feedback loops that rapidly strengthen your judgment.
Using AI can erode your critical thinking, but it does not have to. It can magnify your thinking instead, but only if you’re actively training your judgment in parallel.
Start today by paying attention to how often you ask open-ended questions, and instead ask the magic questions: “Is that right?” and “Do you agree?” Watch how quickly you move from uncertainty and slop toward clarity, confidence, and genuine mastery.
xoxo,
hils