Everyone talks about what AI gives you. Nobody talks about what you lose when your most productive collaborator can't push back with conviction.
There is a conversation happening right now in every founder community, on every Slack channel, in every podcast about the future of work. It goes something like this: AI agents will make you ten times more productive. You can build a company with one person. The age of the solo founder has arrived. What nobody mentions in those conversations is what it feels like at 11pm when you have shipped more code in a day than your last team shipped in a sprint, and there is nobody to tell.
I started building with AI agents seriously about eighteen months ago. By seriously I mean I restructured my entire workflow around them. I have agents that do reconnaissance, agents that write code, agents that audit that code, agents that plan, agents that remember. I have built something genuinely remarkable with them, a level of output that would have been impossible for a team of five a few years ago. And I would be lying if I said it did not sometimes feel like the loneliest thing I have ever done professionally.
The social texture of building companies is something we almost never talk about, because it seems secondary to the actual work. But it is not secondary. The arguments at the whiteboard at 2pm, the co-founder who sends you a voice message at midnight because they cannot sleep, the engineer who pushes back on your architecture because they have seen this failure mode before: these interactions are not just pleasant. They are load-bearing. They are where ideas get stress-tested. They are where you find out which of your convictions are real and which are just things you say. They are, in a very practical sense, how good companies get built.
AI agents do not push back with conviction. This is the thing I want to be honest about, because most writing on AI agents glosses over it. They can simulate disagreement. They can surface counterarguments when asked. But there is a difference between an agent that produces a list of potential objections because I told it to, and a co-founder who grabs your arm and says: wait, I think we are building this completely wrong. The latter comes from a different place. It comes from someone who has skin in the game, who has their own ego and reputation tied to the outcome, who is genuinely afraid you are going to fail. That fear is information. You cannot replicate it with a system prompt.
What happens to your decision-making when you remove that friction? I have noticed something in myself over the past year that I think is important and underreported. I move faster. I ship more. And I am also more likely to get deep into an idea before I discover it was wrong. The traditional collaborator serves as an external check on your internal narratives. They catch you early, when the correction is cheap. Without them, you sometimes catch yourself only after you have built the thing, only after you are emotionally invested, only after the cost of being wrong has gone up considerably. The productivity gains are real. But so is the validation debt you are accumulating.
I am not arguing that solo-with-AI is worse than team-with-humans. The math simply does not work out that way, at least not at the stage where most of us are building. A small team with bad dynamics, misaligned incentives, and the overhead of coordination will consistently underperform a focused founder with good AI tooling. I have been on both sides of that comparison and I know which one I prefer. But I think the honest framing is that you are making a trade, not escaping one. You trade the noise and friction and coordination cost of human teams for something quieter and lonelier and, in some ways, more dangerous to your own judgment.
The way I have responded to this is to become much more deliberate about seeking human input at specific points in the process. I share ideas earlier than feels comfortable. I publish thinking before it is polished. I schedule calls with other founders specifically to have my assumptions challenged: not to network, not to share wins, but to get someone to tell me what I am missing. In a strange way, building with AI has made me better at asking for help from humans, because I have had to be intentional about it in a way that used to happen organically. When you have teammates, you get challenged as a byproduct of collaboration. When you work alone, you have to design the challenge in deliberately.
There is also a deeper question here that I find myself sitting with more often than I expected. When you build with AI agents, the work gets done. It gets done well, often faster than you imagined. But the question of meaning is more complicated. Building a company is, among other things, a shared story. The people who built it with you carry that story. They remember the version that did not work, the pivot that felt wrong until it did not, the customer who changed how you thought about the product. AI agents do not carry stories. They carry context, which is related but not the same thing. Context lives in a file. Stories live in people. When the company is over, whatever form that takes, there will be a version of this where the only person who really knows what happened is you.
I do not say this to be grim. I say it because I think it points toward something real about how the best AI-augmented founders will build differently in the next few years. The ones who figure this out will be the ones who treat human collaboration as a scarce and precious resource, not a bottleneck to be optimized away. They will build in public more aggressively, not just for marketing reasons but because public building forces external perspective. They will cultivate small, high-trust networks of other founders with whom they share real information. They will invest in communities that generate genuine intellectual friction, because that friction is the thing their AI stack cannot provide.
The loneliness I am describing is also, I think, a signal worth paying attention to. When you feel isolated in your work, it is often because you are operating at the edge of what you can validate internally. It is a sign that you need more external input, not less. The founders who are going to struggle with AI-augmented building are the ones who treat that loneliness as a productivity problem to be solved with more automation. The ones who are going to do something genuinely interesting are the ones who treat it as a diagnostic. Something in the system wants external engagement. Build toward that, not away from it.
The future I actually believe in is not one where the solo founder replaces the team. It is one where the founder becomes a different kind of node in a network. Less executor, more architect. Less isolated in the problematic sense, and more deliberate about which human connections actually matter. The AI handles the execution layer. The human handles the meaning layer. And the work of figuring out where exactly that boundary lives is, I suspect, the most interesting design problem of the next decade. The tools are extraordinary. The real question is not whether you can build alone with them. The real question is whether building alone is actually what you want.