I decided to leave Meta in November 2025.
I wasn’t let go. On the contrary, I was doing well, rising through the ranks and considered a top performer. My next promotion, to VP Product, was maybe 6-12 months away. My total compensation was an eye-watering amount.
I wasn’t leaving for a hot new startup. In fact, I was leaving without a clear answer to what my next company or position would be.
I described my decision like this:
I’m going to spend the next 12 months going deep on AI, learning by doing, and at the end of that period, I’ll decide whether to join an AI company or start one.
Some people thought I was nuts for walking away from this position. And maybe time will prove them right.
In this note, I want to capture my rationale for making this decision. With time, we’ll see what parts I got right and what parts I got wrong.
I had been at Meta for 8 years. My first 3-4 years were incredibly intense. It was hard for me to even believe it at the time, but during that period I was learning faster than at my first startup. My rate of learning and the people I surround myself with are often the top drivers in my decisions. At Meta (then Facebook), I found myself surrounded by a greater density of smart, driven people than anything I had experienced before. I saw people process information faster, speak more articulately, cut through ambiguity, make better decisions, and execute faster and sharper than me. I was being out-smarted and out-executed in nearly every meeting. It was humbling, sometimes crushing, but at the same time, addictive. I yearned for that pain because learning hurts, and I knew the pain was a signal of how fast I was developing. If I could just hang on, not let my mistakes or ego get the better of me, and focus on improving each day, doing the best work I could, I knew I was accelerating my own development.
Eight years later, I couldn’t say the same. I had transitioned from Senior Director to become one of just three IC9s at the company (IC9 = Level 9 Individual Contributor; Meta doesn’t label these levels publicly, but I think this might be one level above what other companies call Senior Staff). Where before I felt like the dumbest person in the room, dazzled by the intellectual and social abilities of the people around me, now I was the source of inspiration for a younger generation. Now I was the one getting called into the worst situations to cut through fuzzy tradeoffs and make harder decisions faster when other teams struggled.
There were incredible people at the company who I admired and learned from every time we interacted, but they were far fewer in number these days. I still don’t know whether this was because I had grown that much, or because Meta’s hiring standards had lowered over the years as we needed to hire more people, or because my ego had taken over and I had lost the humility I once had. In any case, the cause was less important than the effect: my learning and growth had stalled. And like a shark that stops moving, that couldn’t continue.
Truth be told, I was staying for the money. Where once I was driven by being surrounded by the best people in the world, learning and growing with them, now my personal development had stagnated, and I found myself justifying my decision to stay in purely financial terms.
I still enjoyed work some days. But it struck me that I was probably living in the days that future me would regret.
To leave was to give up a huge compensation. Like, really huge. You could halve it five times and it would still be six figures. This is what most people reacted to in my decision. How could I walk away from that?
I framed it to myself differently: I see that compensation as an extremely high floor, but at the same time a relatively low ceiling. I’ll explain.
Outside of Meta, my income floor is zero dollars. Maybe I’ll make some money in the next year, but I would never expect to match that compensation within the year. Compensation structures are designed precisely so that this incentive is effective. So by staying at Meta I was more or less guaranteed a very high minimum income.
On the flip side though, while at Meta, there is no real way for me to 10X or 100X that compensation, regardless of how much value I create. If I work day and night, do the best work of my life and unlock some revolutionary goodness with AI that creates billions of dollars, or I do just enough to hit the checkboxes in annual performance reviews, I’m getting roughly the same compensation at the end of the day. It was dulling my ambitions. I would reach a point in projects where I could see so much more room to push it, but there was no more ROI to take on that additional effort/stress/pain. I found that increasingly depressing.
Outside of Meta, though, my theoretical ceiling is practically limitless. It is purely a function of the value I am able to create and capture. That idea inspires, energizes, and incentivizes me to think really big.
I see the AI revolution more broadly than smart chat assistants. I see it as an inflection point akin to the introduction of the personal computer, or the smartphone. It is like a new type of computer. Software has been eating the world, but that has largely been Von Neumann architecture computers doing the eating - where the rules and logic need to be explicitly coded (mostly by hand by software developers). Now, with modern AI and the massive infrastructure buildout, we have neural architectures working at scale.
Software engineering is now more important than ever. It’s just that the value is no longer in knowing how to write the specific syntax. How we leverage machines to do work for us has changed forever. And just as I once taught myself to write code and build products with von Neumann machines, now I would need to learn how to make products with neural architectures.
Since 2022 my intuition was nagging me that I should be transitioning into a role where I could spend most of my time on AI, but I resisted the urge. By 2025 there was just no credible way to deny it anymore. The game had changed and I needed to change too.
I was armchair smart about AI. I had been following along for years, paying serious attention from around 2015, when ML models were self-discovering the concept of cats just by training on YouTube videos.
But 10 years on, I still had only the surface-level knowledge that comes from reading tweets, blog posts, and papers and watching YouTube videos.
I needed the tacit knowledge that comes from experience. I wanted to get a “feel” for how these new machines worked, and to understand the underlying drivers and be familiar with the constraints, opportunities and limitations of the new computers the world was inventing.
Tacit knowledge can only be acquired one way, and there really are no shortcuts: you just have to do the work. Yourself. That means spending time in the frustrating cycle of not understanding, trying and failing, and eventually making it work and realizing how stupid your earlier approaches were. That takes a non-trivial amount of time. 10,000 hours, some people say.
This was fundamentally incompatible with my job at Meta. Meta is an intense place to work. Expectations are high. Everyone has more work and problems to deal with than they could ever manage, even if they never slept (I’ve tried). Meta is also, by now, a big company. It has processes. It has politics. Reviews with leadership take days and sometimes weeks to prepare. Decisions get escalated when there’s nothing resembling a consensus. As a product manager, I couldn’t avoid that mess. I was compensated so much in part because my job was to bring those highly paid people together, get clear on the facts and tradeoffs, and align on good decisions so we shipped the most impactful outcomes for the business.
Before I left, I spent a few weeks evaluating the top AI roles I could take up across the company. I saw a lot of exciting long-term plans and met a lot of people I’d love to work with. But my assessment was that I would be learning at around 1% of the speed I knew I was capable of. My time would be shredded by meetings, preparing exec decisions, resolving escalations, figuring out how to transform the company into an AI-first workforce, recruiting, etc.
Maybe 1% would be OK in some situations, but I wasn’t OK with it when I set it against the exponential growth curves we’re seeing in AI progress. Those two did not play well together in my mind.
If I left Meta, it would be just me, my computer, and a desk. I could learn as fast as I could learn. I could learn anything I wanted. No meetings. No reviews. No interruptions. But I would be alone: no colleagues, no real direction on what I should learn, and no feedback that what I was learning was right. Staying at Meta, though, I wouldn’t be able to learn very much at all.
So the worst case scenario is I’m dead wrong. Is that a disaster? Well, not really.
I lose a year of income. I lose all my unvested RSUs. I don’t develop any conviction about starting a company.
But in that scenario, I will have a year of experience learning deeply about AI. I should have built a bunch of AI products and experiments that demonstrate what I’ve been up to. That sits atop an already decent resume. I would think that version of me should be quite hirable, hopefully by a group of people so sharp and driven that it makes me feel like it’s my first days at Meta again. In fact, I would bet that version of me can get a better job at an AI-first company than the version of me that stays at Meta for the next year.
And the best case scenario is I spend time learning, building, and I create some of the ambitious products I’ve been dreaming of for years.
All things considered, my napkin math said that over a timeframe of 5-10 years, the opportunity cost of staying at Meta now exceeded the value of staying.
Of course, it’s possible I’m just an AI-pilled, overconfident Homo sapiens who liberally applied every cognitive bias in the book to rationalize himself into a nonsensical conclusion. There’s definitely a non-zero chance of that.
But I’ve learned by now that what turn out to be excellent ideas in hindsight often don’t look that way at the time. They are muddy, at best. Some people will see them as clearly terrible ideas, and are baffled that anyone considers them otherwise.
The pattern I’ve seen is that these “dumb ideas” usually have some logical through line that could, in theory, make them good ideas. There’s just an absence of evidence for the key axioms in that theory, and in practice there are many more vivid or certain factors that make “high risk” or “dumb” the safer conclusion for any rational person to land on.
Like the explorers who sailed out in search of new lands, there is no path except to push forward through the uncertainty and prove to yourself which it is. Hopefully you can iterate as you go so it course-corrects into an excellent idea over time.
My decision to leave Meta fits this pattern.
The downside is relatively limited. The upside is potentially massive. I have a lot of degrees of freedom to iterate and optionality to exploit. And so in this case, I’m taking the bet, and I’ll learn from it either way.