In 2015, Figma had been working on its browser-based design tool for three years. Dylan Field and Evan Wallace had rebuilt the rendering engine twice, rewritten the collaboration layer, and shipped features to a user base that could fit in a school gymnasium. By most conventional startup metrics, they were failing. By the metrics that actually matter, they were almost ready.
This is the question that destroys more promising companies than bad ideas or bad markets: how do you know when the product is done enough to scale, and when you’re just scared to find out if it actually works?
Figma’s answer is worth studying closely, because they got it right in a way that most teams don’t.
The Setup
Design tools in the mid-2010s were a mess of desktop software, file-sharing hell, and version conflicts. Adobe Illustrator and Sketch dominated, but both required files to live on local machines. Collaboration meant emailing PSDs around or fighting over a shared Dropbox folder. Field and Wallace believed the right answer was a design tool that lived entirely in the browser, with real-time multiplayer editing baked in from the start.
The technical problem was enormous. Rendering design software at 60 frames per second in a browser, with multiple users editing simultaneously, required custom WebGL work and a collaboration architecture that didn’t really have precedent in the design space. They weren’t just building a product. They were building the infrastructure the product needed to exist.
This is the first thing worth understanding about Figma’s pre-scale period: the iteration wasn’t procrastination. They were solving genuine technical constraints that would have made scaling pointless. If you put a thousand users on a tool that lagged at 10 frames per second and corrupted files during concurrent edits, you wouldn’t get a growth story. You’d get a reputation problem.
What Actually Happened
Figma launched publicly in 2016, four years after its founding. By that point, they had conviction about a few specific things: the rendering worked, the real-time collaboration worked reliably enough, and they had enough early designers using it to know which workflows it actually fit.
The decision to start scaling wasn’t a boardroom declaration. It was more like a scale finally tipping under accumulated evidence. The signals were concrete:
First, retention. The users they had were coming back and bringing colleagues. This matters more than almost any other early metric, because it tells you whether the product is solving a real problem or just satisfying curiosity. Figma’s early users weren’t churning. They were integrating the tool into how their teams worked.
Second, the marginal improvement curve flattened. There’s a version of iteration where each change makes the product meaningfully better for the use case you’re targeting. Then there’s a version where you’re polishing things that only matter once you have scale. Figma hit a point where the next round of improvements required real usage data from a larger pool of users to even identify correctly. You can’t know which edge cases matter until you have enough edges.
Third, the competition moved. Sketch was dominant but desktop-only. Adobe was slow. The window where Figma’s core technical advantage was defensible was open, but windows close. Field has talked about this period in various interviews: the sense that the product was good enough that waiting longer wasn’t making it better, it was just burning time.
They hired their first sales and marketing people. They went after design teams at larger companies. They made pricing decisions that prioritized adoption over revenue. And then they grew fast, eventually reaching a valuation that led Adobe to attempt a roughly 20 billion dollar acquisition, a deal abandoned in late 2023 under regulatory pressure.
Why This Matters
Figma’s story is useful precisely because it complicates the standard advice. You hear two things constantly in startup circles: “do things that don’t scale” (meaning, iterate manually and learn before you systematize) and “move fast” (meaning, don’t over-polish before you get market feedback). Both are true. They just apply to different phases of the same company.
The mistake most teams make is treating these as philosophy rather than tactics. They iterate forever because “do things that don’t scale” gave them permission to not confront whether the product actually works. Or they scale too early because “move fast” convinced them that any hesitation is cowardice.
Figma spent three years in the first mode because the technical foundation genuinely wasn’t ready. Then they switched modes decisively. The hard part isn’t knowing the two modes exist. It’s being honest about which one you’re actually in.
What We Can Learn
A few concrete things to take from this:
Retention is the only signal that matters for timing the shift. Not downloads, not press coverage, not revenue from your first ten customers (who might be distorting your roadmap in ways you haven’t noticed). If the users you have are staying and pulling in colleagues, you have something worth scaling. If they’re leaving or staying out of inertia, more iteration is the right call.
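Retention here is a measurable quantity, not a vibe. As a minimal sketch of what "staying" means in practice (the event-log format, function name, and data below are hypothetical illustrations, not anything Figma used), week-over-week cohort retention can be computed from a simple activity log:

```python
from collections import defaultdict

def weekly_retention(events, cohort_week):
    """Fraction of a cohort still active in each later week.

    `events` is a hypothetical log of (user_id, week_number) activity
    pairs; the cohort is the set of users active in `cohort_week`
    (a simplification -- a real pipeline would use first-seen week).
    """
    weeks_active = defaultdict(set)          # week -> set of active users
    for user_id, week in events:
        weeks_active[week].add(user_id)

    cohort = weeks_active[cohort_week]
    if not cohort:
        return {}
    return {
        week: len(users & cohort) / len(cohort)
        for week, users in sorted(weeks_active.items())
        if week >= cohort_week
    }

events = [
    ("a", 1), ("b", 1), ("c", 1), ("d", 1),  # four users active in week 1
    ("a", 2), ("b", 2), ("c", 2),            # three of them return in week 2
    ("a", 3), ("b", 3),                      # two return in week 3
]
print(weekly_retention(events, cohort_week=1))
# {1: 1.0, 2: 0.75, 3: 0.5}
```

A flat or rising curve after the first few weeks is the "staying" signal the paragraph above describes; a curve decaying toward zero means more iteration, not more scale.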
Know the difference between product iteration and infrastructure iteration. Figma’s early years weren’t spent fiddling with button colors. They were spent building a rendering engine that could support the collaboration model the product required. If your iteration is making the core experience work, that’s legitimate. If it’s optimization without a clear constraint driving it, you’re stalling.
The question isn’t whether the product is perfect. It never is. The question is whether scaling will teach you things that further iteration in isolation cannot. At some point, the only way to know what’s broken is to have enough users that the breaks become visible. Figma couldn’t fully validate their collaboration features without teams actually collaborating. They needed scale to finish the learning.
Competition sets a clock whether you acknowledge it or not. Figma was lucky that the incumbents were slow. Most teams aren’t that lucky. If you’re in a space where well-funded competitors are moving, the cost of over-iterating is losing the window. This doesn’t mean you should scale a broken product. It means the clock is real and pretending it isn’t is a form of wishful thinking.
The version of this story that doesn’t work out looks almost identical from the inside. A team that spent three years on technical infrastructure before scaling could just as easily be a team that burned its runway perfecting something the market didn’t want. The difference between Figma and that story isn’t the three years. It’s the retention signal they had before they shifted gears, and the honesty to read it accurately.
Most founders know intellectually when they’re past the point of productive iteration. The harder problem is that scaling is frightening in a way that iterating isn’t. Iterating keeps the question open. Scaling forces an answer. Figma scaled when they had enough evidence to believe the answer was going to be a good one. That’s the whole story, and it’s a harder standard to meet than most playbooks will admit.