Time to Hello World

In the age of vibe coding, getting a developer to their first API call is table stakes. The harder question is whether your documentation was written for the agent sitting between you and your user.


The CTO’s Edge · AI Strategy


Time to Hello World is not a new idea. It has been an informal but meaningful measure of platform quality for decades. It started in desktop software, where it measured how long it took a developer to install an SDK and get a sample app running. Then in early SaaS, where it measured how quickly a new user could reach the moment the product became real to them. Then in the API economy, where it became a proxy for developer experience: how fast could an engineer go from documentation to a working call?

Each platform shift redefined what Time to Hello World measured. The metric itself never went away. It just moved.

We are in the middle of another shift, and this one changes the metric in a more fundamental way than any of the previous ones. It doesn’t just redefine how fast Hello World happens. It splits the metric in two.

Recently I needed to build a technical integration from scratch. I am a technologist. I have written production code, I understand how APIs work, I know what a pipeline is. But I stepped away from hands-on development years ago, and I was not about to spend a week relearning Python syntax. So I sat down with an AI coding assistant, researched my options, chose a vendor, signed up for a trial account, got an API key, and had a working pipeline running in VS Code. All in a single session.

Here is the part that should make every product and engineering leader stop: once I had that API key, I never returned to the vendor’s website. Not once. Every question I had about configuration options, parameters, and implementation details went to my AI coding assistant. The vendor’s documentation still mattered enormously. I just wasn’t the one reading it.

That distinction is the whole story.

Vibe coding changed who your buyer is

For most of software history, API adoption was gated by developer bandwidth. If a business leader wanted to connect two systems, they filed a ticket, waited for a sprint slot, and hoped the engineer had time. The API vendor’s developer experience team focused exclusively on winning over senior engineers, because senior engineers were the only people making integration decisions.

That world is gone.

Vibe coding, the practice of describing what you want to an AI coding assistant and iterating on generated output without writing code yourself, has pushed API integration capability into the hands of product managers, operations leads, analysts, and executives. The non-technical operator who can articulate a business problem clearly can now ship working integrations. That is a profound shift in who is evaluating your platform, and how.

Vibe coding has gotten a bad rap. Most of the criticism targets a real but narrow failure mode: the developer who doesn’t understand what they’re building, shipping code they can’t debug, skipping fundamentals because the AI will sort it out. That is a legitimate concern. But it is not what vibe coding looks like when someone with domain expertise and technical literacy is in the driver’s seat, someone who can evaluate what the AI produces, recognize when it’s wrong, and apply judgment at every step. That version of vibe coding isn’t reckless. It’s leverage.

Your documentation is now a prompt. Your onboarding flow is now a conversation. The question is whether you designed it that way, or whether an AI is improvising on your behalf.

The decision of whether to adopt your platform may no longer be made by a developer evaluating your SDK. It may be made by a business leader who built a working proof of concept on a Tuesday afternoon and brought it to their CTO on Wednesday morning. Are you testing for that path?

The next evolution of Time to Hello World

The classic TtHW metric had one phase: how fast can someone get from zero to a working output? In the desktop era that meant installing a runtime and compiling a sample. In the SaaS era it meant signing up and reaching the product’s first meaningful moment. In the API era it meant reading the docs, getting a credential, and making a call that returned real data.

Each of those was a single, linear journey. Faster was better. Fewer steps was better. The metric was simple because the path was simple.

The vibe coding era breaks that linearity. There are now two distinct phases, and they require fundamentally different things from a vendor. Every API vendor should know their TtHW cold. Almost none of them do. And even the ones who track it are only measuring half the picture.

Phase one: getting to the API key. This is the traditional Hello World problem, now filtered through a more demanding lens. How many steps stand between account creation and a valid credential? Every verification email, credit card wall, approval workflow, and sales-required sandbox adds friction and dropout probability. The bar for vibe coders is ruthless: if getting started requires talking to a human, most evaluations end before they begin.

Phase two: everything after. This is the phase almost nobody is designing for, and it is where the real differentiation now lives. Once the API key is in hand, the vibe coder doesn’t return to your website. They don’t read your changelog. They don’t browse your feature documentation. They ask their AI assistant. “How do I change this configuration?” “What parameter controls that behavior?” “How do I handle this error?” Every one of those questions gets answered, accurately or not, by a language model working from whatever it knows about your platform.

If your documentation was written for human consumption, the AI will do its best with it. Narrative prose, marketing-inflected feature descriptions, examples that demonstrate capabilities rather than enable them. Sometimes that works. Often it produces plausible-sounding code that calls endpoints that don’t exist, passes parameters in the wrong format, or misses authentication requirements entirely. The vibe coder hits an error they can’t diagnose, and your platform gets blamed for a documentation problem.

Agent-first documentation is a different discipline

Writing documentation for agents as the primary consumer is not the same as writing good developer documentation. It requires a different set of design decisions.

Precision over narrative. Agents don’t need context or motivation. They need exact specifications. Parameter types, required versus optional fields, allowed values, default behaviors, and error codes need to be stated explicitly, not implied. A human reader will infer that a timestamp field probably expects ISO 8601 format. An agent may not.
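The difference is concrete. A prose sentence like "pass a timestamp for when the document was created" leaves the format to inference; a spec-style entry removes the inference entirely. A hypothetical fragment in OpenAPI style (the field names are illustrative, not any real vendor's API):

```yaml
# Hypothetical parameter spec — explicit where prose would rely on inference.
createdAt:
  type: string
  format: date-time            # ISO 8601, e.g. "2025-11-04T16:20:00Z"
  required: true
status:
  type: string
  enum: [draft, active, archived]
  default: draft               # behavior when omitted, stated rather than implied
```

An agent reading this never has to guess whether the timestamp is epoch seconds or ISO 8601, or what happens when `status` is left out.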

Complete, runnable examples. Fragments don’t work. An agent given a partial code example will complete it, and the completion may be wrong. Every example in your documentation should be a working call that produces a real response, not a scaffold that requires the reader to fill in the blanks.

Self-contained reference pages. Agents don’t browse. If answering a question about one endpoint requires context from three other pages, the agent may not connect them. Each reference page should contain everything needed to use that feature without cross-referencing elsewhere.

Error messages written for interpretation. When an AI-assisted integration fails, the error response goes directly back to the AI for diagnosis. “Invalid request” is useless. “The documents array must contain at least one item, and each item must include a documentId field” gives the AI, and by extension the vibe coder, a clear path to resolution.
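The two error styles, side by side, as a sketch (the error shape and field names are hypothetical, not any particular vendor's format):

```python
import json

def actionable_error(field: str, requirement: str) -> dict:
    """Build an error body an AI assistant can diagnose and act on."""
    return {
        "error": {
            "code": "invalid_request",
            "field": field,
            "message": f"The {field} {requirement}.",
        }
    }

# Vague — gives the agent nothing to work with:
vague = {"error": "Invalid request"}

# Actionable — names the field and the exact requirement:
clear = actionable_error(
    "documents array",
    "must contain at least one item, and each item must include a "
    "documentId field",
)
print(json.dumps(clear, indent=2))
```

The structured version costs a few extra lines on the server and saves an entire debugging loop on the client.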

There is an emerging standard worth knowing about here: llms.txt. Analogous to how robots.txt tells search engine crawlers what to index, llms.txt is a machine-readable file vendors publish at their documentation root that gives AI coding assistants a clean, structured map of the API surface, including endpoints, parameters, examples, and versioning, in a format optimized for agent consumption rather than human browsing. Stripe publishes one. Anthropic publishes one. Cloudflare publishes one. BuiltWith tracking suggested that hundreds of thousands of sites had implemented the standard by late 2025. Checking whether a vendor publishes an llms.txt file takes two seconds, and it tells you whether they have begun thinking seriously about agent-first documentation or whether they are still building for a world where humans do all the reading.
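The format itself is plain markdown: a title, a one-line summary in a blockquote, and sectioned link lists pointing at the pages an agent should read. A minimal hypothetical sketch (the product and URLs are invented for illustration):

```markdown
# Example API

> REST API for document processing. Base URL: https://api.example.com/v1

## Reference

- [Authentication](https://docs.example.com/auth.md): Bearer-token auth, key rotation
- [Documents endpoint](https://docs.example.com/documents.md): create, list, delete

## Examples

- [Quickstart](https://docs.example.com/quickstart.md): complete runnable first call
```

The whole point is curation: instead of crawling a marketing site, the agent gets a short, authoritative table of contents.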

The regulated-domain multiplier

In regulated industries like healthcare, legal services, and financial services, this dynamic carries additional weight. A vibe coder who misconfigures a general-purpose API wastes an afternoon. A vibe coder who misconfigures an API handling sensitive data in a production environment creates a compliance exposure. The vendor’s documentation, default settings, and error messaging carry real liability implications, not just adoption friction.

This means the agent-first documentation bar is actually higher in regulated domains. The vendor needs to guide a non-expert not just to a working integration, but to a secure one. Default to safe configurations. Surface compliance-relevant warnings in the reference, not buried in a separate security guide. Make the right path the easy path, because the AI will take the path of least resistance, and so will the operator following its lead.
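"Default to safe configurations" can be made concrete in the client library itself. A hypothetical sketch of a config object whose zero-argument construction is the compliant one, so the path of least resistance is the secure path:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ClientConfig:
    # Hypothetical settings — every default is the safe choice, so an AI
    # assistant (and the operator following it) gets a compliant setup
    # without asking for one.
    verify_tls: bool = True
    redact_pii_in_logs: bool = True
    retain_payloads: bool = False       # storing sensitive data is opt-in
    environment: str = "sandbox"        # production requires an explicit change

cfg = ClientConfig()  # zero-argument construction is the secure baseline
```

Making the risky options explicit opt-ins means the vibe coder has to notice, and name, each exposure they are accepting.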

This is now a due diligence question

If you are a PE operating partner evaluating a SaaS platform, or a CTO assessing a vendor for enterprise deployment, two questions now belong in your evaluation scorecard alongside price and feature parity.

First: what is your Time to Hello World for a non-technical user with an AI coding assistant? A vendor who cannot answer this has not measured it. A vendor who measures it and optimizes for it has internalized that adoption, not just purchase, is the product.

Second: has your documentation been designed for agent consumption, or just human consumption? Ask to see their API reference. Look for complete examples, precise specifications, and self-contained reference pages. Then paste a section into an AI coding assistant and ask it to write an integration. How accurate is the result? That test will tell you more than a vendor demo. And while you are at it, check whether they publish an llms.txt file. It takes two seconds and tells you immediately whether agent-first documentation is something they have invested in or something they have never considered.

The fastest path from contract signature to business value runs through your documentation. Vibe coding made that path visible. Agentic coding made your documentation the product.

What good looks like

The vendors winning this race share a few characteristics. Their getting-started guide is a single, linear page. Their authentication flow produces a working credential in under three clicks. Their reference documentation reads like a specification, not a brochure. Their error responses are written to be parsed and acted on, not just acknowledged. And when you paste their documentation into an AI coding assistant and ask it to build something, the result works.

Stripe has been the canonical benchmark for a decade, with documentation so precise and complete that it became a standard test case for how well language models can interpret API specifications. That is not an accident. It reflects years of investment in treating documentation as a first-class product artifact. Stripe already publishes an llms.txt file, as do Anthropic, Cloudflare, and a growing list of API-first companies that have begun internalizing the same insight: your documentation now has two audiences, and the AI audience is growing faster than the human one.

Time to Hello World has always rewarded the vendors who treated the first five minutes of the developer experience as a product problem, not a documentation problem. That instinct is still right. It just needs to extend past the first five minutes now, all the way through the integration, into every question the AI will ask on the operator’s behalf, for the entire lifecycle of the relationship.

I built a working integration last week without returning to the vendor’s website after signup. The AI handled every configuration question, every parameter lookup, every debugging step. The vendor didn’t know any of this was happening. But their documentation made it possible. Or it could have made it impossible. That is the lever they control, and most of them don’t know it exists.


Tags: AI strategy · API design · Vibe coding · Vendor evaluation · Due diligence