Madhura Bhat – Sr Product Manager at Microsoft
Vibe coding sounds like a meme at first. But it captures something real about how software is being built right now.
The phrase came from an AI educator who joked online that modern coding often looks like this:
“You ask some stuff, you see stuff, you run stuff, you copy–paste stuff, and you keep asking it to fix things until everything works.”
That playful description stuck. Today, vibe coding has grown into a serious way of working: using AI coding tools to guide you through building full-fledged applications, even if you don’t see yourself as a “hardcore” developer.
This isn’t about pretending to be an engineer overnight. It’s about learning enough of the foundations, combining them with AI tools, and stepping into a new kind of role: a generalist product builder who can take an idea all the way to a working app.
This piece walks through that journey using one concrete example: building a simple e-commerce app to sell homegrown avocados. Along the way, it covers:
All of this comes from hands-on experience, with no theory added on top.
Vibe coding becomes powerful when someone who doesn’t breathe code every day manages to ship something real.
One example: an e-commerce web app for selling farm-grown avocados directly to families in a city. The app has:
It isn’t a toy prototype. It’s a live app that has already been used by hundreds of people to actually buy fruit.
The turning point was a realisation: AI coding tools can take non-traditional coders from “I can barely read code” to “I can build a production-grade app” if they:
The goal isn’t to replace engineers. It’s to blur the walls between roles – product, design, data, and engineering – and let more people actually build.
Vibe coding is not about magically producing perfect code in one prompt. It looks more like a conversation with a very talented but slightly chaotic junior dev:
You’re constantly steering, correcting, and tightening the brief. That steering is where product thinking, design sense, and basic technical literacy matter.
The more context you give, the better the output. That’s where prompt design becomes “context engineering”.
There isn’t one vibe coding tool. There’s a landscape, and each category plays a different role.
These run entirely in the browser. You don’t install anything on your laptop.
Examples mentioned:
You typically:
For many MVPs, this alone is enough to get a working version in front of users.
Here, you use a traditional IDE (Integrated Development Environment) like VS Code and then add AI on top.
An IDE is essentially a workspace for programmers. It gives you:
On top of that, you can install AI assistants such as:
They sit inside your editor, suggest code as you type, and can work across multiple files in your repository. This is closer to how developers work day-to-day, but AI makes the experience far less intimidating.
These are full IDEs built around AI from the start.
Example mentioned:
These tools can:
They feel less like a plugin and more like a co-builder sitting in the same room.
These sit directly in your terminal.
Example mentioned:
From the command line, such tools can:
They’re especially powerful once you’re comfortable with local development.
A rough mental model from experience:
If you stay only in browser tools, you’re operating at the surface. It works until something breaks or you want to add a feature those tools don’t handle well.
Let’s walk through the avocado app example, because it packs in most of the important ideas.
You could say:
“Create an e-commerce web app for selling avocados.”
And the tool will do something.
But a better first prompt looks more like a mini-product brief. For example, it can specify:
That one detailed prompt already guides:
It also saves precious prompt credits if you’re on a free or limited plan, because you avoid endless “make the button bigger”, “change the colours”, and “fix the layout” type follow-ups.
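As an illustration, a first prompt shaped like a mini-product brief might read something like this (the details here are hypothetical, not the exact brief used for the avocado app):

```text
Build an e-commerce web app for selling homegrown avocados directly to
families in one city.

Pages: a homepage with a short farm story, a product page with price per
box and a quantity selector, a cart, and a checkout with delivery address.

Style: warm greens and creams, large product photos, mobile-first layout.

Constraints: keep the stack simple (React frontend), no login required
to place an order, and show a confirmation screen with an order number.
```

Notice that it covers pages, look and feel, and constraints in one pass, so the tool doesn't have to guess at any of them.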
Even with AI doing the heavy lifting, it helps to know what’s going on underneath.
The app stack breaks down like this:
If there’s only a frontend, it’s essentially a website.
When you add backend logic and a database, it becomes a web app.
On top of this, the stack in the example used some popular layers:
You don’t need to master all of them to get started. But knowing what each layer does makes the AI’s output far less mysterious.
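To make the layers tangible, here is a minimal sketch of all three in one file. The names are illustrative, not taken from the real avocado app; in a real project, each layer lives in its own place (React components, API routes, a hosted database):

```typescript
// "Database" layer: a stand-in for a real table (e.g. Postgres via Supabase).
const ordersTable: { id: number; boxes: number }[] = [];

// Backend layer: logic that validates input and writes to the database.
function placeOrder(boxes: number): { ok: boolean; id?: number } {
  if (boxes < 1) return { ok: false }; // reject empty orders
  const id = ordersTable.length + 1;
  ordersTable.push({ id, boxes });
  return { ok: true, id };
}

// Frontend layer: what the UI would call when the user clicks "Buy".
const result = placeOrder(2);
console.log(result); // { ok: true, id: 1 }
```

The frontend only collects input and shows results; everything that must be trusted (validation, persistence) happens behind it.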
Once the initial version of the homepage exists, the next step is to ask the tool to design the full user journey:
Here, it helps to chunk the work into prompts instead of dumping everything in one monolithic request. For example:
This stepwise approach reduces hallucinations and broken flows. It also keeps you in control of what’s being added.
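As an illustration (these prompts are hypothetical, not a transcript from the avocado build), a chunked sequence might look like:

```text
Prompt 1: Add a product page for the avocado box with price and photos.
Prompt 2: Add a cart that persists while the user keeps browsing.
Prompt 3: Add checkout with a delivery address form and a confirmation screen.
Prompt 4: Connect checkout to the database so each order is stored.
```

Each prompt builds on a working state, so when something breaks you know exactly which step introduced it.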
At this point, the app needs a place to store data. In the example, the tool integrated with Supabase, a Postgres-based database with relational tables.
For an e-commerce app, that means tables like products, orders, and customers.
Each table has rows (entries) and columns (fields like order ID, amount, timestamp, customer email, and tracking number). Once wired up, every new order from the UI automatically creates a new row in the orders table.
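A sketch of that wiring, in TypeScript: the column names below are assumptions for illustration, not the app's real schema, and the supabase-js call is shown as a comment because it needs a configured client and a live project.

```typescript
// Shape of one row in a hypothetical orders table.
type OrderRow = {
  customer_email: string;
  amount: number;        // order total
  tracking_number: string;
  created_at: string;    // timestamp column
};

// Shape the data the UI collected into the row the table expects.
function buildOrderRow(email: string, amount: number, tracking: string): OrderRow {
  return {
    customer_email: email.trim().toLowerCase(),
    amount,
    tracking_number: tracking,
    created_at: new Date().toISOString(),
  };
}

const row = buildOrderRow("Family@Example.com ", 12.5, "AVO-1042");

// With supabase-js (assuming a configured client), the insert would be:
//   await supabase.from("orders").insert(row);
```

The point is that "a new order creates a new row" is just a function call away once the table exists.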
Some tools bundle this into something like a “cloud” layer:
The interface lets you see tables, inspect records, and confirm that user actions in the app are truly being recorded.
When you build in a browser-only tool, your entire project lives inside that tool’s workspace. If that tool goes down or changes access rules, you’re stuck.
That’s where GitHub comes in.
You can:
Think of GitHub as a Google Drive for code. Once your repo is there, you can:
The flow looks like:
Now you’ve combined the speed of browser scaffolding with the power and control of local development.
Browser tools are a great on-ramp. But they hit limits:
Most importantly: if you want to understand what’s actually happening and be able to debug confidently, you need to work locally at some point.
Local development usually starts with three installs: Git for version control, Node.js (which includes npm) for running the project, and an AI-enabled editor such as VS Code or Cursor.
Once those are installed, the basic workflow is:
git clone <your-repo-url> # bring code from GitHub to your laptop
npm install # install all dependencies listed in the project
npm run dev # start a local development server
After npm run dev, you can visit something like http://localhost:8080 and see your app running on your machine.
From there, your AI assistant in Cursor or VS Code can:
And you’re no longer locked into what a browser tool supports.
Nothing meaningful gets built without bugs.
A practical pattern is:
If something breaks (say an image path is wrong or a component fails to load), you’ll often see a stack trace or clear error message there.
You can then:
Sometimes it’s as simple as a mismatched file name. Sometimes it’s deeper. Either way, combining dev tools with an AI agent is much faster than staring at the screen in confusion.
If things get too messy, you can roll back to a previous checkpoint or commit and start from a clean state.
There’s a big risk with pure prompt-driven building: your project can drift.
You keep asking the tool to “add this” and “change that” and “make this nicer”, and after 100 prompts, the original intent is blurry. Different parts of the app may even contradict each other.
Spec-driven development is a way to anchor the work.
The idea is to treat a spec as a first-class citizen in the repo, not a side doc that gets lost. A practical flow is:
There are tools like GitHub’s “spec kit” that help automate parts of this (generating constitution, spec, plan, and tasks from prompts), but the core idea is simple: don’t jump straight to implementation without a shared understanding of what you’re building.
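As an illustration (the contents are hypothetical), a minimal spec committed to the repo might look like this:

```text
# Spec: Avocado e-commerce app

## What we are building
A web app where city families order boxes of farm-grown avocados.

## Must have
- Browse the product, add to cart, check out with a delivery address
- Every order stored as a row in the orders table

## Out of scope (for now)
- User accounts, discounts, multiple products
```

A few lines like these are enough to settle most "wait, should it do X?" debates before they turn into 20 corrective prompts.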
Once an app starts to grow, relying purely on manual clicking around is risky. That’s where tests come in.
You can think of them as a pyramid:
Tools like Playwright can help write these tests, and AI assistants are quite good at generating them if you ask.
You still need to review what the AI produces, but once in place, these tests act like guardrails whenever you or the AI refactor things.
Vibe coding isn’t a gimmick. It’s a signal of where product building is headed.
Product managers, designers, data folks, and anyone curious about building now have a path to ship real applications without waiting in line for engineering bandwidth every single time.
It takes patience, repeated prompting, and a willingness to peek under the hood. But once you’ve seen a full app go from idea to live deployment with AI doing a huge chunk of the heavy lifting, it becomes very hard to see yourself only as a “non-coder” again.
Can AI coding tools really build a full app?
Yes. AI coding tools can scaffold, generate, refine, and debug full-stack apps, especially when paired with structured prompts and incremental iteration.
Do you need a technical background to start?
A basic understanding of frontend, backend, and databases helps, but the workflow is designed so you learn through building rather than studying theory first.
Why move from browser tools to local development?
Because local environments offer flexibility, integrations, ownership of your code, and the ability to scale beyond the limits of web builders.
How important are specs and tests?
Very important: they prevent misalignment, regression, and unpredictability as the AI modifies or expands code.
Is GitHub necessary in this workflow?
Yes. GitHub acts as the source of truth and enables collaboration, rollbacks, and structured multi-agent workflows across local and cloud-based development environments.
Madhura Bhat is a Senior Product Manager at Microsoft on the Copilot Analytics team, which provides actionable insights to help companies understand and optimize their AI transformation.