Vibe Coding: Building Full-Fledged Apps Using AI Coding Tools

Madhura Bhat – Sr Product Manager at Microsoft

Vibe coding sounds like a meme at first. But it captures something real about how software is being built right now.

The phrase came from an AI educator who joked online that modern coding often looks like this:

“You ask some stuff, you see stuff, you run stuff, you copy–paste stuff, and you keep asking it to fix things until everything works.”

That playful description stuck. Today, vibe coding has grown into a serious way of working: using AI coding tools to guide you through building full-fledged applications, even if you don’t see yourself as a “hardcore” developer.

This isn’t about pretending to be an engineer overnight. It’s about learning enough of the foundations, combining them with AI tools, and stepping into a new kind of role: a generalist product builder who can take an idea all the way to a working app.

This piece walks through that journey using one concrete example: building a simple e-commerce app to sell homegrown avocados. Along the way, it covers:

  • What vibe coding really is
  • The landscape of AI coding tools
  • How to go from prototype to full app with databases and AI assistants
  • When and why to move from browser tools to local development
  • How specs and tests keep your AI-generated code from drifting into chaos

All of this comes from hands-on experience, with no theory added on top.

Key Takeaways:
  • AI coding tools make it possible to build real applications faster and with less technical overhead.
  • Starting in browser-based tools is useful, but local development unlocks long-term control and flexibility.
  • Clear, detailed prompts act like mini product briefs and dramatically improve build quality.
  • Specs and testing stabilize the product as features evolve and complexity increases.
  • Modern AI development shifts more people into hands-on builders rather than passive planners.

    From “I’m not a coder” to shipping an app

    Vibe coding becomes powerful when someone who doesn’t breathe code every day manages to ship something real.

    One example: an e-commerce web app for selling farm-grown avocados directly to families in a city. The app has:

    • A homepage with a clear story
    • Product listings
    • Cart and checkout
    • Order storage in a database
    • Order tracking
    • Payments
    • Secure authentication

    It isn’t a toy prototype. It’s a live app that has already been used by hundreds of people to actually buy fruit.

    The turning point was a realisation: AI coding tools can take non-traditional coders from “I can barely read code” to “I can build a production-grade app” if they:

    1. Learn how to prompt properly
    2. Understand enough of the tech stack basics
    3. Combine browser-based tools with local development
    4. Treat specs and tests as part of the build, not an afterthought

    The goal isn’t to replace engineers. It’s to blur the walls between roles – product, design, data, and engineering – and let more people actually build.

    What does vibe coding really look like?

    Vibe coding is not about magically producing perfect code in one prompt. It looks more like a conversation with a very talented but slightly chaotic junior dev:

    • You describe what you want
    • The tool scaffolds a project
    • You see what it produced
    • You run it
    • You spot what’s broken or off
    • You ask it to fix or refine things
    • Repeat until it behaves and looks the way you need

    You’re constantly steering, correcting, and tightening the brief. That steering is where product thinking, design sense, and basic technical literacy matter.

    The more context you give, the better the output. That’s where prompt design becomes “context engineering”.

    The AI coding tool landscape

    There isn’t one vibe coding tool. There’s a landscape, and each category plays a different role.

    1. Online/browser-based tools

    These run entirely in the browser. You don’t install anything on your laptop.

    Examples mentioned:

    • Lovable
    • Bolt
    • v0
    • Replit (a mix of browser and editor)
    • GitHub Spark

    You typically:

    • Log in
    • Describe the app you want
    • Let the tool scaffold the project
    • See a live preview in the browser
    • Refine via prompts

    For many MVPs, this alone is enough to get a working version in front of users.

    2. IDE extensions

    Here, you use a traditional IDE (Integrated Development Environment) like VS Code and then add AI on top.

    An IDE is essentially a workspace for programmers. It gives you:

    • A code editor
    • Tools to run and debug code
    • Integrations with version control (like Git)

    On top of that, you can install AI assistants such as:

    • GitHub Copilot
    • Augment Code

    They sit inside your editor, suggest code as you type, and can work across multiple files in your repository. This is closer to how developers work day-to-day, but AI makes the experience far less intimidating.

    3. Standalone AI-native editors

    These are full IDEs built around AI from the start.

    Example mentioned:

    • Cursor, which bundles an editor with its own AI “copilot”

    These tools can:

    • Read your entire repository
    • Perform multi-file edits
    • Understand structure and context
    • Help with debugging, refactoring, and planning

    They feel less like a plugin and more like a co-builder sitting in the same room.

    4. Command-line assistants

    These sit directly in your terminal.

    Example mentioned:

    • Claude Code

    From the command line, such tools can:

    • Interact with Git (commit, push, open PRs)
    • Write or update test suites
    • Review code changes
    • Help with deeper automation

    They’re especially powerful once you’re comfortable with local development.

    When to use what?

    A rough mental model from experience:

    • Just prototyping or learning?
      Browser-based tools are usually enough.
    • Want more control and understanding?
      Move into IDEs with extensions or Cursor. You’ll start to see the actual file structure, dependencies, and build steps.
    • Building something serious or long-lived?
      Combine AI tools, GitHub, and local development. Relying only on browser tools means your app is always stuck on someone else’s infrastructure with limited control and often strict prompt limits.

    If you stay only in browser tools, you’re operating at the surface. It works until something breaks or you want to add a feature those tools don’t handle well.

    Building an e-commerce app with AI coding tools

    Let’s walk through the avocado app example, because it packs in most of the important ideas.

    Step 1: A thoughtful first prompt

    You could say:

    “Create an e-commerce web app for selling avocados.”

    And the tool will do something.

    But a better first prompt looks more like a mini-product brief. For example, it can specify:

    • Goal & audience
      That it’s a simple, trustworthy way for local families to buy fresh avocados directly from a farmer.
    • Tech stack
      • Use React for the frontend
      • Use Tailwind CSS for styling
      • Make components responsive, accessible, and reusable
      • Avoid random external UI libraries
    • Design direction
      • Use earthy tones
      • Follow the color palette of avocados
      • Give it a modern but warm look

    That one detailed prompt already guides:

    • Information architecture
    • Visual style
    • Framework choice
    • Component structure

    It also saves precious prompt credits if you’re on a free or limited plan, because you avoid endless “make the button bigger”, “change the colours”, and “fix the layout” type follow-ups.

    Step 2: Understanding the basic stack

    Even with AI doing the heavy lifting, it helps to know what’s going on underneath.

    The app stack breaks down like this:

    • Frontend (browser side)
      • HTML – the skeleton (headings, buttons, structure)
      • CSS – the paint, layout, and visual polish
      • JavaScript – the interactivity (what happens when you click a button, how flows connect)
    • Backend (server side)
      • Handles business logic
      • Talks to the database and other services through APIs
    • Database
      • Stores records like orders, users, addresses, reviews
      • For example, tables for orders, payments, customer profiles

    If there’s only a frontend, it’s essentially a website.
    When you add backend logic and a database, it becomes a web app.

    On top of this, the stack in the example used some popular layers:

    • React – a framework that sits on top of JavaScript and makes it easier to build modern user interfaces with reusable components.
    • TypeScript – adds type checking on top of JavaScript, like a grammar checker that catches certain mistakes early.
    • Tailwind CSS – a utility-first wrapper around CSS that provides ready styling primitives, which makes it faster to get consistent design.

    You don’t need to master all of them to get started. But knowing what each layer does makes the AI’s output far less mysterious.
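To make the TypeScript "grammar checker" point concrete, here is a minimal sketch. The `formatPrice` helper is hypothetical, invented for illustration, not taken from the actual app:

```typescript
// A typed helper for an e-commerce UI. The ": number" annotation means
// TypeScript will reject a call like formatPrice("12") at compile time,
// before the bug ever reaches the browser -- plain JavaScript would only
// fail (or silently misbehave) at runtime.
function formatPrice(amountInCents: number, currency: string = "USD"): string {
  const amount = (amountInCents / 100).toFixed(2);
  return `${currency} ${amount}`;
}

console.log(formatPrice(499));         // "USD 4.99"
console.log(formatPrice(1250, "INR")); // "INR 12.50"
```

That compile-time check is exactly the kind of mistake-catching the "grammar checker" analogy refers to.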

    Step 3: Going from landing page to full user journey

    Once the initial version of the homepage exists, the next step is to ask the tool to design the full user journey:

    • Landing on the homepage
    • Browsing products (e.g., avocado boxes)
    • Adding items to the cart
    • Reviewing the cart
    • Checkout
    • Order confirmation
    • Order history and tracking

    Here, it helps to chunk the work into prompts instead of dumping everything in one monolithic request. For example:

    1. First prompt: scaffold the basic app and homepage
    2. Second prompt: extend it with the full e-commerce flow
    3. Third prompt: wire it to a database
    4. Fourth prompt: add authentication
    5. Fifth prompt: integrate an AI assistant

    This stepwise approach reduces hallucinations and broken flows. It also keeps you in control of what’s being added.

    Step 4: Adding a real database

    At this point, the app needs a place to store data. In the example, the tool integrated with Supabase, a Postgres-based database with relational tables.

    For an e-commerce app, that means tables like:

    • Orders
    • Customers
    • Addresses
    • Payments

    Each table has rows (entries) and columns (fields like order ID, amount, timestamp, customer email, and tracking number). Once wired up, every new order from the UI automatically creates a new row in the orders table.
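To make the rows-and-columns idea concrete, here is a hedged sketch of how a checkout might build the row it inserts. The column names (`customer_email`, `amount_in_cents`, `created_at`) and the `toOrderRow` helper are illustrative assumptions, not the real app's schema:

```typescript
// Hypothetical shape of one row in the "orders" table.
interface OrderRow {
  customer_email: string;
  amount_in_cents: number;
  created_at: string; // ISO timestamp
}

interface CartItem {
  name: string;
  priceInCents: number;
  quantity: number;
}

// Build the row that a new checkout would insert: one order, one row.
function toOrderRow(email: string, cart: CartItem[]): OrderRow {
  const amount_in_cents = cart.reduce(
    (sum, item) => sum + item.priceInCents * item.quantity,
    0
  );
  return {
    customer_email: email,
    amount_in_cents,
    created_at: new Date().toISOString(),
  };
}

// With the supabase-js client, the actual insert would look roughly like:
//   await supabase.from("orders").insert(toOrderRow(email, cart));

const row = toOrderRow("family@example.com", [
  { name: "Avocado box (6)", priceInCents: 1200, quantity: 2 },
]);
console.log(row.amount_in_cents); // 2400
```

Each checkout produces exactly one such row, which is why you can open the database view and watch orders appear as people buy.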

    Some tools bundle this into something like a “cloud” layer:

    • Database integration
    • Authentication
    • AI integrations

    The interface lets you see tables, inspect records, and confirm that user actions in the app are truly being recorded.

    GitHub as your code’s “Google Drive”

    When you build in a browser-only tool, your entire project lives inside that tool’s workspace. If that tool goes down or changes access rules, you’re stuck.

    That’s where GitHub comes in.

    You can:

    1. Connect your project to GitHub from the browser tool
    2. Let it push your code into a GitHub repository
    3. Treat GitHub as the source of truth going forward

    Think of GitHub as a Google Drive for code. Once your repo is there, you can:

    • Clone it to your laptop
    • Use any IDE you like (VS Code, Cursor, etc.)
    • Host it wherever you want later (Firebase, your own cloud, etc.)

    The flow looks like:

    1. Build the first version with a browser AI tool
    2. Push code to GitHub
    3. Pull from GitHub into your local environment
    4. Continue development with Cursor or VS Code plus AI extensions

    Now you’ve combined the speed of browser scaffolding with the power and control of local development.

    Moving to local development: why bother?

    Browser tools are a great on-ramp. But they hit limits:

    • Third-party integrations can be constrained
    • Advanced auth flows may be difficult
    • Custom UI tweaks can get expensive in prompt usage
    • Monthly costs can balloon if you run many prompts

    Most importantly: if you want to understand what’s actually happening and be able to debug confidently, you need to work locally at some point.

    Local development usually starts with three installs:

    1. An IDE – Cursor or VS Code
    2. Git – the command-line tool for working with GitHub (cloning, committing, pushing)
    3. Node.js – to run JavaScript/TypeScript web apps and start a local server

    Once those are installed, the basic workflow is:

    git clone <your-repo-url>   # bring code from GitHub to your laptop
    npm install                 # install all dependencies listed in the project
    npm run dev                 # start a local development server

    After npm run dev, you can visit something like http://localhost:8080 and see your app running on your machine.

    From there, your AI assistant in Cursor or VS Code can:

    • Edit specific files
    • Create new components
    • Refactor code
    • Help you wire new integrations
    • Explain errors in plain language

    And you’re no longer locked into what a browser tool supports.

    Debugging with AI and browser dev tools

    Nothing meaningful gets built without bugs.

    A practical pattern is:

    1. Open your app in the browser.
    2. Press F12 to open Developer Tools.
    3. Look at:
      • Console logs for JavaScript errors
      • Network tab for failed API calls

    If something breaks (say an image path is wrong or a component fails to load), you’ll often see a stack trace or clear error message there.

    You can then:

    • Copy that error text
    • Paste it into your AI assistant in Cursor or VS Code
    • Ask it to diagnose and fix the problem

    Sometimes it’s as simple as a mismatched file name. Sometimes it’s deeper. Either way, combining dev tools with an AI agent is much faster than staring at the screen in confusion.

    If things get too messy, you can roll back to a previous checkpoint or commit and start from a clean state.

    Spec-driven development: keeping the app grounded

    There’s a big risk with pure prompt-driven building: your project can drift.

    You keep asking the tool to “add this” and “change that” and “make this nicer”, and after 100 prompts, the original intent is blurry. Different parts of the app may even contradict each other.

    Spec-driven development is a way to anchor the work.

    The idea is to treat a spec as a first-class citizen in the repo, not a side doc that gets lost. A practical flow is:

    1. Constitution
      • High-level principles: design guidelines, accessibility expectations, privacy constraints, and compliance needs.
    2. Spec
      • Detailed functional requirements:
        • A user should be able to add an item to the cart
        • A user should see order confirmation after checkout
        • A user should be able to view past orders
    3. Plan
      • Architectural decisions and non-functional requirements: performance considerations, trade-offs, key components.
    4. Tasks
      • Breaking the spec into manageable units that either a human dev or an AI agent can implement one by one.
    5. Implementation
      • Only after the above do you start asking the AI to write code.
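One lightweight way to keep the task list usable by both humans and AI agents is to check it into the repo as structured data rather than loose prose. This is a hypothetical sketch, not part of any spec tool's format:

```typescript
type TaskStatus = "todo" | "in-progress" | "done";

interface Task {
  id: string;
  spec: string;        // which spec requirement this implements
  description: string;
  status: TaskStatus;
}

// Tasks derived from the spec's functional requirements.
const tasks: Task[] = [
  { id: "T1", spec: "cart",     description: "User can add an item to the cart",          status: "done" },
  { id: "T2", spec: "checkout", description: "User sees confirmation after checkout",     status: "in-progress" },
  { id: "T3", spec: "orders",   description: "User can view past orders",                 status: "todo" },
];

// The next unit of work for a human dev or an AI agent to pick up:
const next = tasks.find((t) => t.status !== "done");
console.log(next?.id); // "T2"
```

Because every task points back at a spec requirement, the implementation stays anchored to the shared understanding instead of drifting prompt by prompt.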

    There are tools like GitHub’s “spec kit” that help automate parts of this (generating constitution, spec, plan, and tasks from prompts), but the core idea is simple: don’t jump straight to implementation without a shared understanding of what you’re building.

    Testing: making sure things stay working

    Once an app starts to grow, relying purely on manual clicking around is risky. That’s where tests come in.

    You can think of them as a pyramid:

    1. Unit tests (bottom layer)
      • Test individual components or functions in isolation
      • Example: “Does the ‘Add to Cart’ button work as expected on the homepage?”
    2. Integration tests (middle layer)
      • Test how two or more components work together
      • Example: “After login, does the user actually see their profile and cart?”
    3. End-to-end (E2E) tests (top layer)
      • Test full flows from start to finish
      • Example: “From visiting the homepage to placing an order and seeing confirmation.”

    Tools like Playwright can help write these tests, and AI assistants are quite good at generating them if you ask.

    • “Write unit tests for the cart component.”
    • “Create an end-to-end test for the checkout flow using Playwright.”

    You still need to review what the AI produces, but once in place, these tests act like guardrails whenever you or the AI refactor things.

    Stepping into the generalist product builder era

    Vibe coding isn’t a gimmick. It’s a signal of where product building is headed.

    • Browser tools make it easy to get started
    • IDEs with AI assistants make deeper work approachable
    • GitHub and local development give you control and portability
    • Specs and tests keep the whole thing from spiraling out of control

    Product managers, designers, data folks, and anyone curious about building now have a path to ship real applications without waiting in line for engineering bandwidth every single time.

    It takes patience, repeated prompting, and a willingness to peek under the hood. But once you’ve seen a full app go from idea to live deployment with AI doing a huge chunk of the heavy lifting, it becomes very hard to see yourself only as a “non-coder” again.

    Frequently Asked Questions (FAQs)

    Can AI coding tools really build a full application?
    Yes. AI coding tools can scaffold, generate, refine, and debug full-stack apps, especially when paired with structured prompts and incremental iteration.

    Do you need a technical background to start?
    A basic understanding of frontend, backend, and databases helps, but the workflow is designed so you learn through building rather than studying theory first.

    Why move from browser tools to local development?
    Because local environments offer flexibility, integrations, ownership of code, and the ability to scale beyond the limits of web builders.

    How important are specs and tests?
    Very important: they prevent misalignment, regression, and unpredictability as the AI modifies or expands code.

    Should you connect your project to GitHub?
    Yes. GitHub acts as the source of truth and enables collaboration, rollbacks, and structured multi-agent workflows across local and cloud-based development environments.

    Author Bio

    Madhura Bhat is a Senior Product Manager at Microsoft on the Copilot Analytics team, which provides actionable insights to help companies understand and optimize their AI transformation.
