An Introduction to Vibe Coding

Author: Bri Wylde

If you’re like me and grew up thinking of AI as evil, sentient robots hellbent on destroying humanity, then contemplating the implications of vibe coding might seem like small potatoes. However, here we are. AI is no longer just the stuff of sci-fi; it’s being embedded in our daily lives, shaping industries, and transforming how we build software.

This year has seen a massive shift in the developer landscape, particularly at hackathons. Almost everyone is using AI in some way to build their projects, whether as a supplemental tool or to create entire applications from scratch. But while AI-assisted programming can be powerful, it’s not without its challenges. Navigating the sheer number of available tools, figuring out how to prompt, providing the right context, and working around AI’s quirks can frustrate experienced developers and newcomers alike.

This article breaks down some basic concepts, best practices, and resources to explore as you experiment, play, and develop on your vibe coding journey.

But first, a bit about this vibe coding business…

Vibe coding is a process where developers describe what they want in plain language, and a large language model (LLM) translates that into working code. There are several distinct ways for coders to interact with an LLM: they can ask it to locate specific features or components in a codebase, saving time when navigating large projects; they can collaborate with it to plan a build strategy, refining ideas through back-and-forth discussion before any code is written; or they can have it take a more active role, generating code and assembling the entire project.

When using AI to code, the developer’s role shifts from being just a programmer to also being a product manager, providing the context and instructions, guiding the project’s direction, and testing the outputs that the AI delivers. When done well, vibe coding speeds up development cycles, encourages creative experimentation, and makes coding more accessible to new programmers.

With those basics in mind, let’s look at some best practices for using AI in your workflow.

AI-assisted programming best practices

The art of prompting

A prompt is the direct instruction you give an LLM to tell it what you want it to do. In AI-assisted programming, strong prompting is just as important as the coding itself. Picture an LLM as a brilliant developer who has never actually built anything before: they have the skills, but no context. Your job is to give them effective instructions so they know exactly what you want them to create.

To create a solid prompt, first eliminate ambiguity. Try not to leave the LLM guessing, because it will guess, and it’s not afraid to be confidently wrong. Be explicit about your goals, requirements, and constraints, and include plenty of detail. It’s fine to think out loud, or even to ramble a bit; AI models handle stream-of-consciousness surprisingly well, and your evolving train of thought can help the model reason through the request. Speak as you would to a human collaborator, and (pro tip) try using speech-to-text to make it easier to generate rich, thorough prompts.

Second, be outcome-oriented. Tell the LLM what you want to achieve and why, rather than micromanaging the how. A clear end goal gives the model room to propose efficient solutions you might not have considered. Framing the request with a simple premise (e.g., “build a retro dungeon crawler”) and then filling it with specific requirements helps the LLM understand both where to begin and what you want the end result to be.

Finally, use keywords, specifications, and relevant context. In your prompt, state the programming language and style, and link to any repos, example contracts, or interfaces the LLM should reference before it starts generating code, so it’s working with the right information from the beginning. Keywords like “pixelated” or “minimalist” can set the creative tone for a project, while explicit instructions like “don’t make assumptions” or “avoid outdated code examples” tell the AI model what to avoid.

Don't do this:

Make a smart contract for NFTs

Try this instead:

Build a minimalist Stellar smart contract in Rust that implements an on-chain two-player Tic-Tac-Toe game on the Stellar blockchain. Store the 3x3 board in contract state, track turns by Stellar address, validate moves, detect wins/draws, and expose functions to start a game, make a move, and check status. Include a simple test suite and instructions to deploy with stellar-cli. Keep it self-contained with no dependencies beyond the Soroban Rust SDK. Reference: https://mcp.openzeppelin.com/, https://github.com/script3, https://github.com/soroswap.

Providing context

Context is the background information you give an LLM so it can tailor its response to your project’s needs. Think of the prompt as the instructions, and the context as the reference material. You tell the LLM what to do in the prompt, and you provide the resources it needs to complete the task in the context. This might include your current working directory, relevant repos, conversation history, system instructions, and more.
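
As a toy illustration of that split, here’s a minimal sketch in Python using the OpenAI SDK, where the messages carry the prompt and the file you read in is the context (the model name, file path, and review task here are placeholders, not something prescribed by any tool):

# pip install openai
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in your environment

# Context: the reference material the model needs for the task.
contract_source = open("contracts/tic_tac_toe.rs").read()  # hypothetical path

# Prompt: the instructions themselves.
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": "You are a careful Rust reviewer. Don't make assumptions."},
        {"role": "user", "content": "Review the move-validation logic in this Soroban contract and list any bugs:\n\n" + contract_source},
    ],
)
print(response.choices[0].message.content)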

Depending on the LLM, context can reset between conversations, and the AI model doesn’t inherently “remember” anything beyond what’s in the current context window. This limitation exists because LLMs are often isolated from most live data and tools. MCP (Model Context Protocol) servers solve this problem by acting as a bridge between the LLM and external resources, pulling in missing or up-to-date information on demand: connecting to your Git repository for commit history, querying a database for schema changes, or fetching version-specific code examples. They can also provide the AI model with relevant, domain-specific knowledge from trusted sources, giving it access to specialized information for that specific interaction.

By augmenting the LLM’s working knowledge, MCP servers enable continuity across sessions and open up capabilities that wouldn’t be possible with prompts alone. We’ll explore types of MCP servers later, but for now, know they are foundational for effectively providing context in AI-assisted programming.
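
To make that concrete, here’s a minimal sketch of a custom MCP server built with the official Python mcp SDK’s FastMCP helper; the server name, tool, and git-history example are illustrative, not a prescribed setup:

# pip install mcp
import subprocess

from mcp.server.fastmcp import FastMCP

# A tiny MCP server exposing one capability: recent git history,
# so the LLM can see the current state of a project on demand.
mcp = FastMCP("repo-context")

@mcp.tool()
def recent_commits(repo_path: str, limit: int = 10) -> str:
    """Return the last `limit` commit summaries for the given repository."""
    result = subprocess.run(
        ["git", "-C", repo_path, "log", f"-{limit}", "--oneline"],
        capture_output=True, text=True, check=True,
    )
    return result.stdout

if __name__ == "__main__":
    mcp.run()  # serves over stdio, which is how most editors launch MCP servers

Once a server like this is registered in your editor’s MCP settings, the model can call recent_commits whenever a prompt needs fresh repository history.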

Mix-n-match tools

When using AI for programming, it’s easy to fall into model paralysis: a state of being overwhelmed by the number of available options. The key is to experiment and not get discouraged when a tool can’t one-shot a task. Mix and match based on the job, as each tool has unique strengths and specializations.

Here are some recommendations to help you find a strong starting point.

AI code editors

An AI code editor is a programming environment with advanced AI capabilities built directly into the workflow. Evolving from traditional IDEs, these tools are ideal for developers who want to use AI to enhance their coding process.

Some popular AI code editors are:

  • Cursor is an AI-powered code editor built on VS Code that integrates AI models directly into the development workflow. It can generate, refactor, and explain code, run commands, and integrate with external tools via MCP servers.
  • GitHub Copilot is an AI coding assistant developed by GitHub and OpenAI that integrates into editors like VS Code. It suggests code, writes functions, and offers contextual help based on your current project.
  • Lovable is a development platform that integrates with LLM providers like OpenAI, Anthropic, and Groq, and uses React, Tailwind CSS, and Vite on the frontend, enabling users to create full-stack websites through natural language.
  • Replit is a browser-based IDE that lets you code, deploy, and collaborate, using AI to generate, refactor, and build full apps from any device with an internet connection. It’s cloud-hosted and beginner-friendly, requiring no setup.
  • Windsurf is a desktop IDE that deeply understands your codebase, enabling instant previews, inline edits, deployments, and automated linting without leaving the editor. Its AI agents are hyper-contextual and deeply integrated with your working code.
  • Zed is a fast, Rust-based code editor focused on real-time collaboration and AI-driven workflows. Its emphasis lies in performance, openness, and allowing developers to fully steer AI interventions through powerful, visible prompts and editable context.

LLMs

LLMs vary widely in qualities like speed, cost, context length, and reasoning ability, so the best choice depends on your specific use case. Many AI code editors already support multiple models, making it easy to switch as you work. If one model struggles with a task, simply try another.

Here are some popular LLM options and their differences:

  • Claude Sonnet offers a strong balance of speed, cost, and reasoning ability. It is ideal for most coding, writing, and research tasks where quick iteration and affordability matter more than absolute peak performance. Sonnet is good for getting something done well enough, fast, and affordably.
  • Claude Opus is the top tier, optimized for the highest accuracy and depth of reasoning, even on very complex or nuanced problems. It’s best for in-depth research or situations where maximum quality outweighs speed and cost.
  • OpenAI GPT-5 delivers high-end reasoning, coding, and multi-step task performance with flexible modes to balance speed and depth. It’s best for projects where accuracy and complex thinking matter more than cost.
  • OpenAI o3 is OpenAI’s strongest reasoning model, excelling in STEM, logic, and problem-solving. It’s slower and pricier, but ideal if you need the highest reasoning quality.
  • Gemini 2.5 combines multimodal capabilities and a huge context window with strong general reasoning. It’s cost-efficient, making it a good choice for large-context, cross-media tasks at speed.
  • Grok 4 offers competitive reasoning and real-time search with a conversational style. It’s well-suited for exploratory or research-driven work.

These are just six of many LLMs out there, so set forth and see what other models you can try!


MCP servers

MCP servers typically fall into two categories: tools and data. Tool-oriented MCP servers give your LLM the ability to take action in your development environment, like running tests, deploying contracts, signing transactions, or interacting with APIs. Data-oriented MCP servers feed your LLM specialized information it wouldn’t otherwise know. Knowing which type you need for a given task makes it easier to choose the right MCP server integrations to include in your setup.
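
Extending the earlier sketch, both categories can live in the same Python mcp SDK server; the cargo test command and local docs folder below are illustrative assumptions, not part of any particular product:

# pip install mcp
import pathlib
import subprocess

from mcp.server.fastmcp import FastMCP

mcp = FastMCP("project-helper")

# Tool-oriented: takes an action in the development environment.
@mcp.tool()
def run_tests(package: str) -> str:
    """Run one package's test suite and return the combined output."""
    result = subprocess.run(["cargo", "test", "-p", package],
                            capture_output=True, text=True)
    return result.stdout + result.stderr

# Data-oriented: feeds the LLM reference material it wouldn't otherwise have.
@mcp.resource("docs://{page}")
def read_doc(page: str) -> str:
    """Serve a page from the project's local docs folder."""
    return pathlib.Path("docs", f"{page}.md").read_text()

if __name__ == "__main__":
    mcp.run()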

Here are some recommendations to try:

  • Context7 is a data-based MCP server that pulls up-to-date, version-specific code examples and documentation from real sources (packages, repos, docs, etc.) directly into prompts. It acts like a documentation knowledge base across many libraries and frameworks, helping prevent outdated or hallucinated answers.
  • Perplexity is a data-based MCP server that connects AI models to web search capabilities for real-time, web-wide research to deliver relevant information to users.
  • DeepWiki is a data-based MCP server that gives LLMs programmatic access to auto-generated, structured documentation for public GitHub repositories. It enables targeted retrieval and contextual Q&A.
  • Playwright is a tool-based MCP server that enables browser automation, letting an AI model open pages, fill forms, click buttons, run tests, and make design tweaks using structured accessibility data rather than pixel-based input.
  • Cloudflare is a tool-based MCP server that allows AI models to interact with external services and data sources, enabling the LLM to perform actions like sending emails, deploying code, or accessing information from the internet. It also offers a data-based documentation MCP server.

But as mentioned above, the most important thing is to…


Get out there and experiment!

A big mistake people make when first using LLMs is expecting the perfect answer after a single input, which seldom gives the desired result. Working with AI is like any other skill; you improve with experimentation. Start by using it for bite-sized projects or isolated parts of your workflow so you can test ideas without risking the whole build. And retry anything that didn’t work with different models, prompts, and contexts.

AI is going to be part of the future of programming, whether you’re ready or not. The sooner you experiment, adapt, and build your own best practices, the better prepared you’ll be.

Ready to get vibe coding? Here are some other resources to get going:

Watch a vibe coding demo from Tyler van der Hoeven on YouTube: Learn Kalepail’s Secret Sauce for Getting AI to Work for Him

Ryan Carson, CEO, founder, and developer, has great insights on the state of AI in coding. Follow him on X.

Find and follow your favorite AI companies that build the tools you love: OpenAI, Anthropic, Gemini, Cursor, etc., and engage with their content!

And join the Stellar Developer Discord to showcase your own vibe coding projects!