Author: Bri Wylde
If you’re like me and grew up thinking of AI as evil, sentient robots hellbent on destroying humanity, then contemplating the implications of vibe coding might seem like small potatoes. However, here we are. AI is no longer just the stuff of sci-fi; it’s being embedded in our daily lives, shaping industries, and transforming how we build software.
This year has seen a massive shift in the developer landscape, particularly at hackathons. Almost everyone is using AI in some way to build their projects, whether as a supplemental tool or to create entire applications from scratch. But while AI-assisted programming can be powerful, it’s not without its challenges. Navigating the sheer number of available tools, figuring out how to prompt, providing the right context, and working with AI’s quirks can frustrate experienced and novice developers alike.
This article breaks down some basic concepts, best practices, and resources to explore as you experiment, play, and develop on your vibe coding journey.
Vibe coding is a process where developers describe what they want in plain language, and a large language model (LLM) translates that into working code. There are several distinct ways for coders to interact with an LLM: they can ask it to locate specific features or components in a codebase, saving time when navigating large projects; they can collaborate with it to plan a build strategy, refining ideas through back-and-forth discussion before any code is written; or they can have it take a more active role, generating code and assembling the entire project.
When using AI to code, the developer’s role shifts from being just a programmer to also being a product manager, providing the context and instructions, guiding the project’s direction, and testing the outputs that the AI delivers. When done well, vibe coding speeds up development cycles, encourages creative experimentation, and makes coding more accessible to new programmers.
With those basics in mind, let’s look at some best practices for using AI in your workflow.
AI-assisted programming best practices
Prompting is the direct instruction you give an LLM to tell it what you want it to do. In AI-assisted programming, strong prompting is just as important as the coding itself. Picture an LLM as a brilliant developer who has never actually built anything before: they have the skills, but no context. Your job is to give them effective instructions so they know exactly what you want them to create.
To create a solid prompt, first eliminate ambiguity. Try not to leave the LLM guessing because it will guess, and it’s not afraid to be confidently wrong. Be explicit about your goals, requirements, and constraints, and include plenty of detail. It’s fine to think out loud, or even ramble some; AI models handle stream-of-consciousness surprisingly well, and your evolving train of thought can help the AI model reason through the request. Speak as you would to a human collaborator, and (pro tip) even try using speech-to-text to make it easier to generate rich, thorough prompts.
Second, be outcome-oriented. Tell the LLM what you want to achieve and why, rather than micromanaging the how. A clear end goal gives the model room to propose efficient solutions you might not have considered. Framing the request with a simple premise (e.g., “build a retro dungeon crawler”) and then filling it with specific requirements helps the LLM understand both where to begin and what you want the end result to be.
Finally, use keywords, specifications, and relevant context. In your prompt, state the programming language and style, and link to any repos, example contracts, or interfaces the LLM should reference before it starts generating code, so it’s working with the right information from the beginning. Keywords like “pixelated” or “minimalist” can set the creative tone for a project, while explicit instructions like “don’t make assumptions” or “avoid outdated code examples” tell the AI model what to avoid.
Don't do this:
Make a smart contract for NFTs
Try this instead:
Build a minimalist Stellar smart contract in Rust that implements an on-chain two-player Tic-Tac-Toe game on the Stellar blockchain. Store the 3x3 board in contract state, track turns by Stellar address, validate moves, detect wins/draws, and expose functions to start a game, make a move, and check status. Include a simple test suite and instructions to deploy with stellar-cli. Keep it self-contained with no dependencies beyond the Soroban Rust SDK. Reference: https://mcp.openzeppelin.com/, https://github.com/script3, https://github.com/soroswap.
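To see why the detailed prompt works better, it helps to make its requirements concrete. Here is the core game logic the prompt describes, sketched in plain Python rather than Rust (the function names and board encoding are illustrative, not the Soroban API; in the real contract, the board and turn tracking would live in on-chain contract state):

```python
# Illustrative sketch of the Tic-Tac-Toe logic the prompt above asks for.
# In the actual deliverable this would be a Soroban contract in Rust, with
# the board stored in contract state and players keyed by Stellar address.

EMPTY, X, O = 0, 1, 2

WIN_LINES = [
    (0, 1, 2), (3, 4, 5), (6, 7, 8),  # rows
    (0, 3, 6), (1, 4, 7), (2, 5, 8),  # columns
    (0, 4, 8), (2, 4, 6),             # diagonals
]

def make_move(board, player, cell):
    """Validate a move on the flat 3x3 board and return the new board."""
    if not 0 <= cell < 9:
        raise ValueError("cell out of range")
    if board[cell] != EMPTY:
        raise ValueError("cell already taken")
    new_board = list(board)
    new_board[cell] = player
    return new_board

def status(board):
    """Return 'x_wins', 'o_wins', 'draw', or 'in_progress'."""
    for a, b, c in WIN_LINES:
        if board[a] != EMPTY and board[a] == board[b] == board[c]:
            return "x_wins" if board[a] == X else "o_wins"
    if all(v != EMPTY for v in board):
        return "draw"
    return "in_progress"
```

Notice how every sentence of the prompt (board storage, move validation, win/draw detection) maps to a specific piece of this logic; that is exactly the kind of specificity that keeps the LLM from guessing.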
Provide the right context
Context is the background information you give an LLM so it can tailor its response to your project’s needs. Think of the prompt as the instructions, and the context as the reference material. You tell the LLM what to do in the prompt, and you provide the resources it needs to complete the task in the context. This might include your current working directory, relevant repos, conversation history, system instructions, and more.
Depending on the LLM, context can reset between conversations, and the AI model doesn’t inherently “remember” anything beyond what’s in the current context window. This limitation exists because LLMs are often isolated from most live data and tools. MCP (Model Context Protocol) servers solve this problem by acting as a bridge between the LLM and external resources. They can pull in missing or up-to-date information on demand: connecting to your Git repository for commit history, querying a database for schema changes, or fetching version-specific code examples. They can also provide the AI model with relevant, domain-specific knowledge from trusted sources, giving it access to specialized information for that specific interaction.
By augmenting the LLM’s working knowledge, MCP servers enable continuity across sessions and open up capabilities that wouldn’t be possible with prompts alone. We’ll explore types of MCP servers later, but for now, know they are foundational for effectively providing context in AI-assisted programming.
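Under the hood, MCP is a JSON-RPC 2.0 protocol: the client (your AI editor) sends the server a request such as `tools/call`, the server runs the named tool, and the result flows back into the model’s context. The toy dispatcher below is hand-rolled purely for illustration (real MCP servers are built with an official SDK and also handle initialization, `tools/list`, and a stdio or HTTP transport; the `get_latest_commit` tool is hypothetical):

```python
import json

# Toy MCP-style tool server: maps tool names to Python functions and
# answers JSON-RPC 2.0 "tools/call" requests. This sketch shows only the
# core idea; production servers use an official MCP SDK.

TOOLS = {
    # A tool-oriented server exposes actions like this (hypothetical) one.
    "get_latest_commit": lambda args: f"abc123 on branch {args['branch']}",
}

def handle_request(raw: str) -> str:
    """Dispatch one JSON-RPC request and return the JSON response."""
    req = json.loads(raw)
    if req.get("method") != "tools/call":
        return json.dumps({"jsonrpc": "2.0", "id": req.get("id"),
                           "error": {"code": -32601, "message": "method not found"}})
    params = req["params"]
    result = TOOLS[params["name"]](params.get("arguments", {}))
    return json.dumps({"jsonrpc": "2.0", "id": req["id"],
                       "result": {"content": [{"type": "text", "text": result}]}})
```

When the LLM decides it needs fresh data, the editor sends a request like `{"method": "tools/call", "params": {"name": "get_latest_commit", "arguments": {"branch": "main"}}}`, and the returned text becomes part of the model’s context for that turn.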
Pick the right tools
When using AI for programming, it’s easy to fall into model paralysis: a state of being overwhelmed by the number of available options. The key is to experiment and not get discouraged when a tool can’t one-shot a task. Mix and match based on the job, as each tool has unique strengths and specializations.
Here are some recommendations to help you find a strong starting point.
An AI code editor is a programming environment with advanced AI capabilities built directly into the workflow. Evolving from traditional IDEs, these tools are ideal for developers who want to use AI to enhance their coding process.
Some popular AI code editors are:
LLMs vary widely in speed, cost, context length, and reasoning ability, so the best choice depends on your specific use case. Many AI code editors already support multiple models, making it easy to switch as you work. If one model struggles with a task, simply try another.
Here are some popular LLM options and their differences:
These are just six of many LLMs out there. Set forth and see what other unique tools you can try!
MCP servers typically fall into two categories: tools and data. Tool-oriented MCP servers give your LLM the ability to take action in your development environment, like running tests, deploying contracts, signing transactions, or interacting with APIs. Data-oriented MCP servers feed your LLM specialized information it wouldn’t otherwise know. Knowing which type you need for a given task makes it easier to choose the right MCP server integrations to include in your setup.
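Most AI code editors wire MCP servers in through a small JSON config file. As a rough illustration (the exact file name, location, and keys vary by editor, and the server packages below are examples from the official MCP servers repository; check your tool’s docs), a setup mixing one tool-oriented and one data-oriented server might look like:

```json
{
  "mcpServers": {
    "github": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-github"]
    },
    "project-docs": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "./docs"]
    }
  }
}
```

Here the `github` entry gives the LLM actions to take (a tool-oriented server), while `project-docs` feeds it your local documentation (a data-oriented server).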
Here are some recommendations to try:
But as mentioned above, the most important thing is to…
A big mistake people make when first using LLMs is expecting the perfect answer after a single input, which seldom gives the desired result. Working with AI is like any other skill; you improve with experimentation. Start by using it for bite-sized projects or isolated parts of your workflow so you can test ideas without risking the whole build. And retry anything that didn’t work with different models, prompts, and contexts.
AI is going to be a part of programming’s future, whether you’re ready or not. The sooner you experiment, adapt, and build your own best practices, the better prepared you’ll be.
Ready to get vibe coding? Here are some other resources to get going:
Watch a vibe coding demo from Tyler van der Hoeven on YouTube: Learn Kalepail’s Secret Sauce for Getting AI to Work for Him
Ryan Carson, CEO, founder, and developer, has great insights on the state of AI in coding. Follow him on X.
Find and follow your favorite AI companies that build the tools you love: OpenAI, Anthropic, Gemini, Cursor, etc., and engage with their content!
And join the Stellar Developer Discord to showcase your own vibe coding projects!