Vibe Coding with AI Agents: Best Practices for Using Cursor with Node.js and SashiDo - Part 1
AI-powered software development is entering a new phase - one where you're not just getting code suggestions or autocomplete. You're building entire features and even complete apps with minimal manual input. Welcome to vibe coding: a radically efficient way of working where you collaborate with intelligent agents that do the heavy lifting.
In this blog post series, I’ve been using Cursor - a cutting-edge IDE designed for agentic workflows - to quickly spin up full-stack Node.js applications on SashiDo. Our team has tested a bunch of different approaches to make sure this setup is smooth, stable, and easy to follow. Easy enough that a Product and Business Dev like me can follow it.
So instead of spending hours figuring things out on your own, you can just follow my lead and get building right away. I’ll walk you through what works best, what to watch out for, and how to get the most out of both Cursor and SashiDo - step by step, through the lens of a non-developer.
TABLE OF CONTENTS
WHAT IS VIBE CODING?
SETTING UP FOR VIBE CODING WITH CURSOR
ESSENTIAL CODING RULES AND PREFERENCES
INTERACTION AND EXECUTION MODES
BEST PRACTICES
- Keep Context Clean
- Keep Prompts Small and Targeted
- Testing: Integration Over Unit
- Bonus Tips for a Smooth Vibe Coding Workflow
WRITING A DETAILED SPEC AND HOW TO USE IT
What Is Vibe Coding?
Vibe coding - also called agentic coding - is using an AI agent (in tools like Cursor or Windsurf) to build your app with you: not just finishing your lines of code, but generating entire features, testing them, and even deploying. You're not typing everything out - you’re giving the AI instructions, watching it work, and stepping in to guide when needed.
Think of it like being a movie director. You’re not holding the camera or setting up the lights - you’re saying: “Make this scene happen. Use this style. Keep the tone consistent.” The AI? It’s your production crew, pulling it off.
With tools like Cursor and Windsurf, you're collaborating with agents that can read your spec, understand your rules, and build entire apps end-to-end. It’s not perfect, but when it clicks, it feels like magic.
Setting Up for Vibe Coding with Cursor
For the next steps, I’ll use a free Cursor plan (Cursor version 0.50.5, VSCode version 1.96.2) with the Claude 3.7 Sonnet models. These models support agentic behavior really well, including tool calling and function execution.
To Set Up a Model in Cursor:
- Go to Cursor settings
- Choose from the predefined models, or add any model you like via the “Add Model” button
- If you like, you can add a custom model by overriding the OpenAI API key with a custom one (e.g., from Groq) that uses the OpenAI API standard formatting. To do that, turn on the OpenAI API Key option, add your own key, and then override the base URL.
- You can only set one custom base URL at a time.
Regardless of your model choice, make sure it supports function calling and agent-style behavior. I used Claude 3.7 Sonnet Thinking - a hybrid reasoning model that supports both fast responses and deeper ‘thinking’ via its extended mode. You might not see a separate model called Claude 3.7 Sonnet Thinking in the dropdown menu (availability may vary based on your Cursor version or plan) - it’s actually a feature of Claude 3.7 Sonnet that enables deeper reasoning when needed. To activate extended thinking in Cursor, go to Settings → AI Models → select Claude 3.7 Sonnet as your model.
Use Rules - Seriously
Before actually pasting any specs and prompts into Cursor, it’s key to define your coding rules (especially as the codebase gets larger). Cursor and Windsurf both support them. In simple words, rules are a way to tell the AI agent how you want to code: which technologies it should use, which workflows to follow, and so on.
These rules are essentially system messages for your agent. You can set up:
- Global user rules (applicable to all your projects)
- Project-specific rules (apply only to a specific project, recommended)
They’re stored in .cursor/rules/ and written in .mdc files. You can reference those files as a system message for the AI to work with. The rules help prevent the AI from doing weird things, like switching your storage backend mid-session or building and debugging with technologies that are not within your stack.
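To make that concrete, here’s a minimal sketch of what a project rule file could look like. The frontmatter fields (description, globs, alwaysApply) reflect the MDC format recent Cursor versions use, and the filename is illustrative - double-check the exact schema against your Cursor version:

```markdown
---
description: Core coding rules for this project
globs:
alwaysApply: true
---

- Always prefer simple solutions.
- Check whether similar code already exists before writing new code.
- Stick to the agreed stack: Node.js v18, Parse Server on SashiDo, MongoDB.
- Only make changes that are requested or clearly related to the request.
```

Save something like this as .cursor/rules/general.mdc and the agent will treat it as standing instructions for every session in the project.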
Essential Coding Rules and Preferences
Here’s what I kicked off my journey with - the rules SashiDo’s team recommends adding to almost every project, all written as plain natural-language descriptions:
General Rules:
- Always prefer simple solutions. - Complex solutions often lead to more bugs, harder debugging, and longer development cycles. Simplicity ensures that the AI generates maintainable code you can reason about quickly.
- Avoid code duplication whenever possible and always check if similar code or functionality already exists. - The agent tends to add new code even when similar logic already exists. This leads to bloated codebases and harder-to-maintain functionality. The rule reminds the agent to check before generating redundant code.
- Use separate environments for Dev, Test, and Prod (see the sketch after this list). - Without this rule, the AI could accidentally mix environments, like writing tests that affect production data or using local files in production. This separation is key for stability and debugging.
- Only make changes that are requested or are clearly related to the requested changes. - The agent can sometimes fix one issue and change three unrelated things, breaking other functionality. This rule keeps the agent focused and prevents “collateral damage”.
- When fixing bugs, don’t introduce new patterns or technologies unless you've exhausted the current implementation, and if you do, remove the old code. - The AI sometimes “solves” problems by switching to a completely different tech (e.g., from SQL to JSON file storage). This rule forces it to stick to the original design unless a change is unavoidable, and prevents hybrid solutions that create confusion.
- Keep the codebase clean and organized. - AI-generated code can get messy fast. This rule helps ensure files are properly structured, logic is modular, and things stay readable over time.
- Avoid scripts and one-off files unless absolutely necessary. - The AI often leaves behind one-off test or debug scripts, cluttering the project. This rule tells the agent to clean up after itself or execute inline.
- Avoid long files (over 200–300 lines); refactor early. - Long files are harder for the AI to understand and refactor later, often leading to broken tests. Preemptive refactoring reduces complexity and keeps things modular.
- Avoid mock data in Dev/Prod, only allow it in test. - When something like a web scraper fails, for example, the AI can silently fall back to fake data and pretend things worked. This leads to false positives and hidden bugs in production.
- Never overwrite .env files or hardcode secrets without first asking and confirming. - The agent sometimes replaces or edits environment files during testing, breaking access to APIs or databases and forcing the user to regenerate keys.
- Stick to my stack: Node.js v18 backend, MongoDB v3.6, SashiDo backend, Parse Server v3.6.0, HTML/JS frontend, separate DBs per environment. - Without this rule, the AI has a lot of room to improvise - e.g., switching from SQL to local file storage - so be as specific as possible, versions included. This rule locks in tech choices and ensures consistency across the app.
Coding Workflow:
- Stay focused only on the code relevant to the task stated in each prompt and do not touch code that is unrelated to this task. - The agent would often wander into unrelated parts of the codebase, creating bugs or introducing inconsistencies. This rule keeps its attention where it's needed.
- Write thorough tests for all major functionalities. - We’re all human, and now and then, we can forget best practices like always requesting tests, and the AI wouldn’t write them by default. This rule ensures test coverage is always included.
- Keep the codebase clean and DRY. - The AI has a tendency to create redundant functions or reimplement logic it doesn’t realize already exists. Over time, this leads to duplication, inconsistent behavior across the app, and a messy codebase. By explicitly telling the agent to keep things DRY, you help it reuse existing code, maintain a single source of truth, and prevent conflicting logic or edge-case errors. This rule reinforces the need for maintainability and clarity, which is especially important in longer sessions where the AI might lose track of previous implementations.
- Avoid major changes to working patterns and the architecture of a feature after it has been shown to work well, unless explicitly instructed. - Sometimes the AI would rewrite an entire module or feature just to fix a small bug. This rule prevents unnecessary overhauls.
- Always consider the ripple effect of any code change and how changes might affect other parts of the code. - The AI doesn’t always grasp the full impact of a change. This rule forces it to think more holistically before editing core logic.
Those are the rules I’ll be adding at the project-specific level for the purposes of my experiment in building a full-stack app all by myself with the help of AI. You can always add more and give the agent as many instructions as you see fit.
Understanding why these preferences need to be set up front will hopefully save you a lot of time: the rules reflect tendencies these AI coding agents have that don’t really work all that well. Hopefully, with the advancement of AI, most of them will become obsolete, but until then, we’ll stick to the lengthy list and add more as needed.
Interaction and Execution Modes
When working with AI in Cursor, it's also helpful to understand that there are two layers of modes: interaction modes (how you talk to the AI) and execution modes (how much autonomy it has).
Interaction Modes
Cursor gives you several ways to interact or "talk" with the AI assistant, depending on what you’re trying to do:
- Edit Mode - Highlight any block of code and tell Cursor what to do - like "Change this code", "Optimize this function", or "Add error handling". It will rewrite that section based on your instructions. To enable it: Highlight → Right-click → “Edit with Cursor”, or Cmd+K / Ctrl+K
- Ask Mode - Use the Ask panel to pose open-ended questions, like "Explain or guide me to...", “What does this function do?”, or “How can I structure a REST API?”. The AI replies with suggestions or explanations, without changing your code. To enable it: Sidebar → “Ask Cursor”, or Cmd+Shift+K / Ctrl+Shift+K
- Autocomplete (Inline Suggestions) - As you write code, Cursor proactively offers completions - from full lines to entire functions. This is the default behavior, so suggestions pop up as you type (and can be configured in settings).
In a nutshell, Edit Mode is perfect when you know exactly what you want to change - quick refactors, bug fixes, or applying a known pattern; Ask Mode shines when you're exploring or need guidance - learning, debugging, or getting unstuck; and Autocomplete keeps you in flow as you build features or write boilerplate code. Choosing the right interaction mode helps you stay efficient no matter what your end goal is.
Execution Modes
When you ask Cursor to do something, especially through agents or multi-step operations, you decide how hands-on or hands-off you want to be.
- Manual - "Always ask before changing anything.” - The AI proposes changes, but pauses before applying them. You review and approve every action.
- Auto Mode - "Decide when to ask and when to act." - Cursor evaluates the risk level and either applies changes automatically or asks for confirmation if needed.
- Auto-Run Mode (also known as YOLO) - "Just do it, no questions asked." - Short for “You Only Live Once,” YOLO mode gives Cursor full control - it applies code edits, runs agents, and even triggers commands (like tests or deploys) without asking. Be mindful: this mode is not recommended for, and can be dangerous in, production environments.
You can enable or switch between Manual, Auto, and YOLO execution modes via: Command Palette (Cmd+P) → search for “execution mode”, or Settings → AI Agents → Execution Mode.
To sum it up, Manual mode gives you full control and peace of mind - perfect for sensitive or production code. Auto mode strikes a smart balance by letting Cursor handle low-risk tasks while still checking in when needed, which makes it suitable for iterative development or trusted workflows. And YOLO mode? It’s your go-to for fast iteration, early prototyping, experiments, or solo projects, but definitely not the one to trust near live systems. Choosing the right execution mode means moving fast when it’s safe, and staying cautious when it counts.
Best Practices
Keep Context Clean
One of the most important things to stay on top of when vibe coding is context management. Cursor has a limited context window, and as your session grows (adding more files, rules, and chat history), the AI’s performance starts to drop. It might miss important details, get confused, or behave unpredictably. I found that once too much context had built up, the agent’s ability to reason effectively degraded noticeably. That’s why it’s crucial to recognize when a session is getting too “heavy.” When that happens, the best move is to start a new chat, but keep in mind: you’ll need to manually reinsert your spec and rules because they don’t carry over automatically. You’ll also need to be mindful that the overall context from the previous chat is lost, so you might need to be more explanatory, even repetitive, in your initial prompts in the new chat.
Here are a few quick signs that Cursor is overwhelmed and you might need to start a new chat:
- Inaccurate Context Handling: Cursor may forget previous steps or misunderstand your intent, especially in long conversations or if multiple files are open.
- Delayed Suggestions or Freezes: A noticeable slowdown in response time or missing completions can mean Cursor is overloaded or your local setup is under strain.
- Empty or Incomplete Edits: Sometimes, you’ll trigger a refactor or ask for code and receive a partial or blank response. This is usually a sign the model hit a context or token limit.
- Vague Error Pop-Ups: You might run into generic “Something went wrong” messages or unexplained failures when trying to use Cursor’s features. These, similarly to the previous point, often indicate that the assistant hit a limit or got stuck behind the scenes.
- "Lost in the Sauce" Moments: If the AI starts generating code that clearly doesn’t fit your file, stack, or logic, it's likely confused and needs a reset.
- Endless Loading or Spinning Cursor: Sometimes Cursor just hangs, stuck on “thinking” without producing any output. If it’s spinning forever, it’s usually time to refresh or simplify your request.
Keep Prompts Small and Targeted
Equally important is to keep your prompts focused and narrow. Don’t ask the AI to do too much at once. Instead of piling on complex requests, stick to one small change at a time: fix a bug, add a feature, write a test. This bite-sized, incremental workflow helps maintain stability and gives you more predictable, testable results. I repeatedly emphasized in this tutorial how broader instructions often lead to the agent making unrelated changes, duplicating code, or breaking working functionality. Keeping the scope tight not only prevents that but also gives you better checkpoints to roll back to if something goes wrong.
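For instance, rather than one mega-prompt like “Add user accounts, fix the date bug on completed items, and restyle the list view”, I’d send each task as its own prompt - the tasks here are purely illustrative - and move on only once the previous change is verified:
Fix the bug where completed to-dos show the wrong date. Don't touch anything else.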
Testing: Integration Over Unit
Test as much as you can and have the agent write tests. I've found that end-to-end tests (where the agent clicks through real app flows) work better than unit tests. So write tests often and run them constantly. If a test fails, have the agent fix the test, but watch it closely. Sometimes it’ll “fix” the test by rewriting real functionality. Make sure it understands the intent, and if needed, go into the app and test that functionality yourself.
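As an illustration, here’s a minimal sketch of the kind of integration test I’d ask the agent to write - it exercises a real API flow rather than a single function. Jest and Supertest are my choice for the sketch (not prescribed by this setup), and the /api/todos endpoint and app.js module are assumptions for the example:

```javascript
// todos.integration.test.js - a hedged sketch using Jest + Supertest.
// Assumes an Express-style app exported from app.js with a /api/todos endpoint.
const request = require('supertest');
const app = require('./app');

describe('To-Do API flow', () => {
  it('creates a todo and then finds it in the list', async () => {
    // Step 1: create a todo through the real HTTP endpoint
    await request(app)
      .post('/api/todos')
      .send({ title: 'Write integration tests' })
      .expect(201);

    // Step 2: fetch the list and confirm the new todo is actually there
    const list = await request(app).get('/api/todos').expect(200);
    const titles = list.body.map((todo) => todo.title);
    expect(titles).toContain('Write integration tests');
  });
});
```

Because the test goes through the same route the UI would call, a “fix” that quietly rewrites real functionality is much more likely to show up as a failure here than in an isolated unit test.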
Bonus Tips for a Smooth Vibe Coding Workflow
To keep your vibe coding sessions stable and recoverable, it’s crucial to commit often. Frequent commits let you roll back cleanly using Git if something breaks beyond repair. But even beyond version control, Cursor itself tracks chat history, and you can restore earlier checkpoints with a single click. This built-in safety net makes experimentation much less risky.
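In practice, the rhythm looks something like this - the commit message and hash are placeholders:

```bash
# Commit after every working increment
git add -A
git commit -m "Add to-do creation endpoint (agent-written, tests passing)"

# If the agent breaks something beyond repair, find the last good commit...
git log --oneline
# ...and roll back to it (use with care - this discards later changes)
git reset --hard <last-good-hash>
```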
Another bonus tip is to run multiple branches in parallel, especially when the AI is slow or tackling large changes. You can spin up multiple Cursor windows, let each agent work on a different task or feature branch, and later merge everything together.
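One way to set that up - sketched here with illustrative branch and directory names - is git worktree, which gives each Cursor window its own working copy on its own branch:

```bash
# A separate working copy (and branch) per Cursor window / agent task
git worktree add -b feature/todo-crud ../app-todo-crud
git worktree add -b feature/auth ../app-auth

# When a branch is done, merge each one back into main, one at a time
git checkout main
git merge feature/todo-crud
git merge feature/auth
```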
Finally, when quality matters, favor “Thinking” models like Claude 3.7 Sonnet Thinking. While they’re a bit slower, the tradeoff in accuracy, context awareness, and reasoning is usually worth it - especially for complex logic or larger codebases. These habits might seem small, but they make a big difference when working with autonomous agents.
Writing a Detailed Spec and How to Use It
Now that you’re familiar with the main best practices of working with Cursor, I recommend writing a detailed technical spec. And since in this experiment we keep things simple and non-developer friendly, I used SashiDo’s Mobile App Feature Planner GPT to help write it. For example, I’ll prompt:
Write a spec for a simple To Do app. Be as specific as possible. We're going to be using Node.js and SashiDo as a backend.
It’ll output technical specs, database schema, API endpoints - everything you need. Then, I copy that and paste it into a new Cursor window and prompt the agent with:
Build this based on the spec above.
You can add instructions like “Start with setting up the folder structure and database schema” if you want a specific entry point.
That’s it. The agent will begin planning and coding according to your spec. Just remember: the first step sets the tone for the entire session - be clear, complete, and intentional with your input.
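To give you a feel for what comes back, here’s a short illustrative excerpt of the kind of spec the Planner might produce - this is my own sketch, not actual GPT output, and the class and endpoint names are just examples:

```markdown
## Data Model
- Todo: title (String, required), done (Boolean, default false), owner (Pointer<_User>)

## API Endpoints (Parse REST, via SashiDo)
- POST /parse/classes/Todo - create a to-do
- GET /parse/classes/Todo - list the current user's to-dos
- PUT /parse/classes/Todo/:objectId - update title or done
- DELETE /parse/classes/Todo/:objectId - delete a to-do
```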
Final Thoughts
Vibe coding isn’t just a trend - it’s quickly becoming a smarter, faster way to build real-world apps. With the right setup, clear rules, and focused prompts, working with agents in Cursor can feel less like typing code and more like directing a build crew... and for non-developers like me - like pure magic.
This post was Part 1 of our hands-on blog post series, where I share all the best practices for collaborating with Cursor effectively while experimenting with the tool and continuously picking my Dev colleagues’ brains. In Part 2, we’ll roll up our sleeves and actually build a full-featured To-Do app together using SashiDo as the backend, applying everything I covered here in a real-world scenario. I'll share my unfiltered journey: what broke, what worked, and how I shipped an MVP on SashiDo using AI.
Stay tuned - and if you haven’t already, try writing your first spec with SashiDo’s Mobile App Feature Planner GPT and loading it into Cursor. It's not perfect yet, but this is the worst these tools will ever be. They’ll only get better from here, and you might be surprised how much you can build without actually writing any code.