Updated: Getting started with AI-assisted coding

➡️ If you're looking for my cursor-rules GitHub repo, it's here.

After experimenting with AI-assisted coding for over a year, I've seen the landscape evolve dramatically. When ChatGPT launched, teams rushed to integrate AI capabilities everywhere, with several VS Code extensions serving as ChatGPT interfaces. The limitation? VS Code lacked repository-wide context, severely restricting its AI capabilities.

Today, Cursor has emerged as the leading AI-enabled IDE, yet many engineers either haven't tried it or haven't optimized their setup. This guide aims to get those people started and help them become more effective with Cursor.

What's Cursor?

Cursor is an AI-first IDE built on VS Code's open-source codebase (the same base as VSCodium), preserving the familiar VS Code experience—settings, extensions, and themes—while adding integrated AI functionality. Think: ChatGPT in your IDE with complete project access, enabling better code understanding. It streamlines coding, refactoring, debugging, and documentation—though it won't be replacing any developers anytime soon.

The free hobby tier includes 2,000 completions, 50 slow requests, and a two-week Pro trial, which should be sufficient for you to try it out. For daily use, the $20/month Pro plan allows unlimited completions, 500 fast requests, and priority processing.

Never used Cursor before? No problem, watch this.

Treat AI As A New Member On Your Team

Working effectively with Cursor's AI requires a paradigm shift in how we approach coding assistance. Like new team members joining an established codebase, AI excels at specific tasks but can make critical mistakes when lacking sufficient context about the application architecture, business logic, and team conventions. Without proper guidance, it often tries to change too much at once, whereas experienced engineers would break changes into small, testable units across multiple pull requests.

Matt Welsh, former Google engineer, describes this mental model:

"Think of AI tools as enthusiastic but inexperienced interns—smart, eager to help, but needing clear guidance and careful review."

Effective developers establish boundaries with AI just as they would with new team members, using prompts like "Let's tackle this one function at a time" or "Show me just the changes needed for the authentication logic." Riley Goodside, known for his prompt engineering expertise, recommends establishing clear constraints: "Modify only the validateUser function without changing its signature" or "Refactor for readability, but maintain the existing error handling patterns." By applying the same onboarding approach we use with new team members—specific guidance, incremental tasks, and thorough code reviews—we transform AI from an occasionally frustrating tool into a consistently productive coding partner.

Think of your AI as a new coworker joining your project. Just as you would orient any new team member, you'll need to explain the repository structure, the languages and frameworks you're using, your coding standards and style guidelines, business context, and team conventions. The same thoughtful introduction you'd provide to help a new developer succeed is exactly what your AI needs to become an effective collaborator. This is where Cursor rules come into play.

Select The Right Model

Making a conscious decision about which model to use in Cursor dramatically impacts your development experience and productivity. Don't use Auto: different requests will be served by different models, and the responses will vary dramatically. Instead, choose a specific model and stick with it.

Screenshot: manually selecting the model to use within Cursor

Claude 4 Sonnet, Anthropic's latest model, excels at complex problem-solving with exceptional context understanding and nuanced code generation, delivering a balanced mix of speed and quality for everyday coding tasks, though at a higher token cost. This is generally my default. Meanwhile, OpenAI's o4-mini-high offers impressive coding ability with faster response times and lower token usage, making it ideal for rapid iterations and simpler tasks. GPT-4o sits between these options, providing decent performance with good reasoning abilities and respectable speed. Claude 4 Opus stands out for handling large codebases and architectural decisions.

LLMs receive frequent updates, with each new iteration delivering significant performance improvements. To keep up to date, check the coding ability leaderboard (maintained by Aider), which ranks the latest models on coding and refactoring.

The model you choose should align with your current needs—Claude models typically shine for deep reasoning and complex refactoring, while OpenAI models often provide quicker responses for routine coding assistance. Many developers switch between models depending on their task complexity, with some preferring Claude for architecture discussions and OpenAI's offerings for quick code snippets and corrections.

But keep in mind: changing the model, or using Auto, is akin to asking a different member of your team to help out with every request. Its memory and chain of thought will be different, and any knowledge you built up with your previously selected model won't carry over. Stick to the same model as much as possible for more predictable results.

Be Specific

In Cursor, using the @ context shortcut in chat is super important for focusing the AI on specific content—files, folders, or even just lines of code—so it doesn’t get lost in your repo. Simply type @ plus a path (for example, @src/utils/helpers.ts) and Cursor will fetch that file’s contents into your session.
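For example, a focused request might look like this (the file path and function names are hypothetical):

  Using @src/utils/helpers.ts, extract the duplicated retry logic in fetchUser and fetchOrders into a single withRetry helper. Don't change any other files.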

For more advanced use, you can target a particular Git branch or commit. Suppose you want Cursor to compare a new feature branch to your main branch—just write something like Compare @feature/login-flow/src/index.ts to @main/src/index.ts and Cursor will fetch each version for you. Or, if you need to inspect code as it stood in yesterday’s deploy, reference the commit SHA: Show me @ae3f4b2:/lib/server.js.

Rules To Code By

A Cursor rule file tells the AI how to behave when working with your codebase. It acts like a set of guardrails—defining naming conventions, folder structure, coding standards, or even framework preferences—so the AI generates code that fits your project's style and architecture. Each rule file lives in .cursor/rules, includes frontmatter to describe where and when it should apply, and contains structured guidance the AI reads before writing code. It’s how you turn Cursor from a generic assistant into a project-aware teammate.

You can write a rule in Cursor two ways: open the Command Palette with Cmd+Shift+P (Mac) or Ctrl+Shift+P (Windows), run New Cursor Rule, and Cursor will scaffold the file for you. Or, use my cursor-rules repo, start a chat, and ask the AI to write a new rule. One of the included rules teaches it how to write rules! Meta, but effective.

Here's an example template of a rule file:

---
description: ACTION when TRIGGER to OUTCOME
globs: src/**/*.ts
alwaysApply: false
---

# Rule Title

## Context
- When to apply it
- Anything needed beforehand

## Requirements
- Clear, testable items the AI should follow

## Examples
<example>
Valid code snippet, short explanation
</example>

<example type="invalid">
Bad example, explain why it’s wrong
</example>

## Critical Rules
- Boil it down to the do’s and don’ts

The globs field in the frontmatter tells Cursor which files the rule should apply to. Use standard glob syntax, without quotes. For example: src/**/*.ts, **/*.test.{js,ts}, or docs/**/*.md. If you want the rule to apply to every query, set alwaysApply: true.
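For instance, a rule that should only apply to test files (the description text is illustrative) might start with frontmatter like this:

---
description: APPLY Jest conventions when creating or editing test files to keep tests consistent
globs: **/*.test.{js,ts}
alwaysApply: false
---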

Using a numbered prefix in the rule filename helps categorize and prioritize rules clearly, especially as your rule files grow. It’s not mandatory, but it helps with organization.

  • 0XX for core standards
  • 1XX for tool configs
  • 2XX for framework-specific rules like React
  • 3XX for testing
  • 7XX for system rules (e.g., GitHub)
  • 8XX for workflows
  • 9XX for templates
  • _ for private or project-specific rules
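With that scheme, a rules folder might look like this (file names are purely illustrative):

  .cursor/rules/
    001-naming-conventions.mdc
    101-prettier-config.mdc
    201-react-components.mdc
    301-jest-testing.mdc
    801-pr-workflow.mdc
    901-rule-template.mdc
    _acme-project-notes.mdc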
💡 When working with multiple repositories, you may want to create your own cursor-rules repo and symlink the .cursor folder into your other repositories.
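A minimal sketch, assuming your shared rules repo is cloned at ~/code/cursor-rules and you're at the root of another repository:

  # Symlink the shared .cursor folder into the current repo (paths are assumptions)
  ln -s ~/code/cursor-rules/.cursor .cursor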

Avoid Non-Deterministic Language

Avoid non-deterministic language in cursor rule files (e.g., "use best practice," "where appropriate," "consider using"). This forces the AI to reinterpret guidance on each request, causing inconsistent results.

Instead, use specific, measurable criteria, as in the sketch after this list:

  • Replace "high test coverage" with "maintain 80% line coverage"
  • Replace "frequently" with "before each commit"
  • Replace "large objects" with "objects >10MB"

Use the AI to analyze rule files for non-deterministic language and specify concrete thresholds, frequencies, and decision criteria. This line in my Cursor rule file for creating cursor rules guards against this and will help you, and your AI, create better rules.

When The AI Fails

As you work with Cursor, you'll inevitably experience AI failures—sometimes subtly, sometimes spectacularly. It might misread the architecture, hallucinate imports, or produce code that looks right but breaks. These moments can be frustrating, especially when you're deep in flow. That's where a technique I call debug mode can be helpful. When invoked, it asks the AI to explain its reasoning, outline assumptions it made, and walk through the logic behind its decisions. Debugging this way can reveal where a prompt was ambiguous or where context was misunderstood—giving you the insight to refine the prompt or create a rule that guides the AI to the right solution next time.

---
description: Provide a concise debug summary when the user requests "debug mode" to surface the chain-of-thought, applied rules, and assumptions
globs: 
alwaysApply: true
---
# Debug Mode

## Context
- Triggered when the chat includes "debug mode", "debug mode on", "enable debug", or similar
- Intended for any AI reply (code or prose) where extra reasoning context is helpful to diagnose a request that failed

## Requirements
- MUST provide a concise debug summary labeled "Chain-of-thought" listing the steps taken.
- MUST include a labeled list "Applied Rules" of all rules that influenced the response.
- MUST include a labeled list "Assumptions" of all assumptions.
- MUST keep the debug summary under 10 sentences.

## Critical Rules
<critical>
- MUST give the user visibility into the full chain-of-thought.
- MUST be thorough when explaining reasoning.
- MUST identify any rules or previous guidance that was considered.
- MUST list all assumptions.
- MUST respond with "**DEBUG MODE**" heading while debug mode is active.
- MUST use debug mode for every subsequent request until it is turned off (i.e. "debug mode off", "disable debug").
</critical>
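In practice, an exchange might look like this (the failure and rule names are hypothetical):

  You: debug mode. The refactor you just made broke the login tests. What happened?

  AI: **DEBUG MODE**
  Chain-of-thought: read auth/login.ts, renamed validateUser, updated call sites but not the test mocks.
  Applied Rules: 001-naming-conventions, 301-jest-testing
  Assumptions: test mocks would be regenerated; no other modules import validateUser.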

Tip: Use Bash (Not PowerShell) As The Default Shell

If you're using Cursor on a Mac, the AI may try to use PowerShell as the default shell. If you want to use bash instead (avoid zsh; Cursor tends to miss when commands exit), add this to your User Settings JSON:

  "terminal.integrated.automationProfile.osx": null,
  "terminal.integrated.defaultProfile.osx": "bash",
  "terminal.integrated.shellIntegration.enabled": false,
  "terminal.integrated.defaultProfile.linux": "bash"

What's Next? MCP...

Once you've got Cursor rules shaping AI behavior in your repo, MCP (Model Context Protocol) takes it further—an open standard that connects your IDE's AI to APIs, databases, and external docs. Your AI can access GitHub, Jira, Confluence, Home Assistant, and more for richer context, and better answers.

While not required to start, MCP bridges the gap when your code alone can't describe the problem. Think of MCP servers like APIs, but for your AI—they pull in context from Jira tickets, Confluence docs, other repos, Figma designs, and more.
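As a minimal sketch, Cursor reads MCP server definitions from .cursor/mcp.json (per project) or ~/.cursor/mcp.json (global); the GitHub server package name and token placeholder below are assumptions for illustration:

  {
    "mcpServers": {
      "github": {
        "command": "npx",
        "args": ["-y", "@modelcontextprotocol/server-github"],
        "env": {
          "GITHUB_PERSONAL_ACCESS_TOKEN": "<your-token>"
        }
      }
    }
  }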

Picture this: Cursor pulls a Jira story from the current sprint, assigns it to you, reads the acceptance criteria, reads the related Confluence documentation, makes the code changes (with your review), auto-populates a GitHub PR, and moves the Jira story to "Ready for Review"—all from a single prompt.

Explore available MCP servers at https://github.com/awslabs/mcp and mcp.so.