Cursor Workshop By Shaikh Siraj
Summary:

This is a summary of the Cursor Workshop session by Shaikh Siraj, Senior Software Developer at LinkedIn, which explores the functionality and usage of Cursor, an AI-powered coding agent built on a development environment like VS Code. Key topics include the request-based pricing model, which charges per interaction; the concept of tokens and context windows in large language models; the available agent models (e.g., GPT-4, Claude) and operating modes (Agent, Plan, Ask); and how the agent can interact directly with codebases, perform complex tasks like building web pages, and integrate with external services via MCP (Model Context Protocol) to access repositories and documentation. The session also emphasizes the importance of efficient, precise prompting and highlights best practices and rules that can be set within the Cursor environment to streamline workflow and improve productivity.
Briefing Document: An Analysis of the Cursor AI Development Environment
Executive Summary
This document synthesizes an in-depth analysis of Cursor, an AI-powered code editor designed to augment developer productivity. Built as a “wrapper over VS Code,” Cursor integrates advanced AI agents directly into the integrated development environment (IDE), enabling them to read, write, and understand entire codebases.
The platform’s business model is a critical factor, operating on a request-based subscription where each interaction with the AI consumes a finite monthly allotment. This places a premium on user efficiency and strategic prompting. Key functionalities include a selection of powerful AI models (e.g., GPT-4.5, Claude), distinct interaction modes (Agent, Plan, Ask), and sophisticated context management features.
A standout innovation is the Model-Context Protocol (MCP), which empowers the AI agent to interact with external data sources and services like GitHub, Jira, and Figma, vastly expanding its operational scope beyond the local repository. While Cursor can deliver significant productivity gains—estimated by the speaker to be as high as 3x—it is not a replacement for foundational engineering skills. The speaker emphasizes that human oversight and active code review are essential to prevent subtle, agent-induced errors from reaching production. The tool is best understood as a powerful assistant for skilled developers, not a fully autonomous programmer.
1. Core Concepts of the Cursor Platform
1.1. Architectural Overview
Cursor is an AI-native code editor that functions as a sophisticated layer on top of the familiar Visual Studio Code (VS Code) interface. Its primary innovation is the direct integration of an AI “Agent” within the IDE, accessible through a dedicated chat panel. This gives the AI full access to the project’s file structure and codebase, eliminating the need to manually copy-paste code into an external chatbot. The speaker describes it as a “wrapper over VS Code” whose makers “invented this particular chat and … integrated the AI.”
1.2. The Request-Based Economic Model
A central theme is Cursor’s usage-based pricing, which differs from typical flat-rate subscriptions. This model has significant implications for how the tool is used.
- Request Allotment: Enterprise-level subscriptions provide a fixed number of requests per month (e.g., 500).
- Cost Per Interaction: Every message sent to the agent, from a simple “hello” to a complex build command, consumes at least one request. Echoing Sam Altman’s remark about the cost of pleasantries, the speaker warns that even a casual “hi” or “hello” burns a request.
- Variable Cost: More powerful AI models, particularly those designed for complex reasoning (“thinking models”), can consume more requests per interaction (e.g., 2x the base cost).
- Overage: Once the monthly allotment is exhausted, users can continue making requests up to a specified dollar limit (e.g., $25–$35), which provides an additional pool of requests.
This economic model forces users to be highly efficient and deliberate in their interactions with the AI.
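The arithmetic behind this model can be sketched in a few lines. The numbers and the 2x “thinking model” multiplier below are illustrative assumptions drawn from the talk, not Cursor’s actual billing logic:

```python
# Illustrative sketch of a request-based allotment.
# The figures and the 2x multiplier are assumptions from the talk,
# not Cursor's real pricing engine.
def requests_remaining(monthly_allotment: int, interactions: int,
                       cost_per_interaction: int = 1) -> int:
    """Each message consumes at least one request; 'thinking' models may cost 2x."""
    used = interactions * cost_per_interaction
    return max(monthly_allotment - used, 0)

# 500 requests/month, 30 interactions with a 2x-cost thinking model:
print(requests_remaining(500, 30, cost_per_interaction=2))  # 440
```

At two requests per interaction, a 500-request allotment supports only 250 interactions a month, which is why the speaker stresses making every prompt count.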
1.3. AI Models and Context Windows
Cursor provides access to a variety of AI models, each with distinct capabilities, costs, and context limits.
- Available Models: The platform includes models from major providers, such as GPT-4.5 (OpenAI), various Claude models (Anthropic), and Gemini (Google).
- Specialized Use Cases: The speaker has developed a personal preference based on performance:
- GPT Models: Best for high-level “thinking,” system design, and planning.
- Claude Models: Excel at code implementation.
- Context Window: This refers to the amount of information an AI model can hold in its “memory” at one time. It is measured in tokens, which roughly correspond to words.
- A 100k token context window can process approximately 70,000 words.
- A 1 million token context window can process approximately 700,000 words.
- Context Management: As a conversation progresses, the context window fills up. Once the limit is reached, the agent can no longer process new information in that thread. To manage this, the agent continuously summarizes the conversation to retain key information more efficiently.
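The token-to-word arithmetic above can be expressed as a quick back-of-the-envelope check. The 0.7 words-per-token ratio matches the figures in the talk; real tokenizers vary by model and language, so treat this as a rough heuristic:

```python
# Back-of-the-envelope context-window math, using the talk's ratio of
# roughly 7 words per 10 tokens. Real tokenizers differ per model.
def approx_words(context_tokens: int) -> int:
    # Integer math avoids float rounding: ~7 words per 10 tokens.
    return context_tokens * 7 // 10

def fits_in_context(text: str, context_tokens: int) -> bool:
    """Crude whole-word check; ignores code, punctuation, and tokenizer quirks."""
    return len(text.split()) <= approx_words(context_tokens)

print(approx_words(100_000))    # 70000
print(approx_words(1_000_000))  # 700000
```

This is why long threads eventually stall: once the running conversation approaches the window size, the agent must summarize before it can take in anything new.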
2. Key Features and Functionality
2.1. Interaction Modes
Cursor offers several modes to control how the user interacts with the AI agent, tailoring its behavior to specific tasks.
| Mode | Functionality | Use Case |
| --- | --- | --- |
| Agent Mode | The default mode, in which the AI directly interprets prompts and makes changes to the codebase. | General-purpose coding, refactoring, and feature implementation. |
| Plan Mode | The AI first generates a detailed, step-by-step plan of action before executing any code changes. The user can review and edit this plan. | Complex tasks requiring oversight. Described as “handholding” for the agent, which is likened to a “junior engineer.” |
| Ask Mode | The AI functions as an explanatory chatbot. It answers questions and explains code but makes no changes to the files. | Code comprehension, learning, and general programming questions. |
2.2. Model-Context Protocol (MCP)
MCP is a pivotal feature that allows the Cursor agent to “talk to other outside people,” breaking free from the confines of the local repository. It acts as an integration layer for external services.
- Purpose: To provide the agent with context and data from external platforms and tools.
- Supported Integrations: The platform has pre-built MCPs for services like:
- Jira (project management)
- Figma (UI/UX design)
- Notion (documentation)
- GitHub (code hosting and collaboration)
- Custom MCPs: Companies can build their own MCPs for internal services, allowing the agent to interact with proprietary code repositories and documentation.
- Practical Example: A user can provide the agent a link to a Jira ticket. The agent uses the Jira MCP to access the ticket, understand the bug report or feature request, and then formulate a solution within the codebase.
2.3. Context and History Management
Cursor includes several features to manage the conversational context, which is crucial given the limitations of context windows.
- @past chats Feature: When a new chat is started (losing the previous context), this feature allows a user to explicitly import the summarized context from a prior conversation. The agent automatically processes this summary to regain knowledge of previous work.
- File and Folder Scoping: Users can direct the agent to focus its attention exclusively on specific files or folders, preventing it from reading the entire repository and thereby conserving context window space.
- Image and Screenshot Analysis: The agent can analyze images and screenshots. A user can provide a screenshot of a desired UI, and the agent will attempt to generate the corresponding code.
2.4. .cursor Rules Engine
To ensure consistency and enforce repository-specific standards, users can define a set of rules in a special .cursor directory within their project.
- Configuration File: Rules are defined in a file with an .mdc extension.
- Rule Application: Rules can be configured to:
- Always Apply: The rule is executed with every agent interaction.
- Apply Intelligently: The agent decides when the rule is relevant.
- Apply to Specific Files: The rule only triggers when changes are made to specified frontend or backend files.
- Example Workflow: A common rule is to mandate that after writing new code, the agent must also write corresponding unit tests, run the entire test suite, and automatically fix any failures before finishing its task.
3. Workflow, Best Practices, and Limitations
3.1. Productivity Enhancements
The speaker asserts that Cursor dramatically accelerates the development lifecycle.
- Rapid Onboarding: A developer new to a project can ask the agent to “go over the entire repository and make me understand everything,” reducing onboarding time from days to half a day.
- Accelerated Development: Simple UI components or features can be generated in minutes. For example, the agent can take a screenshot of a fundraiser display and replicate its functionality.
- Efficient Debugging: Instead of manually searching for the cause of an error, a developer can provide the agent with the error message or a link to a failed build, and the agent can trace the issue through the codebase.
- Overall Impact: The speaker estimates a personal productivity increase of at least 3x.
3.2. Best Practices for Effective Use
To maximize value and manage costs, the speaker recommends several best practices:
- Be an Efficient Prompter: Avoid conversational filler. Prompts should be direct and information-dense, as every message has a cost. The speaker advises, “we have to be very efficient in how we use this one request.”
- Provide Clear Context (“Handholding”): Treat the agent like a knowledgeable but inexperienced junior engineer. Point it to the right files and explain the specific requirements and constraints of the task.
- Use Plan Mode for Complexity: For any non-trivial task, use Plan Mode to review the agent’s proposed approach before it begins coding, preventing wasted requests on an incorrect implementation.
- Leverage External Knowledge: Use MCP and provide links to documentation, design files, or repositories to give the agent the richest possible context.
3.3. Critical Limitations and Required Oversight
Despite its power, the speaker offers strong cautions against over-reliance on the agent.
- Requires Foundational Knowledge: The tool is an amplifier of skill, not a substitute for it. A user must possess fundamental programming knowledge to guide the agent effectively, validate its output, and debug problems. As the speaker puts it, “we should be at a stage that we should be able to do our stuff as well … it should not be the case that agent is doing everything for us.”
- Risk of Subtle Errors: The agent is not infallible and can introduce bugs that are difficult to spot. The speaker recounts a personal experience where agent-generated code passed peer review but caused a production deployment to fail.
- Human Review is Non-Negotiable: “We cannot be entirely dependent on the agent.” Active, critical review of all AI-generated code by a skilled human developer is essential to ensure quality and correctness.

