Exploring MCP: GitHub MCP for iOS Dev in VS Code, Cursor & Windsurf
One of the MCP servers I wanted to try was the GitHub MCP, one of the most popular repositories on GitHub last week.
"The GitHub MCP Server was the most popular repo on all of GitHub last week. Let's build."
— Mario Rodriguez (@mariorod1), April 22, 2025
I want one editor for everything in my iOS development, including managing GitHub-related workflows, without constant context switching.
The GitHub Model Context Protocol (MCP) Server sounds technical, but under the hood it is just a universal translator that lets the language models in your editor talk directly to the GitHub API.
My aim is to see whether it can genuinely streamline common iOS development workflows within these AI-powered editors.
While I explore VS Code, Cursor, and Windsurf, the core concepts apply wherever MCP is supported.
Demystifying MCP
Ah, Twitter's overhyped function calling.
Before diving into the GitHub specifics, let me quickly demystify MCP. The Model Context Protocol, created by Anthropic, is basically an open standard.
It defines how AI models (like the ones powering Copilot, Cursor Chat, or Windsurf's Cascade) should talk to external tools and data sources. Instead of every tool and every AI having its unique, messy way of communicating, MCP provides a common language.
Why does that matter? It means a tool (like the GitHub MCP Server) can be built once and plugged into any AI client that speaks MCP (like VS Code Agent Mode, Cursor, Windsurf, or Claude Desktop).
Less custom work, more interoperability. Simple, right? You can read up on the details here to feed your curiosity:
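To make that concrete, here is a rough Swift sketch of the shape of an MCP "tools/call" request a client sends to a server. The tool name and arguments are illustrative placeholders, and real clients handle this plumbing for you.

import Foundation

// A simplified sketch of the JSON-RPC message an MCP client sends when it wants
// a tool executed. Field names follow the MCP tools/call request; the tool name
// and arguments below are illustrative placeholders.
struct ToolCallRequest: Encodable {
    struct Params: Encodable {
        let name: String                  // e.g. "create_issue"
        let arguments: [String: String]   // real calls allow arbitrary JSON values
    }
    let jsonrpc = "2.0"
    let id = 1
    let method = "tools/call"
    let params: Params
}

let call = ToolCallRequest(
    params: .init(name: "create_issue",
                  arguments: ["owner": "<owner>", "repo": "<repo>",
                              "title": "Improve user token management"])
)
let payload = try! JSONEncoder().encode(call)
print(String(data: payload, encoding: .utf8)!)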
GitHub MCP Server
This MCP Server is GitHub's official, open-source implementation that bridges your MCP-compatible editor and the GitHub API.
It exposes common GitHub actions as "tools" the AI can use, from creating repos and managing files to handling issues and pull requests, searching code, and more!
As an iOS developer, imagine asking your AI assistant:
- Create a new issue for evaluating the accessibility on the login screen, and assign it to me
- Create a new branch for this issue from develop
- Add this NetworkManager.swift file to the repo
- Find open issues tagged "bug" and "UI" in my current project
- Fetch the reviews and comments in the latest PR and work on them
All without leaving your chat/editor. That is the promise. It complements your AI tool by handling the GitHub side of things for you.
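Under the hood, each of those tools wraps an ordinary GitHub REST call you would otherwise script yourself. As a rough Swift sketch of roughly what create_issue does on your behalf (the owner, repo, and token values are placeholders):

import Foundation

// Roughly what the create_issue tool does for you behind the scenes:
// a single POST to the GitHub REST API. Owner, repo, and token are placeholders.
func createIssue(owner: String, repo: String, token: String) async throws {
    var request = URLRequest(url: URL(string: "https://api.github.com/repos/\(owner)/\(repo)/issues")!)
    request.httpMethod = "POST"
    request.setValue("Bearer \(token)", forHTTPHeaderField: "Authorization")
    request.setValue("application/vnd.github+json", forHTTPHeaderField: "Accept")
    request.httpBody = try JSONSerialization.data(withJSONObject: [
        "title": "Evaluate accessibility on the login screen",
        "body": "Audit VoiceOver labels and Dynamic Type support.",
        "labels": ["enhancement"]
    ] as [String: Any])

    let (data, _) = try await URLSession.shared.data(for: request)
    print(String(data: data, encoding: .utf8) ?? "")
}

The MCP server takes care of requests like this for dozens of endpoints, so the model only has to pick a tool and fill in the arguments.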
Getting Started: Setting Up Your Environment
The initial hurdle is a bit of installation, but it is pretty straightforward.
Docker
The easiest way to run the server. Make sure it is installed and running. Grab it here: Docker Installation. (You can check if it is running with docker ps in your terminal.)
GitHub Personal Access Token (PAT)
The server needs this to talk to GitHub as you.
Create a token with the absolute minimum scopes you need. Start small (e.g., repo access, maybe issues:read/write, pull_requests:read/write) and add more only if required. Do not give it god mode! Use a descriptive name like "MCP Server Token - My Mac". Create one here:
Installing the GitHub MCP Server
Visual Studio Code (with Copilot Chat)
- Easiest: The GitHub MCP repo README has a one-click install button. Click it, VS Code opens, confirms, and sets things up. You will likely need to enter your PAT when prompted.
- Manual Setup:
  - Press Cmd + Shift + P on Mac -> "Preferences: Open User Settings (JSON)".
  - Alternatively, create .vscode/mcp.json in your project folder (same JSON content, but without the outer "mcp": { ... }) to share the config.

Add this block inside the main {...}:
{
"mcp": {
"servers": {
"github": {
"command": "docker",
"args": [
"run",
"-i",
"--rm",
"-e",
"GITHUB_PERSONAL_ACCESS_TOKEN",
"ghcr.io/github/github-mcp-server"
],
"env": {
"GITHUB_PERSONAL_ACCESS_TOKEN": "<github_token>"
}
}
}
}
}
Cursor
Cursor uses a dedicated config file. For a particular project, create a .cursor/mcp.json file in your project directory.
Create a ~/.cursor/mcp.json file in your home directory for tools you want to use across all projects. This makes MCP servers available in all your Cursor workspaces. Add this structure:
{
"mcpServers": {
"github": {
"command": "docker",
"args": [
"run", "-i", "--rm",
"-e", "GITHUB_PERSONAL_ACCESS_TOKEN",
"ghcr.io/github/github-mcp-server"
],
"env": {
// Hardcoding the token here is less secure!
// If you do, be careful with this file's permissions.
"GITHUB_PERSONAL_ACCESS_TOKEN": "<PASTE_YOUR_TOKEN_HERE>"
}
}
}
}
Windsurf (Codeium)
Windsurf (from Codeium) uses a similar config file, found at ~/.codeium/windsurf/mcp_config.json. Use the same JSON structure as the Cursor example above:
{
"mcpServers": {
"github": { // Agent name
"command": "docker",
"args": [ /* ... same as Cursor ... */ ],
"env": {
// Same security warning about hardcoding token applies!
"GITHUB_PERSONAL_ACCESS_TOKEN": "<PASTE_YOUR_TOKEN_HERE>"
}
}
}
}
GitHub MCP in Action
Okay, setup's done. The server runs with Docker in the background, and the editor is configured. Let's see what this MCP server can actually do through the AI chat!
I decided to try a workflow using VS Code with Copilot Chat, but remember that the core MCP interactions should work similarly in Cursor or Windsurf.
The goal: identify a needed change, create an issue for it, implement the code, create a pull request, and finally merge it, all driven from the chat interface.
Starting with an Issue
I started simple. With the relevant Swift file (MusadoraKit.swift) open for context, I typed into Copilot Chat: "Create an issue for managing the user token".
Copilot quickly responded, confirming it understood and stating it would use the create_issue tool provided by our GitHub MCP server.
Since this was my first time using a tool from this specific server in the session, VS Code prompted me for permission. You get granular control: allow just once, allow for the session, always allow for this project (workspace), or always allow. I chose "Always Allow" for this workspace, but be conscious of the permissions you grant these tools.
I assumed it would just create an issue with a generic description, but it scanned the context of the open MusadoraKit.swift file, identified the exact line of code using ProcessInfo.processInfo.environment["USER_TOKEN"], generated a relevant title ("Improve User Token Management..."), and produced an incredibly detailed body.
It outlined the current implementation, listed the specific issues with using environment variables (security, persistence, validation), proposed a concrete solution (a dedicated TokenManager class using Keychain), broke it down into implementation tasks, added security considerations, flagged potential breaking changes, and even posed relevant clarifying questions for future discussion.
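For reference, here is a minimal sketch of what a Keychain-backed TokenManager along those lines could look like. This is my own reconstruction, not the exact code the AI proposed, and the service/account identifiers are assumptions.

import Foundation
import Security

// A minimal sketch of a Keychain-backed token store along the lines the issue
// proposes. The service/account identifiers and API surface are assumptions.
final class TokenManager {
    static let shared = TokenManager()

    private let service = "com.musadorakit.tokens"
    private let account = "user-token"

    enum TokenError: Error {
        case notFound
        case unhandled(OSStatus)
    }

    // Stores (or replaces) the token in the Keychain.
    func save(_ token: String) throws {
        let query: [String: Any] = [
            kSecClass as String: kSecClassGenericPassword,
            kSecAttrService as String: service,
            kSecAttrAccount as String: account
        ]
        SecItemDelete(query as CFDictionary) // remove any existing entry first

        var attributes = query
        attributes[kSecValueData as String] = Data(token.utf8)
        let status = SecItemAdd(attributes as CFDictionary, nil)
        guard status == errSecSuccess else { throw TokenError.unhandled(status) }
    }

    // Reads the token back from the Keychain.
    func token() throws -> String {
        let query: [String: Any] = [
            kSecClass as String: kSecClassGenericPassword,
            kSecAttrService as String: service,
            kSecAttrAccount as String: account,
            kSecReturnData as String: true,
            kSecMatchLimit as String: kSecMatchLimitOne
        ]
        var item: CFTypeRef?
        let status = SecItemCopyMatching(query as CFDictionary, &item)
        guard status != errSecItemNotFound else { throw TokenError.notFound }
        guard status == errSecSuccess,
              let data = item as? Data,
              let token = String(data: data, encoding: .utf8) else {
            throw TokenError.unhandled(status)
        }
        return token
    }
}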
You can see the full detail it generated here:
On top of all that, it automatically added appropriate labels like enhancement and security to the issue on GitHub!
A moment later, Copilot confirmed in the chat: "I've created a new issue (#51)..." with a summary. A quick switch to my browser confirmed it: Issue #51 was live, created and detailed by the AI based on a single-line request and the code context!
Implementing the Change and Handling Git Steps
Again, my lazy self wanted the implementation to be done by the AI, too. My prompt: "now implement this issue and also create a PR for it".
Copilot outlined its plan:
- Create a new feature branch.
- Implement the TokenManager (a new file/class).
- Update MusadoraKit.swift to use the new manager.
- Create a Pull Request (PR) for the changes.
First, it used the create_branch tool, naming the branch feature/token-manager based on the issue's intent.
Copilot generated a new TokenManager.swift file, presumably using Keychain for secure storage, as hinted in the issue description it created earlier. It also modified the original MusadoraKit.swift, removing the old logic and integrating calls to the new TokenManager.shared instance. It showed me the diffs for review.
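The MusadoraKit.swift side of the diff was roughly of this shape; this is my reconstruction of the pattern described above, not the exact generated code, and the property name is an assumption.

// A rough reconstruction of the kind of change described above: the old
// environment-variable-based property now delegates to TokenManager and is
// marked deprecated. Names beyond TokenManager.shared are assumptions.
extension MusadoraKit {
    @available(*, deprecated, message: "Use TokenManager.shared instead of the USER_TOKEN environment variable.")
    static var userToken: String? {
        // Previously: ProcessInfo.processInfo.environment["USER_TOKEN"]
        try? TokenManager.shared.token()
    }
}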
But here I saw some interesting interplay between the AI and the Git workflow. Copilot attempted to use create_pull_request, but it failed initially.
Why? As Copilot correctly diagnosed, the newly generated code and modifications only existed locally on my machine. They had not been committed or pushed to the remote feature/token-manager branch on GitHub yet. A PR needs remote changes to compare against the target branch.
Recognizing this, Copilot automatically pivoted and used the push_files tool. This staged the local changes, crafted a commit message (like "Add secure token management with TokenManager"), and pushed the commit to the remote feature branch on GitHub.
Creating the Pull Request
With the code now on GitHub's servers, Copilot re-attempted the final step of its plan: running create_pull_request again.
It confirmed in the chat: "I've successfully completed the implementation and created both the issue (#51) and pull request (#52)...". Back in the browser, I refreshed the Pull Requests page, and there it was: PR #52, opened via the MCP-wrapped GitHub API.
Again, like the issue, the PR turned out to be more than a mere title and a link to the commit. The AI had generated a detailed description in the PR body, and it automatically included Fixes #51 to link it correctly to the issue we started with.
The description detailed the Changes (introducing the new TokenManager class using Keychain, deprecating the old property), highlighted the Security improvements, provided a Migration Guide with code snippets showing existing users how to adapt, and even included notes on the Testing performed. It essentially wrote its own release notes for the change, ready for review.
Build Check
Before merging, a sanity check: does the code actually build?
I initially tried running swift build in my VS Code terminal, but it failed, complaining it could not find TokenManager. This was my error: I had not yet explicitly accepted the file changes Copilot made into my local workspace via the chat interface's diff view ("Keep" vs "Discard"). A reminder to be mindful when the AI modifies your files!
To get a proper check, I asked Copilot: "please build and go through the build logs".
Copilot confirmed it would use its terminal access to run swift build -v (verbose) within the project directory. After the compilation finished, it analyzed the output and reported: "The build completed successfully... with no errors", though it did note some expected deprecation warnings. Build passed!
Merging the Pull Request
The final step was merging the approved and validated PR. I started with a general request: "please merge the PR for it".
Interestingly, Copilot stumbled here. It responded that it could not find a mergeable PR or needed more specifics. It seemed to have lost the context of PR #52 we had just been working on.
Okay, I got a bit more explicit: "please merge the PR #52".
It still hesitated, listing generic reasons why a PR might not be mergeable (conflicts, failed checks, etc.) and asking for the PR number again, even though I had just provided it! This highlights that these AI agents, while powerful, are not infallible.
Maybe this was an issue with VS Code, or with the model I was using (the free o4-mini). Cursor with Gemini 2.5 Pro or Claude 3.7 Sonnet is my default combination.
So, I bypassed the conversational ambiguity and called the tool directly using the # syntax provided by Copilot Chat: #merge_pull_request 52.
That did the trick. Copilot immediately understood it needed to execute the merge_pull_request tool with pull request number 52 for the given repository. It ran the command.
A final switch to the GitHub PR page in my browser and one last refresh showed the satisfying purple "Merged" indicator. The AI had successfully merged the pull request, completing the entire workflow from issue creation to merged code, almost entirely driven by chat commands with the GitHub MCP server.
Moving Forward
It is not always perfect. Here are some things I have learned:
- Be Specific: Vague prompts give vague results. Include branch names, PR numbers, or titles whenever possible.
- PAT Scopes are Key: If an action fails with a 401/403 error, your PAT may not have the required permission (e.g., trying to create a repo without repo scope).
- Secure Tokens: Avoid hardcoding tokens if your IDE supports prompting (like VS Code's ${input:github_token}). If you must hardcode (maybe in Cursor/Windsurf), ensure the config file has restrictive permissions.
- Check Docker: If the agent is not responding, make sure your Docker container for the MCP server is actually running (docker ps). Check its logs (docker logs <container_id>) for errors.
- Limit Toolsets: If the AI seems confused or slow, try limiting the enabled tools with GITHUB_TOOLSETS as shown earlier. Even Cursor will warn you that more than 40 tools degrade performance. Sometimes, less is more.
- Combine Servers? You can configure multiple MCP servers (like the GitHub one, an Xcode one, or Supabase). This opens up complex workflows, like fetching data from Supabase and committing a generated Swift model file to GitHub in one go!
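For example, the "generated Swift model file" in that last scenario might be nothing more than a Codable struct mirroring a Supabase table's columns; the table and field names here are hypothetical.

import Foundation

// Hypothetical model the AI could generate from a Supabase "playlists" table
// and then commit to the repo via the GitHub MCP server's file tools.
struct Playlist: Codable, Identifiable {
    let id: UUID
    let name: String
    let trackCount: Int
    let createdAt: Date

    enum CodingKeys: String, CodingKey {
        case id, name
        case trackCount = "track_count"
        case createdAt = "created_at"
    }
}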
For common, well-defined GitHub tasks like creating branches, leaving quick comments, and finding issues, using the GitHub MCP server via chat is faster for me than switching apps and writing those verbose descriptions myself.
Is it perfect? Not always. Complex tasks or ambiguous prompts can still trip up the model. And it depends on which model you use as well.
Also, the best AI IDE for MCPs is probably the one you already use or are most comfortable with. The core GitHub MCP server works the same way underneath.
Happy MCP-ing!