Exploring Cursor: My Experience with Grok Code Fast
I've been using the Grok Code Fast model in Cursor ever since it was called "sonic." I usually do a proper "vibecheck" on a new model, and right now I'm working on Zenther, a health tracking app. At around 60,000 lines of code, it is not a massive codebase, but definitely not a tiny one either.
Need for Speed
The first thing that seriously impressed me about Grok Code Fast is just how damn fast it is. When I switched to it from Claude models, the difference was immediate and noticeable. Time-to-first-token is quicker, file reading is faster, and the reasoning traces fly by fast (pun intended) enough that you do not even get to read through them.
Surprisingly Capable
For small tasks, Grok Code Fast holds its own against much more expensive models. I was genuinely surprised that it performs almost as well as Sonnet 4 on things like:
- Simple refactoring of classes and old code
- API integrations (looking through derived data and reading through the code of Swift packages)
- Documentation and other folder lookups
- Basic debugging with Xcode builds
The speed makes it perfect for tasks where I just need to get something done quickly, without ending up doomscrolling on Twitter like I usually do with Codex.
Knowing the Limits
But here is where the reality check comes in, and things that seem too good to be true turn out not to be.
As soon as you start working on more complex features or try to refactor a bigger chunk of the codebase across numerous files, Grok Code Fast shows its limitations.
It is not a state-of-the-art model, and it is not going to replace Sonnet 4 or GPT-5 for these tasks, and that is actually fine. It just means you need to understand where it excels and where it does not.
You can actually take advantage of the speed for smaller tasks while reserving the heavier models for when you really need that extra reasoning power. And do not use Opus 4 just for replacing a word in a file.
Value of Pricing
I have used around 432 million tokens with Grok Code Fast, and it has cost... $13. And it is still free in Cursor.
Just $13 for 8 days of work, and I mainly depended on it for Swift, Python, and JavaScript!
For comparison, 20 million tokens of Sonnet 4 cost $11.50 on Cursor.
So, even if you are working the whole day, you are looking at spending a few dollars at most; probably $10 if you are running simultaneous tasks.
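To put those numbers side by side, here is a quick back-of-the-envelope sketch using only the figures above ($13 for 432M Grok Code Fast tokens, $11.50 for 20M Sonnet 4 tokens); actual Cursor pricing blends input, output, and cached tokens, so treat this as a rough effective rate, not an official price list:

```python
def cost_per_million(total_cost_usd: float, tokens_millions: float) -> float:
    """Effective price in dollars per million tokens."""
    return total_cost_usd / tokens_millions

# Figures from my own usage, quoted above
grok = cost_per_million(13.00, 432)    # roughly $0.03 per million tokens
sonnet = cost_per_million(11.50, 20)   # roughly $0.58 per million tokens

print(f"Grok Code Fast: ${grok:.3f} per million tokens")
print(f"Sonnet 4:       ${sonnet:.3f} per million tokens")
print(f"Sonnet 4 is ~{sonnet / grok:.0f}x more expensive per token")
```

That works out to Sonnet 4 costing on the order of 19x more per token than what I effectively paid for Grok Code Fast, which is why the "use the fast cheap model for the easy 80%" workflow pays off so quickly.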
My Current Workflow
Here is how I use Grok Code Fast in my development:
- Small/medium features: New API integrations, simple UI components, functions
- Code reviews: Quick checks for obvious issues or improvements
- Documentation: When I need to look up Swift/SwiftUI APIs quickly
- Simple refactoring: Breaking down large functions, renaming variables
For bigger stuff like architectural decisions, debugging state management issues, or working on already complex features, I switch back to GPT-5 in Cursor and let it take its time.
Illusion of Speed
I worked on a whole feature for Polarixy AI to implement a Foundation Model, and Grok Code Fast was able to go through the other repository's examples and implement it in five minutes, which would have taken Claude Code double the time, and GPT-5 even longer.
But I had to make several subtle follow-up changes to the UI and code that I felt GPT-5 could have handled in one shot.
So, did I actually save time, or was it an illusion masked by the speed?
Moving Forward
Grok Code Fast is still my favorite model of 2025 because it has shown me how fast a great model can be. It feels like using Kimi K2 on Groq, but better, at least for iOS development.
Have you tried Grok Code Fast on a project yet? How has your experience been compared to what you expected?