AI Developer News Roundup – June 2025
Cursor: Massive Funding and a New Ultra Plan
Cursor, the AI-augmented code editor by Anysphere, announced a staggering $900 million Series C funding round on June 6, valuing the company at $9.9 billion. This investment – led by Thrive, Accel, Andreessen Horowitz, and DST – underscores the explosive growth of AI coding tools. Cursor’s user base and revenue reflect this momentum: the company reports over $500 million in annual recurring revenue and adoption in more than half of the Fortune 500 (including NVIDIA, Uber, and Adobe). This scale of enterprise use is remarkable for a developer tool and signals strong confidence in AI-assisted programming.
In mid-June, Cursor rolled out an “Ultra” subscription tier aimed at power users. Priced at $200 per month, the Ultra plan offers roughly 20× the usage limits of the Pro plan, catering to developers who need heavy-duty AI coding assistance without unpredictable overage costs. Notably, Cursor collaborated with major AI model providers to enable this tier – multi-year partnerships with OpenAI, Anthropic, Google, and xAI help Cursor deliver high-volume AI compute at a fixed price. Alongside Ultra, Cursor also made the regular Pro plan more generous: it shifted to an unlimited-usage model (with rate limiting), removing the previous fixed cap of 500 requests. This means even standard subscribers get effectively unlimited AI help, throttled only to ensure fair use. These changes, supported by top AI providers, make Cursor’s AI pair programmer more accessible and predictable for developers.
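Cursor hasn’t published how its Pro-tier throttling works, but a token bucket is one common way to implement “unlimited usage with rate limiting” rather than a hard request cap. The sketch below is purely illustrative – the rate and burst numbers are made up, not Cursor’s actual limits:

```python
import time

class TokenBucket:
    """Token-bucket rate limiter: no fixed request cap overall,
    but throughput is throttled to a sustained rate with short bursts allowed."""

    def __init__(self, rate_per_sec: float, burst: int):
        self.rate = rate_per_sec        # sustained requests per second (illustrative)
        self.capacity = burst           # maximum burst size (illustrative)
        self.tokens = float(burst)
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill tokens in proportion to elapsed time, up to the burst capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False  # caller should back off and retry, not fail permanently

limiter = TokenBucket(rate_per_sec=2, burst=5)
results = [limiter.allow() for _ in range(8)]  # a burst of 8 back-to-back requests
print(results)  # the first 5 pass immediately; the rest are throttled
```

The key property – and the contrast with a fixed 500-request cap – is that throttled requests succeed again after a short wait, so usage is bounded in rate but not in total.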
Windsurf (formerly Codeium): New AI IDE and “Cascade” Agent
Another big development in AI coding tools came from Codeium’s rebranding to “Windsurf” and the launch of a new purpose-built AI IDE. Windsurf introduced the Windsurf Editor, a standalone integrated development environment designed from the ground up for AI-assisted coding. This goes beyond providing code completions in existing editors – it’s an entire IDE optimized for working with AI in the loop. A centerpiece of the Windsurf Editor is “Cascade,” an agentic AI coding assistant that’s deeply integrated into the development workflow. According to the Windsurf team, “Cascade combines deep codebase understanding, a breadth of advanced tools, and real-time awareness of your actions into a powerful, seamless flow. It is the most powerful way to code with AI.” In practice, Cascade can “code, fix, and think 10 steps ahead,” automating multi-step coding tasks and anticipating the developer’s needs. This agent can handle routine steps (like creating files, importing libraries, running tests, etc.) in the background so the developer stays in flow.
Windsurf isn’t abandoning existing editors, however. The company (formerly known for the Codeium plugin) still offers AI coding plugins for JetBrains IDEs and others, but the full power of Cascade is unlocked in the dedicated Windsurf Editor. The platform also offers integration points for external tools via the Model Context Protocol (MCP) and a plugin system, so developers can connect services like Figma, Slack, Stripe, and custom tools directly into their AI coding workflow. With a reported 1 million+ users of its tools and many enterprise customers, Windsurf’s rebrand signals an ambitious push to lead the “AI coder” market. Early users have compared the rival products bluntly – one testimonial quips, “Windsurf is so much better than Cursor… I just type my prompt, go away for a bit, come back and there’s a web preview waiting.” The competition between AI coding assistants is clearly heating up, with both Windsurf and Cursor racing to offer more powerful automation for developers.
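MCP is an open standard built on JSON-RPC 2.0: an AI client (like Windsurf’s Cascade) calls tools that an external server exposes. The sketch below shows what such a request might look like on the wire; the tool name `slack_post_message` and its arguments are hypothetical examples, not part of any real server’s schema:

```python
import json

# Minimal sketch of an MCP-style JSON-RPC 2.0 request, as a client might send
# to a tool server. "tools/call" is the MCP method for invoking a tool the
# server has advertised; the tool name and arguments here are hypothetical.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "slack_post_message",                       # hypothetical tool
        "arguments": {"channel": "#dev", "text": "Build passed"},
    },
}

wire = json.dumps(request)      # what actually travels to the server
decoded = json.loads(wire)
print(decoded["method"])        # tools/call
```

Because the framing is standardized, the same client can drive a Figma server, a Slack server, or a homegrown internal tool without bespoke integration code for each.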
JetBrains: AI Assistant Everywhere, Junie Agent, and Open-Source Mellum
JetBrains, known for developer IDEs like IntelliJ IDEA and PyCharm, has fully embraced AI in the past month with a suite of updates aimed at boosting developer productivity. In April’s 2025.1 release wave, JetBrains integrated its AI Assistant into all its IDEs and introduced a new AI coding agent called “Junie.” All of JetBrains’ AI-powered features – including the improved Assistant and the Junie coding agent – are now available under a unified subscription, with a built-in free tier for all users starting with IDE version 2025.1. This means developers using any JetBrains IDE have immediate access to AI code completion, chat help, and automated coding features at no extra cost (up to certain usage limits), lowering the barrier to entry for AI assistance in daily development.
Junie, the new AI coding agent inside JetBrains IDEs, is designed to tackle more complex, multi-step tasks than traditional code completion. It can generate entire modules, suggest improvements, and even handle project-wide refactoring or debugging steps. Uniquely, JetBrains built Junie by partnering with leading AI labs: Junie is powered by Anthropic’s Claude model under the hood. Anthropic’s Chief Product Officer Mike Krieger highlighted that “Developers rely on Claude’s state-of-the-art performance in solving complex, real-world coding tasks. We’re excited to see how Junie, powered by Claude, will help the global developer community.” In JetBrains’ RubyMine IDE, Junie has demonstrated the ability to solve about 60.8% of real-world coding tasks on a first try (per the SWE-bench benchmark) – a strong result that shows how effective an in-IDE AI agent can be at boosting productivity. JetBrains is also not limiting itself to one model provider: it’s taking a pragmatic approach of using the best models available. In fact, JetBrains integrated Google’s latest model, Gemini 2.5 Pro, into its AI Assistant, giving users access to one of Google’s most advanced LLMs for improved accuracy and reasoning in code generation. JetBrains has even made an AI Assistant extension for VS Code, extending its AI tools beyond JetBrains-owned IDEs.
On the open-source front, JetBrains made a noteworthy research contribution by open-sourcing “Mellum,” its homegrown code completion model. Mellum is a relatively small (4 billion parameter) language model specialized for code – JetBrains calls it a “focal model” focused only on coding tasks like autocompletion. It was trained from scratch by JetBrains to power their IDE cloud completions and supports multiple programming languages (Java, Kotlin, Python, Go, JavaScript, C/C++, Rust, etc.). Now the Mellum base model is available on Hugging Face for anyone to use or fine-tune. JetBrains decided to open-source it to foster transparency and collaboration, noting that “with open-source LLMs now outperforming some industry leaders, it’s reasonable to assume AI’s evolution might follow a similar trajectory.” By releasing Mellum, JetBrains gives researchers and smaller teams a peek under the hood at an optimized coding model, aligning with the wider open-source movement in AI. It’s an interesting move from a traditionally closed-source company, and it suggests JetBrains sees value in community feedback and contributions to specialized developer models.
OpenAI: Launch of Codex Agent and API Tooling Updates
OpenAI’s new Codex agent interface within ChatGPT allows developers to assign coding tasks (like writing a function or fixing a bug) which the AI tackles in a cloud sandbox environment. OpenAI’s screenshots show a prompt asking “What should we code next?” alongside a task list – illustrating how Codex can manage multiple coding tasks in parallel within the IDE-like ChatGPT sidebar.
OpenAI made headlines in May with “Codex” – a cloud-based AI software engineering agent (not to be confused with the 2021 Codex model) as a new feature of ChatGPT. Announced on May 16, Codex (the agent) is a “research preview” that acts as an AI pair programmer capable of handling multiple tasks in parallel. Developers can give Codex high-level tasks such as “write a new feature X,” “fix this bug,” or “refactor module Y,” and the agent will attempt to complete them autonomously. Under the hood, Codex is powered by a specialized model called “codex-1,” a variant of OpenAI’s latest o3 model optimized for software engineering through reinforcement learning. It runs each assigned task in its own isolated cloud sandbox preloaded with the project’s codebase, so it can read, modify, and test code as needed. Impressively, Codex will iteratively run unit tests, linters, and other checks, and only mark a task “complete” once tests pass or the goal is achieved. Developers interact with it via a sidebar in ChatGPT’s interface: you can ask Codex questions about the codebase (Ask mode) or assign a new coding task (Code mode), and then watch as it writes code, runs tests, and finally presents the changes (complete with a diff and even citations of logs/output for verification). Initially, Codex is being rolled out to ChatGPT Pro, Team, and Enterprise users (with ChatGPT Plus access added in early June). This marks a significant step toward autonomous coding agents – instead of just one-off prompts, developers can now delegate chunks of software development to an AI agent that works in the background. Early use cases have shown Codex writing entire functions, answering questions about how code works, fixing bugs, and even generating pull request drafts for review. While it’s still a beta (and not infallible), Codex hints at a future where AI can handle more of the tedious coding work and let developers focus on higher-level design.
OpenAI also rolled out important updates to its API platform in late May, introducing new tools and features that developers building AI applications can leverage. On May 21, OpenAI announced enhancements to the Responses API – the system that underpins agent-like “function calling” and tool usage in their models. One major addition is support for various built-in tools accessible directly through the API, similar to how ChatGPT plugins work. Now developers can have the GPT-4.1 and o-series models call tools such as image generation, the Code Interpreter (for running code and analysis tasks), and improved file search, all within the model’s chain-of-thought. In practice, this means an API client can ask the model a complex question or task, and the model can autonomously decide to generate an image, execute Python code, or look up a file from a knowledge base if needed, before formulating its final answer. OpenAI also enabled the API to interface with remote Model Context Protocol (MCP) servers – an open standard for feeding external context to LLMs. This allows developers to plug in their own data sources or tools (hosted externally) and have the model use them via a standardized interface, which greatly expands flexibility in building AI agents.
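A request wiring these pieces together might look like the sketch below. The tool-type strings and field names follow OpenAI’s published examples at the time, but treat them (and the MCP server label/URL, which are made up here) as assumptions to verify against the current API reference; the network call itself is shown commented out:

```python
# Sketch of a Responses API request combining built-in tools with a remote MCP
# server. Exact tool-type strings are assumptions based on OpenAI's announced
# examples; "internal_data" and the server URL are hypothetical.
request = {
    "model": "gpt-4.1",
    "input": "Plot last quarter's revenue and summarize the trend.",
    "tools": [
        {"type": "code_interpreter", "container": {"type": "auto"}},  # run Python
        {"type": "image_generation"},                                 # produce charts
        {
            "type": "mcp",                                # remote MCP server
            "server_label": "internal_data",              # hypothetical label
            "server_url": "https://mcp.example.com/sse",  # hypothetical URL
        },
    ],
}

# With an API key configured, the call would be roughly:
# from openai import OpenAI
# response = OpenAI().responses.create(**request)

tool_types = [t["type"] for t in request["tools"]]
print(tool_types)
```

The model decides per-request whether to invoke any of these tools mid-reasoning, so the client code stays the same whether the answer needs a chart, executed code, or a lookup against the MCP-exposed data source.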
Alongside tool integrations, the API got new features aimed at production use in enterprises. OpenAI introduced a “background mode” for handling long-running tasks asynchronously (so the model can work on something in the background and return later) and support for reasoning summaries and encrypted reasoning traces. These features help with reliability and security – for example, a reasoning summary can provide a concise log of the model’s decision steps, and encryption ensures any sensitive intermediate data remains secure. Since the Responses API was first released in March 2025, it’s already seen heavy use (hundreds of thousands of developers, trillions of tokens processed) to build AI agents that can browse the web, write code, and more. With the new tools (image generation, code execution, etc.) and reliability improvements, OpenAI is clearly doubling down on agentic AI capabilities for developers. This makes it easier to create ChatGPT-like agents or complex workflows via the API – all of which is great news for developers who want to build sophisticated AI-driven applications or DevOps agents on the OpenAI platform.
Anthropic: Claude’s Advances and Integrations
Anthropic’s Claude model continues to be a key player in the AI assistant space, especially for developer-focused use cases, even if the past month didn’t see a brand-new model release from the company. Instead, Anthropic has been deeply involved in partnerships and integrations that extend Claude’s reach to developers. As noted above, JetBrains chose Claude as the brains behind its new Junie coding agent in their IDEs. This is a strong endorsement of Claude’s capability in complex coding scenarios – JetBrains reported that Claude’s performance on solving real coding tasks was instrumental in making Junie a success. The Anthropic team also highlighted their “commitment to transforming how developers work” alongside JetBrains, signaling that they see coding as a prime domain for AI assistance.
Anthropic has also been working on making Claude more accessible through third-party platforms. For instance, Claude is available via APIs (with a context window up to 100K tokens) and has been integrated into products like Slack (for conversational assistance) and other enterprise software. Last month, Anthropic partnered with Cursor to support its new Ultra plan – Cursor’s announcement explicitly credited multi-year partnerships with Anthropic (among others) for enabling the high-volume Ultra tier. In practice, this likely means Anthropic is providing favorable terms or infrastructure to Cursor so that heavy users can hit Claude’s API hard without exorbitant cost. It’s a win-win: power developers get more Claude-driven coding help, and Anthropic expands its user base and feedback loop through integration in popular dev tools.
On the research front, Anthropic has been relatively quiet publicly in the past few weeks, as the company is heads-down working on its next-generation model (Claude-Next) which is expected to be 10× more capable than today’s AI, according to earlier roadmaps. While no new papers or model releases came out this month, Anthropic’s focus on “constitutional AI” and safer model behavior continues to influence industry practices. In short, Claude is steadily becoming a staple in developer tooling – even without fanfare, its presence inside products like Junie and Cursor’s platform shows that Anthropic is ensuring Claude remains a top choice for coding assistance and enterprise AI solutions. Developers can likely look forward to further improvements in Claude’s coding abilities and maybe even larger context limits soon, as competition spurs all providers to up their game.
Google: Gemini 2.5 and AI in Development Tools
Google has been very active in the AI space, using both its Google AI division and the newly merged DeepMind team to push new models and developer services. In the past month, the most significant news from Google for developers is the emergence of Gemini 2.5 Pro, the latest version of Google’s flagship AI model. Google’s Gemini is a series of next-gen foundation models, and JetBrains revealed that “Google’s latest and most intelligent AI model, Gemini 2.5 Pro,” is now supported in the JetBrains AI Assistant. This suggests that Google has made Gemini 2.5 available (at least to partners) via an API or on Google Cloud. By integrating it, JetBrains gave a vote of confidence in Gemini’s capabilities for code tasks. Developers using JetBrains AI features can opt to use Gemini 2.5, which promises enhanced accuracy and deeper reasoning for coding suggestions. Google hasn’t publicly published details on Gemini 2.5’s architecture, but it builds on earlier Gemini iterations (which themselves superseded PaLM 2), possibly involving multi-modal or larger-scale training. The fact it’s labeled “Pro” and available in a commercial product indicates it’s a production-ready model aiming to rival OpenAI’s GPT-4 in quality.
Google’s I/O 2025 event (held in May) also brought a slate of AI updates relevant to developers. While the full I/O recap is beyond scope, highlights included new generative AI features in Android Studio and Google’s cloud platform. For example, Studio Bot (the AI helper in Android Studio) received upgrades to better understand developer context and suggest code for Android apps. Google Cloud announced expansions to its Vertex AI platform, such as supporting chatbot agents that can connect to data sources and tools, similar in spirit to OpenAI’s function calling. These align with the general industry trend of making it easier to build AI-powered applications.
Google is also investing in infrastructure for AI developers. A notable mention last month was Google’s collaboration with JetBrains to provide the global infrastructure (across Google Cloud’s regions) to power JetBrains AI services. Google Cloud’s AI supercomputers (TPU v5 pods and advanced GPU clusters) are being used to host the JetBrains AI backend, ensuring low-latency responses inside the IDE for things like code completion. According to Google’s product management director, this partnership ensures JetBrains’ 11 million developers get fast and reliable AI assistance at scale. It’s a behind-the-scenes development, but it highlights how major cloud providers are teaming up with software companies to deliver AI features seamlessly to end-users.
Lastly, Google DeepMind has been progressing on research that could trickle into developer tools. While not a product yet, DeepMind’s work on coding agents (e.g., the AlphaCode program-solving model) and new algorithms keeps pressure on the field. No new DeepMind coding breakthrough was announced in the past month, but the overall competition between Google and OpenAI/Anthropic means we might soon see Google introduce its own version of a ChatGPT Code Interpreter or an AI agent in Google’s Colab and Cloud Shell. In summary, Google’s contributions last month centered on providing developers access to its powerful models (like Gemini) and integrating AI into the software dev ecosystem via tools and cloud services.
Other Notable AI Developments for Developers
Beyond the headline items above, a few other developments from the last few weeks are worth mentioning for AI/ML practitioners:
- Open-Source Model Progress: The open-source community continues to close the gap with proprietary models. JetBrains noted that some open-source LLMs are now reaching and even outperforming certain industry models in specific tasks. One example is the continuing refinement of models like Meta’s LLaMA derivatives and the BigCode project models for coding. Developers benefit from this because it opens up more options to self-host or fine-tune models without relying on a vendor API. JetBrains’ own release of Mellum is part of this trend, and we’re also seeing startups release specialized models (for instance, models tuned for SQL generation, front-end code, etc.) on platforms like Hugging Face.
- AI in Code Collaboration: GitHub’s Copilot, one of the first AI pair-programmer tools, has been steadily improving (though without a single big “news” drop in the last month). Microsoft did use its Build 2025 conference to showcase upcoming Copilot X features: expect better code context handling, voice-activated coding, and pull request assistance to roll out broadly. While these were introduced earlier in preview, they’re becoming stable – meaning soon you can chat with Copilot in VS Code or even ask it to generate a PR description or answer questions about code changes. Microsoft also announced deeper integration of Copilot across its dev stack (GitHub, Visual Studio, Azure DevOps), signaling that AI assistance is becoming a standard part of the developer experience.
- Major Research Announcements: In academia, the ACL 2025 conference (for computational linguistics) is coming up, and we’ve seen preprints highlighting better techniques for code generation evaluation and prompt refinement. Additionally, OpenAI released research on GPT-4.5 (an interim model upgrade) earlier in the year, and it’s likely influencing how developers interact with the API in subtle ways (like improved steerability and fewer errors, as suggested by the GPT-4.1 API update). Meanwhile, NVIDIA’s latest research in GPU optimization and new libraries (e.g., updates to CUDA and cuDNN for transformer acceleration) will benefit those training custom models. These low-level improvements aren’t flashy news, but they contribute to faster model training and inference for anyone building AI from scratch.
In conclusion, the past month has been packed with AI news relevant to developers. AI coding assistants and IDE integrations are rapidly evolving, as seen with Cursor’s and Windsurf’s advances and JetBrains’ full-court press on AI. At the same time, major AI model providers are rolling out new versions and API capabilities (OpenAI’s Codex agent, Google’s Gemini, Anthropic’s Claude via partners) that give developers more power and flexibility to build with AI. Perhaps the most encouraging trend is how these innovations are reaching developers through the tools they already use – whether it’s your code editor auto-completing entire functions or an API call that can handle a web search on the fly. For developers in AI/ML, staying on top of these updates is crucial, as the tools and models we rely on are improving at a breakneck pace. With competition among big tech and startups alike, we can expect even more exciting enhancements in the months to come, but the clear theme is that AI is transforming software development – making coding faster, more accessible, and increasingly “offloaded” to our new AI partners. Happy coding, and see you next month for another roundup of AI news!
Sources:
- Cursor Series C funding announcement; Cursor Ultra Plan details
- Windsurf (formerly Codeium) IDE launch and Cascade agent
- Testimonial on Windsurf vs. Cursor
- JetBrains AI updates (Junie agent, free tier); Claude powering JetBrains Junie; Google partnership with JetBrains
- JetBrains open-sourcing Mellum code model
- OpenAI Codex agent announcement; Codex functionality
- OpenAI Responses API update (tools and features)
- Anthropic partnership in Cursor’s Ultra tier; Anthropic/Claude performance in coding (JetBrains quote)
- JetBrains integration of Google Gemini 2.5 Pro and VS Code extension