From Idea to Product: Learnings from Vibe Coding with Cursor and the Adobe Express Add-On MCP Server
Ruben Rincon is Group Product Manager for the Adobe Developer Platform and leads the Developer Experience team.
It’s been a while since I spent quality time coding something beyond an idea or a prototype. That said, I’ve kept a close eye on the wave of AI tools being launched almost daily — many promise to boost developer productivity or even create entire apps without the need to code at all.
Given how little free time I have, the concept of vibe coding seemed attractive to me. While the term is often used to describe a hands-off, black-box style of development that doesn’t require fully understanding what’s being built, I’m seeing more and more professional developers embrace it thoughtfully and creatively. That perspective feels more compelling to me than approaches that prioritize speed over understanding.
If you are unfamiliar: An Adobe Express add-on is a lightweight web-based extension that enhances Adobe Express functionality, built with HTML, CSS, and JavaScript, and runs in a dedicated panel inside the Adobe Express interface.
A few weeks ago, we launched the Add-on MCP Server (Beta) to boost developer productivity. Vibe coding is one of its many use cases, and it’s the approach I used to build an MVP add-on in under two hours with Cursor.ai as my code editor and AI assistant.
This time, I wanted to see if I could push my vibe-code exercise beyond rapid prototyping and build a production-ready Adobe Express add-on.
Spoiler alert: It worked, and mostly well. It took me around 20% of the time that traditional coding would have, but there are important nuances to consider. The biggest one is that I have a solid programming foundation from my earlier experience as a developer. In my current role, leading the Developer Experience team for the Adobe Express Developer Platform, I also understand how the pieces fit together. That context helped me notice when the AI tools were heading in the wrong direction and course-correct effectively. (Try out the add-on via this private link)
In this post, I share a few things I learned along the way. Whether you're exploring new AI tools or diving deeper into our add-ons platform, these notes may be helpful to you.
Starting with the right prompt
As with most things in life, a strong start makes a big difference. Your first prompt needs to ensure that the Add-on MCP Server pulls the right context for creating an add-on’s scaffold; otherwise, you might end up implementing an Express.js server instead.
I rewrote my initial prompt four times, each time adding more details about the goal and the technologies I wanted to use. Eventually, the prompt got me halfway to the MVP. Iterating from a poor first result sent me in a downward spiral a few times.
These were the primary elements of my initial prompt after a few iterations:
- Core functionality of MVP (two to three features that I wanted)
- Look and feel: mimicking the look and feel of a cork pinboard with drag and drop interaction (I attached an image for reference)
- Technical stack: React
- UI framework: Spectrum Web Components with the Spectrum Express Theme
- Setup and build: Use the Adobe Express add-ons CLI to scaffold, build, and run the add-on
Cursor.ai (and likely similar tools) enable you to set rules with a baseline of constraints and instructions. The Add-on MCP documentation includes a list of rules for add-ons that you can use as a guide.
Things that helped me do a better job
These practices made the biggest difference in my workflow:
Using a version control system
I don’t think any developer today would write software without a git repository, especially when vibe coding. It lets you incrementally build your project and gives you the ability to safely go back and forth between versions.
Fine-tuning the autonomy of the agent
You can fine-tune how autonomous you want the agent to be. For example, I wanted to have some control over package installation and didn’t want to commit every change to my git repo (Cursor tried to do this automatically, which was unexpected and concerning).
Starting more, shorter conversations
A good practice is to start a new agent conversation every time you want to add a different feature or troubleshoot something new. If you keep iterating in the same window, it accumulates context, and the tool eventually has to summarize the conversation before sending it back to the LLM. My approach: new conversation tab → implement → resolve → commit → repeat.
Keeping it grounded in the documentation
No matter the rules and the context, the agent sometimes steered me in the wrong direction, such as trying to implement standard JavaScript dialogs instead of add-on modal dialogs. When that happened, I pointed it back to the Adobe add-on documentation, which immediately corrected the issue. Sometimes Cursor wouldn’t restart the server after making manifest changes, even though I knew that was required. So monitor the agent logs regularly, make sure the MCP server is being called, and look for anything that seems off.
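To illustrate the dialog mix-up: add-on panels run inside a sandboxed iframe, where blocking calls like `window.confirm()` are a poor fit, and the add-on UI SDK instead offers a promise-based modal dialog. This is a minimal sketch of that pattern; the `addOnUISdk` object below is a stand-in stub so the flow runs outside Adobe Express, and the exact option and result field names are assumptions, so check them against the add-on documentation.

```javascript
// Stand-in stub for the add-on UI SDK. In a real add-on, this object comes
// from the SDK import; the stub immediately "confirms" instead of showing UI.
const addOnUISdk = {
  app: {
    showModalDialog: async (options) => ({ buttonType: "primary", options }),
  },
};

async function confirmDelete(itemName) {
  // Instead of a blocking call like: const ok = window.confirm(`Delete ${itemName}?`);
  const result = await addOnUISdk.app.showModalDialog({
    variant: "confirmation", // assumed variant name
    title: "Delete item",
    description: `Remove "${itemName}" from the board?`,
    buttonLabels: { primary: "Delete", cancel: "Keep" },
  });
  return result.buttonType === "primary";
}

confirmDelete("sticky-1").then((ok) => console.log("confirmed:", ok));
```

The promise-based shape is the important part: the panel's UI stays responsive while the dialog is open, which a synchronous `confirm()` cannot guarantee inside the iframe.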
Using the AI tools to learn
This is pretty much “on-the-job” learning. Asking the tool to explain concepts was really educational. For example, I learned a great deal about local storage, file compression, and accessibility. Whenever I was in doubt, I would just ask for an explanation.
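Local storage is a good example of the kind of concept I had the tool explain. Here is a minimal sketch of persisting pinboard state with the Web Storage pattern; in the add-on's iframe the real browser `localStorage` would be used, while the tiny in-memory stand-in below lets the snippet run anywhere. The key name and item shape are illustrative, not from the actual add-on.

```javascript
// In-memory stand-in for the browser's localStorage (same getItem/setItem shape).
const storage = (() => {
  const m = new Map();
  return {
    getItem: (k) => (m.has(k) ? m.get(k) : null),
    setItem: (k, v) => m.set(k, String(v)),
  };
})();

const STORAGE_KEY = "pinboard-state"; // hypothetical key

// Saving can throw in a real browser (e.g. quota exceeded), so guard it.
function saveBoard(items) {
  try {
    storage.setItem(STORAGE_KEY, JSON.stringify(items));
    return true;
  } catch (err) {
    console.warn("Could not persist board:", err);
    return false;
  }
}

// Fall back to an empty board on missing or corrupt data.
function loadBoard() {
  try {
    return JSON.parse(storage.getItem(STORAGE_KEY)) ?? [];
  } catch {
    return [];
  }
}

saveBoard([{ id: 1, type: "sticky", text: "Ship the MVP" }]);
console.log(loadBoard()[0].text); // → "Ship the MVP"
```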
Debugging and nitpicking
Cursor is great at writing its own debugging routines and adding logs. I’d copy/paste screenshots of the results, and the agent would iterate. That’s especially helpful to fix those little things that bug you. For example, it took me several cycles to make images on the board come to the front on hover and return to their previous position on hover out. It was very problematic when multiple images and stickies overlapped, but the tool guided me with logs and visual cues to find a solution.
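The hover behavior boils down to remembering each item's original stacking position so it can be restored exactly. A sketch of that logic, with plain objects standing in for DOM elements (`z` would be `element.style.zIndex` in the add-on, and the `TOP_Z` value is an assumption):

```javascript
const TOP_Z = 1000; // assumption: above every board item's z-index

function onHoverIn(item) {
  if (item.savedZ === undefined) {
    item.savedZ = item.z; // remember the original stacking position once
  }
  item.z = TOP_Z;
}

function onHoverOut(item) {
  if (item.savedZ !== undefined) {
    item.z = item.savedZ; // return to where it was before the hover
    delete item.savedZ;
  }
}

// Two overlapping items: hovering the lower one lifts it; leaving drops it back.
const photo = { id: "photo", z: 2 };
const sticky = { id: "sticky", z: 5 };
onHoverIn(photo);
console.log(photo.z); // → 1000
onHoverOut(photo);
console.log(photo.z, sticky.z); // → 2 5
```

Guarding `savedZ` against being overwritten on repeated hover events is the subtle part; without it, a re-entrant hover saves the lifted value and the item never returns to its old position.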
The biggest wins
These were the areas where the combination of AI tooling and the MCP Server made the most significant impact.
Revamping the UI to make it consistent with Adobe Express
A few prompts transformed a rough-looking add-on into a polished, visually consistent experience aligned with the Adobe Express theme. See the before (left) and after (right):
Even if you don’t rely on AI for code generation, these tools are worth using for UI alignment alone. The improvement in user experience is noticeable, and it saves a meaningful amount of time you’d otherwise spend on design and layout.
Meeting marketplace requirements
Publishing an add-on to the marketplace requires meeting a quality bar in the submission guidelines. I had the Add-on MCP Server run a compliance check, which returned a helpful, detailed report that flagged what was already compliant, what needed further testing, and potential challenges to address. Accessibility was a good example: Not only did I receive a checklist of improvements, but I was also able to implement many of them with just a few prompts.
Distribution preparation
The MCP helped automate release notes, package the add-on, and even create assets, like an add-on icon. It’s a big time saver. That said, be prepared: You may end up with more emojis than you expect in your output!
Shaping the future of the developer experience
This experiment reinforced the importance of good documentation: it serves people directly through our developer portal, and it also reaches them through the new AI tools they use today, giving those tools the context they need to write better code. It’s been a priority for our team; we are moving in the right direction, but there’s still more work to do.
It also confirmed the transformative potential of AI technologies, not just in accelerating development but in enabling teams to focus on solving real user problems rather than getting caught up in operational complexity. And it highlighted the importance of meeting developers where they are, maintaining open, evolving communication channels with the community, and fostering a culture of continuous learning.
Closing thoughts
I was able to achieve my goal. Over a weekend, I spent 8–10 hours building something that would normally take me a couple of weeks. I have less time to code today while balancing personal and professional responsibilities, but I am happy to see that I can still achieve the results I used to when I spent entire nights and weekends on side projects.
AI-first workflows can go beyond prototyping when paired with developer expertise, guardrails, and continuous improvement. There’s an immense opportunity to help developers not only build faster, but build better with a delightful user experience.
Usually, the source code itself would be the centerpiece of any project I share. This time, though, I’m less attached to the code and more focused on the process — I vibe coded it! What I deeply care about now are the technology constructs and decisions that shaped the outcome. Could the future of open source be less about the code and more about how we build with it? It’s too early to say, but the possibilities are worth exploring.