From Claude Code to Figma and Back Again

  • Code
  • Claude Code
  • AI
  • Figma
  • Figma Code Connect
  • MCP

A couple of weeks back, I attended From Claude Code to Figma – and Back Again, a webinar co-presented by Figma and Anthropic. The webinar was part conversation, part design/coding jam between Brett McMillin, Designer Advocate at Figma, and Thariq Shihipar, Member of Technical Staff at Anthropic. It was an interesting look at how a designer and a developer approach building a design prototype, and at how AI tooling is starting to break down the walls between the two disciplines. I saw a lot of similarities with my own early experiences with Claude Code, Figma MCP, and Figma Code Connect, but also some differences, so this blog post is part notes from the session, part observations from my own explorations with modern frontend dev tools.

The opening thesis that Brett and Thariq posited is that traditional product roles and workflows in SaaS are blending. Where PMs, Designers, and Engineers previously had dedicated lanes and tasks, AI tooling is now enabling cross-pollination between the disciplines. PMs can converse with agentic design tools like Figma Make to whip up low-fidelity wireframes or mid-fidelity prototypes. Designers can reference high-fidelity responsive screens in an LLM chat and prompt a tool like Claude Code to translate them into a working prototype. Engineers can describe an end goal, like a checkout flow, a login screen, or a conversion-focused hero section, and see a reasonably polished version of it in whatever format they want, be that a design prototype or an example running on localhost.

The cost of going from inspiration to delivery is dropping fast. Ideas can start anywhere. Previously, the design canvas was the low-cost place to start, while scaffolding code for prototyping and execution was far more costly.

Brett and Thariq argue that the source of truth is shifting, not so much to code but more toward the system. "The system" here consists of the design system itself as well as the guardrails given to your AI tools, whether that's a markdown file, a Claude Skill, a conversion between Tailwind CSS tokens and in-house design tokens, or something else. What the "source of truth" means is going to be different from team to team, codebase to codebase, and project to project. The system is the collateral that you put into the MCP to guide output.

What is the MCP?

MCP (Model Context Protocol) lets your apps communicate rules and standards to AI tooling, so that AI output lands closer to the desired result than it would without those guardrails. In the context of the Figma and Anthropic features discussed in the webinar, MCP opens a two-way dialogue between Claude Code and the Figma canvas: it enables Claude to act as a collaborator within Figma, pulling and pushing data. In short, MCP gets all of your data into your various agents so they can be as smart as they can be.
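Under the hood, MCP messages travel as JSON-RPC 2.0, and a tool call is just a structured request an agent sends to a server. Here is a minimal TypeScript sketch of that shape; the tool name get_figma_node and its arguments are hypothetical stand-ins for illustration, not the Figma MCP server's actual API.

```typescript
// Rough sketch of an MCP tool call as a JSON-RPC 2.0 message.
// The tool name and arguments below are invented for illustration.
interface McpToolCall {
  jsonrpc: "2.0";
  id: number;
  method: "tools/call";
  params: {
    name: string;
    arguments: Record<string, unknown>;
  };
}

const call: McpToolCall = {
  jsonrpc: "2.0",
  id: 1,
  method: "tools/call",
  params: {
    name: "get_figma_node", // hypothetical Figma MCP tool
    arguments: { nodeId: "123:456" },
  },
};

console.log(JSON.stringify(call));
```

The server's reply gets folded back into the agent's context, which is how design data and guardrails actually reach the model.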

When we talk about "roundtripping" from code to design and back again, MCP is the pipeline that makes it possible to traverse all of those services while maintaining a base of knowledge. In one recent example from my own experience, I was building a card component as part of my team's UI Kit/Component Library redesign. It looked kind of like this:

Card Title

Sed ut perspiciatis unde omnis iste natus error sit voluptatem accusantium doloremque laudantium, totam rem aperiam, eaque ipsa quae ab illo inventore veritatis et quasi architecto beatae vitae dicta sunt explicabo. Nemo enim ipsam voluptatem quia voluptas sit aspernatur aut odit aut

I went about this in a couple of directions. The first direction I took was from Figma to code via Claude. I prompted Claude to build the generic card component in the UI kit by giving it a link to the Figma component, instructing it to use our in-house utility CSS library, and asking if it had any questions before it began building. It did not. From there, it took a couple of minutes, but it generated a TypeScript React component in the place I told it to in our UI Kit repo.

At that point, I fired up Storybook on localhost and did the same kind of code review I would for any dev on my team, checking contents, typography, spacing/layout, and border radii. It definitely had a few hiccups - not recognizing padding shifts between screen sizes, and generating CSS modules instead of using our existing utility CSS classes. When I asked why this happened, it generated updates to the Tailwind-to-in-house-styles conversion JSON file.
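For the curious, a conversion file in this spirit might look something like the sketch below. All class names on both sides are invented for illustration; this is not our actual file format or our real utilities.

```typescript
// Hypothetical Tailwind-to-in-house-utility conversion map.
// Every class name here is invented for illustration.
const tailwindToHouse: Record<string, string> = {
  "p-4": "u-pad-md",
  "md:p-6": "u-pad-lg@md", // the responsive padding shift the model missed
  "rounded-lg": "u-radius-2",
  "text-xl": "u-type-heading-3",
};

// Translate a Tailwind class string into in-house utilities,
// passing unmapped classes through so gaps stay visible in review.
function convertClasses(tw: string): string {
  return tw
    .split(/\s+/)
    .map((cls) => tailwindToHouse[cls] ?? cls)
    .join(" ");
}

console.log(convertClasses("p-4 md:p-6 rounded-lg"));
// → "u-pad-md u-pad-lg@md u-radius-2"
```

Passing unknown classes through unchanged (rather than dropping them) is what makes missing mappings easy to spot in code review.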

From there I came at this from a different direction. A previous version of this card component was already in use on a lot of pages across our website. Not only were there a lot of color/theming variants live on the site that the Figma component was out of sync with, but over time the original component had also become polluted with three or four "modifier" props adjusting card padding, font sizes, icon sizes, and border colors. The color variants existed in code and Figma should account for them; the text sizing and spacing differences also existed in code, but Figma shouldn't account for those - they should instead be treated as bugs and made consistent.

Having created a new generic card component, I went back to Claude with screenshots and code locations of as many of the old, inconsistent cards as possible, instructing it to track the color/theme variants it saw and expose them as color prop enum options in the React component, while ignoring the spacing and font size inconsistencies.

While it struggled a bit to generate the generic component, it did a really good job of tracking all of the existing card variants and generating them as available props, while abiding by our utility classname standards. It even went to the effort of updating JSDoc and README documentation, spinning up test suites for these variants and general component behavior, and swapping the old cards for the new ones across the site.
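To make that concrete, here is a hedged sketch of how discovered color/theme variants could surface as a typed prop enum mapped to utility classnames. The variant names and classes are invented, not our actual component API.

```typescript
// Hypothetical: the color/theme variants Claude discovered, exposed as
// a typed enum that maps to in-house utility classes. All names invented.
type CardColor = "default" | "brand" | "inverse" | "success";

const cardColorClasses: Record<CardColor, string> = {
  default: "u-bg-surface u-border-neutral",
  brand: "u-bg-brand u-border-brand",
  inverse: "u-bg-inverse u-text-inverse",
  success: "u-bg-success u-border-success",
};

// The React component would spread this onto the card's root element;
// here we just build the className string.
function cardClassName(color: CardColor = "default"): string {
  return `c-card ${cardColorClasses[color]}`;
}

console.log(cardClassName("brand")); // → "c-card u-bg-brand u-border-brand"
```

A union type like this is also what makes the variants discoverable in Storybook controls and in editor autocomplete, which is exactly where the old "modifier" props fell down.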

Moving fluidly between code and canvas

Back to the Figma/Anthropic presentation. They break down the MCP pathways between Figma and Claude into two sets of capabilities: read capabilities and write capabilities. Everything I just covered falls under the read umbrella: Claude was pretty much able to see what I wanted to build in Figma and, more or less, how I wanted it built in our codebase, based on existing components and documentation in our .ai and CLAUDE.md files. This still needed guidance and code review. It was also able to read from the codebase itself, comparing legacy versions of the same component, discerning the things I wanted to keep, and addressing the bugs, inconsistencies, and messy modifier prop API that I wanted the new component to fix. It did a great job at that.

The Figma/Anthropic presentation instead digs deeper into the write capabilities, where you can prompt Claude Code to create and edit designs in Figma that are linked to your Figma design systems. This opens up the Figma canvas to Claude Code, primarily through:

  • generate_figma_design, which sends UI for your web apps and sites as design layers to Figma
  • use_figma, which creates or modifies any object in a Figma file by using your design system or building new components
  • Skills that teach Claude Code how to work directly in Figma
    • This flow works best with the figma-use skill, which explains to an agent how the Plugin API works and how to read/write to a Figma file with considerations for tokens, variables, styles, and components
    • They (predictably) plugged the Figma community of Claude skills here
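To give a flavor of what a skill like figma-use teaches an agent, here is a sketch of the Plugin API shape involved. Inside a real Figma plugin, figma is a global that exposes createFrame(), createText(), currentPage, and appendChild(); the tiny stub below only mimics enough of that surface to be runnable anywhere, and the node names are invented.

```typescript
// Minimal stub of the Figma Plugin API surface, so the call shape an
// agent learns from the skill can run outside Figma. Illustrative only.
interface StubNode {
  type: string;
  name: string;
  children: StubNode[];
  appendChild(child: StubNode): void;
}

function makeNode(type: string, name: string): StubNode {
  return {
    type,
    name,
    children: [],
    appendChild(child) {
      this.children.push(child);
    },
  };
}

const figmaStub = {
  currentPage: makeNode("PAGE", "Page 1"),
  createFrame: () => makeNode("FRAME", "Frame"),
  createText: () => makeNode("TEXT", "Text"),
};

// An agent following the skill composes nodes much like this:
const card = figmaStub.createFrame();
card.name = "Recipe Card"; // hypothetical component name

const title = figmaStub.createText();
title.name = "Recipe Title";
card.appendChild(title);

figmaStub.currentPage.appendChild(card);

console.log(figmaStub.currentPage.children[0].name); // → "Recipe Card"
```

The real API adds plenty on top of this (fonts must be loaded before setting text, components and variables have their own node types), which is exactly the kind of detail the skill encodes so the agent doesn't have to rediscover it.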

After briefly talking through the write capabilities above, Brett and Thariq demo'd some of these workflow items:

  1. Thariq ran a live recipe app on localhost, and he prompted Claude Code in CLI to create a recipe card component in Figma
  2. Claude created the recipe card as a Figma component, and Brett made some visual adjustments (fixed how the pill tag looked, rearranged the order of elements underneath the image in the card)
  3. Thariq prompted Claude to update the code of the React component to account for the updates

So it wasn't too different from my actual experience. If I were to complete the roundtrip, I would have taken all of the color/theme variants that I found and had Claude build into the new React component, and prompted it to try to update the Figma component to account for all of those variants.

What's the value in all of this?

This is all very cool, but where is the value? On the one hand, it allows multiple sources of truth to inform one another: card component variants that exist in code may have only existed as detached Figma component variants, and now a dev has the power to update the actual component in Figma. On the other, a designer doesn't need to wait for an engineer's bandwidth to open up before doing some live prototyping. All of this has enormous potential speed benefits.

However, with Figma files and code originating from anyone and anywhere, we also run the risk of generating way too many artifacts that aren't actually ready for primetime - Figma components that haven't been design-reviewed, code that hasn't been code-reviewed. At best this means quite a bit of refactoring down the road; at worst it means polluting our workspaces with cruft and liability.

What all of this points to is that when we embrace these tools, governance matters more than ever. Communication and review processes need to be stronger than the urge to deliver things as quickly as possible. Otherwise, tying these tools together can be like tying our shoelaces together: it just becomes a way to trip over ourselves in a major way.