I am thrilled to bring you today’s episode, because I think Murphy is the most spot-on thinker on design systems right now. She’s been blogging prolifically on design system AI readiness, and today we’re going to dive into some of Murphy’s core ideas and one incredibly spicy take.
🎧 listen to Episode #05 with Murphy Trueman on Design system quality has a business case now (and it's ... AI?)
Read on for a peek into the episode.

Murphy Trueman joins me to dig into something most design system teams already know but haven't wanted to say out loud: your design system isn't ready for AI. Ambiguous tokens, undocumented rules, and gaps in parity that only work because experienced humans know how to use the system... we've been getting away with it for years. AI just made the bill come due. We discuss what it actually means to treat your design system like a semantic API, how to think about governance, and why the fixes AI demands are ones we probably should have made years ago. Plus, what it means to allow more roles (and LLMs) to build with the system, and what teams can do right now to get their house in order.
🎧 Design system quality has a business case now (and it's ... AI?), with Murphy Trueman — #05 (37min)
Murphy:
The move is from implicit knowledge to explicit contracts. Human designers can intuit that a button with rounded corners and a blue background is probably a primary action, but agents can't reliably make that leap. They don't intuit behavior, they infer it from structure.
Elyse:
The rigor around foundations, documentation, component relationships, and composition has to be much greater than it ever has been. It's not that we didn't have good foundations before, it's that the demands on that rigor are so much greater now.
Murphy:
Every design system runs on a layer of undocumented rules, like, don't use that component for navigation, or that variant exists but we're about to remove it. That's all context that lives inside someone's head, and it gets passed around in Slack threads and pairing sessions, but doesn't really make it out of people's heads, and we've all become really good at translating our own inconsistencies.
Humans are incredibly good at compensating for missing information. But AI exposes every place where you were relying on that compensation instead of building the information into the system itself.
Elyse:
I feel like the LLMs just take everything at complete face value. There's no in-between... you can't provide that kind of nuance. A component is either deprecated or we're using it.
Murphy:
Yeah. And that's kind of a dream for someone like me that loves to write everything down, but otherwise that's a nightmare. But yeah, AI tools read your documentation literally. If a component is documented as available, the agent will use it, even if every human on the team knows that it's problematic.
Treating the system as a semantic API is really just taking discoverability seriously enough to encode it into the structure. That work didn't start with AI, but it is the same instinct: encoding intent, so the right person or the right tool can find the right thing.
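To make that idea concrete, here's a minimal sketch (my own illustration, not from Murphy or any real design system) of what moving tribal knowledge into machine-readable structure might look like. Every name and field below is hypothetical:

```typescript
// Hypothetical sketch: making implicit component knowledge explicit as
// machine-readable metadata, so an agent can't misread "documented" for "safe".
// All names and fields here are illustrative, not from any real design system.

type ComponentStatus = "stable" | "deprecated" | "internal";

interface ComponentContract {
  name: string;
  status: ComponentStatus;
  intent: string;         // what the component is FOR, not just what it looks like
  doNotUseFor?: string[]; // the undocumented rules, finally written down
  replacedBy?: string;    // where an agent should go instead
}

const registry: ComponentContract[] = [
  {
    name: "PrimaryButton",
    status: "stable",
    intent: "The single most important action on a screen.",
  },
  {
    name: "Tab",
    status: "stable",
    intent: "Switching between peer views of the same content.",
    doNotUseFor: ["page navigation"], // the rule that used to live in Slack
  },
  {
    name: "LegacyDropdown",
    status: "deprecated",
    intent: "Superseded; kept only for existing screens.",
    replacedBy: "Select",
  },
];

// An agent (or a lint rule) can now filter on explicit status instead of
// intuiting which components are "really" available.
const usable = registry.filter((c) => c.status === "stable");
console.log(usable.map((c) => c.name));
```

The point isn't this particular shape; it's that "don't use Tab for navigation" now exists somewhere a tool can read it literally, which is exactly how AI tools will read it.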
💖 If you like what you're hearing from On Theme, please hit subscribe on your favorite podcast platform or consider supporting the show with a monthly donation.
Have a question or want to hear about something on the show? Reply to this email or DM me on LinkedIn and let me know—can’t wait to hear from you!
See you next episode!

