An Interview with Simon Muflier, Founder of The Oyez

Interviewer: Simon, for those who don’t know you yet, how do you describe your work?

Simon Muflier: I study how we build, govern, and live with intelligent systems. At The Oyez, we help organisations make sense of next-generation AI—what it can actually do, where it breaks, and how to shape it responsibly. It’s part research, part policy, part culture. The connective tissue is systems thinking: looking at incentives, feedback loops, and real-world constraints instead of shiny demos.

Interviewer: What motivated you to start The Oyez?

Simon: I kept seeing two gaps. First, leaders were either dazzled by AI or paralysed by it—both positions lead to bad decisions. Second, the conversation was siloed: engineers spoke code, policy folks spoke regulation, and brand or culture teams spoke narrative. The Oyez exists to translate across those languages and produce decisions that hold up under scrutiny, not just during a hype cycle.

Interviewer: “Next-gen AI” is a slippery term. What does it mean to you?

Simon: For me it’s less about model size and more about capability integrated with governance. It’s AI that can reason across modalities, cite sources, verify steps, and operate within guardrails that reflect public values. Next-gen systems will be judged not only by accuracy but by auditability, provenance, and fitness for a specific domain. The frontier is as much institutional as it is technical.
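
To make that concrete, here is a minimal sketch of what an auditable model response might carry beyond the answer text. The field names are illustrative assumptions, not a standard:

```python
from dataclasses import dataclass, field

@dataclass
class SourceCitation:
    """One piece of evidence the model relied on."""
    document_id: str    # identifier in the underlying corpus (assumed scheme)
    excerpt: str        # the passage actually cited
    retrieved_at: str   # ISO-8601 timestamp, for provenance

@dataclass
class AuditableAnswer:
    """A model output packaged for audit, not just for display."""
    answer: str
    citations: list[SourceCitation] = field(default_factory=list)
    verification_passed: bool = False  # did automated step-checks succeed?
    model_version: str = "unknown"     # which model and version produced this
    domain: str = "general"            # fitness is judged per domain
```

The shape matters more than the particular fields: provenance and verification travel with the answer rather than being reconstructed after the fact.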

Interviewer: Where do you see the biggest risks right now?

Simon: Misaligned incentives. If the metric is only speed or clicks, we’ll ship brittle systems into high-stakes contexts. Another risk is treating policy as an afterthought—governance must be designed in, not bolted on. Finally, cultural risk: AI that erodes trust because people don’t understand what it’s doing, or who is accountable when it errs.

Interviewer: And the most exciting opportunities?

Simon: Using AI to widen, not narrow, human judgement. That means tools that surface dissenting evidence, map assumptions, and force us to confront uncertainty. In healthcare or law, for instance, AI should make experts more rigorous and transparent. I’m also excited about sector-specific stacks—tailored data, tailored controls—rather than one giant model for everything.

Interviewer: How does The Oyez work with clients on that?

Simon: We start with a “truth audit”: what decisions matter most, what data they depend on, and how errors propagate. Then we design an architecture—often a hybrid retrieval and reasoning layer—with verification gates and human checkpoints. In parallel we draft policy: data handling, model evaluation, incident response, and communications. Culture is the last mile: training teams, crafting clear language, and setting norms so the technology lands well.
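
In code terms, the shape of that architecture might look like the sketch below. The `retrieve`, `generate`, `verify`, and `escalate` callables stand in for whatever retrieval index, model, checks, and review queue a given client uses; they are assumptions, not a fixed implementation:

```python
from typing import Callable

def answer_with_gates(
    question: str,
    retrieve: Callable[[str], list[str]],       # retrieval layer
    generate: Callable[[str, list[str]], str],  # reasoning/model layer
    verify: Callable[[str, list[str]], bool],   # verification gate
    escalate: Callable[[str, str], str],        # human checkpoint
) -> str:
    """Retrieve evidence, draft an answer, check it against the
    evidence, and route to a human when the gate fails."""
    evidence = retrieve(question)
    draft = generate(question, evidence)
    if verify(draft, evidence):
        return draft
    # Verification failed: a named human reviews instead of shipping a guess.
    return escalate(question, draft)
```

The point is the contract, not the five lines: every answer either passes an explicit check or reaches an accountable person.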

Interviewer: What’s one misconception you’d like to retire?

Simon: That regulation and innovation are opposites. Good rules create predictable lanes so the best ideas can compete on safety and performance, not on who cuts corners fastest. We need proportional, testable standards—think aviation checklists more than vague principles.

Interviewer: For leaders feeling behind, where should they start?

Simon: Choose one meaningful workflow. Instrument it, measure current performance, and run a controlled pilot with clear exit criteria. Require model traceability and human accountability from day one. And invest in documentation—it’s unglamorous, but it’s how you scale judgement.
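
As a sketch of what “clear exit criteria” can mean in practice, with thresholds and metric names that are placeholders rather than recommendations:

```python
from dataclasses import dataclass

@dataclass
class ExitCriteria:
    """Thresholds agreed before the pilot starts; numbers are examples."""
    min_accuracy: float = 0.95      # measured against the pre-pilot baseline
    max_error_rate: float = 0.02    # errors that reached a user
    max_untraceable: int = 0        # outputs with no recorded model/version

def pilot_should_continue(metrics: dict, criteria: ExitCriteria) -> bool:
    """Compare measured pilot metrics against the pre-committed criteria."""
    return (
        metrics["accuracy"] >= criteria.min_accuracy
        and metrics["error_rate"] <= criteria.max_error_rate
        and metrics["untraceable_outputs"] <= criteria.max_untraceable
    )
```

Writing the thresholds down before the pilot starts is the documentation discipline Simon describes: it turns “are we done?” into a check anyone can run.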

Interviewer: Finally, what keeps you optimistic?

Simon: The calibre of people entering the space from diverse backgrounds—design, humanities, public service. When you put technologists, policy thinkers, and culture builders at the same table, you get systems that are not only powerful, but worthy of trust. That’s the work.