The Shape of Responsible AI
Lessons from conversations on AI, risk, and the structures that hold them together
How do we innovate quickly without losing our grip on safety, fairness, and public trust? This question sat at the center of my conversations across sixteen U.S. cities and towns over six weeks during my Eisenhower Fellowship. I met with researchers, tech founders, policy thinkers, regulators, industry leaders, and civil society groups, all wrestling with the same tension: AI moves fast, but good governance can't fall behind.
I’m deeply grateful to everyone who made time to speak with me. The ideas here are not mine alone; they are a synthesis of the perspectives, debates, and candid conversations I had throughout the fellowship. What follows are the six insights that helped sharpen my thinking about how to push innovation forward while keeping governance close at its side.
Breaking the Dichotomy
Innovation and governance were never meant to sit on opposite sides.
During the fellowship, I kept noticing how often people talked about innovation and governance as if they sat on opposite ends of a spectrum. Some leaned toward speed of innovation, others toward caution and control, and the framing appeared so natural to them that it almost sounded inevitable.
In my own work, though, I've never found that split useful. The dichotomy feels more imagined than real, and it often distracts from the more interesting question of how the two can move together.
An interlocutor captured this perfectly with an F1 analogy. He explained that winning a race isn’t about simply stepping on the gas. It’s about maneuvering, too—anticipating the curve, adjusting at the right moment, and staying in control at high speed.
Governance in AI plays a similar role. It provides the structure that allows progress to continue steadily and reliably.
In fact, this logic is central to how we work at the Center for AI Research (CAIR Philippines). The systems we build are often high-risk, and several projects run in parallel. Having a governance framework gives the work a steady footing. It keeps decisions coherent, helps teams navigate trade-offs, and makes the pace of innovation manageable rather than overwhelming. And when innovation and governance are aligned, the work is simply easier to handle and far less chaotic.
Seeing innovation and governance as partners sets the foundation. But the moment you try to apply this thinking beyond a team or an institution, a new set of questions appears.
Scaling is Hard
Institutional success doesn't automatically translate to national readiness.
One of the questions that pushed me to seek out such a range of interlocutors during the fellowship was this: if our governance approach works inside CAIR, what would it take for something similar to hold at a national level? Many of my meetings started with a simple set of prompts. Do countries even need a national-level governance framework? If yes, what form should it take, and how would it function in practice? If not, which policies or mechanisms should fill that role instead? The answers varied, but the underlying complexity was a common thread.
That complexity surprised people, because "normalizing" governance into a single national standard sounds clean until you try to do it.
The more I spoke with experts, the clearer it became that governance doesn't behave the same way across sectors. What works in finance won't map cleanly onto education. Health has entirely different standards and rhythms. Even within the same sector, different use cases can demand their own thresholds, documentation requirements, and risk treatments.
Trying to flatten all of that into a single governance template creates stress points everywhere.
These discussions made the challenge much more tangible. Scaling isn't about stretching a successful institutional model to cover a wider surface. Once you move beyond one organization, the system shifts: capacity varies, incentives diverge, and small gaps can generate outsized ripple effects.
If a national framework is the path forward, it has to meet institutions where they are. Some institutions are ready to adopt stronger governance immediately; others need phased support or entirely different mechanisms. Any approach that ignores that diversity, no matter how elegant it looks in theory, will struggle the moment it encounters the complexity of actual systems.
And once you start exploring how governance might scale, a new question emerges: can any framework work the same way across different environments?
Context Shapes Practice
Governance models must be translated, not imported.
When I began this project, I had already spent time reading multiple governance frameworks. I perused the EU AI Act, the Singapore Model AIGF, NIST’s AI RMF, Microsoft’s Responsible AI Standard, Japan’s AI Promotion Act, and others. They differ in structure and philosophy, but their goals overlap: safety, accountability, and a clear view of risk. On paper, each one looks promising. The real complexity appears when you try to imagine how they function within a specific institution or country.
Now, many organizations assume that choosing a strong, well-known framework and applying it is the straightforward path. And yet, despite how obvious “contextualization” sounds, I have seen organizations of all sizes struggle with it. Even large institutions fall into the pattern of copying a polished framework that worked elsewhere, only to discover later that it does not sit comfortably within their own workflows, culture, or capacity.
At one point, an interlocutor shared his tongue-in-cheek redefinition of "TLDR": template, learn, decontextualize, recontextualize. It stayed with me because it captured a recurring pattern I already recognized: teams copy a framework that looks good on paper, then encounter friction when they try to use it. Workflows compete with the framework's assumptions. Documentation demands land unevenly. Risk lenses misalign with the actual use case. The difficulty rarely comes from the framework itself; it comes from the mismatch between the context it came from and the one it enters.
We experienced this ourselves at CAIR. Before drafting our governance framework, we studied many models and quickly realized that none fit us neatly. We had to account for our size, the kinds of high-risk systems we build, and the speed at which our teams iterate. Operationalization guided the design from the beginning because the framework had to work for the people using it every day.
Governance settles in more easily when it follows the natural tendencies of an institution. The way people work, how decisions travel, and what the organization can realistically sustain all shape whether a framework becomes part of everyday practice or stays theoretical. When the design respects these natural tendencies, the work feels more grounded, and the system holds together with less friction.
Progress Through Iteration
Incremental improvements often outperform grand visions that never materialize.
Governance often runs into trouble when the plan tries to cover everything at once. It reminds me of the “broken windows” idea, not in the policing sense but in how institutions read signals. When people see rules that never get enforced, or standards that never translate into action, they learn that the whole effort is optional. A framework that feels too heavy creates the same effect. If nothing moves because the workload is unrealistic, the system quietly assumes that following it is not essential.
I have seen versions of this myself. When expectations outpace what teams can handle, engagement fades and the framework sits there unused. Smaller, steady improvements create a different experience. People stay involved because each step feels manageable. The work becomes easier to absorb, and over time, those small adjustments build habits that last much longer than any sweeping plan rolled out all at once.
Big visions are exciting, but the work really moves when improvements come in pieces the system can absorb.
During the fellowship, several interlocutors encouraged me to look at how different U.S. states approached AI and digital governance. Some states took measured, incremental steps and saw more traction. Others unveiled large, ambitious frameworks that looked impressive on paper but stalled almost immediately because implementation never caught up. That contrast made it quite clear that iteration and incremental progress give governance a better chance of taking root. They allow systems to grow in real conditions and at a pace institutions can actually sustain.
Auditing the Middle
Verification is the missing piece between principles and practice.
Once you start thinking in increments, the next question is about evidence. How do we know if each step is moving us in the right direction?
My background as a physicist makes me gravitate toward things I can measure. So once the conversations on iteration and incremental progress took shape, the next question for me was straightforward. If we are moving in steps, how do we know whether those steps are working? What are we checking against? What does “this system is fair” or “this model behaves as intended” actually mean in practice?
Principles are great, but until you can check what the system actually does, you’re guessing with confidence.
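To make that question less abstract, here is a minimal sketch of what one measurable check can look like. It computes a demographic parity gap, just one of many possible fairness definitions; the group names, decision data, and review threshold are hypothetical illustrations, not drawn from any framework or conversation mentioned here.

```python
# A minimal sketch of one auditable fairness check: demographic parity.
# Everything here is illustrative -- the groups, the decisions, and the
# threshold are hypothetical, not taken from any specific framework.

def selection_rate(decisions: list[int]) -> float:
    """Fraction of positive (1) decisions within one group."""
    return sum(decisions) / len(decisions)

def demographic_parity_gap(groups: dict[str, list[int]]) -> float:
    """Largest difference in selection rates between any two groups."""
    rates = [selection_rate(d) for d in groups.values()]
    return max(rates) - min(rates)

# Hypothetical binary decisions (1 = approved, 0 = denied) per group.
outcomes = {
    "group_a": [1, 1, 0, 1, 0, 1, 1, 0],  # selection rate 0.625
    "group_b": [1, 0, 0, 0, 1, 0, 0, 0],  # selection rate 0.250
}

gap = demographic_parity_gap(outcomes)
print(f"Demographic parity gap: {gap:.3f}")

# A team might agree to flag the system for review when the gap
# exceeds some threshold, say 0.1. The number itself is a policy
# choice -- which is exactly the point: the metric turns "is this
# fair?" into a question with an inspectable answer.
```

The specific metric matters less than the shift it represents: once "fair" is defined as something you can compute, disagreements move from intentions to evidence.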
One interlocutor offered a line that stayed with me because it was elegantly put. She said, “Just do your homework (as developers and designers of AI systems), so when customers ask how your system works, you can actually answer.” Responsible AI requires clarity, and clarity comes from knowing your system well enough to point to evidence rather than intention.
As I listened to more perspectives, I started seeing auditing as something broader. The technical checks matter a lot, but they are only one part of the picture. Auditing also involves understanding who is affected, looking at how decisions move across teams, reviewing whether safeguards behave the way we assume, and checking that accountability does not disappear as the system grows. It is an opportunity to see the model, the process, and the consequences together rather than in isolation.
And even with all this, auditing remains one of the hardest parts of oversight. Standards and benchmarks are still evolving, and aligning them across sectors or use cases is no small task. That topic probably deserves its own post. For now, what's clear to me is that auditing is the bridge between principles (or what we intend) and real-world performance (what actually happens), and it's a bridge we're still collectively building.
Trust by Design
Trust emerges from systems that earn it, not systems that explain it.
After talking so much about measurement and verification, the conversations naturally shifted to the reason any of this matters. At the end of the day, these systems will only work if people trust them. Not blindly, and not because someone issued a long explanation, but because the system gives them enough understanding and enough room to ask questions when something feels wrong.
People trust AI when they’re allowed into the process—not when they’re handed an explanation after the fact.
This matters to me because the systems we build are meant to serve the public. At some point, society has to trust these tools. And that trust forms more easily when the system is open to scrutiny, when there is documentation to look at, and when there is a path to challenge an outcome.
In the end, design choices shape trust. If people can see how a system behaves and know they have a way to respond when something seems off, confidence builds on its own. It is also worth noting that trust is not a final step; it forms throughout the design and oversight process, one thoughtful decision at a time.
You’re still here. Thanks for staying.
These ideas came out of conversations with people who look at AI from very different angles. Some build the systems, some govern them, and others see how they play out in real life. Each discussion added a piece to the questions I work with every day. Some confirmed patterns I've observed; others challenged them in useful ways.
These reflections serve as working principles that will continue to take shape as the project progresses; they are not meant to be a full roadmap. Responsible governance grows through movement, through small adjustments, and through the willingness to revisit assumptions when the context changes. That is where its strength tends to come from.
I am also grateful to the Eisenhower Fellowships for creating the space for these exchanges. The people I met and the ideas they shared made this exploration far richer than anything I could have done alone.
This site is my way of gathering the ideas that shaped my fellowship and the conversations that pushed my thinking forward. If any of the ideas presented sparked something for you or made you see AI governance from a slightly different angle, then this space has done its job. You can reach me anytime at erika@legara.phd.
