AI policy in practice: Shradha Mathur
04 August 2025
Founded in 2023, OpenSphere helps applicants maximise their visa approval chances with AI-powered assistance and legal guidance. We talk to the firm’s legal operations lead, Shradha N Mathur, about her role, why AI policy isn’t abstract, and why we need greater participation in the policy process.
“Good AI policy starts with real-world context. If we want systems that work for people, we need rules shaped by the people they affect.”
What’s your role and why is AI policy important to you?
At OpenSphere, where I lead legal operations, we build AI-powered tools for immigration law. It’s a high-stakes environment, dealing directly with people’s futures and legal rights. AI policy here isn’t abstract. It shapes how decisions are made, how systems are explained, and how responsibility is assigned. It’s not just about risk mitigation but about asking who this technology serves and who it might overlook.
How do you help OpenSphere meet evolving AI regulations in India and globally?
My role sits at the intersection of compliance, ethics, and operational strategy. I work closely with our product and engineering teams to ensure that our systems prioritize transparency, data integrity, and user accountability from the design stage onward.
Given that our core users interact with the US immigration system, we keep our workflows closely aligned with US Citizenship and Immigration Services (USCIS) expectations. This includes reviewing how our AI-assisted outputs support evidence compilation, follow documentation norms, and maintain clarity in petition structures. We aim to ensure that every AI recommendation is auditable, explainable, and ultimately supports human legal judgment within the framework USCIS requires.
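For illustration only: a minimal sketch of what an auditable recommendation record could look like, assuming an append-only log keyed to the model version, a digest of the inputs, and the reviewing attorney. All names, fields, and the model version here are hypothetical, not OpenSphere’s actual system.

```python
import hashlib
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone


@dataclass(frozen=True)
class RecommendationAudit:
    """One append-only entry per AI-assisted recommendation (hypothetical schema)."""
    model_version: str
    input_digest: str        # SHA-256 of the inputs, so originals need not live in the log
    recommendation: str
    reviewed_by: str | None  # the attorney who accepted or overrode the output
    created_at: str


def audit_entry(model_version: str, inputs: dict, recommendation: str,
                reviewed_by: str | None = None) -> RecommendationAudit:
    """Build one immutable audit record for a single recommendation."""
    digest = hashlib.sha256(
        json.dumps(inputs, sort_keys=True).encode()
    ).hexdigest()
    return RecommendationAudit(
        model_version=model_version,
        input_digest=digest,
        recommendation=recommendation,
        reviewed_by=reviewed_by,
        created_at=datetime.now(timezone.utc).isoformat(),
    )


# Usage: every recommendation gets a record; human review is captured explicitly.
entry = audit_entry("draft-assist-v2", {"form": "I-140", "category": "EB-2"},
                    "Include the employer support letter as primary evidence",
                    reviewed_by="attorney-042")
print(json.dumps(asdict(entry), indent=2))
```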
What legal and ethical frameworks guide your approach?
I rely on a combination of local data protection norms and international human rights-based frameworks. The Indian IT Act and the DPDP Act (enacted in 2023, with implementing rules still pending) shape baseline compliance, while documents like the UNESCO Recommendation on the Ethics of AI and the OECD AI Principles inform how we think about fairness, accountability, and human oversight.
In practice, this means setting up clear consent flows, limiting data collection to purpose-specific use, and maintaining documentation that explains how models are trained and refined. At OpenSphere, we are building explainability features not just for compliance but also to help attorneys and clients understand how recommendations are generated and where human review fits in.
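To make purpose limitation concrete, here is a minimal sketch of a purpose-limited consent record, assuming consent is granted per purpose and never inferred across purposes. Everything here (the Purpose values, field names, and helper) is a hypothetical illustration, not OpenSphere’s implementation.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from enum import Enum


class Purpose(Enum):
    """Hypothetical purposes a user can consent to, one at a time."""
    EVIDENCE_COMPILATION = "evidence_compilation"
    PETITION_DRAFTING = "petition_drafting"
    MODEL_IMPROVEMENT = "model_improvement"


@dataclass(frozen=True)
class ConsentRecord:
    """One user's consent for exactly one purpose, retained for audit."""
    user_id: str
    purpose: Purpose
    granted_at: datetime
    withdrawn_at: datetime | None = None

    def is_active(self) -> bool:
        return self.withdrawn_at is None


def may_process(records: list[ConsentRecord], user_id: str, purpose: Purpose) -> bool:
    """Purpose limitation: data is used only under an active, purpose-specific grant."""
    return any(
        r.user_id == user_id and r.purpose == purpose and r.is_active()
        for r in records
    )


# A grant for evidence compilation does not authorise model training.
records = [ConsentRecord("u-001", Purpose.EVIDENCE_COMPILATION,
                         granted_at=datetime.now(timezone.utc))]
assert may_process(records, "u-001", Purpose.EVIDENCE_COMPILATION)
assert not may_process(records, "u-001", Purpose.MODEL_IMPROVEMENT)
```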
How do you help ensure responsible AI development and deployment?
By getting involved early in the design and development process. At OpenSphere, legal and ethical reviews are not final-stage checks. They are part of the core design process.
We create internal review processes that bring legal, technical, and operational teams together. This ensures difficult questions are asked before building: What data is really needed? What harms could emerge? Are there fallback options if the model fails? Embedding these conversations into project workflows helps shift responsibility from afterthought to default.
What do you think of AI policy and how could it be improved?
There’s a lot to appreciate in the current landscape. India’s new data protection law is moving toward full implementation, and globally, frameworks like the EU AI Act are setting important precedents. There’s also growing awareness of systemic risk and of the need to include impacted communities in policymaking.
At the same time, much of the regulation remains broad and interpretive. We need more sector-specific guidance, especially in domains like healthcare, education, and finance. Startups and smaller organizations often want to follow best practices but lack clarity on what exactly those are.
Globally, I think we need more focus on implementation. There are plenty of high-level principles. The gap is in making those principles usable on a day-to-day basis, especially for product, legal, and operations teams.
How do you think AI policy should develop in future to meet the challenges and opportunities of AI?
AI policy needs to become more practical, accessible, and rooted in lived experience. That means more guidance, not just regulation. Templates, checklists, case studies, and community-driven audits could go a long way in translating policy goals into everyday decisions.
We also need broader participation in shaping policy. Communities affected by automated decisions, domain experts, and small innovators all need a seat at the table. If we want AI systems that are safe, inclusive, and reliable, we need AI policy that is participatory and grounded in context.
James Boyd-Wallis is co-founder of the Appraise Network.