
How to Prevent AI Agents from Accessing Unauthorized Data
Fri, 24 April
As AI systems move into production, data security and access control become critical. In the era of AI agents, enterprises must move beyond experimentation to Day 2 operations, where guardrails, compliance, and fine-grained authorization define success. This session explores how to design permissions systems that ensure AI agents access only authorized data while maintaining high-quality and efficient query responses.
Attendees will learn how Relationship-Based Access Control (ReBAC), popularized by Google’s Zanzibar model, provides a scalable foundation for fine-grained authorization in AI-powered environments. The session includes a live demo showcasing the implementation of fine-grained access control for AI agents and RAG pipelines using Pinecone, LangChain, OpenAI, and SpiceDB.
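In the ReBAC model popularized by Zanzibar, authorization is expressed as relationships between subjects and resources, with permissions computed from those relations. As a hedged illustration only (this hypothetical schema is not from the session), a minimal SpiceDB schema granting document access might look like:

```
// Hypothetical minimal ReBAC schema: a user can view a document
// if they are its owner or have been granted the viewer relation.
definition user {}

definition document {
    relation owner: user
    relation viewer: user
    permission view = viewer + owner
}
```

With a schema like this, an AI agent's retrieval layer can ask the authorization service "can user X `view` document Y?" before returning content, rather than baking access rules into application code.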
What You Will Learn
- How to design fine-grained authorization systems for AI agents and RAG pipelines
- Why the Google Zanzibar ReBAC model is well suited for scalable AI authorization
- Practical implementation strategies using Pinecone, LangChain, OpenAI, and SpiceDB
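To make the RAG-pipeline idea concrete, here is a minimal sketch (hypothetical names throughout, with the authorization store stubbed as a dict) of filtering retrieved chunks through a per-user permission check before they ever reach the LLM. In a real deployment the `can_view` call would query an authorization service such as SpiceDB instead of an in-memory set:

```python
# Stubbed relationship data: (user, document) pairs the user may view.
# In practice this lives in an authorization service, not in memory.
ALLOWED = {("alice", "doc-1"), ("alice", "doc-2"), ("bob", "doc-2")}

def can_view(user: str, doc_id: str) -> bool:
    """Stand-in for a fine-grained authorization check."""
    return (user, doc_id) in ALLOWED

def filter_chunks(user: str, retrieved: list[dict]) -> list[dict]:
    """Drop retrieved chunks the user is not authorized to see,
    so unauthorized data never enters the LLM prompt."""
    return [c for c in retrieved if can_view(user, c["doc_id"])]

# Example: chunks returned by a vector search, before authorization.
retrieved = [
    {"doc_id": "doc-1", "text": "Q3 revenue figures"},
    {"doc_id": "doc-2", "text": "Public roadmap"},
    {"doc_id": "doc-3", "text": "HR salary bands"},
]

print([c["doc_id"] for c in filter_chunks("alice", retrieved)])
print([c["doc_id"] for c in filter_chunks("bob", retrieved)])
```

Filtering after retrieval is the simplest pattern; a production system might instead pre-filter at query time (e.g. via vector-store metadata filters) to avoid retrieving unauthorized chunks at all.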
Who Should Attend
AI engineers, data architects, security professionals, and developers working on enterprise AI systems who need to ensure compliance, scalability, and secure data access for AI agents.
About the speaker
Sohan Maheshwar
Lead Developer Advocate, AuthZed
Sohan is a Lead Developer Advocate at AuthZed, based in the Netherlands. He started his career as a developer building mobile apps and has been living in the cloud since 2013, at companies such as Amazon, Fermyon, and Gupshup. He is also an O'Reilly author, having created a course on Cloud Concepts for Everyone.
He has always been interested in emerging technologies and how they shape the world around us.