Director of Engineering Security at Okta, Inc., San Francisco, USA
Title of the Talk:
Breaking AI is the Only Way to Trust It
Abstract of Talk:
Testing AI systems for failure is no longer optional. Regulations such as the EU AI Act and the latest executive order on AI from the White House mandate rigorous evaluations for foundation models and high-risk applications. Security professionals must now expand their scope beyond traditional vulnerabilities to uncover adversarial machine learning weaknesses and responsible AI risks that can emerge in real deployments. This keynote focuses on systematically uncovering and addressing these failures before AI systems go live. Attendees will gain insights into AI architectures, popular models such as GPT, Stable Diffusion, and LLaMA, and the evolving threat landscape they face. The session explores threat taxonomies and model failure modes through frameworks like MITRE ATLAS and the OWASP Top 10 for LLMs, and introduces advanced attack strategies including perturbation, poisoning, and model inversion. The keynote concludes with actionable strategies and checklists aligned with current and emerging regulatory requirements to help security teams deliver resilient and trustworthy AI deployments.
