Richard E. Harang, Ph.D.

LinkedIn · Bluesky · Google Scholar

AI security expert with over a decade of experience in research and development at the intersection of machine learning, security, and privacy. Combines deep technical expertise with a talent for translating that expertise for diverse audiences, from engineering teams to executive leadership. Thrives on hands-on research, architectural design, finding and fixing novel vulnerabilities in emerging AI-powered systems, and serving as a trusted advisor to senior leadership on AI security.

NVIDIA
Principal AI/ML Security Architect
August 2022 – Present
  • Led AI security initiatives across NVIDIA, focusing on large language models (LLMs) and agentic systems.
  • Designed AI-specific security processes to support over 600 models on NVIDIA AI Enterprise.
  • Built AI expertise in security teams via training and technical mentorship, resulting in independent AI security capabilities across 4 separate business units.
  • Built and streamlined AI security approval processes for security and Third Party Risk Management teams, cutting time to approval from weeks to days.
  • Worked with external business partners (including Microsoft, Perplexity, Anthropic) to identify and remediate security issues in products used at NVIDIA.
  • Developed frameworks for threat modeling and securing AI-powered applications and agentic systems.
  • NVIDIA lead on initiating the cross-industry OpenSSF AI Model Signing Initiative.
  • Contributor to v1 of the OWASP Top 10 for Large Language Model Applications.

Duo Security
Senior Technical Leader
April 2021 – August 2022
  • Managed the Duo Data Science algorithmic research group, leading basic research on unsupervised learning, weak supervision, and anomaly detection over time series of user authentication events, helping to secure more than 20 million authentications per day.
  • Drove security assessments of machine learning and data collection pipelines, covering both data privacy and protection issues and potential attacks against authentication mechanisms with machine learning components.
  • Mentored and developed junior staff members and established data science collaborations across Cisco companies.

Invincea / Sophos
Research Director: Data Science
November 2016 – April 2021
  • Third data scientist hire at Invincea; following the acquisition by Sophos, helped grow the Sophos Data Science team from 3 to 18 FTEs.
  • Helped integrate an ML-based Windows PE malware detection model into the Sophos pipeline, and led development and technology transfer of 7 new customer-facing ML models deployed on over 100 million endpoints.
  • Managed a team of 3-5 data scientists focusing on applying machine learning to security problems: network security, malicious software artifact detection, webpage content classification, email content analysis, and adversarial evasion of ML-powered endpoint security.
  • Multiple papers and 11 patents on ML and applications of ML to security problems.

U.S. Army Research Laboratory
Team Lead
August 2014 – November 2016
  • Led a team of contractors and federal employees performing basic (6.1) research into machine learning for a range of security applications.
  • Multiple publications, including in USENIX Security, NDSS, and IEEE-TIFS.
  • Established deep learning capabilities for the Network Sciences Division, focusing on network security and intrusion detection applications.
  • Conducted applied research on source code stylometry, adversarial attacks against vision systems, and adversarial manipulation of recurrent neural network language models.
  • TS/SCI clearance.

ICF International
Research Fellow (Contractor to ARL)
September 2011 – August 2014
  • As a contractor to the U.S. Army Research Laboratory, supported the Network Sciences Division with theoretical and experimental research into applying machine learning to network security.
  • Multiple peer-reviewed publications, technical reports, and conference presentations at both classified and unclassified levels.
  • TS clearance.

Ph.D. in Statistics and Applied Probability
University of California, Santa Barbara
2010

From Prompts to Pwns: Exploiting and Securing AI Agents
Black Hat USA 2025
Black Hat Machine Learning (Training)
Black Hat USA 2023 & 2024, Black Hat EU 2023
Practical LLM Security: Takeaways From a Year in the Trenches
Black Hat USA 2024
Security Data Science: Getting the Fundamentals Right
BSides Las Vegas 2019
Measuring the Speed of the Red Queen's Race
Black Hat USA 2018 (with Felipe Ducau)
Getting Insight Out of and Back into Deep Neural Networks
BSides Las Vegas 2017

Harang, R. and Rudd, E. M. SOREL-20M: A Large Scale Benchmark Dataset for Malicious PE Detection. arXiv preprint arXiv:2012.07634 (2020)
Caliskan, A., Yamaguchi, F., Dauber, E., Harang, R., Rieck, K., Greenstadt, R. and Narayanan, A. When coding style survives compilation: De-anonymizing programmers from executable binaries. NDSS 2018
Dauber, E., Caliskan, A., Harang, R. and Greenstadt, R. Git blame who? Stylistic authorship attribution of small, incomplete source code fragments. ICSE 2018
Papernot, N., McDaniel, P., Swami, A. and Harang, R. Crafting adversarial input sequences for recurrent neural networks. MILCOM 2016