AI-Based Attacks, Artificial Intelligence and Machine Learning, Fraud Management and Cybercrime
Also: Protecting AI models against cyber adversaries
In the latest “Proof of Concept,” Zscaler’s Sam Curry and Venable’s Heather West assess how vulnerable AI models are to potential attacks, propose practical steps to build resilience into AI models, and discuss how to address biases in training data and model predictions.
Anna Delaney, director, productions; Tom Field, senior vice president, editorial; Sam Curry, vice president and CISO, Zscaler; and Heather West, senior director of cybersecurity and privacy services at Venable, discussed:
- Methodologies for assessing the vulnerability of AI models;
- How to assess and mitigate privacy concerns in AI systems;
- How to identify and correct biases in training data and model predictions.
Curry was previously chief security officer at Cybereason and chief technology and security officer at Arbor Networks. Before those roles, he spent more than seven years at RSA, the security division of EMC, in various leadership positions, including director of strategy, chief technologist and senior vice president of management and product marketing. Curry has also held leadership positions at MicroStrategy, Computer Associates and McAfee.
At Venable LLP, West focuses on data governance, data security, digital identity and privacy in the digital age. A longtime policy and technology translator, product consultant and internet strategist, she guides clients through the intersection of emerging technology, culture, government and policy.
Don’t miss our previous episodes of “Proof of Concept,” including the Nov. 17 edition on the impact of the U.S. executive order on AI and the Dec. 8 edition on navigating software liability.