Towards Trustworthy ML: Rethinking Security and Privacy for ML

Nicolas Papernot · Carmela Troncoso · Nicholas Carlini · Florian Tramer · Shibani Santurkar


(Note: Workshop posters and instructions are on the workshop site. Password for the zoom link is: seaslug.)

Description: As ML systems are pervasively deployed, security and privacy challenges have become central to their design. The community has produced a vast amount of work to address these challenges and increase trust in ML. Yet much of this work concentrates on problems that are well defined and mathematically tractable, but difficult to translate to the threats that target real-world systems.

This workshop calls for novel research that addresses the security and privacy risks arising from the deployment of ML, from malicious exploitation of vulnerabilities (e.g., adversarial examples or data poisoning) to concerns about fair, ethical, and privacy-preserving uses of data. We aim to provide a home to new ideas “outside the box”, even if proposed preliminary solutions do not match the performance guarantees of known techniques. We believe that such ideas could prove invaluable in spurring new lines of research that make ML more trustworthy.

We aim to bring together experts from a variety of communities (machine learning, computer security, data privacy, fairness & ethics) in an effort to synthesize promising ideas and research directions, as well as to foster and strengthen cross-community collaborations. Indeed, many fundamental problems studied in these diverse areas can be broadly recast as questions around the (in-)stability of machine learning models: generalization in ML, model memorization in privacy, adversarial examples in security, model bias in fairness and ethics, etc.