With rapid advances in computing hardware and learning algorithms, together with the explosive growth of data, machine learning (ML) technologies have been widely adopted across diverse domains such as facial recognition, intelligent video surveillance, and autonomous driving. Despite their remarkable success, the security and privacy implications of ML systems remain insufficiently understood. In particular, adversarial machine learning continues to expose unique vulnerabilities in ML models, yet systematic methods for assessing and improving their robustness are still lacking. At the same time, growing public awareness of data privacy raises critical concerns about how to train and deploy ML models without compromising sensitive information.
In addition, the rapid progress toward increasingly capable Artificial General Intelligence (AGI) systems (e.g., foundation models and AI agents) introduces new risks that go beyond conventional ML settings. These include ensuring content provenance, detecting and mitigating mis/disinformation, enabling trustworthy deployment at scale, and safeguarding both AGI systems and AI agents against misuse or adversarial manipulation.
This workshop aims to bring together researchers and practitioners to explore these pressing challenges. We solicit high-quality contributions addressing a wide range of topics, including but not limited to adversarial learning, robust algorithm design and evaluation, privacy-preserving machine learning techniques, and secure ML system deployment. The workshop will provide a forum for participants to exchange cutting-edge ideas, present novel solutions, and discuss emerging trends that bridge theoretical advances with real-world applications.
A best workshop paper award, carrying a prize of EUR 500 sponsored by Springer, will be selected from among all workshops held in conjunction with ACNS2026.
Accepted papers will appear in post-proceedings published by Springer in the LNCS series.