In conjunction with the ACM Conference on Computer and Communications Security (CCS)
November 26, 2023, Copenhagen, Denmark
The static nature of current computing systems makes them easy to attack and hard to defend. Adversaries enjoy an asymmetric advantage: they have the time to study a system, identify its vulnerabilities, and choose the time and place of attack for maximum benefit. The idea of moving-target defense (MTD) is to impose the same asymmetric disadvantage on attackers by making systems random, diverse, and dynamic, and therefore harder to explore and predict. Faced with a constantly changing system and an ever-adapting attack surface, attackers must cope with significant uncertainty, just as defenders do today. The ultimate goal of MTD is to increase the attackers' workload so as to level the cybersecurity playing field for defenders and attackers, and ultimately to tilt it in favor of the defender.
The workshop seeks to bring together researchers from academia, government, and industry to report on the latest research efforts on moving-target defense, and to foster productive discussion and constructive debate on this topic. We solicit submissions (both short and long) on original research in the broad area of MTD; possible topics are listed below. We also solicit Systematization of Knowledge submissions, which will be reviewed similarly to regular submissions. All contributions that fall under the broad scope of moving-target defense are welcome, including research that reports negative results.
List of Broad Topics:
Systematization of Knowledge: In addition to regular submissions, we also seek Systematization of Knowledge submissions for MTD'23. Such submissions can evaluate and contextualize a subdomain of MTD or highlight important insights and lessons learned. They will be reviewed on the quality of their insights and their treatment of the subdomain, not on original research contributions. All Systematization of Knowledge submissions must be distinguished by the prefix "SoK" in the title.
Submitted papers must not substantially overlap with papers that have been published or are simultaneously submitted to a journal or a conference with proceedings. Regular submissions should be at most 10 pages in the ACM double-column format, excluding well-marked appendices, and at most 12 pages in total. Short submissions should be at most 4 pages in the ACM double-column format. SoK submissions may be at most 15 pages, excluding well-marked appendices, and at most 17 pages in total. Submissions are not required to be anonymized.
Submissions must be made through the submission website at http://ccs23-mtd.hotcrp.com. Only PDF files will be accepted. Submissions not meeting these guidelines risk rejection without consideration of their merits. Authors of accepted papers must guarantee that one of the authors will register for and present the paper at the workshop. Proceedings of the workshop will be available on a CD to workshop attendees and will become part of the ACM Digital Library.
Prof. Xiaojing Liao, Assistant Professor of Computer Science at Indiana University Bloomington, USA
Title: "Strategizing Robustness and Privacy in Machine Learning Systems"
Abstract: As intelligent systems become ubiquitous across our digital landscape, the imperative to safeguard their trustworthiness intensifies. Standard machine learning methodologies often assume that training and test data follow similar distributions, neglecting the potential for adversarial manipulation or inherent distribution shift and consequently compromising machine learning trustworthiness.
In this talk, I will discuss my research on securing machine learning systems, with a concentrated lens on robustness, privacy, and their intricate interconnections. In particular, I will first discuss the tactics employed by real-world cybercriminals to manipulate machine learning systems and derive crucial security properties from these activities to safeguard AI systems. I will then present a robust training framework capable of adhering to these identified properties through verifiable means. After that, I will discuss the impact of privacy on the security properties of the verifiable robust training framework, and introduce a privacy property into the framework to fortify its defenses against model-stealing attacks, thereby walking the tightrope between ensuring robustness and maintaining privacy in ML systems.
Dr. Nils Ole Tippenhauer, Faculty at CISPA Helmholtz Center for Information Security, Germany
Title: "MTD via Adaptive Control in Cyber-Physical Systems"
Abstract: Cyber-physical systems such as autonomous vehicles, drones, and industrial control systems all rely on real-time control algorithms to process sensor data and provide optimal control commands to actuators. Malicious manipulation of such systems can change sensor values, control commands, or control algorithms to destabilize the physical process, leading to catastrophic physical damage. Although various anomaly detection approaches have been proposed in the past, those algorithms do not actively prevent manipulation. On the other hand, deterministic attacks often require precise knowledge of the control algorithms and the current state of the system and its monitors.
In this talk, we discuss the potential of moving-target defenses such as (partially) randomized control and monitoring solutions for CPS. Combined with control and monitoring ensembles, such solutions promise to greatly increase robustness against manipulation with only minor performance impact.
All times are in Central European Time (GMT+1).