State-of-the-art topics in cybersecurity research
DEIB - "A. Alario" Seminar Room (Bld. 21)
July 23rd, 2024 | 9.30 am
Summary
On July 23rd, 2024, at 9.30 am, the seminar titled "State-of-the-art topics in cybersecurity research" will take place in the DEIB "Alessandra Alario" Seminar Room (Building 21).
The talk will be given by Alessandro Bertani, Marco Di Gennaro, and Francesco Panebianco (PhD in Computer Science and Engineering at Politecnico di Milano) on the following subjects:
"Hardware Trojans - A Systems Security Approach" - Alessandro Bertani
Hardware Trojans have become a topic of increased attention due to their stealthy nature and their potential for exploiting global supply chains. Hardware Trojans are malicious, intentional modifications to the layout of integrated circuits that remain dormant until they are activated ("triggered") by an attacker. In this talk, we tackle the problem of Hardware Trojans from a systems security perspective, considering their design, deployment, and operation, and the challenges of using and counteracting them at scale.
"How to Forget a Neural Network Backdoor: Introducing Selective Amnesia" - Marco Di Gennaro
Deep neural networks (DNNs) are susceptible to backdoor attacks due to their complex architectures. In these attacks, adversaries inject hidden backdoor tasks into a DNN during training, causing it to misclassify inputs containing specific triggers. Defending against these attacks is challenging. Rui Zhu et al. introduce a novel and effective technique called SEAM (Selective Amnesia) to address this issue. Inspired by the phenomenon of catastrophic forgetting in continual learning, SEAM induces selective forgetting in a DNN: the model is first retrained on randomly labeled clean data so that it forgets both the primary and the backdoor task, and then retrained on correctly labeled data to recover the primary task. The method leverages the Neural Tangent Kernel to maximize forgetting of the backdoor while preserving the feature extraction needed for rapid recovery of the primary task. SEAM has been rigorously tested on various image and natural language processing tasks, proving significantly more efficient and effective than existing unlearning techniques: it achieves high fidelity (recovering primary task accuracy while suppressing backdoor accuracy), runs about 30 times faster than training from scratch, and requires minimal clean data. This presentation explains SEAM and shows a new way to defend against DNN backdoor attacks.
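The two-phase procedure described in the abstract (random-label forgetting followed by clean-label recovery) can be illustrated with a short sketch. The PyTorch code below is a minimal, hypothetical rendering of that idea under assumed names (seam_unlearn, clean_loader, the epoch counts), not the authors' reference implementation.

    # Minimal sketch of SEAM's two-phase unlearning, as summarized above.
    # All names and hyperparameters are illustrative assumptions.
    import torch
    import torch.nn.functional as F

    def seam_unlearn(model, clean_loader, num_classes,
                     forget_epochs=1, recover_epochs=5, lr=1e-3, device="cpu"):
        model.to(device)
        model.train()
        opt = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)

        # Phase 1 (forgetting): briefly train on clean inputs with uniformly
        # random labels, inducing catastrophic forgetting of both the primary
        # task and any hidden backdoor task.
        for _ in range(forget_epochs):
            for x, _ in clean_loader:
                x = x.to(device)
                random_y = torch.randint(0, num_classes, (x.size(0),), device=device)
                loss = F.cross_entropy(model(x), random_y)
                opt.zero_grad()
                loss.backward()
                opt.step()

        # Phase 2 (recovery): retrain on the correctly labeled clean subset so
        # the primary task is restored while the backdoor stays forgotten.
        for _ in range(recover_epochs):
            for x, y in clean_loader:
                x, y = x.to(device), y.to(device)
                loss = F.cross_entropy(model(x), y)
                opt.zero_grad()
                loss.backward()
                opt.step()

        return model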
"Model Heist: The Current State of Model Stealing Attacks" - Francesco Panebianco
Machine learning models are increasingly pervasive in information technology due to their enhanced performance across various tasks, making them critical assets for many companies. This talk addresses the threat of model stealing (also called model reversing) and evaluates current defenses, as well as the techniques by which malicious suspects can evade forensic verification and malicious accusers can frame innocent parties. These defenses come with trade-offs in model performance and have inherent limitations; as such, model stealing remains an open research problem.
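As a rough illustration of what model stealing looks like in practice, the sketch below mounts a toy query-based extraction attack: a black-box victim model is queried, and a surrogate is trained on its predicted labels. The models, data, and agreement metric are assumptions chosen for brevity, not the setup discussed in the talk.

    # Toy query-based model extraction: the attacker only calls victim.predict().
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.neural_network import MLPClassifier
    from sklearn.tree import DecisionTreeClassifier

    # Victim model that the attacker can only query as a black box.
    X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
    victim = MLPClassifier(hidden_layer_sizes=(64,), max_iter=300, random_state=0).fit(X, y)

    # The attacker synthesizes query inputs, labels them through the victim's
    # prediction API, and fits a surrogate that mimics its decision boundary.
    queries = np.random.default_rng(1).normal(size=(2000, 20))
    stolen_labels = victim.predict(queries)
    surrogate = DecisionTreeClassifier(random_state=0).fit(queries, stolen_labels)

    # Agreement between surrogate and victim on fresh inputs measures how
    # faithfully the model was copied.
    test = np.random.default_rng(2).normal(size=(500, 20))
    agreement = (surrogate.predict(test) == victim.predict(test)).mean()
    print(f"surrogate/victim agreement: {agreement:.2%}")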