Computer scientists design a way to defend against attacks on AI-based security systems

April 26, 2019

In a newly published paper, a group from Prof. Ben Zhao and Prof. Heather Zheng's SAND Lab describes the first generalized defense against backdoor attacks on neural networks. Their "neural cleanse" technique scans machine learning systems for the telltale fingerprints of a sleeper cell—and gives the owner a trap to catch any future attacks.
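The published Neural Cleanse paper detects an infected label by reverse-engineering, for each output label, the smallest input perturbation ("trigger") that flips any input to that label, then flagging labels whose trigger is an anomalously small outlier via the median absolute deviation. The sketch below illustrates only that outlier-detection step, with hypothetical per-label trigger norms standing in for the optimization the paper actually performs:

```python
import statistics

def flag_backdoor_labels(trigger_norms, threshold=2.0):
    """Flag labels whose reverse-engineered trigger is anomalously small.

    Key observation from the paper: for a backdoored label, the minimal
    perturbation that flips inputs to it is far smaller than for clean
    labels. Outliers are found with the median absolute deviation (MAD).
    """
    norms = list(trigger_norms.values())
    med = statistics.median(norms)
    # Scale MAD by 1.4826 so it estimates the standard deviation
    # under an assumed normal distribution.
    mad = 1.4826 * statistics.median(abs(n - med) for n in norms)
    flagged = []
    for label, n in trigger_norms.items():
        # Only anomalously SMALL trigger norms are suspicious.
        anomaly_index = (med - n) / mad if mad else 0.0
        if anomaly_index > threshold:
            flagged.append(label)
    return flagged

# Hypothetical L1 norms of per-label triggers; label 3's is tiny,
# suggesting a planted backdoor for that label.
norms = {0: 95.0, 1: 102.0, 2: 98.0, 3: 12.0, 4: 101.0, 5: 97.0}
print(flag_backdoor_labels(norms))  # → [3]
```

The per-label trigger norms here are illustrative; in the actual technique they come from an optimization over the model's inputs, which this sketch omits.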
