Adaptive Neural Recovery for Highly Robust Brain-like Representation
Time: Tuesday, July 12th, 4:42pm - 5:06pm PDT
Location: 3006, Level 3
Event Type: Research Manuscript
AI/ML Security/Privacy
Description: In this paper, we develop a novel theoretical method that analyzes the risk bound in the presence of adversaries. Specifically, we cast the adversarial learning problem in a brain-inspired mathematical framework that describes information association and memorization in high-dimensional spaces. Our method establishes limits on the robustness of the hyperdimensional computing (HDC) classifier in terms of a distinguishability measure between the classes: when this distinguishability is small, the HDC classifier cannot provide suitable robustness to adversarial perturbations, even if its classification accuracy is high.
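The abstract does not specify the exact form of the distinguishability measure. As an illustration only, the sketch below assumes a common HDC setup (bipolar hypervectors, majority-vote bundling into class prototypes) and uses one minus the cosine similarity between class prototypes as a hypothetical proxy for distinguishability; all function names here are assumptions, not the authors' definitions.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 10_000  # hypervector dimensionality

def random_hv(rng, d):
    """Draw a random bipolar {-1, +1} hypervector."""
    return rng.choice([-1, 1], size=d)

def noisy(base, flips):
    """Simulate a class sample by flipping a few components of a base hypervector."""
    hv = base.copy()
    idx = rng.choice(base.size, size=flips, replace=False)
    hv[idx] *= -1
    return hv

def bundle(hvs):
    """Bundle sample hypervectors into a class prototype via elementwise majority vote."""
    return np.sign(np.sum(hvs, axis=0))

def distinguishability(p, q):
    """Hypothetical proxy: 1 - cosine similarity between two class prototypes.
    Near 0 for nearly identical classes, near 1 for unrelated (quasi-orthogonal) ones."""
    return 1.0 - (p @ q) / (np.linalg.norm(p) * np.linalg.norm(q))

# Two unrelated classes, each bundled from 9 noisy samples
proto_a = bundle([noisy(random_hv(rng, D), 500) for _ in range(9)])
proto_b = bundle([noisy(random_hv(rng, D), 500) for _ in range(9)])
print(distinguishability(proto_a, proto_b))
```

For unrelated classes in high dimension the prototypes are quasi-orthogonal, so this proxy comes out near 1; under the abstract's claim, a value near 0 would signal a class pair where adversarial robustness cannot be expected regardless of clean accuracy.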