Sara Ghazanfari
I am currently a Ph.D. candidate at New York University in the
EnSuRe Research Group.
I'm pleased to be co-advised by
Siddharth Garg and
Farshad Khorrami. My research focuses on the intersection of trustworthy machine learning
and computer vision, with the goal of improving the robustness of vision systems through
robust representations.
Email / Google Scholar / GitHub / Twitter / LinkedIn / CV
News
- Jan-2024: One paper accepted to ICLR 2024.
- June-2023: One paper accepted to ICML Workshop 2023.
LipSim: A Provably Robust Perceptual Similarity Metric
S. Ghazanfari, A. Araujo, P. Krishnamurthy, F. Khorrami and S. Garg
ICLR, 2024
PDF /
arXiv /
code
In this work, we demonstrate the vulnerability of the state-of-the-art perceptual similarity metric,
based on an ensemble of ViT-based feature extractors, to adversarial attacks.
We then propose a framework to train a robust perceptual similarity metric, LipSim
(Lipschitz Similarity Metric), with provable guarantees by leveraging 1-Lipschitz neural
networks as the backbone and distilling knowledge from the state-of-the-art models.
Finally, a comprehensive set of experiments demonstrates LipSim's performance in terms of
natural and certified scores, as well as on an image retrieval application.
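The certification idea can be sketched in a few lines. This is a toy illustration, not the paper's implementation: an orthogonal matrix stands in for a 1-Lipschitz feature extractor, and the names `f` and `d` are placeholders. Because the extractor is 1-Lipschitz, the metric can move by at most the norm of the input perturbation, which is exactly what makes a certified bound possible.

```python
import numpy as np

# Toy sketch of the certified-robustness property of a 1-Lipschitz metric.
# An orthogonal matrix is exactly 1-Lipschitz in the L2 norm, so it stands
# in for the 1-Lipschitz neural-network backbone here.
rng = np.random.default_rng(0)
W, _ = np.linalg.qr(rng.normal(size=(8, 8)))  # orthogonal => 1-Lipschitz

def f(x):
    """Placeholder 1-Lipschitz 'feature extractor'."""
    return W @ x

def d(x, y):
    """Similarity metric: distance between embeddings."""
    return np.linalg.norm(f(x) - f(y))

x, y = rng.normal(size=8), rng.normal(size=8)
delta = rng.normal(size=8)
delta = 0.1 * delta / np.linalg.norm(delta)  # perturbation of L2 norm 0.1

# 1-Lipschitzness implies |d(x + delta, y) - d(x, y)| <= ||delta||_2 = 0.1,
# so no perturbation of this size can change the metric by more than 0.1.
assert abs(d(x + delta, y) - d(x, y)) <= 0.1 + 1e-9
```

The same inequality, applied to a trained 1-Lipschitz network instead of a fixed orthogonal map, is what yields the certified scores reported for LipSim.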
R-LPIPS: An Adversarially Robust Perceptual Similarity Metric
S. Ghazanfari, S. Garg, P. Krishnamurthy, F. Khorrami and A. Araujo
ICML Workshop, 2023
PDF /
arXiv /
code
In this work, we show that the LPIPS metric is sensitive to adversarial perturbations and propose
using adversarial training to build a new Robust Learned Perceptual Image Patch Similarity (R-LPIPS)
metric that leverages adversarially trained deep features. Through an adversarial evaluation, we
demonstrate the robustness of R-LPIPS to adversarial examples compared to the LPIPS metric.
Finally, we show that a perceptual defense built on the LPIPS metric can easily
be broken by stronger attacks developed with R-LPIPS.
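The adversarial-training loop behind this idea can be sketched as follows. This is a minimal toy version, not the R-LPIPS code: `net` is a stand-in for the deep feature extractor, and the attack is a few signed-gradient steps inside a small L-infinity ball. The inner loop crafts a perturbation that inflates the predicted distance between an image and itself; the outer step then trains the metric on that worst-case pair.

```python
import torch

# Toy sketch of adversarially training a perceptual metric (assumed names,
# not the authors' code). A tiny linear embedding stands in for deep features.
torch.manual_seed(0)
net = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 8 * 8, 16))

def dist(x, y):
    """Toy perceptual distance: squared gap between embeddings."""
    return ((net(x) - net(y)) ** 2).sum(dim=1).mean()

opt = torch.optim.SGD(net.parameters(), lr=1e-2)
x = torch.rand(4, 3, 8, 8)  # batch of toy "images"

for _ in range(5):  # outer training steps
    # Inner loop: PGD-style attack that maximizes dist(x + delta, x).
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(3):
        loss = dist(x + delta, x)
        loss.backward()
        with torch.no_grad():
            delta += 0.01 * delta.grad.sign()  # signed-gradient ascent step
            delta.clamp_(-0.03, 0.03)          # project back into L-inf ball
            delta.grad.zero_()
    # Outer step: train the metric so the perturbed pair stays close.
    opt.zero_grad()
    dist(x + delta.detach(), x).backward()
    opt.step()
```

Replacing the toy embedding with adversarially trained LPIPS features is, at a high level, what produces the robust metric.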