Commit

up
sueszli committed Dec 14, 2024
1 parent 77f86d5 commit 71c1de6
Showing 3 changed files with 8 additions and 2 deletions.
6 changes: 6 additions & 0 deletions references.bib
@@ -397,6 +397,12 @@ @INPROCEEDINGS{10585001
keywords={Training;Surveys;Technological innovation;Ethics;Terminology;Collaboration;Machine learning;Automobiles;Transportation;Cybersecurity;Adversarial Attacks;Model Explainability},
doi={10.1109/ICCICA60014.2024.10585001}
}
+@article{yuan2020fooling,
+  title={Fooling the primate brain with minimal, targeted image manipulation},
+  author={Yuan, Li and Xiao, Will and Dellaferrera, Giorgia and Kreiman, Gabriel and Tay, Francis EH and Feng, Jiashi and Livingstone, Margaret S},
+  journal={arXiv preprint arXiv:2011.05623},
+  year={2020}
+}

%
% intro
Binary file modified thesis.pdf
Binary file not shown.
4 changes: 2 additions & 2 deletions thesis.tex
@@ -226,8 +226,8 @@ \subsection{Imperceptible Adversarial Examples}

Humans can detect forged low-$\varepsilon$ adversarial examples with high accuracy in both the visual (85.4\%)~\cite{veerabadran2023subtle} and textual ($\geq$70\%)~\cite{herel2023preserving} domains. Notably, invertible neural networks can partially mitigate this issue in the visual domain~\cite{chen2023imperceptible}. \\

-Additionally, small $\varepsilon$-bounded adversarial perturbations are found to cause misclassification in time-constrained humans~\cite{elsayed2018adversarial}.
-% and monkeys: https://gwern.net/doc/ai/nn/adversarial/human/index
+Additionally, small $\varepsilon$-bounded adversarial perturbations are found to cause misclassification in time-constrained humans~\cite{elsayed2018adversarial} and primates~\cite{yuan2020fooling}.
+% see: https://gwern.net/doc/ai/nn/adversarial/human/index#harrington-deza-2021-section
\end{highlightbox}

While initially discovered in computer vision applications, the attack can be crafted for any domain or data type, even graphs~\cite{Kashyap2024AdversarialAA}. Natural language processing models can be attacked by circumventing the discrete nature of text data~\cite{Han2022TextAA, meng2020geometry, yang2024assessing}. Speech recognition systems are vulnerable to audio-based attacks, where crafted noise can cause system failure~\cite{rajaratnam2018noise}. Deep reinforcement learning applications, including pathfinding and robot control, have also shown susceptibility to adversarial manipulations that can compromise their decision-making capabilities~\cite{Bai2018AdversarialEC}.
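
For context on the $\varepsilon$-bounded perturbations discussed above: a minimal sketch of how one can be crafted, FGSM-style, as a single signed gradient step under an $L_\infty$ budget. It assumes a differentiable PyTorch classifier; the model, inputs, and the 8/255 budget are illustrative assumptions, not taken from the commit or the cited papers' exact setups.

import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps=8 / 255):
    # Craft an adversarial example within an L-infinity ball of radius eps
    # around x (a batch of images with pixel values in [0, 1]).
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # Step in the loss-increasing direction, saturating the epsilon budget,
    # then clamp back to the valid pixel range.
    x_adv = x_adv + eps * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()

Because every pixel moves by at most eps, a small budget keeps the perturbation near-imperceptible, which is exactly the regime the human-detection studies cited above probe.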
