
AI Apotropaics

A curated list of interesting counter-AI techniques, examples, and stories.

Apotropaic: adjective | apo·​tro·​pa·​ic : "designed to avert evil"

=====================================================

Adversarial-Faces

https://github.com/BruceMacD/Adversarial-Faces | "Testing the effectiveness of practical implementations of adversarial examples against facial recognition."

Ars Technica: Some shirts hide you from cameras—but will anyone wear them?

https://arstechnica.com/features/2020/04/some-shirts-hide-you-from-cameras-but-will-anyone-wear-them/ | "It's theoretically possible to become invisible to cameras. But can it catch on?"

AI.Facebook: Using ‘radioactive data’ to detect if a data set was used for training

https://ai.facebook.com/blog/using-radioactive-data-to-detect-if-a-data-set-was-used-for-training/ | "We have developed a new technique to mark the images in a data set so that researchers can determine whether a particular machine learning model has been trained using those images."

Google Maps Hacks

http://simonweckert.com/googlemapshacks.html | "99 second hand smartphones are transported in a handcart to generate virtual traffic jam in Google Maps."

Camera Shy Hoodie

https://www.macpierce.com/the-camera-shy-hoodie | Infrared LEDs built into a hoodie to blind night-vision security cameras.

CV Dazzle

https://cvdazzle.com/ | "Dazzle explores how fashion can be used as camouflage from face-detection technology, the first step in automated face recognition."

Algotransparency.org

https://algotransparency.org | "We aim to inform citizens on the mechanisms behind the algorithms that determine and shape our access to information. YouTube is the first platform on which we’ve conducted the experiment. We are currently developing tools for other platforms."

BoingBoing: Adversarial Perturbations

https://boingboing.net/2019/03/08/hot-dog-or-not.html | "Towards a general theory of "adversarial examples," the bizarre, hallucinatory motes in machine learning's all-seeing eye"
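The core trick behind many of these adversarial examples can be sketched with the fast gradient sign method (FGSM) on a toy model. This is a minimal NumPy illustration against a made-up linear classifier, not code from any project listed here; the weights and "image" are invented for the demo:

```python
import numpy as np

# Toy linear "classifier": score = w . x + b; positive score => target class.
rng = np.random.default_rng(0)
w = rng.normal(size=16)
b = 0.1

x = rng.normal(size=16)  # a made-up "image" of 16 pixels
score = w @ x + b

# FGSM: nudge every pixel by at most epsilon, in the direction that most
# decreases the score. For a linear model the gradient d(score)/dx is just w,
# so the perturbation is -epsilon * sign(w).
epsilon = 0.3
x_adv = x - epsilon * np.sign(w)
adv_score = w @ x_adv + b

# Each pixel changes by only epsilon, but the score drops by
# epsilon * sum(|w|), which is often enough to flip the classification.
print(score, adv_score)
```

The same principle scales up to deep networks: a small, carefully aligned perturbation accumulates across many weights into a large change in the output.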

LabSix

https://www.labsix.org/about/ | "LabSix is an independent, entirely student-run AI research group composed of MIT undergraduate and graduate students... Much of our current research is in the area of adversarial examples, at the intersection of machine learning and security."

Tech Crunch: Autonomous trap 001

https://techcrunch.com/2017/03/17/laying-a-trap-for-self-driving-cars/ | Artist James Bridle traps a self-driving car inside a circle of salt road markings: a dashed line lets the car drive in, while the solid line it then faces reads as "do not cross," leaving the car stuck.

How to Recognize AI Generated Faces (2018)

https://medium.com/@kcimc/how-to-recognize-fake-ai-generated-images-4d1f6f9a2842 | Kyle McDonald on telltale artifacts to look for in AI-generated images.

Fawkes: Protecting Personal Privacy against Unauthorized Deep Learning Models

http://sandlab.cs.uchicago.edu/fawkes/ | The SAND Lab at the University of Chicago developed Fawkes, a tool that makes tiny, pixel-level changes to personal images (a process they call "image cloaking"). The changes are invisible to the human eye but render the images unusable for training facial recognition models.

Adversarial.io

https://adversarial.io/ | Adversarial.io is an easy-to-use web app that alters images to make them machine-unreadable.

Fake Contacts

https://github.com/BillDietrich/fake_contacts | An Android app that creates fake contacts, stored on your phone alongside your real ones, to feed junk data to any apps or companies that harvest and sell private contact data. This tactic is called "data poisoning".
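The data-poisoning idea is easy to sketch: generate plausible-looking fake records and mix them in with real ones, so any harvested copy is diluted with junk. A minimal Python sketch that emits fake contacts as standard vCard 3.0 entries (the names, numbers, and domain are invented; this is not the app's actual code):

```python
import random

FIRST = ["Alex", "Sam", "Jordan", "Casey", "Morgan"]
LAST = ["Smith", "Lee", "Garcia", "Chen", "Patel"]

def fake_contact(rng: random.Random) -> str:
    """Return one fake contact as a vCard 3.0 entry."""
    first, last = rng.choice(FIRST), rng.choice(LAST)
    # A random but plausible-looking phone number and email address.
    phone = f"+1-555-{rng.randint(100, 999)}-{rng.randint(1000, 9999)}"
    email = f"{first.lower()}.{last.lower()}@example.com"
    return "\n".join([
        "BEGIN:VCARD",
        "VERSION:3.0",
        f"FN:{first} {last}",
        f"TEL;TYPE=CELL:{phone}",
        f"EMAIL:{email}",
        "END:VCARD",
    ])

rng = random.Random(42)
# Importing a file of these into an address book buries any real
# contact list under indistinguishable junk entries.
vcards = "\n".join(fake_contact(rng) for _ in range(100))
```

Scrapers that copy the whole address book then can't tell which entries are real, which degrades the value of the stolen data set.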

MIT Technology Review: Hackers Can Trick a Tesla Into Accelerating by 50 Miles Per Hour

https://www.technologyreview.com/2020/02/19/868188/hackers-can-trick-a-tesla-into-accelerating-by-50-miles-per-hour/ | A two-inch strip of tape on a 35 mph speed-limit sign fooled the Tesla's camera into reading it as 85 mph, making the car quickly and mistakenly speed up.