Pwnagotchi

Pwnagotchi is an "AI" that learns from its surrounding WiFi environment and instruments bettercap in order to maximize the crackable WPA key material it captures (any form of handshake, including PMKIDs, full and half WPA handshakes).

[image: captured handshake]

Specifically, it uses an LSTM with an MLP feature extractor as the policy network for its A2C agent; here is a very good intro on the subject.

Instead of playing Super Mario or Atari games, pwnagotchi tunes its own parameters over time, effectively learning to get better at pwning WiFi things. Keep in mind that, unlike the usual RL simulations, pwnagotchi learns over time in the real world, where a single epoch can last from a few seconds to several minutes depending on how many access points and client stations are visible. Do not expect it to perform amazingly well at the beginning, as it will be exploring several combinations of parameters ... but listen to it when it's bored, bring it with you, let it observe new networks and capture new handshakes, and you'll see :)

Multiple units can talk to each other, advertising their own presence using a parasite protocol I've built on top of the existing dot11 standard, by broadcasting custom information elements. Over time, two or more units that detect each other's presence learn to cooperate by dividing the available channels among them.

[image: cooperating peer units]

Depending on the status of the unit, several states and state transitions are configurable and are represented on the display as different moods, expressions and sentences.

If instead you just want to use your own parameters and save battery and CPU cycles, you can disable the AI in config.yml and enjoy an automated deauther, WPA handshake sniffer and portable bettercap + web UI on dedicated hardware.
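
If you go that route, here is a minimal sketch of the change (both the location of config.yml and the exact name of the AI toggle are assumptions here, so check your own copy):

  # hypothetical example: locate the AI section and flip its enabled flag to false, then reboot the unit
  grep -n -A 2 '^ai:' config.yml   # the "ai:" key name is an assumption - verify it in your config.yml
  nano config.yml                  # set the AI section's enabled flag to false and save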

NOTE: The software requires at least bettercap v2.25.
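
A quick way to check what you have installed (assuming the -version flag of your bettercap build behaves as it does on recent releases):

  # print the installed bettercap version and make sure it is >= 2.25
  bettercap -version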

[image: pwnagotchi units]

Why

For hackers to learn reinforcement learning and WiFi networking, and to have an excuse to take a walk more often. And it's cute as f---.

Documentation

THIS IS STILL ALPHA STAGE SOFTWARE. IF YOU DECIDE TO TRY TO USE IT, YOU ARE ON YOUR OWN: NO SUPPORT WILL BE PROVIDED, NEITHER FOR INSTALLATION NOR FOR BUGS.

Hardware

  • Raspberry Pi Zero W
  • Waveshare eInk Display (V2) (optional if you connect to the unit via usb0 and point your browser to the web UI, see config.yml)
  • A decent power bank (with 1500 mAh you get ~2 hours with AI on)

Software

  • Raspbian + nexmon patches for monitor mode, or any Linux with a monitor mode enabled interface (if you tune config.yml).

Do not try Kali on the Raspberry Pi Zero W: it is compiled without hardware floating point support, so TensorFlow is simply not available for it. Use Raspbian.
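
If you go the generic Linux route, you can quickly verify that your wireless card actually supports monitor mode before pointing pwnagotchi at it. A sketch assuming the iw tool is installed and your card is phy0 (adjust the PHY and interface names to your hardware):

  # check whether the PHY supports monitor mode
  iw phy phy0 info | grep -A 10 'Supported interface modes'
  # if "monitor" shows up in the list, create and bring up a monitor-mode interface
  sudo iw phy phy0 interface add mon0 type monitor
  sudo ip link set mon0 up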

Automatically create an image

You can use the scripts/create_sibling.sh script to create a ready-to-flash Raspbian image with pwnagotchi.

usage: ./scripts/create_sibling.sh [OPTIONS]

  Options:
    -n <name>    # Name of the pwnagotchi (default: pwnagotchi)
    -i <file>    # Provide the path of an already downloaded raspbian image
    -o <file>    # Name of the img-file (default: pwnagotchi.img)
    -s <size>    # Size which should be added to second partition (in Gigabyte) (default: 4)
    -v <version> # Version of raspbian (Supported: latest; default: latest)
    -p           # Only run provisioning (assumes the image is already mounted)
    -d           # Only run dependencies checks
    -h           # Show this help
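
For example, to build an image for a unit named alpha with 8 extra gigabytes on the second partition (the name and size here are just illustrative, and the script will likely need root privileges to mount and provision the image):

  sudo ./scripts/create_sibling.sh -n alpha -o alpha.img -s 8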

Host Connection Share

If you connect to the unit via usb0 (thus using the data port), you might want to use the scripts/linux_connection_share.sh script to bring the interface up on your end and share internet connectivity from another interface, so you can update the unit and generally download things from the internet on it.
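
If you prefer to do it by hand (or want to see what such a script has to do), the usual recipe is to address usb0, enable forwarding and NAT traffic out through your internet-facing interface. This is only a sketch - the 10.0.0.0/24 subnet and the eth0 uplink are assumptions, adapt them to your setup:

  # on the host: give usb0 an address and bring it up
  sudo ip addr add 10.0.0.1/24 dev usb0
  sudo ip link set usb0 up
  # enable forwarding and masquerade traffic leaving through the uplink
  sudo sysctl -w net.ipv4.ip_forward=1
  sudo iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
  sudo iptables -A FORWARD -i usb0 -o eth0 -j ACCEPT
  sudo iptables -A FORWARD -i eth0 -o usb0 -m state --state RELATED,ESTABLISHED -j ACCEPT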

UI

The UI is available either on the display, if installed, or via http://pwnagotchi.local:8080/ if you connect to the unit via usb0 and set a static address on the network interface (replace pwnagotchi with the hostname of your unit).
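
For example, on a Linux host with usb0 already up and addressed (see the Host Connection Share section above; resolving pwnagotchi.local assumes mDNS/avahi is working on your machine):

  # check that the unit answers...
  ping -c 1 pwnagotchi.local
  # ...then open the web UI (replace "pwnagotchi" with the hostname of your unit)
  xdg-open http://pwnagotchi.local:8080/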

[image: the pwnagotchi UI]

  • CH: Current channel the unit is operating on or * when hopping on all channels.
  • APS: Number of access points on the current channel and total visible access points.
  • UP: Time since the unit has been activated.
  • PWND: Number of handshakes captured in this session and number of unique networks for which we own at least one handshake, since the beginning.
  • AUTO: Indicates that the algorithm is running with the AI disabled (or still loading); it disappears once the AI dependencies have been bootstrapped and the neural network has been loaded.

Random Info

  • hostname sets the unit name.
  • At first boot, each unit generates a unique RSA keypair that can be used to authenticate advertising packets.
  • On a rpi0w, it'll take approximately 30 minutes to load the AI.
  • /var/log/pwnagotchi.log is your friend.
  • If connected to a laptop via the USB data port, with internet connectivity shared, magic things will happen.
  • Check out the ui.video section of config.yml - if you don't want to use a display, you can connect to the unit with a browser and a cable.
  • If you get [FAILED] Failed to start Remount Root and Kernel File Systems. while booting pwnagotchi, make sure the PARTUUIDs for the rootfs and boot partitions are the same in /etc/fstab. Use sudo blkid to find those values when you are using create_sibling.sh (see the example after this list).
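
A hedged example of that fix (the mount point of the SD card is an assumption, and your PARTUUID values will differ):

  # find the PARTUUIDs of the boot and rootfs partitions on the SD card
  sudo blkid
  # edit fstab on the card so both entries reference those PARTUUID values
  sudo nano /media/$USER/rootfs/etc/fstab   # path is only an example mount point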

License

pwnagotchi is made with ♥ by @evilsocket and it's released under the GPL3 license.
