GitHub - evilsocket/pwnagotchi: (⌐■_■) - Deep Reinforcement Learning vs WiFi

 4 years ago
source link: https://github.com/evilsocket/pwnagotchi

README.md

Pwnagotchi

Pwnagotchi is an "AI" that learns from the WiFi environment and instruments bettercap in order to maximize the WPA key material it captures (any form of crackable handshake, including PMKIDs, full and half WPA handshakes).

handshake

Specifically, it uses an LSTM with an MLP feature extractor as the policy network for its A2C agent; here is a very good intro on the subject.

Instead of playing Super Mario or Atari games, pwnagotchi tunes its own parameters over time, effectively learning to get better at pwning WiFi things. Keep in mind that, unlike the usual RL simulations, pwnagotchi learns in real time, where a single epoch can last from a few seconds to minutes depending on how many access points and client stations are visible. Do not expect it to perform amazingly well at the beginning, as it'll be exploring several combinations of parameters ... but listen to it when it's bored, bring it with you, have it observe new networks and capture new handshakes, and you'll see :)

Multiple units can talk to each other, advertising their own presence using a parasite protocol I've built on top of the existing dot11 standard, by broadcasting custom information elements. Over time, two or more units learn to cooperate if they detect each other's presence, by dividing the available channels among them.

peers

Depending on the status of the unit, several states and state transitions are configurable and are represented on the display as different moods, expressions and sentences.

If instead you are a boring person, you can disable the AI and have the algorithm run with just the preconfigured default parameters, and enjoy a very portable, dedicated bettercap + web UI device.

NOTE: The software requires bettercap compiled from master.
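
One way to get a master build is to compile it yourself; the following is a minimal sketch, assuming a recent Go toolchain and the usual dependencies are already installed (the install path is an assumption, not something the README specifies):

```shell
# clone the current master branch of bettercap and build it with Go
git clone https://github.com/bettercap/bettercap.git
cd bettercap
go build -o bettercap .

# put the resulting binary somewhere on $PATH (path is an assumption)
sudo install -m 755 bettercap /usr/local/bin/bettercap
```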

units

Why

For hackers to learn reinforcement learning and WiFi networking, and to have an excuse to take a walk more often. And it's cute as f---.

Documentation

THIS IS STILL ALPHA STAGE SOFTWARE, IF YOU DECIDE TO TRY TO USE IT, YOU ARE ON YOUR OWN, NO SUPPORT WILL BE PROVIDED, NEITHER FOR INSTALLATION NOR FOR BUGS

Hardware

  • Raspberry Pi Zero W
  • Waveshare eInk Display (optional if you connect to usb0 and point your browser to the web ui, see config.yml)
  • A decent power bank (with 1500 mAh you get ~2 hours with AI on)

Software

  • Raspbian + nexmon patches for monitor mode, or any Linux with a monitor mode enabled interface (if you tune config.yml).

Do not try with Kali on the Raspberry Pi 0 W: it is compiled without hardware floating point support, and TensorFlow is simply not available for it. Use Raspbian.

UI

The UI is available either via display if installed, or via http://10.0.0.2:8080/ if you connect to the unit via usb0 and set a static address on the network interface.
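
On a Linux host, setting the static address can look something like the sketch below; the gadget interface name `usb0` is an assumption and may differ on your machine (check `ip link`):

```shell
# give the host side of the USB ethernet gadget a static address;
# the unit itself answers on 10.0.0.2
sudo ip addr add 10.0.0.1/24 dev usb0
sudo ip link set usb0 up

# the web UI should then be reachable at http://10.0.0.2:8080/
```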

ui

  • CH: Current channel the unit is operating on or * when hopping on all channels.
  • APS: Number of access points on the current channel and total visible access points.
  • UP: Time since the unit has been activated.
  • PWND: Number of handshakes captured in this session, and the total number of unique networks for which we own at least one handshake.
  • AUTO: Indicates that the algorithm is running with the AI disabled (or still loading); it disappears once the AI dependencies have been bootstrapped and the neural network loaded.

Random Info

  • hostname sets the unit name.
  • At first boot, each unit generates a unique RSA keypair that can be used to authenticate advertising packets.
  • On a rpi0w, it'll take approximately 30 minutes to load the AI.
  • /var/log/pwnagotchi.log is your friend.
  • If connected to a laptop via the USB data port, with internet connectivity shared, magic things will happen.
  • Check out the ui.video section of config.yml - if you don't want to use a display, you can connect to it with the browser and a cable.
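
Assuming the unit is reachable over usb0 at 10.0.0.2 and the image still uses Raspbian's default `pi` user (an assumption, adjust for your image), the log can be followed remotely:

```shell
# ssh into the unit over the usb0 link and follow the log
ssh pi@10.0.0.2 'tail -f /var/log/pwnagotchi.log'
```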

Magic script that makes it talk to the internet:

#!/bin/bash

# name of the ethernet gadget interface on the host
USB_IFACE=${1:-enp0s20f0u1}
USB_IFACE_IP=10.0.0.1
USB_IFACE_NET=10.0.0.0/24
# host interface to use for upstream connection
UPSTREAM_IFACE=enxe4b97aa99867

# assign the host side of the usb network and bring the interface up
ip addr add $USB_IFACE_IP/24 dev $USB_IFACE
ip link set $USB_IFACE up

# allow new connections from the unit to be forwarded upstream
iptables -A FORWARD -o $UPSTREAM_IFACE -i $USB_IFACE -s $USB_IFACE_NET -m conntrack --ctstate NEW -j ACCEPT
# allow return traffic for established connections
iptables -A FORWARD -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
# NAT the unit's traffic behind the upstream interface
iptables -t nat -F POSTROUTING
iptables -t nat -A POSTROUTING -o $UPSTREAM_IFACE -j MASQUERADE

# enable IPv4 forwarding
echo 1 > /proc/sys/net/ipv4/ip_forward
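
On the unit side, a matching counterpart would point the default route and DNS at the host; this snippet is a hypothetical sketch, and the choice of resolver is an assumption:

```shell
# on the pwnagotchi: route everything through the host at 10.0.0.1
ip route add default via 10.0.0.1 dev usb0
# use a public resolver (assumption - pick whatever DNS you prefer)
echo 'nameserver 1.1.1.1' > /etc/resolv.conf
```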

License

pwnagotchi is made with ♥ by @evilsocket and released under the GPL3 license.

