
SlotBot: Hacking slot machines to win the jackpot using a hidden camera and brute-force search

So it turns out that there's a game on a specific brand of slot machine that's basically like an extreme version of Trivial Pursuit. It also turns out that the game ROM (containing all the answers) can be found online.

This code allows you to win the jackpot every time.

Enjoy!

Here's the pipeline:

  • Capture an image of the slot machine screen with a buttonhole camera connected to a Raspberry Pi
  • Process the image with OpenCV to undo the perspective shift and segment it into question and answer boxes
  • Pass the processed question boxes to Google Tesseract for text recognition (a sketch of these two steps follows this list)
  • Run the OCR text through a hand-designed brute-force search to find the most likely answer
  • Pass the answer through a text-to-speech engine and into a hidden earpiece
  • (profit)
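
As a rough illustration of the rectify-and-OCR steps, here's a minimal sketch using OpenCV and pytesseract. The corner coordinates, crop regions, and file name are hypothetical placeholders; the real pipeline reads these off the live camera feed.

```python
import cv2
import numpy as np
import pytesseract

# Hypothetical corner coordinates of the screen in the camera frame,
# ordered top-left, top-right, bottom-right, bottom-left.
SCREEN_CORNERS = np.float32([[112, 80], [530, 95], [518, 410], [98, 396]])
W, H = 640, 480  # size of the rectified screen image

def rectify(frame):
    """Undo the perspective shift so the screen fills a W x H image."""
    target = np.float32([[0, 0], [W, 0], [W, H], [0, H]])
    M = cv2.getPerspectiveTransform(SCREEN_CORNERS, target)
    return cv2.warpPerspective(frame, M, (W, H))

def read_box(screen, y0, y1):
    """Crop one text box, binarize it, and OCR it with Tesseract."""
    gray = cv2.cvtColor(screen[y0:y1, 20:620], cv2.COLOR_BGR2GRAY)
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY | cv2.THRESH_OTSU)
    return pytesseract.image_to_string(binary).strip()

screen = rectify(cv2.imread("capture.png"))   # one frame from the camera
question = read_box(screen, 40, 160)          # hypothetical question box
answers = [read_box(screen, y, y + 60) for y in (200, 270, 340, 410)]
```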

The game

The game asks you a series of general knowledge questions. It presents you with a choice of four answers, where one is correct. The more you get right, the more money you build up, until you win the jackpot.

Decrypting the game files

The game data files are encrypted, unreadable text. Fortunately, it turned out that they were encrypted using an XOR cipher, which means we can fairly easily write a script to recover the questions and answers in human-readable, decrypted form (run python decrypt.py).
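
As an illustration, here's a minimal sketch of what an XOR decryption like this looks like. The key and file name below are made up; the real ones live in decrypt.py.

```python
from itertools import cycle

KEY = b"\x5a\x13\x7f"  # hypothetical repeating key; the real one is in decrypt.py

def xor_decrypt(data: bytes, key: bytes) -> bytes:
    """XOR each ciphertext byte with the repeating key to recover plaintext."""
    return bytes(b ^ k for b, k in zip(data, cycle(key)))

with open("questions.dat", "rb") as f:  # hypothetical game data file
    plaintext = xor_decrypt(f.read(), KEY)

print(plaintext.decode("latin-1")[:200])  # peek at the decrypted questions
```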

Designing a brute-force search

Now that we have the data, the fun begins. We need to read the screen, match what we see to a question in the data bank via a brute-force search, and read out the corresponding correct answer.

I initially tried to do this with the question data alone, ignoring the answers. Unfortunately, this doesn't work. Optical character recognition is imperfect, especially when running in real-time off a bad camera. About 30% of the characters will typically be misread. This means that the OCR-read question text is typically too garbled to identify which exact question it corresponds to; we can only narrow the search down to about 30 possible candidates.

So, we need to use the information provided by the answers to help identify which question we are looking at. This makes the brute-force search a little trickier, but still possible.

The two basic ingredients of this brute-force search will be (i) a way to compare two strings for similarity; and, using this, (ii) a metric to rank similarity between imperfectly observed question/answer pairs and true samples from the database.

We use the Levenshtein distance to measure the similarity between two strings: the minimum number of single-character edits (insertions, deletions, and substitutions) needed to change one string into the other. Since a longer string tends to accumulate more reading errors, we normalize the Levenshtein distance by the string length.
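
Here's a sketch of this normalized distance, using a plain dynamic-programming implementation (the repository may well use a library such as python-Levenshtein instead):

```python
def levenshtein(a: str, b: str) -> int:
    """Minimum number of single-character insertions, deletions,
    and substitutions needed to turn a into b."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def similarity(a: str, b: str) -> float:
    """Distance normalized by length, so long strings aren't penalized
    just for accumulating more OCR errors. 0.0 means identical."""
    if not a and not b:
        return 0.0
    return levenshtein(a, b) / max(len(a), len(b))
```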

For each candidate question, we form a confusion matrix of the OCR-read answers against the database answers: entry (i, j) is the normalized distance between observed answer i and candidate answer j. Taking the Frobenius inner product between this matrix and every 4×4 permutation matrix gives us the metric we need, and we brute-force search over it to find the correct answer. The intuition behind this algorithm is that we're taking the dot product between the observed confusion matrix (which is noisy due to poor observability) and idealized confusion matrices (assuming perfect observability). The idealized confusion matrices take the form of permutation matrices because the four answers can appear on screen in any order.
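
Here's a sketch of that search over a hypothetical question bank, reusing similarity() from the sketch above. The bank layout and scoring details are assumptions; the real implementation lives in /src/.

```python
from itertools import permutations

def match_question(ocr_question, ocr_answers, bank):
    """Find the database entry that best explains the noisy OCR reading.

    bank: list of (question_text, [four answer strings]) tuples, a
    hypothetical layout for the decrypted question database.
    Uses similarity() (normalized Levenshtein) from the sketch above.
    """
    best, best_score = None, float("inf")
    for question, answers in bank:
        # Confusion matrix: normalized distance between each observed
        # answer box and each database answer for this question.
        C = [[similarity(obs, true) for true in answers] for obs in ocr_answers]
        # The four answers can appear on screen in any order, so take the
        # Frobenius inner product of C with every 4x4 permutation matrix
        # (i.e. sum C[i][p[i]] over rows) and keep the best assignment.
        # We minimize because similarity() is a distance (0 = identical).
        assignment = min(sum(C[i][p[i]] for i in range(4))
                         for p in permutations(range(4)))
        score = similarity(ocr_question, question) + assignment
        if score < best_score:
            best_score, best = score, (question, answers)
    return best
```

Once the question is matched, the correct answer is a direct lookup in the decrypted database.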

The code to carry out the brute-force search can be found in /src/.

Hardware

I bought a Raspberry Pi 2 to run the software, and used a text-to-speech engine to read out the answer into an earpiece.
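
The choice of TTS engine here is an assumption; as one plausible sketch, eSpeak driven from Python:

```python
import subprocess

def speak(text: str) -> None:
    """Read the matched answer into the earpiece. eSpeak is an assumed
    engine choice; anything that writes to the default audio device works."""
    subprocess.run(["espeak", text], check=True)

speak("The answer is B.")  # hypothetical output
```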

I actually couldn't get the code to run fast enough on the Raspberry Pi to be useful (a single pass took about 30 seconds). The bottlenecks were OpenCV and Tesseract (the only parts I couldn't optimize), so I ended up having to pipe the image over Wi-Fi to be processed by a laptop in a backpack. The code running on the Pi can be found in ./pi_interface.py.
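
A minimal sketch of what that piping might look like; the address, port, and length-prefixed framing are assumptions, and the real interface is in ./pi_interface.py.

```python
import socket
import struct

LAPTOP = ("192.168.0.42", 9000)  # hypothetical address of the backpack laptop

def send_frame(jpeg_bytes: bytes) -> str:
    """Ship one JPEG-encoded frame to the laptop and read back the answer."""
    with socket.create_connection(LAPTOP) as sock:
        sock.sendall(struct.pack(">I", len(jpeg_bytes)))  # 4-byte length prefix
        sock.sendall(jpeg_bytes)
        return sock.makefile().readline().strip()         # answer as one text line
```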

