Deep Learning Cats Dogs Tutorial on Jetson TX2

Aug 11, 2017

In general it’s not recommended to train neural nets on an embedded platform like the Jetson TX2. I did it for the sake of learning. In fact, this example works OK on the Jetson TX2, and I do recommend it to people who want to learn Caffe.

(Well, I’d already been training the Nintendo DQN directly on a Jetson TX1 before…)

The original tutorial: A Practical Introduction to Deep Learning with Caffe and Python

This tutorial demonstrates how to train an AlexNet to classify images into two classes: cats and dogs. The dataset, which contains 25,000 training images, comes from Kaggle. The tutorial walks you through data preparation, how to define a Caffe model and solver, how to train the model, and how to use the trained model for prediction. The model is first trained from scratch (with random initialization), and then transfer learning is applied to improve the result. Overall I think this is the best Caffe tutorial I’ve come across so far.

The original source code provided along with the tutorial seemed to be written for Python 2 only. When I tried to run the code on the Jetson TX2 with Python 3, I had to fix a few things, namely:

  • I checked out the code into the /home/nvidia/project folder, so I had to modify all file paths in the code to match my set-up.
  • I added () to all print statements, as required by Python 3.
  • In code/create_lmdb.py, the map_size passed to lmdb.open() needed to be reduced.
  • Also in code/create_lmdb.py, I added encode(‘ascii’) in in_txn.put(), which is also required by Python 3 (see the sketch after this list).
  • In code/make_predictions_?.py, I explicitly open the input/mean.binaryproto file in binary mode (‘rb’). Otherwise I’d get a ‘utf-8’ decoding error when running the code.
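
Below is a minimal sketch of what those create_lmdb.py and make_predictions fixes look like under Python 3. The loop body is simplified: train_data and make_datum() are stand-ins for the tutorial’s actual image list and datum-building code, the paths are illustrative, and the map_size value is just an example.

```python
import lmdb
from caffe.proto import caffe_pb2

# --- code/create_lmdb.py (sketch) ---
train_lmdb = '/home/nvidia/project/deeplearning-cats-dogs-tutorial/input/train_lmdb'

# Reduced map_size (in bytes): the original value tried to reserve far more
# address space than needed; 10 GB (example value) is plenty for this dataset.
in_db = lmdb.open(train_lmdb, map_size=int(1e10))
with in_db.begin(write=True) as in_txn:
    for in_idx, img_path in enumerate(train_data):   # train_data: stand-in list of image paths
        datum = make_datum(img_path)                 # stand-in for the tutorial's datum-building code
        # Python 3: LMDB keys must be bytes, hence the added encode('ascii')
        in_txn.put('{:0>5d}'.format(in_idx).encode('ascii'),
                   datum.SerializeToString())
in_db.close()

# --- code/make_predictions_?.py (sketch) ---
# Open mean.binaryproto in binary mode ('rb'); opening it as text is what
# triggers the 'utf-8' decoding error mentioned above.
mean_blob = caffe_pb2.BlobProto()
with open('/home/nvidia/project/deeplearning-cats-dogs-tutorial/input/mean.binaryproto', 'rb') as f:
    mean_blob.ParseFromString(f.read())
```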

You can get a copy of the modified code from my GitHub repository: https://github.com/jkjung-avt/deeplearning-cats-dogs-tutorial.

With the modifications mentioned above, I was able to train the models on my Jetson TX2 in a reasonable time frame. More specifically, ‘max_iter’ was set to 40,000 in the Caffe solver files, and it took roughly 40 hours for the 40,000 iterations of training to complete on my Jetson TX2. (I used ‘sudo nvpmodel -m 0’ to set the Jetson TX2 to maximum performance mode.)
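
For reference, ‘max_iter’ is set in the solver prototxt. The sketch below shows roughly where it sits; only the max_iter: 40000 value comes from the paragraph above, while the file name and all other fields are typical Caffe solver settings shown purely as placeholders.

```
# caffe_models/caffe_model_1/solver_1.prototxt (sketch; the file name and all
# values except max_iter are illustrative placeholders)
net: "caffe_models/caffe_model_1/caffenet_train_val_1.prototxt"
base_lr: 0.001        # placeholder learning rate
lr_policy: "step"     # placeholder learning-rate policy
max_iter: 40000       # ~40 hours of training on the Jetson TX2
solver_mode: GPU      # train on the TX2's GPU
```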

As a final note, I noticed that the deeplearning-cats-dogs-tutorial code could take up to 6.5GB of memory during training, so it most likely would not work on a Jetson TX1 (which has only 4GB of RAM). If you’d really like to run this code on a Jetson TX1, try reducing the data ‘batch_size’ defined in, say, caffe_models/caffe_model_1/caffenet_train_val_1.prototxt (see the sketch below).
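
The batch_size lives in the data layers of the train/val prototxt. The snippet below is a sketch of what such a layer typically looks like in a Caffe train_val file; the layer name, source path and the value 256 are illustrative, not copied from the repository. Reducing batch_size lowers the per-iteration memory footprint roughly proportionally, at the cost of noisier gradient estimates.

```
# caffe_models/caffe_model_1/caffenet_train_val_1.prototxt (sketch of the
# training data layer; values are illustrative)
layer {
  name: "data"
  type: "Data"
  top: "data"
  top: "label"
  include { phase: TRAIN }
  data_param {
    source: "input/train_lmdb"   # illustrative LMDB path
    backend: LMDB
    batch_size: 256              # reduce this (e.g. to 128 or 64) to fit in the TX1's 4GB of RAM
  }
}
```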

