
Handtrack.js — let the flames dance in your hands


Implementation

# Step 1 : Include handtrack.js

First, include the handtrack.js script in the <head> section of the HTML file.

<script src="https://cdn.jsdelivr.net/npm/handtrackjs/dist/handtrack.min.js"> </script>

Or you can install it via npm for use in a TypeScript / ES6 project:

npm install --save handtrackjs
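
After the npm install, the module can be pulled in with a namespace import, as in the handtrack.js README (a minimal sketch; check against the version you install):

// ES6 / TypeScript import after `npm install --save handtrackjs`
import * as handTrack from "handtrackjs";

// handTrack.load() and model.detect() are then available as shown in the steps below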

# Step 2 : Stream webcam to browser

To stream the webcam into the browser, I use the npm JavaScript module webcam-easy.js, which provides an easy-to-use wrapper for accessing the webcam and taking snapshots. For more details, please refer to my previous blog post.
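
For reference, starting the stream looks roughly like this (a minimal sketch following the webcam-easy README; double-check the API against the version you install):

import Webcam from "webcam-easy";

const webcamElement = document.getElementById("webcam");
const webcam = new Webcam(webcamElement, "user"); // "user" = front-facing camera

// start() asks for camera permission and streams into the <video> element
webcam.start()
  .then(() => console.log("webcam started"))
  .catch(err => console.error(err));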

# Step 3 : Load HandTrack Model

In order to perform hand tracking, we first need to load the pre-trained HandTrack model by calling handTrack.load(modelParams). HandTrack accepts a few optional model parameters:

  • flipHorizontal — default value: true — flip the input horizontally, e.g. for mirrored webcam video

  • imageScaleFactor — default value: 0.7 — reduce the input image size for gains in speed

  • maxNumBoxes — default value: 20 — maximum number of boxes to detect

  • iouThreshold — default value: 0.5 — IoU threshold for non-max suppression

  • scoreThreshold — default value: 0.99 — confidence threshold for predictions

// Override the defaults listed above
const modelParams = {
  flipHorizontal: true,
  maxNumBoxes: 20,
  iouThreshold: 0.5,
  scoreThreshold: 0.8  // lower than the 0.99 default to catch more hands
};

let model;
handTrack.load(modelParams).then(mdl => {
  model = mdl;
  console.log("model loaded");
});

# Step 4 : Hand detection

Next, we feed the webcam stream through the HandTrack model to perform hand detection by calling model.detect(video). It takes an input image element (an img, video, or canvas tag) and returns an array of bounding boxes, each with a class name and confidence score.

model.detect(webcamElement).then(predictions => {
  console.log("Predictions: ", predictions);
  showFire(predictions);
});

The returned predictions look like this:

[{
  bbox: [x, y, width, height],
  class: "hand",
  score: 0.8380282521247864
}, {
  bbox: [x, y, width, height],
  class: "hand",
  score: 0.74644153267145157
}]
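
The call above processes a single frame. To keep tracking as the hands move, run detection in a loop; a minimal sketch using requestAnimationFrame, reusing the model and webcamElement names from the earlier steps:

function runDetection() {
  model.detect(webcamElement).then(predictions => {
    showFire(predictions);
    // Queue the next detection once this frame has been handled
    requestAnimationFrame(runDetection);
  });
}

Kick runDetection() off once the model has loaded and the webcam has started.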

# Step 5 : Show magic fire

The predictions above give us the bounding box of each hand, which we can now use to show the fire GIF image in your hand.

HTML

Overlay the canvas layer on top of the webcam element:

<video id="webcam" autoplay playsinline width="640" height="480"></video>
<div id="canvas" width="640" height="480"></div>
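
For the div to actually sit on top of the video, it needs to be absolutely positioned (width/height attributes have no effect on a <div>, so its size is set in CSS as well). A minimal sketch, assuming a wrapper element that is not shown in the original markup:

/* Hypothetical wrapper around the <video> and the #canvas overlay */
.webcam-container {
  position: relative;
  width: 640px;
  height: 480px;
}

#canvas {
  position: absolute;
  top: 0;
  left: 0;
  width: 640px;
  height: 480px;
}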

JavaScript

Set the size and position of each fireElement and append it to the canvas layer:

let handCount = 0;
let fireElements = [];

function showFire(predictions) {
  // Rebuild the overlay when the number of detected hands changes
  if (handCount != predictions.length) {
    $("#canvas").empty();
    fireElements = [];
  }
  handCount = predictions.length;

  for (let i = 0; i < predictions.length; i++) {
    let fireElement;
    if (fireElements.length > i) {
      fireElement = fireElements[i];
    } else {
      fireElement = $("<div class='fire_in_hand'></div>");
      fireElements.push(fireElement);
      fireElement.appendTo($("#canvas"));
    }

    // bbox is [x, y, width, height]; take the center of the box as the
    // hand point (assumed here; the original defines hand_center_point elsewhere)
    const bbox = predictions[i].bbox;
    const hand_center_point = [bbox[1] + bbox[3] / 2, bbox[0] + bbox[2] / 2];

    // Anchor the bottom-center of the fire GIF to the hand point
    const fireSizeWidth = parseFloat(fireElement.css("width"));
    const fireSizeHeight = parseFloat(fireElement.css("height"));
    const firePositionTop = hand_center_point[0] - fireSizeHeight;
    const firePositionLeft = hand_center_point[1] - fireSizeWidth / 2;
    fireElement.css({ top: firePositionTop, left: firePositionLeft, position: "absolute" });
  }
}

CSS

Set the background-image to the fire.gif image:

.fire_in_hand {
 width: 300px;
 height: 300px;
 background-image: url(../images/fire.gif);
 background-position: center center;
 background-repeat: no-repeat;
 background-size: cover;
}
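
To wire everything together, page startup could look roughly like this (a sketch, assuming the webcam, modelParams, and runDetection pieces from the earlier steps):

$(document).ready(() => {
  // Step 2: start streaming the webcam
  webcam.start()
    // Step 3: load the pre-trained HandTrack model
    .then(() => handTrack.load(modelParams))
    .then(mdl => {
      model = mdl;
      // Steps 4 & 5: detect hands and render the fire, frame after frame
      runDetection();
    })
    .catch(err => console.error(err));
});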

That’s pretty much it for the code! Now you should be good to go and show the magic fire in your hands!

