
Make a JavaScript Facial Recognition App that Works Like the Ones in the Movies



Photo by Harishan Kobalasingam on Unsplash

When we watch movies set in the future, we often see characters scanning faces on their computer screens to identify criminals or other people they want to pursue. Facial recognition technology has now matured to the point where this is a reality: we can build apps that recognize faces with great accuracy. It is no longer only in the realm of science fiction.

To make a facial recognition app, we can use a library like face-api.js, located at https://github.com/justadudewhohacks/face-api.js/ . It works both directly in the browser and in Node.js, and is built on top of TensorFlow.js. Instead of training models ourselves, we can load pre-trained model weights into the app and then detect faces by calling the library's functions. Training your own models is beyond the scope of this article, but the repository linked above has a weights folder with pre-trained weights that you can use in your app.
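To get a feel for the API before building the full app, here is a minimal sketch of how face-api.js is typically used: load the pre-trained weights, then run detection on an image element. The /weights path and the photo element ID are placeholder assumptions for this illustration:

import * as faceapi from "face-api.js";

async function detectFaces() {
  // Load the pre-trained tiny face detector weights
  // (the path is an assumption; point it at wherever you serve the weights folder)
  await faceapi.loadTinyFaceDetectorModel("/weights");
  // Any image element on the page (hypothetical ID for this sketch)
  const input = document.getElementById("photo");
  // Returns an array with one result per detected face
  const detections = await faceapi.detectAllFaces(
    input,
    new faceapi.TinyFaceDetectorOptions()
  );
  console.log(`Found ${detections.length} face(s)`);
}

detectFaces();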

In this article, we will make a simple facial recognition app that detects the emotion, gender, and age of each face in an image. We will use Vue.js along with the face-api.js library to build the app. To get started, we create the project with the Vue CLI by running npx @vue/cli create facial-recognition-app . In the wizard, we select ‘Manually select features’, then select Babel, CSS preprocessors, Vuex, and Vue Router.

Now we need to install our libraries. In addition to face-api.js, we also need Axios to make requests to our backend and BootstrapVue for styling. To install them, we run:

npm i axios bootstrap-vue face-api.js

Now we move on to creating the app. Create a mixins folder in the src directory and then create a file called requestsMixin.js. In there, add:

const APIURL = "http://localhost:3000";
const axios = require("axios");

export const requestsMixin = {
  methods: {
    getImages() {
      return axios.get(`${APIURL}/images`);
    },
    addImage(data) {
      return axios.post(`${APIURL}/images`, data);
    },
    editImage(data) {
      return axios.put(`${APIURL}/images/${data.id}`, data);
    },
    deleteImage(id) {
      return axios.delete(`${APIURL}/images/${id}`);
    }
  }
};

Here, we have the functions that make requests to our backend, which we will set up later, to save and retrieve the images.
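As a quick illustration of how a component consumes this mixin (the component name here is hypothetical; Home.vue below uses the same pattern):

import { requestsMixin } from "@/mixins/requestsMixin";

export default {
  name: "example", // hypothetical component for this sketch
  mixins: [requestsMixin],
  async beforeMount() {
    // The mixin's methods are merged into the component instance
    const { data } = await this.getImages();
    console.log(data);
  }
};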

Next, in Home.vue, replace the existing code with:

<template>
  <div class="page">
    <h1 class="text-center">Images</h1>
    <div class="clearfix">
      <b-button-toolbar class="button-toolbar float-left">
        <input type="file" style="display: none" ref="file" @change="onChangeFileUpload($event)" />
        <b-button variant="primary" @click="$refs.file.click()">Upload Images</b-button>
      </b-button-toolbar>
      <img ref="image" :src="form.image" class="photo float-left" />
    </div>
    <div v-if="loaded">
      <b-card v-for="(img, index) in images" :key="img.id">
        <div class="row">
          <div class="col-md-6">
            <img :src="img.image" alt="Image" class="photo" :ref="`photo-${index}`" />
          </div>
          <div class="col-md-6">
            <h3>Faces</h3>
            <b-list-group>
              <b-list-group-item v-for="(d, i) of img.detections" :key="i">
                <h4>Face {{i + 1}}</h4>
                <ul class="detection">
                  <li>Age: {{d.age.toFixed(0)}}</li>
                  <li>Gender: {{d.gender}}</li>
                  <li>Gender Probability: {{(d.genderProbability*100).toFixed(2)}}%</li>
                  <li>
                    Expressions:
                    <ul>
                      <li
                        v-for="key of Object.keys(d.expressions)"
                        :key="key"
                      >{{key}}: {{(d.expressions[key]*100).toFixed(2)}}%</li>
                    </ul>
                  </li>
                </ul>
              </b-list-group-item>
            </b-list-group>
          </div>
        </div>
        <br />
        <b-button variant="primary" @click="detectFace(index)">Detect Face</b-button>
        <b-button variant="danger" @click="deleteOneImage(img.id)">Delete</b-button>
      </b-card>
    </div>
    <div v-else>
      <p>Loading image data...</p>
    </div>
  </div>
</template>
<script>
import { requestsMixin } from "@/mixins/requestsMixin";
import * as faceapi from "face-api.js";

const WEIGHTS_URL = "http://localhost:8081/weights";

export default {
  name: "home",
  mixins: [requestsMixin],
  computed: {
    images() {
      return this.$store.state.images;
    }
  },
  async beforeMount() {
    // Load the pre-trained models before the component renders
    await faceapi.loadTinyFaceDetectorModel(WEIGHTS_URL);
    await faceapi.loadFaceLandmarkTinyModel(WEIGHTS_URL);
    await faceapi.loadFaceLandmarkModel(WEIGHTS_URL);
    await faceapi.loadFaceRecognitionModel(WEIGHTS_URL);
    await faceapi.loadFaceExpressionModel(WEIGHTS_URL);
    await faceapi.loadAgeGenderModel(WEIGHTS_URL);
    await faceapi.loadFaceDetectionModel(WEIGHTS_URL);
    await this.getAllImages();
    this.loaded = true;
  },
  data() {
    return {
      form: {},
      loaded: false
    };
  },
  methods: {
    async deleteOneImage(id) {
      await this.deleteImage(id);
      this.getAllImages();
    },
    async getAllImages() {
      const { data } = await this.getImages();
      this.$store.commit("setImages", data);
      if (this.$refs.file) {
        this.$refs.file.value = "";
      }
    },
    onChangeFileUpload($event) {
      const file = $event.target.files[0];
      const reader = new FileReader();
      reader.onload = async () => {
        // Convert the uploaded file to a base64 string and save it to the backend
        this.$refs.image.src = reader.result;
        this.form.image = reader.result;
        await this.addImage(this.form);
        this.getAllImages();
        this.form.image = "";
      };
      reader.readAsDataURL(file);
    },
    async detectFace(index) {
      const input = this.$refs[`photo-${index}`][0];
      const options = new faceapi.TinyFaceDetectorOptions({
        inputSize: 128,
        scoreThreshold: 0.3
      });
      const detections = await faceapi
        .detectAllFaces(input, options)
        .withFaceLandmarks()
        .withFaceExpressions()
        .withAgeAndGender()
        .withFaceDescriptors();
      this.images[index].detections = detections;
      await this.editImage(this.images[index]);
      this.getAllImages();
    }
  }
};
</script>
<style>
.photo {
  max-width: 200px;
  margin-bottom: 10px;
}
</style>

This is where the image recognition magic happens. In the beforeMount hook, we load all the pre-trained model data that face-api.js needs to do facial recognition. The WEIGHTS_URL is just the weights folder of the https://github.com/justadudewhohacks/face-api.js/ repository served by an HTTP server. We can download the whole repository as a Zip file and extract the weights folder from it. Then install http-server by running npm i -g http-server, go to the weights folder, and run http-server --cors so that we serve the files without the browser getting CORS errors when downloading them. Note that it might take a few seconds to load the files.
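Putting those steps together, the commands look roughly like this (the extracted folder name is an assumption and depends on where you unzip the repository):

npm i -g http-server
cd face-api.js-master/weights
http-server --cors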

In the template, we have an “Upload Images” button to load an image. The button opens the file open dialog, then onChangeFileUpload is called to convert the file to a base64 string, which is then saved to the server. Note that this is only done to keep the tutorial app simple. For production purposes, the image should probably be uploaded as a file and stored by a back end app.
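As a hedged sketch of that production approach, you could upload the raw file with FormData instead of a base64 string. The /upload endpoint here is hypothetical; our json-server backend does not provide it:

const axios = require("axios");

// Hypothetical multipart upload; "/upload" is an assumed endpoint,
// not part of this tutorial's backend.
async function uploadFile(file) {
  const formData = new FormData();
  formData.append("image", file);
  return axios.post("http://localhost:3000/upload", formData, {
    headers: { "Content-Type": "multipart/form-data" }
  });
}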

Once the images are loaded, they’re displayed in the cards. Each card has a “Detect Face” button. When it’s clicked, the detectFace function is called, which runs face-api.js’s detectAllFaces function. Note that we use TinyFaceDetectorOptions to shrink the images before doing facial recognition, which speeds it up. We chain the functions that start with with so that we get human-readable insights like age, gender, and expressions. Once that’s done, the cards are reloaded with the results.
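Based on the fields our template reads, each entry in the detections array looks roughly like this (an illustrative shape, not the library's full result object):

// Approximate shape of one detection result (illustrative, not exhaustive)
const exampleDetection = {
  age: 31.4,                // estimated age in years
  gender: "male",           // "male" or "female"
  genderProbability: 0.98,  // confidence of the gender estimate
  expressions: {            // probability per expression
    neutral: 0.85,
    happy: 0.1,
    sad: 0.01
    // ...plus angry, fearful, disgusted, and surprised
  }
  // ...plus the detection box, landmarks, and face descriptor
};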

Next, in App.vue, replace the existing code with:

<template>
  <div id="app">
    <b-navbar toggleable="lg" type="dark" variant="info">
      <b-navbar-brand to="/">Facial Recognition App</b-navbar-brand>
      <b-navbar-toggle target="nav-collapse"></b-navbar-toggle>
      <b-collapse id="nav-collapse" is-nav>
        <b-navbar-nav>
          <b-nav-item to="/" :active="path == '/'">Home</b-nav-item>
        </b-navbar-nav>
      </b-collapse>
    </b-navbar>
    <router-view />
  </div>
</template>

<script>
export default {
  data() {
    return {
      path: this.$route && this.$route.path
    };
  },
  watch: {
    $route(route) {
      this.path = route.path;
    }
  }
};
</script>

<style lang="scss">
.page {
  padding: 20px;
  margin: 0 auto;
}

button,
.btn.btn-primary {
  margin-right: 10px !important;
}

.button-toolbar {
  margin-bottom: 10px;
}
</style>

This adds a Bootstrap navigation bar to the top of our pages, and a router-view to display the routes we define.

Next, in main.js, replace the code with:

import Vue from 'vue'
import App from './App.vue'
import router from './router'
import store from './store'
import BootstrapVue from "bootstrap-vue";
import "bootstrap/dist/css/bootstrap.css";
import "bootstrap-vue/dist/bootstrap-vue.css";
Vue.use(BootstrapVue);

Vue.config.productionTip = false;

new Vue({
  router,
  store,
  render: h => h(App)
}).$mount('#app')

This adds the libraries we installed to our app so we can use them in our components.

In router.js, we replace the existing code with:

import Vue from "vue";
import Router from "vue-router";
import Home from "./views/Home.vue";

Vue.use(Router);

export default new Router({
  mode: "history",
  base: process.env.BASE_URL,
  routes: [
    {
      path: "/",
      name: "home",
      component: Home
    }
  ]
});

This defines our home route.

Then, in store.js, we replace the existing code with:

import Vue from "vue";
import Vuex from "vuex";

Vue.use(Vuex);

export default new Vuex.Store({
  state: {
    images: []
  },
  mutations: {
    setImages(state, payload) {
      state.images = payload;
    }
  },
  actions: {}
});

This adds our images state to the store so we can observe it in the computed block of Home.vue. We have the setImages mutation to update the images state, and we use it in components by calling this.$store.commit("setImages", data); as we did in Home.vue.
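To recap the pattern in one place, here is a minimal sketch of how a component reads and writes this store, mirroring what Home.vue does:

export default {
  computed: {
    // Read the images array reactively from the store
    images() {
      return this.$store.state.images;
    }
  },
  methods: {
    refresh(data) {
      // Commit the mutation to replace the images state
      this.$store.commit("setImages", data);
    }
  }
};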

Finally, in index.html, replace the existing code with:

<!DOCTYPE html>
<html lang="en">
  <head>
    <meta charset="utf-8" />
    <meta http-equiv="X-UA-Compatible" content="IE=edge" />
    <meta name="viewport" content="width=device-width,initial-scale=1.0" />
    <link rel="icon" href="<%= BASE_URL %>favicon.ico" />
    <title>Facial Detection App</title>
  </head>
  <body>
    <noscript>
      <strong
        >We're sorry but vue-face-api-tutorial-app doesn't work properly without
        JavaScript enabled. Please enable it to continue.</strong
      >
    </noscript>
    <div id="app"></div>
    <!-- built files will be auto injected -->
  </body>
</html>

This changes the title of the app.


Photo by Marius Ciocirlan on Unsplash

With that done, we can start our app by running npm run serve.

To start the back end, we first install the json-server package globally by running npm i -g json-server. Then, go to our project folder and run:

json-server --watch db.json

In db.json, change the text to:

{
  "images": [
  ]
}

This makes the images endpoints used by requestsMixin.js available.
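With that single images array, json-server automatically exposes the REST routes our mixin relies on:

GET    /images       (list all images)
POST   /images       (add an image)
PUT    /images/:id   (replace an image)
DELETE /images/:id   (delete an image)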

After all the hard work, we can upload images, click “Detect Face”, and see the detected faces listed with their age, gender, and expressions next to each photo.


face-api.js is quite accurate in detecting the gender and facial expressions of both male and female faces, and the predicted ages are also quite close to what you would expect. This is exciting given how little work it took to build this app, thanks to the developers of face-api.js.

