
Object Detection Using TensorFlow.js


This is the fourth post of the image processing series, from zero to one.

In this post, we will build an image object detection system with TensorFlow.js using a pre-trained model.

To start with, there are many ways to run TensorFlow in a webpage. One is to include ml5.js (visit https://ml5js.org/), a wrapper around tf.js (TensorFlow.js) and p5.js that operates directly on HTML elements.

However, we would like to keep the power on the backend, so that we can run these models behind an API, in backend processes, and so on.

Therefore, in the first half of the post we will create a UI using React.js and Material-UI, and in the second half we will create an API in Node.js to power the UI.

Let's start with building a sample React project.

FRONTEND PART:-

If you have followed along with my previous articles, the React project will be fairly easy to build.

  1. Open the terminal and run

create-react-app image_classification_react_ui

This will create a react project to work with.

2. Let’s install the required dependencies

npm install @material-ui/core
npm install --save isomorphic-fetch es6-promise
Note: isomorphic-fetch is required to call the object detection API endpoint from React code.

3. Open the project in your favorite editor and let’s create 2 folders

  1. container — This will contain a file, ImageOps.jsx , which has all the frontend UI code.

  2. utils — This will contain a file, Api.js , which is used to call the object detection endpoint.

└── src
    ├── containers
        ├── ImageOps.jsx
    ├── utils
        ├── Api.js

Let’s look into the ImageOps.jsx code and understand it.

import React from 'react';
 
import Container from '@material-ui/core/Container';
import Grid from '@material-ui/core/Grid';
 
import Card from '@material-ui/core/Card';
import CardContent from '@material-ui/core/CardContent';
import Typography from '@material-ui/core/Typography';
import Button from '@material-ui/core/Button';
import { red } from '@material-ui/core/colors';
 
import {api} from '../utils/Api';
 
import Table from '@material-ui/core/Table';
import TableBody from '@material-ui/core/TableBody';
import TableCell from '@material-ui/core/TableCell';
import TableHead from '@material-ui/core/TableHead';
import TableRow from '@material-ui/core/TableRow';
import Paper from '@material-ui/core/Paper';
import CircularProgress from '@material-ui/core/CircularProgress';
 
 
export default class ImageOps extends React.Component {
  
   constructor(props) {
       super(props);
 
       this.state = {
           image_object: null,
           image_object_details: {},
           active_type: null
       }
   }
 
   updateImageObject(e) {
       const file  = e.target.files[0];
       const reader = new FileReader();
      
       reader.readAsDataURL(file);
       reader.onload = () => {
           this.setState({image_object: reader.result, image_object_details: {}, active_type: null});
       };
 
   }
 
   processImageObject(type) {
 
       this.setState({active_type: type}, () => {
 
           if(!this.state.image_object_details[this.state.active_type]) {
               api("detect_image_objects", {
                   type,
                   data: this.state.image_object
               }).then((response) => {
                  
                   const filtered_data = response;
                   const image_details = this.state.image_object_details;
      
                   image_details[filtered_data.type] = filtered_data.data;
      
                   this.setState({image_object_details: image_details });
               });
           }
       });
   }
 
   render() {
       return (
           <Container maxWidth="md">
               <Grid container spacing={2}>
                   <Grid item xs={12}>
                       <CardContent>
                           <Typography variant="h4" color="textPrimary" component="h4">
                               Object Detection Tensorflow
                           </Typography>
                       </CardContent>
                   </Grid>
                   <Grid item xs={12}>
                       {this.state.image_object &&
                           <img src={this.state.image_object} alt="" height="500px"/>
                       }
                   </Grid>
                   <Grid item xs={12}>
                       <Card>
                           <CardContent>
                               <Button variant="contained"
                                    component='label' // lets the Button wrap the hidden file input
                                   >
                                   Upload Image
                                   <input accept="image/jpeg" onChange={(e) =>  this.updateImageObject(e)} type="file" style={{ display: 'none' }} />
                               </Button>
                           </CardContent>
                       </Card>
                   </Grid>
                   <Grid item xs={3}>
                       <Grid container justify="center" spacing={3}>
                           <Grid item >
                                {this.state.image_object && <Button onClick={() => this.processImageObject("imagenet")} variant="contained" color="primary">
                                   Get objects with ImageNet
                               </Button>}
                           </Grid>
                           <Grid item>
                                {this.state.image_object && <Button onClick={() => this.processImageObject("coco-ssd")} variant="contained" color="secondary">
                                   Get objects with Coco SSD
                               </Button>}
                           </Grid>
                       </Grid>
                   </Grid>
                   <Grid item xs={9}>
                       <Grid container justify="center">
                           {this.state.active_type && this.state.image_object_details[this.state.active_type] &&
                               <Grid item xs={12}>
                                   <Card>
                                       <CardContent>
                                           <Typography variant="h4" color="textPrimary" component="h4">
                                               {this.state.active_type.toUpperCase()}
                                           </Typography>
                                           <ImageDetails type={this.state.active_type} data = {this.state.image_object_details[this.state.active_type]}></ImageDetails>
                                       </CardContent>
                                   </Card>
                               </Grid>
                           }
                           {this.state.active_type && !this.state.image_object_details[this.state.active_type] &&
                               <Grid item xs={12}>
                                   <CircularProgress
                                       color="secondary"
                                   />
                               </Grid>
                           }
                       </Grid>
                   </Grid>
               </Grid>
           </Container>
       )
   }
}
 
class ImageDetails extends React.Component {
  
   render() {
 
       console.log(this.props.data);
 
       return (
           <Grid item xs={12}>
               <Paper>
                   <Table>
                   <TableHead>
                       <TableRow>
                       <TableCell>Objects</TableCell>
                       <TableCell align="right">Probability</TableCell>
                       </TableRow>
                   </TableHead>
                   <TableBody>
                       {this.props.data.map((row) => {
                           if (this.props.type === "imagenet") {
                               return (
                                   <TableRow key={row.className}>
                                       <TableCell component="th" scope="row">
                                       {row.className}
                                       </TableCell>
                                       <TableCell align="right">{row.probability.toFixed(2)}</TableCell>
                                   </TableRow>
                               )
                           } else if(this.props.type === "coco-ssd") {
                               return (
                                    <TableRow key={row.class}>
                                       <TableCell component="th" scope="row">
                                       {row.class}
                                       </TableCell>
                                       <TableCell align="right">{row.score.toFixed(2)}</TableCell>
                                   </TableRow>
                               )
                           }
                           })
                       }
                   </TableBody>
                   </Table>
               </Paper>
            
           </Grid>
       )
   }
}
 

Note: Here is the GitHub repo link of the above — https://github.com/overflowjs-com/image_object_detction_react_ui . If you find the above difficult to understand, I highly recommend reading Part 2 and Part 1.

In render, we have created a Grid with three rows, the first row containing the heading.

The second contains the image to display:

<Grid item xs={12}>
  {this.state.image_object &&
    <img src={this.state.image_object} alt="" height="500px"/>}                
</Grid>

Here we display the image only when an image has been uploaded, i.e. an image object is available in the state.

Next grid contains a button to upload a file and update uploaded file to the current state.

<Grid item xs={12}>
    <Card>
        <CardContent>
            <Button variant="contained"
                component='label' // lets the Button wrap the hidden file input
                >
                Upload Image
                <input accept="image/jpeg" onChange={(e) =>  this.updateImageObject(e)} type="file" style={{ display: 'none' }} />
            </Button>
        </CardContent>
    </Card>
</Grid>

On the upload button, the file input's change event calls updateImageObject, which stores the currently selected image in the state.

updateImageObject(e) {
    const file = e.target.files[0];
    const reader = new FileReader();

    reader.readAsDataURL(file);
    reader.onload = () => {
        this.setState({image_object: reader.result, image_object_details: {}, active_type: null});
    };
}

In the above code, we read the current file object from the file input and load its data into the component state. As a new image is uploaded, we reset image_object_details and active_type so that fresh operations can be applied to the uploaded image.
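As an aside, FileReader.readAsDataURL produces a string of the form data:image/jpeg;base64,&lt;payload&gt;. A minimal Node sketch of that round trip (purely illustrative, not part of the app's code):

```javascript
// A data URL is a media-type prefix followed by the base64 payload.
const payload = Buffer.from([0xff, 0xd8, 0xff]); // the first bytes of any JPEG
const dataUrl = 'data:image/jpeg;base64,' + payload.toString('base64');

// The backend will need the raw bytes back: strip the prefix, then decode.
const base64 = dataUrl.replace(/^data:image\/\w+;base64,/, '');
const bytes = Buffer.from(base64, 'base64');

console.log(dataUrl);               // data:image/jpeg;base64,/9j/
console.log(bytes.equals(payload)); // true
```

This is exactly the kind of string the UI will post to the API later in this post.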

Below is the next grid that contains code for two buttons for each model.

<Grid item xs={3}>
        <Grid container justify="center" spacing={3}>
            <Grid item >
                {this.state.image_object && <Button onClick={() => this.processImageObject("imagenet")} variant="contained" color="primary">
                    Get objects with ImageNet
                </Button>}
            </Grid>
            <Grid item> 
                {this.state.image_object && <Button onClick={() => this.processImageObject("coco-ssd")} variant="contained" color="secondary">
                    Get objects with Coco SSD
                </Button>}
            </Grid>
        </Grid>
    </Grid>
    <Grid item xs={9}>
        <Grid container justify="center">
            {this.state.active_type && this.state.image_object_details[this.state.active_type] &&
                <Grid item xs={12}>
                    <Card>
                        <CardContent>
                            <Typography variant="h4" color="textPrimary" component="h4">
                                {this.state.active_type.toUpperCase()}
                            </Typography>
                            <ImageDetails type={this.state.active_type} data = {this.state.image_object_details[this.state.active_type]}></ImageDetails>
                        </CardContent>
                    </Card>
                </Grid>
            }
            {this.state.active_type && !this.state.image_object_details[this.state.active_type] && 
                <Grid item xs={12}>
                    <CircularProgress
                        color="secondary"
                    />
                </Grid>
            }
     </Grid>
</Grid>

Here we divide the grid into two parts, 3 columns and 9 columns, out of the 12-column parent.

The first grid, with 3 columns, contains the two buttons:

<Grid container justify="center" spacing={3}>
    <Grid item >
        {this.state.image_object && <Button onClick={() => this.processImageObject("imagenet")} variant="contained" color="primary">
            Get objects with ImageNet
        </Button>}
    </Grid>
    <Grid item> 
        {this.state.image_object && <Button onClick={() => this.processImageObject("coco-ssd")} variant="contained" color="secondary">
            Get objects with Coco SSD
        </Button>}
    </Grid>
</Grid>

We analyze the image with both the ImageNet and Coco SSD models and compare the outputs.

Each button has an onClick event handler that calls a function, processImageObject(), which takes the name of the model as a parameter.

processImageObject(type) {
    this.setState({active_type: type}, () => {
        api("detect_image_objects", {
            type,
            data: this.state.image_object
        }).then((response) => {

            const filtered_data = response;
            const image_details = this.state.image_object_details;

            image_details[filtered_data.type] = filtered_data.data;

            this.setState({image_object_details: image_details });
        });
    });
}

We set the state property active_type to the currently selected model.

processImageObject takes the current image from the state and sends it to the API function (shown next); the API endpoint is called detect_image_objects, and we process the response and show it in the UI.

The response from the API is fetched and stored in the state as image_object_details.

We store each API response keyed by the type of model (imagenet/coco-ssd).

These buttons are only rendered when image_object is present in the state:

{this.state.image_object &&
  <Button onClick={() => this.processImageObject("imagenet")} variant="contained" color="primary">
    Get objects with ImageNet
  </Button>}

Below is another grid we have created:

<Grid item xs={9}>
    <Grid container justify="center">
        {this.state.active_type && this.state.image_object_details[this.state.active_type] &&
            <Grid item xs={12}>
                <Card>
                    <CardContent>
                        <Typography variant="h4" color="textPrimary" component="h4">
                            {this.state.active_type.toUpperCase()}
                        </Typography>
                        <ImageDetails  type={this.state.active_type} data = {this.state.image_object_details[this.state.active_type]}></ImageDetails>
                    </CardContent>
                </Card>
            </Grid>
        }
        {this.state.active_type && !this.state.image_object_details[this.state.active_type] && 
            <Grid item xs={12}>
                <CircularProgress
                    color="secondary"
                />
            </Grid>
        }
    </Grid>
</Grid>

Here we check whether a model (active_type) has been selected; once the API has returned details for it, we show the object details. For this, we have created a component, ImageDetails.

Let’s look into ImageDetails component code which is easy to understand.

class ImageDetails extends React.Component {
  
   render() {
 
       console.log(this.props.data);
 
       return (
           <Grid item xs={12}>
               <Paper>
                   <Table>
                   <TableHead>
                       <TableRow>
                       <TableCell>Objects</TableCell>
                       <TableCell align="right">Probability</TableCell>
                       </TableRow>
                   </TableHead>
                   <TableBody>
                       {this.props.data.map((row) => {
                           if (this.props.type === "imagenet") {
                               return (
                                   <TableRow key={row.className}>
                                       <TableCell component="th" scope="row">
                                       {row.className}
                                       </TableCell>
                                       <TableCell align="right">{row.probability.toFixed(2)}</TableCell>
                                   </TableRow>
                               )
                           } else if(this.props.type === "coco-ssd") {
                               return (
                                    <TableRow key={row.class}>
                                       <TableCell component="th" scope="row">
                                       {row.class}
                                       </TableCell>
                                       <TableCell align="right">{row.score.toFixed(2)}</TableCell>
                                   </TableRow>
                               )
                           }
                           })
                       }
                   </TableBody>
                   </Table>
               </Paper>
            
           </Grid>
       )
   }
}

This component shows the details received from the model: the name of each object and its probability. Based on the type of model we are working with, it displays one of two different outputs, both handled in this class.

4. The last step is to write the API.js wrapper to do a server-side call.

import fetch from  'isomorphic-fetch';

const BASE_API_URL = "http://localhost:4000/api/"
 
export function api(api_end_point, data) {
 
   return fetch(BASE_API_URL+api_end_point,
       {
           method: 'POST',
           headers: {
               'Content-Type': 'application/json'
           },
           body:JSON.stringify(data)
       }).then((response) => {
           return response.json();
       });
}

In this sample code, we provide a wrapper over the fetch API: the function takes an API endpoint and data, constructs the complete URL, and returns the JSON response sent by the API.
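To make the wrapper's behavior concrete, the sketch below rebuilds the same URL and fetch options as a pure function (buildRequest is only for illustration; it is not part of the project):

```javascript
const BASE_API_URL = 'http://localhost:4000/api/';

// Build the same URL and fetch options that api() passes to fetch.
function buildRequest(endpoint, data) {
  return {
    url: BASE_API_URL + endpoint,
    options: {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify(data)
    }
  };
}

// '<base64>' is a placeholder for the real data-URL string from the UI.
const req = buildRequest('detect_image_objects', { type: 'imagenet', data: '<base64>' });
console.log(req.url); // http://localhost:4000/api/detect_image_objects
```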

Final UI will look like this


Object detection using Tensorflow.js

BACKEND PART:-

Now that we have our UI in place, let's get started creating an API endpoint using tensorflow.js, which will look like:

http://localhost:4000/api/detect_image_objects
  1. The first step is to choose a boilerplate that uses express.js and gives us the ability to just write the route and the object detection logic. We are using https://github.com/developit/express-es6-rest-api for this tutorial. Let's clone it:

git clone https://github.com/developit/express-es6-rest-api image_detection_tensorflow_api

2. Now install all dependencies by running

cd image_detection_tensorflow_api
npm install

3. Go to config.json in the project root and change port to 4000 and bodyLimit to 10000kb (the base64-encoded images are large).

Note: We will use the pre-trained imagenet and coco-ssd models. Finding multiple objects in an image is hard; ImageNet models are famous for detecting a single dominant object (animals, other objects). Still, both models are trained on very large, diverse datasets, so if your object isn't detected correctly, don't worry :sweat_smile:.

  4. To start with TensorFlow, we need to update Node if you are using an old version. Once your Node version is good, run the command below to install the TensorFlow.js Node bindings (the pre-trained models we will use come from https://github.com/tensorflow/tfjs-models):

npm install @tensorflow/tfjs-node

Note: You can install tfjs-node as per your system Linux/Windows/Mac using — https://www.npmjs.com/package/@tensorflow/tfjs-node

  5. Let's now install both models that we are going to use, so run:

npm install --save @tensorflow-models/mobilenet
npm install --save @tensorflow-models/coco-ssd
  6. We need to install the module below too, as a required dependency:

    npm install --save base64-to-uint8array
  7. Now go to index.js under the src > api folder and create a new endpoint:

api.post('/detect_image_objects', async (req, res) => {
  const data = req.body.data;
  const type = req.body.type;
  const objectDetect = new ObjectDetectors(data, type);
  const results = await objectDetect.process();
  res.json(results);
});

Here we instantiate the ObjectDetectors class, passing the two arguments received from the UI: the base64-encoded image and the type of model.
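So the endpoint expects a JSON body of roughly this shape (the base64 payload is truncated here for readability):

```json
{
  "type": "imagenet",
  "data": "data:image/jpeg;base64,/9j/4AAQ..."
}
```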

  8. Now let's create the ObjectDetectors class. Go to the src > api folder and create an object_detector folder. Inside object_detector, create a new file, ObjectDetectors.js:

const tf = require('@tensorflow/tfjs-node');
 
const cocossd = require('@tensorflow-models/coco-ssd');
const mobilenet = require('@tensorflow-models/mobilenet');
 
import toUint8Array from 'base64-to-uint8array';
 
 
export default class ObjectDetectors {
 
   constructor(image, type) {
 
       this.inputImage = image;
       this.type = type;
   }
  
   async loadCocoSsdModal() {
       const modal = await cocossd.load({
           base: 'mobilenet_v2'
       })
       return modal;
   }
 
   async loadMobileNetModal() {
       const modal = await mobilenet.load({
           version: 1,
           alpha: 0.5, // must be one of 0.25, 0.50, 0.75 or 1.0
       })
       return modal;
   }
 
   getTensor3dObject(numOfChannels) {
 
       const imageData = this.inputImage.replace('data:image/jpeg;base64,','')
                           .replace('data:image/png;base64,','');
      
       const imageArray = toUint8Array(imageData);
      
       const tensor3d = tf.node.decodeJpeg( imageArray, numOfChannels );
 
       return tensor3d;
   }
 
   async process() {
        
       let predictions = null;
       const tensor3D = this.getTensor3dObject(3);
 
       if(this.type === "imagenet") {
 
           const model =  await this.loadMobileNetModal();
           predictions = await model.classify(tensor3D);
 
       } else {
 
           const model =  await this.loadCocoSsdModal();
           predictions = await model.detect(tensor3D);
       }
 
       tensor3D.dispose();
 
      return {data: predictions, type: this.type};
   }
}

We have a constructor that takes two parameters: the base64-encoded image and the model type.

A process function is called, which in turn calls getTensor3dObject(3).

Note: Here 3 is the number of channels; in the UI we limited uploads to JPEG, which is a 3-channel format. We are not processing 4-channel images (PNG), but you could build this easily: send the image type in the API call and adjust the functions below as needed.
getTensor3dObject(numOfChannels) {
    const imageData = this.inputImage.replace('data:image/jpeg;base64,','')
                        .replace('data:image/png;base64,','');

    const imageArray = toUint8Array(imageData);

    const tensor3d = tf.node.decodeJpeg(imageArray, numOfChannels);

    return tensor3d;
}

In this function, we remove the data-URL prefix from the base64 image, convert it to a byte array, and build our tensor3d.

Our pre-trained models consume either a tensor3d object, an HTML <img> tag, or an HTML video tag. Since we are calling them from a Node.js API, we have a base64 image that must be converted to a tensor3d object.

Thankfully, tensorflow.js provides a function for this: decodeJpeg.

TensorFlow provides other functions for the same job; see https://js.tensorflow.org/api_node/1.2.7/#node.decodeJpeg for more details.

decodeJpeg converts the ArrayBuffer of our 3-channel image into a tensor3d object.
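The API currently assumes JPEG input. If you later send the image type from the UI, as the earlier note suggests, a small helper could inspect the data-URL prefix to decide between tf.node.decodeJpeg and tf.node.decodePng (or tf.node.decodeImage, which sniffs the format itself). The helper below is a hypothetical sketch:

```javascript
// Infer the image format from the data-URL prefix, so the backend could
// pick tf.node.decodeJpeg (3 channels) or tf.node.decodePng (4 channels).
// Illustrative only; the article's API handles JPEG exclusively.
function imageFormat(dataUrl) {
  const match = /^data:image\/(\w+);base64,/.exec(dataUrl);
  return match ? match[1] : null;
}

console.log(imageFormat('data:image/png;base64,iVBOR'));  // png
console.log(imageFormat('data:image/jpeg;base64,/9j/'));  // jpeg
```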

if(this.type === "imagenet") {
 const model =  await this.loadMobileNetModal();
 predictions = await model.classify(tensor3D);
} else {
 const model =  await this.loadCocoSsdModal();
 predictions = await model.detect(tensor3D);
}

Based on the type of model picked, we load that model at API-call time. You could load the models once when the API server starts, but for this blog I am loading them as the API is called, so the API may take time to respond.
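If you do want to avoid the per-call load cost, a common pattern is to cache the load() promise so every request shares one loaded model. A minimal sketch, where fakeLoad stands in for an expensive call like cocossd.load or mobilenet.load:

```javascript
// Cache the load() promise so concurrent requests share one in-flight load.
function memoizeLoader(load) {
  let cached = null;
  return () => {
    if (!cached) cached = load();
    return cached;
  };
}

// Stand-in for an expensive model load such as mobilenet.load().
let loads = 0;
const fakeLoad = () => { loads += 1; return Promise.resolve({ name: 'model' }); };

const getModel = memoizeLoader(fakeLoad);

const first = getModel();
const second = getModel();
console.log(loads);            // 1
console.log(first === second); // true
```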

Below are the outputs I have got so far.

IMAGENET MODEL OUTPUT


imagenet model object detection

The imagenet output provides the name of each object and its probability; three objects were identified here.
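For reference, a classification response has roughly this shape; the values below are made up for illustration:

```javascript
// Hypothetical ImageNet-style response, mirroring what the UI table renders.
const response = {
  type: 'imagenet',
  data: [
    { className: 'sports car', probability: 0.81 },
    { className: 'convertible', probability: 0.09 }
  ]
};

// The React table renders className and probability.toFixed(2) per row.
const rows = response.data.map(r => [r.className, r.probability.toFixed(2)]);
console.log(rows);
```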

COCO-SSD MODEL OUTPUT-

If you read more about coco-ssd, you'll see it can identify multiple objects, even similar ones, along with the rectangle coordinates where each object is located.

Read more here — https://github.com/tensorflow/tfjs-models/tree/master/coco-ssd


coco-ssd model object detection

Here you can see it has identified 6 persons, with their positions as rectangles. You can use these coordinates for any purpose, as they give you both the object name and the object location.

You can use any image library to draw these rectangles and build some cool image-effect applications around these details.
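coco-ssd reports each bbox as [x, y, width, height] in the coordinates of the input image. If you draw them over a preview rendered at a different size (like our 500px-high img tag), the boxes need scaling first; scaleBox below is a hypothetical helper, not part of the article's code:

```javascript
// Scale a coco-ssd bbox ([x, y, width, height] in input-image pixels)
// from the original image size to a display size.
function scaleBox([x, y, w, h], fromW, fromH, toW, toH) {
  const sx = toW / fromW;
  const sy = toH / fromH;
  return [x * sx, y * sy, w * sx, h * sy];
}

// A detection on a 1000x800 image drawn on a 500x400 preview:
console.log(scaleBox([100, 80, 200, 160], 1000, 800, 500, 400)); // [ 50, 40, 100, 80 ]
```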

You can try my previous tutorials on Cloudinary and OpenCV with React.js and Node.js, and use that knowledge to build cool stuff.

Get yourself added to our 2500+ subscriber family to learn and grow, and please hit the share button on this article to share it with your co-workers, friends, and others.

Check out articles on Javascript, Angular, Node.js, and Vue.js.

For more articles, stay tuned to overflowjs.com.

Thank you!

