Face Recognition Software Tutorial

Introduction To Recognition API

The Sighthound Cloud Recognition API makes it easy to add face recognition to your applications. In this tutorial, we will take you through the simple steps needed to recognize people in photos: uploading images of the people to be recognized, adding those people to a group, training the group, and finally confirming that recognition works by testing against new photos of those people.

Core Concepts

OBJECTS
Objects are the things that will be recognized by the API. Currently, the API only supports Objects with a type of “person”, but future updates will add additional types. For the purposes of this face recognition tutorial, Objects will be referred to as Persons or Person Objects.

IMAGES
Images play an important role in the Recognition API. For every Person that a developer creates, one or more Images of that person should be uploaded to the system and linked to their Person Object.

GROUPS
Groups are simply categories (e.g., Family, Friends, or Employees) that contain one or more Persons. All recognition requests require that a Group be specified.

TRAINING
Training is a computer vision term for the process of converting the features and metadata found in the Images into the mathematical models used for face recognition. After adding Images, Persons, and Groups, the training endpoint must be called to make the data available for recognition requests.
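
Tying these four concepts together, a complete integration boils down to four HTTP calls. The condensed sketch below uses the same Python 2 httplib pattern as the tutorial code later in this document; the token, filenames, and IDs are placeholders, not values the API requires:

import base64
import httplib
import json

_token = "YourSighthoundCloudToken"  # placeholder token

def call(method, path, body=None):
    # One connection per request keeps this sketch simple.
    conn = httplib.HTTPSConnection("dev.sighthoundapi.com")
    conn.request(method, path, body, {"Content-type": "application/json",
                                      "X-Access-Token": _token})
    return conn.getresponse().read()

# 1. Upload an Image and link it to a Person Object (objectId "Christy").
image = base64.b64encode(open("christy_01.jpg", "rb").read())  # placeholder file
print call("PUT", "/v1/image/christy_01.jpg?train=manual"
                  "&objectType=person&objectId=Christy",
           json.dumps({"image": image}))

# 2. Add the Person to a Group named "family".
print call("PUT", "/v1/group/family", json.dumps({"objectIds": ["Christy"]}))

# 3. Train the Group to make its data available for recognition.
print call("POST", "/v1/group/family/training")

# 4. Recognize faces in a new photo against the trained Group.
test = base64.b64encode(open("new_photo.jpg", "rb").read())  # placeholder file
print call("POST", "/v1/recognition?groupId=family", json.dumps({"image": test}))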

Prerequisites

  • Download and extract the tutorial code and photos to your computer. If using your own images, it’s best to find at least 20 photos for each person you want to be recognized. Images that show the subjects’ faces at various angles and in a range of lighting conditions will help improve accuracy.
  • Create a Sighthound Cloud API Token. You will substitute your unique token in place of 'YourSighthoundCloudToken' in the code.

Let’s Get Started

Initial Setup

Select a programming language below to see the initial steps needed to get started with the tutorial.

Node.js
Python
Java

* Install Node.js: https://nodejs.org/

* Extract the downloaded tutorial code zip file to your computer, open a terminal, 
  and 'cd' to '{extracted-zip-path}/code-samples/node/'

* Type and execute the command 'npm install' to install the required packages

* (optional) Install GraphicsMagick to draw detection boxes, person names, 
  and confidence scores on the final recognition images.
  - Mac: 'brew install graphicsmagick'
  - Windows: http://www.graphicsmagick.org/INSTALL-windows.html

* Open recognition.js and replace 'YourSighthoundCloudToken' with your unique Token

* When you are ready to run the tutorial code, execute the command 'node recognition.js'


* Install Python 2.7.11+: https://www.python.org/downloads/

* Extract the downloaded tutorial code zip file to your computer, open a terminal, 
  and 'cd' to '{extracted-zip-path}/code-samples/python/'

* Open recognition.py and replace 'YourSighthoundCloudToken' with your unique Token

* When you are ready to run the tutorial code, execute the command 'python recognition.py ../../images'


* Make sure the Java SDK (JDK) is installed: http://www.oracle.com/technetwork/java/javase/downloads/index.html

* Extract the tutorial zip file to your computer, open a terminal, and 'cd' to 
 '{extracted-zip-path}/code-samples/java/'

* Open Recognition.java and replace 'YourSighthoundCloudToken' with your unique Token

* When you are ready to compile the code, execute `javac -cp ".:javax.json-1.0.4.jar" Recognition.java`
        
* Run the tutorial code by executing `java -cp ".:javax.json-1.0.4.jar" Recognition`

For your selected programming language, a few variables, callbacks, helper functions, and imports are defined at the top of the 'recognition' source code file.

Node.js
Python
Java

// Filename: recognition.js
'use strict';
const fs = require('fs');
const path = require('path');
const async = require('async');
const request = require('request');
const gm = require('gm');

// TODO: Replace TOKEN with your own Sighthound Cloud Token
const recoConfig = {
  TOKEN: 'YourSighthoundCloudToken', 
  BASE_URL: 'https://dev.sighthoundapi.com/v1'
};

// Define a generic callback to be used for outputting responses and errors
function genericCallback(error, response, body) {
  if (!error && (response.statusCode == 200 || response.statusCode == 204)) {
    console.log(body, '\n');
  } else if (error) {
    console.log(error, '\n');
  } else {
    console.log(response.statusCode, body, '\n');
  }
}


# Filename: recognition.py
import base64
import httplib
import json
import os
import sys

# To annotate test images a recent version of Pillow is required. On OS X
# or Windows, install it via `pip install Pillow`. On Linux, install the
# `python-imaging` package.
from PIL import Image, ImageDraw, ImageFont

# Set this variable to True to print all server responses.
_print_responses = False

# Your Sighthound Cloud token. More information at
# https://www.sighthound.com/support/creating-api-token
_cloud_token = "YourSighthoundCloudToken"

# The cloud server to use, here we set the development server.
_cloud_host = "dev.sighthoundapi.com"

# A set in which to gather object names during step 1.
_object_ids = set()

# The name of the group to which we will add objects (step 2), train (step 3),
# and test with (step 4).
_group_name = "family"

# The directory where annotated test images will be written.
_output_folder = "out"


###############################################################################
def send_request(request_method, request_path, params):
    """A utility function to send API requests to the Sighthound Cloud server.

    This function will abort the script with sys.exit(1) on API errors.
    
    @param  request_method  The request method, "PUT" or "POST".
    @param  request_path    The URL path for the API request.
    @param  params          The parameters of the API request, if any.
    @return response_body   The body of the response.
    """
    # Send the request.
    headers = {"Content-type": "application/json",
               "X-Access-Token": _cloud_token}
    conn = httplib.HTTPSConnection(_cloud_host)
    conn.request(request_method, request_path, params, headers)

    # Process the response.
    response = conn.getresponse()
    body = response.read()
    error = response.status not in [200, 204]

    if _print_responses or error:
        print response.status, body

    if error:
        sys.exit(1)

    return body


###############################################################################
def is_image(filename):
    """A naive utility function to determine images via filename extension.

    @param  filename  The filename to examine.
    @return is_image  True if the file appears to be an image.
    """
    return filename.endswith('.png') or filename.endswith('.jpeg') or \
            filename.endswith('.jpg') or filename.endswith('.bmp')



import java.awt.BasicStroke;
import java.awt.Color;
import java.awt.Font;
import java.awt.Graphics2D;
import java.awt.image.BufferedImage;
import java.io.File;
import java.io.IOException;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.net.URLEncoder;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.HashSet;
import java.util.Set;
import java.util.logging.Logger;

import javax.imageio.ImageIO;
import javax.json.Json;
import javax.json.JsonArray;
import javax.json.JsonArrayBuilder;
import javax.json.JsonObject;
import javax.json.JsonObjectBuilder;
import javax.json.JsonReader;

public class Recognition {
    // TODO: Replace TOKEN with your own Sighthound Cloud Token
    public static final String TOKEN = "YourSighthoundCloudToken";
    public static final String BASE_URL = "https://dev.sighthoundapi.com/v1/";

    // Set minimum confidence threshold needed to have a positive recognition.
    // Any values below this number will be marked as 'Unknown' in the tutorial.
    public static final double recognitionConfidenceThreshold = 0.5;
    // contentType
    private static final String contentTypeStream = "application/octet-stream";
    private static final String contentTypeJson = "application/json";

    // image folder if different from default folder
    private static String imageFolder = null;
    // working folder if different from default folder
    private static String workingFolder = null;
    // java logging
    private static Logger logger = Logger.getLogger(Recognition.class.getName());

    // Create a set of the people we want to recognize. For this tutorial,
    // the person's name will be their Object ID, and it's also the folder name
    // containing their training images.
    private static final Set<File> peoples = new HashSet<File>();

    // Define a generic callback to be used for outputting responses and errors
    private static void genericCallback(boolean error, int statusCode,
            String body) {
        if (!error && (statusCode == 200 || statusCode == 204)) {
            logger.info(body);
        } else if (error) {
            logger.warning(statusCode + "\n" + body);
        } else {
            logger.info(statusCode + "\n" + body);
        }
    }

    private static JsonObject httpCall(String api, String method,
            String contentType, byte[] body) throws IOException {
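        // Open an HTTP connection to the API URL and attach the Sighthound
        // Cloud token and content type as request headers.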
        URL apiURL = new URL(api);
        HttpURLConnection connection = (HttpURLConnection) apiURL
                .openConnection();
        connection.setRequestProperty("Content-Type", contentType);
        connection.setRequestProperty("X-Access-Token", TOKEN);
        connection.setRequestMethod(method);
        connection.setDoInput(true);
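        // If a request body was supplied, stream it to the server.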
        if (body != null) {
            connection.setDoOutput(true);
            connection.setFixedLengthStreamingMode(body.length);
            OutputStream os = connection.getOutputStream();
            os.write(body);
            os.flush();
        }
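        // Read the status code, then parse the JSON success or error body
        // and report it through genericCallback.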
        int statusCode = connection.getResponseCode();
        if (statusCode < 400) {
            JsonReader jReader = Json.createReader(connection.getInputStream());
            JsonObject jsonBody = jReader.readObject();
            genericCallback(false, statusCode, jsonBody.toString());
            return jsonBody;
        } else if (statusCode == 401) {
            genericCallback(true, statusCode, "Invalidated TOKEN");
            return null;
        } else {
            JsonReader jReader = Json.createReader(connection.getErrorStream());
            JsonObject jsonError = jReader.readObject();
            genericCallback(true, statusCode, jsonError.toString());
            return jsonError;
        }
    }

...

Step 1: Upload Images & Link to Persons

The first thing to do is create a unique "objectId" for each Person you upload to the API. This ID can be anything you want, but we'll use their names for this tutorial. You must include this ID in the query string when uploading images so that the API knows who is in the photo. In the code example below, we will upload several photos of Christy, Tristan, Abby, and Kate. This will accomplish two things: it will create four new Person Objects with IDs Christy, Tristan, Abby, and Kate, and associate the uploaded images with each Person to teach the system to recognize them again in the future.
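
Stripped of the queueing logic, each upload in this step is a single authenticated PUT. The Node.js and Java versions below send the raw image bytes (Content-Type 'application/octet-stream'), while the Python version sends a base64-encoded JSON body; both forms target the same endpoint. Here is a minimal Python 2 sketch of the raw-byte variant, with a hypothetical local filename:

import httplib

data = open("christy_01.jpg", "rb").read()  # hypothetical filename
conn = httplib.HTTPSConnection("dev.sighthoundapi.com")
conn.request("PUT",
             "/v1/image/christy_01.jpg?train=manual&objectType=person&objectId=Christy",
             data,
             {"Content-type": "application/octet-stream",
              "X-Access-Token": "YourSighthoundCloudToken"})
response = conn.getresponse()
print response.status, response.read()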

Node.js
Python
Java

// Create an array of the people we want to recognize. For this tutorial, the 
// person's name will be their Object ID, and it's also the folder name 
// containing their training images in the downloadable tutorial code zip file.
const people = ['Christy', 'Tristan', 'Abby', 'Kate'];

function step1_UploadImages() {
  
  // Create a queue to manage calls made to the /image endpoint. This queue
  // sets a limit of 3 concurrent calls.
  const qImages = async.queue((item, callback) => {
    console.log('uploading objectId: ' + item.objectId + ' | imageId: ' +
                item.imageId + ' | path: ' + item.imageLocalPath + '\n');

    // Create a read stream for the image to be uploaded
    const imageFileStream = fs.createReadStream(item.imageLocalPath);

    // Define options used for the API request
    const requestOptions = {
      url: `${recoConfig.BASE_URL}/image/${item.imageId}`,
      headers: {
        'Content-Type': 'application/octet-stream',
        'X-Access-Token': recoConfig.TOKEN
      },
      method: 'PUT',
      qs: {
        objectId: item.objectId,
        objectType: 'person',
        train: 'manual'
      }
    };

    // Pipe the image stream into the request with the options and callback
    imageFileStream.pipe(request(requestOptions, callback));
  }, 3);

  // For each person, get list of images in their folder and add to upload queue.
  // The objectId will be the person's name and the imageId will be the filename.
  people.forEach((name) => {
    const trainingDir = path.join(__dirname,'..','..','images','training',name);
    console.log('Scanning for input files in ', trainingDir);

    fs.readdir(trainingDir, (err, files) => {
      console.log(`Uploading ${files.length} files from '${name}' folder.`);

      // For every image found in folder, add the item to the queue for uploading
      files.forEach((filename) => {
        if (filename.indexOf('.jpg') > -1){
          qImages.push({
            objectId: name, 
            imageId: filename, 
            imageLocalPath: path.join(trainingDir, filename)
          }, genericCallback);
        }
      });
    });
  });

  // Proceed to Step 2 after all items in queue have been processed
  qImages.drain = () => step2_AddObjectsToGroup(people);
}


def step1_upload_images(train_path):
    """Upload all training images.

    @param  train_path  The path to the training image directory. This expects
                        images to be organized in directories by object name as
                        in "sighthound-cloud-tutorial/images/training".
    """
    print "Step 1: Uploading training images"
    # Look for directories in our training folder. The names of each directory
    # will be used as the object id for the images within.
    for name in os.listdir(train_path):
        base_path = os.path.join(train_path, name)
        if os.path.isdir(base_path):
            # Upload all image files within the directory.
            print "  Adding images for object id " + name
            for training_file in os.listdir(base_path):
                file_path = os.path.join(base_path, training_file)
                if is_image(file_path):
                    print "    Uploading file " + training_file
                    add_training_image(file_path, name)

                    # Track all object ids for group creation in step 2.
                    _object_ids.add(name)

    print "Step 1 complete\n"


###############################################################################
def add_training_image(image_path, object_id):
    """Submit an image to Sighthound Cloud for training.

    @param  image_path  File path to the image to analyze. The filename will be
                        used as the image id
    @param  object_id   The id of the object (person) captured by this image.
    """
    base64_image = base64.b64encode(open(image_path, "rb").read())
    params = json.dumps({"image": base64_image})

    url_path = "/v1/image/%s?train=manual&objectType=person&objectId=%s" % \
            (os.path.basename(image_path), object_id)
    send_request("PUT", url_path, params)


private static void step1_UploadImages() throws IOException,
        InterruptedException {
  logger.info("*** STEP 1 - Upload Images ***");
  // Set the maximum number of concurrent uploads
  int concurrentUploads = 3;
  int concurrentCount = 0;
  Thread[] pool = new Thread[concurrentUploads];
  for (File person : peoples) {
      String objectId = person.getName();
      String requestParams = "?train=manual&objectType=person&objectId="
              + objectId;
      for (File image : person.listFiles()) {
          if (image.isFile() && !image.isHidden()) {
              final String api = BASE_URL + "image/"
                      + URLEncoder.encode(image.getName(), "UTF-8")
                      + requestParams;
              final byte[] data = Files.readAllBytes(Paths.get(image
                      .getCanonicalPath()));
              pool[concurrentCount] = new Thread() {
                  public void run() {
                      try {
                          httpCall(api, "PUT", contentTypeStream, data);
                      } catch (IOException e) {
                          logger.warning(e.getMessage());
                      }
                  };
              };
              pool[concurrentCount].start();
              concurrentCount++;
              if (concurrentCount >= concurrentUploads) {
                  for (int c = 0; c < concurrentCount; c++) {
                      pool[c].join();
                  }
                  concurrentCount = 0;
              }
          }
      }
      for (int c = 0; c < concurrentCount; c++) {
          pool[c].join();
      }
  }
}

Step 2: Add Persons to a Group

Now that we have four Person Objects in the system with several photos each, let's add them to a new Group called "family". Objects (People, in this tutorial) can be placed in one or more Groups.
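
Behind the scenes, this step is a single PUT whose JSON body lists the member Object IDs. Using the send_request helper from the Python tutorial code, it reduces to the following sketch:

params = json.dumps({"objectIds": ["Christy", "Tristan", "Abby", "Kate"]})
send_request("PUT", "/v1/group/family", params)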

Node.js
Python
Java

function step2_AddObjectsToGroup(objects) {
  console.log('*** STEP 2 - Adding People to Group "family" ***');
  const groupId = 'family';

  // Define options used for the API request
  const requestOptions = {
    body: JSON.stringify({objectIds: objects}),
    url: `${recoConfig.BASE_URL}/group/${groupId}`,
    headers: {
      'Content-Type': 'application/json',
      'X-Access-Token': recoConfig.TOKEN
    },
    method: 'PUT'
  };

  // Perform the API request using requestOptions and an anonymous callback
  request(requestOptions, (error, response, body) => {
    genericCallback(error, response, body);
    step3_TrainGroup(groupId);
  });
}


def step2_create_group():
    """Create a group named via _group_name with the members from step 1."""
    print "Step 2: Creating group"
    print "  Adding objects %s to group %s" % (str(_object_ids), _group_name)

    params = json.dumps({"objectIds": list(_object_ids)})
    send_request("PUT", "/v1/group/" + _group_name, params)
    
    print "Step 2 complete\n"


private static void step2_AddObjectsToGroup() throws IOException {
    logger.info("*** STEP 2 - Adding People to Group 'family' ***");
    String groupId = "family";
    final String api = BASE_URL + "group/"
            + URLEncoder.encode(groupId, "UTF-8");
    JsonArrayBuilder jsonArrayBuilder = Json.createArrayBuilder();
    for (File person : peoples) {
        jsonArrayBuilder.add(person.getName());
    }

    JsonObjectBuilder jsonObjectBuilder = Json.createObjectBuilder();
    jsonObjectBuilder.add("objectIds", jsonArrayBuilder);
    byte[] data = jsonObjectBuilder.build().toString().getBytes("UTF-8");
    httpCall(api, "PUT", contentTypeJson, data);

}

Step 3: Train the Group

After the Group has been created, it is time to train the system to recognize these people in future requests. "Training" is a computer vision term for the process of converting image data into mathematical models that a computer system can use to detect and identify objects. The Sighthound Cloud API requires that a Group be retrained after new Objects are added to it, or when additional images are uploaded and linked to existing Objects as in Step 1 of this tutorial.
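
The training call itself is a body-less POST to the Group's training endpoint. With the send_request helper from the Python tutorial code, retraining after any change is a single line; if you skip it, recognition requests will not see the newly uploaded data:

send_request("POST", "/v1/group/family/training", None)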

Node.js
Python
Java

function step3_TrainGroup(groupId) {
  console.log(`*** Step 3 - Training Group '${groupId}' ***`);

  // Define options used for the API request
  const requestOptions = {
    url: `${recoConfig.BASE_URL}/group/${groupId}/training`,
    headers: {
      'Content-Type': 'application/json',
      'X-Access-Token': recoConfig.TOKEN
    },
    method: 'POST'
  };

  // Perform the API request using requestOptions and an anonymous callback
  request(requestOptions, (error, response, body) => {
    genericCallback(error, response, body);
    step4_TestReco(groupId);
  });
}


def step3_train_group():
    """Train the group we created in step 2 to prepare it for recognition."""
    print "Step 3: Training group"
    print "  Sending train request for group %s" % _group_name

    send_request("POST", "/v1/group/%s/training" % _group_name, None)
    
    print "Step 3 complete\n"


private static void step3_TrainGroup(String groupId) throws IOException {
    logger.info("*** Step 3 - Training Group '${groupId}' ***");
    final String api = BASE_URL + "group/"
            + URLEncoder.encode(groupId, "UTF-8") + "/training";
    httpCall(api, "POST", contentTypeJson, null);

}

Step 4: Test the Recognition

At this point, the API is trained to recognize the four family members. The final step of this tutorial uploads several images to the recognition endpoint for testing: one image for each person, plus a group shot that includes all four people and a new person who wasn't trained. The 'family' Group is specified in the recognition request so that the API knows which people to look for. Depending on which programming language you are using, a folder named "out" may be created in the same directory as your source code file after the photos are processed. This folder will contain images generated from the recognition response, with bounding boxes, person names, and recognition confidence scores drawn over the people detected in the test images.
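
The annotation code below relies on three fields for each detected face: the recognized 'objectId', the bounding 'vertices', and the 'recognitionConfidence' score. The following Python dictionary illustrates the response shape, reconstructed from the fields the tutorial code parses; the coordinates and confidence are placeholder values, not real API output:

# Illustrative only -- coordinates and confidence are placeholder values.
response = {
    "objects": [
        {
            "objectId": "Christy",
            "faceAnnotation": {
                "bounding": {
                    "vertices": [{"x": 100, "y": 50},   # top-left
                                 {"x": 200, "y": 50},   # top-right
                                 {"x": 200, "y": 180},  # bottom-right
                                 {"x": 100, "y": 180}]  # bottom-left
                },
                "recognitionConfidence": 0.93
            }
        }
    ]
}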

Node.js
Python
Java

function step4_TestReco(groupId) {
  console.log('*** Step 4 - Test the Face Recognition ***');

  // Define the recognition callback
  function recoCallback(error, response, body) {
    if (!error && (response.statusCode == 200)) {
      console.log('Recognition success:', body);
      if (gm) {
        const objects = JSON.parse(body).objects;
        annotateImage(this.data.imageLocalPath, objects);
      } else {
        console.warn('\n*** Install GraphicsMagick to draw face recognition ' + 
          'results on images.')
      }
    } else if (error) {
      console.error(error);
    } else {
      console.error('error: ', response.statusCode, response.statusMessage);
    }
  }

  // Create a queue to manage calls made to the /recognition endpoint. This 
  // queue sets a limit of 1 concurrent upload.
  const qReco = async.queue((item, callback) => {
    console.log('\nUsing "' + item.groupId + '" group to recognize faces in ' +
      item.imageLocalPath + '\n');

    // Create a read stream for the image to be uploaded
    const imageFileStream = fs.createReadStream(item.imageLocalPath);

    // Define options used for the API request
    const requestOptions = {
      url: `${recoConfig.BASE_URL}/recognition`,
      headers: {
        'Content-Type': 'application/octet-stream',
        'X-Access-Token': recoConfig.TOKEN
      },
      method: 'POST',
      qs: {
        groupId: item.groupId
      }
    };

    // Pipe the image stream into the request with requestOptions and callback
    imageFileStream.pipe(request(requestOptions, callback));
  }, 1);

  
  // Get paths to the images to test recognition against
  const recoDir = path.join(__dirname, '..','..','images', 'reco-test');

  fs.readdir(recoDir, (err, files) => {
    console.log(`Recognizing faces in ${files.length} images`);

    // Add each image to the queue to be sent for face recognition
    files.forEach((filename) => {
      if (filename.indexOf('.jpg') > -1){
        qReco.push({
          groupId: groupId, 
          imageLocalPath: path.join(recoDir,filename)
        }, recoCallback);
      }
    });
  });

  // OPTIONAL - Using GraphicsMagick, markup the image with bounding boxes, 
  // names, and confidence scores.
  function annotateImage(imageFilePath, objects) {
    const inPath = path.parse(imageFilePath);
    const outPath = path.join(__dirname, 'out', inPath.name + '.png');

    // Set minimum confidence threshold needed to have a positive recognition.
    // Any values below this number will be marked as 'Unknown' in the tutorial.
    const recognitionConfidenceThreshold = 0.5;

    // Load the source image and prepare to draw annotations on it.
    const outputImage = gm(imageFilePath)
      .autoOrient()
      .strokeWidth('2px')
      .fill('transparent')
      .font('Courier', 20);

    // Loop over each detected person and draw annotations
    objects.forEach((face) => {
      const confidence = face.faceAnnotation.recognitionConfidence;
      let name = face.objectId;

      // Set the bounding box color for positive recognitions
      outputImage.stroke('#73c7f1');

      // For low confidence scores, name the face 'Unknown' and use the color 
      // yellow for the bounding box
      if (confidence < recognitionConfidenceThreshold) {
        name = 'Unknown';
        outputImage.stroke('yellow');
        console.log('\nAn "Unknown" person was found since recognition ' +
          'confidence ' + confidence + ' is below the minimum threshold of ' +
          recognitionConfidenceThreshold);
      } else {
        console.log(`\nRecognized '${name}' with confidence ${confidence}`);
      }
      
      const verticesXY = face.faceAnnotation.bounding.vertices.map(
        vertice => [vertice.x, vertice.y]
      );
      console.log('Bounding vertices:', verticesXY);

      // Draw bounding box onto face
      outputImage.drawPolygon(verticesXY);

      // Get the x,y coordinate of the bottom left vertex
      const bottomLeft = verticesXY[3];
      const x = bottomLeft[0];
      const y = bottomLeft[1];

      // Draw objectId (name) and confidence score onto image
      outputImage.drawText(x, y + 16, name + '\n' + confidence);
    });

    // Save annotated image to local filesystem
    outputImage.write(outPath, (err) => {
      if (err){
        console.log('*** Face Recognition results not drawn on image. ' +
          'Install GraphicsMagick to do so.\n');
      }
    });
  }
}

// Start the recognition tutorial
step1_UploadImages();


def step4_test(test_path):
    """Send images to our newly trained group to test its recognition."""
    print "Step 4: Beginning tests"
    # Create the output directory.
    if not os.path.exists(_output_folder):
        os.mkdir(_output_folder)


    # Submit all images in the test directory for recognition.
    for test_file in os.listdir(test_path):
        file_path = os.path.join(test_path, test_file)
        if not is_image(file_path):
            continue

        print "  Submitting test image " + test_file
        base64_image = base64.b64encode(open(file_path, "rb").read())
        params = json.dumps({"image": base64_image})
        url_path = "/v1/recognition?groupId=" + _group_name
        response = json.loads(send_request("POST", url_path, params))

        # Annotate the image
        image = Image.open(file_path)
        font = ImageFont.load_default()
        draw = ImageDraw.Draw(image)

        for face in response['objects']:
            # Retrieve and draw a bounding box for the detected face.
            json_vertices = face['faceAnnotation']['bounding']['vertices']
            vert_list = [(point['x'], point['y']) for point in json_vertices]
            draw.polygon(vert_list)

            # Retrieve and draw the id and confidence of the recognition.
            name = face['objectId']
            confidence = face['faceAnnotation']['recognitionConfidence']
            draw.text(vert_list[0], "%s - %s" % (name, confidence), font=font)

        image.save(os.path.join(_output_folder, test_file))

    print "Step 4 complete\n"


###############################################################################
if __name__ == '__main__':
    # The entry point for the recognition sample. This expects to be called
    # with the "images" directory provided with this sample, or a directory
    # of identical structure.
    if len(sys.argv) != 2:
        print "Usage: python recognition.py "
        sys.exit(2)

    root_dir = sys.argv[1]

    step1_upload_images(os.path.join(root_dir, "training"))
    step2_create_group()
    step3_train_group()
    step4_test(os.path.join(root_dir, "reco-test"))


private static void step4_TestReco(String groupId) throws IOException {
    logger.info("*** Step 4 - Test the Face Recognition ***");
    final String api = BASE_URL + "recognition?groupId="
            + URLEncoder.encode(groupId, "UTF-8");
    File outFolder = new File(workingFolder + File.separator + "out");
    outFolder.mkdir();
    File testFolder = new File(imageFolder + File.separator + "reco-test");
    if (testFolder.exists()) {
        for (File recoFile : testFolder.listFiles()) {
            if (recoFile.isFile() && !recoFile.isHidden()) {
                final byte[] data = Files.readAllBytes(Paths.get(recoFile
                        .getCanonicalPath()));
                JsonObject result = httpCall(api, "POST",
                        contentTypeStream, data);
                if (result != null) {
                    annotateImage(outFolder, recoFile,
                            result.getJsonArray("objects"));
                }
            }
        }
    } else {
        logger.info("Failed to find images at "
                + testFolder.getCanonicalPath());
    }
}

// markup the image with bounding boxes, names, and confidence scores.
private static void annotateImage(File outFolder, File image,
        JsonArray objects) throws IOException {
    if (outFolder.isDirectory() && image.isFile() && objects != null
            && objects.size() > 0) {
        BufferedImage imageBuffer = ImageIO.read(image);
        Graphics2D g = imageBuffer.createGraphics();
        g.setStroke(new BasicStroke(2, BasicStroke.CAP_ROUND,
                BasicStroke.JOIN_ROUND));
        g.setFont(new Font("Courier", Font.BOLD, 20));
        for (int oi = 0; oi < objects.size(); oi++) {
            JsonObject object = objects.getJsonObject(oi);
            JsonObject faceAnnotation = object
                    .getJsonObject("faceAnnotation");
            JsonArray vertices = faceAnnotation.getJsonObject("bounding")
                    .getJsonArray("vertices");
            double confidence = faceAnnotation.getJsonNumber(
                    "recognitionConfidence").doubleValue();
            int nPoints = vertices.size();
            int[] xPoints = new int[nPoints];
            int[] yPoints = new int[nPoints];
            for (int ni = 0; ni < nPoints; ni++) {
                JsonObject point = vertices.getJsonObject(ni);
                xPoints[ni] = point.getInt("x");
                yPoints[ni] = point.getInt("y");
            }
            String name = object.getString("objectId");
            if (confidence < recognitionConfidenceThreshold) {
                name = "Unknown";
                g.setColor(Color.YELLOW);
                logger.info("An 'Unknown' person was found since recognition "
                        + "confidence "
                        + confidence
                        + " is below the minimum threshold of "
                        + recognitionConfidenceThreshold);
            } else {
                g.setColor(Color.decode("#73c7f1"));
                logger.info("Recognized " + name + " with confidence "
                        + confidence);
            }
            g.drawPolygon(xPoints, yPoints, nPoints);
            int x = xPoints[nPoints - 1];
            int y = yPoints[nPoints - 1];
            g.drawString(name, x, y + 16);
            g.drawString(String.valueOf(confidence), x, y + 36);
        }
        ImageIO.write(imageBuffer, "JPG",
                new File(outFolder.getCanonicalPath() + File.separator
                        + image.getName()));
    }
}

public static void main(String[] args) throws IOException,
        InterruptedException {
    if (workingFolder == null) {
        workingFolder = new File(".").getCanonicalPath();
    }
    if (imageFolder == null) {
        imageFolder = workingFolder + File.separator + ".."
                + File.separator + ".." + File.separator + "images";
    }
    logger.info(imageFolder);
    File images = new File(imageFolder + File.separator + "training");
    if (images.exists()) {
        for (File person : images.listFiles()) {
            if (person.isDirectory()) {
                peoples.add(person);
            }
        }
        step1_UploadImages();
        step2_AddObjectsToGroup();
        step3_TrainGroup("family");
        step4_TestReco("family");
    } else {
        logger.info("Failed to find images at " + images.getCanonicalPath());
    }
  }
}