Computer Vision Cloud API with over 1.5 billion requests processed

Bring Sighthound’s computer vision capabilities into your project with our detection and recognition APIs for cloud-connected applications.

Detection API & Recognition API

Sighthound Cloud offers a Detection API for locating faces and people in images, and a Recognition API that developers can use for vehicle recognition applications.


Quickstart Guides

Detection API

View the Detection API code examples below or refer to the API documentation for full details.

Code Examples

var image = { image: "https://www.example.com/path/to/image.jpg" };
var xmlhttp = new XMLHttpRequest();
var result;

xmlhttp.onreadystatechange = function () {
  if (xmlhttp.readyState === 4 && xmlhttp.status === 200) {
    result = xmlhttp.responseText;
  }
};

xmlhttp.open("POST", "https://dev.sighthoundapi.com/v1/detections?type=face,person&faceOption=landmark,gender");
xmlhttp.setRequestHeader("Content-type", "application/json");
xmlhttp.setRequestHeader("X-Access-Token", "YOUR-CLOUD-TOKEN");
xmlhttp.send(JSON.stringify(image));

URL and Headers

POST https://dev.sighthoundapi.com/v1/detections?type=face,person&faceOption=landmark,gender
Content-Type: application/json
X-Access-Token: Your-API-Key

Optional URL Parameters

type

A comma-separated list of object categories to detect. Valid options are ‘all’, ‘face’, and ‘person’. The default is ‘all’.

faceOption

For type 'face', additional detections can be performed by passing a comma-separated list of values. Valid options are ‘gender’ and ‘landmark’. The default is the face bounding box only.
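Combined, the optional parameters form the query string appended to the endpoint. A minimal sketch of assembling it; the buildDetectionsUrl helper is illustrative, not part of the API:

var BASE_URL = "https://dev.sighthoundapi.com/v1/detections";

// Illustrative helper: compose the detections URL from the optional parameters.
// Omitting a parameter falls back to the API defaults described above.
function buildDetectionsUrl(types, faceOptions) {
  var params = [];
  if (types && types.length) params.push("type=" + types.join(","));
  if (faceOptions && faceOptions.length) params.push("faceOption=" + faceOptions.join(","));
  return params.length ? BASE_URL + "?" + params.join("&") : BASE_URL;
}

var url = buildDetectionsUrl(["face", "person"], ["landmark", "gender"]);
// → "https://dev.sighthoundapi.com/v1/detections?type=face,person&faceOption=landmark,gender"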

Body Parameters

image

The image to analyze. This can be a URL to an image (authentication data in URL is accepted) or inline as base64 encoded data.
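For inline data, the body carries the same single image field, only with base64 content instead of a URL. A minimal Node.js sketch; the inlineImageBody helper is illustrative, and the stand-in bytes take the place of a real image file:

// Illustrative helper: wrap raw image bytes as the base64-encoded JSON body.
function inlineImageBody(imageBuffer) {
  return JSON.stringify({ image: imageBuffer.toString("base64") });
}

// In practice the buffer would be read from disk; here a few JPEG
// magic bytes stand in for a real image.
var body = inlineImageBody(Buffer.from([0xff, 0xd8, 0xff]));
// → '{"image":"/9j/"}'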

Result

The result is a JSON array of all detected objects and information about the processed image.

{
  "image": { "width": 1280, "height": 960, "orientation": 1 },
  "objects": [
    { "type": "person",
      "boundingBox": { "x": 363, "y": 182, "height": 778, "width": 723 } },
    { "type": "face",
      "boundingBox": { "x": 508, "y": 305, "height": 406, "width": 406 },
      "attributes": {
        "gender": "male", "genderConfidence": 0.9883, "frontal": true },
      "landmarks": {
        "faceContour": [[515,447],[517,491]...[872,436]],
        "noseBridge": [[710,419],[711,441]...[712,487]],
        "noseBall": [[680,519],[696,522]...[742,518]],
        "eyebrowRight": [[736,387],[768,376]...[854,394]],
        "eyebrowLeft": [[555,413],[578,391]...[679,391]],
        "eyeRight": [[753,428],[774,414]...[777,432]],
        "eyeRightCenter": [[786,423]],
        "eyeLeft": [[597,435],[617,423]...[619,442]],
        "eyeLeftCenter": [[630,432]],
        "mouthOuter": [[650,590],[674,572]...[675,600]],
        "mouthInner": [[661,587],[697,580]...[697,584]]
      }
    }
  ]
}

objects

An array of all detected objects. Each entry includes the type of the detection (face or person) and a boundingBox giving the object's location in the image. The x, y, width, and height values are defined in a coordinate space with (0,0) at the top left corner of the image.

image

The width, height, and orientation of the processed image. Orientation defaults to 1; otherwise it is the value found in the image's Exif data, and indicates that bounding boxes have been translated to match that coordinate space.
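Putting the result fields together, the objects array can be filtered by detection type. A minimal sketch over a response shaped like the abbreviated example above (landmarks omitted for brevity):

// A response shaped like the documented example.
var response = {
  image: { width: 1280, height: 960, orientation: 1 },
  objects: [
    { type: "person", boundingBox: { x: 363, y: 182, height: 778, width: 723 } },
    { type: "face",
      boundingBox: { x: 508, y: 305, height: 406, width: 406 },
      attributes: { gender: "male", genderConfidence: 0.9883, frontal: true } }
  ]
};

// Keep only face detections and read each top-left corner and size.
var faces = response.objects.filter(function (obj) { return obj.type === "face"; });
faces.forEach(function (face) {
  var box = face.boundingBox;
  console.log("face at (" + box.x + "," + box.y + "), " + box.width + "x" + box.height);
});
// prints: face at (508,305), 406x406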

Have questions? Need help?

Get Support