
Pipelines

ezML lets you build versatile functionality using pipelines that stack multiple layers (functions) on top of each other.

Platform Creation

The easiest way to create a pipeline is to use the pipe builder on our platform. Simply log on to https://app.ezml.io and add the layers you require.

Then test it out and make sure it works as expected. Once you are happy with the results, you can export the code and use it in your own projects.

Configure Yourself

If you want to create a pipeline manually, that isn't difficult either: a pipeline is just an array of layers. Each layer must follow this spec:

{
  "id": "<layer_id>",
  "name": "<name to show up in output>",
  "config": {
    "labels": <face detection: any combination of ["region", "age", "emotion"]; object detection: list of objects to detect>,
    "language": <OCR only: "en" | "ch" | "fr" | "es">
  }
}
  • config should be an empty object for non-configurable layers such as Vehicle Detection
  • layer_id needs to match an ID from the list here
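As a concrete sketch, a two-layer pipeline built in Python might look like the following. The "FACE" ID is taken from the example further down this page; "OCR" and "VEHICLE" are illustrative placeholders, so check the layer ID list for the real values.

```python
# Hypothetical pipeline: face detection followed by OCR.
# "FACE" appears in the example below; "OCR" is an assumed ID for illustration.
pipeline = [
    {"id": "FACE", "name": "face", "config": {"labels": ["region", "age"]}},
    {"id": "OCR", "name": "text", "config": {"language": "en"}},
]

# Non-configurable layers take an empty config object
# ("VEHICLE" is likewise an assumed ID):
vehicle_layer = {"id": "VEHICLE", "name": "vehicles", "config": {}}
```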

How to use (check API endpoint docs for more info)

Send a base64 encoded image to the pipeline endpoint with the pipeline array in the payload. Example:
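A minimal sketch of the request body (the `image` value is the base64 string of your image; the pipeline shown reuses the `FACE` layer from the Python example later on this page):

```json
{
  "image": "<base64-encoded image>",
  "pipeline": [
    {"id": "FACE", "name": "face", "config": {"labels": ["region"]}}
  ]
}
```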

Output Structure

{
    "<first_layer_name>": [
        {
            "bbox": [x1, y1, x2, y2],
            "conf": float,
            "label": "<object>",
            "<second_layer_name>": [
                {
                    "bbox": [x1, y1, x2, y2],
                    "conf": float,
                    "label": "<object>",
                    "<third_layer_name>": [...]
                }
            ]
        },
        ... more objects
    ]
}

Example

from PIL import Image, ImageDraw
import requests
import base64

def image_to_base64(image_path: str) -> str:
    with open(image_path, "rb") as image_file:
        # Read the image, encode it in base64, and convert to string
        return base64.b64encode(image_file.read()).decode('utf-8')


url = "https://gateway.ezml.io/api/v1/functions/pipeline"

payload = {
    "image": image_to_base64("<path_to_image>"),
    # pipeline to find faces and their eyes
    "pipeline": [{"id": "FACE", "name": "face", "config": {"labels": ["region"]}},
                 {"id": "GENERAL-VLM", "name": "eyes", "config": {"labels": ["region"]}}]
}

headers = {
    "Authorization": "Bearer <token from /auth>"
}

res = requests.post(url, json=payload, headers=headers)

# displaying results

image = Image.open("<path_to_image>")
draw = ImageDraw.Draw(image)
w, h = image.size

for face in res.json()["face"]:
    print(f"Face bounding box: {face['bbox']}")

    # display face rectangle
    top_left_x, top_left_y, bottom_right_x, bottom_right_y = face["bbox"]
    draw.rectangle((top_left_x * w, top_left_y * h, bottom_right_x * w, bottom_right_y * h), outline="red")

    for eye in face["eyes"]:
        print(f"Eye bounding box: {eye['bbox']}")

        # display eye rectangle
        top_left_x, top_left_y, bottom_right_x, bottom_right_y = eye["bbox"]
        draw.rectangle((top_left_x * w, top_left_y * h, bottom_right_x * w, bottom_right_y * h), outline="red")
image.show()
Output