
Releases: roboflow/inference

v0.9.16

11 Mar 15:48
14fe2a9

🚀 Added

🎬 InferencePipeline can now process video using your custom logic

Prior to v0.9.16, InferencePipeline could only run inference against Roboflow models. Now you can inject arbitrary logic of your choice and process videos (both files and streams) using a custom function you create. Just look at the example:

import os
import json
from inference.core.interfaces.camera.entities import VideoFrame
from inference import InferencePipeline

TARGET_DIR = "./my_predictions"

class MyModel:

  def __init__(self, weights_path: str):
    self._model = your_model_loader(weights_path)

  def infer(self, video_frame: VideoFrame) -> dict:
    return self._model(video_frame.image)


def save_prediction(prediction: dict, video_frame: VideoFrame) -> None:
  with open(os.path.join(TARGET_DIR, f"{video_frame.frame_id}.json"), "w") as f:
    json.dump(prediction, f)

os.makedirs(TARGET_DIR, exist_ok=True)  # make sure the target directory exists
my_model = MyModel("./my_model.pt")

pipeline = InferencePipeline.init_with_custom_logic(
  video_reference="./my_video.mp4",
  on_video_frame=my_model.infer,   # <-- your custom video frame processing function
  on_prediction=save_prediction,  # <-- your custom sink for predictions
)

# start the pipeline
pipeline.start()
# wait for the pipeline to finish
pipeline.join()

That's not everything! Remember our workflows feature? We've just added workflows support to InferencePipeline (in experimental mode). Check InferencePipeline.init_with_workflow(...) to test the feature.
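Here is a minimal sketch of how it can be wired up (parameter names here may differ - check the InferencePipeline.init_with_workflow(...) signature in your installed version before relying on them):

from inference import InferencePipeline

# a hedged sketch: parameter names below are assumptions - verify against the
# actual InferencePipeline.init_with_workflow(...) signature
pipeline = InferencePipeline.init_with_workflow(
    video_reference="./my_video.mp4",
    workflow_specification=MY_WORKFLOW_SPECIFICATION,  # a workflows spec dict, like the ones shown below
    on_prediction=save_prediction,  # reuse the sink from the example above
)
pipeline.start()
pipeline.join()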

❗ Breaking change: we've reverted the changes introduced in v0.9.15 that made InferencePipeline.init(...) compatible with the YOLOWorld model. Now you need to use InferencePipeline.init_with_yolo_world(...) as shown here:

pipeline = InferencePipeline.init_with_yolo_world(
    video_reference="YOUR-VIDEO",
    on_prediction=...,
    classes=["person", "dog", "car", "truck"],
)

We've updated the 📖 docs to make it easy to use the new feature.

Thanks @paulguerrie for the great contribution!

🌱 Changed

  • Huge changes in 📖 docs - thanks @capjamesg, @SkalskiP, @SolomonLake for the contributions
  • Improved contributor experience by adding a contributor guide and separating GHA CI, such that the most important tests can run against repository forks
  • OpenVINO is now the default ONNX Execution Provider for x86-based Docker images, improving inference speed (@probicheaux)
  • Camera properties in InferencePipeline can now be set by the caller (@sberan) - see the sketch below
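A minimal sketch of setting camera properties (the video_source_properties parameter name and property keys below are best-effort assumptions - verify them against your installed version):

from inference import InferencePipeline

# a hedged sketch: the parameter name and property keys are assumptions
pipeline = InferencePipeline.init(
    model_id="yolov8n-640",    # public alias, used here for illustration
    video_reference=0,         # 0 = default webcam
    on_prediction=print,
    video_source_properties={
        "frame_width": 1280,
        "frame_height": 720,
        "fps": 30,
    },
)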

🔨 Fixed

  • added missing structlog dependency to the package (@paulguerrie)
  • clarified model licenses (@yeldarby)
  • fixed bugs in lambda HTTP inference
  • fixed a portion of security vulnerabilities
  • breaking: two exceptions (WorkspaceLoadError, MalformedWorkflowResponseError), when raised, now yield an HTTP 502 error instead of HTTP 500 as previously
  • fixed a bug in workflows where class filters at the level of detection-based model blocks were not being applied

Full Changelog: v0.9.15...v0.9.16

v0.9.15

28 Feb 15:29

What's Changed

Full Changelog: v0.9.14...v0.9.15

v0.9.15rc1

27 Feb 22:27
3efd23d
Pre-release

What's Changed

Full Changelog: v0.9.14...v0.9.15rc1

v0.9.14

23 Feb 17:40
74032e7

🚀 Added

LMMs (GPT-4V and CogVLM) 🤝 workflows

Now, with Roboflow workflows, LMM integration is made easy 💪. Just look at the demo! 🤯

[video: lmms_in_workflows.mp4]

As always, we encourage you to visit the workflows docs 📖 and examples.

This is how to create a multi-functional app with workflows and LMMs. First, start the server:

inference server start

Then run the following script:
from inference_sdk import InferenceHTTPClient

LOCAL_CLIENT = InferenceHTTPClient(
    api_url="http://127.0.0.1:9001", 
    api_key=ROBOFLOW_API_KEY,
)
FLEXIBLE_SPECIFICATION = {
    "version": "1.0",
    "inputs": [
        { "type": "InferenceImage", "name": "image" },
        { "type": "InferenceParameter", "name": "open_ai_key" },
        { "type": "InferenceParameter", "name": "lmm_type" },
        { "type": "InferenceParameter", "name": "prompt" },
        { "type": "InferenceParameter", "name": "expected_output" },
    ],
    "steps": [     
        {
            "type": "LMM",
            "name": "step_1",
            "image": "$inputs.image",
            "lmm_type": "$inputs.lmm_type",
            "prompt": "$inputs.prompt",
            "json_output": "$inputs.expected_output",
            "remote_api_key": "$inputs.open_ai_key",
        },
    ],
    "outputs": [
        { "type": "JsonField", "name": "structured_output", "selector": "$steps.step_1.structured_output" },
        { "type": "JsonField", "name": "llm_output", "selector": "$steps.step_1.*" },
    ]   
}

response_gpt = LOCAL_CLIENT.infer_from_workflow(
    specification=FLEXIBLE_SPECIFICATION,
    images={
        "image": cars_image,
    },
    parameters={
        "open_ai_key": OPEN_AI_KEY,
        "lmm_type": "gpt_4v",
        "prompt": "You are supposed to act as object counting expert. Please provide number of **CARS** visible in the image",
        "expected_output": {
            "objects_count": "Integer value with number of objects",
        }
    }
)
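Reading the result is then straightforward; as a hedged sketch (the exact nesting of the response may differ between versions - inspect it before relying on specific keys):

# a hedged sketch: inspect the actual response shape in your version
structured = response_gpt["structured_output"]  # declared in the outputs above
print(structured)  # expected to carry e.g. {"objects_count": "..."}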

🌱 Changed

🔨 Fixed

  • turned off instant page so the cookbook page loads properly, by @onuralpszr in #275 (thanks for the contribution 🥇)
  • fixed a bug in workflows affecting cropping in multi-detection set-ups

Full Changelog: v0.9.13...v0.9.14

v0.9.13

16 Feb 16:04
9f0265a

🚀 Added

YOLO World 🤝 workflows

We've introduced the YOLO World model into workflows, making it trivially easy to use it like any other object-detection model ☺️

To try this out, install dependencies first:

pip install inference-sdk inference-cli

Start the server:

inference server start

And run the script:

from inference_sdk import InferenceHTTPClient

CLIENT = InferenceHTTPClient(api_url="http://127.0.0.1:9001", api_key="YOUR_API_KEY")

YOLO_WORLD = {
    "specification": {
        "version": "1.0",
        "inputs": [
            { "type": "InferenceImage", "name": "image" },
            { "type": "InferenceParameter", "name": "classes" },
            { "type": "InferenceParameter", "name": "confidence", "default_value": 0.003 },
        ],
        "steps": [
            {
                "type": "YoloWorld",
                "name": "step_1",
                "image": "$inputs.image",
                "class_names": "$inputs.classes",
                "confidence": "$inputs.confidence",
            },
        ],
        "outputs": [
            { "type": "JsonField", "name": "predictions", "selector": "$steps.step_1.predictions" },
        ]   
    }
}

response = CLIENT.infer_from_workflow(
    specification=YOLO_WORLD["specification"],
    images={
        "image": frame,
    },
    parameters={
        "classes": ["yellow filling", "black hole"]  # each time you may specify different classes!
    }
)
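Because classes (and confidence) are plain workflow inputs, you can re-run the exact same specification with a different prompt - no changes to the spec needed:

# same specification, different classes and an overridden confidence
response = CLIENT.infer_from_workflow(
    specification=YOLO_WORLD["specification"],
    images={"image": frame},
    parameters={
        "classes": ["forklift", "safety vest"],
        "confidence": 0.005,  # overrides the default declared in the inputs
    },
)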

Check the details in the documentation 📖 and discover usage examples.

🏆 Contributors

@PawelPeczek-Roboflow (Paweł Pęczek)

Full Changelog: v0.9.12...v0.9.13

v0.9.12

16 Feb 13:59
2e49f85

🚀 Added

inference cookbook

Visit our cookbook 🧑‍🍳

[screenshot: inference cookbook page]

🔨 Fixed

In this release, we are fixing issues spotted in the YOLO World model released in v0.9.11, in particular:

  • a bug with hashing of YOLO World classes that in some cases made it impossible to run inference due to improper caching of CLIP embeddings
  • a bug in YOLO World pre-processing of colour channels that caused the model to misunderstand prompted colours

🏆 Contributors

@capjamesg (James Gallagher), @PawelPeczek-Roboflow (Paweł Pęczek)

Full Changelog: v0.9.11...v0.9.12

v0.9.12rc3

16 Feb 13:29
127776a
Pre-release

Fixed embeddings hashing

v0.9.12rc2

16 Feb 13:25
e6ecbb1
Pre-release

Fixed hashing of text embeddings

v0.9.12rc1

16 Feb 12:17
5945994
Pre-release

Release candidate with fix to Yolo-World pre-processing

v0.9.11

16 Feb 00:33
4ac428b

🚀 Added

YOLO World in inference

Have you heard about the YOLO World model? 🤔 If not - you will probably be interested to learn something about it! Our blog post 📰 may be a good starting point❗

The great news is that YOLO World is already integrated with inference. The model is capable of performing zero-shot detection of the classes specified as an inference parameter. Thanks to that, you can start making videos like this right now 🚀

[video: yellow-filling-output-1280x720.mp4]

Simply install the dependencies:

pip install inference-sdk inference-cli

Start the server:

inference server start

And run inference against our HTTP server:

from inference_sdk import InferenceHTTPClient

client = InferenceHTTPClient(api_url="http://127.0.0.1:9001")
result = client.infer_from_yolo_world(
    inference_input=YOUR_IMAGE,
    class_names=["dog", "cat"],
)
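As a hedged sketch of consuming the result (assuming the standard Roboflow object-detection response with a top-level predictions list):

# a hedged sketch: assumes the standard Roboflow detection response shape
for prediction in result["predictions"]:
    print(prediction["class"], prediction["confidence"])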

Active Learning 🤝 workflows

Active Learning data collection made simple with workflows 🔥 Now, with just a little bit of configuration, you can start collecting data to improve your model over time. Just take a look at how easy it is:

[video: active_learning_in_workflows.mp4]

Key features:

  • works for all models supported on the Roboflow platform, including the ones from Roboflow Universe - making it trivial to use an off-the-shelf model during the project kick-off stage to collect a dataset while serving meaningful predictions
  • combines well with multiple workflows blocks - including DetectionsConsensus - making it possible to sample based on the predictions of a model ensemble 💥
  • the Active Learning block may use the project-level Active Learning config or define the Active Learning strategy directly in the block definition (refer to the Active Learning documentation 📖 for details on how to configure data collection)

See the documentation 📖 of the new ActiveLearningDataCollector for detailed info.
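As a rough illustration only - the field names below are assumptions inferred from this note, not a verified schema, so consult the docs above - a step using the block might look like:

# a hedged sketch: field names are assumptions, check the ActiveLearningDataCollector docs
{
    "type": "ActiveLearningDataCollector",
    "name": "al_sink",
    "image": "$inputs.image",
    "predictions": "$steps.step_1.predictions",
    "target_dataset": "my-project",  # hypothetical: dataset to collect samples into
}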

🌱 Changed

InferencePipeline now works with all models supported on the Roboflow platform 🎆

For a long time, InferencePipeline worked only with object-detection models. This is no longer the case - from now on, other types of models supported on the Roboflow platform (including stubs - like my-project/0) work under InferencePipeline. No changes are required in existing code. Just pass the model_id of your model and the pipeline should work. Sinks suited for detection-only models were adjusted to ignore non-compliant prediction formats and produce warnings about the incompatibility.
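For instance, this minimal sketch now works the same way regardless of the model type behind model_id (the id below is illustrative):

from inference import InferencePipeline

# works for any model type supported on the Roboflow platform, not only object detection
pipeline = InferencePipeline.init(
    model_id="my-project/1",      # illustrative id - could be classification, segmentation, or a stub like my-project/0
    video_reference="./my_video.mp4",
    on_prediction=print,          # any custom sink receives the predictions unchanged
)
pipeline.start()
pipeline.join()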

🔨 Fixed

  • fixed a bug in the yolact model in #266

🏆 Contributors

@paulguerrie (Paul Guerrie), @probicheaux (Peter Robicheaux), @PawelPeczek-Roboflow (Paweł Pęczek)

Full Changelog: v0.9.10...v0.9.11