
Releases: roboflow/inference

v0.21.1

30 Sep 18:17
dc9362b

What's Changed

Full Changelog: v0.21.0...v0.21.1

v0.21.0

27 Sep 14:02
23dd6c9

🚀 Added

👩‍🎨 Become an artist with Workflows 👨‍🎨

Ever wanted to be an artist but felt like you lacked the skills? No worries! We’ve just added the StabilityAI Inpainting block to the Workflows ecosystem. Now, you can effortlessly add whatever you envision into your images! 🌟🖼️

Credits to @Fafruch for the original idea 💪

inpainting_stability_ai_demo.mp4
📖 docs

🤯 Workflows + video + inference server - Experimental feature preview 🔬

Imagine creating a Workflow in our UI and tuning it to understand what happens in your video. So far, video processing with InferencePipeline has required a bit of glue code and setup in your environment. We hope that soon you won’t need any custom scripts for video processing! You’ll be able to ship your Workflow directly to the inference server, simply pointing it at the video source to process.

We're thrilled to announce that we’ve taken the first step toward making this idea a reality! Check out our experimental feature for video processing, now controlled by the inference server with a user-friendly REST API for easy integration.

video_processing_behind_api.mp4

🔍 We encourage you to try it out! The feature is available in the inference server Docker images that you can self-host. Please note that this feature is experimental, and breaking changes are to be expected. Check out our 📖 docs to learn more.
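As a rough idea of what driving this feature over the REST API could look like, here is a hedged sketch - the endpoint paths, payload fields and response shapes below are assumptions made for illustration, so please treat the 📖 docs as the source of truth for this experimental API:

import requests

# Sketch only: routes and payload fields are assumptions, not the documented API.
SERVER_URL = "http://localhost:9001"

# Start a video-processing pipeline that runs a Workflow against a video source.
start = requests.post(
    f"{SERVER_URL}/inference_pipelines/initialise",
    json={
        "video_configuration": {"video_reference": "rtsp://camera.local/stream"},
        "processing_configuration": {
            "workspace_name": "your-workspace",
            "workflow_id": "your-workflow-id",
        },
        "api_key": "YOUR_ROBOFLOW_API_KEY",
    },
)
start.raise_for_status()
pipeline_id = start.json()["context"]["pipeline_id"]

# Poll pipeline status and consume the latest Workflow results.
status = requests.get(f"{SERVER_URL}/inference_pipelines/{pipeline_id}/status").json()
results = requests.get(f"{SERVER_URL}/inference_pipelines/{pipeline_id}/consume").json()

# Stop processing when done.
requests.post(f"{SERVER_URL}/inference_pipelines/{pipeline_id}/terminate")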

🙃 Flips, Rotations and Resizing in Workflows

Tired of dealing with image orientation problems while building demos with Workflows? Whether it's resizing, rotating, or flipping, those headaches end today with our new features for seamless image adjustments!

All thanks to @EmilyGavrilenko and PR #683

✨ Ensure Your Tracked Objects Stay on Course! 🛰️

Wondering if the objects you're tracking follow the path you intended? We’ve got you covered in Workflows! Thanks to @shantanubala, we now offer Fréchet Distance Analysis as a Workflow block. Simply specify the desired trajectory, and the Workflow calculates the deviation for each tracked box. 📊
See details: #682

What’s Fréchet Distance?
It’s a mathematical measure that compares the similarity between two curves—perfect for analyzing how closely your tracked objects follow the path you’ve set.
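For intuition, here is a minimal sketch of the discrete Fréchet distance between two 2D polylines - illustrative only, not necessarily the exact implementation used by the block:

import numpy as np

def discrete_frechet_distance(path_a, path_b):
    """Discrete Fréchet distance between two polylines given as lists of (x, y) points."""
    p, q = np.asarray(path_a, dtype=float), np.asarray(path_b, dtype=float)
    n, m = len(p), len(q)
    ca = np.zeros((n, m))
    ca[0, 0] = np.linalg.norm(p[0] - q[0])
    for i in range(1, n):
        ca[i, 0] = max(ca[i - 1, 0], np.linalg.norm(p[i] - q[0]))
    for j in range(1, m):
        ca[0, j] = max(ca[0, j - 1], np.linalg.norm(p[0] - q[j]))
    for i in range(1, n):
        for j in range(1, m):
            ca[i, j] = max(
                min(ca[i - 1, j], ca[i - 1, j - 1], ca[i, j - 1]),
                np.linalg.norm(p[i] - q[j]),
            )
    return ca[-1, -1]

# Reference trajectory vs. centroids of a tracked box over time (illustrative values).
reference = [(0, 0), (10, 0), (20, 0), (30, 0)]
tracked = [(0, 1), (11, 2), (19, 1), (31, 3)]
print(discrete_frechet_distance(reference, tracked))  # small value -> object stayed on course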

🆕 Background removal in Dynamic Crop and updated UI for VLMs

Let’s be honest - VLMs in Workflows still had room for improvement, especially when integrating model outputs with other blocks. Well, we've made it better! 🎉 Now, each model task comes with a clear description and a suggested parser block to follow it, helping you get the most out of your model predictions with ease. 🛠️

Additionally, you can now remove the background while performing Dynamic Crop on Instance Segmentation model results 🤯

CleanShot.2024-09-26.at.18.38.33.mp4

🔧 Fixed

🌱 Changed

🏅 New Contributors

Full Changelog: v0.20.1...v0.21.0

v0.20.1

24 Sep 15:32
658cb3f

What's Changed

Full Changelog: v0.20.0...v0.20.1

v0.20.0

23 Sep 14:00
78a9116

🚀 Added

🌟 Florence 2 🤝 Workflows

Thanks to @probicheaux, the Workflows ecosystem just got better with the addition of the Florence 2 block. Florence 2, one of the top open-source releases this year, is a powerful Visual Language Model capable of tasks like object detection, segmentation, image captioning, OCR, and more. Now, you can use it directly in your workflows!

Florence 2 and SAM 2 - zero shot grounded segmentation

Ever wished for precise segmentation but didn’t have the data to train your model? Now you don’t need it! With Florence 2 and SAM 2, you can achieve stunning segmentation results effortlessly — without a single annotation.

Discover how to combine these powerful models and get top-tier segmentation quality for free!

florence2_and_sam2.mp4
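To give a feel for how such a combination could be wired up, here is a hedged sketch of a Workflow definition - the block identifiers, versions, task type and field names are assumptions for illustration; consult the Workflow block docs for the exact schemas:

# Illustrative Workflow definition only - block names and selectors are assumptions.
GROUNDED_SEGMENTATION_WORKFLOW = {
    "version": "1.0",
    "inputs": [
        {"type": "WorkflowImage", "name": "image"},
    ],
    "steps": [
        {
            "type": "roboflow_core/florence_2@v1",        # assumed block identifier
            "name": "grounding",
            "images": "$inputs.image",
            "task_type": "phrase-grounded-object-detection",
            "prompt": "a dog",
        },
        {
            "type": "roboflow_core/segment_anything@v1",  # assumed block identifier
            "name": "segmentation",
            "images": "$inputs.image",
            "boxes": "$steps.grounding.detections",       # Florence 2 boxes prompt SAM 2
        },
    ],
    "outputs": [
        {"type": "JsonField", "name": "masks", "selector": "$steps.segmentation.predictions"},
    ],
}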

Florence 2 as OCR model

Need Text Layout Detection and OCR? Florence 2 Has You Covered!

florence2_with_ocr.mp4

Zero-shot object detection needed?

Do not hesitate to try out Florence 2 as an object detection model - the quality of the results is surprisingly good 🔥

florence2_object_detection.mp4

🔔 Additional notes

  • Florence 2 requires either a Roboflow Dedicated Deployment or a self-hosted inference server - it is not available on the Roboflow Hosted Platform
  • To discover the full potential of Florence 2, read the paper
  • Visit the 📖 documentation of the Florence 2 Workflow block

New version of SIFT block

Tired of using the SIFT descriptors calculation block followed by the SIFT comparison block? That is no longer needed. Check out the SIFT Comparison v2 block. PR: #657


Workflows UQL extended with new operations

You may not even be aware, but the Universal Query Language powers the Workflows operations that can be fully customised in the UI. Two new features have shipped:

Instance Segmentation ⏩ oriented rectangle

Thanks to @chandlersupple, Instance Segmentation results can be turned into oriented bounding boxes - check out 📖 docs
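As a conceptual illustration of what this operation produces (this is not the UQL implementation, just a sketch of the idea using OpenCV), a segmentation polygon can be reduced to an oriented bounding box with cv2.minAreaRect:

import cv2
import numpy as np

# Conceptual sketch only - not the UQL operation itself.
# A polygon from an instance segmentation result (x, y vertices).
polygon = np.array([[10, 12], [48, 20], [44, 55], [6, 47]], dtype=np.float32)

# cv2.minAreaRect returns ((center_x, center_y), (width, height), angle_in_degrees)
rect = cv2.minAreaRect(polygon)
corners = cv2.boxPoints(rect)  # the 4 corner points of the oriented box

print("oriented box:", rect)
print("corners:", corners)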

🔧 Fixed

  • Broken links removed from docs in #663
  • Fixes to release 0.19.0:

🌱 Changed

Full Changelog: v0.19.0...v0.20.0

v0.19.0

18 Sep 19:28
13332f8

🚀 Added

🎥 Video processing in workflows 🤯

We’re excited to announce that, thanks to the contributions of @grzegorz-roboflow, our Workflows ecosystem now extends to video processing! Dive in and explore the new possibilities:

dwell_time_demo.mp4

New blocks:

We've introduced minimal support for video processing in the Workflows UI, with plans to expand to more advanced features soon. To get started, you can create a Python script using the InferencePipeline, similar to the example sketched below.

Video source YT | Karol Majek
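A minimal sketch of such a script, assuming a Workflow saved on the Roboflow platform - the workspace name, workflow ID and the exact init_with_workflow parameters shown here are illustrative, so check the InferencePipeline docs for the current signature:

from inference import InferencePipeline

def on_prediction(result, video_frame):
    # result is the dictionary of Workflow outputs produced for this frame
    print(result)

# Illustrative values - replace with your own workspace, workflow and video source.
pipeline = InferencePipeline.init_with_workflow(
    video_reference="path/to/video.mp4",   # file path, RTSP URL or device id
    workspace_name="your-workspace",
    workflow_id="your-workflow-id",
    on_prediction=on_prediction,
    api_key="YOUR_ROBOFLOW_API_KEY",
)
pipeline.start()
pipeline.join()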

🔥 OWLv2 🤝 inference

Thanks to @probicheaux, we now have the OWLv2 model in inference. OWLv2 was primarily trained to detect objects from text prompts. The implementation in inference currently only supports detecting objects from visual examples of that object.

You can use the model in the inference server - both CPU and GPU - as well as in the Python package. Visit our 📖 docs to learn more.

Screen.Recording.2024-09-19.at.21.36.13.mov

👓 TROCR 🤝 inference

@stellasphere shipped the TROCR model to expand the OCR models offering in inference 🔥

You can use the model in the inference server - both CPU and GPU - as well as in the Python package. Visit our 📖 docs to learn more.

🧑‍🎓 Workflows - endpoint to discover interface

Guessing the data format for Workflow inputs and outputs has been a challenge until now, but thanks to @EmilyGavrilenko this is no longer the case. We offer two new endpoints (one for workflows registered on the platform and one for workflows submitted in the request payload). Details in #644.

🔔 Example response
{
    "inputs": {
        "image": ["image"],
        "model_id": ["roboflow_model_id"]
    },
    "outputs": {
        "detections": ["object_detection_prediction"],
        "crops": ["image"],
        "classification": {
            "inference_id": ["string"],
            "predictions": ["classification_prediction"]
        }
    },
    "typing_hints": {
        "image": "dict",
        "roboflow_model_id": "str",
        "object_detection_prediction": "dict",
        "string": "str",
        "classification_prediction": "dict"
    },
    "kinds_schemas": {
        "image": {},
        "object_detection_prediction": {"dict": "with OpenAPI 3.0 schema of result"},
        "classification_prediction": {"dict": "with OpenAPI 3.0 schema of result"}
    }
}
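A minimal sketch of querying such an endpoint from Python - the route and payload below are assumptions made for illustration, so see #644 and the docs for the authoritative endpoint definitions:

import requests

# Illustrative only: the exact route and payload are assumptions.
SERVER_URL = "http://localhost:9001"
resp = requests.post(
    f"{SERVER_URL}/workflows/your-workspace/your-workflow-id/describe_interface",
    json={"api_key": "YOUR_ROBOFLOW_API_KEY"},
)
resp.raise_for_status()
interface = resp.json()
print(interface["outputs"])  # e.g. {"detections": ["object_detection_prediction"], ...}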

🌱 Changed

🔧 Fixed

  • Fixed a bug in the Workflows Execution Engine that occurred when conditional execution discards inputs of a step that changes dimensionality - see details in #645

♻️ Removed

Full Changelog: v0.18.1...v0.19.0

v0.18.1

06 Sep 17:18
3f4a264

🔨 Fixed

The new VLM as Classifier Workflows block had a bug - multi-label classification results were generated with a "class_name" field instead of a "class" field in the prediction details: #637

🌱 Changed

  • Increase timeout to 30 minutes for .github/workflows/test_package_install_inference_with_extras.yml by @grzegorz-roboflow in #635

Full Changelog: v0.18.0...v0.18.1

v0.18.0

06 Sep 13:09
297d420

🚀 Added

💪 New VLMs in Workflows

We've shipped blocks to integrate with Google Gemini and Anthropic Claude, but that's not everything! The OpenAI block also got an update. The new "VLM interface" of these blocks assumes that the model can be prompted using pre-configured options and that the model output can be processed by a set of formatter blocks to achieve the desired result. It is now possible to:

  • use classification prompting in a VLM block and apply the VLM as Classifier block to turn the output string into a classification result, then process it further using other blocks from the ecosystem
  • do the same for object-detection prompting with the VLM as Detector block, which converts text produced by the model into sv.Detections(...)

From now on, VLMs are much easier to integrate.

🧑‍🦱 USE CASE: PII protection when prompting VLM

Detect faces first, apply the blur visualisation to the predictions, and then ask the VLM what the person's eye colour is - it won't be able to tell 🙃

👨‍🎨 USE CASE: VLM as object detection model

👓 USE CASE: VLM as secondary classifier

Turn VLM output into classification results and process them using downstream blocks - here we ask Gemini to classify crops of dogs to determine each dog's breed, then extract the top class as a property.

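A hedged sketch of what a Workflow definition for this use case could look like - the block identifiers, versions and field names below are assumptions for illustration, so consult the Workflows docs for the exact schemas:

# Illustrative Workflow definition only - block names and selectors are assumptions.
VLM_AS_CLASSIFIER_WORKFLOW = {
    "version": "1.0",
    "inputs": [
        {"type": "WorkflowImage", "name": "image"},
    ],
    "steps": [
        {
            "type": "roboflow_core/google_gemini@v1",      # assumed block identifier
            "name": "gemini",
            "images": "$inputs.image",
            "task_type": "classification",
            "classes": ["labrador", "beagle", "poodle"],
        },
        {
            "type": "roboflow_core/vlm_as_classifier@v1",  # assumed block identifier
            "name": "parser",
            "image": "$inputs.image",
            "vlm_output": "$steps.gemini.output",
            "classes": "$steps.gemini.classes",
        },
    ],
    "outputs": [
        {"type": "JsonField", "name": "classification", "selector": "$steps.parser.predictions"},
    ],
}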

🤯 Workflows previews in documentation 📖

Thanks to @joaomarcoscrs, we can now embed Workflows into documentation pages. Just take a look at how amazing it is ❗

🌱 Changed

BREAKING: Batch[X] kinds removed from Workflows

What was changed and why?

In the inference 0.18.0 release, we decided to make a drastic move to heal the ecosystem from the problem of ambiguous kind names (Batch[X] vs X - see more here).

The change is breaking only for non-Roboflow Workflow plugins depending on imports from inference.core.workflows.execution_engine.entities.types module. To the best of our knowledge, there is no such plugin.

The change is not breaking in terms of running Workflows on Roboflow platform and on-prem given that external plugins were not used.

Migration guide

Migration should be relatively easy - in the code of a Workflow block, all instances of

from inference.core.workflows.execution_engine.entities.types import BATCH_OF_{{KIND_NAME}}

should be replaced with

from inference.core.workflows.execution_engine.entities.types import {{KIND_NAME}}
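For instance, assuming a block that imported the object detection prediction kind (the kind name here is just one example of the {{KIND_NAME}} template above), the change would look like this:

# before (inference <= 0.17.x)
from inference.core.workflows.execution_engine.entities.types import (
    BATCH_OF_OBJECT_DETECTION_PREDICTION_KIND,
)

# after (inference >= 0.18.0)
from inference.core.workflows.execution_engine.entities.types import (
    OBJECT_DETECTION_PREDICTION_KIND,
)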

PR with changes as reference: #618

Full Changelog: v0.17.1...v0.18.0

v0.17.1

03 Sep 09:20
03860f0

❗IMPORTANT ❗Security issue in opencv-python

This release provides a fix for the following security issue:

opencv-python versions before v4.8.1.78 bundled libwebp binaries in wheels that are vulnerable to GHSA-j7hp-h8jx-5ppr. opencv-python v4.8.1.78 upgrades the bundled libwebp binary to v1.3.2.

We advise all clients using inference to upgrade, especially in production environments.

Full Changelog: v0.17.0...v0.17.1

v0.17.0

30 Aug 14:55
dc65d23

🚀 Added

💪 More Classical Computer Vision blocks in workflows

Good news for the fans of classical computer vision!
We heard you – and we’ve added a bunch of new blocks to enhance your workflows.

New blocks include:

  • Basic operations on images
  • Camera focus check

🚀 Upgrade of CLIP Comparison and Roboflow Dataset Upload blocks

We’ve made these blocks even more versatile. The new outputs allow seamless integration with many other blocks, enabling powerful workflows like:

detection → crop → CLIP classification (on crops) → detection class replacement

Get ready to streamline your processes with enhanced compatibility and new possibilities!

image

For Roboflow Dataset Upload @ v2, there is now a possibility to sample a percentage of the data to upload, and we changed the default sizes of saved images to be bigger.

Do not worry! All your old Workflows using the mentioned blocks are unaffected by this change thanks to versioning 😄

💥 New version of 📖 Workflow docs 🔥

The Wait is Over – Our Workflows Documentation is Finally Here!

We’ve revamped and expanded the documentation to make your experience smoother. It’s now organized into three clear sections:

  • General Overview: Perfect for getting you up and running quickly.
  • Mid-Level User Guide: Gain a solid understanding of the ecosystem without diving too deep into the technical details.
  • Detailed Developer Guide: Designed for contributors, packed with everything you need to develop within the ecosystem.

Check it out and let us know what you think of the new docs!

🌱 Changed

🔨 Fixed

🏅 New Contributors

Full Changelog: v0.16.3...v0.17.0

v0.16.3

22 Aug 19:23
bbf64e1

🔨 Fixed

🚀 Added

SAM2 extension

When running inference with the SAM2 model, you may ask the inference package and the inference server to cache prompts and low-resolution masks from your inputs so they can be re-used later on request. You are given two parameters (both in the SAM2 request payload and in the SegmentAnything2.segment_image(...) method):

  • save_logits_to_cache
  • load_logits_from_cache

These decide how the functionality works. Saving logits masks to the cache makes it possible to re-use them for consecutive inferences against the same image. Enabling loading triggers a search through the cache for the most similar prompt cached for this specific image, retrieving its mask. The mechanism is useful when the same image is segmented multiple times with slightly different sets of prompts, as injecting previous masks in that scenario may lead to better results.

Please note that this feature is different from the cache for image embeddings, which speeds up consecutive requests with the same image. If you don't wish this feature to be enabled, set DISABLE_SAM2_LOGITS_CACHE=True in your environment.

🏅 @probicheaux and @tonylampada added the functionality in #582
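A minimal sketch of how these parameters might be used from the Python package - the import path, constructor arguments and prompt format here are assumptions; only save_logits_to_cache and load_logits_from_cache come from the description above:

# Sketch only: import path, model construction and prompt format are assumptions.
from inference.models import SegmentAnything2

model = SegmentAnything2(api_key="YOUR_ROBOFLOW_API_KEY")

# First pass: cache the low-resolution logits mask produced for this prompt.
first = model.segment_image(
    "image.jpg",
    prompts={"prompts": [{"points": [{"x": 120, "y": 80, "positive": True}]}]},
    save_logits_to_cache=True,
)

# Second pass on the same image with a refined prompt: re-use the closest cached logits.
second = model.segment_image(
    "image.jpg",
    prompts={"prompts": [{"points": [{"x": 120, "y": 80, "positive": True},
                                     {"x": 200, "y": 150, "positive": False}]}]},
    save_logits_to_cache=True,
    load_logits_from_cache=True,
)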

Remaining changes

  • @EmilyGavrilenko added Workflow block search metadata to improve the UI experience in #588
  • @grzegorz-roboflow added an internal parameter for Workflows requests denoting a preview in the UI #595
  • @grzegorz-roboflow improved usage tracking, extending it to models, in #601 and #548
  • Workflows were equipped with a new batch-oriented input - VideoFrameMetadata - letting blocks process videos statefully; see #590 and #597 - more docs will come soon

Full Changelog: v0.16.2...v0.16.3