11th October 2024

Roboflow Inference allows you to deploy fine-tuned and foundation models for use in computer vision projects. Inference works across a range of devices and architectures, from x86 CPUs to ARM devices like a Raspberry Pi to NVIDIA GPUs.

Roboflow now offers an Inference command-line tool that lets you run the Roboflow Inference Server on your images. In just a few commands, you can test and deploy your Roboflow model in your production environment.

For full documentation on the CLI, see the CLI reference documentation.

The CLI is part of the open source Roboflow Inference Server repository and is available to install as a standalone pip package, or bundled with the inference package version `0.9.1` or higher.

Why use the CLI?

The Inference CLI is a great way to get an inference server up and running without having to worry about your Docker/machine configuration or write a script. You can use it to test your model on your local machine or integrate the server with any tools or services that can execute terminal commands.

Additionally, the CLI handles manual tasks like pulling the latest Docker image and restarting the server, making it easier to keep your inference server up to date.

Two CLI use cases that we’ll explore in this guide:

  1. Test your model on your local machine, comparing it to the results from the hosted inference server.
  2. Start a cron job that starts a new inference server and runs inference on local images.

pip install inference-cli

Inference allows you to easily get predictions from computer vision models through a simple, standardized interface.

The CLI works with the same variety of model architectures for tasks like object detection, instance segmentation, single-label classification, and multi-label classification, and works seamlessly with custom models you’ve trained and/or deployed with Roboflow, including the tens of thousands of fine-tuned models shared by our community.

In order to install the CLI, you will need Python 3.7 or higher installed on your machine. To check your Python version, run `python --version` in your terminal. If you don’t have Python installed, you can download the latest version from the Python website.

To install the package, run:

pip install inference-cli
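Once the install finishes, you can sanity-check it by printing the CLI help (a quick check, assuming the CLI exposes standard help output; the exact listing depends on the version you installed):

# Show the available subcommands, such as server and infer
inference --help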

Basic usage

The Roboflow CLI provides a simple and intuitive way to interact with the Roboflow Inference Server. Here are some basic usage examples to get you started. For more information on the available commands and options, see the [CLI documentation](https://inference.roboflow.com/#cli).

Starting the Inference Server

Before you begin, ensure that you have Docker installed on your machine. Docker provides a containerized environment, allowing the Roboflow Inference Server to run in a consistent and isolated manner. If you haven’t installed Docker yet, you can learn how to install it on the Docker website.

To start the local inference server, run the following command:

inference server start

This will start the inference server on port 9001. If there is already a container running on that port, the CLI will prompt you to stop it and start a new one.

This command will automatically detect whether your machine has an NVIDIA GPU and pull the appropriate Docker image.
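Before sending any requests, you can confirm the container came up by checking its status (this is the same status command used in the script later in this post; listing containers with Docker is just an alternative check):

# Check the inference server container status
inference server status

# Alternatively, list running containers with Docker
docker ps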

Running Inference

To run inference, you will need your Roboflow project ID, model version number, and API key. Refer to our documentation on how to retrieve your workspace and project IDs and how to find your Roboflow API key.

To run inference on an image using the local inference server, run the following command:

inference infer /path/to/image.jpg --project-id your-project --model-version 1 --api-key your-api-key

This command makes an HTTP request to the inference server running on your machine, which returns the inference results in the terminal.

You can supply a path to a local image, or a URL to a hosted image.
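For example, running the same model against a hosted image and saving the JSON response to a file might look like this (a sketch with placeholder values; swap in your own image URL, project ID, model version, and API key):

# Run inference on an image hosted at a URL and save the JSON output to a file
inference infer https://example.com/photo.jpg --project-id your-project --model-version 1 --api-key your-api-key > predictions.json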

Use Case #1: Compare inference on your machine with the Roboflow Hosted API

For this example, we’ll use a wildfire smoke detection model hosted on Roboflow Universe.

First, start a local inference server with `inference server start`.

Then, open the Universe page, scroll down to the code snippets section, and select “On Device”.

Scroll down to “Run inference on an image”, and copy the bash command.

Open your terminal and run the copied command. You should see JSON output printed to the console, with a predictions array.

Scroll up to the top of the page and paste the image URL from the command there to compare your local predictions with the Roboflow Hosted API.

After hitting the arrow (or pressing Enter), you can compare the output predictions with the ones from your terminal.

The predictions should be similar, but results can vary slightly based on inference server version and machine hardware.

The CLI can also query the same API as the web view, using the `--host` option. Add the following to your infer command: `--host https://detect.roboflow.com`.

Your command should now look similar to this, but with your API key instead of “API_KEY”:

inference infer https://source.roboflow.com/5w20VzQObTXjJhTjq6kad9ubrm33/ZOPyGOffmPEdoSyStK7C/original.jpg \
--api-key API_KEY \
--project-id wildfire-smoke --model-version 1 --host https://detect.roboflow.com

The results should now look exactly the same as the ones on Roboflow Universe, as inference is being run on the same version and hardware on our remote servers.

Now you can use these steps to try out your Roboflow models on your local machine!

Use Case #2 (Unix only): Run a bash script once a day

You can use the CLI to integrate with your existing bash pipelines. To demonstrate this, we will set up a cron job on your machine that runs this script once a day.

Here is an example of a script that starts a new inference server (with the latest version) and runs inference on local images, saving the results to an output file.

#!/bin/bash

# Directory containing input images
IMAGES_DIR="/path/to/images"

# List of input images
IMAGES=("$IMAGES_DIR"/*.jpg)

# Start inference server
inference server start

# Wait for inference server to start
echo "Waiting for inference server to start..."
sleep 3
inference server status

# Output file for inference results
OUTPUT_FILE="inference_results.txt"

# Run inference for each image
for IMAGE in "${IMAGES[@]}"; do
  echo "Running inference for $IMAGE..."
  inference infer "$IMAGE" \
    --api-key REDACTED \
    --project-id wildfire-smoke --model-version 1 \
    >> "$OUTPUT_FILE"
done

echo "Inference results written to $OUTPUT_FILE"

You will need to replace IMAGES_DIR with a path to a folder containing images, and update the API key to use yours instead of REDACTED.
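If your folder also contains PNG files, you can broaden the glob that builds the image list (a small tweak to the example script; bash expands each pattern separately):

# Match both .jpg and .png files in the images directory
IMAGES=("$IMAGES_DIR"/*.jpg "$IMAGES_DIR"/*.png)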

Save this script to a file, such as run_inference.sh. Make the script executable by running the following command:

chmod +x run_inference.sh

Open your crontab file by running the following command:

crontab -e

Finally, paste the following line into the crontab file to run the script once a day:

0 9 * * * /path/to/run_inference.sh

This will run the run_inference.sh script once a day at 9 AM. Replace /path/to/run_inference.sh with the full system path to your .sh file.

Next, save and close the crontab file (by default, using vim controls: type :wq and hit Enter).

With these steps, you should now have a cron job set up to run inference on local images once a day. You can modify the script and the cron job schedule to suit your needs.
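For example, to run the job every six hours instead of once a day, the crontab entry would look like this (adjust the schedule fields to whatever cadence you need):

# minute hour day-of-month month day-of-week
0 */6 * * * /path/to/run_inference.sh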

Conclusion

The Inference CLI can be used to easily get Roboflow Inference up and running. Use simple commands to test your model locally, or integrate with other command-line programs.

The CLI will continue to be updated with the latest Inference Server features as they are added in the coming months. Find the latest commands in the CLI reference documentation. You can contribute to the CLI directly by submitting an Issue or Pull Request in the Inference GitHub repository.
