docs: Update label-studio-ml-backend docs #6995

Draft: wants to merge 3 commits into base: develop
16 changes: 0 additions & 16 deletions docs/source/guide/ml_tutorials.html
@@ -335,22 +335,6 @@
title: YOLO ML Backend for Label Studio
type: guide
url: /tutorials/yolo.html
- categories:
- Computer Vision
- Video Classification
- Temporal Labeling
- LSTM
hide_frontmatter_title: true
hide_menu: true
image: /tutorials/yolo-video-classification.png
meta_description: Tutorial on how to use an example ML backend for Label Studio
with TimelineLabels
meta_title: TimelineLabels ML Backend for Label Studio
order: 51
tier: all
title: TimelineLabels ML Backend for Label Studio
type: guide
url: /tutorials/yolo_timeline_labels.html
layout: templates
meta_description: Tutorial documentation for setting up a machine learning model with
predictions using PyTorch, GPT2, Sci-kit learn, and other popular frameworks.
15 changes: 14 additions & 1 deletion docs/source/tutorials/gliner.md
@@ -82,4 +82,17 @@ The following common parameters are available:
- `WORKERS` - Specify the number of workers for the model server.
- `THREADS` - Specify the number of threads for the model server.
- `LABEL_STUDIO_URL` - Specify the URL of your Label Studio instance. Note that this might need to be `http://host.docker.internal:8080` if you are running Label Studio on another Docker container.
- `LABEL_STUDIO_API_KEY` - Specify the API key for authenticating your Label Studio instance. You can find this by logging into Label Studio and [going to the **Account & Settings** page](https://labelstud.io/guide/user_account#Access-token).
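As an illustration, a backend process typically reads these parameters from its environment at startup. A minimal sketch follows; the default values here are assumptions for demonstration, not the backend's actual defaults (check the example's `_wsgi.py` and `Dockerfile` for the real ones):

```python
import os

# Minimal sketch of reading the common parameters above from the environment.
# Defaults are illustrative assumptions, not the backend's actual values.
config = {
    "workers": int(os.getenv("WORKERS", "1")),
    "threads": int(os.getenv("THREADS", "8")),
    "label_studio_url": os.getenv("LABEL_STUDIO_URL", "http://localhost:8080"),
    "label_studio_api_key": os.getenv("LABEL_STUDIO_API_KEY", ""),
}

# When Label Studio runs in another Docker container, LABEL_STUDIO_URL
# usually needs to point at the host, e.g. http://host.docker.internal:8080.
print(config["label_studio_url"])
```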

## A Note on Model Training

If you plan to use the "Start Training" action to train this model, note that you do not need to configure a separate webhook. Instead, open the three-dot menu next to your model on the **Model** tab in your project settings and click **Start training**.

Also note that this container is configured for a **very small** demo set, with only one non-evaluation sample (the first 10 data samples are expected to be used for evaluation).

If you're working with a larger dataset, be sure to:
1. Update `num_steps` and `batch_size` to the number of training steps and the batch size that work for your dataset.
2. Change the uploaded model after training (line 239 of `model.py`) to the highest checkpoint that you have.
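For instance, the relationship between dataset size, batch size, and training steps can be sketched as follows. The names `num_steps` and `batch_size` mirror this tutorial, but the dataset size and epoch count are made-up example values:

```python
# Illustrative arithmetic for choosing num_steps for a larger dataset.
# The dataset size and epoch count below are invented for this example.
dataset_size = 5000
eval_samples = 10          # the first 10 samples are held out for evaluation
batch_size = 8
epochs = 3

train_samples = dataset_size - eval_samples
steps_per_epoch = train_samples // batch_size   # one step = one batch
num_steps = steps_per_epoch * epochs
print(num_steps)
```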
8 changes: 4 additions & 4 deletions docs/source/tutorials/mmdetection-3.md
@@ -23,7 +23,7 @@ https://mmdetection.readthedocs.io/en/latest/
This example demonstrates how to use the MMDetection model with Label Studio to annotate images with bounding boxes.
The model is based on the YOLOv3 architecture with a MobileNetV2 backbone and trained on the COCO dataset.

![screenshot.png](/tutorials/screenshot.png)
![screenshot.png](screenshot.png)

## Before you begin

@@ -43,7 +43,7 @@ docker-compose up -d

See the tutorial in the documentation for building your own image and advanced usage:

https://github.com/HumanSignal/label-studio/blob/develop/docs/source/tutorials/object-detector.md
https://github.com/HumanSignal/label-studio/blob/master/docs/source/tutorials/object-detector.md


## Labeling config
@@ -85,7 +85,7 @@ In this example, you can combine multiple labels into one Label Studio annotatio
1. Clone the Label Studio ML Backend repository in your directory of choice:

```
git clone https://github.com/HumanSignal/label-studio-ml-backend
git clone https://github.com/heartexlabs/label-studio-ml-backend
cd label-studio-ml-backend/label_studio_ml/examples/mmdetection-3
```

@@ -166,4 +166,4 @@ gunicorn --preload --bind :9090 --workers 1 --threads 1 --timeout 0 _wsgi:app
```

* Use this guide to find out your access token: https://labelstud.io/guide/api.html
* You can use an increased value of the `SCORE_THRESHOLD` parameter when you see a lot of unwanted detections, or lower its value if you don't see any detections.
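To illustrate the effect, here is a small sketch of how such a score threshold is typically applied to detections. The parameter name `SCORE_THRESHOLD` matches this tutorial, but the detection records are invented for illustration and are not MMDetection's actual output format:

```python
import os

# SCORE_THRESHOLD matches the parameter name in this tutorial; the
# detection records below are made up for illustration.
SCORE_THRESHOLD = float(os.getenv("SCORE_THRESHOLD", "0.5"))

detections = [
    {"label": "car", "score": 0.91},
    {"label": "car", "score": 0.42},    # dropped at the default threshold
    {"label": "person", "score": 0.77},
]

# Raising the threshold removes low-confidence detections; lowering it
# keeps more of them.
kept = [d for d in detections if d["score"] >= SCORE_THRESHOLD]
print([d["label"] for d in kept])
```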
21 changes: 17 additions & 4 deletions docs/source/tutorials/segment_anything_2_image.md
@@ -131,19 +131,32 @@ cd label_studio_ml/examples/segment_anything_2_image
pip install -r requirements.txt
```

2. Download [`segment-anything-2` repo](https://github.com/facebookresearch/segment-anything-2) into the root directory. Install SegmentAnything model and download checkpoints using [the official Meta documentation](https://github.com/facebookresearch/segment-anything-2?tab=readme-ov-file#installation)
2. Download the [`segment-anything-2` repo](https://github.com/facebookresearch/sam2) into the root directory. Install the SegmentAnything model and download checkpoints using [the official Meta documentation](https://github.com/facebookresearch/sam2?tab=readme-ov-file#installation).
You should now have the following folder structure:


```
root directory
├── label-studio-ml-backend
│   └── label_studio_ml
│       └── examples
│           └── segment_anything_2_image
└── sam2
    ├── sam2
    └── checkpoints
```


3. Then you can start the ML backend on the default port `9090`:

```bash
cd ../
label-studio-ml start ./segment_anything_2_image
cd ~/sam2
label-studio-ml start ../label-studio-ml-backend/label_studio_ml/examples/segment_anything_2_image
```

Due to [breaking changes from Meta](https://github.com/facebookresearch/sam2/blob/c2ec8e14a185632b0a5d8b161928ceb50197eddc/sam2/build_sam.py#L20), it is crucial that you run this command from the `sam2` directory in your root directory.

4. Connect running ML backend server to Label Studio: go to your project `Settings -> Machine Learning -> Add Model` and specify `http://localhost:9090` as a URL. Read more in the official [Label Studio documentation](https://labelstud.io/guide/ml#Connect-the-model-to-Label-Studio).

## Running with Docker (coming soon)
## Running with Docker

1. Start Machine Learning backend on `http://localhost:9090` with prebuilt image:

Expand Down
4 changes: 2 additions & 2 deletions docs/source/tutorials/segment_anything_2_video.md
@@ -21,7 +21,7 @@ This guide describes the simplest way to start using **SegmentAnything 2** with
This repository is specifically for working with object tracking in videos. For working with images,
see the [segment_anything_2_image repository](https://github.com/HumanSignal/label-studio-ml-backend/tree/master/label_studio_ml/examples/segment_anything_2_image).

![sam2](/tutorials/Sam2Video.gif)
![sam2](./Sam2Video.gif)

## Before you begin

@@ -83,4 +83,4 @@ If you want to contribute to this repository to help with some of these limitati

## Customization

The ML backend can be customized by adding your own models and logic inside the `./segment_anything_2_video` directory.
6 changes: 6 additions & 0 deletions docs/source/tutorials/segment_anything_model.md
@@ -280,6 +280,12 @@ to get a better understanding of the workflow when annotating with SAM.

Use the `Alt` hotkey to alter keypoint positive and negative labels.

First, select either the auto-keypoints or auto-rectangle tool, then choose a label. You can now draw keypoints or rectangles on the canvas.
Watch this video:

https://github.com/user-attachments/assets/28acf6ae-a83f-4919-9722-3c82a4b6dab6


### Notes for AdvancedSAM

* _**Please watch [this video](https://drive.google.com/file/d/1OMV1qLHc0yYRachPPb8et7dUBjxUsmR1/view?usp=sharing) first**_
Expand Down
2 changes: 1 addition & 1 deletion docs/source/tutorials/tesseract.md
@@ -173,5 +173,5 @@ Example below:
![ls_demo_ocr](https://user-images.githubusercontent.com/17755198/165186574-05f0236f-a5f2-4179-ac90-ef11123927bc.gif)

Reference links:
- https://labelstud.io/blog/improve-ocr-quality-for-receipt-processing-with-tesseract-and-label-studio
- https://labelstud.io/blog/Improve-OCR-quality-with-Tesseract-and-Label-Studio.html
- https://labelstud.io/blog/release-130.html