
Retrieve Predictions: Prediction score missing #7151

Open · phgz opened this issue Feb 28, 2025 · 4 comments

@phgz commented Feb 28, 2025

When connecting a model and testing it, I get a

Image

with the score showing in the response.

However, when I use the feature from a project's page (select tasks and use the Retrieve Predictions action), I get something like this:

Image

The fields Prediction results and Prediction model versions are populated, but not Prediction score.

Is it because it somehow uses this template (https://labelstud.io/guide/task_format) for predictions, which shadows the score field? In fact, the JSON properties do mention prediction.score, but it is not present in the template.

Here is some more info from the network flow inspector:

Request payload and response from /api/dm/actions?id=retrieve_tasks_predictions&tabID=239&project=79

{"ordering":["-tasks:predictions_results"],"selectedItems":{"all":false,"included":[241147]},"filters":{"conjunction":"and","items":[]},"project":"79"}
{
    "processed_items": 1,
    "detail": "Retrieved 1 predictions"
}

Response payload from /api/tasks?page=1&page_size=30&view=239&project=79

{
    "total_annotations": 2,
    "total_predictions": 7,
    "total": 12399,
    "tasks": [
        {
            "id": 241147,
            "drafts": [],
            "annotators": [
                21
            ],
            "inner_id": 7,
            "cancelled_annotations": 0,
            "total_annotations": 1,
            "total_predictions": 1,
            "completed_at": "2025-02-27T19:17:43.255602Z",
            "annotations_results": "",
            "predictions_results": "[{from_name: validation, id: fe9fb7f1-f228-4eb0-810f-5c1ef5231bfb, to_name: comment, type: choices, value: {choices: [1]}}]",
            "predictions_score": null,
            "file_upload": null,
            "storage_filename": null,
            "annotations_ids": "",
            "predictions_model_versions": "0.0.1",
            "updated_by": [
                {
                    "user_id": 21
                }
            ],
            "data":
...

The "predictions_score" is set to null.

Environment (please complete the following information):

  • OS: Ubuntu
  • Label Studio Version 1.15.0
@heidi-humansignal (Collaborator) commented:

This behavior isn’t caused by the task format template “shadowing” the score field. In Label Studio, the Data Manager calculates a field called “predictions_score” by aggregating prediction scores filtered by the project’s model version. In your case, when you test the model connection directly, the response (with score) is as expected; however, when using the action from the project page, the score is coming up as null because the system is only considering predictions whose “model_version” matches the project’s value.
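
If you want to confirm (or set) which model version the project is filtering predictions by, a minimal sketch along these lines should work (assuming the standard REST API with token auth; the base URL, project ID, and token below are placeholders, not values from this issue):

    import requests

    BASE_URL = "http://localhost:8080"   # placeholder: your Label Studio instance
    TOKEN = "<your-api-token>"           # placeholder: Account & Settings -> Access Token
    PROJECT_ID = 79                      # project id seen in the requests above
    headers = {"Authorization": f"Token {TOKEN}"}

    # Inspect the model version the project currently uses when aggregating predictions_score
    project = requests.get(f"{BASE_URL}/api/projects/{PROJECT_ID}/", headers=headers).json()
    print(project.get("model_version"))

    # Point the project at the version your ML backend reports, e.g. "0.0.1"
    requests.patch(
        f"{BASE_URL}/api/projects/{PROJECT_ID}/",
        headers=headers,
        json={"model_version": "0.0.1"},
    )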

A couple of things to check:

  1. Return Format: Please ensure your ML backend returns each prediction as a dictionary that includes “score” alongside “result” and “model_version” at the top level. For example, a prediction response should look like:

    {"result": [...], "score": 0.95, "model_version": "v1"}

  2. Field Naming: Make sure the field is named “score” rather than “prediction_score”, as the Data Manager expects the score at the same level as “result” (see the sketch below).
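
To tie both points together, here is a minimal, hypothetical sketch of a predict() method that returns the score at the top level of each prediction. The import paths and the labeling-config names ("validation", "comment") are assumptions based on the snippets in this thread; adapt them to your backend:

    from label_studio_ml.model import LabelStudioMLBase
    from label_studio_ml.response import ModelResponse, PredictionValue

    class MyModel(LabelStudioMLBase):
        def predict(self, tasks, context=None, **kwargs):
            predictions = []
            for task in tasks:
                # Region names taken from the prediction shown in this issue
                region = {
                    "from_name": "validation",
                    "to_name": "comment",
                    "type": "choices",
                    "value": {"choices": ["1"]},
                }
                predictions.append(
                    PredictionValue(
                        result=[region],
                        score=0.95,  # plain float, at the same level as "result"
                        model_version=self.model_version,  # or however your backend tracks its version
                    )
                )
            return ModelResponse(predictions=predictions)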

Thank you,
Abu


@phgz (Author) commented Mar 3, 2025

How do I change model version in the UI? I don't see how to set it.

Image

As for the return format, yes, it follows the format you specified.

            pv = PredictionValue(
                score=prediction,
                result=[region],
                model_version=self.get("model_version"),
            )
            predictions.append(pv)

        return ModelResponse(predictions=predictions)

> Renaming “prediction_score” to "score"

As for the field naming, I'm not sure what you are referring to.

@heidi-humansignal (Collaborator) commented:

Hello,

Under the predictions tab, do you see different predictions there?

Thank you,
Abu


@phgz (Author) commented Mar 5, 2025

All I see is this, and the ... menu only has a Delete option:

Image
