Added benchmark notebook to generate videos and run benchmarks #285

Closed · wants to merge 11 commits
10 changes: 10 additions & 0 deletions README.md
@@ -118,6 +118,16 @@ The instructions below assume you're on Linux.
```
pip install torchcodec
```

## Benchmark results

The following results were obtained by running [benchmark_decoders.ipynb](./benchmarks/decoders/benchmark_decoders.ipynb) on a lightly-loaded 22-core machine. We first measure the operation latency for various seek and decode patterns in a loop from a single Python thread
Member

Our README and other files restrict the line length to ~80 characters; we should try to keep it this way (it also makes it easier to locate review comments on specific lines).

for a single video, and then compute the FPS (= 1 / latency). Error bars show the FPS corresponding to the p25 and p75 operation latencies.
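As a rough sketch of how such numbers could be derived (the latency values below are made up for illustration; the real ones come from the timing loop in the notebook):

```python
import statistics

# Hypothetical per-operation latencies in seconds, as collected by a timing
# loop around a single seek/decode operation. Real values come from the
# benchmark notebook.
latencies = sorted([0.012, 0.015, 0.011, 0.014, 0.013, 0.016, 0.012, 0.015])

# statistics.quantiles with n=4 returns the quartile cut points [p25, p50, p75].
p25, _, p75 = statistics.quantiles(latencies, n=4)

median_fps = 1.0 / statistics.median(latencies)
# Lower latency means higher FPS, so the p25 latency gives the upper error bar
# and the p75 latency gives the lower one.
fps_high = 1.0 / p25
fps_low = 1.0 / p75

print(f"median FPS: {median_fps:.1f} (error bars: {fps_low:.1f}-{fps_high:.1f})")
```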

If you are running multiple copies of the decoder on multiple threads (for example, in a DataLoader), set `num_threads=1` for the best throughput. If you care about single-operation latency rather than the throughput of many concurrent operations, set `num_threads=0` to use all cores of your machine.
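The reasoning behind this recommendation can be illustrated with a generic sketch (the `decode_one` function below is a CPU-bound stand-in, not the real decoder): when many workers run concurrently, giving each one a single thread avoids oversubscribing the cores.

```python
from concurrent.futures import ThreadPoolExecutor

# Stand-in for a single-threaded decode call; a real benchmark would decode a
# frame here with a decoder configured with num_threads=1.
def decode_one(_):
    return sum(i * i for i in range(10_000))

# Throughput-oriented setup: many concurrent "decoders", each single-threaded,
# mirroring the DataLoader case. The pool provides the parallelism, so the
# per-worker decode should not spawn extra threads of its own.
with ThreadPoolExecutor(max_workers=8) as pool:
    results = list(pool.map(decode_one, range(32)))

print(len(results))
```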

![Benchmark Results](./benchmarks/decoders/benchmark_results.png)


## Planned future work

We are actively working on the following features: