Hello, I started a TensorRT PR last year but abandoned it after the big PIMPL refactoring broke things, just as I had less time to work on this repo. You can still find it among the unmerged PRs; it may give you ideas on how to make this work with the current repo configuration. I only got as far as replacing the main NN inference part, and I made a few computation changes for other ops, but I didn't replace some of the Caffe components.
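For context, here is a minimal sketch (not code from that PR) of what replacing the Caffe forward pass looks like with the legacy TensorRT Caffe parser from the Python API (TensorRT <= 6.x). The file paths and the output blob name are placeholders; check your deploy prototxt for the real blob name.

```python
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

def build_engine(deploy_file, model_file, output_blob):
    # Build a TensorRT engine directly from Caffe prototxt + weights.
    with trt.Builder(TRT_LOGGER) as builder, \
         builder.create_network() as network, \
         trt.CaffeParser() as parser:
        builder.max_batch_size = 1
        builder.max_workspace_size = 1 << 30  # 1 GiB of build scratch space
        # Translate the Caffe layers into a TensorRT network definition.
        blob_to_tensor = parser.parse(deploy=deploy_file, model=model_file,
                                      network=network, dtype=trt.float32)
        # Mark the concatenated heatmap+PAF blob as the engine output.
        network.mark_output(blob_to_tensor.find(output_blob))
        return builder.build_cuda_engine(network)

# Placeholder names; "net_output" is the final blob in some OpenPose
# deploy files, but verify it against the prototxt you actually use.
engine = build_engine("pose_deploy.prototxt",
                      "pose_iter_440000.caffemodel",
                      "net_output")
```

Note that the Caffe parser was deprecated in later TensorRT releases; on current versions the usual route is converting the model to ONNX first.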
Type of Issue

Your System Configuration
- OpenPose version: Latest GitHub code
- General configuration:
- Non-default settings:
- 3rd-party software:
- If Windows system:
I want to accelerate inference by using an inference engine (e.g., Intel OpenVINO or NVIDIA TensorRT) instead of Caffe.
I want to know how to obtain the input that your program feeds to the Caffe model, and what I should do with the output once I get it (a rough sketch follows below).
Thank you very much.
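Not an official answer, but a rough sketch of the pre/post-processing being asked about, using OpenCV's DNN module to stand in for Caffe. The constants follow the public COCO 18-keypoint model; the file names, the 368x368 input size, and the 0.1 confidence threshold are assumptions you should adapt.

```python
import cv2

# Placeholder file names; use the prototxt/caffemodel shipped with OpenPose.
net = cv2.dnn.readNetFromCaffe("pose_deploy_linevec.prototxt",
                               "pose_iter_440000.caffemodel")

img = cv2.imread("person.jpg")
h, w = img.shape[:2]

# OpenPose feeds BGR pixels normalized to roughly [-0.5, 0.5];
# (x - 127.5) / 255 approximates the x / 256 - 0.5 used in the C++ code.
blob = cv2.dnn.blobFromImage(img, scalefactor=1.0 / 255.0, size=(368, 368),
                             mean=(127.5, 127.5, 127.5),
                             swapRB=False, crop=False)
net.setInput(blob)
out = net.forward()  # shape (1, 57, H/8, W/8) for the COCO model

heatmaps = out[0, :19]  # 18 keypoint confidence maps + 1 background map
pafs = out[0, 19:]      # 38 part affinity field channels

# Single-person decoding: take the peak of each keypoint heatmap and
# rescale its coordinates back to the original image.
points = []
for hm in heatmaps[:18]:
    _, conf, _, (x, y) = cv2.minMaxLoc(hm)
    if conf > 0.1:
        points.append((int(x * w / hm.shape[1]), int(y * h / hm.shape[0])))
    else:
        points.append(None)  # keypoint not detected
print(points)
```

For multi-person images, the per-channel argmax is not enough: OpenPose runs non-maximum suppression on the heatmaps to find all peaks, then uses the PAF channels to greedily match body parts into distinct people (see the body-part connector code in the OpenPose sources).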