
How to accelerate inference by using an inference engine (e.g., Intel OpenVINO, NVIDIA TensorRT) instead of Caffe? #914

Closed
jackgao0323 opened this issue Nov 6, 2018 · 3 comments


@jackgao0323

Type of Issue

  • Help wanted

Your System Configuration

  1. OpenPose version: Latest GitHub code

  2. General configuration:

    • Installation mode: CMake
    • Operating system: Windows 10
    • Release or Debug mode: Release
    • Compiler: VS2015 Community
  3. Non-default settings:

    • 3-D Reconstruction module added?: No
    • Any other custom CMake configuration with respect to the default version?: No
  4. 3rd-party software:

    • Caffe version: Default from OpenPose
    • CMake version: CMake 3.8.0
    • OpenCV version: OpenPose default
  5. If Windows system:

    • Portable demo or compiled library?

I want to accelerate inference by using an inference engine (e.g., Intel OpenVINO, NVIDIA TensorRT) instead of Caffe.

Where in your program can I get the input that is fed to the Caffe model, and what should I do with the output once I have it?

Thank you very much.

@bushibushi
Contributor

Hello, I started a TensorRT PR last year but abandoned it after the big PIMPL refactoring broke things, just as I had less time to work on this repo. You can still find it among the unmerged PRs; it should give you ideas on how to make it work with the current repo configuration. I only got as far as replacing the main NN inference part: I made a few computation changes for other ops, but didn't replace some of the Caffe components.
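
To be concrete about the two halves of your question: the input is the preprocessed image blob OpenPose hands to Caffe (resized to the net resolution, normalized, NCHW float), and the raw output is the concatenated heatmap/PAF blob, which still needs OpenPose's post-processing (resize-and-merge, non-maximum suppression, body part connector) to become keypoints. Below is a minimal standalone sketch of just the inference replacement, using the TensorRT Caffe parser of that era; the model paths and the blob names "image" and "net_output" are assumptions based on the COCO model, and the PR wires this into OpenPose's net abstraction rather than a standalone main():

```cpp
// Minimal sketch (2018-era TensorRT API, error checking omitted for brevity).
// Paths and blob names are assumptions; adjust them to your model.
#include <NvInfer.h>
#include <NvCaffeParser.h>
#include <cuda_runtime_api.h>
#include <iostream>
#include <vector>

// TensorRT requires an ILogger implementation.
class Logger : public nvinfer1::ILogger
{
    void log(Severity severity, const char* msg) override
    {
        if (severity != Severity::kINFO)
            std::cerr << "[TRT] " << msg << std::endl;
    }
} gLogger;

// Number of elements in a binding (CHW; the batch dimension is excluded).
static size_t volume(const nvinfer1::Dims& d)
{
    size_t v = 1;
    for (int i = 0; i < d.nbDims; ++i)
        v *= d.d[i];
    return v;
}

int main()
{
    // 1. Parse the Caffe model into a TensorRT network.
    auto builder = nvinfer1::createInferBuilder(gLogger);
    auto network = builder->createNetwork();
    auto parser = nvcaffeparser1::createCaffeParser();
    const auto* blobToTensor = parser->parse(
        "pose_deploy_linevec.prototxt",    // assumed COCO model paths
        "pose_iter_440000.caffemodel",
        *network, nvinfer1::DataType::kFLOAT);
    // "net_output" is the concatenated heatmaps + PAFs blob.
    network->markOutput(*blobToTensor->find("net_output"));

    // 2. Build the engine and an execution context.
    builder->setMaxBatchSize(1);
    builder->setMaxWorkspaceSize(1 << 30);
    auto engine = builder->buildCudaEngine(*network);
    auto context = engine->createExecutionContext();
    parser->destroy();   // safe to release once the engine is built
    network->destroy();
    builder->destroy();

    // 3. Run one forward pass. The input is the same preprocessed blob
    //    OpenPose feeds Caffe; zeros are used here as a stand-in.
    const int inIdx = engine->getBindingIndex("image");
    const int outIdx = engine->getBindingIndex("net_output");
    const size_t inCount = volume(engine->getBindingDimensions(inIdx));
    const size_t outCount = volume(engine->getBindingDimensions(outIdx));
    std::vector<float> input(inCount, 0.f), output(outCount);
    void* buffers[2];
    cudaMalloc(&buffers[inIdx], inCount * sizeof(float));
    cudaMalloc(&buffers[outIdx], outCount * sizeof(float));
    cudaMemcpy(buffers[inIdx], input.data(), inCount * sizeof(float),
               cudaMemcpyHostToDevice);
    context->execute(1, buffers);
    cudaMemcpy(output.data(), buffers[outIdx], outCount * sizeof(float),
               cudaMemcpyDeviceToHost);

    // "output" now holds the raw heatmaps + PAFs; OpenPose's
    // post-processing stages turn these into keypoints.
    cudaFree(buffers[inIdx]);
    cudaFree(buffers[outIdx]);
    context->destroy();
    engine->destroy();
    return 0;
}
```

One caveat with this approach: TensorRT freezes the input resolution at engine build time, while the Caffe path reshapes the input blob per image, so a dynamic net resolution needs a rebuilt (or cached) engine per size.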

@jackgao0323
Author

@bushibushi Thank you very much. I'll take a look right now.

@bushibushi
Contributor

I'd suggest checking out a commit early in the branch, where I'm certain things worked, like 8023fb1.
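
In case it's useful: GitHub exposes unmerged PR branches on the base repo, so that commit can be fetched and checked out directly. `<PR_NUMBER>` and the local branch name here are placeholders:

```
# Fetch the PR branch from the upstream repo (substitute the PR number)
git fetch origin pull/<PR_NUMBER>/head:tensorrt-pr
# Check out the early commit where things are known to work
git checkout 8023fb1
```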
