
Issue: Unable to Use Kokoro Quantized Models in SwiftUI Example Due to Opset Compatibility Error #1791

Closed
ahadjawaid opened this issue Feb 4, 2025 · 4 comments

Comments

@ahadjawaid (Contributor) commented Feb 4, 2025

I'm trying to use the quantized models from the kokoro ONNX repo for inference in the SwiftUI example. My process is as follows:

  1. I take the quantized model and add metadata using the sherpa-onnx/scripts/kokoro/add-meta-data.py script.
  2. I place the modified model into the SwiftUI repo.
  3. I attempt to run inference but encounter the following error:
    libc++abi: terminating due to uncaught exception of type Ort::Exception: Failed to load model with error: /Users/runner/work/onnxruntime-libs/onnxruntime-libs/onnxruntime/core/graph/model_load_utils.h:56 void onnxruntime::model_load_utils::ValidateOpsetForDomain(const std::unordered_map<std::string, int> &, const logging::Logger &, bool, const std::string &, int) ONNX Runtime only *guarantees* support for models stamped with official released onnx opset versions. Opset 5 is under development and support for this is limited. The operator schemas and or other functionality may change before next ONNX release and in this case ONNX Runtime will not guarantee backward compatibility. Current official support for domain ai.onnx.ml is till opset 4.

I'm not too familiar with ONNX, so I'm unsure how to address this error. Is there a way to modify or convert the quantized model to make it compatible with ONNX Runtime? Or can we change the version of ONNX Runtime?
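For reference, here is one way to check which opsets the converted model is actually stamped with. This is a minimal sketch assuming the `onnx` Python package is installed; `model.onnx` is a placeholder for the quantized model file:

```python
import onnx

# Load the model and print every opset domain/version pair it is stamped with.
model = onnx.load("model.onnx")
for opset in model.opset_import:
    # An empty domain string denotes the default "ai.onnx" domain.
    print(opset.domain or "ai.onnx", "-> opset", opset.version)
```

If this prints `ai.onnx.ml -> opset 5`, the model carries exactly the stamp the error message complains about.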

Also, I am able to run inference using the quantized model in the original kokoro ONNX repo.

Any guidance on how to resolve this issue would be greatly appreciated!

@csukuangfj (Collaborator)

> Also, I am able to run inference using the quantized model in the original kokoro ONNX repo.

Which version of onnxruntime are you using?

sherpa-onnx uses onnxruntime 1.17.1

Please change

onnxruntime_version=1.17.1

to change the onnxruntime version used in sherpa-onnx for iOS.

@ahadjawaid (Contributor, Author)

I'm using the same version as the original repo, which is onnxruntime_version=1.17.1.

@ahadjawaid (Contributor, Author)

So, the quantized version of Kokoro works with onnxruntime_version=1.18.1. Should I make a pull request for this?

@csukuangfj (Collaborator)

Not all models in sherpa-onnx support onnxruntime > 1.17.1.

We need to find a workaround for it.
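One possible direction for such a workaround (a sketch only, not something confirmed in this thread): since the error concerns only the opset stamp for the `ai.onnx.ml` domain, the model's `opset_import` entry can be re-stamped down to 4 and the result re-validated. This assumes the model's `ai.onnx.ml` operators behave identically under opset 4; `model.onnx` and `model-opset4.onnx` are placeholder filenames:

```python
import onnx

model = onnx.load("model.onnx")
for opset in model.opset_import:
    # Re-stamp ai.onnx.ml from the unreleased opset 5 down to the supported opset 4.
    if opset.domain == "ai.onnx.ml" and opset.version > 4:
        opset.version = 4

# Structural validation; this raises if an operator in the graph is not
# valid under the re-stamped opset, in which case this workaround is unsafe.
onnx.checker.check_model(model)
onnx.save(model, "model-opset4.onnx")
```

Even if the check passes, compare the patched model's inference output against the original before relying on it, since re-stamping changes only metadata, not the operators themselves.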
