
[Usage] Where to set the tokenizer in the sparse example? #129

Closed
CharlesRiggins opened this issue Aug 29, 2024 · 1 comment

Comments

CharlesRiggins commented Aug 29, 2024

quantization_24_sparse_w4a16

I noticed that the llama7b_sparse_w4a16 example doesn't seem to offer an option for specifying a tokenizer. Is the example using a LLaMA tokenizer by default? If so, that would be incorrect when the example is run with other models.

@CharlesRiggins CharlesRiggins changed the title [Usage] Where to set the tokenizer in the sparse example [Usage] Where to set the tokenizer in the sparse example? Aug 29, 2024
markurtz (Collaborator) commented:

Good question; you can pass the tokenizer in with the tokenizer arg key. Otherwise, it will default to loading the tokenizer used by the model.
