Request for Inference Code on Custom Datasets #101

Open
dongqi-me opened this issue Oct 2, 2024 · 6 comments

Comments

@dongqi-me

Dear VideoLLaMA2 Maintainers,

I have been using your library and successfully fine-tuned models with LoRA and QLoRA on my own dataset. However, I noticed that the repository does not include code for inference on custom datasets after fine-tuning.

Would it be possible to share or provide guidance on how to perform inference on a custom dataset using the fine-tuned models? It would greatly assist me in completing my project.

Thank you very much for your time and for maintaining such a valuable resource.

@sjghh

sjghh commented Oct 9, 2024

Hello, I am also planning to fine-tune a model on a dataset I have prepared. Could you please advise whether I should fine-tune the base model, or whether it is possible to continue fine-tuning from the chat model?

@thisurawz1


You have to update the VideoLLaMA2 repository to the latest commit, then use the following script. You only need to change the model path in the original inference script, that's all.

import sys
sys.path.append('./')
from videollama2 import model_init, mm_infer
from videollama2.utils import disable_torch_init


def inference():
    disable_torch_init()

    # Video inference. Note that the image block below overrides these
    # variables, so comment out whichever modality you do not want to run.
    modal = 'video'
    modal_path = 'assets/cat_and_chicken.mp4'
    instruct = 'What animals are in the video, what are they doing, and how does the video feel?'
    # Reply:
    # The video features a kitten and a baby chick playing together. The kitten is seen laying on the floor while the baby chick hops around. The two animals interact playfully with each other, and the video has a cute and heartwarming feel to it.

    # Image inference (overrides the video settings above).
    modal = 'image'
    modal_path = 'assets/sora.png'
    instruct = 'What is the woman wearing, what is she doing, and how does the image feel?'
    # Reply:
    # The woman in the image is wearing a black coat and sunglasses, and she is walking down a rain-soaked city street. The image feels vibrant and lively, with the bright city lights reflecting off the wet pavement, creating a visually appealing atmosphere. The woman's presence adds a sense of style and confidence to the scene, as she navigates the bustling urban environment.

    # Base model weights. For inference with your fine-tuned model, point
    # model_path at your fine-tuned weights directory instead, e.g.:
    # model_path = 'work_dirs/videollama2/finetune_downstream_sft_settings_qlora'
    model_path = 'DAMO-NLP-SG/VideoLLaMA2-7B'
    model, processor, tokenizer = model_init(model_path)
    output = mm_infer(processor[modal](modal_path), instruct, model=model, tokenizer=tokenizer, do_sample=False, modal=modal)

    print(output)


if __name__ == "__main__":
    inference()
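
Beyond a single demo clip, a minimal sketch for running the fine-tuned model over an entire custom dataset might look like the following. The annotation format (a hypothetical JSON list with 'video' and 'question' fields) and all file paths here are assumptions; adapt them to however your own dataset is organized.

import json
import sys
sys.path.append('./')

from videollama2 import model_init, mm_infer
from videollama2.utils import disable_torch_init


def batch_inference(model_path, annotation_file, output_file):
    """Run the fine-tuned model over every sample in a custom annotation file."""
    disable_torch_init()

    # Load the model once, outside the loop.
    model, processor, tokenizer = model_init(model_path)

    # Assumed annotation format: a JSON list of entries such as
    # {"video": "path/to/clip.mp4", "question": "What happens in the video?"}
    # Adjust the keys below to match your own dataset.
    with open(annotation_file) as f:
        samples = json.load(f)

    results = []
    for sample in samples:
        modal = 'video'
        tensor = processor[modal](sample['video'])
        answer = mm_infer(tensor, sample['question'], model=model,
                          tokenizer=tokenizer, do_sample=False, modal=modal)
        results.append({'video': sample['video'],
                        'question': sample['question'],
                        'answer': answer})

    with open(output_file, 'w') as f:
        json.dump(results, f, indent=2)


if __name__ == "__main__":
    # Hypothetical paths: point model_path at your fine-tuned weights directory
    # and annotation_file at your own test split.
    batch_inference(
        model_path='work_dirs/videollama2/finetune_downstream_sft_settings_qlora',
        annotation_file='custom_test.json',
        output_file='custom_test_predictions.json',
    )

Loading the model once outside the loop avoids re-initializing the weights for every sample.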

@SHIVAM3052


Hi Dongqi,

Can you share your custom dataset fine-tuning code or repo with me?

@Evanhimself

I have also successfully fine-tuned the model using LoRA on my own dataset, but when I run the original inference script, the outputs are all empty. Why is that?

@LiangMeng89


Hello, I'm a PhD student at ZJU. I also use VideoLLaMA2 in my own research. We have created a WeChat group to discuss VideoLLaMA2 issues and help each other; would you like to join us? Please contact me: WeChat number: LiangMeng19357260600, phone number: +86 19357260600, e-mail: [email protected].

@LiangMeng89

Hello, I'm a PhD student at ZJU. I also use VideoLLaMA2 in my own research. We have created a WeChat group to discuss VideoLLaMA2 issues and help each other; would you like to join us? Please contact me: WeChat number: LiangMeng19357260600, phone number: +86 19357260600, e-mail: [email protected].
