Great work from you and your team. Quick question: we are thinking about using this method with a larger Falcon model. Do you think we could then see a greater performance gap over GPT-3.5? The idea being, if a 1B model can do that, what could a 40B model do?
Not sure about that. However, we did observe that with a T5-style encoder-decoder model, a larger model achieves better performance. Due to resource limits, we did not scale to models larger than 1B.