I'd appreciate help with the assumptions I'm forming from the course, and I have a question at the end. Let's use the code at the following location in the course: https://www.learnpytorch.io/02_pytorch_classification/#5-improving-a-model-from-a-model-perspective

My assumptions:
A1: The TRAINING portion of the loop is the only part that influences/improves the model.

My questions:
Q1: Why is the testing portion placed inside the loop? Is this just to let us see whether the iterations of training are also improving the model's accuracy on the test feature values?

My observations:
O1: Placing the testing code inside the loop that does training, then separately placing code to make predictions (e.g. y_preds) outside of the loop, confuses things for me.

Can someone please confirm/correct my assumptions and respond to my questions/observations? Thanks. Btw, thank you so much, Daniel, for your amazing course. You are a great teacher.
Greetings @tljthree ,

Q1: Yes, I think you have understood it correctly. Keeping the testing step inside the loop is useful because it lets you track the model's performance as training progresses. For example, it allows you to detect overfitting (training performance keeps improving while test performance plateaus or declines). Also, by tracking your model's performance you can evaluate the effect of using different hyperparameter values (i.e., hyperparameter tuning). These are just two examples.

O1: Yes, you can do that. Making predictions outside the loop is usually what you do when you want to use your trained model on new data. Perhaps you can think of this as the phase where you put your model to work: for example, if you trained a model to classify cats, you start using it as a cat classifier.

Best wishes.
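To make the three phases concrete, here is a minimal sketch of the loop pattern being discussed. It is not the course's exact code: the toy data, layer sizes, and variable names (`X_train`, `y_preds`, etc.) are illustrative stand-ins, but the structure (training updates weights inside the loop, testing inside the loop is monitoring only, prediction happens after the loop) matches the course's section 5 example.

```python
import torch
from torch import nn

torch.manual_seed(42)

# Toy binary-classification data (stand-ins for the course's make_circles data)
X_train, y_train = torch.randn(80, 2), torch.randint(0, 2, (80,)).float()
X_test, y_test = torch.randn(20, 2), torch.randint(0, 2, (20,)).float()

model = nn.Sequential(nn.Linear(2, 8), nn.ReLU(), nn.Linear(8, 1))
loss_fn = nn.BCEWithLogitsLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

for epoch in range(100):
    # --- TRAINING: the only part that updates the model's weights (A1) ---
    model.train()
    logits = model(X_train).squeeze()
    loss = loss_fn(logits, y_train)
    optimizer.zero_grad()
    loss.backward()      # gradients computed here...
    optimizer.step()     # ...and weights updated here

    # --- TESTING: inside the loop purely for monitoring; no weight updates (Q1) ---
    model.eval()
    with torch.inference_mode():  # disables gradient tracking
        test_loss = loss_fn(model(X_test).squeeze(), y_test)
    # Comparing loss vs. test_loss per epoch is how you would spot overfitting.

# --- PREDICTION: outside the loop, using the finished model (O1) ---
model.eval()
with torch.inference_mode():
    y_preds = torch.round(torch.sigmoid(model(X_test).squeeze()))
```

Note that the testing block never calls `loss.backward()` or `optimizer.step()`, which is why it cannot change the model; it only reads its current performance.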