Hi, the data loading method alone should have no effect on model training speed. You need to check the size of the input training images and your data preprocessing code. In particular, an oversized image scale will increase training duration for any model.
I tried changing the image size to (256, 256), but the same phenomenon occurs during training: it runs very fast in bursts, then pauses for a long time, during which the GPU shows no activity. After the pause, the GPU works only briefly while the training progress updates.
But using the same data loading method with other models (CNN- or transformer-based) does not show this phenomenon. What are the possible reasons for this?
I changed the data loading to the commonly used approach of reading batches by path index, instead of first converting all the data to .npy files and loading it into a variable all at once.
The original code's approach requires a large amount of memory, but training is fast.
The rewritten loading approach uses little memory, but training is very slow and stutters.
I'd like to ask: what causes this? Is Mamba only suited to the preloading style of data loading? If speed can only be improved by preloading all the data first, that seems pointless for real-time inference in practical applications.
Below is the rewritten data loading code (most models use this approach):
class isic_loader(Dataset):
    """Dataset class for BraTS datasets."""
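The stutter pattern described above (bursts of GPU activity followed by long idle pauses) is typical when each batch's disk reads happen synchronously in the training process. A minimal sketch of the path-indexed style, assuming one .npy file per sample (`LazyNpyDataset` and the file layout are hypothetical, not from the repository's code):

```python
import numpy as np
import torch
from torch.utils.data import Dataset, DataLoader

class LazyNpyDataset(Dataset):
    """Hypothetical sketch: load one sample from disk per __getitem__
    instead of preloading all .npy data into memory up front."""

    def __init__(self, paths):
        self.paths = paths  # assumed: list of per-sample .npy file paths

    def __len__(self):
        return len(self.paths)

    def __getitem__(self, idx):
        # The disk read happens here, per sample. With num_workers=0 this
        # blocks the training loop, and the GPU sits idle between batches.
        x = np.load(self.paths[idx]).astype(np.float32)
        return torch.from_numpy(x)

# Worker subprocesses prefetch upcoming batches in the background, so
# disk I/O overlaps with GPU compute instead of stalling it:
# loader = DataLoader(LazyNpyDataset(paths), batch_size=8,
#                     num_workers=4, pin_memory=True)
```

If the stalls persist with `num_workers > 0`, the per-sample preprocessing itself may be the bottleneck rather than the model architecture.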