There seems to be an issue when creating a subclass with PrivacyEngine as the parent class. Specifically, if I don't redefine the _prepare_model() function in the subclass and just call it as self._prepare_model() or super()._prepare_model(), I get the following error:
TypeError: PrivacyEngine._prepare_model() got an unexpected keyword argument 'max_grad_norm'
To Reproduce
The reproducible code is in the Colab notebook linked in this issue (Opacus Bug Report). It's a straightforward notebook using the packages Google Colab provides out of the box; all you need to do is pip install opacus to get opacus and the required dependencies.
in testing(self, module, batch_first, max_grad_norm, loss_reduction, grad_sample_mode)
2 def testing(self,module,batch_first, max_grad_norm, loss_reduction, grad_sample_mode):
3 print("Here")
----> 4 module = self._prepare_model(  # replacing self with super() also gives the same error
5 module,
6 batch_first=batch_first,
TypeError: PrivacyEngine._prepare_model() got an unexpected keyword argument 'max_grad_norm'
This is the entire error stack. I hope this helps; it should make it easy to see where the issue stems from.
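The failure mode can be sketched without opacus installed at all. The classes below are hypothetical stand-ins that only mimic the reported signatures (an installed _prepare_model without a max_grad_norm parameter), not the real opacus code:

```python
class PrivacyEngine:
    """Stand-in for the pip-installed class, whose _prepare_model
    (hypothetically) lacks a max_grad_norm parameter."""

    def _prepare_model(self, module, *, batch_first=True, loss_reduction="mean"):
        return module


class CustomEngine(PrivacyEngine):
    def testing(self, module, batch_first, max_grad_norm, loss_reduction):
        # Forwarding max_grad_norm fails against the older parent signature;
        # replacing self with super() makes no difference, since both resolve
        # to the same inherited method.
        return self._prepare_model(
            module,
            batch_first=batch_first,
            max_grad_norm=max_grad_norm,  # unexpected keyword in the stand-in
            loss_reduction=loss_reduction,
        )


engine = CustomEngine()
try:
    engine.testing(object(), True, 1.0, "mean")
except TypeError as e:
    print(e)  # ... got an unexpected keyword argument 'max_grad_norm'
```

This mirrors why super() does not help here: the subclass inherits the installed method unchanged, so any keyword the installed signature lacks raises TypeError regardless of how the call is spelled.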
Expected behavior
There should be no error, unless there is a mismatch between the library installed from pip and the source on GitHub, because a max_grad_norm argument clearly exists in the _prepare_model() function of the PrivacyEngine class. If it is a mismatch, I hope it gets flagged; otherwise, the argument should be accepted normally, since max_grad_norm is an important parameter controlling gradient clipping.
Environment
wget https://raw.githubusercontent.com/pytorch/pytorch/master/torch/utils/collect_env.py
# For security purposes, please check the contents of collect_env.py before running it.
python collect_env.py
PyTorch Version (e.g., 1.0): 2.5.1+cu124
OS (e.g., Linux): Ubuntu 22.04.4 LTS (x86_64)
How you installed PyTorch (conda, pip, source): pip
Build command you used (if compiling from source): -
Python version: 3.11.11
CUDA/cuDNN version: 12.5.82
GPU models and configuration: -
Any other relevant information: -
Additional context
I encountered this error while adding functionality to the PrivacyEngine for a specific project. Modifying the source file seemed like a bad idea, so I created a subclass with the added functionality. Another observation: the error goes away when I comment out the max_grad_norm argument.
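Until the package and the GitHub source are back in sync, the subclass can guard the call so it works against either signature. A sketch with hypothetical stand-ins (a real subclass would inherit from opacus's PrivacyEngine and the guard would apply unchanged):

```python
import inspect


class PrivacyEngine:
    """Stand-in for the installed class; its _prepare_model (hypothetically)
    lacks max_grad_norm, as in the released package."""

    def _prepare_model(self, module, *, batch_first=True, loss_reduction="mean"):
        return module


class CustomEngine(PrivacyEngine):
    def prepare(self, module, batch_first, max_grad_norm, loss_reduction):
        kwargs = dict(batch_first=batch_first, loss_reduction=loss_reduction)
        # Forward max_grad_norm only if the installed parent accepts it,
        # so the same subclass runs on both the released and nightly versions.
        if "max_grad_norm" in inspect.signature(
            PrivacyEngine._prepare_model
        ).parameters:
            kwargs["max_grad_norm"] = max_grad_norm
        return self._prepare_model(module, **kwargs)


m = object()
assert CustomEngine().prepare(m, True, 1.0, "mean") is m  # no TypeError either way
```

This avoids editing the installed source file while keeping the subclass forward-compatible with the signature that does accept max_grad_norm.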
Hi there, thank you for the clear description of the issue and the example. The issue is due to a difference between the released opacus package and the nightly version of opacus, which adds max_grad_norm as an argument to _prepare_model.
You can switch to the nightly version, or remove max_grad_norm if you prefer to stick with the released package. We will update the package to match the latest version soon. For context, the main difference between the two is the addition of ghost clipping, a much more memory-efficient way to perform DP-SGD.