
Issues with creating a subclass using PrivacyEngine #728

Closed

grim-hitman0XX opened this issue Feb 4, 2025 · 2 comments

Comments

@grim-hitman0XX
🐛 Bug

There seems to be an issue with creating a subclass that uses PrivacyEngine as a parent class. Specifically, if I don't redefine the _prepare_model() function in the subclass and instead call it as self._prepare_model() or super()._prepare_model(), I get the following type of error:

TypeError: PrivacyEngine._prepare_model() got an unexpected keyword argument 'max_grad_norm'

Colab Notebook

You can find the reproducible code here: Opacus Bug Report

To Reproduce

Steps to reproduce the behavior:

It's a pretty straightforward notebook with all the packages and support Google Colab has to offer. All you need to do is run pip install opacus to get opacus and the required dependencies.
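For reference, here is a minimal sketch of the failing pattern (the class and method names are hypothetical stand-ins mirroring the notebook's structure, which is visible in the traceback below):

import torch.nn as nn
from opacus import PrivacyEngine

class TestEngine(PrivacyEngine):  # hypothetical subclass, mirrors the notebook
    def testing(self, module, batch_first, max_grad_norm, loss_reduction, grad_sample_mode):
        # On the PyPI release, _prepare_model() does not accept max_grad_norm,
        # so this call raises a TypeError; the main branch does accept it.
        return self._prepare_model(
            module,
            batch_first=batch_first,
            max_grad_norm=max_grad_norm,
            loss_reduction=loss_reduction,
            grad_sample_mode=grad_sample_mode,
        )

test = TestEngine()
module = nn.Linear(4, 2)
test.testing(module, True, 1.0, "mean", "hooks")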

TypeError                                 Traceback (most recent call last)
in <cell line: 0>()
----> 1 test.testing(module, True, 1.0, "mean", "hooks")

in testing(self, module, batch_first, max_grad_norm, loss_reduction, grad_sample_mode)
      2 def testing(self, module, batch_first, max_grad_norm, loss_reduction, grad_sample_mode):
      3     print("Here")
----> 4     module = self._prepare_model(  # replacing self with super() also gives same error
      5         module,
      6         batch_first=batch_first,

TypeError: PrivacyEngine._prepare_model() got an unexpected keyword argument 'max_grad_norm'

This is the entire error stack. I hope it helps; it's easy to follow and to see where the issue stems from.

Expected behavior

There should be no error, unless there's a mismatch between the library installed from pip and the contents on GitHub, because a max_grad_norm argument clearly exists in the _prepare_model() function of the PrivacyEngine class. If it's a mismatch, I hope it gets flagged; otherwise, the argument should be passed through normally, since max_grad_norm is an important parameter that controls gradient clipping.

Environment

Please copy and paste the output from our
environment collection script
(or fill out the checklist below manually).

You can get the script and run it with:

wget https://raw.githubusercontent.com/pytorch/pytorch/master/torch/utils/collect_env.py
# For security purposes, please check the contents of collect_env.py before running it.
python collect_env.py
  • PyTorch Version (e.g., 1.0): 2.5.1+cu124
  • OS (e.g., Linux): Ubuntu 22.04.4 LTS (x86_64)
  • How you installed PyTorch (conda, pip, source): pip
  • Build command you used (if compiling from source): -
  • Python version: 3.11.11
  • CUDA/cuDNN version: 12.5.82
  • GPU models and configuration: -
  • Any other relevant information: -

Additional context

I encountered this error while adding functionality to PrivacyEngine for a specific project. Making changes in the source file seemed like a bad idea, so I created a subclass with the added functionality instead. Another observation: the error goes away when I comment out max_grad_norm as an argument.
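In case it helps others in the meantime, here is a workaround sketch that keeps a subclass working on both the PyPI release and the main branch by forwarding max_grad_norm only when the installed _prepare_model() accepts it (CompatEngine and prepare_compat are hypothetical names, not opacus APIs):

import inspect

from opacus import PrivacyEngine

class CompatEngine(PrivacyEngine):  # hypothetical subclass
    def prepare_compat(self, module, *, max_grad_norm, **kwargs):
        # Forward max_grad_norm only if the installed opacus version
        # accepts it; the current PyPI release's _prepare_model() does not.
        if "max_grad_norm" in inspect.signature(self._prepare_model).parameters:
            kwargs["max_grad_norm"] = max_grad_norm
        return self._prepare_model(module, **kwargs)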

@iden-kalemaj
Contributor

Hi there, thank you for the clear description of the issue and the example. The issue is due to a difference between the opacus package and the nightly version of opacus, which now adds max_grad_norm as an argument to _prepare_model.

To use the nightly version (from Colab):

! git clone https://github.com/pytorch/opacus.git
%cd opacus
! pip install -e .

Or you can remove max_grad_norm if you prefer to stick to the package. We will update the package to match the latest version soon. For more context, the main difference between the two is the addition of ghost clipping, a much more memory-efficient way to perform DP-SGD.
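For anyone curious, here is a rough sketch of how ghost clipping is enabled on the main branch (treat the exact API as an assumption and check the current docs; note that make_private also returns the wrapped criterion in this mode):

import torch
import torch.nn as nn
from opacus import PrivacyEngine

model = nn.Linear(16, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.05)
criterion = nn.CrossEntropyLoss()
dataset = torch.utils.data.TensorDataset(
    torch.randn(64, 16), torch.randint(0, 2, (64,))
)
train_loader = torch.utils.data.DataLoader(dataset, batch_size=8)

privacy_engine = PrivacyEngine()
# grad_sample_mode="ghost" enables ghost clipping: per-sample gradient
# norms are computed without materializing per-sample gradients.
model, optimizer, criterion, train_loader = privacy_engine.make_private(
    module=model,
    optimizer=optimizer,
    data_loader=train_loader,
    criterion=criterion,
    noise_multiplier=1.0,
    max_grad_norm=1.0,
    grad_sample_mode="ghost",
)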

@iden-kalemaj
Contributor

Closing this issue due to inactivity, but feel free to re-open if there are follow-up questions.
