Fix out of date documentation & remove friction points (pytorch#78682)
Fixes various friction points in the documentation for onboarding new users and removes instructions that are no longer valid
Changes include:
- Listing prerequisites earlier, so that devs can ensure they're met before encountering error messages
- Removing linter invocations that are no longer valid
- Modifying the instructions for installing mkl packages so that they only apply to x86-based CPUs
[skip ci]
Pull Request resolved: pytorch#78682
Approved by: https://github.com/seemethere, https://github.com/janeyx99, https://github.com/malfet
If you want to have no-op incremental rebuilds (which are fast), see the section below titled "Make no-op build fast."
 
-3. Install PyTorch in `develop` mode:
+3. Follow the instructions for [installing PyTorch from source](https://github.com/pytorch/pytorch#from-source), except that when it's time to install PyTorch, you'll want to call `setup.py develop` instead of invoking `setup.py install`:
 
-The change you have to make is to replace
+Specifically, the change you have to make is to replace
 
 ```bash
 python setup.py install
@@ -125,8 +133,8 @@ python setup.py develop
 ```
 
 This mode will symlink the Python files from the current local source
-tree into the Python install. Hence, if you modify a Python file, you
-do not need to reinstall PyTorch again and again. This is especially
+tree into the Python install. This way, when you modify a Python file, you
+won't need to reinstall PyTorch again and again. This is especially
 useful if you are only changing Python files.
 
 For example:
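The symlink behavior described above can be sketched with a stdlib-only toy; `mypkg` is a hypothetical module standing in for an in-place checkout, not part of PyTorch:

```python
import importlib
import sys
import tempfile
from pathlib import Path

sys.dont_write_bytecode = True  # avoid stale bytecode caches in this quick demo

# Toy sketch of a develop-mode install: Python imports straight from a
# source tree, so edits are picked up without reinstalling.
src = Path(tempfile.mkdtemp())
(src / "mypkg.py").write_text("VERSION = 'one'\n")
sys.path.insert(0, str(src))

import mypkg
print(mypkg.VERSION)  # one

# Edit the source file in place, then reload -- no reinstall step.
(src / "mypkg.py").write_text("VERSION = 'two'\n")
importlib.reload(mypkg)
print(mypkg.VERSION)  # two
```

A real `setup.py develop` install achieves the same effect by pointing the installed package at the source tree instead of copying files.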
@@ -143,10 +151,6 @@ torch as it is not installed`; next run `python setup.py clean`. After
 that, you can install in `develop` mode again.
 
 ### Tips and Debugging
-
-* A prerequisite to installing PyTorch is CMake. We recommend installing it with [Homebrew](https://brew.sh/)
-with `brew install cmake` if you are developing on MacOS or Linux system.
-* Our `setup.py` requires Python >= 3.7
 * If a commit is simple and doesn't affect any code (keep in mind that some docstrings contain code
 that is used in tests), you can add `[skip ci]` (case sensitive) somewhere in your commit message to
 [skip all build / test steps](https://github.blog/changelog/2021-02-08-github-actions-skip-pull-request-and-push-workflows-with-skip-ci/).
@@ -172,14 +176,14 @@ with `brew install cmake` if you are developing on MacOS or Linux system.
 * If you run into issues running `git submodule update --init --recursive --jobs 0`, please try the following:
-  - If you encountered error such as
+  - If you encounter an error such as
 ```
 error: Submodule 'third_party/pybind11' could not be updated
 ```
 check whether your Git local or global config file contains any `submodule.*` settings. If yes, remove them and try again.
 (please reference [this doc](https://git-scm.com/docs/git-config#Documentation/git-config.txt-submoduleltnamegturl) for more info).
 
-  - If you encountered error such as
+  - If you encounter an error such as
 ```
 fatal: unable to access 'https://github.com/pybind11/pybind11.git': could not load PEM client certificate ...
 ```
@@ -189,7 +193,7 @@ with `brew install cmake` if you are developing on MacOS or Linux system.
 openssl x509 -noout -in <cert_file> -dates
 ```
 
-  - If you encountered error that some third_party modules are not checkout correctly, such as
+  - If you encounter an error that some third_party modules are not checked out correctly, such as
 ```
 Could not find .../pytorch/third_party/pybind11/CMakeLists.txt
 ```
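The `submodule.*` check described above can be sketched as follows; the config text is a made-up example (in practice `git config --get-regexp 'submodule\.'` does the same job, and Git's config syntax only loosely matches `configparser`'s):

```python
import configparser

# Made-up example of a Git config that still carries submodule overrides;
# leftover sections like this are what the troubleshooting step removes.
cfg_text = """
[core]
repositoryformatversion = 0
[submodule "third_party/pybind11"]
url = https://github.com/pybind/pybind11.git
active = true
"""

parser = configparser.ConfigParser()
parser.read_string(cfg_text)
stale = [s for s in parser.sections() if s.startswith("submodule")]
print(stale)  # ['submodule "third_party/pybind11"']
```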
@@ -308,6 +312,12 @@ into the repo directory.
 
 ### Python Unit Testing
 
+**Prerequisites**:
+The following packages should be installed with either `conda` or `pip`:
+- `expecttest` and `hypothesis` - required to run tests
+- `mypy` - recommended for linting
+- `pytest` - recommended to run tests more selectively
+
 All PyTorch test suites are located in the `test` folder and start with
 `test_`. Run the entire test
 suite with
@@ -340,10 +350,6 @@ in `test/test_jit.py`. Your command would be:
 python test/test_jit.py TestJit.test_Sequential
 ```
 
-The `expecttest` and `hypothesis` libraries must be installed to run the tests. `mypy` is
-an optional dependency, and `pytest` may help run tests more selectively.
-All these packages can be installed with `conda` or `pip`.
-
 **Weird note:** In our CI (Continuous Integration) jobs, we actually run the tests from the `test` folder and **not** the root of the repo, since there are various dependencies we set up for CI that expect the tests to be run from the test folder. As such, there may be some inconsistencies between local testing and CI testing; if you observe an inconsistency, please [file an issue](https://github.com/pytorch/pytorch/issues/new/choose).
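The `TestClass.test_name` selection syntax shown above is plain `unittest` behavior. A minimal self-contained sketch (the test class is hypothetical, not a real PyTorch suite):

```python
import unittest

# Hypothetical stand-in for a PyTorch test class, showing the
# `python test_file.py TestClass.test_name` selection pattern.
class TestJitExample(unittest.TestCase):
    def test_sequential(self):
        layers = ["Linear", "ReLU", "Linear"]
        self.assertEqual(len(layers), 3)

# Equivalent programmatic selection of a single test method.
suite = unittest.TestSuite([TestJitExample("test_sequential")])
result = unittest.TextTestRunner(verbosity=0).run(suite)
print(result.wasSuccessful())  # True
```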
 
 ### Better local unit tests with `pytest`
@@ -365,54 +371,24 @@ command runs tests such as `TestNN.test_BCELoss` and
 
 ### Local linting
 
-You can run the same linting steps that are used in CI locally via `make`:
-
-```bash
-# Lint all files
-make lint -j 6 # run lint (using 6 parallel jobs)
-
-# Lint only the files you have changed
-make quicklint -j 6
-```
-
-These jobs may require extra dependencies that aren't dependencies of PyTorch
-itself, so you can install them via this command, which you should only have to
-run once:
+Install all prerequisites by running
 
 ```bash
 make setup_lint
 ```
 
-To run a specific linting step, use one of these targets or see the
-[`Makefile`](Makefile) for a complete list of options.
+You can now run the same linting steps that are used in CI locally via `make`:
 
 ```bash
-# Check for tabs, trailing newlines, etc.
-make quick_checks
-
-make flake8
-
-make mypy
-
-make cmakelint
-
-make clang-tidy
+make lint
 ```
 
-To run a lint only on changes, add the `CHANGED_ONLY` option:
-
-```bash
-make <name of lint> CHANGED_ONLY=--changed-only
-```
+Learn more about the linter on the [lintrunner wiki page](https://github.com/pytorch/pytorch/wiki/lintrunner).
 
-### Running `mypy`
+#### Running `mypy`
 
 `mypy` is an optional static type checker for Python. We have multiple `mypy`
-configs for the PyTorch codebase, so you can run them all using this command:
-
-```bash
-make mypy
-```
+configs for the PyTorch codebase that are automatically validated whenever the linter is run.
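As a sketch of what `mypy` validates, here is a small annotated function (illustrative only, not code from the PyTorch tree); running `mypy` over a file like this checks the annotations statically:

```python
from typing import List

# Illustrative only: mypy verifies that callers and the function body
# respect these annotations; e.g. passing a str for `factor` is flagged.
def scale(values: List[float], factor: float) -> List[float]:
    """Multiply every element by `factor`."""
    return [v * factor for v in values]

print(scale([1.0, 2.0, 3.0], 2.0))  # [2.0, 4.0, 6.0]
```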
@@ -462,18 +438,17 @@ of very low signal to reviewers.
 
 So you want to write some documentation and don't know where to start?
 PyTorch has two main types of documentation:
-- user-facing documentation.
+- **User facing documentation**:
 These are the docs that you see over at [our docs website](https://pytorch.org/docs).
-- developer facing documentation.
+- **Developer facing documentation**:
 Developer facing documentation is spread around our READMEs in our codebase and in
 the [PyTorch Developer Wiki](https://pytorch.org/wiki).
 If you're interested in adding new developer docs, please read this [page on the wiki](https://github.com/pytorch/pytorch/wiki/Where-or-how-should-I-add-documentation%3F) on our best practices for where to put it.
 
 The rest of this section is about user-facing documentation.
 - [Get the PyTorch Source](#get-the-pytorch-source)
 - [Install PyTorch](#install-pytorch)
@@ -152,16 +153,19 @@ They require JetPack 4.2 and above, and [@dusty-nv](https://github.com/dusty-nv)
 
 ### From Source
 
-If you are installing from source, you will need Python 3.7 or later and a C++14 compiler. Also, we highly recommend installing an [Anaconda](https://www.anaconda.com/distribution/#download-section) environment.
-You will get a high-quality BLAS library (MKL) and you get controlled dependency versions regardless of your Linux distro.
+#### Prerequisites
+If you are installing from source, you will need:
+- Python 3.7 or later (for Linux, Python 3.7.6+ or 3.8.1+ is needed)
+- A C++14 compatible compiler, such as clang
 
-Once you have [Anaconda](https://www.anaconda.com/distribution/#download-section) installed, here are the instructions.
+We highly recommend installing an [Anaconda](https://www.anaconda.com/distribution/#download-section) environment. You will get a high-quality BLAS library (MKL) and controlled dependency versions regardless of your Linux distro.
 
-If you want to compile with CUDA support, install
+If you want to compile with CUDA support, install the following (note that CUDA is not supported on macOS):
 - [NVIDIA CUDA](https://developer.nvidia.com/cuda-downloads) 10.2 or above
 - [NVIDIA cuDNN](https://developer.nvidia.com/cudnn) v7 or above
 - [Compiler](https://gist.github.com/ax3l/9489132) compatible with CUDA
-Note: You could refer to the [cuDNN Support Matrix](https://docs.nvidia.com/deeplearning/cudnn/pdf/cuDNN-Support-Matrix.pdf) for cuDNN versions with the various supported CUDA, CUDA driver and NVIDIA hardwares
+
+Note: You could refer to the [cuDNN Support Matrix](https://docs.nvidia.com/deeplearning/cudnn/pdf/cuDNN-Support-Matrix.pdf) for cuDNN versions with the various supported CUDA, CUDA driver, and NVIDIA hardware
 
 If you want to disable CUDA support, export the environment variable `USE_CUDA=0`.
 Other potentially useful environment variables may be found in `setup.py`.
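A minimal sketch of how a build script might honor a flag like `USE_CUDA=0`; the `cuda_enabled` helper is hypothetical, not PyTorch's actual `setup.py` logic:

```python
import os

# Hypothetical helper: treat USE_CUDA=0 as "disable CUDA",
# defaulting to enabled when the variable is unset.
def cuda_enabled(environ=None) -> bool:
    env = os.environ if environ is None else environ
    return env.get("USE_CUDA", "1") != "0"

print(cuda_enabled({"USE_CUDA": "0"}))  # False
print(cuda_enabled({}))                 # True
```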
@@ -177,25 +181,33 @@ Other potentially useful environment variables may be found in `setup.py`.
Note that if you are using [Anaconda](https://www.anaconda.com/distribution/#download-section), you may experience an error caused by the linker:
@@ -232,16 +247,14 @@ error: command 'g++' failed with exit status 1
 
 This is caused by `ld` from the Conda environment shadowing the system `ld`. You should use a newer version of Python that fixes this issue. The recommended Python versions are 3.7.6+ and 3.8.1+.
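To see which `ld` your shell resolves first (the shadowing problem described above), a quick stdlib check; this is purely diagnostic, and the path printed depends on your environment:

```python
import shutil

# Diagnostic sketch: the first `ld` on PATH is the one the build uses;
# if it lives inside a conda env, it may shadow the system linker.
ld_path = shutil.which("ld")
print(ld_path)
```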
@@ -255,17 +268,20 @@ come with Visual Studio Code by default.
 
 If you want to build legacy python code, please refer to [Building on legacy code and CUDA](https://github.com/pytorch/pytorch/blob/master/CONTRIBUTING.md#building-on-legacy-code-and-cuda)
 
-Build with CPU
+**CPU-only builds**
+
+In this mode PyTorch computations will run on your CPU, not your GPU.
 
-It's fairly easy to build with CPU.
 ```cmd
 conda activate
 python setup.py install
 ```
 
 Note on OpenMP: The desired OpenMP implementation is Intel OpenMP (iomp). In order to link against iomp, you'll need to manually download the library and set up the build environment by tweaking `CMAKE_INCLUDE_PATH` and `LIB`. The instructions [here](https://github.com/pytorch/pytorch/blob/master/docs/source/notes/windows.rst#building-from-source) are an example for setting up both MKL and Intel OpenMP. Without these configurations for CMake, the Microsoft Visual C OpenMP runtime (vcomp) will be used.
 
-Build with CUDA
+**CUDA based build**
+
+In this mode PyTorch computations will leverage your GPU via CUDA for faster number crunching.
 
 [NVTX](https://docs.nvidia.com/gameworks/content/gameworkslibrary/nvtx/nvidia_tools_extension_library_nvtx.htm) is needed to build PyTorch with CUDA.
 NVTX is part of the CUDA distribution, where it is called "Nsight Compute". To install it onto an already installed CUDA, run the CUDA installation once again and check the corresponding checkbox.