This is the API for the Transformer Lab App, which is the main repo for this project. Please go to the Transformer Lab App repository to learn more and access documentation.
Use the instructions below if you are installing and running the API manually on a server.
- An NVIDIA GPU + Linux or Windows with WSL2 support
- or MacOS with Apple Silicon
- If you do not have a GPU, or you have an Intel Mac, the API will still run but will only be able to perform inference, not tasks like training
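As a rough sketch, you can check which of the supported platforms you are on with only the Python standard library (this is an illustration; the API's own capability detection may work differently):

```python
import platform

def detect_platform():
    """Roughly classify the host against the supported platforms listed above."""
    system = platform.system()
    machine = platform.machine()
    if system == "Darwin" and machine == "arm64":
        return "macos-apple-silicon"
    if system == "Linux":
        # Could be bare Linux or Windows via WSL2; GPU detection
        # (e.g. checking for nvidia-smi) would be a separate step.
        return "linux"
    return "inference-only"

print(detect_platform())
```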
You can use the install script to get the application running:
```bash
./install.sh
```
This will install conda if it's not installed, and then use conda and pip to install the rest of the application requirements.
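A minimal sketch of the kind of check the script performs before downloading conda (the actual install.sh logic may differ):

```shell
#!/bin/sh
# Return "found" if a tool is already on PATH, "missing" otherwise.
# An install script would run a check like this before fetching conda.
has_tool() {
  if command -v "$1" >/dev/null 2>&1; then
    echo "found"
  else
    echo "missing"
  fi
}

has_tool conda   # prints "found" only if conda is already installed
```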
If you prefer to install the API without using the install script, you can follow the steps on this page:

https://transformerlab.ai/docs/install/advanced-install
Once conda and dependencies are installed, run the following:
```bash
./run.sh
```
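Once run.sh is up, a quick way to confirm the server is listening is a plain TCP probe (the port below is a placeholder assumption; use whatever port run.sh reports on startup):

```python
import socket

def port_open(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# 8000 is a placeholder; substitute the port your configuration uses.
if port_open("127.0.0.1", 8000):
    print("API is listening")
else:
    print("API not reachable yet")
```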
Dependencies are managed with uv (installed separately). Add new requirements to requirements.in and regenerate the corresponding requirements-uv.txt files by running the following two commands:
```bash
# default GPU-enabled requirements
uv pip compile requirements.in -o requirements-uv.txt

# requirements for systems without GPU support
uv pip compile requirements.in -o requirements-no-gpu-uv.txt --extra-index-url=https://download.pytorch.org/whl/cpu

# strip the +cpu suffix that uv pip compile adds to the PyTorch packages,
# since that suffix breaks the install
sed -i 's/\+cpu//g' requirements-no-gpu-uv.txt
```
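Note that the sed invocation above uses GNU sed syntax; on macOS, sed -i requires a backup-suffix argument. A portable alternative is a small Python one-off that performs the same +cpu stripping (the file name is assumed to match the command above):

```python
from pathlib import Path

def strip_cpu_suffix(text):
    """Remove the '+cpu' local-version suffix that uv pip compile
    appends to the PyTorch package pins."""
    return text.replace("+cpu", "")

# Equivalent to: sed -i 's/\+cpu//g' requirements-no-gpu-uv.txt
path = Path("requirements-no-gpu-uv.txt")
if path.exists():
    path.write_text(strip_cpu_suffix(path.read_text()))
```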