FlagScale is a Large Language Model (LLM) toolkit based on open-source projects.

Latest News

  • [2024/11] Released v0.6.0:
    • Introduced general multi-dimensional heterogeneous parallelism and CPU-based communication between different chips.
    • Added the full support for LLaVA-OneVision, achieving SOTA results on the Infinity-MM dataset.
    • Open-sourced the optimized CFG implementation and accelerated the generation and understanding tasks for Emu3.
    • Implemented the auto-tuning feature and enhanced the CI/CD system.
  • [2024/4] Released v0.3: Achieved heterogeneous hybrid training of the Aquila2-70B-Expr model on a cluster using both NVIDIA and Iluvatar chips. Adapted the Aquila2 series to AI chips from six different manufacturers.
  • [2023/11] Released v0.2: Introduced training support for Aquila2-70B-Expr, enabling heterogeneous training across chips with the same or compatible architectures.
  • [2023/10] Released v0.1: Supported Aquila models with optimized training schemes for Aquila2-7B and Aquila2-34B, including parallel strategies, optimizations, and hyper-parameter settings.

About

FlagScale is a comprehensive toolkit designed to support the entire lifecycle of large models, developed with the backing of the Beijing Academy of Artificial Intelligence (BAAI). It builds on the strengths of several prominent open-source projects, including Megatron-LM and vllm, to provide a robust, end-to-end solution for managing and scaling large models.

The primary objective of FlagScale is to enable seamless scalability across diverse hardware architectures while maximizing computational resource efficiency and enhancing model performance. By offering essential components for model development, training, and deployment, FlagScale aims to serve as an indispensable toolkit for optimizing both the speed and effectiveness of large model workflows.

Quick Start

FlagScale leverages Hydra for configuration management. The configurations are organized into two levels: an outer experiment-level YAML file and an inner task-level YAML file.

  • The experiment-level YAML file defines the experiment directory, backend engine, task type, and other related environmental configurations.
  • The task-level YAML file specifies the model, dataset, and parameters for specific tasks such as training or inference.

All valid configurations in the task-level YAML file correspond to the arguments used in backend engines such as Megatron-LM and vllm, with hyphens (-) replaced by underscores (_). For a complete list of available configurations, please refer to the backend engine documentation. Simply copy and modify the existing YAML files in the examples folder to get started.
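
As a rough illustration of this two-level layout, the sketch below shows what an experiment-level file and a task-level file might contain. The file names, section names, and values are assumptions made for this example rather than the repository's actual schema; treat the YAML files in the examples folder as the authoritative reference.

    # Experiment-level YAML, e.g. examples/aquila/conf/config.yaml (illustrative contents)
    defaults:
      - train: train_aquila_7b        # selects the task-level file below
      - _self_

    experiment:
      exp_dir: ./outputs/aquila2_7b   # experiment directory
      task:
        type: train                   # task type, e.g. train or inference
        backend: megatron             # backend engine used for this task
    action: run                       # can be overridden on the command line (action=run / action=stop)

    # Task-level YAML, e.g. examples/aquila/conf/train/train_aquila_7b.yaml (illustrative contents)
    model:
      num_layers: 32                  # maps to Megatron-LM's --num-layers
      hidden_size: 4096               # maps to --hidden-size
    data:
      data_path: /path/to/dataset     # maps to --data-path (hyphens become underscores)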

Setup

We recommend using the latest release of NGC's PyTorch container for setup.

  1. Clone the repository:

    git clone https://github.com/FlagOpen/FlagScale.git
  2. Install the dependencies:

    cd FlagScale
    pip install -r requirements/requirements-dev.txt

    If you only need a specific backend engine, you can edit the requirements files to install just the packages that engine requires.

  3. Install the packages with customized extensions:

    # install the customized vllm package bundled in the repository
    cd vllm
    pip install .

    # install the customized megatron-energon package (return to the repository root first)
    cd ../megatron-energon
    pip install .

Run a Task

FlagScale provides a unified runner for various tasks, including training and inference. Simply specify the configuration file to run the task with a single command. The runner will automatically load the configurations and execute the task. The following example demonstrates how to run a distributed training task.

  1. Start the distributed training job:

    python run.py --config-path ./examples/aquila/conf --config-name config action=run

    The data_path in the demo config is the path to training datasets in the Megatron-LM format. To get the pretraining process running quickly, we also provide a small preprocessed dataset (bin and idx files) from the Pile dataset; the illustrative snippet after these steps shows how this path might appear in the task-level YAML.

  2. Stop the distributed training job:

    python run.py --config-path ./examples/aquila/conf --config-name config action=stop
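
As a hedged illustration of where that data path lives, the snippet below shows a possible task-level entry for the provided Pile demo data; the key name and file prefix are assumptions following the hyphen-to-underscore convention described in Quick Start, not values taken from the shipped example files.

    # Task-level YAML snippet (illustrative)
    data:
      data_path: /path/to/pile_demo_text_document   # prefix shared by the provided .bin and .idx files

After updating the path, re-running the action=run command above starts pretraining on the demo data.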

License

This project is licensed under the Apache License, Version 2.0. It also contains third-party components distributed under other open-source licenses; see the LICENSE file for more information.
