# DeepPlace

DeepPlace is an end-to-end learning approach to the placement problem with two stages: a deep reinforcement learning (DRL) agent first places the macros sequentially, and a gradient-based optimization placer then arranges millions of standard cells. We use PPO, implemented in PyTorch, for all experiments, and adopt the GPU version of DREAMPlace as the gradient-based placer for the standard cells.
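In other words, the agent fixes macro locations one step at a time, DREAMPlace then optimizes the standard cells around them, and the resulting wirelength drives the reward. The sketch below is purely illustrative; every class and function name in it is a hypothetical stand-in (a random policy replaces the PPO agent), not the repository's API.

```python
# Illustrative sketch of the two-stage DeepPlace flow; all names here are
# hypothetical stand-ins, not the repository's actual API.
import random

class MacroPlacementEnv:
    """Toy stand-in: one action per macro, placed on a coarse grid."""
    def __init__(self, num_macros=5, grid=32):
        self.num_macros, self.grid = num_macros, grid
    def reset(self):
        self.positions = []
        return self.positions
    def step(self, action):
        self.positions.append(action)                   # fix one macro per step
        done = len(self.positions) == self.num_macros
        return self.positions, 0.0, done

def run_dreamplace(macro_positions):
    """Stage-2 stand-in: DREAMPlace would optimize standard cells around the
    fixed macros and report half-perimeter wirelength (HPWL)."""
    return float(len(macro_positions))                  # dummy wirelength

env = MacroPlacementEnv()
obs, done = env.reset(), False
while not done:                                         # stage 1: sequential macro placement
    action = random.randrange(env.grid * env.grid)      # PPO policy in the real code
    obs, _, done = env.step(action)
hpwl = run_dreamplace(obs)                              # stage 2: episode-level feedback
print("episode reward:", -hpwl)                         # lower wirelength => higher reward
```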

## Requirements

To install the requirements:

```bash
# PyTorch
conda install pytorch torchvision -c pytorch

# OpenAI Baselines for Atari preprocessing
git clone https://github.com/openai/baselines.git
cd baselines
pip install -e .

# Other requirements
pip install -r requirements.txt

# DREAMPlace installation
git clone --recursive https://github.com/limbo018/DREAMPlace.git
cd DREAMPlace
mkdir build
cd build
cmake .. -DCMAKE_INSTALL_PREFIX=your_install_path -DPYTHON_EXECUTABLE=$(which python)
make
make install

# Get benchmarks
python benchmarks/ispd2005_2015.py

# DGL installation
conda install -c dglteam dgl-cuda10.2
```
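If the build succeeds, the DREAMPlace install can be sanity-checked standalone on an ISPD2005 benchmark, following DREAMPlace's own README (`your_install_path` is the prefix passed to cmake above):

```bash
# Smoke test: run DREAMPlace by itself on one ISPD2005 benchmark
cd your_install_path
python dreamplace/Placer.py test/ispd2005/adaptec1.json
```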

## Training

### Macro Placement

```bash
python main.py --task "place" --algo ppo --use-gae --lr 2.5e-4 --clip-param 0.1 --value-loss-coef 0.5 --num-processes 1 --num-steps 2840 --num-mini-batch 4 --log-interval 1 --use-linear-lr-decay --entropy-coef 0.01
```
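For reference, `--clip-param` is the ε in PPO's clipped surrogate objective, while `--value-loss-coef` and `--entropy-coef` weight the value loss and entropy bonus in the total loss. Below is a minimal, standalone PyTorch sketch of the clipped policy loss, as an illustration of PPO in general rather than this repository's training code:

```python
import torch

def ppo_clip_loss(logp_new, logp_old, advantages, clip_param=0.1):
    """Clipped surrogate policy loss from PPO; clip_param matches --clip-param."""
    ratio = torch.exp(logp_new - logp_old)              # pi_new(a|s) / pi_old(a|s)
    unclipped = ratio * advantages
    clipped = torch.clamp(ratio, 1.0 - clip_param, 1.0 + clip_param) * advantages
    return -torch.min(unclipped, clipped).mean()        # negated: optimizers minimize
```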

### Joint Macro/Standard Cell Placement

```bash
python EDA-AI/main.py --task "fullplace" --algo ppo --use-gae --lr 2.5e-4 --clip-param 0.1 --value-loss-coef 0.5 --num-processes 1 --num-steps 2840 --num-mini-batch 4 --log-interval 1 --use-linear-lr-decay --entropy-coef 0.01
```

## Validation

```bash
python validation.py --task "place" --num-processes 1 --num-mini-batch 1 --num-steps 710 --lr 2.5e-4 --clip-param 0.1 --value-loss-coef 0.5 --entropy-coef 0.01
```

## Results

*Figure: pretraining results.*

*Figure: joint macro/standard cell ("fullplace") placement results.*