From e49b0ae8c9962d291b8ac3491e51d4b961577ed1 Mon Sep 17 00:00:00 2001 From: Kashiwade-music Date: Sun, 14 May 2023 21:55:28 +0900 Subject: [PATCH 001/257] fix: Fixed inconsistency between dev/DOCKER_DEPLOY.md and root Dockerfile - Added the time package and successfully passed the "./run_reg_test.py vtr_reg_basic" test as instructed in dev/DOCKER_DEPLOY.md. - The image is now based on Ubuntu 22.04, the officially recommended environment --- Dockerfile | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/Dockerfile b/Dockerfile index 95304df8971..71f5129301a 100644 --- a/Dockerfile +++ b/Dockerfile @@ -1,4 +1,4 @@ -FROM ubuntu:20.04 +FROM ubuntu:22.04 ARG DEBIAN_FRONTEND=noninteractive # set out workspace ENV WORKSPACE=/workspace @@ -19,6 +19,7 @@ RUN apt-get update -qq \ libtbb-dev \ python3-pip \ git \ + time \ # Install python packages && pip install -r requirements.txt \ # Cleanup From 75d48cc61ab230943e43f5b9e5f96b3128eea045 Mon Sep 17 00:00:00 2001 From: Kashiwade-music Date: Sun, 14 May 2023 21:56:06 +0900 Subject: [PATCH 002/257] docs: Update doc about Dockerfile --- dev/DOCKER_DEPLOY.md | 54 +++++++++++++++++++++++++------------------- 1 file changed, 31 insertions(+), 23 deletions(-) diff --git a/dev/DOCKER_DEPLOY.md b/dev/DOCKER_DEPLOY.md index b3c80fbe741..7f4610dae69 100644 --- a/dev/DOCKER_DEPLOY.md +++ b/dev/DOCKER_DEPLOY.md @@ -1,47 +1,55 @@ -Overview -======== +# Building VTR on Docker +## Overview Docker creates an isolated container on your system so you know that VTR will run without further configuration or affecting any other work. Our Dockerfile sets up this environment by installing all necessary Linux packages and applications as well as Perl modules. -Additionally, Cloud9 is installed, which enables the remote management of your container through browser. With Cloud9, VTR can be started easier (and even modified and recompiled) without the need to logging into a terminal.
If the Cloud9 endpoint is published outside your LAN, you can also execute VTR remotely or share your screen with other users. +## Setup +1. Install docker (Community Edition is free and sufficient for VTR): https://docs.docker.com/engine/install/ -Setup -===== +2. Clone the VTR project: -Install docker (Community Edition is free and sufficient for VTR): https://docs.docker.com/engine/installation/ + ``` + git clone https://github.com/verilog-to-routing/vtr-verilog-to-routing + ``` -Clone the VTR project: +3. CD to the VTR folder and build the docker image: -`git clone https://github.com/verilog-to-routing/vtr-verilog-to-routing` + ``` + docker build . -t vtrimg + ``` -CD to the VTR folder and build the docker image: +4. Start docker with the new image: -`docker build . -t vtrimg` + ``` + docker run -it -d --name vtr vtrimg + ``` -Start docker with the new image and connect the current volume with the workspace volume of the container: -`sudo docker run -it -d -p :8080 -v :/workspace vtrimg` +## Running +1. Attach to the Docker container. Attaching will open a shell on the `/workspace` directory within the container. +The project root directory from the docker build process is copied and placed in the `/workspace` directory. -Running -======= + ```sh + # from host computer + docker exec -it vtr /bin/bash + ``` -Open a browser (Google Chrome for example) and navigate to your host's url at the port you opened up. For example: -http://192.168.1.30:8080 +2. Ensure that a basic regression test passes: -First, use one of the terminals and compile VTR: -make && make installation/ + ```sh + # in container + ./run_reg_test.py vtr_reg_basic + ``` -Second, ensure that a basic regression test passes: -./run_reg_test.py vtr_reg_basic +3. Run and/or modify VTR in the usual way. -Third, run and/or modify VTR in the usual way. 
-Developpement Debugging -======================= +## Development Debugging + the container already comes with clang as the default compiler and with scan-build the do statistical analysis on the build set to `debug` in makefile From d576537d659b0b8e125badf56655c4c2fdc64c55 Mon Sep 17 00:00:00 2001 From: Kashiwade-music Date: Mon, 15 May 2023 08:15:45 +0900 Subject: [PATCH 003/257] docs: Update doc about Dockerfile --- dev/DOCKER_DEPLOY.md | 11 ----------- 1 file changed, 11 deletions(-) diff --git a/dev/DOCKER_DEPLOY.md b/dev/DOCKER_DEPLOY.md index 7f4610dae69..6770e27968b 100644 --- a/dev/DOCKER_DEPLOY.md +++ b/dev/DOCKER_DEPLOY.md @@ -47,14 +47,3 @@ The project root directory from the docker build process is copied and placed in 3. Run and/or modify VTR in the usual way. - -## Development Debugging - -the container already comes with clang as the default compiler and with scan-build the do statistical analysis on the build -set to `debug` in makefile - -run `scan-build make -j4` from the root VTR directory. -to output the html analysis to a specific folder, run `scan-build make -j4 -o /some/folder` - -the output is html and viewable in any browser. - From 2f96d366d17396226e6e03d8b5fdee7ed7802d70 Mon Sep 17 00:00:00 2001 From: Kashiwade-music Date: Mon, 15 May 2023 08:50:41 +0900 Subject: [PATCH 004/257] docs: Change instructions to verify if VTR is installed correctly --- dev/DOCKER_DEPLOY.md | 19 ++++++++++++++++--- 1 file changed, 16 insertions(+), 3 deletions(-) diff --git a/dev/DOCKER_DEPLOY.md b/dev/DOCKER_DEPLOY.md index 6770e27968b..14eab8cf018 100644 --- a/dev/DOCKER_DEPLOY.md +++ b/dev/DOCKER_DEPLOY.md @@ -38,12 +38,25 @@ The project root directory from the docker build process is copied and placed in docker exec -it vtr /bin/bash ``` -2. Ensure that a basic regression test passes: +1. 
Verify that VTR has been installed correctly: ```sh # in container ./vtr_flow/scripts/run_vtr_task.py regression_tests/vtr_reg_basic/basic_timing ``` The expected output is: ``` k6_N10_mem32K_40nm/single_ff OK k6_N10_mem32K_40nm/single_ff OK k6_N10_mem32K_40nm/single_wire OK k6_N10_mem32K_40nm/single_wire OK k6_N10_mem32K_40nm/diffeq1 OK k6_N10_mem32K_40nm/diffeq1 OK k6_N10_mem32K_40nm/ch_intrinsics OK k6_N10_mem32K_40nm/ch_intrinsics OK ``` 2. Run and/or modify VTR in the usual way. From 66fa02dd2ccdc40edb71bf759f300150a4b96d1d Mon Sep 17 00:00:00 2001 From: soheilshahrouz Date: Thu, 15 Feb 2024 12:00:31 -0500 Subject: [PATCH 005/257] avoid copying Region and PartitionRegion unnecessarily --- vpr/src/base/partition.cpp | 20 +++++---- vpr/src/base/partition.h | 11 +++-- vpr/src/base/partition_region.cpp | 53 ++++++++++++----------- vpr/src/base/partition_region.h | 50 +++++++++++---------- vpr/src/base/region.cpp | 4 +- vpr/src/base/region.h | 7 ++- vpr/src/base/vpr_constraints.cpp | 38 ++++++++-------- vpr/src/base/vpr_constraints.h | 30 +++++++++---- vpr/src/base/vpr_constraints_serializer.h | 12 ++--- vpr/src/base/vpr_constraints_writer.cpp | 8 ++-- vpr/src/base/vpr_constraints_writer.h | 2 +- vpr/src/draw/draw_floorplanning.cpp | 26 +++++------ vpr/src/pack/attraction_groups.cpp | 4 +- vpr/src/pack/cluster_util.cpp | 8 ++-- vpr/src/pack/constraints_report.cpp | 8 ++-- vpr/src/place/initial_placement.cpp | 6 +-- vpr/src/place/move_utils.cpp | 9 ++-- vpr/src/place/place_constraints.cpp | 50 ++++++++++----------- vpr/test/test_vpr_constraints.cpp | 25 +++++------ 19 files changed, 197 insertions(+), 174 deletions(-) diff --git a/vpr/src/base/partition.cpp b/vpr/src/base/partition.cpp index 107a8ec2d3a..6e004b86d46 100644 --- a/vpr/src/base/partition.cpp +++ b/vpr/src/base/partition.cpp @@ -1,29 +1,33 @@ #include "partition.h" #include "partition_region.h"
#include -#include +#include -const std::string Partition::get_name() { +const std::string& Partition::get_name() const{ return name; } void Partition::set_name(std::string _part_name) { - name = _part_name; + name = std::move(_part_name); } -const PartitionRegion Partition::get_part_region() { +const PartitionRegion& Partition::get_part_region() const { + return part_region; +} + +PartitionRegion& Partition::get_mutable_part_region() { return part_region; } void Partition::set_part_region(PartitionRegion pr) { - part_region = pr; + part_region = std::move(pr); } -void print_partition(FILE* fp, Partition part) { - std::string name = part.get_name(); +void print_partition(FILE* fp, const Partition& part) { + const std::string& name = part.get_name(); fprintf(fp, "partition_name: %s\n", name.c_str()); - PartitionRegion pr = part.get_part_region(); + const PartitionRegion& pr = part.get_part_region(); print_partition_region(fp, pr); } diff --git a/vpr/src/base/partition.h b/vpr/src/base/partition.h index 7ef144e22a7..9c8984b8c86 100644 --- a/vpr/src/base/partition.h +++ b/vpr/src/base/partition.h @@ -28,7 +28,7 @@ class Partition { /** * @brief Get the unique name of the partition */ - const std::string get_name(); + const std::string& get_name() const; /** * @brief Set the name of the partition @@ -46,7 +46,12 @@ class Partition { /** * @brief Get the PartitionRegion (union of rectangular regions) for this partition */ - const PartitionRegion get_part_region(); + const PartitionRegion& get_part_region() const; + + /** + * @brief Get the mutable PartitionRegion (union of rectangular regions) for this partition + */ + PartitionRegion& get_mutable_part_region(); private: std::string name; ///< name of the partition, name will be unique across partitions @@ -54,6 +59,6 @@ class Partition { }; ///@brief used to print data from a Partition -void print_partition(FILE* fp, Partition part); +void print_partition(FILE* fp, const Partition& part); #endif /* PARTITION_H */ diff 
--git a/vpr/src/base/partition_region.cpp b/vpr/src/base/partition_region.cpp index 4e08d58f79c..2676b6d1035 100644 --- a/vpr/src/base/partition_region.cpp +++ b/vpr/src/base/partition_region.cpp @@ -1,32 +1,34 @@ #include "partition_region.h" #include "region.h" +#include <utility> + void PartitionRegion::add_to_part_region(Region region) { - partition_region.push_back(region); + regions.push_back(region); } -std::vector<Region> PartitionRegion::get_partition_region() { - return partition_region; +const std::vector<Region>& PartitionRegion::get_regions() const { + return regions; } -std::vector<Region> PartitionRegion::get_partition_region() const { - return partition_region; +std::vector<Region>& PartitionRegion::get_mutable_regions() { + return regions; } void PartitionRegion::set_partition_region(std::vector<Region> pr) { - partition_region = pr; + regions = std::move(pr); } -bool PartitionRegion::empty() { - return partition_region.size() == 0; +bool PartitionRegion::empty() const { + return regions.empty(); } -bool PartitionRegion::is_loc_in_part_reg(t_pl_loc loc) { +bool PartitionRegion::is_loc_in_part_reg(const t_pl_loc& loc) const { bool is_in_pr = false; - for (unsigned int i = 0; i < partition_region.size(); i++) { - is_in_pr = partition_region[i].is_loc_in_reg(loc); - if (is_in_pr == true) { + for (const auto & region : regions) { + is_in_pr = region.is_loc_in_reg(loc); + if (is_in_pr) { break; } } @@ -41,12 +43,13 @@ PartitionRegion intersection(const PartitionRegion& cluster_pr, const PartitionR * Rectangles are not merged even if it would be possible */ PartitionRegion pr; + auto& pr_regions = pr.get_mutable_regions(); Region intersect_region; - for (unsigned int i = 0; i < cluster_pr.partition_region.size(); i++) { - for (unsigned int j = 0; j < new_pr.partition_region.size(); j++) { - intersect_region = intersection(cluster_pr.partition_region[i], new_pr.partition_region[j]); + for (const auto& cluster_region : cluster_pr.get_regions()) { + for (const auto& new_region : new_pr.get_regions()) { +
intersect_region = intersection(cluster_region, new_region); if (!intersect_region.empty()) { - pr.partition_region.push_back(intersect_region); + pr_regions.push_back(intersect_region); } } } @@ -55,11 +58,11 @@ PartitionRegion intersection(const PartitionRegion& cluster_pr, const PartitionR } void update_cluster_part_reg(PartitionRegion& cluster_pr, const PartitionRegion& new_pr) { - Region intersect_region; std::vector<Region> int_regions; - for (unsigned int i = 0; i < cluster_pr.partition_region.size(); i++) { - for (unsigned int j = 0; j < new_pr.partition_region.size(); j++) { - intersect_region = intersection(cluster_pr.partition_region[i], new_pr.partition_region[j]); + + for (const auto& cluster_region : cluster_pr.get_regions()) { + for (const auto& new_region : new_pr.get_regions()) { + Region intersect_region = intersection(cluster_region, new_region); if (!intersect_region.empty()) { int_regions.push_back(intersect_region); } @@ -68,14 +71,14 @@ void update_cluster_part_reg(PartitionRegion& cluster_pr, const PartitionRegion& cluster_pr.set_partition_region(int_regions); } -void print_partition_region(FILE* fp, PartitionRegion pr) { - std::vector<Region> part_region = pr.get_partition_region(); +void print_partition_region(FILE* fp, const PartitionRegion& pr) { + const std::vector<Region>& regions = pr.get_regions(); - int pr_size = part_region.size(); + int pr_size = regions.size(); fprintf(fp, "\tNumber of regions in partition is: %d\n", pr_size); - for (unsigned int i = 0; i < part_region.size(); i++) { - print_region(fp, part_region[i]); + for (const auto & region : regions) { + print_region(fp, region); } } diff --git a/vpr/src/base/partition_region.h b/vpr/src/base/partition_region.h index eb89399191c..2ea9796091b 100644 --- a/vpr/src/base/partition_region.h +++ b/vpr/src/base/partition_region.h @@ -25,8 +25,12 @@ class PartitionRegion { /** * @brief Return the union of regions */ - std::vector<Region> get_partition_region(); - std::vector<Region> get_partition_region() const; +
std::vector<Region>& get_mutable_regions(); + + /** + * @brief Return the union of regions + */ + const std::vector<Region>& get_regions() const; /** * @brief Set the union of regions */ @@ -36,7 +40,7 @@ class PartitionRegion { /** * @brief Check if the PartitionRegion is empty (meaning there is no constraint on the object the PartitionRegion belongs to) */ - bool empty(); + bool empty() const; /** * @brief Check if the given location is within the legal bounds of the PartitionRegion. * * @param loc The location to be checked */ - bool is_loc_in_part_reg(t_pl_loc loc); - - /** - * @brief Global friend function that returns the intersection of two PartitionRegions - * - * @param cluster_pr One of the PartitionRegions to be intersected - * @param new_pr One of the PartitionRegions to be intersected - */ - friend PartitionRegion intersection(const PartitionRegion& cluster_pr, const PartitionRegion& new_pr); - - /** - * @brief Global friend function that updates the PartitionRegion of a cluster with the intersection - * of the cluster PartitionRegion and a new PartitionRegion - * - * @param cluster_pr The cluster PartitionRegion that is to be updated - * @param new_pr The new PartitionRegion that the cluster PartitionRegion will be intersected with - */ - friend void update_cluster_part_reg(PartitionRegion& cluster_pr, const PartitionRegion& new_pr); + bool is_loc_in_part_reg(const t_pl_loc& loc) const; private: - std::vector<Region> partition_region; ///< union of rectangular regions that a partition can be placed in + std::vector<Region> regions; ///< union of rectangular regions that a partition can be placed in }; ///@brief used to print data from a PartitionRegion -void print_partition_region(FILE* fp, PartitionRegion pr); +void print_partition_region(FILE* fp, const PartitionRegion& pr); + +/** +* @brief Global friend function that returns the intersection of two PartitionRegions +* +* @param cluster_pr One of the PartitionRegions to be intersected +*
@param new_pr One of the PartitionRegions to be intersected +*/ +PartitionRegion intersection(const PartitionRegion& cluster_pr, const PartitionRegion& new_pr); + +/** +* @brief Global friend function that updates the PartitionRegion of a cluster with the intersection +* of the cluster PartitionRegion and a new PartitionRegion +* +* @param cluster_pr The cluster PartitionRegion that is to be updated +* @param new_pr The new PartitionRegion that the cluster PartitionRegion will be intersected with +*/ +void update_cluster_part_reg(PartitionRegion& cluster_pr, const PartitionRegion& new_pr); #endif /* PARTITION_REGIONS_H */ diff --git a/vpr/src/base/region.cpp b/vpr/src/base/region.cpp index 5c38f9ace86..e45266c723c 100644 --- a/vpr/src/base/region.cpp +++ b/vpr/src/base/region.cpp @@ -42,7 +42,7 @@ bool Region::empty() { || layer_num < 0); } -bool Region::is_loc_in_reg(t_pl_loc loc) { +bool Region::is_loc_in_reg(t_pl_loc loc) const { bool is_loc_in_reg = false; int loc_layer_num = loc.layer; @@ -149,7 +149,7 @@ Region intersection(const Region& r1, const Region& r2) { return intersect; } -void print_region(FILE* fp, Region region) { +void print_region(FILE* fp, const Region& region) { const auto region_coord = region.get_region_rect(); const auto region_rect = vtr::Rect(region_coord.xmin, region_coord.ymin, region_coord.xmax, region_coord.ymax); fprintf(fp, "\tRegion: \n"); diff --git a/vpr/src/base/region.h b/vpr/src/base/region.h index 7b1ceec6dda..dfdfd26d20c 100644 --- a/vpr/src/base/region.h +++ b/vpr/src/base/region.h @@ -43,8 +43,7 @@ struct RegionRectCoord { bool operator==(const RegionRectCoord& rhs) const { vtr::Rect lhs_rect(xmin, ymin, xmax, ymax); vtr::Rect rhs_rect(rhs.xmin, rhs.ymin, rhs.xmax, rhs.ymax); - return lhs_rect == rhs_rect - && layer_num == rhs.layer_num; + return (lhs_rect == rhs_rect) && (layer_num == rhs.layer_num); } }; @@ -105,7 +104,7 @@ class Region { * * @param loc The location to be checked */ - bool is_loc_in_reg(t_pl_loc loc); + 
bool is_loc_in_reg(t_pl_loc loc) const; bool operator==(const Region& reg) const { return (reg.get_region_rect() == this->get_region_rect() @@ -142,7 +141,7 @@ bool do_regions_intersect(Region r1, Region r2); Region intersection(const Region& r1, const Region& r2); ///@brief Used to print data from a Region -void print_region(FILE* fp, Region region); +void print_region(FILE* fp, const Region& region); namespace std { template<> diff --git a/vpr/src/base/vpr_constraints.cpp b/vpr/src/base/vpr_constraints.cpp index 95c7e7b7358..8580c9419ca 100644 --- a/vpr/src/base/vpr_constraints.cpp +++ b/vpr/src/base/vpr_constraints.cpp @@ -1,7 +1,7 @@ #include "vpr_constraints.h" #include "partition.h" -void VprConstraints::add_constrained_atom(const AtomBlockId blk_id, const PartitionId part_id) { +void VprConstraints::add_constrained_atom(AtomBlockId blk_id, PartitionId part_id) { auto got = constrained_atoms.find(blk_id); /** @@ -16,27 +16,29 @@ void VprConstraints::add_constrained_atom(const AtomBlockId blk_id, const Partit } } -PartitionId VprConstraints::get_atom_partition(AtomBlockId blk_id) { - PartitionId part_id; - +PartitionId VprConstraints::get_atom_partition(AtomBlockId blk_id) const { auto got = constrained_atoms.find(blk_id); if (got == constrained_atoms.end()) { - return part_id = PartitionId::INVALID(); ///< atom is not in a partition, i.e. unconstrained + return PartitionId::INVALID(); ///< atom is not in a partition, i.e. 
unconstrained } else { return got->second; } } -void VprConstraints::add_partition(Partition part) { +void VprConstraints::add_partition(const Partition& part) { partitions.push_back(part); } -Partition VprConstraints::get_partition(PartitionId part_id) { +const Partition& VprConstraints::get_partition(PartitionId part_id) const { + return partitions[part_id]; +} + +Partition& VprConstraints::get_mutable_partition(PartitionId part_id) { return partitions[part_id]; } -std::vector VprConstraints::get_part_atoms(PartitionId part_id) { +std::vector VprConstraints::get_part_atoms(PartitionId part_id) const { std::vector part_atoms; for (auto& it : constrained_atoms) { @@ -48,18 +50,19 @@ std::vector VprConstraints::get_part_atoms(PartitionId part_id) { return part_atoms; } -int VprConstraints::get_num_partitions() { +int VprConstraints::get_num_partitions() const { return partitions.size(); } -PartitionRegion VprConstraints::get_partition_pr(PartitionId part_id) { - PartitionRegion pr; - pr = partitions[part_id].get_part_region(); - return pr; +const PartitionRegion& VprConstraints::get_partition_pr(PartitionId part_id) const { + return partitions[part_id].get_part_region(); +} + +PartitionRegion& VprConstraints::get_mutable_partition_pr(PartitionId part_id) { + return partitions[part_id].get_mutable_part_region(); } -void print_constraints(FILE* fp, VprConstraints constraints) { - Partition temp_part; +void print_constraints(FILE* fp, const VprConstraints& constraints) { std::vector atoms; int num_parts = constraints.get_num_partitions(); @@ -69,7 +72,7 @@ void print_constraints(FILE* fp, VprConstraints constraints) { for (int i = 0; i < num_parts; i++) { PartitionId part_id(i); - temp_part = constraints.get_partition(part_id); + const Partition& temp_part = constraints.get_partition(part_id); fprintf(fp, "\npartition_id: %zu\n", size_t(part_id)); print_partition(fp, temp_part); @@ -80,8 +83,7 @@ void print_constraints(FILE* fp, VprConstraints constraints) { 
fprintf(fp, "\tAtom vector size is %d\n", atoms_size); fprintf(fp, "\tIds of atoms in partition: \n"); - for (unsigned int j = 0; j < atoms.size(); j++) { - AtomBlockId atom_id = atoms[j]; + for (auto atom_id : atoms) { fprintf(fp, "\t#%zu\n", size_t(atom_id)); } } diff --git a/vpr/src/base/vpr_constraints.h b/vpr/src/base/vpr_constraints.h index fd3f64842a4..9dd09f47b82 100644 --- a/vpr/src/base/vpr_constraints.h +++ b/vpr/src/base/vpr_constraints.h @@ -43,7 +43,7 @@ class VprConstraints { * @param blk_id The atom being stored * @param part_id The partition the atom is being constrained to */ - void add_constrained_atom(const AtomBlockId blk_id, const PartitionId part_id); + void add_constrained_atom(AtomBlockId blk_id, PartitionId part_id); /** * @brief Return id of the partition the atom belongs to @@ -52,40 +52,54 @@ class VprConstraints { * * @param blk_id The atom for which the partition id is needed */ - PartitionId get_atom_partition(AtomBlockId blk_id); + PartitionId get_atom_partition(AtomBlockId blk_id) const; /** * @brief Store a partition * * @param part The partition being stored */ - void add_partition(Partition part); + void add_partition(const Partition& part); /** * @brief Return a partition * * @param part_id The id of the partition that is wanted */ - Partition get_partition(PartitionId part_id); + const Partition& get_partition(PartitionId part_id) const; + + /** + * @brief Returns a mutable partition + * + * @param part_id The id of the partition that is wanted + */ + Partition& get_mutable_partition(PartitionId part_id); /** * @brief Return all the atoms that belong to a partition * * @param part_id The id of the partition whose atoms are needed */ - std::vector get_part_atoms(PartitionId part_id); + std::vector get_part_atoms(PartitionId part_id) const; /** * @brief Returns the number of partitions in the object */ - int get_num_partitions(); + int get_num_partitions() const; /** * @brief Returns the PartitionRegion belonging to the 
specified Partition * * @param part_id The id of the partition whose PartitionRegion is needed */ - PartitionRegion get_partition_pr(PartitionId part_id); + const PartitionRegion& get_partition_pr(PartitionId part_id) const; + + /** + * @brief Returns the mutable PartitionRegion belonging to the specified Partition + * + * @param part_id The id of the partition whose PartitionRegion is needed + */ + PartitionRegion& get_mutable_partition_pr(PartitionId part_id); private: /** @@ -100,6 +114,6 @@ class VprConstraints { }; ///@brief used to print floorplanning constraints data from a VprConstraints object -void print_constraints(FILE* fp, VprConstraints constraints); +void print_constraints(FILE* fp, const VprConstraints& constraints); #endif /* VPR_CONSTRAINTS_H */ diff --git a/vpr/src/base/vpr_constraints_serializer.h b/vpr/src/base/vpr_constraints_serializer.h index 5405eb0e21a..902d3977a80 100644 --- a/vpr/src/base/vpr_constraints_serializer.h +++ b/vpr/src/base/vpr_constraints_serializer.h @@ -222,8 +222,8 @@ class VprConstraintsSerializer final : public uxsd::VprConstraintsBase regions = pr.get_partition_region(); + const PartitionRegion& pr = part_info.part.get_part_region(); + const std::vector& regions = pr.get_regions(); return regions.size(); } virtual inline Region get_partition_add_region(int n, partition_info& part_info) final { - PartitionRegion pr = part_info.part.get_part_region(); - std::vector regions = pr.get_partition_region(); + const PartitionRegion& pr = part_info.part.get_part_region(); + const std::vector& regions = pr.get_regions(); return regions[n]; } diff --git a/vpr/src/base/vpr_constraints_writer.cpp b/vpr/src/base/vpr_constraints_writer.cpp index de8c91dedbb..056d0fd7151 100644 --- a/vpr/src/base/vpr_constraints_writer.cpp +++ b/vpr/src/base/vpr_constraints_writer.cpp @@ -196,7 +196,7 @@ void setup_vpr_floorplan_constraints_cutpoints(VprConstraints& constraints, int } int num_partitions = 0; - for (auto region : region_atoms) { + for 
(const auto& region : region_atoms) { Partition part; PartitionId partid(num_partitions); std::string part_name = "Part" + std::to_string(num_partitions); @@ -205,15 +205,15 @@ void setup_vpr_floorplan_constraints_cutpoints(VprConstraints& constraints, int {reg_coord.xmin, reg_coord.ymin, reg_coord.xmax, reg_coord.ymax, reg_coord.layer_num}); constraints.add_partition(part); - for (unsigned int k = 0; k < region.second.size(); k++) { - constraints.add_constrained_atom(region.second[k], partid); + for (auto blk_id : region.second) { + constraints.add_constrained_atom(blk_id, partid); } num_partitions++; } } -void create_partition(Partition& part, std::string part_name, const RegionRectCoord& region_cord) { +void create_partition(Partition& part, const std::string& part_name, const RegionRectCoord& region_cord) { part.set_name(part_name); PartitionRegion part_pr; Region part_region; diff --git a/vpr/src/base/vpr_constraints_writer.h b/vpr/src/base/vpr_constraints_writer.h index 955542be637..9db00bb4612 100644 --- a/vpr/src/base/vpr_constraints_writer.h +++ b/vpr/src/base/vpr_constraints_writer.h @@ -45,6 +45,6 @@ void setup_vpr_floorplan_constraints_one_loc(VprConstraints& constraints, int ex */ void setup_vpr_floorplan_constraints_cutpoints(VprConstraints& constraints, int horizontal_cutpoints, int vertical_cutpoints); -void create_partition(Partition& part, std::string part_name, const RegionRectCoord& region_cord); +void create_partition(Partition& part, const std::string& part_name, const RegionRectCoord& region_cord); #endif /* VPR_SRC_BASE_VPR_CONSTRAINTS_WRITER_H_ */ diff --git a/vpr/src/draw/draw_floorplanning.cpp b/vpr/src/draw/draw_floorplanning.cpp index 126bbd63212..8cb32442774 100644 --- a/vpr/src/draw/draw_floorplanning.cpp +++ b/vpr/src/draw/draw_floorplanning.cpp @@ -84,9 +84,9 @@ static void highlight_partition(ezgl::renderer* g, int partitionID, int alpha) { auto constraints = floorplanning_ctx.constraints; t_draw_coords* draw_coords = 
get_draw_coords_vars(); - auto partition = constraints.get_partition((PartitionId)partitionID); - auto& partition_region = partition.get_part_region(); - auto regions = partition_region.get_partition_region(); + const auto& partition = constraints.get_partition((PartitionId)partitionID); + const auto& partition_region = partition.get_part_region(); + const auto& regions = partition_region.get_regions(); bool name_drawn = false; ezgl::color partition_color = kelly_max_contrast_colors_no_black[partitionID % (kelly_max_contrast_colors_no_black.size())]; @@ -116,13 +116,13 @@ static void highlight_partition(ezgl::renderer* g, int partitionID, int alpha) { if (!name_drawn) { g->set_font_size(10); - std::string partition_name = partition.get_name(); + const std::string& partition_name = partition.get_name(); g->set_color(partition_color, 230); g->draw_text( on_screen_rect.center(), - partition_name.c_str(), + partition_name, on_screen_rect.width() - 10, on_screen_rect.height() - 10); @@ -165,12 +165,11 @@ void draw_constrained_atoms(ezgl::renderer* g) { for (int partitionID = 0; partitionID < num_partitions; partitionID++) { auto atoms = constraints.get_part_atoms((PartitionId)partitionID); - for (size_t j = 0; j < atoms.size(); j++) { - AtomBlockId const& const_atom = atoms[j]; - if (atom_ctx.lookup.atom_pb(const_atom) != nullptr) { - const t_pb* pb = atom_ctx.lookup.atom_pb(const_atom); + for (const auto atom_id : atoms) { + if (atom_ctx.lookup.atom_pb(atom_id) != nullptr) { + const t_pb* pb = atom_ctx.lookup.atom_pb(atom_id); auto color = kelly_max_contrast_colors_no_black[partitionID % (kelly_max_contrast_colors_no_black.size())]; - ClusterBlockId clb_index = atom_ctx.lookup.atom_clb(atoms[j]); + ClusterBlockId clb_index = atom_ctx.lookup.atom_clb(atom_id); auto type = cluster_ctx.clb_nlist.block_type(clb_index); draw_internal_pb(clb_index, cluster_ctx.clb_nlist.block_pb(clb_index), pb, ezgl::rectangle({0, 0}, 0, 0), type, color, g); @@ -232,7 +231,7 @@ static void 
draw_internal_pb(const ClusterBlockId clb_index, t_pb* current_pb, c g->draw_text( abs_bbox.center(), - blk_tag.c_str(), + blk_tag, abs_bbox.width() + 10, abs_bbox.height() + 10); @@ -307,7 +306,7 @@ static GtkTreeModel* create_and_fill_model(void) { for (int partitionID = 0; partitionID < num_partitions; partitionID++) { auto atoms = constraints.get_part_atoms((PartitionId)partitionID); - auto partition = constraints.get_partition((PartitionId)partitionID); + const auto& partition = constraints.get_partition((PartitionId)partitionID); std::string partition_name(partition.get_name() + " (" + std::to_string(atoms.size()) + " primitives)"); @@ -318,8 +317,7 @@ static GtkTreeModel* create_and_fill_model(void) { COL_NAME, partition_name.c_str(), -1); - for (size_t j = 0; j < atoms.size(); j++) { - AtomBlockId const& const_atom = atoms[j]; + for (auto const_atom : atoms) { std::string atom_name = (atom_ctx.lookup.atom_pb(const_atom))->name; gtk_tree_store_append(store, &child_iter, &iter); gtk_tree_store_set(store, &child_iter, diff --git a/vpr/src/pack/attraction_groups.cpp b/vpr/src/pack/attraction_groups.cpp index 2c70d9d11cd..e4bd17620e4 100644 --- a/vpr/src/pack/attraction_groups.cpp +++ b/vpr/src/pack/attraction_groups.cpp @@ -64,8 +64,8 @@ void AttractionInfo::create_att_groups_for_overfull_regions() { for (int ipart = 0; ipart < num_parts; ipart++) { PartitionId partid(ipart); - Partition part = floorplanning_ctx.constraints.get_partition(partid); - auto& pr_regions = part.get_part_region(); + const Partition& part = floorplanning_ctx.constraints.get_partition(partid); + const auto& pr_regions = part.get_part_region(); PartitionRegion intersect_pr; diff --git a/vpr/src/pack/cluster_util.cpp b/vpr/src/pack/cluster_util.cpp index c1170afba63..48fe7a9dd71 100644 --- a/vpr/src/pack/cluster_util.cpp +++ b/vpr/src/pack/cluster_util.cpp @@ -94,11 +94,11 @@ static void echo_clusters(char* filename) { auto& floorplanning_ctx = g_vpr_ctx.mutable_floorplanning(); for 
(ClusterBlockId clb_id : cluster_ctx.clb_nlist.blocks()) {
-        std::vector<Region> reg = floorplanning_ctx.cluster_constraints[clb_id].get_partition_region();
-        if (reg.size() != 0) {
+        const std::vector<Region>& regions = floorplanning_ctx.cluster_constraints[clb_id].get_regions();
+        if (!regions.empty()) {
             fprintf(fp, "\nRegions in Cluster %zu:\n", size_t(clb_id));
-            for (unsigned int i = 0; i < reg.size(); i++) {
-                print_region(fp, reg[i]);
+            for (const auto & region : regions) {
+                print_region(fp, region);
             }
         }
     }
diff --git a/vpr/src/pack/constraints_report.cpp b/vpr/src/pack/constraints_report.cpp
index f75823aefab..2c58ef341a4 100644
--- a/vpr/src/pack/constraints_report.cpp
+++ b/vpr/src/pack/constraints_report.cpp
@@ -18,12 +18,10 @@ bool floorplan_constraints_regions_overfull() {
         }
 
         t_logical_block_type_ptr bt = cluster_ctx.clb_nlist.block_type(blk_id);
-        PartitionRegion pr = floorplanning_ctx.cluster_constraints[blk_id];
-        std::vector<Region> regions = pr.get_partition_region();
-
-        for (unsigned int i_reg = 0; i_reg < regions.size(); i_reg++) {
-            Region current_reg = regions[i_reg];
+        const PartitionRegion& pr = floorplanning_ctx.cluster_constraints[blk_id];
+        const std::vector<Region>& regions = pr.get_regions();
 
+        for (const auto& current_reg : regions) {
             auto got = regions_count_info.find(current_reg);
 
             if (got == regions_count_info.end()) {
diff --git a/vpr/src/place/initial_placement.cpp b/vpr/src/place/initial_placement.cpp
index 7e67f169ef2..b9b97b1998f 100644
--- a/vpr/src/place/initial_placement.cpp
+++ b/vpr/src/place/initial_placement.cpp
@@ -236,7 +236,7 @@ static bool is_loc_legal(t_pl_loc& loc, PartitionRegion& pr, t_logical_block_typ
     bool legal = false;
 
     //Check if the location is within its constraint region
-    for (auto reg : pr.get_partition_region()) {
+    for (const auto& reg : pr.get_regions()) {
         const auto reg_coord = reg.get_region_rect();
         vtr::Rect<int> reg_rect(reg_coord.xmin, reg_coord.ymin, reg_coord.xmax, reg_coord.ymax);
         if (reg_coord.layer_num != loc.layer) continue;
@@ -580,7 +580,7 @@ bool try_place_macro_randomly(const t_pl_macro& pl_macro, const PartitionRegion&
 
     //If the block has more than one floorplan region, pick a random region to get the min/max x and y values
     int region_index;
-    std::vector<Region> regions = pr.get_partition_region();
+    const std::vector<Region>& regions = pr.get_regions();
     if (regions.size() > 1) {
         region_index = vtr::irand(regions.size() - 1);
     } else {
@@ -637,7 +637,7 @@ bool try_place_macro_exhaustively(const t_pl_macro& pl_macro, const PartitionReg
     const auto& compressed_block_grid = g_vpr_ctx.placement().compressed_block_grids[block_type->index];
     auto& place_ctx = g_vpr_ctx.mutable_placement();
 
-    std::vector<Region> regions = pr.get_partition_region();
+    const std::vector<Region>& regions = pr.get_regions();
 
     bool placed = false;
diff --git a/vpr/src/place/move_utils.cpp b/vpr/src/place/move_utils.cpp
index 2c62d6ec371..b7692581a61 100644
--- a/vpr/src/place/move_utils.cpp
+++ b/vpr/src/place/move_utils.cpp
@@ -1276,13 +1276,10 @@ bool intersect_range_limit_with_floorplan_constraints(t_logical_block_type_ptr t
                                  max_grid_loc.y,
                                  layer_num});
 
-    auto& floorplanning_ctx = g_vpr_ctx.floorplanning();
+    const auto& floorplanning_ctx = g_vpr_ctx.floorplanning();
 
-    PartitionRegion pr = floorplanning_ctx.cluster_constraints[b_from];
-    std::vector<Region> regions;
-    if (!pr.empty()) {
-        regions = pr.get_partition_region();
-    }
+    const PartitionRegion& pr = floorplanning_ctx.cluster_constraints[b_from];
+    const std::vector<Region>& regions = pr.get_regions();
 
     Region intersect_reg;
     /*
      * If region size is greater than 1, the block is constrained to more than one rectangular region.
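The pattern these hunks apply throughout — returning the region list by `const` reference instead of by value, testing emptiness with `empty()`, and iterating with a range-based `for` — can be sketched in isolation. The `Region` and `PartitionRegion` classes below are simplified stand-ins for illustration, not the real VTR types:

```cpp
#include <cassert>
#include <vector>

// Minimal stand-in for VTR's Region: an axis-aligned rectangle.
struct Region {
    int xmin, ymin, xmax, ymax;
};

class PartitionRegion {
  public:
    void add_to_part_region(const Region& r) { regions_.push_back(r); }

    // Before the refactor this kind of getter returned std::vector<Region>
    // by value, copying every Region on each call. Returning a const
    // reference avoids the copy while keeping callers read-only.
    const std::vector<Region>& get_regions() const { return regions_; }

  private:
    std::vector<Region> regions_;
};

// Range-based loop over the const reference replaces the old
// index-based "for (unsigned int i = 0; i < regions.size(); i++)" form.
int total_area(const PartitionRegion& pr) {
    int area = 0;
    for (const Region& r : pr.get_regions()) {
        area += (r.xmax - r.xmin) * (r.ymax - r.ymin);
    }
    return area;
}
```

Note the caller binds the returned reference as `const std::vector<Region>&` (as the hunks do); assigning it to a plain `std::vector<Region>` would silently reintroduce the copy the refactor removes.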
diff --git a/vpr/src/place/place_constraints.cpp b/vpr/src/place/place_constraints.cpp
index f1c5045251b..e1b153d9d71 100644
--- a/vpr/src/place/place_constraints.cpp
+++ b/vpr/src/place/place_constraints.cpp
@@ -42,8 +42,8 @@ bool is_macro_constrained(const t_pl_macro& pl_macro) {
     bool is_macro_constrained = false;
     bool is_member_constrained = false;
 
-    for (size_t imember = 0; imember < pl_macro.members.size(); imember++) {
-        ClusterBlockId iblk = pl_macro.members[imember].blk_index;
+    for (const auto & member : pl_macro.members) {
+        ClusterBlockId iblk = member.blk_index;
         is_member_constrained = is_cluster_constrained(iblk);
 
         if (is_member_constrained) {
@@ -62,25 +62,25 @@ PartitionRegion update_macro_head_pr(const t_pl_macro& pl_macro, const Partition
     int num_constrained_members = 0;
     auto& floorplanning_ctx = g_vpr_ctx.floorplanning();
 
-    for (size_t imember = 0; imember < pl_macro.members.size(); imember++) {
-        ClusterBlockId iblk = pl_macro.members[imember].blk_index;
+    for (const auto & member : pl_macro.members) {
+        ClusterBlockId iblk = member.blk_index;
         is_member_constrained = is_cluster_constrained(iblk);
 
         if (is_member_constrained) {
             num_constrained_members++;
-            //PartitionRegion of the constrained block
-            PartitionRegion block_pr;
+            //PartitionRegion of the constrained block modified for the head according to the offset
             PartitionRegion modified_pr;
-            block_pr = floorplanning_ctx.cluster_constraints[iblk];
-            std::vector<Region> block_regions = block_pr.get_partition_region();
 
+            //PartitionRegion of the constrained block
+            const PartitionRegion& block_pr = floorplanning_ctx.cluster_constraints[iblk];
+            const std::vector<Region>& block_regions = block_pr.get_regions();
 
-            for (unsigned int i = 0; i < block_regions.size(); i++) {
+            for (const auto & block_region : block_regions) {
                 Region modified_reg;
-                auto offset = pl_macro.members[imember].offset;
+                auto offset = member.offset;
 
-                const auto block_reg_coord = block_regions[i].get_region_rect();
+                const auto block_reg_coord = block_region.get_region_rect();
 
                 modified_reg.set_region_rect({block_reg_coord.xmin - offset.x,
                                               block_reg_coord.ymin - offset.y,
@@ -89,8 +89,8 @@ PartitionRegion update_macro_head_pr(const t_pl_macro& pl_macro, const Partition
                                               block_reg_coord.layer_num});
 
                 //check that subtile is not an invalid value before changing, otherwise it just stays -1
-                if (block_regions[i].get_sub_tile() != NO_SUBTILE) {
-                    modified_reg.set_sub_tile(block_regions[i].get_sub_tile() - offset.sub_tile);
+                if (block_region.get_sub_tile() != NO_SUBTILE) {
+                    modified_reg.set_sub_tile(block_region.get_sub_tile() - offset.sub_tile);
                 }
 
                 modified_pr.add_to_part_region(modified_reg);
@@ -116,13 +116,13 @@ PartitionRegion update_macro_head_pr(const t_pl_macro& pl_macro, const Partition
 }
 
 PartitionRegion update_macro_member_pr(PartitionRegion& head_pr, const t_pl_offset& offset, const PartitionRegion& grid_pr, const t_pl_macro& pl_macro) {
-    std::vector<Region> block_regions = head_pr.get_partition_region();
+    const std::vector<Region>& block_regions = head_pr.get_regions();
 
     PartitionRegion macro_pr;
 
-    for (unsigned int i = 0; i < block_regions.size(); i++) {
+    for (const auto & block_region : block_regions) {
         Region modified_reg;
 
-        const auto block_reg_coord = block_regions[i].get_region_rect();
+        const auto block_reg_coord = block_region.get_region_rect();
 
         modified_reg.set_region_rect({block_reg_coord.xmin + offset.x,
                                       block_reg_coord.ymin + offset.y,
@@ -131,8 +131,8 @@ PartitionRegion update_macro_member_pr(PartitionRegion& head_pr, const t_pl_offs
                                       block_reg_coord.layer_num});
 
         //check that subtile is not an invalid value before changing, otherwise it just stays -1
-        if (block_regions[i].get_sub_tile() != NO_SUBTILE) {
-            modified_reg.set_sub_tile(block_regions[i].get_sub_tile() + offset.sub_tile);
+        if (block_region.get_sub_tile() != NO_SUBTILE) {
+            modified_reg.set_sub_tile(block_region.get_sub_tile() + offset.sub_tile);
         }
 
         macro_pr.add_to_part_region(modified_reg);
@@ -154,9 +154,9 @@ void print_macro_constraint_error(const t_pl_macro& pl_macro) {
     VTR_LOG(
         "Feasible floorplanning constraints could not be calculated for the placement macro. \n"
         "The placement macro contains the following blocks: \n");
-    for (unsigned int i = 0; i < pl_macro.members.size(); i++) {
-        std::string blk_name = cluster_ctx.clb_nlist.block_name((pl_macro.members[i].blk_index));
-        VTR_LOG("Block %s (#%zu) ", blk_name.c_str(), size_t(pl_macro.members[i].blk_index));
+    for (const auto & member : pl_macro.members) {
+        std::string blk_name = cluster_ctx.clb_nlist.block_name((member.blk_index));
+        VTR_LOG("Block %s (#%zu) ", blk_name.c_str(), size_t(member.blk_index));
     }
     VTR_LOG("\n");
     VPR_ERROR(VPR_ERROR_PLACE, " \n Check that the above-mentioned placement macro blocks have compatible floorplan constraints.\n");
@@ -380,7 +380,7 @@ int region_tile_cover(const Region& reg, t_logical_block_type_ptr block_type, t_
  */
 bool is_pr_size_one(PartitionRegion& pr, t_logical_block_type_ptr block_type, t_pl_loc& loc) {
     auto& device_ctx = g_vpr_ctx.device();
-    std::vector<Region> regions = pr.get_partition_region();
+    const std::vector<Region>& regions = pr.get_regions();
     bool pr_size_one;
     int pr_size = 0;
     int reg_size;
@@ -439,11 +439,11 @@ bool is_pr_size_one(PartitionRegion& pr, t_logical_block_type_ptr block_type, t_
 }
 
 int get_part_reg_size(PartitionRegion& pr, t_logical_block_type_ptr block_type, GridTileLookup& grid_tiles) {
-    std::vector<Region> part_reg = pr.get_partition_region();
+    const std::vector<Region>& regions = pr.get_regions();
     int num_tiles = 0;
 
-    for (unsigned int i_reg = 0; i_reg < part_reg.size(); i_reg++) {
-        num_tiles += grid_tiles.region_tile_count(part_reg[i_reg], block_type);
+    for (const auto & region : regions) {
+        num_tiles += grid_tiles.region_tile_count(region, block_type);
     }
 
     return num_tiles;
diff --git a/vpr/test/test_vpr_constraints.cpp b/vpr/test/test_vpr_constraints.cpp
index f9a5d7e5bd4..f0fb486d76a 100644
--- a/vpr/test/test_vpr_constraints.cpp
+++ b/vpr/test/test_vpr_constraints.cpp
@@ -53,7 +53,7 @@ TEST_CASE("PartitionRegion", "[vpr]") {
 
     pr1.add_to_part_region(r1);
 
-    std::vector<Region> pr_regions = pr1.get_partition_region();
+    const std::vector<Region>& pr_regions = pr1.get_regions();
     REQUIRE(pr_regions[0].get_sub_tile() == 3);
 
     const auto pr_reg_coord = pr_regions[0].get_region_rect();
@@ -80,8 +80,8 @@ TEST_CASE("Partition", "[vpr]") {
     part_reg.add_to_part_region(r1);
     part.set_part_region(part_reg);
 
-    PartitionRegion part_reg_2 = part.get_part_region();
-    std::vector<Region> regions = part_reg_2.get_partition_region();
+    const PartitionRegion& part_reg_2 = part.get_part_region();
+    const std::vector<Region>& regions = part_reg_2.get_regions();
 
     REQUIRE(regions[0].get_sub_tile() == 3);
 
@@ -121,8 +121,7 @@ TEST_CASE("VprConstraints", "[vpr]") {
 
     vprcon.add_partition(part);
 
-    Partition got_part;
-    got_part = vprcon.get_partition(part_id);
+    const Partition& got_part = vprcon.get_partition(part_id);
     REQUIRE(got_part.get_name() == "part_name");
 
     std::vector<AtomBlockId> partition_atoms;
@@ -235,7 +234,7 @@ TEST_CASE("PartRegionIntersect", "[vpr]") {
     PartitionRegion int_pr;
 
     int_pr = intersection(pr1, pr2);
-    std::vector<Region> regions = int_pr.get_partition_region();
+    const std::vector<Region>& regions = int_pr.get_regions();
 
     vtr::Rect<int> int_rect(0, 0, 1, 1);
     vtr::Rect<int> int_rect_2(1, 1, 2, 2);
@@ -268,7 +267,7 @@ TEST_CASE("PartRegionIntersect2", "[vpr]") {
     PartitionRegion int_pr;
 
     int_pr = intersection(pr1, pr2);
-    std::vector<Region> regions = int_pr.get_partition_region();
+    const std::vector<Region>& regions = int_pr.get_regions();
 
     vtr::Rect<int> int_rect(0, 0, 2, 2);
     REQUIRE(regions.size() == 1);
     const auto first_reg_coord = regions[0].get_region_rect();
@@ -304,9 +303,9 @@ TEST_CASE("PartRegionIntersect3", "[vpr]") {
     PartitionRegion int_pr;
 
     int_pr = intersection(pr1, pr2);
-    std::vector<Region> regions = int_pr.get_partition_region();
+    const std::vector<Region>& regions = int_pr.get_regions();
 
-    REQUIRE(regions.size() == 0);
+    REQUIRE(regions.empty());
 }
 
 //2x2 regions, 1 overlap
@@ -337,7 +336,7 @@ TEST_CASE("PartRegionIntersect4", "[vpr]") {
     PartitionRegion int_pr;
 
     int_pr = intersection(pr1, pr2);
-    std::vector<Region> regions = int_pr.get_partition_region();
+    const std::vector<Region>& regions = int_pr.get_regions();
 
     vtr::Rect<int> intersect(1, 2, 3, 4);
 
@@ -374,7 +373,7 @@ TEST_CASE("PartRegionIntersect5", "[vpr]") {
     PartitionRegion int_pr;
 
     int_pr = intersection(pr1, pr2);
-    std::vector<Region> regions = int_pr.get_partition_region();
+    const std::vector<Region>& regions = int_pr.get_regions();
 
     vtr::Rect<int> int_r1r3(2, 6, 4, 7);
     vtr::Rect<int> int_r2r4(6, 4, 8, 5);
@@ -415,7 +414,7 @@ TEST_CASE("PartRegionIntersect6", "[vpr]") {
     PartitionRegion int_pr;
 
     int_pr = intersection(pr1, pr2);
-    std::vector<Region> regions = int_pr.get_partition_region();
+    const std::vector<Region>& regions = int_pr.get_regions();
 
     vtr::Rect<int> int_r1r3(2, 3, 4, 4);
     vtr::Rect<int> int_r1r4(2, 6, 4, 7);
@@ -455,7 +454,7 @@ TEST_CASE("MacroConstraints", "[vpr]") {
 
     PartitionRegion macro_pr = update_macro_member_pr(head_pr, offset, grid_pr, pl_macro);
 
-    std::vector<Region> mac_regions = macro_pr.get_partition_region();
+    const std::vector<Region>& mac_regions = macro_pr.get_regions();
 
     const auto mac_first_reg_coord = mac_regions[0].get_region_rect();

From cbe5665e46ae094e633e490a5e7b92038844f3ec Mon Sep 17 00:00:00 2001
From: soheilshahrouz
Date: Thu, 15 Feb 2024 19:55:12 -0500
Subject: [PATCH 006/257] pass by ref and range based loops

---
 libs/libvtrutil/src/vtr_util.cpp | 10 ++---
 libs/libvtrutil/src/vtr_util.h | 8 ++--
 vpr/src/base/SetupGrid.cpp | 19 ++-----
 vpr/src/base/SetupGrid.h | 9 +++--
 vpr/src/base/atom_netlist.cpp | 10 ++---
 vpr/src/base/atom_netlist.h | 10 ++---
 vpr/src/base/constraints_load.cpp | 2 +-
 vpr/src/base/constraints_load.h | 2 +-
 vpr/src/base/setup_noc.cpp | 18 ++-----
 vpr/src/base/setup_noc.h | 2 +-
 vpr/src/base/vpr_constraints.cpp | 2 +-
 vpr/src/base/vpr_constraints_reader.cpp | 2 +-
 vpr/src/base/vpr_constraints_writer.cpp | 5 +--
 vpr/src/base/vpr_constraints_writer.h | 14 ++++++-
 vpr/src/noc/noc_traffic_flows.cpp | 16 +++++++-
 vpr/src/noc/noc_traffic_flows.h | 8 +++-
 .../noc/read_xml_noc_traffic_flows_file.cpp | 39 +++++++++++++------
 vpr/src/noc/read_xml_noc_traffic_flows_file.h | 12 ++++-
 vpr/src/pack/attraction_groups.cpp | 16 ++++----
 vpr/src/pack/attraction_groups.h | 4 +-
 vpr/src/pack/cluster_util.cpp | 32 +++++++--------
 vpr/src/pack/cluster_util.h | 4 +-
 vpr/src/pack/pack.cpp | 12 ++++-
 vpr/src/place/move_utils.cpp | 2 +-
 24 files changed, 156 insertions(+), 102 deletions(-)

diff --git a/libs/libvtrutil/src/vtr_util.cpp b/libs/libvtrutil/src/vtr_util.cpp
index 2a7a247bde1..e73cee1d1a4 100644
--- a/libs/libvtrutil/src/vtr_util.cpp
+++ b/libs/libvtrutil/src/vtr_util.cpp
@@ -26,7 +26,7 @@ static int cont; /* line continued? (used by strtok)*/
  *
  * The split strings (excluding the delimiters) are returned
  */
-std::vector<std::string> split(const char* text, const std::string delims) {
+std::vector<std::string> split(const char* text, const std::string& delims) {
     if (text) {
         std::string text_str(text);
         return split(text_str, delims);
@@ -39,7 +39,7 @@ std::vector<std::string> split(const char* text, const std::string delims) {
  *
  * The split strings (excluding the delimiters) are returned
  */
-std::vector<std::string> split(const std::string& text, const std::string delims) {
+std::vector<std::string> split(const std::string& text, const std::string& delims) {
     std::vector<std::string> tokens;
 
     std::string curr_tok;
@@ -102,7 +102,7 @@ std::string replace_all(const std::string& input, const std::string& search, con
 }
 
 ///@brief Retruns true if str starts with prefix
-bool starts_with(std::string str, std::string prefix) {
+bool starts_with(const std::string& str, const std::string& prefix) {
     return str.find(prefix) == 0;
 }
@@ -461,8 +461,8 @@ bool file_exists(const char* filename) {
  *
  * Returns true if the extension is correct, and false otherwise.
  */
-bool check_file_name_extension(std::string file_name,
-                               std::string file_extension) {
+bool check_file_name_extension(const std::string& file_name,
+                               const std::string& file_extension) {
     auto ext = std::filesystem::path(file_name).extension();
 
     return ext == file_extension;
 }
diff --git a/libs/libvtrutil/src/vtr_util.h b/libs/libvtrutil/src/vtr_util.h
index edcb7ba8598..114de793751 100644
--- a/libs/libvtrutil/src/vtr_util.h
+++ b/libs/libvtrutil/src/vtr_util.h
@@ -14,8 +14,8 @@ namespace vtr {
  *
 * The split strings (excluding the delimiters) are returned
  */
-std::vector<std::string> split(const char* text, const std::string delims = " \t\n");
-std::vector<std::string> split(const std::string& text, const std::string delims = " \t\n");
+std::vector<std::string> split(const char* text, const std::string& delims = " \t\n");
+std::vector<std::string> split(const std::string& text, const std::string& delims = " \t\n");
 
 ///@brief Returns 'input' with the first instance of 'search' replaced with 'replace'
 std::string replace_first(const std::string& input, const std::string& search, const std::string& replace);
@@ -24,7 +24,7 @@ std::string replace_first(const std::string& input, const std::string& search, c
 std::string replace_all(const std::string& input, const std::string& search, const std::string& replace);
 
 ///@brief Retruns true if str starts with prefix
-bool starts_with(std::string str, std::string prefix);
+bool starts_with(const std::string& str, const std::string& prefix);
 
 ///@brief Returns a std::string formatted using a printf-style format string
 std::string string_fmt(const char* fmt, ...);
@@ -69,7 +69,7 @@ double atod(const std::string& value);
 */
 int get_file_line_number_of_last_opened_file();
 bool file_exists(const char* filename);
-bool check_file_name_extension(std::string file_name, std::string file_extension);
+bool check_file_name_extension(const std::string& file_name, const std::string& file_extension);
 
 extern std::string out_file_prefix;
 
diff --git a/vpr/src/base/SetupGrid.cpp b/vpr/src/base/SetupGrid.cpp
index 3569f5bff1f..626d3027259 100644
--- a/vpr/src/base/SetupGrid.cpp
+++ b/vpr/src/base/SetupGrid.cpp
@@ -31,8 +31,8 @@ using vtr::t_formula_data;
 
 static DeviceGrid auto_size_device_grid(const std::vector<t_grid_def>& grid_layouts, const std::map<t_logical_block_type_ptr, size_t>& minimum_instance_counts, float maximum_device_utilization);
 static std::vector<t_logical_block_type_ptr> grid_overused_resources(const DeviceGrid& grid, std::map<t_logical_block_type_ptr, size_t> instance_counts);
-static bool grid_satisfies_instance_counts(const DeviceGrid& grid, std::map<t_logical_block_type_ptr, size_t> instance_counts, float maximum_utilization);
-static DeviceGrid build_device_grid(const t_grid_def& grid_def, size_t width, size_t height, bool warn_out_of_range = true, std::vector<t_logical_block_type_ptr> limiting_resources = std::vector<t_logical_block_type_ptr>());
+static bool grid_satisfies_instance_counts(const DeviceGrid& grid, const std::map<t_logical_block_type_ptr, size_t>& instance_counts, float maximum_utilization);
+static DeviceGrid build_device_grid(const t_grid_def& grid_def, size_t width, size_t height, bool warn_out_of_range = true, const std::vector<t_logical_block_type_ptr>& limiting_resources = std::vector<t_logical_block_type_ptr>());
 
 static void CheckGrid(const DeviceGrid& grid);
@@ -46,7 +46,7 @@ static void set_grid_block_type(int priority,
                                 const t_metadata_dict* meta);
 
 ///@brief Create the device grid based on resource requirements
-DeviceGrid create_device_grid(std::string layout_name, const std::vector<t_grid_def>& grid_layouts, const std::map<t_logical_block_type_ptr, size_t>& minimum_instance_counts, float target_device_utilization) {
+DeviceGrid create_device_grid(const std::string& layout_name, const std::vector<t_grid_def>& grid_layouts, const std::map<t_logical_block_type_ptr, size_t>& minimum_instance_counts, float target_device_utilization) {
     if (layout_name == "auto") {
         //Auto-size the device
         //
@@ -78,9 +78,9 @@ DeviceGrid create_device_grid(std::string layout_name, const std::vector<t_grid_
-DeviceGrid create_device_grid(std::string layout_name, const std::vector<t_grid_def>& grid_layouts, size_t width, size_t height) {
+DeviceGrid create_device_grid(const std::string& layout_name, const std::vector<t_grid_def>& grid_layouts, size_t width, size_t height) {
     if (layout_name == "auto") {
-        VTR_ASSERT(grid_layouts.size() > 0);
+        VTR_ASSERT(!grid_layouts.empty());
         //Auto-size
         if (grid_layouts[0].grid_type == GridDefType::AUTO) {
             //Auto layout of the specified dimensions
@@ -145,7 +145,7 @@ DeviceGrid create_device_grid(std::string layout_name, const std::vector<t_grid_
 static DeviceGrid auto_size_device_grid(const std::vector<t_grid_def>& grid_layouts, const std::map<t_logical_block_type_ptr, size_t>& minimum_instance_counts, float maximum_device_utilization) {
-    VTR_ASSERT(grid_layouts.size() > 0);
+    VTR_ASSERT(!grid_layouts.empty());
 
     DeviceGrid grid;
@@ -281,6 +281,7 @@ static std::vector<t_logical_block_type_ptr> grid_overused_resources(const Devic
 
     //Sort so we allocate logical blocks with the fewest equivalent sites first (least flexible)
     std::vector<t_logical_block_type_ptr> logical_block_types;
+    logical_block_types.reserve(device_ctx.logical_block_types.size());
     for (auto& block_type : device_ctx.logical_block_types) {
         logical_block_types.push_back(&block_type);
     }
@@ -316,7 +317,7 @@ static std::vector<t_logical_block_type_ptr> grid_overused_resources(const Devic
     return overused_resources;
 }
 
-static bool grid_satisfies_instance_counts(const DeviceGrid& grid, std::map<t_logical_block_type_ptr, size_t> instance_counts, float maximum_utilization) {
+static bool grid_satisfies_instance_counts(const DeviceGrid& grid, const std::map<t_logical_block_type_ptr, size_t>& instance_counts, float maximum_utilization) {
     //Are the resources satisified?
     auto overused_resources = grid_overused_resources(grid, instance_counts);
@@ -335,7 +336,7 @@ static bool grid_satisfies_instance_counts(const DeviceGrid& grid, std::map<t_lo
-static DeviceGrid build_device_grid(const t_grid_def& grid_def, size_t grid_width, size_t grid_height, bool warn_out_of_range, std::vector<t_logical_block_type_ptr> limiting_resources) {
+static DeviceGrid build_device_grid(const t_grid_def& grid_def, size_t grid_width, size_t grid_height, bool warn_out_of_range, const std::vector<t_logical_block_type_ptr>& limiting_resources) {
     if (grid_def.grid_type == GridDefType::FIXED) {
         if (grid_def.width != int(grid_width) || grid_def.height != int(grid_height)) {
             VPR_FATAL_ERROR(VPR_ERROR_OTHER,
@@ -754,7 +755,7 @@ static void CheckGrid(const DeviceGrid& grid) {
     }
 }
 
-float calculate_device_utilization(const DeviceGrid& grid, std::map<t_logical_block_type_ptr, size_t> instance_counts) {
+float calculate_device_utilization(const DeviceGrid& grid, const std::map<t_logical_block_type_ptr, size_t>& instance_counts) {
     //Record the resources of the grid
     std::map<t_logical_block_type_ptr, size_t> grid_resources;
     for (int layer_num = 0; layer_num < grid.get_num_layers(); ++layer_num) {
diff --git a/vpr/src/base/SetupGrid.h b/vpr/src/base/SetupGrid.h
index 4dd80c28539..977ce2f51e2 100644
--- a/vpr/src/base/SetupGrid.h
+++ b/vpr/src/base/SetupGrid.h
@@ -13,13 +13,16 @@
 #include "physical_types.h"
 
 ///@brief Find the device satisfying the specified minimum resources
-DeviceGrid create_device_grid(std::string layout_name,
+DeviceGrid create_device_grid(const std::string& layout_name,
                               const std::vector<t_grid_def>& grid_layouts,
                               const std::map<t_logical_block_type_ptr, size_t>& minimum_instance_counts,
                               float target_device_utilization);
 
 ///@brief Find the device close in size to the specified dimensions
-DeviceGrid create_device_grid(std::string layout_name, const std::vector<t_grid_def>& grid_layouts, size_t min_width, size_t min_height);
+DeviceGrid create_device_grid(const std::string& layout_name,
+                              const std::vector<t_grid_def>& grid_layouts,
+                              size_t min_width,
+                              size_t min_height);
 
 /**
  * @brief Calculate the device utilization
@@ -27,7 +30,7 @@ DeviceGrid create_device_grid(std::string layout_name, const std::vector<t_grid_
 */
-float calculate_device_utilization(const DeviceGrid& grid, std::map<t_logical_block_type_ptr, size_t> instance_counts);
+float calculate_device_utilization(const DeviceGrid& grid, const std::map<t_logical_block_type_ptr, size_t>& instance_counts);
 
 /**
  * @brief Returns the effective size of the device
diff --git a/vpr/src/base/atom_netlist.cpp b/vpr/src/base/atom_netlist.cpp
index 39af4d23e1c..1cbd2232f1f 100644
--- a/vpr/src/base/atom_netlist.cpp
+++ b/vpr/src/base/atom_netlist.cpp
@@ -115,7 +115,7 @@ AtomBlockId AtomNetlist::find_atom_pin_driver(const AtomBlockId blk_id, const t_
     return AtomBlockId::INVALID();
 }
 
-std::unordered_set<std::string> AtomNetlist::net_aliases(const std::string net_name) const {
+std::unordered_set<std::string> AtomNetlist::net_aliases(const std::string& net_name) const {
     auto net_id = find_net(net_name);
     VTR_ASSERT(net_id != AtomNetId::INVALID());
@@ -137,7 +137,7 @@ std::unordered_set<std::string> AtomNetlist::net_aliases(const std::string net_n
 * Mutators
 */
-AtomBlockId AtomNetlist::create_block(const std::string name, const t_model* model, const TruthTable truth_table) {
+AtomBlockId AtomNetlist::create_block(const std::string& name, const t_model* model, const TruthTable& truth_table) {
     AtomBlockId blk_id = Netlist::create_block(name);
 
     //Initialize the data
@@ -205,7 +205,7 @@ AtomPinId AtomNetlist::create_pin(const AtomPortId port_id, BitIndex port_bit, c
     return pin_id;
 }
 
-AtomNetId AtomNetlist::create_net(const std::string name) {
+AtomNetId AtomNetlist::create_net(const std::string& name) {
     AtomNetId net_id = Netlist::create_net(name);
 
     //Check post-conditions: size
@@ -214,11 +214,11 @@ AtomNetId AtomNetlist::create_net(const std::string name) {
     return net_id;
 }
 
-AtomNetId AtomNetlist::add_net(const std::string name, AtomPinId driver, std::vector<AtomPinId> sinks) {
+AtomNetId AtomNetlist::add_net(const std::string& name, AtomPinId driver, std::vector<AtomPinId> sinks) {
     return Netlist::add_net(name, driver, sinks);
 }
 
-void AtomNetlist::add_net_alias(const std::string net_name, const std::string alias_net_name) {
+void AtomNetlist::add_net_alias(const std::string& net_name, const std::string& alias_net_name) {
     auto net_id = find_net(net_name);
     VTR_ASSERT(net_id != AtomNetId::INVALID());
diff --git a/vpr/src/base/atom_netlist.h b/vpr/src/base/atom_netlist.h
index d639b2d5d57..de1bb4f53bf 100644
--- a/vpr/src/base/atom_netlist.h
+++ b/vpr/src/base/atom_netlist.h
@@ -157,7 +157,7 @@ class AtomNetlist : public Netlist
-    std::unordered_set<std::string> net_aliases(const std::string net_name) const;
+    std::unordered_set<std::string> net_aliases(const std::string& net_name) const;
 
   public: //Public Mutators
     /*
@@ -173,7 +173,7 @@ class AtomNetlist : public Netlist
-    AtomNetId add_net(const std::string name, AtomPinId driver, std::vector<AtomPinId> sinks);
+    AtomNetId add_net(const std::string& name, AtomPinId driver, std::vector<AtomPinId> sinks);
 
     /**
      * @brief Adds a value to the net aliases set for a given net name in the net_aliases_map.
@@ -218,7 +218,7 @@ class AtomNetlist : public Netlist
 arch.noc->router_list.size()) // check whether the noc topology information provided is using all the routers in the FPGA
     {
         VPR_FATAL_ERROR(VPR_ERROR_OTHER, "The Provided NoC topology information in the architecture file uses less number of routers than what is available in the FPGA device.");
-    } else if (noc_router_tiles.size() == 0) // case where no physical router tiles were found
+    } else if (noc_router_tiles.empty()) // case where no physical router tiles were found
     {
         VPR_FATAL_ERROR(VPR_ERROR_OTHER, "No physical NoC routers were found on the FPGA device. Either the provided name for the physical router tile was incorrect or the FPGA device has no routers.");
     }
@@ -58,7 +58,7 @@ void setup_noc(const t_arch& arch) {
     return;
 }
 
-void identify_and_store_noc_router_tile_positions(const DeviceGrid& device_grid, std::vector<t_noc_router_tile_position>& noc_router_tiles, std::string noc_router_tile_name) {
+void identify_and_store_noc_router_tile_positions(const DeviceGrid& device_grid, std::vector<t_noc_router_tile_position>& noc_router_tiles, const std::string& noc_router_tile_name) {
     const int num_layers = device_grid.get_num_layers();
     int curr_tile_width;
     int curr_tile_height;
@@ -173,10 +173,10 @@ void create_noc_routers(const t_noc_inf& noc_info, NocStorage* noc_model, std::v
             error_case_physical_router_index_2 = INVALID_PHYSICAL_ROUTER_INDEX;
 
             // determine the physical router tile that is closest to the current user described router in the arch file
-            for (auto physical_router = noc_router_tiles.begin(); physical_router != noc_router_tiles.end(); physical_router++) {
+            for (auto & physical_router : noc_router_tiles) {
                 // get the position of the current physical router tile on the FPGA device
-                curr_physical_router_pos_x = physical_router->tile_centroid_x;
-                curr_physical_router_pos_y = physical_router->tile_centroid_y;
+                curr_physical_router_pos_x = physical_router.tile_centroid_x;
+                curr_physical_router_pos_y = physical_router.tile_centroid_y;
 
                 // use euclidean distance to calculate the length between the current user described router and the physical router
                 curr_calculated_distance = sqrt(pow(abs(curr_physical_router_pos_x - curr_logical_router_position_x), 2.0) + pow(abs(curr_physical_router_pos_y - curr_logical_router_position_y), 2.0));
@@ -237,14 +237,14 @@ void create_noc_links(const t_noc_inf* noc_info, NocStorage* noc_model) {
     noc_model->make_room_for_noc_router_link_list();
 
     // go through each router and add its outgoing links to the NoC
-    for (auto router = noc_info->router_list.begin(); router != noc_info->router_list.end(); router++) {
+    for (const auto & router : noc_info->router_list) {
         // get the converted id of the current source router
-        source_router = noc_model->convert_router_id(router->id);
+        source_router = noc_model->convert_router_id(router.id);
 
         // go through all the routers connected to the current one and add links to the noc
-        for (auto conn_router_id = router->connection_list.begin(); conn_router_id != router->connection_list.end(); conn_router_id++) {
+        for (int conn_router_id : router.connection_list) {
             // get the converted id of the currently connected sink router
-            sink_router = noc_model->convert_router_id(*conn_router_id);
+            sink_router = noc_model->convert_router_id(conn_router_id);
 
             // add the link to the Noc
             noc_model->add_link(source_router, sink_router);
diff --git a/vpr/src/base/setup_noc.h b/vpr/src/base/setup_noc.h
index 23737d1c5b1..62b3ae4d543 100644
--- a/vpr/src/base/setup_noc.h
+++ b/vpr/src/base/setup_noc.h
@@ -89,7 +89,7 @@ void setup_noc(const t_arch& arch);
 *            tile in the FPGA architecture description
 *            file.
 */
-void identify_and_store_noc_router_tile_positions(const DeviceGrid& device_grid, std::vector<t_noc_router_tile_position>& list_of_noc_router_tiles, std::string noc_router_tile_name);
+void identify_and_store_noc_router_tile_positions(const DeviceGrid& device_grid, std::vector<t_noc_router_tile_position>& list_of_noc_router_tiles, const std::string& noc_router_tile_name);
 
 /**
  * @brief Creates NoC routers and adds them to the NoC model based
diff --git a/vpr/src/base/vpr_constraints.cpp b/vpr/src/base/vpr_constraints.cpp
index 8580c9419ca..c44fc490ab3 100644
--- a/vpr/src/base/vpr_constraints.cpp
+++ b/vpr/src/base/vpr_constraints.cpp
@@ -41,7 +41,7 @@ Partition& VprConstraints::get_mutable_partition(PartitionId part_id) {
 std::vector<AtomBlockId> VprConstraints::get_part_atoms(PartitionId part_id) const {
     std::vector<AtomBlockId> part_atoms;
 
-    for (auto& it : constrained_atoms) {
+    for (const auto& it : constrained_atoms) {
         if (it.second == part_id) {
             part_atoms.push_back(it.first);
         }
diff --git a/vpr/src/base/vpr_constraints_reader.cpp b/vpr/src/base/vpr_constraints_reader.cpp
index 8e69b7b42b4..c1d3f33389b 100644
--- a/vpr/src/base/vpr_constraints_reader.cpp
+++ b/vpr/src/base/vpr_constraints_reader.cpp
@@ -35,7 +35,7 @@ void load_vpr_constraints_file(const char* read_vpr_constraints_name) {
     auto& floorplanning_ctx = g_vpr_ctx.mutable_floorplanning();
     floorplanning_ctx.constraints = reader.constraints_;
 
-    VprConstraints ctx_constraints = floorplanning_ctx.constraints;
+    const auto& ctx_constraints = floorplanning_ctx.constraints;
 
     if (getEchoEnabled() && isEchoFileEnabled(E_ECHO_VPR_CONSTRAINTS)) {
         echo_constraints(getEchoFileName(E_ECHO_VPR_CONSTRAINTS), ctx_constraints);
diff --git a/vpr/src/base/vpr_constraints_writer.cpp b/vpr/src/base/vpr_constraints_writer.cpp
index 056d0fd7151..073b02dc1f3 100644
--- a/vpr/src/base/vpr_constraints_writer.cpp
+++ b/vpr/src/base/vpr_constraints_writer.cpp
@@ -55,8 +55,7 @@ void setup_vpr_floorplan_constraints_one_loc(VprConstraints& constraints, int ex
     * The subtile can also optionally be set in the PartitionRegion, based on the value passed in by the user.
     */
     for (auto blk_id : cluster_ctx.clb_nlist.blocks()) {
-        std::string part_name;
-        part_name = cluster_ctx.clb_nlist.block_name(blk_id);
+        const std::string& part_name = cluster_ctx.clb_nlist.block_name(blk_id);
         PartitionId partid(part_id);
 
         Partition part;
@@ -65,7 +64,7 @@ void setup_vpr_floorplan_constraints_one_loc(VprConstraints& constraints, int ex
 
         PartitionRegion pr;
         Region reg;
-        auto loc = place_ctx.block_locs[blk_id].loc;
+        const auto& loc = place_ctx.block_locs[blk_id].loc;
 
         reg.set_region_rect({loc.x - expand,
                              loc.y - expand,
diff --git a/vpr/src/base/vpr_constraints_writer.h b/vpr/src/base/vpr_constraints_writer.h
index 9db00bb4612..f99335c7c42 100644
--- a/vpr/src/base/vpr_constraints_writer.h
+++ b/vpr/src/base/vpr_constraints_writer.h
@@ -37,7 +37,19 @@
 */
 void write_vpr_floorplan_constraints(const char* file_name, int expand, bool subtile, int horizontal_partitions, int vertical_partitions);
 
-//Generate constraints which lock all blocks to one location.
+/**
+ * @brief Populates VprConstraints by creating a partition for each clustered block.
+ *        All atoms in the clustered block are assigned to the same partition. The created partition
+ *        for each clustered block would include the current location of the clustered block. The
+ *        partition is expanded from four sides by "expand" blocks.
+ *
+ * @param constraints The VprConstraints to be populated.
+ * @param expand The amount the floorplan region will be expanded around the current
+ *               x, y location of the block. Ex. if location is (1, 1) and expand = 1,
+ *               the floorplan region will be from (0, 0) to (2, 2).
+ * @param subtile Specifies whether to write out the constraint regions with or without
+ *                subtile values.
+ */
 void setup_vpr_floorplan_constraints_one_loc(VprConstraints& constraints, int expand, bool subtile);
 
 /* Generate constraints which divide the grid into partition according to the horizontal and vertical partition values passed in
diff --git a/vpr/src/noc/noc_traffic_flows.cpp b/vpr/src/noc/noc_traffic_flows.cpp
index 426597bd71c..18b03444c91 100644
--- a/vpr/src/noc/noc_traffic_flows.cpp
+++ b/vpr/src/noc/noc_traffic_flows.cpp
@@ -54,11 +54,23 @@ const std::vector<NocTrafficFlowId>& NocTrafficFlows::get_all_traffic_flow_id(vo
 
 // setters for the traffic flows
 
-void NocTrafficFlows::create_noc_traffic_flow(const std::string& source_router_module_name, const std::string& sink_router_module_name, ClusterBlockId source_router_cluster_id, ClusterBlockId sink_router_cluster_id, double traffic_flow_bandwidth, double traffic_flow_latency, int traffic_flow_priority) {
+void NocTrafficFlows::create_noc_traffic_flow(const std::string& source_router_module_name,
+                                              const std::string& sink_router_module_name,
+                                              ClusterBlockId source_router_cluster_id,
+                                              ClusterBlockId sink_router_cluster_id,
+                                              double traffic_flow_bandwidth,
+                                              double traffic_flow_latency,
+                                              int traffic_flow_priority) {
     VTR_ASSERT_MSG(!built_traffic_flows, "NoC traffic flows have already been added, cannot modify further.");
 
     // create and add the new traffic flow to the vector
-    noc_traffic_flows.emplace_back(source_router_module_name, sink_router_module_name, source_router_cluster_id, sink_router_cluster_id, traffic_flow_bandwidth, traffic_flow_latency, traffic_flow_priority);
+    noc_traffic_flows.emplace_back(source_router_module_name,
+                                   sink_router_module_name,
+                                   source_router_cluster_id,
+                                   sink_router_cluster_id,
+                                   traffic_flow_bandwidth,
+                                   traffic_flow_latency,
+                                   traffic_flow_priority);
 
     //since the new traffic flow was added to the back of the vector, its id will be the index of the last element
     NocTrafficFlowId curr_traffic_flow_id = (NocTrafficFlowId)(noc_traffic_flows.size() - 1);
diff --git a/vpr/src/noc/noc_traffic_flows.h b/vpr/src/noc/noc_traffic_flows.h
index 8b433ef3599..c1c73f8884f 100644
--- a/vpr/src/noc/noc_traffic_flows.h
+++ b/vpr/src/noc/noc_traffic_flows.h
@@ -255,7 +255,13 @@ class NocTrafficFlows {
     *                             at the sink router.
     * @param traffic_flow_priority The importance of a given traffic flow.
     */
-    void create_noc_traffic_flow(const std::string& source_router_module_name, const std::string& sink_router_module_name, ClusterBlockId source_router_cluster_id, ClusterBlockId sink_router_cluster_id, double traffic_flow_bandwidth, double traffic_flow_latency, int traffic_flow_priority);
+    void create_noc_traffic_flow(const std::string& source_router_module_name,
+                                 const std::string& sink_router_module_name,
+                                 ClusterBlockId source_router_cluster_id,
+                                 ClusterBlockId sink_router_cluster_id,
+                                 double traffic_flow_bandwidth,
+                                 double traffic_flow_latency,
+                                 int traffic_flow_priority);
 
     /**
      * @brief Copies the passed in router_cluster_id_in_netlist vector to the
diff --git a/vpr/src/noc/read_xml_noc_traffic_flows_file.cpp b/vpr/src/noc/read_xml_noc_traffic_flows_file.cpp
index b785d2c4da6..07bd53be7ce 100644
--- a/vpr/src/noc/read_xml_noc_traffic_flows_file.cpp
+++ b/vpr/src/noc/read_xml_noc_traffic_flows_file.cpp
@@ -76,7 +76,12 @@ void read_xml_noc_traffic_flows_file(const char* noc_flows_file) {
     return;
 }
 
-void process_single_flow(pugi::xml_node single_flow_tag, const pugiutil::loc_data& loc_data, const ClusteringContext& cluster_ctx, NocContext& noc_ctx, t_physical_tile_type_ptr noc_router_tile_type, const std::vector<ClusterBlockId>& cluster_blocks_compatible_with_noc_router_tiles) {
+void process_single_flow(pugi::xml_node single_flow_tag,
+                         const pugiutil::loc_data& loc_data,
+                         const ClusteringContext& cluster_ctx,
+                         NocContext& noc_ctx,
+                         t_physical_tile_type_ptr noc_router_tile_type,
+                         const std::vector<ClusterBlockId>& cluster_blocks_compatible_with_noc_router_tiles) {
     // contains all traffic flows
     NocTrafficFlows* noc_traffic_flow_storage = &noc_ctx.noc_traffic_flows_storage;
@@ -113,7 +118,13 @@ void process_single_flow(pugi::xml_node single_flow_tag, const pugiutil::loc_dat
     verify_traffic_flow_properties(traffic_flow_bandwidth, max_traffic_flow_latency, traffic_flow_priority, single_flow_tag, loc_data);
 
     // The current flow information is legal, so store it
-    noc_traffic_flow_storage->create_noc_traffic_flow(source_router_module_name, sink_router_module_name, source_router_id, sink_router_id, traffic_flow_bandwidth, max_traffic_flow_latency, traffic_flow_priority);
+    noc_traffic_flow_storage->create_noc_traffic_flow(source_router_module_name,
+                                                      sink_router_module_name,
+                                                      source_router_id,
+                                                      sink_router_id,
+                                                      traffic_flow_bandwidth,
+                                                      max_traffic_flow_latency,
+                                                      traffic_flow_priority);
 
     return;
 }
@@ -169,7 +180,7 @@ int get_traffic_flow_priority(pugi::xml_node single_flow_tag, const pugiutil::lo
     return traffic_flow_priority;
 }
 
-void verify_traffic_flow_router_modules(std::string source_router_name, std::string sink_router_name, pugi::xml_node single_flow_tag, const pugiutil::loc_data& loc_data) {
+void verify_traffic_flow_router_modules(const std::string& source_router_name, const std::string& sink_router_name, pugi::xml_node single_flow_tag, const pugiutil::loc_data& loc_data) {
     // check that the source router module name is not empty
     if (source_router_name == "") {
         vpr_throw(VPR_ERROR_OTHER, loc_data.filename_c_str(), loc_data.line(single_flow_tag), "Invalid name for the source NoC router module.");
@@ -206,7 +217,11 @@ void verify_traffic_flow_properties(double traffic_flow_bandwidth, double max_tr
     return;
 }
 
-ClusterBlockId get_router_module_cluster_id(std::string router_module_name, const ClusteringContext& cluster_ctx, pugi::xml_node single_flow_tag, const pugiutil::loc_data& loc_data, const std::vector<ClusterBlockId>& cluster_blocks_compatible_with_noc_router_tiles) {
+ClusterBlockId get_router_module_cluster_id(const std::string& router_module_name,
+                                            const ClusteringContext& cluster_ctx,
+                                            pugi::xml_node
single_flow_tag, + const pugiutil::loc_data& loc_data, + const std::vector& cluster_blocks_compatible_with_noc_router_tiles) { ClusterBlockId router_module_id = ClusterBlockId::INVALID(); // Given a regex pattern, use it to match a name of a cluster router block within the clustered netlist. If a matching cluster block is found, then return its cluster block id. @@ -226,7 +241,7 @@ ClusterBlockId get_router_module_cluster_id(std::string router_module_name, cons return router_module_id; } -void check_traffic_flow_router_module_type(std::string router_module_name, ClusterBlockId router_module_id, pugi::xml_node single_flow_tag, const pugiutil::loc_data& loc_data, const ClusteringContext& cluster_ctx, t_physical_tile_type_ptr noc_router_tile_type) { +void check_traffic_flow_router_module_type(const std::string& router_module_name, ClusterBlockId router_module_id, pugi::xml_node single_flow_tag, const pugiutil::loc_data& loc_data, const ClusteringContext& cluster_ctx, t_physical_tile_type_ptr noc_router_tile_type) { // get the logical type of the provided router module t_logical_block_type_ptr router_module_logical_type = cluster_ctx.clb_nlist.block_type(router_module_id); @@ -257,7 +272,7 @@ t_physical_tile_type_ptr get_physical_type_of_noc_router_tile(const DeviceContex physical_noc_router->get_router_layer_position()}); } -bool check_that_all_router_blocks_have_an_associated_traffic_flow(NocContext& noc_ctx, t_physical_tile_type_ptr noc_router_tile_type, std::string noc_flows_file) { +bool check_that_all_router_blocks_have_an_associated_traffic_flow(NocContext& noc_ctx, t_physical_tile_type_ptr noc_router_tile_type, const std::string& noc_flows_file) { bool result = true; // contains the number of all the noc router blocks in the design @@ -269,10 +284,10 @@ bool check_that_all_router_blocks_have_an_associated_traffic_flow(NocContext& no /* * Go through the router subtiles and get the router logical block types the subtiles support. 
Then determine how many of each router logical block types there are in the clustered netlist. The accumulated sum of all these clusters is the total number of router blocks in the design. */ - for (auto subtile = noc_router_subtiles->begin(); subtile != noc_router_subtiles->end(); subtile++) { - for (auto router_logical_block = subtile->equivalent_sites.begin(); router_logical_block != subtile->equivalent_sites.end(); router_logical_block++) { + for (const auto & noc_router_subtile : *noc_router_subtiles) { + for (auto router_logical_block : noc_router_subtile.equivalent_sites) { // get the number of logical blocks in the design of the current logical block type - number_of_router_blocks_in_design += clustered_netlist_stats.num_blocks_type[(*router_logical_block)->index]; + number_of_router_blocks_in_design += clustered_netlist_stats.num_blocks_type[router_logical_block->index]; } } @@ -299,14 +314,14 @@ std::vector get_cluster_blocks_compatible_with_noc_router_tiles( // vector to store all the cluster blocks ids that can be placed within a physical NoC router tile on the FPGA std::vector cluster_blocks_compatible_with_noc_router_tiles; - for (auto cluster_block_id = cluster_netlist_blocks.begin(); cluster_block_id != cluster_netlist_blocks.end(); cluster_block_id++) { + for (auto cluster_blk_id : cluster_netlist_blocks) { // get the logical type of the block - t_logical_block_type_ptr cluster_block_type = cluster_ctx.clb_nlist.block_type(*cluster_block_id); + t_logical_block_type_ptr cluster_block_type = cluster_ctx.clb_nlist.block_type(cluster_blk_id); // check if the current block is compatible with a NoC router tile // if it is, then this block is a NoC outer instantiated by the user in the design, so add it to the vector compatible blocks if (is_tile_compatible(noc_router_tile_type, cluster_block_type)) { - cluster_blocks_compatible_with_noc_router_tiles.push_back(*cluster_block_id); + cluster_blocks_compatible_with_noc_router_tiles.push_back(cluster_blk_id); 
} } diff --git a/vpr/src/noc/read_xml_noc_traffic_flows_file.h b/vpr/src/noc/read_xml_noc_traffic_flows_file.h index e8005665b3c..55cecc38bc1 100644 --- a/vpr/src/noc/read_xml_noc_traffic_flows_file.h +++ b/vpr/src/noc/read_xml_noc_traffic_flows_file.h @@ -142,7 +142,7 @@ int get_traffic_flow_priority(pugi::xml_node single_flow_tag, const pugiutil::lo * @param loc_data Contains location data about the current line in the xml * file. Passed in for error logging. */ -void verify_traffic_flow_router_modules(std::string source_router_name, std::string sink_router_name, pugi::xml_node single_flow_tag, const pugiutil::loc_data& loc_data); +void verify_traffic_flow_router_modules(const std::string& source_router_name, const std::string& sink_router_name, pugi::xml_node single_flow_tag, const pugiutil::loc_data& loc_data); /** * @brief Ensures the traffic flow's bandwidth, latency constraint and @@ -181,7 +181,11 @@ void verify_traffic_flow_properties(double traffic_flow_bandwidth, double max_tr * @return ClusterBlockId The corresponding router block id of the provided * router module name. */ -ClusterBlockId get_router_module_cluster_id(std::string router_module_name, const ClusteringContext& cluster_ctx, pugi::xml_node single_flow_tag, const pugiutil::loc_data& loc_data, const std::vector& cluster_blocks_compatible_with_noc_router_tiles); +ClusterBlockId get_router_module_cluster_id(const std::string& router_module_name, + const ClusteringContext& cluster_ctx, + pugi::xml_node single_flow_tag, + const pugiutil::loc_data& loc_data, + const std::vector& cluster_blocks_compatible_with_noc_router_tiles); /** * @brief Checks to see whether a given router block is compatible with a NoC @@ -204,7 +208,7 @@ ClusterBlockId get_router_module_cluster_id(std::string router_module_name, cons * FPGA. Used to check if the router block is * compatible with a router tile. 
*/ -void check_traffic_flow_router_module_type(std::string router_module_name, ClusterBlockId router_module_id, pugi::xml_node single_flow_tag, const pugiutil::loc_data& loc_data, const ClusteringContext& cluster_ctx, t_physical_tile_type_ptr noc_router_tile_type); +void check_traffic_flow_router_module_type(const std::string& router_module_name, ClusterBlockId router_module_id, pugi::xml_node single_flow_tag, const pugiutil::loc_data& loc_data, const ClusteringContext& cluster_ctx, t_physical_tile_type_ptr noc_router_tile_type); /** * @brief Retrieves the physical type of a noc router tile. @@ -237,7 +241,7 @@ t_physical_tile_type_ptr get_physical_type_of_noc_router_tile(const DeviceContex * associated traffic flow. False means there are some router * blocks that do not have a an associated traffic flow. */ -bool check_that_all_router_blocks_have_an_associated_traffic_flow(NocContext& noc_ctx, t_physical_tile_type_ptr noc_router_tile_type, std::string noc_flows_file); +bool check_that_all_router_blocks_have_an_associated_traffic_flow(NocContext& noc_ctx, t_physical_tile_type_ptr noc_router_tile_type, const std::string& noc_flows_file); /** * @brief Goes through the blocks within the clustered netlist and identifies diff --git a/vpr/src/pack/attraction_groups.cpp b/vpr/src/pack/attraction_groups.cpp index e4bd17620e4..60e72546e51 100644 --- a/vpr/src/pack/attraction_groups.cpp +++ b/vpr/src/pack/attraction_groups.cpp @@ -1,7 +1,7 @@ #include "attraction_groups.h" AttractionInfo::AttractionInfo(bool attraction_groups_on) { - auto& floorplanning_ctx = g_vpr_ctx.mutable_floorplanning(); + const auto& floorplanning_ctx = g_vpr_ctx.floorplanning(); auto& atom_ctx = g_vpr_ctx.atom(); int num_parts = floorplanning_ctx.constraints.get_num_partitions(); @@ -33,7 +33,7 @@ AttractionInfo::AttractionInfo(bool attraction_groups_on) { } void AttractionInfo::create_att_groups_for_overfull_regions() { - auto& floorplanning_ctx = g_vpr_ctx.mutable_floorplanning(); + const auto& 
floorplanning_ctx = g_vpr_ctx.floorplanning(); auto& atom_ctx = g_vpr_ctx.atom(); int num_parts = floorplanning_ctx.constraints.get_num_partitions(); @@ -47,10 +47,10 @@ void AttractionInfo::create_att_groups_for_overfull_regions() { atom_attraction_group.resize(num_atoms); fill(atom_attraction_group.begin(), atom_attraction_group.end(), AttractGroupId::INVALID()); - auto& overfull_regions = floorplanning_ctx.overfull_regions; + const auto& overfull_regions = floorplanning_ctx.overfull_regions; PartitionRegion overfull_regions_pr; - for (unsigned int i = 0; i < overfull_regions.size(); i++) { - overfull_regions_pr.add_to_part_region(overfull_regions[i]); + for (const auto & overfull_region : overfull_regions) { + overfull_regions_pr.add_to_part_region(overfull_region); } /* * Create a PartitionRegion that contains all the overfull regions so that you can @@ -88,7 +88,7 @@ void AttractionInfo::create_att_groups_for_overfull_regions() { } void AttractionInfo::create_att_groups_for_all_regions() { - auto& floorplanning_ctx = g_vpr_ctx.mutable_floorplanning(); + const auto& floorplanning_ctx = g_vpr_ctx.floorplanning(); auto& atom_ctx = g_vpr_ctx.atom(); int num_parts = floorplanning_ctx.constraints.get_num_partitions(); @@ -137,8 +137,8 @@ void AttractionInfo::assign_atom_attraction_ids() { AttractionGroup att_group = attraction_groups[group_id]; - for (unsigned int iatom = 0; iatom < att_group.group_atoms.size(); iatom++) { - atom_attraction_group[att_group.group_atoms[iatom]] = group_id; + for (auto group_atom : att_group.group_atoms) { + atom_attraction_group[group_atom] = group_id; } } } diff --git a/vpr/src/pack/attraction_groups.h b/vpr/src/pack/attraction_groups.h index 109afa667cc..813d6e0fb1b 100644 --- a/vpr/src/pack/attraction_groups.h +++ b/vpr/src/pack/attraction_groups.h @@ -80,7 +80,7 @@ class AttractionInfo { int num_attraction_groups(); - int get_att_group_pulls(); + int get_att_group_pulls() const; void set_att_group_pulls(int num_pulls); @@ -101,7 
+101,7 @@ class AttractionInfo { int att_group_pulls = 1; }; -inline int AttractionInfo::get_att_group_pulls() { +inline int AttractionInfo::get_att_group_pulls() const { return att_group_pulls; } diff --git a/vpr/src/pack/cluster_util.cpp b/vpr/src/pack/cluster_util.cpp index 48fe7a9dd71..0c1891c7927 100644 --- a/vpr/src/pack/cluster_util.cpp +++ b/vpr/src/pack/cluster_util.cpp @@ -76,22 +76,21 @@ static void echo_clusters(char* filename) { cluster_atoms[clb_index].push_back(atom_blk_id); } - for (auto i = cluster_atoms.begin(); i != cluster_atoms.end(); i++) { - std::string cluster_name; - cluster_name = cluster_ctx.clb_nlist.block_name(i->first); - fprintf(fp, "Cluster %s Id: %zu \n", cluster_name.c_str(), size_t(i->first)); + for (auto & cluster_atom : cluster_atoms) { + const std::string& cluster_name = cluster_ctx.clb_nlist.block_name(cluster_atom.first); + fprintf(fp, "Cluster %s Id: %zu \n", cluster_name.c_str(), size_t(cluster_atom.first)); fprintf(fp, "\tAtoms in cluster: \n"); - int num_atoms = i->second.size(); + int num_atoms = cluster_atom.second.size(); for (auto j = 0; j < num_atoms; j++) { - AtomBlockId atom_id = i->second[j]; + AtomBlockId atom_id = cluster_atom.second[j]; fprintf(fp, "\t %s \n", atom_ctx.nlist.block_name(atom_id).c_str()); } } fprintf(fp, "\nCluster Floorplanning Constraints:\n"); - auto& floorplanning_ctx = g_vpr_ctx.mutable_floorplanning(); + const auto& floorplanning_ctx = g_vpr_ctx.floorplanning(); for (ClusterBlockId clb_id : cluster_ctx.clb_nlist.blocks()) { const std::vector& regions = floorplanning_ctx.cluster_constraints[clb_id].get_regions(); @@ -1318,9 +1317,6 @@ enum e_block_pack_status atom_cluster_floorplanning_check(const AtomBlockId blk_ PartitionId partid; partid = floorplanning_ctx.constraints.get_atom_partition(blk_id); - PartitionRegion atom_pr; - PartitionRegion cluster_pr; - //if the atom does not belong to a partition, it can be put in the cluster //regardless of what the cluster's PartitionRegion is 
because it has no constraints if (partid == PartitionId::INVALID()) { @@ -1331,12 +1327,12 @@ enum e_block_pack_status atom_cluster_floorplanning_check(const AtomBlockId blk_ return BLK_PASSED; } else { //get pr of that partition - atom_pr = floorplanning_ctx.constraints.get_partition_pr(partid); + const PartitionRegion& atom_pr = floorplanning_ctx.constraints.get_partition_pr(partid); //intersect it with the pr of the current cluster - cluster_pr = floorplanning_ctx.cluster_constraints[clb_index]; + PartitionRegion cluster_pr = floorplanning_ctx.cluster_constraints[clb_index]; - if (cluster_pr.empty() == true) { + if (cluster_pr.empty()) { temp_cluster_pr = atom_pr; cluster_pr_needs_update = true; if (verbosity > 3) { @@ -1349,7 +1345,7 @@ enum e_block_pack_status atom_cluster_floorplanning_check(const AtomBlockId blk_ update_cluster_part_reg(cluster_pr, atom_pr); } - if (cluster_pr.empty() == true) { + if (cluster_pr.empty()) { if (verbosity > 3) { VTR_LOG("\t\t\t Intersect: Atom block %d failed floorplanning check for cluster %d \n", blk_id, clb_index); } @@ -2036,7 +2032,7 @@ void start_new_cluster(t_cluster_placement_stats* cluster_placement_stats, const int num_models, const int max_cluster_size, const t_arch* arch, - std::string device_layout_name, + const std::string& device_layout_name, std::vector* lb_type_rr_graphs, t_lb_router_data** router_data, const int detailed_routing_stage, @@ -3466,10 +3462,10 @@ enum e_block_pack_status check_chain_root_placement_feasibility(const t_pb_graph } else { block_pack_status = BLK_FAILED_FEASIBLE; for (const auto& chain : chain_root_pins) { - for (size_t tieOff = 0; tieOff < chain.size(); tieOff++) { + for (auto tieOff : chain) { // check if this chosen primitive is one of the possible // starting points for this chain. 
- if (pb_graph_node == chain[tieOff]->parent_node) { + if (pb_graph_node == tieOff->parent_node) { // this location matches with the one of the dedicated chain // input from outside logic block, therefore it is feasible block_pack_status = BLK_PASSED; @@ -3624,7 +3620,7 @@ void update_le_count(const t_pb* pb, const t_logical_block_type_ptr logic_block_ * This function returns true if the given physical block has * a primitive matching the given blif model and is used */ -bool pb_used_for_blif_model(const t_pb* pb, std::string blif_model_name) { +bool pb_used_for_blif_model(const t_pb* pb, const std::string& blif_model_name) { auto pb_graph_node = pb->pb_graph_node; auto pb_type = pb_graph_node->pb_type; auto mode = &pb_type->modes[pb->mode]; diff --git a/vpr/src/pack/cluster_util.h b/vpr/src/pack/cluster_util.h index 6c05272e1e7..2f01e38b1e5 100644 --- a/vpr/src/pack/cluster_util.h +++ b/vpr/src/pack/cluster_util.h @@ -331,7 +331,7 @@ void start_new_cluster(t_cluster_placement_stats* cluster_placement_stats, const int num_models, const int max_cluster_size, const t_arch* arch, - std::string device_layout_name, + const std::string& device_layout_name, std::vector* lb_type_rr_graphs, t_lb_router_data** router_data, const int detailed_routing_stage, @@ -442,7 +442,7 @@ t_logical_block_type_ptr identify_logic_block_type(std::map& le_count, const t_pb_type* le_pb_type); diff --git a/vpr/src/pack/pack.cpp b/vpr/src/pack/pack.cpp index 9fd61587cde..e561ca59365 100644 --- a/vpr/src/pack/pack.cpp +++ b/vpr/src/pack/pack.cpp @@ -27,7 +27,10 @@ /* #define DUMP_PB_GRAPH 1 */ /* #define DUMP_BLIF_INPUT 1 */ -static bool try_size_device_grid(const t_arch& arch, const std::map& num_type_instances, float target_device_utilization, std::string device_layout_name); +static bool try_size_device_grid(const t_arch& arch, + const std::map& num_type_instances, + float target_device_utilization, + const std::string& device_layout_name); /** * @brief Counts the total number of logic models 
that the architecture can implement. @@ -153,7 +156,7 @@ bool try_pack(t_packer_opts* packer_opts, * is not dense enough and there are floorplan constraints, it is presumed that the constraints are the cause * of the floorplan not fitting, so attraction groups are turned on for later iterations. */ - bool floorplan_not_fitting = (floorplan_regions_overfull || g_vpr_ctx.mutable_floorplanning().constraints.get_num_partitions() > 0); + bool floorplan_not_fitting = (floorplan_regions_overfull || g_vpr_ctx.floorplanning().constraints.get_num_partitions() > 0); if (fits_on_device && !floorplan_regions_overfull) { break; //Done @@ -331,7 +334,10 @@ std::unordered_set alloc_and_load_is_clock(bool global_clocks) { return (is_clock); } -static bool try_size_device_grid(const t_arch& arch, const std::map& num_type_instances, float target_device_utilization, std::string device_layout_name) { +static bool try_size_device_grid(const t_arch& arch, + const std::map& num_type_instances, + float target_device_utilization, + const std::string& device_layout_name) { auto& device_ctx = g_vpr_ctx.mutable_device(); //Build the device diff --git a/vpr/src/place/move_utils.cpp b/vpr/src/place/move_utils.cpp index b7692581a61..7dec20a1d01 100644 --- a/vpr/src/place/move_utils.cpp +++ b/vpr/src/place/move_utils.cpp @@ -28,7 +28,7 @@ void report_aborted_moves() { if (f_move_abort_reasons.empty()) { VTR_LOG(" No moves aborted\n"); } - for (auto kv : f_move_abort_reasons) { + for (const auto& kv : f_move_abort_reasons) { VTR_LOG(" %s: %zu\n", kv.first.c_str(), kv.second); } } From 3777e5f420e3895237b3169415ec245caad00f4f Mon Sep 17 00:00:00 2001 From: soheilshahrouz Date: Fri, 16 Feb 2024 10:36:39 -0500 Subject: [PATCH 007/257] find noc router atoms --- vpr/src/pack/pack.cpp | 25 +++++++++++++++++++++++++ 1 file changed, 25 insertions(+) diff --git a/vpr/src/pack/pack.cpp b/vpr/src/pack/pack.cpp index e561ca59365..bd740b7821c 100644 --- a/vpr/src/pack/pack.cpp +++ b/vpr/src/pack/pack.cpp @@ 
-40,6 +40,26 @@ static bool try_size_device_grid(const t_arch& arch, */ static int count_models(const t_model* user_models); +static std::vector find_noc_router_atoms() { + const auto& atom_ctx = g_vpr_ctx.atom(); + + // NoC router atoms are expected to have a specific blif model + const std::string noc_router_blif_model_name = "noc_router_adapter_block"; + + // stores found NoC router atoms + std::vector noc_router_atoms; + + // iterate over all atoms and find those whose blif model matches + for (auto atom_id : atom_ctx.nlist.blocks()) { + const t_model* model = atom_ctx.nlist.block_model(atom_id); + if (noc_router_blif_model_name == model->name) { + noc_router_atoms.push_back(atom_id); + } + } + + return noc_router_atoms; +} + bool try_pack(t_packer_opts* packer_opts, const t_analysis_opts* analysis_opts, const t_arch* arch, @@ -131,6 +151,11 @@ bool try_pack(t_packer_opts* packer_opts, int pack_iteration = 1; bool floorplan_regions_overfull = false; + auto noc_atoms = find_noc_router_atoms(); + for (auto noc_atom : noc_atoms) { + std::cout << "NoC Atom: " << atom_ctx.nlist.block_name(noc_atom) << std::endl; + } + while (true) { free_clustering_data(*packer_opts, clustering_data); From 71f85c5d980694f4f7003476eea885a0e5ef2cfe Mon Sep 17 00:00:00 2001 From: soheilshahrouz Date: Fri, 16 Feb 2024 13:06:05 -0500 Subject: [PATCH 008/257] add exclusivity index to PartitionRegion --- vpr/src/base/partition_region.cpp | 48 ++++++++++++++++++++++++++----- vpr/src/base/partition_region.h | 9 ++++-- 2 files changed, 48 insertions(+), 9 deletions(-) diff --git a/vpr/src/base/partition_region.cpp b/vpr/src/base/partition_region.cpp index 2676b6d1035..77afc4fa5e7 100644 --- a/vpr/src/base/partition_region.cpp +++ b/vpr/src/base/partition_region.cpp @@ -36,6 +36,22 @@ bool PartitionRegion::is_loc_in_part_reg(const t_pl_loc& loc) const { return is_in_pr; } +int PartitionRegion::get_exclusivity_index() const { + return exclusivity_index; +} + +void 
PartitionRegion::set_exclusivity_index(int index) { + /* negative exclusivity index means this PartitionRegion is compatible + * with other PartitionsRegions as long as the intersection of their + * regions is not empty. + */ + if (index < 0) { + index = -1; + } + + exclusivity_index = index; +} + PartitionRegion intersection(const PartitionRegion& cluster_pr, const PartitionRegion& new_pr) { /**for N regions in part_region and M in the calling object you can get anywhere from * 0 to M*N regions in the resulting vector. Only intersection regions with non-zero area rectangles and @@ -43,11 +59,20 @@ PartitionRegion intersection(const PartitionRegion& cluster_pr, const PartitionR * Rectangles are not merged even if it would be possible */ PartitionRegion pr; + + const int cluster_exclusivity = cluster_pr.get_exclusivity_index(); + const int new_exclusivity = new_pr.get_exclusivity_index(); + + // PartitionRegion are not compatible even if their regions overlap + if (cluster_exclusivity != new_exclusivity) { + return pr; + } + auto& pr_regions = pr.get_mutable_regions(); - Region intersect_region; + for (const auto& cluster_region : cluster_pr.get_regions()) { for (const auto& new_region : new_pr.get_regions()) { - intersect_region = intersection(cluster_region, new_region); + Region intersect_region = intersection(cluster_region, new_region); if (!intersect_region.empty()) { pr_regions.push_back(intersect_region); } @@ -60,14 +85,23 @@ PartitionRegion intersection(const PartitionRegion& cluster_pr, const PartitionR void update_cluster_part_reg(PartitionRegion& cluster_pr, const PartitionRegion& new_pr) { std::vector int_regions; - for (const auto& cluster_region : cluster_pr.get_regions()) { - for (const auto& new_region : new_pr.get_regions()) { - Region intersect_region = intersection(cluster_region, new_region); - if (!intersect_region.empty()) { - int_regions.push_back(intersect_region); + const int cluster_exclusivity = cluster_pr.get_exclusivity_index(); + 
const int new_exclusivity = new_pr.get_exclusivity_index(); + + // check whether PartitionRegions are compatible in the first place + if (cluster_exclusivity == new_exclusivity) { + + // now that we know PartitionRegions are compatible, look for overlapping regions + for (const auto& cluster_region : cluster_pr.get_regions()) { + for (const auto& new_region : new_pr.get_regions()) { + Region intersect_region = intersection(cluster_region, new_region); + if (!intersect_region.empty()) { + int_regions.push_back(intersect_region); + } } } } + cluster_pr.set_partition_region(int_regions); } diff --git a/vpr/src/base/partition_region.h b/vpr/src/base/partition_region.h index 2ea9796091b..ec4d24a065f 100644 --- a/vpr/src/base/partition_region.h +++ b/vpr/src/base/partition_region.h @@ -50,15 +50,20 @@ class PartitionRegion { */ bool is_loc_in_part_reg(const t_pl_loc& loc) const; + int get_exclusivity_index() const; + + void set_exclusivity_index(int index); + private: std::vector regions; ///< union of rectangular regions that a partition can be placed in + int exclusivity_index = -1; ///< PartitionRegions with different exclusivity_index values are not compatible }; ///@brief used to print data from a PartitionRegion void print_partition_region(FILE* fp, const PartitionRegion& pr); /** -* @brief Global friend function that returns the intersection of two PartitionRegions +* @brief Global function that returns the intersection of two PartitionRegions * * @param cluster_pr One of the PartitionRegions to be intersected * @param new_pr One of the PartitionRegions to be intersected @@ -66,7 +71,7 @@ void print_partition_region(FILE* fp, const PartitionRegion& pr); PartitionRegion intersection(const PartitionRegion& cluster_pr, const PartitionRegion& new_pr); /** -* @brief Global friend function that updates the PartitionRegion of a cluster with the intersection +* @brief Global function that updates the PartitionRegion of a cluster with the intersection * of the cluster 
PartitionRegion and a new PartitionRegion * * @param cluster_pr The cluster PartitionRegion that is to be updated From 2917fca40ccc55423900c2174c5b551671e7d56f Mon Sep 17 00:00:00 2001 From: soheilshahrouz Date: Fri, 16 Feb 2024 14:12:34 -0500 Subject: [PATCH 009/257] add update_noc_reachability_partitions() --- vpr/src/pack/pack.cpp | 105 ++++++++++++++++++++++++++++++++++++++++-- 1 file changed, 102 insertions(+), 3 deletions(-) diff --git a/vpr/src/pack/pack.cpp b/vpr/src/pack/pack.cpp index bd740b7821c..b491fe0d362 100644 --- a/vpr/src/pack/pack.cpp +++ b/vpr/src/pack/pack.cpp @@ -5,6 +5,8 @@ #include #include #include +#include +#include #include "vtr_assert.h" #include "vtr_log.h" @@ -60,6 +62,104 @@ static std::vector find_noc_router_atoms() { return noc_router_atoms; } +static void update_noc_reachability_partitions(const std::vector& noc_atoms) { + const auto& atom_ctx = g_vpr_ctx.atom(); + auto& constraints = g_vpr_ctx.mutable_floorplanning().constraints; + const auto& high_fanout_thresholds = g_vpr_ctx.cl_helper().high_fanout_thresholds; + + const size_t high_fanout_threshold = high_fanout_thresholds.get_threshold(""); + + // get the total number of atoms + const size_t n_atoms = atom_ctx.nlist.blocks().size(); + + vtr::vector atom_visited(n_atoms, false); + + int exclusivity_cnt = 0; + + RegionRectCoord unconstrained_rect{std::numeric_limits::min(), + std::numeric_limits::min(), + std::numeric_limits::max(), + std::numeric_limits::max(), + -1}; + Region unconstrained_region; + unconstrained_region.set_region_rect(unconstrained_rect); + + for (auto noc_atom_id : noc_atoms) { + // check if this NoC router has already been visited + if (atom_visited[noc_atom_id]) { + continue; + } + + exclusivity_cnt++; + + PartitionRegion associated_noc_partition_region; + associated_noc_partition_region.set_exclusivity_index(exclusivity_cnt); + associated_noc_partition_region.add_to_part_region(unconstrained_region); + + Partition associated_noc_partition; + 
associated_noc_partition.set_name(atom_ctx.nlist.block_name(noc_atom_id)); + associated_noc_partition.set_part_region(associated_noc_partition_region); + auto associated_noc_partition_id = (PartitionId)constraints.get_num_partitions(); + constraints.add_partition(associated_noc_partition); + + const PartitionId noc_partition_id = constraints.get_atom_partition(noc_atom_id); + + if (noc_partition_id == PartitionId::INVALID()) { + constraints.add_constrained_atom(noc_atom_id, associated_noc_partition_id); + } else { // noc atom is already in a partition + auto& noc_partition = constraints.get_mutable_partition(noc_partition_id); + auto& noc_partition_region = noc_partition.get_mutable_part_region(); + VTR_ASSERT(noc_partition_region.get_exclusivity_index() < 0); + noc_partition_region.set_exclusivity_index(exclusivity_cnt); + } + + std::queue q; + q.push(noc_atom_id); + atom_visited[noc_atom_id] = true; + + while (!q.empty()) { + AtomBlockId current_atom = q.front(); + q.pop(); + + PartitionId atom_partition_id = constraints.get_atom_partition(noc_atom_id); + if (atom_partition_id == PartitionId::INVALID()) { + constraints.add_constrained_atom(current_atom, associated_noc_partition_id); + } else { + auto& atom_partition = constraints.get_mutable_partition(atom_partition_id); + auto& atom_partition_region = atom_partition.get_mutable_part_region(); + VTR_ASSERT(atom_partition_region.get_exclusivity_index() < 0); + atom_partition_region.set_exclusivity_index(exclusivity_cnt); + } + + for(auto pin : atom_ctx.nlist.block_pins(current_atom)) { + AtomNetId net_id = atom_ctx.nlist.pin_net(pin); + size_t net_fanout = atom_ctx.nlist.net_sinks(net_id).size(); + + if (net_fanout >= high_fanout_threshold) { + continue; + } + + AtomBlockId driver_atom_id = atom_ctx.nlist.net_driver_block(net_id); + if (!atom_visited[driver_atom_id]) { + q.push(driver_atom_id); + atom_visited[driver_atom_id] = true; + } + + for (auto sink_pin : atom_ctx.nlist.net_sinks(net_id)) { + AtomBlockId 
sink_atom_id = atom_ctx.nlist.pin_block(sink_pin); + if (!atom_visited[sink_atom_id]) { + q.push(sink_atom_id); + atom_visited[sink_atom_id] = true; + } + } + + } + } + + } +} + + bool try_pack(t_packer_opts* packer_opts, const t_analysis_opts* analysis_opts, const t_arch* arch, @@ -151,10 +251,9 @@ bool try_pack(t_packer_opts* packer_opts, int pack_iteration = 1; bool floorplan_regions_overfull = false; + // find all NoC router atoms auto noc_atoms = find_noc_router_atoms(); - for (auto noc_atom : noc_atoms) { - std::cout << "NoC Atom: " << atom_ctx.nlist.block_name(noc_atom) << std::endl; - } + update_noc_reachability_partitions(noc_atoms); while (true) { free_clustering_data(*packer_opts, clustering_data); From ca55771a81b54616af3949e8d2c2c30757d34a5e Mon Sep 17 00:00:00 2001 From: soheilshahrouz Date: Fri, 16 Feb 2024 14:24:20 -0500 Subject: [PATCH 010/257] changed unconstrained region coordinates --- vpr/src/pack/pack.cpp | 14 ++++++++------ 1 file changed, 8 insertions(+), 6 deletions(-) diff --git a/vpr/src/pack/pack.cpp b/vpr/src/pack/pack.cpp index b491fe0d362..67ab28d8bbc 100644 --- a/vpr/src/pack/pack.cpp +++ b/vpr/src/pack/pack.cpp @@ -66,6 +66,7 @@ static void update_noc_reachability_partitions(const std::vector& n const auto& atom_ctx = g_vpr_ctx.atom(); auto& constraints = g_vpr_ctx.mutable_floorplanning().constraints; const auto& high_fanout_thresholds = g_vpr_ctx.cl_helper().high_fanout_thresholds; + const auto& device_ctx = g_vpr_ctx.device(); const size_t high_fanout_threshold = high_fanout_thresholds.get_threshold(""); @@ -76,10 +77,10 @@ static void update_noc_reachability_partitions(const std::vector& n int exclusivity_cnt = 0; - RegionRectCoord unconstrained_rect{std::numeric_limits::min(), - std::numeric_limits::min(), - std::numeric_limits::max(), - std::numeric_limits::max(), + RegionRectCoord unconstrained_rect{0, + 0, + (int)device_ctx.grid.width() - 1, + (int)device_ctx.grid.height() - 1, -1}; Region unconstrained_region; 
unconstrained_region.set_region_rect(unconstrained_rect); @@ -121,13 +122,14 @@ static void update_noc_reachability_partitions(const std::vector& n AtomBlockId current_atom = q.front(); q.pop(); - PartitionId atom_partition_id = constraints.get_atom_partition(noc_atom_id); + PartitionId atom_partition_id = constraints.get_atom_partition(current_atom); if (atom_partition_id == PartitionId::INVALID()) { constraints.add_constrained_atom(current_atom, associated_noc_partition_id); } else { auto& atom_partition = constraints.get_mutable_partition(atom_partition_id); auto& atom_partition_region = atom_partition.get_mutable_part_region(); - VTR_ASSERT(atom_partition_region.get_exclusivity_index() < 0); +// std::cout << "ss" << atom_partition_region.get_exclusivity_index() << std::endl; + VTR_ASSERT(atom_partition_region.get_exclusivity_index() < 0 || current_atom == noc_atom_id); atom_partition_region.set_exclusivity_index(exclusivity_cnt); } From d5a98a4d25cca831df7805b0700cfa72efb718d0 Mon Sep 17 00:00:00 2001 From: soheilshahrouz Date: Fri, 16 Feb 2024 15:56:08 -0500 Subject: [PATCH 011/257] bugfix: check if there are any unfixed NoC routers If all routers are constrained, place_noc_routers_randomly crashes. --- vpr/src/place/initial_noc_placement.cpp | 5 +++++ vpr/src/place/initial_placement.cpp | 14 +++++++++++--- 2 files changed, 16 insertions(+), 3 deletions(-) diff --git a/vpr/src/place/initial_noc_placement.cpp b/vpr/src/place/initial_noc_placement.cpp index 9294f3b291b..a8c619f8dbd 100644 --- a/vpr/src/place/initial_noc_placement.cpp +++ b/vpr/src/place/initial_noc_placement.cpp @@ -103,6 +103,11 @@ static void place_noc_routers_randomly(std::vector& unfixed_rout * only once. 
*/ + // check if all NoC routers have already been placed + if (unfixed_routers.empty()) { + return; + } + // Make a copy of NoC physical routers because we want to change its order vtr::vector noc_phy_routers = noc_ctx.noc_model.get_noc_routers(); diff --git a/vpr/src/place/initial_placement.cpp b/vpr/src/place/initial_placement.cpp index b9b97b1998f..30e3cc190ae 100644 --- a/vpr/src/place/initial_placement.cpp +++ b/vpr/src/place/initial_placement.cpp @@ -67,7 +67,11 @@ static void clear_all_grid_locs(); * * @return true if macro was placed, false if not. */ -static bool place_macro(int macros_max_num_tries, const t_pl_macro& pl_macro, enum e_pad_loc_type pad_loc_type, std::vector* blk_types_empty_locs_in_grid, vtr::vector& block_scores); +static bool place_macro(int macros_max_num_tries, + const t_pl_macro& pl_macro, + enum e_pad_loc_type pad_loc_type, + std::vector* blk_types_empty_locs_in_grid, + vtr::vector& block_scores); /* * Assign scores to each block based on macro size and floorplanning constraints. 
@@ -107,7 +111,10 @@ static int get_blk_type_first_loc(t_pl_loc& loc, const t_pl_macro& pl_macro, std * @param blk_types_empty_locs_in_grid first location (lowest y) and number of remaining blocks in each column for the blk_id type * */ -static void update_blk_type_first_loc(int blk_type_column_index, t_logical_block_type_ptr block_type, const t_pl_macro& pl_macro, std::vector* blk_types_empty_locs_in_grid); +static void update_blk_type_first_loc(int blk_type_column_index, + t_logical_block_type_ptr block_type, + const t_pl_macro& pl_macro, + std::vector* blk_types_empty_locs_in_grid); /** * @brief Initializes empty locations of the grid with a specific block type into vector for dense initial placement @@ -212,7 +219,8 @@ static void check_initial_placement_legality() { for (auto blk_id : cluster_ctx.clb_nlist.blocks()) { if (place_ctx.block_locs[blk_id].loc.x == INVALID_X) { - VTR_LOG("Block %s (# %d) of type %s could not be placed during initial placement iteration %d\n", cluster_ctx.clb_nlist.block_name(blk_id).c_str(), blk_id, cluster_ctx.clb_nlist.block_type(blk_id)->name, MAX_INIT_PLACE_ATTEMPTS - 1); + VTR_LOG("Block %s (# %d) of type %s could not be placed during initial placement iteration %d\n", + cluster_ctx.clb_nlist.block_name(blk_id).c_str(), blk_id, cluster_ctx.clb_nlist.block_type(blk_id)->name, MAX_INIT_PLACE_ATTEMPTS - 1); unplaced_blocks++; } } From 5eb7a07b2a4f39cff390d49019ada68309d67636 Mon Sep 17 00:00:00 2001 From: soheilshahrouz Date: Fri, 16 Feb 2024 16:44:41 -0500 Subject: [PATCH 012/257] clean modified cluster_constraints elements --- vpr/src/pack/pack.cpp | 24 +++++++++++++++++++----- vpr/src/place/place_constraints.cpp | 5 ++--- 2 files changed, 21 insertions(+), 8 deletions(-) diff --git a/vpr/src/pack/pack.cpp b/vpr/src/pack/pack.cpp index 67ab28d8bbc..013edc51b77 100644 --- a/vpr/src/pack/pack.cpp +++ b/vpr/src/pack/pack.cpp @@ -66,7 +66,6 @@ static void update_noc_reachability_partitions(const std::vector& n const auto& atom_ctx = 
g_vpr_ctx.atom(); auto& constraints = g_vpr_ctx.mutable_floorplanning().constraints; const auto& high_fanout_thresholds = g_vpr_ctx.cl_helper().high_fanout_thresholds; - const auto& device_ctx = g_vpr_ctx.device(); const size_t high_fanout_threshold = high_fanout_thresholds.get_threshold(""); @@ -79,9 +78,9 @@ static void update_noc_reachability_partitions(const std::vector& n RegionRectCoord unconstrained_rect{0, 0, - (int)device_ctx.grid.width() - 1, - (int)device_ctx.grid.height() - 1, - -1}; + std::numeric_limits::max(), + std::numeric_limits::max(), + 0}; Region unconstrained_region; unconstrained_region.set_region_rect(unconstrained_rect); @@ -128,7 +127,6 @@ static void update_noc_reachability_partitions(const std::vector& n } else { auto& atom_partition = constraints.get_mutable_partition(atom_partition_id); auto& atom_partition_region = atom_partition.get_mutable_part_region(); -// std::cout << "ss" << atom_partition_region.get_exclusivity_index() << std::endl; VTR_ASSERT(atom_partition_region.get_exclusivity_index() < 0 || current_atom == noc_atom_id); atom_partition_region.set_exclusivity_index(exclusivity_cnt); } @@ -253,6 +251,7 @@ bool try_pack(t_packer_opts* packer_opts, int pack_iteration = 1; bool floorplan_regions_overfull = false; + auto constraints_backup = g_vpr_ctx.floorplanning().constraints; // find all NoC router atoms auto noc_atoms = find_noc_router_atoms(); update_noc_reachability_partitions(noc_atoms); @@ -398,6 +397,21 @@ bool try_pack(t_packer_opts* packer_opts, //check clustering and output it check_and_output_clustering(*packer_opts, is_clock, arch, helper_ctx.total_clb_num, clustering_data.intra_lb_routing); + + g_vpr_ctx.mutable_floorplanning().constraints = constraints_backup; + const int max_y = (int)g_vpr_ctx.device().grid.height(); + const int max_x = (int)g_vpr_ctx.device().grid.width(); + for (auto& cluster_partition_region : g_vpr_ctx.mutable_floorplanning().cluster_constraints) { + const auto& regions = 
cluster_partition_region.get_regions(); + if (regions.size() == 1) { + const auto rect = regions[0].get_region_rect(); + + if (rect.xmin <= 0 && rect.ymin <= 0 && rect.xmax >= max_x && rect.ymax >= max_y) { + cluster_partition_region = PartitionRegion(); + } + } + } + // Free Data Structures free_clustering_data(*packer_opts, clustering_data); diff --git a/vpr/src/place/place_constraints.cpp b/vpr/src/place/place_constraints.cpp index e1b153d9d71..6a425401718 100644 --- a/vpr/src/place/place_constraints.cpp +++ b/vpr/src/place/place_constraints.cpp @@ -33,8 +33,7 @@ int check_placement_floorplanning() { /*returns true if cluster has floorplanning constraints, false if it doesn't*/ bool is_cluster_constrained(ClusterBlockId blk_id) { auto& floorplanning_ctx = g_vpr_ctx.floorplanning(); - PartitionRegion pr; - pr = floorplanning_ctx.cluster_constraints[blk_id]; + const PartitionRegion& pr = floorplanning_ctx.cluster_constraints[blk_id]; return (!pr.empty()); } @@ -250,7 +249,7 @@ void load_cluster_constraints() { PartitionRegion empty_pr; floorplanning_ctx.cluster_constraints[cluster_id] = empty_pr; - //if there are any constrainted atoms in the cluster, + //if there are any constrained atoms in the cluster, //we update the cluster's PartitionRegion for (auto atom : *atoms) { PartitionId partid = floorplanning_ctx.constraints.get_atom_partition(atom); From 1667eebde9106b637b324ba99807a7d89bcf3682 Mon Sep 17 00:00:00 2001 From: amin1377 Date: Mon, 19 Feb 2024 15:00:54 -0500 Subject: [PATCH 013/257] architecture: fix 3d delay of siv full opin --- .../3d_full_OPIN_inter_die_stratixiv_arch.timing.xml | 6 ++++-- 1 file changed, 4 insertions(+), 2 deletions(-) diff --git a/vtr_flow/arch/multi_die/stratixiv_3d/3d_full_OPIN_inter_die_stratixiv_arch.timing.xml b/vtr_flow/arch/multi_die/stratixiv_3d/3d_full_OPIN_inter_die_stratixiv_arch.timing.xml index fdf81e678b4..7043961214a 100644 --- a/vtr_flow/arch/multi_die/stratixiv_3d/3d_full_OPIN_inter_die_stratixiv_arch.timing.xml 
+++ b/vtr_flow/arch/multi_die/stratixiv_3d/3d_full_OPIN_inter_die_stratixiv_arch.timing.xml @@ -5115,9 +5115,11 @@ while keeping the switch delay a reasonable (see comment in ) portion of the overall delay. --> + + @@ -5215,14 +5217,14 @@ --> - + 1 1 1 1 1 1 1 1 1 - + - + - + From 9262224a5615050301bf8de13ef157ae33a1474c Mon Sep 17 00:00:00 2001 From: amin1377 Date: Mon, 19 Feb 2024 15:07:09 -0500 Subject: [PATCH 015/257] architecture: update the readme for 3d arch --- vtr_flow/arch/multi_die/README.md | 1 + 1 file changed, 1 insertion(+) diff --git a/vtr_flow/arch/multi_die/README.md b/vtr_flow/arch/multi_die/README.md index 23aa7bcff79..d8e511368eb 100644 --- a/vtr_flow/arch/multi_die/README.md +++ b/vtr_flow/arch/multi_die/README.md @@ -28,6 +28,7 @@ This directory contains architecture files for 3D FPGAs. The architectures are d - The architecture has two dice. - Both dice are SIV-like FPGA fabric. - All pins can cross die. + - This is a completely hypothetical architecture, as the area required to place drivers on every channel segment to drive an IPIN on the other die would be too large. For the purpose of this scenario, we assume an inter-die connection delay of 0. - `3d_full_OPIN_inter_die_stratixiv_arch.timing.xml` - The architecture has two dice. - Both dice are SIV-like FPGA fabric. 
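The NoC reachability patches earlier in this series (PATCH 009–012) grow a "reachability partition" by breadth-first search from each NoC router atom, skipping nets whose fanout meets a threshold so that clocks/resets do not merge every atom into one partition. Below is a minimal, self-contained sketch of that traversal over a toy netlist; the adjacency encoding and the name `reachable_atoms` are illustrative, not VTR's actual API.

```cpp
#include <cassert>
#include <cstddef>
#include <queue>
#include <vector>

// Toy netlist encoding: nets[i] = {driver, sink, sink, ...} (atom indices).
// Returns a visited flag per atom: true if the atom is reachable from
// `start` through nets whose sink count is below `high_fanout_threshold`
// (high-fanout nets are skipped, as in the patch above).
std::vector<bool> reachable_atoms(const std::vector<std::vector<int>>& nets,
                                  int num_atoms,
                                  int start,
                                  std::size_t high_fanout_threshold) {
    // Build an atom -> incident nets lookup.
    std::vector<std::vector<int>> atom_nets(num_atoms);
    for (int n = 0; n < (int)nets.size(); ++n)
        for (int a : nets[n])
            atom_nets[a].push_back(n);

    std::vector<bool> visited(num_atoms, false);
    std::queue<int> q;
    q.push(start);
    visited[start] = true;

    while (!q.empty()) {
        int cur = q.front();
        q.pop();
        for (int n : atom_nets[cur]) {
            std::size_t fanout = nets[n].size() - 1; // sinks only
            if (fanout >= high_fanout_threshold)
                continue; // skip high-fanout nets (e.g. clocks)
            for (int a : nets[n]) // driver and all sinks
                if (!visited[a]) {
                    visited[a] = true;
                    q.push(a);
                }
        }
    }
    return visited;
}
```

In the patches, atoms reached this way are either added to the router's exclusivity partition or, if already constrained, have their partition's exclusivity index stamped; the sketch only shows the traversal itself.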
From ad0c69a80136ecbc55f4be145725eeedbe6ec619 Mon Sep 17 00:00:00 2001 From: amin1377 Date: Mon, 19 Feb 2024 19:23:42 -0500 Subject: [PATCH 016/257] vpr: place: add layer_coord to placer context --- vpr/src/place/placer_context.h | 1 + 1 file changed, 1 insertion(+) diff --git a/vpr/src/place/placer_context.h b/vpr/src/place/placer_context.h index f5e56bbf37f..6cdd7684c6e 100644 --- a/vpr/src/place/placer_context.h +++ b/vpr/src/place/placer_context.h @@ -113,6 +113,7 @@ struct PlacerMoveContext : public Context { // These vectors will grow up with the net size as it is mostly used to save coords of the net pins or net bb edges std::vector X_coord; std::vector Y_coord; + std::vector layer_coord; // Container to save the highly critical pins (higher than a timing criticality limit setted by commandline option) std::vector> highly_crit_pins; From 125f0b1787684db19ec38277f3fd9dd2c90aaf70 Mon Sep 17 00:00:00 2001 From: amin1377 Date: Mon, 19 Feb 2024 19:24:12 -0500 Subject: [PATCH 017/257] vpr: place: initialize layer_coord --- vpr/src/place/place.cpp | 1 + 1 file changed, 1 insertion(+) diff --git a/vpr/src/place/place.cpp b/vpr/src/place/place.cpp index 2e30d2f3c43..a58bb29d1ee 100644 --- a/vpr/src/place/place.cpp +++ b/vpr/src/place/place.cpp @@ -933,6 +933,7 @@ void try_place(const Netlist<>& net_list, //allocate helper vectors that are used by many move generators place_move_ctx.X_coord.resize(10, 0); place_move_ctx.Y_coord.resize(10, 0); + place_move_ctx.layer_coord.resize(10, 0); //allocate move type statistics vectors MoveTypeStat move_type_stat; From a40af3ffbc504c3e1eb0276c6ccf891b4edd2cc9 Mon Sep 17 00:00:00 2001 From: amin1377 Date: Mon, 19 Feb 2024 19:29:31 -0500 Subject: [PATCH 018/257] vpr: place: update median move generator to get median of layers --- vpr/src/place/median_move_generator.cpp | 52 ++++++++++++++----------- 1 file changed, 29 insertions(+), 23 deletions(-) diff --git a/vpr/src/place/median_move_generator.cpp 
b/vpr/src/place/median_move_generator.cpp index 324d0cd3e44..a107c85cd77 100644 --- a/vpr/src/place/median_move_generator.cpp +++ b/vpr/src/place/median_move_generator.cpp @@ -49,6 +49,7 @@ e_create_move MedianMoveGenerator::propose_move(t_pl_blocks_to_be_moved& blocks_ //reused to save allocation time place_move_ctx.X_coord.clear(); place_move_ctx.Y_coord.clear(); + place_move_ctx.layer_coord.clear(); std::vector layer_blk_cnt(num_layers, 0); //true if the net is a feedback from the block to itself @@ -112,27 +113,19 @@ e_create_move MedianMoveGenerator::propose_move(t_pl_blocks_to_be_moved& blocks_ place_move_ctx.X_coord.push_back(coords.xmax); place_move_ctx.Y_coord.push_back(coords.ymin); place_move_ctx.Y_coord.push_back(coords.ymax); - if (is_multi_layer) { - for (int layer_num = 0; layer_num < num_layers; layer_num++) { - layer_blk_cnt[layer_num] += place_move_ctx.num_sink_pin_layer[size_t(net_id)][layer_num]; - } - // If the pin under consideration is of type sink, it shouldn't be added to layer_blk_cnt since the block - // is moving - if (cluster_ctx.clb_nlist.pin_type(pin_id) == PinType::SINK) { - VTR_ASSERT_SAFE(layer_blk_cnt[from_layer] > 0); - layer_blk_cnt[from_layer]--; - } - } + place_move_ctx.layer_coord.push_back(coords.layer_min); + place_move_ctx.layer_coord.push_back(coords.layer_max); } - if ((place_move_ctx.X_coord.empty()) || (place_move_ctx.Y_coord.empty())) { - VTR_LOGV_DEBUG(g_vpr_ctx.placement().f_placer_debug, "\tMove aborted - X_coord and y_coord are empty\n"); + if ((place_move_ctx.X_coord.empty()) || (place_move_ctx.Y_coord.empty()) || (place_move_ctx.layer_coord.empty())) { + VTR_LOGV_DEBUG(g_vpr_ctx.placement().f_placer_debug, "\tMove aborted - X_coord or y_coord or layer_coord are empty\n"); return e_create_move::ABORT; } //calculate the median region std::sort(place_move_ctx.X_coord.begin(), place_move_ctx.X_coord.end()); std::sort(place_move_ctx.Y_coord.begin(), place_move_ctx.Y_coord.end()); + 
std::sort(place_move_ctx.layer_coord.begin(), place_move_ctx.layer_coord.end()); limit_coords.xmin = place_move_ctx.X_coord[floor((place_move_ctx.X_coord.size() - 1) / 2)]; limit_coords.xmax = place_move_ctx.X_coord[floor((place_move_ctx.X_coord.size() - 1) / 2) + 1]; @@ -140,6 +133,9 @@ e_create_move MedianMoveGenerator::propose_move(t_pl_blocks_to_be_moved& blocks_ limit_coords.ymin = place_move_ctx.Y_coord[floor((place_move_ctx.Y_coord.size() - 1) / 2)]; limit_coords.ymax = place_move_ctx.Y_coord[floor((place_move_ctx.Y_coord.size() - 1) / 2) + 1]; + limit_coords.layer_min = place_move_ctx.layer_coord[floor((place_move_ctx.layer_coord.size() - 1) / 2)]; + limit_coords.layer_max = place_move_ctx.layer_coord[floor((place_move_ctx.layer_coord.size() - 1) / 2) + 1]; + //arrange the different range limiters t_range_limiters range_limiters{rlim, place_move_ctx.first_rlim, @@ -149,17 +145,8 @@ e_create_move MedianMoveGenerator::propose_move(t_pl_blocks_to_be_moved& blocks_ t_pl_loc median_point; median_point.x = (limit_coords.xmin + limit_coords.xmax) / 2; median_point.y = (limit_coords.ymin + limit_coords.ymax) / 2; + median_point.layer = (limit_coords.layer_min + limit_coords.layer_max) / 2; - // Before calling find_to_loc_centroid a valid layer should be assigned to "to" location. If there are multiple layers, the layer - // with highest number of sinks will be used. Otherwise, the same layer as "from" loc is assigned. 
- if (is_multi_layer) { - int layer_num = std::distance(layer_blk_cnt.begin(), std::max_element(layer_blk_cnt.begin(), layer_blk_cnt.end())); - median_point.layer = layer_num; - to.layer = layer_num; - } else { - median_point.layer = from.layer; - to.layer = from.layer; - } if (!find_to_loc_centroid(cluster_from_type, from, median_point, range_limiters, to, b_from)) { return e_create_move::ABORT; } @@ -194,6 +181,9 @@ static void get_bb_from_scratch_excluding_block(ClusterNetId net_id, t_bb& bb_co int ymin = OPEN; int ymax = OPEN; + int layer_min = OPEN; + int layer_max = OPEN; + int pnum; auto& cluster_ctx = g_vpr_ctx.clustering(); @@ -208,11 +198,14 @@ static void get_bb_from_scratch_excluding_block(ClusterNetId net_id, t_bb& bb_co pnum = net_pin_to_tile_pin_index(net_id, 0); int src_x = place_ctx.block_locs[bnum].loc.x + physical_tile_type(bnum)->pin_width_offset[pnum]; int src_y = place_ctx.block_locs[bnum].loc.y + physical_tile_type(bnum)->pin_height_offset[pnum]; + int src_layer = place_ctx.block_locs[bnum].loc.layer; xmin = src_x; ymin = src_y; xmax = src_x; ymax = src_y; + layer_min = src_layer; + layer_max = src_layer; first_block = true; } @@ -225,12 +218,15 @@ static void get_bb_from_scratch_excluding_block(ClusterNetId net_id, t_bb& bb_co const auto& block_loc = place_ctx.block_locs[bnum].loc; int x = block_loc.x + physical_tile_type(bnum)->pin_width_offset[pnum]; int y = block_loc.y + physical_tile_type(bnum)->pin_height_offset[pnum]; + int layer = block_loc.layer; if (!first_block) { xmin = x; ymin = y; xmax = x; ymax = y; + layer_max = layer; + layer_min = layer; first_block = true; continue; } @@ -245,6 +241,12 @@ static void get_bb_from_scratch_excluding_block(ClusterNetId net_id, t_bb& bb_co } else if (y > ymax) { ymax = y; } + + if (layer < layer_min) { + layer_min = layer; + } else if (layer > layer_max) { + layer_max = layer; + } } /* Now I've found the coordinates of the bounding box. 
There are no * @@ -258,6 +260,10 @@ bb_coord_new.ymin = std::max(std::min(ymin, device_ctx.grid.height() - 2), 1); //-2 for no perim channels bb_coord_new.xmax = std::max(std::min(xmax, device_ctx.grid.width() - 2), 1); //-2 for no perim channels bb_coord_new.ymax = std::max(std::min(ymax, device_ctx.grid.height() - 2), 1); //-2 for no perim channels + VTR_ASSERT(layer_min >= 0); + bb_coord_new.layer_min = layer_min; + VTR_ASSERT(layer_max < device_ctx.grid.get_num_layers()); + bb_coord_new.layer_max = layer_max; } /* From bdc1d81707201b96e657bc7db1e774a1fc88b452 Mon Sep 17 00:00:00 2001 From: amin1377 Date: Tue, 20 Feb 2024 09:34:22 -0500 Subject: [PATCH 019/257] vpr: place: update incremental update of median move to include the layer --- vpr/src/place/median_move_generator.cpp | 84 +++++++++++++++++++++++-- 1 file changed, 80 insertions(+), 4 deletions(-) diff --git a/vpr/src/place/median_move_generator.cpp b/vpr/src/place/median_move_generator.cpp index a107c85cd77..5f7f8a5976e 100644 --- a/vpr/src/place/median_move_generator.cpp +++ b/vpr/src/place/median_move_generator.cpp @@ -5,7 +5,14 @@ #include "placer_globals.h" #include "move_utils.h" -static bool get_bb_incrementally(ClusterNetId net_id, t_bb& bb_coord_new, int xold, int yold, int xnew, int ynew); +static bool get_bb_incrementally(ClusterNetId net_id, + t_bb& bb_coord_new, + int xold, + int yold, + int layer_old, + int xnew, + int ynew, + int layer_old); static void get_bb_from_scratch_excluding_block(ClusterNetId net_id, t_bb& bb_coord_new, ClusterBlockId block_id, bool& skip_net); @@ -43,7 +50,7 @@ e_create_move MedianMoveGenerator::propose_move(t_pl_blocks_to_be_moved& blocks_ t_bb coords(OPEN, OPEN, OPEN, OPEN, OPEN, OPEN); t_bb limit_coords; ClusterBlockId bnum; - int pnum, xnew, xold, ynew, yold; + int pnum, xnew, xold, ynew, yold, layer_new, layer_old; //clear the vectors that saves X & Y coords //reused to save 
allocation time @@ -85,8 +92,11 @@ e_create_move MedianMoveGenerator::propose_move(t_pl_blocks_to_be_moved& blocks_ VTR_ASSERT(pnum >= 0); xold = place_ctx.block_locs[bnum].loc.x + physical_tile_type(bnum)->pin_width_offset[pnum]; yold = place_ctx.block_locs[bnum].loc.y + physical_tile_type(bnum)->pin_height_offset[pnum]; + layer_old = place_ctx.block_locs[bnum].loc.layer; xold = std::max(std::min(xold, (int)device_ctx.grid.width() - 2), 1); //-2 for no perim channels yold = std::max(std::min(yold, (int)device_ctx.grid.height() - 2), 1); //-2 for no perim channels + VTR_ASSERT(layer_old >= 0); + VTR_ASSERT(layer_old < device_ctx.grid.get_num_layers()); //To calulate the bb incrementally while excluding the moving block //assume that the moving block is moved to a non-critical coord of the bb @@ -102,7 +112,20 @@ e_create_move MedianMoveGenerator::propose_move(t_pl_blocks_to_be_moved& blocks_ ynew = net_bb_coords.ymin; } - if (!get_bb_incrementally(net_id, coords, xold, yold, xnew, ynew)) { + if (net_bb_coords.layer_min == layer_old) { + layer_new = net_bb_coords.layer_max; + } else { + layer_new = net_bb_coords.layer_min; + } + + if (!get_bb_incrementally(net_id, + coords, + xold, + yold, + layer_old + xnew, + ynew, + layer_new)) { get_bb_from_scratch_excluding_block(net_id, coords, b_from, skip_net); if (skip_net) continue; @@ -279,7 +302,14 @@ static void get_bb_from_scratch_excluding_block(ClusterNetId net_id, t_bb& bb_co * the pins always lie on the outside of the bounding box. * * The x and y coordinates are the pin's x and y coordinates. */ /* IO blocks are considered to be one cell in for simplicity. 
*/ -static bool get_bb_incrementally(ClusterNetId net_id, t_bb& bb_coord_new, int xold, int yold, int xnew, int ynew) { +static bool get_bb_incrementally(ClusterNetId net_id, + t_bb& bb_coord_new, + int xold, + int yold, + int layer_old, + int xnew, + int ynew, + int layer_new) { //TODO: account for multiple physical pin instances per logical pin auto& device_ctx = g_vpr_ctx.device(); @@ -287,8 +317,12 @@ static bool get_bb_incrementally(ClusterNetId net_id, t_bb& bb_coord_new, int xo xnew = std::max(std::min(xnew, device_ctx.grid.width() - 2), 1); //-2 for no perim channels ynew = std::max(std::min(ynew, device_ctx.grid.height() - 2), 1); //-2 for no perim channels + VTR_ASSERT(layer_new > 0); + VTR_ASSERT(layer_new < device_ctx.grid.get_num_layers()); xold = std::max(std::min(xold, device_ctx.grid.width() - 2), 1); //-2 for no perim channels yold = std::max(std::min(yold, device_ctx.grid.height() - 2), 1); //-2 for no perim channels + VTR_ASSERT(layer_old > 0); + VTR_ASSERT(layer_old < device_ctx.grid.get_num_layers()); t_bb union_bb_edge; t_bb union_bb; @@ -416,5 +450,47 @@ static bool get_bb_incrementally(ClusterNetId net_id, t_bb& bb_coord_new, int xo bb_coord_new.ymin = curr_bb_coord.ymin; bb_coord_new.ymax = curr_bb_coord.ymax; } + + if (layer_new < layer_old) { + if (layer_old == curr_bb_coord.layer_max) { + if (curr_bb_edge.layer_max == 1) { + return false; + } else { + bb_coord_new.layer_max = curr_bb_coord.layer_max; + } + } else { + bb_coord_new.layer_max = curr_bb_coord.layer_max; + } + + if (layer_new < curr_bb_coord.layer_min) { + bb_coord_new.layer_min = layer_new; + } else if (layer_new == curr_bb_coord.layer_min) { + bb_coord_new.layer_min = layer_new; + } else { + bb_coord_new.layer_min = curr_bb_coord.layer_min; + } + + } else if (layer_new > layer_old) { + if (layer_old == curr_bb_coord.layer_min) { + if (curr_bb_edge.layer_min == 1) { + return false; + } else { + bb_coord_new.layer_min = curr_bb_coord.layer_min; + } + } else { + 
bb_coord_new.layer_min = curr_bb_coord.layer_min; + } + + if (layer_new > curr_bb_coord.layer_max) { + bb_coord_new.layer_max = layer_new; + } else if (layer_new == curr_bb_coord.layer_max) { + bb_coord_new.layer_max = layer_new; + } else { + bb_coord_new.layer_max = curr_bb_coord.layer_max; + } + } else { + bb_coord_new.layer_min = curr_bb_coord.layer_min; + bb_coord_new.layer_max = curr_bb_coord.layer_max; + } return true; } From bebec3c4d991fee8ce95c0b110d3cc830726739b Mon Sep 17 00:00:00 2001 From: amin1377 Date: Tue, 20 Feb 2024 10:03:29 -0500 Subject: [PATCH 020/257] vpr: place: update weighted median move generator bb calculator to consider layers --- vpr/src/place/move_utils.h | 2 ++ .../place/weighted_median_move_generator.cpp | 30 +++++++++++++++++-- 2 files changed, 30 insertions(+), 2 deletions(-) diff --git a/vpr/src/place/move_utils.h b/vpr/src/place/move_utils.h index 3ff8e729833..38488da37a2 100644 --- a/vpr/src/place/move_utils.h +++ b/vpr/src/place/move_utils.h @@ -69,6 +69,8 @@ struct t_bb_cost { t_edge_cost xmax = {0, 0.0}; t_edge_cost ymin = {0, 0.0}; t_edge_cost ymax = {0, 0.0}; + t_edge_cost layer_min = {0, 0.}; + t_edge_cost layer_max = {0, 0.}; }; /** diff --git a/vpr/src/place/weighted_median_move_generator.cpp b/vpr/src/place/weighted_median_move_generator.cpp index 2d343cd3347..4549bfa9dca 100644 --- a/vpr/src/place/weighted_median_move_generator.cpp +++ b/vpr/src/place/weighted_median_move_generator.cpp @@ -162,8 +162,8 @@ e_create_move WeightedMedianMoveGenerator::propose_move(t_pl_blocks_to_be_moved& * - criticalities: the timing criticalities of all connections */ static void get_bb_cost_for_net_excluding_block(ClusterNetId net_id, ClusterBlockId, ClusterPinId moving_pin_id, const PlacerCriticalities* criticalities, t_bb_cost* coords, bool& skip_net) { - int pnum, x, y, xmin, xmax, ymin, ymax; - float xmin_cost, xmax_cost, ymin_cost, ymax_cost, cost; + int pnum, x, y, layer, xmin, xmax, ymin, ymax, layer_min, layer_max; + float 
xmin_cost, xmax_cost, ymin_cost, ymax_cost, layer_min_cost, layer_max_cost, cost; skip_net = true; @@ -171,11 +171,16 @@ static void get_bb_cost_for_net_excluding_block(ClusterNetId net_id, ClusterBloc xmax = 0; ymin = 0; ymax = 0; + layer_min = 0; + layer_max = 0; + cost = 0.0; xmin_cost = 0.0; xmax_cost = 0.0; ymin_cost = 0.0; ymax_cost = 0.0; + layer_min_cost = 0.; + layer_max_cost = 0.; auto& cluster_ctx = g_vpr_ctx.clustering(); auto& place_ctx = g_vpr_ctx.placement(); @@ -187,6 +192,7 @@ static void get_bb_cost_for_net_excluding_block(ClusterNetId net_id, ClusterBloc int ipin; for (auto pin_id : cluster_ctx.clb_nlist.net_pins(net_id)) { bnum = cluster_ctx.clb_nlist.pin_block(pin_id); + layer = place_ctx.block_locs[bnum].loc.layer; if (pin_id != moving_pin_id) { skip_net = false; @@ -220,6 +226,10 @@ static void get_bb_cost_for_net_excluding_block(ClusterNetId net_id, ClusterBloc xmax_cost = cost; ymax = y; ymax_cost = cost; + layer_min = layer; + layer_min_cost = cost; + layer_max = layer; + layer_max_cost = cost; is_first_block = false; } else { if (x < xmin) { @@ -237,6 +247,20 @@ static void get_bb_cost_for_net_excluding_block(ClusterNetId net_id, ClusterBloc ymax = y; ymax_cost = cost; } + + if (layer < layer_min) { + layer_min = layer; + layer_min_cost = cost; + } else if (layer > layer_max) { + layer_max = layer; + layer_max_cost = cost; + } else if (layer == layer_min) { + if (cost > layer_min_cost) + layer_min_cost = cost; + } else if (layer == layer_max) { + if (cost > layer_max_cost) + layer_max_cost = cost; + } } } } @@ -246,4 +270,6 @@ static void get_bb_cost_for_net_excluding_block(ClusterNetId net_id, ClusterBloc coords->xmax = {xmax, xmax_cost}; coords->ymin = {ymin, ymin_cost}; coords->ymax = {ymax, ymax_cost}; + coords->layer_min = {layer_min, layer_min_cost}; + coords->layer_max = {layer_max, layer_max_cost}; } From 27a3705d840083f86cea2ad295633f27ada6cf13 Mon Sep 17 00:00:00 2001 From: amin1377 Date: Tue, 20 Feb 2024 10:14:20 -0500 Subject: 
[PATCH 021/257] vpr: place: update weighted median to get the layer of bbs --- .../place/weighted_median_move_generator.cpp | 38 ++++++++----------- 1 file changed, 15 insertions(+), 23 deletions(-) diff --git a/vpr/src/place/weighted_median_move_generator.cpp b/vpr/src/place/weighted_median_move_generator.cpp index 4549bfa9dca..e886238064c 100644 --- a/vpr/src/place/weighted_median_move_generator.cpp +++ b/vpr/src/place/weighted_median_move_generator.cpp @@ -45,6 +45,7 @@ e_create_move WeightedMedianMoveGenerator::propose_move(t_pl_blocks_to_be_moved& //reused to save allocation time place_move_ctx.X_coord.clear(); place_move_ctx.Y_coord.clear(); + place_move_ctx.layer_coord.clear(); std::vector layer_blk_cnt(num_layers, 0); //true if the net is a feedback from the block to itself (all the net terminals are connected to the same block) @@ -76,27 +77,19 @@ e_create_move WeightedMedianMoveGenerator::propose_move(t_pl_blocks_to_be_moved& place_move_ctx.X_coord.insert(place_move_ctx.X_coord.end(), ceil(coords.xmax.criticality * CRIT_MULT_FOR_W_MEDIAN), coords.xmax.edge); place_move_ctx.Y_coord.insert(place_move_ctx.Y_coord.end(), ceil(coords.ymin.criticality * CRIT_MULT_FOR_W_MEDIAN), coords.ymin.edge); place_move_ctx.Y_coord.insert(place_move_ctx.Y_coord.end(), ceil(coords.ymax.criticality * CRIT_MULT_FOR_W_MEDIAN), coords.ymax.edge); - // If multile layers are available, I need to keep track of how many sinks are in each layer. 
- if (is_multi_layer) { - for (int layer_num = 0; layer_num < num_layers; layer_num++) { - layer_blk_cnt[layer_num] += place_move_ctx.num_sink_pin_layer[size_t(net_id)][layer_num]; - } - // If the pin under consideration if of type sink, it is counted in place_move_ctx.num_sink_pin_layer, and we don't want to consider the moving pins - if (cluster_ctx.clb_nlist.pin_type(pin_id) != PinType::DRIVER) { - VTR_ASSERT(layer_blk_cnt[from.layer] > 0); - layer_blk_cnt[from.layer]--; - } - } + place_move_ctx.layer_coord.insert(place_move_ctx.layer_coord.end(), ceil(coords.layer_min.criticality * CRIT_MULT_FOR_W_MEDIAN), coords.layer_min.edge); + place_move_ctx.layer_coord.insert(place_move_ctx.layer_coord.end(), ceil(coords.layer_max.criticality * CRIT_MULT_FOR_W_MEDIAN), coords.layer_max.edge); } - if ((place_move_ctx.X_coord.empty()) || (place_move_ctx.Y_coord.empty())) { - VTR_LOGV_DEBUG(g_vpr_ctx.placement().f_placer_debug, "\tMove aborted - X_coord and y_coord are empty\n"); + if ((place_move_ctx.X_coord.empty()) || (place_move_ctx.Y_coord.empty()) || (place_move_ctx.layer_coord.empty())) { + VTR_LOGV_DEBUG(g_vpr_ctx.placement().f_placer_debug, "\tMove aborted - X_coord or y_coord or layer_coord are empty\n"); return e_create_move::ABORT; } //calculate the weighted median region std::sort(place_move_ctx.X_coord.begin(), place_move_ctx.X_coord.end()); std::sort(place_move_ctx.Y_coord.begin(), place_move_ctx.Y_coord.end()); + std::sort(place_move_ctx.layer_coord.begin(), place_move_ctx.layer_coord.end()); if (place_move_ctx.X_coord.size() == 1) { limit_coords.xmin = place_move_ctx.X_coord[0]; @@ -114,6 +107,14 @@ e_create_move WeightedMedianMoveGenerator::propose_move(t_pl_blocks_to_be_moved& limit_coords.ymax = place_move_ctx.Y_coord[floor((place_move_ctx.Y_coord.size() - 1) / 2) + 1]; } + if (place_move_ctx.layer_coord.size() == 1) { + limit_coords.layer_min = place_move_ctx.layer_coord[0]; + limit_coords.layer_max = limit_coords.layer_min; + } else { + 
limit_coords.layer_min = place_move_ctx.layer_coord[floor((place_move_ctx.layer_coord.size() - 1) / 2)]; + limit_coords.layer_max = place_move_ctx.layer_coord[floor((place_move_ctx.layer_coord.size() - 1) / 2) + 1]; + } + t_range_limiters range_limiters{rlim, place_move_ctx.first_rlim, placer_opts.place_dm_rlim}; @@ -121,17 +122,8 @@ e_create_move WeightedMedianMoveGenerator::propose_move(t_pl_blocks_to_be_moved& t_pl_loc w_median_point; w_median_point.x = (limit_coords.xmin + limit_coords.xmax) / 2; w_median_point.y = (limit_coords.ymin + limit_coords.ymax) / 2; + w_median_point.layer = ((limit_coords.layer_min + limit_coords.layer_max) / 2); - // If multiple layers are available, we would choose the median layer, otherwise the same layer (layer #0) as the from_loc would be chosen - //#TODO: Since we are now only considering 2 layers, the layer with maximum number of sinks should be chosen. we need to update it to get the true median - if (is_multi_layer) { - int layer_num = std::distance(layer_blk_cnt.begin(), std::max_element(layer_blk_cnt.begin(), layer_blk_cnt.end())); - w_median_point.layer = layer_num; - to.layer = layer_num; - } else { - w_median_point.layer = from.layer; - to.layer = from.layer; - } if (!find_to_loc_centroid(cluster_from_type, from, w_median_point, range_limiters, to, b_from)) { return e_create_move::ABORT; } From 0e73635de1d85719e7eeda814772cb59a3e0c873 Mon Sep 17 00:00:00 2001 From: amin1377 Date: Tue, 20 Feb 2024 10:31:38 -0500 Subject: [PATCH 022/257] vpr: place: remove unused variable --- vpr/src/place/median_move_generator.cpp | 6 +++--- vpr/src/place/weighted_median_move_generator.cpp | 2 +- 2 files changed, 4 insertions(+), 4 deletions(-) diff --git a/vpr/src/place/median_move_generator.cpp b/vpr/src/place/median_move_generator.cpp index 5f7f8a5976e..2cf77a5c9ed 100644 --- a/vpr/src/place/median_move_generator.cpp +++ b/vpr/src/place/median_move_generator.cpp @@ -12,7 +12,7 @@ static bool get_bb_incrementally(ClusterNetId net_id, 
int layer_old, int xnew, int ynew, - int layer_old); + int layer_new); static void get_bb_from_scratch_excluding_block(ClusterNetId net_id, t_bb& bb_coord_new, ClusterBlockId block_id, bool& skip_net); @@ -36,7 +36,7 @@ e_create_move MedianMoveGenerator::propose_move(t_pl_blocks_to_be_moved& blocks_ auto& place_move_ctx = g_placer_ctx.mutable_move(); const int num_layers = device_ctx.grid.get_num_layers(); - bool is_multi_layer = (num_layers > 1); + t_pl_loc from = place_ctx.block_locs[b_from].loc; int from_layer = from.layer; @@ -122,7 +122,7 @@ e_create_move MedianMoveGenerator::propose_move(t_pl_blocks_to_be_moved& blocks_ coords, xold, yold, - layer_old + layer_old, xnew, ynew, layer_new)) { diff --git a/vpr/src/place/weighted_median_move_generator.cpp b/vpr/src/place/weighted_median_move_generator.cpp index e886238064c..058671c5121 100644 --- a/vpr/src/place/weighted_median_move_generator.cpp +++ b/vpr/src/place/weighted_median_move_generator.cpp @@ -28,7 +28,7 @@ e_create_move WeightedMedianMoveGenerator::propose_move(t_pl_blocks_to_be_moved& auto& place_move_ctx = g_placer_ctx.mutable_move(); int num_layers = g_vpr_ctx.device().grid.get_num_layers(); - bool is_multi_layer = (num_layers > 1); + t_pl_loc from = place_ctx.block_locs[b_from].loc; auto cluster_from_type = cluster_ctx.clb_nlist.block_type(b_from); From cf4ad7883815bf3e5342837417518c97b5012084 Mon Sep 17 00:00:00 2001 From: amin1377 Date: Tue, 20 Feb 2024 10:48:27 -0500 Subject: [PATCH 023/257] vpr: place: assign valid layer to the centroid location passed to find_to_loc_centroid method --- vpr/src/place/centroid_move_generator.cpp | 4 ++-- vpr/src/place/move_utils.cpp | 2 +- vpr/src/place/weighted_centroid_move_generator.cpp | 2 +- 3 files changed, 4 insertions(+), 4 deletions(-) diff --git a/vpr/src/place/centroid_move_generator.cpp b/vpr/src/place/centroid_move_generator.cpp index f1316701998..9e9dbe70150 100644 --- a/vpr/src/place/centroid_move_generator.cpp +++ 
b/vpr/src/place/centroid_move_generator.cpp @@ -39,8 +39,8 @@ e_create_move CentroidMoveGenerator::propose_move(t_pl_blocks_to_be_moved& block calculate_centroid_loc(b_from, false, centroid, nullptr); // Centroid location is not necessarily a valid location, and the downstream location expect a valid - // layer for "to" location. So if the layer is not valid, we set it to the same layer as from loc. - to.layer = (centroid.layer < 0) ? from.layer : centroid.layer; + // layer for the centroid location. So if the layer is not valid, we set it to the same layer as from loc. + centroid.layer = (centroid.layer < 0) ? from.layer : centroid.layer; /* Find a location near the weighted centroid_loc */ if (!find_to_loc_centroid(cluster_from_type, from, centroid, range_limiters, to, b_from)) { return e_create_move::ABORT; diff --git a/vpr/src/place/move_utils.cpp b/vpr/src/place/move_utils.cpp index 2c62d6ec371..83c33961634 100644 --- a/vpr/src/place/move_utils.cpp +++ b/vpr/src/place/move_utils.cpp @@ -923,7 +923,7 @@ bool find_to_loc_centroid(t_logical_block_type_ptr blk_type, ClusterBlockId b_from) { //Retrieve the compressed block grid for this block type const auto& compressed_block_grid = g_vpr_ctx.placement().compressed_block_grids[blk_type->index]; - const int to_layer_num = to_loc.layer; + const int to_layer_num = centroid.layer; VTR_ASSERT(to_layer_num >= 0); const int num_layers = g_vpr_ctx.device().grid.get_num_layers(); diff --git a/vpr/src/place/weighted_centroid_move_generator.cpp b/vpr/src/place/weighted_centroid_move_generator.cpp index d33b6fa2ebe..93dd5c796f8 100644 --- a/vpr/src/place/weighted_centroid_move_generator.cpp +++ b/vpr/src/place/weighted_centroid_move_generator.cpp @@ -40,7 +40,7 @@ e_create_move WeightedCentroidMoveGenerator::propose_move(t_pl_blocks_to_be_move // Centroid location is not necessarily a valid location, and the downstream location expect a valid // layer for "to" location. 
So if the layer is not valid, we set it to the same layer as from loc. - to.layer = (centroid.layer < 0) ? from.layer : centroid.layer; + centroid.layer = (centroid.layer < 0) ? from.layer : centroid.layer; if (!find_to_loc_centroid(cluster_from_type, from, centroid, range_limiters, to, b_from)) { return e_create_move::ABORT; } From f7dec22e1085f13be58b606716057c40caceb122 Mon Sep 17 00:00:00 2001 From: amin1377 Date: Thu, 22 Feb 2024 15:59:56 -0500 Subject: [PATCH 024/257] vpr: placement: fix a typo in assertion --- vpr/src/place/median_move_generator.cpp | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/vpr/src/place/median_move_generator.cpp b/vpr/src/place/median_move_generator.cpp index 2cf77a5c9ed..ae8c1b335a7 100644 --- a/vpr/src/place/median_move_generator.cpp +++ b/vpr/src/place/median_move_generator.cpp @@ -321,7 +321,7 @@ static bool get_bb_incrementally(ClusterNetId net_id, VTR_ASSERT(layer_new < device_ctx.grid.get_num_layers()); xold = std::max(std::min(xold, device_ctx.grid.width() - 2), 1); //-2 for no perim channels yold = std::max(std::min(yold, device_ctx.grid.height() - 2), 1); //-2 for no perim channels - VTR_ASSERT(layer_old > 0); + VTR_ASSERT(layer_old >= 0); VTR_ASSERT(layer_old < device_ctx.grid.get_num_layers()); t_bb union_bb_edge; From 097e8810227bae64f5cdc19dd729324d4567ad5e Mon Sep 17 00:00:00 2001 From: amin1377 Date: Thu, 22 Feb 2024 16:03:26 -0500 Subject: [PATCH 025/257] vpr: placement: (2) fix a typo in assertion --- vpr/src/place/median_move_generator.cpp | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/vpr/src/place/median_move_generator.cpp b/vpr/src/place/median_move_generator.cpp index ae8c1b335a7..69358781c7d 100644 --- a/vpr/src/place/median_move_generator.cpp +++ b/vpr/src/place/median_move_generator.cpp @@ -317,7 +317,7 @@ static bool get_bb_incrementally(ClusterNetId net_id, xnew = std::max(std::min(xnew, device_ctx.grid.width() - 2), 1); //-2 for no perim channels ynew = 
std::max(std::min(ynew, device_ctx.grid.height() - 2), 1); //-2 for no perim channels - VTR_ASSERT(layer_new > 0); + VTR_ASSERT(layer_new >= 0); VTR_ASSERT(layer_new < device_ctx.grid.get_num_layers()); xold = std::max(std::min(xold, device_ctx.grid.width() - 2), 1); //-2 for no perim channels yold = std::max(std::min(yold, device_ctx.grid.height() - 2), 1); //-2 for no perim channels From a3c8aae7378723ec844e199014f24859feee07e9 Mon Sep 17 00:00:00 2001 From: amin1377 Date: Fri, 23 Feb 2024 08:45:15 -0500 Subject: [PATCH 026/257] vpr: place: assign a valid value to layer if it is not valid --- vpr/src/place/median_move_generator.cpp | 6 ++---- 1 file changed, 2 insertions(+), 4 deletions(-) diff --git a/vpr/src/place/median_move_generator.cpp b/vpr/src/place/median_move_generator.cpp index 69358781c7d..6356083d40f 100644 --- a/vpr/src/place/median_move_generator.cpp +++ b/vpr/src/place/median_move_generator.cpp @@ -281,12 +281,10 @@ static void get_bb_from_scratch_excluding_block(ClusterNetId net_id, t_bb& bb_co * is 0). See route_common.cpp for a channel diagram. 
*/ bb_coord_new.xmin = std::max(std::min(xmin, device_ctx.grid.width() - 2), 1); //-2 for no perim channels bb_coord_new.ymin = std::max(std::min(ymin, device_ctx.grid.height() - 2), 1); //-2 for no perim channels + bb_coord_new.layer_min = std::max(std::min(layer_min, device_ctx.grid.get_num_layers()), 0); bb_coord_new.xmax = std::max(std::min(xmax, device_ctx.grid.width() - 2), 1); //-2 for no perim channels bb_coord_new.ymax = std::max(std::min(ymax, device_ctx.grid.height() - 2), 1); //-2 for no perim channels - VTR_ASSERT(layer_min >= 0); - bb_coord_new.layer_min = layer_min; - VTR_ASSERT(layer_max < device_ctx.grid.get_num_layers()); - bb_coord_new.layer_max = layer_max; + bb_coord_new.layer_max = std::max(std::min(layer_max, device_ctx.grid.get_num_layers()), 0); } /* From d8b48827a79971411977de5a6ed3896370975ad5 Mon Sep 17 00:00:00 2001 From: amin1377 Date: Fri, 23 Feb 2024 09:29:30 -0500 Subject: [PATCH 027/257] vpr: place: update median incremental bb update to take a valid layer num --- vpr/src/place/median_move_generator.cpp | 11 +++++------ 1 file changed, 5 insertions(+), 6 deletions(-) diff --git a/vpr/src/place/median_move_generator.cpp b/vpr/src/place/median_move_generator.cpp index 6356083d40f..60475a0c8d3 100644 --- a/vpr/src/place/median_move_generator.cpp +++ b/vpr/src/place/median_move_generator.cpp @@ -281,10 +281,10 @@ static void get_bb_from_scratch_excluding_block(ClusterNetId net_id, t_bb& bb_co * is 0). See route_common.cpp for a channel diagram. 
*/ bb_coord_new.xmin = std::max(std::min(xmin, device_ctx.grid.width() - 2), 1); //-2 for no perim channels bb_coord_new.ymin = std::max(std::min(ymin, device_ctx.grid.height() - 2), 1); //-2 for no perim channels - bb_coord_new.layer_min = std::max(std::min(layer_min, device_ctx.grid.get_num_layers()), 0); + bb_coord_new.layer_min = std::max(std::min(layer_min, device_ctx.grid.get_num_layers() - 1), 0); bb_coord_new.xmax = std::max(std::min(xmax, device_ctx.grid.width() - 2), 1); //-2 for no perim channels bb_coord_new.ymax = std::max(std::min(ymax, device_ctx.grid.height() - 2), 1); //-2 for no perim channels - bb_coord_new.layer_max = std::max(std::min(layer_max, device_ctx.grid.get_num_layers()), 0); + bb_coord_new.layer_max = std::max(std::min(layer_max, device_ctx.grid.get_num_layers() - 1), 0); } /* @@ -315,12 +315,11 @@ static bool get_bb_incrementally(ClusterNetId net_id, xnew = std::max(std::min(xnew, device_ctx.grid.width() - 2), 1); //-2 for no perim channels ynew = std::max(std::min(ynew, device_ctx.grid.height() - 2), 1); //-2 for no perim channels - VTR_ASSERT(layer_new >= 0); - VTR_ASSERT(layer_new < device_ctx.grid.get_num_layers()); + layer_new = std::max(std::min(layer_new, device_ctx.grid.get_num_layers() -1 ), 0); + xold = std::max(std::min(xold, device_ctx.grid.width() - 2), 1); //-2 for no perim channels yold = std::max(std::min(yold, device_ctx.grid.height() - 2), 1); //-2 for no perim channels - VTR_ASSERT(layer_old >= 0); - VTR_ASSERT(layer_old < device_ctx.grid.get_num_layers()); + layer_old = std::max(std::min(layer_old, device_ctx.grid.get_num_layers() - 1), 0); t_bb union_bb_edge; t_bb union_bb; From 67a898903547f02ab04c0cdaa7a3123cf464148e Mon Sep 17 00:00:00 2001 From: amin1377 Date: Fri, 23 Feb 2024 09:55:42 -0500 Subject: [PATCH 028/257] make format --- vpr/src/place/median_move_generator.cpp | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/vpr/src/place/median_move_generator.cpp 
b/vpr/src/place/median_move_generator.cpp index 60475a0c8d3..13c418ddf6b 100644 --- a/vpr/src/place/median_move_generator.cpp +++ b/vpr/src/place/median_move_generator.cpp @@ -315,7 +315,7 @@ static bool get_bb_incrementally(ClusterNetId net_id, xnew = std::max(std::min(xnew, device_ctx.grid.width() - 2), 1); //-2 for no perim channels ynew = std::max(std::min(ynew, device_ctx.grid.height() - 2), 1); //-2 for no perim channels - layer_new = std::max(std::min(layer_new, device_ctx.grid.get_num_layers() -1 ), 0); + layer_new = std::max(std::min(layer_new, device_ctx.grid.get_num_layers() - 1), 0); xold = std::max(std::min(xold, device_ctx.grid.width() - 2), 1); //-2 for no perim channels yold = std::max(std::min(yold, device_ctx.grid.height() - 2), 1); //-2 for no perim channels From e2da96087bf5c6b385a0c9f2c5eb171cbb4181b9 Mon Sep 17 00:00:00 2001 From: amin1377 Date: Fri, 23 Feb 2024 14:43:21 -0500 Subject: [PATCH 029/257] vpr: place: update get_bb_from_scratch to keep track of bb layer --- vpr/src/place/place.cpp | 32 ++++++++++++++++++++++++++++++-- 1 file changed, 30 insertions(+), 2 deletions(-) diff --git a/vpr/src/place/place.cpp b/vpr/src/place/place.cpp index a58bb29d1ee..f860edab422 100644 --- a/vpr/src/place/place.cpp +++ b/vpr/src/place/place.cpp @@ -2817,8 +2817,8 @@ static void get_bb_from_scratch(ClusterNetId net_id, t_bb& coords, t_bb& num_on_edges, vtr::NdMatrixProxy num_sink_pin_layer) { - int pnum, x, y, pin_layer, xmin, xmax, ymin, ymax; - int xmin_edge, xmax_edge, ymin_edge, ymax_edge; + int pnum, x, y, pin_layer, xmin, xmax, ymin, ymax, layer_min, layer_max; + int xmin_edge, xmax_edge, ymin_edge, ymax_edge, layer_min_edge, layer_max_edge; auto& cluster_ctx = g_vpr_ctx.clustering(); auto& place_ctx = g_vpr_ctx.placement(); @@ -2832,18 +2832,25 @@ static void get_bb_from_scratch(ClusterNetId net_id, + physical_tile_type(bnum)->pin_width_offset[pnum]; y = place_ctx.block_locs[bnum].loc.y + physical_tile_type(bnum)->pin_height_offset[pnum]; + 
pin_layer = place_ctx.block_locs[bnum].loc.layer; x = max(min(x, grid.width() - 2), 1); y = max(min(y, grid.height() - 2), 1); + pin_layer = max(min(pin_layer, grid.get_num_layers() - 1), 0); xmin = x; ymin = y; + layer_min = pin_layer; xmax = x; ymax = y; + layer_max = pin_layer; + xmin_edge = 1; ymin_edge = 1; + layer_min_edge = 1; xmax_edge = 1; ymax_edge = 1; + layer_max_edge = 1; for (int layer_num = 0; layer_num < grid.get_num_layers(); layer_num++) { num_sink_pin_layer[layer_num] = 0; @@ -2867,6 +2874,7 @@ static void get_bb_from_scratch(ClusterNetId net_id, x = max(min(x, grid.width() - 2), 1); //-2 for no perim channels y = max(min(y, grid.height() - 2), 1); //-2 for no perim channels + pin_layer = max(min(pin_layer, grid.get_num_layers() - 1), 0); if (x == xmin) { xmin_edge++; @@ -2894,6 +2902,19 @@ static void get_bb_from_scratch(ClusterNetId net_id, ymax_edge = 1; } + if (pin_layer == layer_min) { + layer_min_edge++; + } + if (pin_layer == layer_max) { + layer_max_edge++; + } else if (pin_layer < layer_min) { + layer_min = pin_layer; + layer_min_edge = 1; + } else if (pin_layer > layer_max) { + layer_max = pin_layer; + layer_max_edge = 1; + } + num_sink_pin_layer[pin_layer]++; } @@ -2903,11 +2924,18 @@ static void get_bb_from_scratch(ClusterNetId net_id, coords.xmax = xmax; coords.ymin = ymin; coords.ymax = ymax; + coords.layer_min = layer_min; + coords.layer_max = layer_max; + VTR_ASSERT(layer_min >= 0 && layer_min < device_ctx.grid.get_num_layers()); + VTR_ASSERT(layer_max >= 0 && layer_max < device_ctx.grid.get_num_layers()); + num_on_edges.xmin = xmin_edge; num_on_edges.xmax = xmax_edge; num_on_edges.ymin = ymin_edge; num_on_edges.ymax = ymax_edge; + num_on_edges.layer_min = layer_min_edge; + num_on_edges.layer_max = layer_max_edge; } /* This routine finds the bounding box of each net from scratch when the bounding box is of type per-layer (i.e. 
* From 755e74d2df7a1592d8f9846d49972d44eef7c71a Mon Sep 17 00:00:00 2001 From: amin1377 Date: Fri, 23 Feb 2024 14:48:58 -0500 Subject: [PATCH 030/257] vpr: place: update get_non_updateable_bb to keep track of layer of bb --- vpr/src/place/place.cpp | 13 ++++++++++++- 1 file changed, 12 insertions(+), 1 deletion(-) diff --git a/vpr/src/place/place.cpp b/vpr/src/place/place.cpp index f860edab422..eabdcd6c13e 100644 --- a/vpr/src/place/place.cpp +++ b/vpr/src/place/place.cpp @@ -3190,7 +3190,7 @@ static void get_non_updateable_bb(ClusterNetId net_id, vtr::NdMatrixProxy num_sink_pin_layer) { //TODO: account for multiple physical pin instances per logical pin - int xmax, ymax, xmin, ymin, x, y, layer; + int xmax, ymax, layer_max, xmin, ymin, layer_min, x, y, layer; int pnum; auto& cluster_ctx = g_vpr_ctx.clustering(); @@ -3204,11 +3204,14 @@ static void get_non_updateable_bb(ClusterNetId net_id, + physical_tile_type(bnum)->pin_width_offset[pnum]; y = place_ctx.block_locs[bnum].loc.y + physical_tile_type(bnum)->pin_height_offset[pnum]; + layer = place_ctx.block_locs[bnum].loc.layer; xmin = x; ymin = y; + layer_min = layer; xmax = x; ymax = y; + layer_max = layer; for (int layer_num = 0; layer_num < device_ctx.grid.get_num_layers(); layer_num++) { num_sink_pin_layer[layer_num] = 0; @@ -3235,6 +3238,12 @@ static void get_non_updateable_bb(ClusterNetId net_id, ymax = y; } + if (layer < layer_min) { + layer_min = layer; + } else if (layer > layer_max) { + layer_max = layer; + } + num_sink_pin_layer[layer]++; } @@ -3248,8 +3257,10 @@ static void get_non_updateable_bb(ClusterNetId net_id, bb_coord_new.xmin = max(min(xmin, device_ctx.grid.width() - 2), 1); //-2 for no perim channels bb_coord_new.ymin = max(min(ymin, device_ctx.grid.height() - 2), 1); //-2 for no perim channels + bb_coord_new.layer_min = max(min(layer_min, device_ctx.grid.get_num_layers() - 1), 0); bb_coord_new.xmax = max(min(xmax, device_ctx.grid.width() - 2), 1); //-2 for no perim channels bb_coord_new.ymax = 
max(min(ymax, device_ctx.grid.height() - 2), 1); //-2 for no perim channels + bb_coord_new.layer_max = max(min(layer_max, device_ctx.grid.get_num_layers() - 1), 0); } static void get_non_updateable_layer_bb(ClusterNetId net_id, From 26d24d461177447295e0d7421b6f231e760e758e Mon Sep 17 00:00:00 2001 From: amin1377 Date: Fri, 23 Feb 2024 14:49:41 -0500 Subject: [PATCH 031/257] vpr: place: update update_bb to keep track of layer --- vpr/src/place/place.cpp | 71 +++++++++++++++++++++++++++++++++++++++++ 1 file changed, 71 insertions(+) diff --git a/vpr/src/place/place.cpp b/vpr/src/place/place.cpp index eabdcd6c13e..9107028db80 100644 --- a/vpr/src/place/place.cpp +++ b/vpr/src/place/place.cpp @@ -3360,8 +3360,10 @@ static void update_bb(ClusterNetId net_id, pin_new_loc.x = max(min(pin_new_loc.x, device_ctx.grid.width() - 2), 1); //-2 for no perim channels pin_new_loc.y = max(min(pin_new_loc.y, device_ctx.grid.height() - 2), 1); //-2 for no perim channels + pin_new_loc.layer_num = max(min(pin_new_loc.layer_num, device_ctx.grid.get_num_layers() - 1), 0); pin_old_loc.x = max(min(pin_old_loc.x, device_ctx.grid.width() - 2), 1); //-2 for no perim channels pin_old_loc.y = max(min(pin_old_loc.y, device_ctx.grid.height() - 2), 1); //-2 for no perim channels + pin_old_loc.layer_num = max(min(pin_old_loc.layer_num, device_ctx.grid.get_num_layers() - 1), 0); /* Check if the net had been updated before. 
*/ if (bb_updated_before[net_id] == GOT_FROM_SCRATCH) { @@ -3541,6 +3543,75 @@ static void update_bb(ClusterNetId net_id, num_sink_pin_layer_new[pin_new_loc.layer_num] = (curr_num_sink_pin_layer)[pin_new_loc.layer_num] + 1; } } + + if (pin_new_loc.layer_num < pin_old_loc.layer_num) { + if (pin_old_loc.layer_num == curr_bb_coord->layer_max) { + if (curr_bb_edge->layer_max == 1) { + get_bb_from_scratch(net_id, bb_coord_new, bb_edge_new, num_sink_pin_layer_new); + bb_updated_before[net_id] = GOT_FROM_SCRATCH; + return; + } else { + bb_edge_new.layer_max = curr_bb_edge->layer_max - 1; + bb_coord_new.layer_max = curr_bb_coord->layer_max; + } + } else { + bb_coord_new.layer_max = curr_bb_coord->layer_max; + bb_edge_new.layer_max = curr_bb_edge->layer_max; + } + + + if (pin_new_loc.layer_num < curr_bb_coord->layer_min) { + bb_coord_new.layer_min = pin_new_loc.layer_num; + bb_edge_new.layer_min = 1; + } else if (pin_new_loc.layer_num == curr_bb_coord->layer_min) { + bb_coord_new.layer_min = pin_new_loc.layer_num; + bb_edge_new.layer_min = curr_bb_edge->layer_min + 1; + } else { + bb_coord_new.layer_min = curr_bb_coord->layer_min; + bb_edge_new.layer_min = curr_bb_edge->layer_min; + } + + } else if (pin_new_loc.layer_num > pin_old_loc.layer_num) { + + + if (pin_old_loc.layer_num == curr_bb_coord->layer_min) { + if (curr_bb_edge->layer_min == 1) { + get_bb_from_scratch(net_id, bb_coord_new, bb_edge_new, num_sink_pin_layer_new); + bb_updated_before[net_id] = GOT_FROM_SCRATCH; + return; + } else { + bb_edge_new.layer_min = curr_bb_edge->layer_min - 1; + bb_coord_new.layer_min = curr_bb_coord->layer_min; + } + } else { + bb_coord_new.layer_min = curr_bb_coord->layer_min; + bb_edge_new.layer_min = curr_bb_edge->layer_min; + } + + if (pin_new_loc.layer_num > curr_bb_coord->layer_max) { + bb_coord_new.layer_max = pin_new_loc.layer_num; + bb_edge_new.layer_max = 1; + } else if (pin_new_loc.layer_num == curr_bb_coord->layer_max) { + bb_coord_new.layer_max = pin_new_loc.layer_num; + 
bb_edge_new.layer_max = curr_bb_edge->layer_max + 1; + } else { + bb_coord_new.layer_max = curr_bb_coord->layer_max; + bb_edge_new.layer_max = curr_bb_edge->layer_max; + } + + + } else { + bb_coord_new.layer_min = curr_bb_coord->layer_min; + bb_coord_new.layer_max = curr_bb_coord->layer_max; + bb_edge_new.layer_min = curr_bb_edge->layer_min; + bb_edge_new.layer_max = curr_bb_edge->layer_max; + } + + } else { + bb_coord_new.layer_min = curr_bb_coord->layer_min; + bb_coord_new.layer_max = curr_bb_coord->layer_max; + bb_edge_new.layer_min = curr_bb_edge->layer_min; + bb_edge_new.layer_max = curr_bb_edge->layer_max; } if (bb_updated_before[net_id] == NOT_UPDATED_YET) { From 66ef6c64d85f9b6c5eb435982183ffd89807d38e Mon Sep 17 00:00:00 2001 From: amin1377 Date: Fri, 23 Feb 2024 14:50:47 -0500 Subject: [PATCH 032/257] vpr: place: set valid value of old layer if it is not valid --- vpr/src/place/median_move_generator.cpp | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/vpr/src/place/median_move_generator.cpp b/vpr/src/place/median_move_generator.cpp index 13c418ddf6b..1c9db0b1b44 100644 --- a/vpr/src/place/median_move_generator.cpp +++ b/vpr/src/place/median_move_generator.cpp @@ -93,10 +93,10 @@ e_create_move MedianMoveGenerator::propose_move(t_pl_blocks_to_be_moved& blocks_ xold = place_ctx.block_locs[bnum].loc.x + physical_tile_type(bnum)->pin_width_offset[pnum]; yold = place_ctx.block_locs[bnum].loc.y + physical_tile_type(bnum)->pin_height_offset[pnum]; layer_old = place_ctx.block_locs[bnum].loc.layer; + xold = std::max(std::min(xold, (int)device_ctx.grid.width() - 2), 1); //-2 for no perim channels yold = std::max(std::min(yold, (int)device_ctx.grid.height() - 2), 1); //-2 for no perim channels - VTR_ASSERT(layer_old >= 0); - VTR_ASSERT(layer_old < device_ctx.grid.get_num_layers()); + layer_old = std::max(std::min(layer_old, (int)device_ctx.grid.get_num_layers() - 1), 0); //To calulate the bb incrementally while excluding the moving block 
//assume that the moving block is moved to a non-critical coord of the bb From 895e34f2957b068f41a21e4d4abdb59c25929a0d Mon Sep 17 00:00:00 2001 From: soheilshahrouz Date: Fri, 23 Feb 2024 15:40:51 -0500 Subject: [PATCH 033/257] revert noc-aware clustering --- vpr/src/pack/pack.cpp | 138 ------------------------------------------ 1 file changed, 138 deletions(-) diff --git a/vpr/src/pack/pack.cpp b/vpr/src/pack/pack.cpp index 013edc51b77..62558798ad5 100644 --- a/vpr/src/pack/pack.cpp +++ b/vpr/src/pack/pack.cpp @@ -5,8 +5,6 @@ #include #include #include -#include -#include #include "vtr_assert.h" #include "vtr_log.h" @@ -42,124 +40,6 @@ static bool try_size_device_grid(const t_arch& arch, */ static int count_models(const t_model* user_models); -static std::vector find_noc_router_atoms() { - const auto& atom_ctx = g_vpr_ctx.atom(); - - // NoC router atoms are expected to have a specific blif model - const std::string noc_router_blif_model_name = "noc_router_adapter_block"; - - // stores found NoC router atoms - std::vector noc_router_atoms; - - // iterate over all atoms and find those whose blif model matches - for (auto atom_id : atom_ctx.nlist.blocks()) { - const t_model* model = atom_ctx.nlist.block_model(atom_id); - if (noc_router_blif_model_name == model->name) { - noc_router_atoms.push_back(atom_id); - } - } - - return noc_router_atoms; -} - -static void update_noc_reachability_partitions(const std::vector& noc_atoms) { - const auto& atom_ctx = g_vpr_ctx.atom(); - auto& constraints = g_vpr_ctx.mutable_floorplanning().constraints; - const auto& high_fanout_thresholds = g_vpr_ctx.cl_helper().high_fanout_thresholds; - - const size_t high_fanout_threshold = high_fanout_thresholds.get_threshold(""); - - // get the total number of atoms - const size_t n_atoms = atom_ctx.nlist.blocks().size(); - - vtr::vector atom_visited(n_atoms, false); - - int exclusivity_cnt = 0; - - RegionRectCoord unconstrained_rect{0, - 0, - std::numeric_limits::max(), - 
std::numeric_limits::max(), - 0}; - Region unconstrained_region; - unconstrained_region.set_region_rect(unconstrained_rect); - - for (auto noc_atom_id : noc_atoms) { - // check if this NoC router has already been visited - if (atom_visited[noc_atom_id]) { - continue; - } - - exclusivity_cnt++; - - PartitionRegion associated_noc_partition_region; - associated_noc_partition_region.set_exclusivity_index(exclusivity_cnt); - associated_noc_partition_region.add_to_part_region(unconstrained_region); - - Partition associated_noc_partition; - associated_noc_partition.set_name(atom_ctx.nlist.block_name(noc_atom_id)); - associated_noc_partition.set_part_region(associated_noc_partition_region); - auto associated_noc_partition_id = (PartitionId)constraints.get_num_partitions(); - constraints.add_partition(associated_noc_partition); - - const PartitionId noc_partition_id = constraints.get_atom_partition(noc_atom_id); - - if (noc_partition_id == PartitionId::INVALID()) { - constraints.add_constrained_atom(noc_atom_id, associated_noc_partition_id); - } else { // noc atom is already in a partition - auto& noc_partition = constraints.get_mutable_partition(noc_partition_id); - auto& noc_partition_region = noc_partition.get_mutable_part_region(); - VTR_ASSERT(noc_partition_region.get_exclusivity_index() < 0); - noc_partition_region.set_exclusivity_index(exclusivity_cnt); - } - - std::queue q; - q.push(noc_atom_id); - atom_visited[noc_atom_id] = true; - - while (!q.empty()) { - AtomBlockId current_atom = q.front(); - q.pop(); - - PartitionId atom_partition_id = constraints.get_atom_partition(current_atom); - if (atom_partition_id == PartitionId::INVALID()) { - constraints.add_constrained_atom(current_atom, associated_noc_partition_id); - } else { - auto& atom_partition = constraints.get_mutable_partition(atom_partition_id); - auto& atom_partition_region = atom_partition.get_mutable_part_region(); - VTR_ASSERT(atom_partition_region.get_exclusivity_index() < 0 || current_atom == 
noc_atom_id); - atom_partition_region.set_exclusivity_index(exclusivity_cnt); - } - - for(auto pin : atom_ctx.nlist.block_pins(current_atom)) { - AtomNetId net_id = atom_ctx.nlist.pin_net(pin); - size_t net_fanout = atom_ctx.nlist.net_sinks(net_id).size(); - - if (net_fanout >= high_fanout_threshold) { - continue; - } - - AtomBlockId driver_atom_id = atom_ctx.nlist.net_driver_block(net_id); - if (!atom_visited[driver_atom_id]) { - q.push(driver_atom_id); - atom_visited[driver_atom_id] = true; - } - - for (auto sink_pin : atom_ctx.nlist.net_sinks(net_id)) { - AtomBlockId sink_atom_id = atom_ctx.nlist.pin_block(sink_pin); - if (!atom_visited[sink_atom_id]) { - q.push(sink_atom_id); - atom_visited[sink_atom_id] = true; - } - } - - } - } - - } -} - - bool try_pack(t_packer_opts* packer_opts, const t_analysis_opts* analysis_opts, const t_arch* arch, @@ -252,9 +132,6 @@ bool try_pack(t_packer_opts* packer_opts, bool floorplan_regions_overfull = false; auto constraints_backup = g_vpr_ctx.floorplanning().constraints; - // find all NoC router atoms - auto noc_atoms = find_noc_router_atoms(); - update_noc_reachability_partitions(noc_atoms); while (true) { free_clustering_data(*packer_opts, clustering_data); @@ -397,21 +274,6 @@ bool try_pack(t_packer_opts* packer_opts, //check clustering and output it check_and_output_clustering(*packer_opts, is_clock, arch, helper_ctx.total_clb_num, clustering_data.intra_lb_routing); - - g_vpr_ctx.mutable_floorplanning().constraints = constraints_backup; - const int max_y = (int)g_vpr_ctx.device().grid.height(); - const int max_x = (int)g_vpr_ctx.device().grid.width(); - for (auto& cluster_partition_region : g_vpr_ctx.mutable_floorplanning().cluster_constraints) { - const auto& regions = cluster_partition_region.get_regions(); - if (regions.size() == 1) { - const auto rect = regions[0].get_region_rect(); - - if (rect.xmin <= 0 && rect.ymin <= 0 && rect.xmax >= max_x && rect.ymax >= max_y) { - cluster_partition_region = PartitionRegion(); - 
} - } - } - // Free Data Structures free_clustering_data(*packer_opts, clustering_data); From f778c8723399b4b1480c23d91bb1ffa6749a3a3c Mon Sep 17 00:00:00 2001 From: soheilshahrouz Date: Fri, 23 Feb 2024 15:49:52 -0500 Subject: [PATCH 034/257] remove exclusivity_index and fix some typos --- vpr/src/base/partition_region.cpp | 43 ++++------------------------- vpr/src/base/partition_region.h | 5 ---- vpr/src/base/setup_noc.cpp | 8 +++--- vpr/src/pack/attraction_groups.cpp | 2 +- vpr/src/pack/cluster_util.cpp | 4 +-- vpr/src/place/place_constraints.cpp | 14 +++++----- 6 files changed, 20 insertions(+), 56 deletions(-) diff --git a/vpr/src/base/partition_region.cpp b/vpr/src/base/partition_region.cpp index 77afc4fa5e7..1f0c9dbd62c 100644 --- a/vpr/src/base/partition_region.cpp +++ b/vpr/src/base/partition_region.cpp @@ -36,22 +36,6 @@ bool PartitionRegion::is_loc_in_part_reg(const t_pl_loc& loc) const { return is_in_pr; } -int PartitionRegion::get_exclusivity_index() const { - return exclusivity_index; -} - -void PartitionRegion::set_exclusivity_index(int index) { - /* negative exclusivity index means this PartitionRegion is compatible - * with other PartitionsRegions as long as the intersection of their - * regions is not empty. - */ - if (index < 0) { - index = -1; - } - - exclusivity_index = index; -} - PartitionRegion intersection(const PartitionRegion& cluster_pr, const PartitionRegion& new_pr) { /**for N regions in part_region and M in the calling object you can get anywhere from * 0 to M*N regions in the resulting vector. 
Only intersection regions with non-zero area rectangles and @@ -60,14 +44,6 @@ PartitionRegion intersection(const PartitionRegion& cluster_pr, const PartitionR */ PartitionRegion pr; - const int cluster_exclusivity = cluster_pr.get_exclusivity_index(); - const int new_exclusivity = new_pr.get_exclusivity_index(); - - // PartitionRegion are not compatible even if their regions overlap - if (cluster_exclusivity != new_exclusivity) { - return pr; - } - auto& pr_regions = pr.get_mutable_regions(); for (const auto& cluster_region : cluster_pr.get_regions()) { @@ -85,19 +61,12 @@ PartitionRegion intersection(const PartitionRegion& cluster_pr, const PartitionR void update_cluster_part_reg(PartitionRegion& cluster_pr, const PartitionRegion& new_pr) { std::vector int_regions; - const int cluster_exclusivity = cluster_pr.get_exclusivity_index(); - const int new_exclusivity = new_pr.get_exclusivity_index(); - - // check whether PartitionRegions are compatible in the first place - if (cluster_exclusivity == new_exclusivity) { - - // now that we know PartitionRegions are compatible, look for overlapping regions - for (const auto& cluster_region : cluster_pr.get_regions()) { - for (const auto& new_region : new_pr.get_regions()) { - Region intersect_region = intersection(cluster_region, new_region); - if (!intersect_region.empty()) { - int_regions.push_back(intersect_region); - } + // now that we know PartitionRegions are compatible, look for overlapping regions + for (const auto& cluster_region : cluster_pr.get_regions()) { + for (const auto& new_region : new_pr.get_regions()) { + Region intersect_region = intersection(cluster_region, new_region); + if (!intersect_region.empty()) { + int_regions.push_back(intersect_region); } } } diff --git a/vpr/src/base/partition_region.h b/vpr/src/base/partition_region.h index ec4d24a065f..db73d2d7f09 100644 --- a/vpr/src/base/partition_region.h +++ b/vpr/src/base/partition_region.h @@ -50,13 +50,8 @@ class PartitionRegion { */ bool 
is_loc_in_part_reg(const t_pl_loc& loc) const; - int get_exclusivity_index() const; - - void set_exclusivity_index(int index); - private: std::vector regions; ///< union of rectangular regions that a partition can be placed in - int exclusivity_index = -1; ///< PartitionRegions with different exclusivity_index values are not compatible }; ///@brief used to print data from a PartitionRegion diff --git a/vpr/src/base/setup_noc.cpp b/vpr/src/base/setup_noc.cpp index de3bbb6840c..a2975b9683c 100644 --- a/vpr/src/base/setup_noc.cpp +++ b/vpr/src/base/setup_noc.cpp @@ -139,7 +139,7 @@ void create_noc_routers(const t_noc_inf& noc_info, NocStorage* noc_model, std::v double curr_logical_router_position_x; double curr_logical_router_position_y; - // keep track of the index of each physical router (this helps uniqely identify them) + // keep track of the index of each physical router (this helps uniquely identify them) int curr_physical_router_index = 0; // keep track of the ids of the routers that create the case where multiple routers have the same distance to a physical router tile @@ -153,7 +153,7 @@ void create_noc_routers(const t_noc_inf& noc_info, NocStorage* noc_model, std::v // Below we create all the routers within the NoC // - // go through each user desctibed router in the arch file and assign it to a physical router on the FPGA + // go through each user described router in the arch file and assign it to a physical router on the FPGA for (auto logical_router = noc_info.router_list.begin(); logical_router != noc_info.router_list.end(); logical_router++) { // assign the shortest distance to a large value (this is done so that the first distance calculated and we can replace this) shortest_distance = LLONG_MAX; @@ -173,7 +173,7 @@ void create_noc_routers(const t_noc_inf& noc_info, NocStorage* noc_model, std::v error_case_physical_router_index_2 = INVALID_PHYSICAL_ROUTER_INDEX; // determine the physical router tile that is closest to the current user described router 
in the arch file - for (auto & physical_router : noc_router_tiles) { + for (auto& physical_router : noc_router_tiles) { // get the position of the current physical router tile on the FPGA device curr_physical_router_pos_x = physical_router.tile_centroid_x; curr_physical_router_pos_y = physical_router.tile_centroid_y; @@ -237,7 +237,7 @@ void create_noc_links(const t_noc_inf* noc_info, NocStorage* noc_model) { noc_model->make_room_for_noc_router_link_list(); // go through each router and add its outgoing links to the NoC - for (const auto & router : noc_info->router_list) { + for (const auto& router : noc_info->router_list) { // get the converted id of the current source router source_router = noc_model->convert_router_id(router.id); diff --git a/vpr/src/pack/attraction_groups.cpp b/vpr/src/pack/attraction_groups.cpp index 60e72546e51..b8f0351d6a7 100644 --- a/vpr/src/pack/attraction_groups.cpp +++ b/vpr/src/pack/attraction_groups.cpp @@ -49,7 +49,7 @@ void AttractionInfo::create_att_groups_for_overfull_regions() { const auto& overfull_regions = floorplanning_ctx.overfull_regions; PartitionRegion overfull_regions_pr; - for (const auto & overfull_region : overfull_regions) { + for (const auto& overfull_region : overfull_regions) { overfull_regions_pr.add_to_part_region(overfull_region); } /* diff --git a/vpr/src/pack/cluster_util.cpp b/vpr/src/pack/cluster_util.cpp index 0c1891c7927..84dd08f3a0e 100644 --- a/vpr/src/pack/cluster_util.cpp +++ b/vpr/src/pack/cluster_util.cpp @@ -76,7 +76,7 @@ static void echo_clusters(char* filename) { cluster_atoms[clb_index].push_back(atom_blk_id); } - for (auto & cluster_atom : cluster_atoms) { + for (auto& cluster_atom : cluster_atoms) { const std::string& cluster_name = cluster_ctx.clb_nlist.block_name(cluster_atom.first); fprintf(fp, "Cluster %s Id: %zu \n", cluster_name.c_str(), size_t(cluster_atom.first)); fprintf(fp, "\tAtoms in cluster: \n"); @@ -96,7 +96,7 @@ static void echo_clusters(char* filename) { const std::vector& 
regions = floorplanning_ctx.cluster_constraints[clb_id].get_regions(); if (!regions.empty()) { fprintf(fp, "\nRegions in Cluster %zu:\n", size_t(clb_id)); - for (const auto & region : regions) { + for (const auto& region : regions) { print_region(fp, region); } } diff --git a/vpr/src/place/place_constraints.cpp b/vpr/src/place/place_constraints.cpp index 6a425401718..72f7925ff28 100644 --- a/vpr/src/place/place_constraints.cpp +++ b/vpr/src/place/place_constraints.cpp @@ -41,7 +41,7 @@ bool is_macro_constrained(const t_pl_macro& pl_macro) { bool is_macro_constrained = false; bool is_member_constrained = false; - for (const auto & member : pl_macro.members) { + for (const auto& member : pl_macro.members) { ClusterBlockId iblk = member.blk_index; is_member_constrained = is_cluster_constrained(iblk); @@ -61,7 +61,7 @@ PartitionRegion update_macro_head_pr(const t_pl_macro& pl_macro, const Partition int num_constrained_members = 0; auto& floorplanning_ctx = g_vpr_ctx.floorplanning(); - for (const auto & member : pl_macro.members) { + for (const auto& member : pl_macro.members) { ClusterBlockId iblk = member.blk_index; is_member_constrained = is_cluster_constrained(iblk); @@ -75,7 +75,7 @@ PartitionRegion update_macro_head_pr(const t_pl_macro& pl_macro, const Partition const PartitionRegion& block_pr = floorplanning_ctx.cluster_constraints[iblk]; const std::vector& block_regions = block_pr.get_regions(); - for (const auto & block_region : block_regions) { + for (const auto& block_region : block_regions) { Region modified_reg; auto offset = member.offset; @@ -118,7 +118,7 @@ PartitionRegion update_macro_member_pr(PartitionRegion& head_pr, const t_pl_offs const std::vector& block_regions = head_pr.get_regions(); PartitionRegion macro_pr; - for (const auto & block_region : block_regions) { + for (const auto& block_region : block_regions) { Region modified_reg; const auto block_reg_coord = block_region.get_region_rect(); @@ -153,7 +153,7 @@ void 
print_macro_constraint_error(const t_pl_macro& pl_macro) { VTR_LOG( "Feasible floorplanning constraints could not be calculated for the placement macro. \n" "The placement macro contains the following blocks: \n"); - for (const auto & member : pl_macro.members) { + for (const auto& member : pl_macro.members) { std::string blk_name = cluster_ctx.clb_nlist.block_name((member.blk_index)); VTR_LOG("Block %s (#%zu) ", blk_name.c_str(), size_t(member.blk_index)); } @@ -371,7 +371,7 @@ int region_tile_cover(const Region& reg, t_logical_block_type_ptr block_type, t_ } /* - * Used when marking fixed blocks to check whether the ParitionRegion associated with a block + * Used when marking fixed blocks to check whether the PartitionRegion associated with a block * covers one tile. If it covers one tile, it is marked as fixed. If it covers 0 tiles or * more than one tile, it will not be marked as fixed. As soon as it is known that the * PartitionRegion covers more than one tile, there is no need to check further regions @@ -441,7 +441,7 @@ int get_part_reg_size(PartitionRegion& pr, t_logical_block_type_ptr block_type, const std::vector& regions = pr.get_regions(); int num_tiles = 0; - for (const auto & region : regions) { + for (const auto& region : regions) { num_tiles += grid_tiles.region_tile_count(region, block_type); } From f23d7b5457365ed8d75caf73106ffdf0ffccb372 Mon Sep 17 00:00:00 2001 From: soheilshahrouz Date: Mon, 26 Feb 2024 10:08:52 -0500 Subject: [PATCH 035/257] Revert "remove exclusivity_index and fix some typos" This reverts commit f778c8723399b4b1480c23d91bb1ffa6749a3a3c. 
--- vpr/src/base/partition_region.cpp | 43 ++++++++++++++++++++++++++----- vpr/src/base/partition_region.h | 5 ++++ 2 files changed, 42 insertions(+), 6 deletions(-) diff --git a/vpr/src/base/partition_region.cpp b/vpr/src/base/partition_region.cpp index 1f0c9dbd62c..77afc4fa5e7 100644 --- a/vpr/src/base/partition_region.cpp +++ b/vpr/src/base/partition_region.cpp @@ -36,6 +36,22 @@ bool PartitionRegion::is_loc_in_part_reg(const t_pl_loc& loc) const { return is_in_pr; } +int PartitionRegion::get_exclusivity_index() const { + return exclusivity_index; +} + +void PartitionRegion::set_exclusivity_index(int index) { + /* negative exclusivity index means this PartitionRegion is compatible + * with other PartitionsRegions as long as the intersection of their + * regions is not empty. + */ + if (index < 0) { + index = -1; + } + + exclusivity_index = index; +} + PartitionRegion intersection(const PartitionRegion& cluster_pr, const PartitionRegion& new_pr) { /**for N regions in part_region and M in the calling object you can get anywhere from * 0 to M*N regions in the resulting vector. 
Only intersection regions with non-zero area rectangles and @@ -44,6 +60,14 @@ PartitionRegion intersection(const PartitionRegion& cluster_pr, const PartitionR */ PartitionRegion pr; + const int cluster_exclusivity = cluster_pr.get_exclusivity_index(); + const int new_exclusivity = new_pr.get_exclusivity_index(); + + // PartitionRegion are not compatible even if their regions overlap + if (cluster_exclusivity != new_exclusivity) { + return pr; + } + auto& pr_regions = pr.get_mutable_regions(); for (const auto& cluster_region : cluster_pr.get_regions()) { @@ -61,12 +85,19 @@ PartitionRegion intersection(const PartitionRegion& cluster_pr, const PartitionR void update_cluster_part_reg(PartitionRegion& cluster_pr, const PartitionRegion& new_pr) { std::vector int_regions; - // now that we know PartitionRegions are compatible, look for overlapping regions - for (const auto& cluster_region : cluster_pr.get_regions()) { - for (const auto& new_region : new_pr.get_regions()) { - Region intersect_region = intersection(cluster_region, new_region); - if (!intersect_region.empty()) { - int_regions.push_back(intersect_region); + const int cluster_exclusivity = cluster_pr.get_exclusivity_index(); + const int new_exclusivity = new_pr.get_exclusivity_index(); + + // check whether PartitionRegions are compatible in the first place + if (cluster_exclusivity == new_exclusivity) { + + // now that we know PartitionRegions are compatible, look for overlapping regions + for (const auto& cluster_region : cluster_pr.get_regions()) { + for (const auto& new_region : new_pr.get_regions()) { + Region intersect_region = intersection(cluster_region, new_region); + if (!intersect_region.empty()) { + int_regions.push_back(intersect_region); + } } } } diff --git a/vpr/src/base/partition_region.h b/vpr/src/base/partition_region.h index db73d2d7f09..ec4d24a065f 100644 --- a/vpr/src/base/partition_region.h +++ b/vpr/src/base/partition_region.h @@ -50,8 +50,13 @@ class PartitionRegion { */ bool 
is_loc_in_part_reg(const t_pl_loc& loc) const; + int get_exclusivity_index() const; + + void set_exclusivity_index(int index); + private: std::vector regions; ///< union of rectangular regions that a partition can be placed in + int exclusivity_index = -1; ///< PartitionRegions with different exclusivity_index values are not compatible }; ///@brief used to print data from a PartitionRegion From 5dd2025f02f32edcfc37f516c299dbb262e27310 Mon Sep 17 00:00:00 2001 From: soheilshahrouz Date: Mon, 26 Feb 2024 10:09:12 -0500 Subject: [PATCH 036/257] Revert "revert noc-aware clustering" This reverts commit 895e34f2957b068f41a21e4d4abdb59c25929a0d. --- vpr/src/pack/pack.cpp | 138 ++++++++++++++++++++++++++++++++++++++++++ 1 file changed, 138 insertions(+) diff --git a/vpr/src/pack/pack.cpp b/vpr/src/pack/pack.cpp index 62558798ad5..013edc51b77 100644 --- a/vpr/src/pack/pack.cpp +++ b/vpr/src/pack/pack.cpp @@ -5,6 +5,8 @@ #include #include #include +#include +#include #include "vtr_assert.h" #include "vtr_log.h" @@ -40,6 +42,124 @@ static bool try_size_device_grid(const t_arch& arch, */ static int count_models(const t_model* user_models); +static std::vector find_noc_router_atoms() { + const auto& atom_ctx = g_vpr_ctx.atom(); + + // NoC router atoms are expected to have a specific blif model + const std::string noc_router_blif_model_name = "noc_router_adapter_block"; + + // stores found NoC router atoms + std::vector noc_router_atoms; + + // iterate over all atoms and find those whose blif model matches + for (auto atom_id : atom_ctx.nlist.blocks()) { + const t_model* model = atom_ctx.nlist.block_model(atom_id); + if (noc_router_blif_model_name == model->name) { + noc_router_atoms.push_back(atom_id); + } + } + + return noc_router_atoms; +} + +static void update_noc_reachability_partitions(const std::vector& noc_atoms) { + const auto& atom_ctx = g_vpr_ctx.atom(); + auto& constraints = g_vpr_ctx.mutable_floorplanning().constraints; + const auto& high_fanout_thresholds = 
g_vpr_ctx.cl_helper().high_fanout_thresholds; + + const size_t high_fanout_threshold = high_fanout_thresholds.get_threshold(""); + + // get the total number of atoms + const size_t n_atoms = atom_ctx.nlist.blocks().size(); + + vtr::vector atom_visited(n_atoms, false); + + int exclusivity_cnt = 0; + + RegionRectCoord unconstrained_rect{0, + 0, + std::numeric_limits::max(), + std::numeric_limits::max(), + 0}; + Region unconstrained_region; + unconstrained_region.set_region_rect(unconstrained_rect); + + for (auto noc_atom_id : noc_atoms) { + // check if this NoC router has already been visited + if (atom_visited[noc_atom_id]) { + continue; + } + + exclusivity_cnt++; + + PartitionRegion associated_noc_partition_region; + associated_noc_partition_region.set_exclusivity_index(exclusivity_cnt); + associated_noc_partition_region.add_to_part_region(unconstrained_region); + + Partition associated_noc_partition; + associated_noc_partition.set_name(atom_ctx.nlist.block_name(noc_atom_id)); + associated_noc_partition.set_part_region(associated_noc_partition_region); + auto associated_noc_partition_id = (PartitionId)constraints.get_num_partitions(); + constraints.add_partition(associated_noc_partition); + + const PartitionId noc_partition_id = constraints.get_atom_partition(noc_atom_id); + + if (noc_partition_id == PartitionId::INVALID()) { + constraints.add_constrained_atom(noc_atom_id, associated_noc_partition_id); + } else { // noc atom is already in a partition + auto& noc_partition = constraints.get_mutable_partition(noc_partition_id); + auto& noc_partition_region = noc_partition.get_mutable_part_region(); + VTR_ASSERT(noc_partition_region.get_exclusivity_index() < 0); + noc_partition_region.set_exclusivity_index(exclusivity_cnt); + } + + std::queue q; + q.push(noc_atom_id); + atom_visited[noc_atom_id] = true; + + while (!q.empty()) { + AtomBlockId current_atom = q.front(); + q.pop(); + + PartitionId atom_partition_id = constraints.get_atom_partition(current_atom); + if 
(atom_partition_id == PartitionId::INVALID()) { + constraints.add_constrained_atom(current_atom, associated_noc_partition_id); + } else { + auto& atom_partition = constraints.get_mutable_partition(atom_partition_id); + auto& atom_partition_region = atom_partition.get_mutable_part_region(); + VTR_ASSERT(atom_partition_region.get_exclusivity_index() < 0 || current_atom == noc_atom_id); + atom_partition_region.set_exclusivity_index(exclusivity_cnt); + } + + for(auto pin : atom_ctx.nlist.block_pins(current_atom)) { + AtomNetId net_id = atom_ctx.nlist.pin_net(pin); + size_t net_fanout = atom_ctx.nlist.net_sinks(net_id).size(); + + if (net_fanout >= high_fanout_threshold) { + continue; + } + + AtomBlockId driver_atom_id = atom_ctx.nlist.net_driver_block(net_id); + if (!atom_visited[driver_atom_id]) { + q.push(driver_atom_id); + atom_visited[driver_atom_id] = true; + } + + for (auto sink_pin : atom_ctx.nlist.net_sinks(net_id)) { + AtomBlockId sink_atom_id = atom_ctx.nlist.pin_block(sink_pin); + if (!atom_visited[sink_atom_id]) { + q.push(sink_atom_id); + atom_visited[sink_atom_id] = true; + } + } + + } + } + + } +} + + bool try_pack(t_packer_opts* packer_opts, const t_analysis_opts* analysis_opts, const t_arch* arch, @@ -132,6 +252,9 @@ bool try_pack(t_packer_opts* packer_opts, bool floorplan_regions_overfull = false; auto constraints_backup = g_vpr_ctx.floorplanning().constraints; + // find all NoC router atoms + auto noc_atoms = find_noc_router_atoms(); + update_noc_reachability_partitions(noc_atoms); while (true) { free_clustering_data(*packer_opts, clustering_data); @@ -274,6 +397,21 @@ bool try_pack(t_packer_opts* packer_opts, //check clustering and output it check_and_output_clustering(*packer_opts, is_clock, arch, helper_ctx.total_clb_num, clustering_data.intra_lb_routing); + + g_vpr_ctx.mutable_floorplanning().constraints = constraints_backup; + const int max_y = (int)g_vpr_ctx.device().grid.height(); + const int max_x = (int)g_vpr_ctx.device().grid.width(); + for 
(auto& cluster_partition_region : g_vpr_ctx.mutable_floorplanning().cluster_constraints) { + const auto& regions = cluster_partition_region.get_regions(); + if (regions.size() == 1) { + const auto rect = regions[0].get_region_rect(); + + if (rect.xmin <= 0 && rect.ymin <= 0 && rect.xmax >= max_x && rect.ymax >= max_y) { + cluster_partition_region = PartitionRegion(); + } + } + } + // Free Data Structures free_clustering_data(*packer_opts, clustering_data); From 0d0d311f5f2f3692149ddd35b6156c0444718051 Mon Sep 17 00:00:00 2001 From: soheilshahrouz Date: Mon, 26 Feb 2024 10:13:50 -0500 Subject: [PATCH 037/257] reduced fanout threshold in update_noc_reachability_partitions() --- vpr/src/pack/pack.cpp | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/vpr/src/pack/pack.cpp b/vpr/src/pack/pack.cpp index 013edc51b77..36b911f393d 100644 --- a/vpr/src/pack/pack.cpp +++ b/vpr/src/pack/pack.cpp @@ -65,9 +65,9 @@ static std::vector find_noc_router_atoms() { static void update_noc_reachability_partitions(const std::vector& noc_atoms) { const auto& atom_ctx = g_vpr_ctx.atom(); auto& constraints = g_vpr_ctx.mutable_floorplanning().constraints; - const auto& high_fanout_thresholds = g_vpr_ctx.cl_helper().high_fanout_thresholds; +// const auto& high_fanout_thresholds = g_vpr_ctx.cl_helper().high_fanout_thresholds; - const size_t high_fanout_threshold = high_fanout_thresholds.get_threshold(""); + const size_t high_fanout_threshold = 32; //;high_fanout_thresholds.get_threshold(""); // get the total number of atoms const size_t n_atoms = atom_ctx.nlist.blocks().size(); From e4168ee3de4cae8f61945d4e9cec46294fa5d4e7 Mon Sep 17 00:00:00 2001 From: soheilshahrouz Date: Tue, 27 Feb 2024 15:55:35 -0500 Subject: [PATCH 038/257] ignore reset --- vpr/src/place/place.cpp | 7 +++++++ 1 file changed, 7 insertions(+) diff --git a/vpr/src/place/place.cpp b/vpr/src/place/place.cpp index 2e30d2f3c43..e60f2c3a153 100644 --- a/vpr/src/place/place.cpp +++ b/vpr/src/place/place.cpp @@ 
-618,6 +618,13 @@ void try_place(const Netlist<>& net_list, num_swap_aborted = 0; num_ts_called = 0; + for (auto net_id : cluster_ctx.clb_nlist.nets()) { + if (cluster_ctx.clb_nlist.net_name(net_id) == "reset") { + g_vpr_ctx.mutable_clustering().clb_nlist.set_net_is_global(net_id, true); + g_vpr_ctx.mutable_clustering().clb_nlist.set_net_is_ignored(net_id, true); + } + } + if (placer_opts.place_algorithm.is_timing_driven()) { /*do this before the initial placement to avoid messing up the initial placement */ place_delay_model = alloc_lookups_and_delay_model(net_list, From 04e2f2aaec985ff24d96ad3487cae7dcd28d2409 Mon Sep 17 00:00:00 2001 From: soheilshahrouz Date: Wed, 28 Feb 2024 12:37:16 -0500 Subject: [PATCH 039/257] lock NoC routers and constrain other blocks --- vpr/src/place/initial_noc_placement.cpp | 71 +++++++++++++++++++++++++ vpr/src/place/initial_placement.cpp | 1 + 2 files changed, 72 insertions(+) diff --git a/vpr/src/place/initial_noc_placement.cpp b/vpr/src/place/initial_noc_placement.cpp index a8c619f8dbd..ba09c456ca5 100644 --- a/vpr/src/place/initial_noc_placement.cpp +++ b/vpr/src/place/initial_noc_placement.cpp @@ -5,6 +5,9 @@ #include "noc_place_checkpoint.h" #include "vtr_math.h" +#include +#include + /** * @brief Evaluates whether a NoC router swap should be accepted or not. * If delta cost is non-positive, the move is always accepted. 
If the cost @@ -251,6 +254,7 @@ static void noc_routers_anneal(const t_noc_opts& noc_opts) { void initial_noc_placement(const t_noc_opts& noc_opts, int seed) { auto& noc_ctx = g_vpr_ctx.noc(); + auto& cluster_ctx = g_vpr_ctx.clustering(); // Get all the router clusters const std::vector& router_blk_ids = noc_ctx.noc_traffic_flows_storage.get_router_clusters_in_netlist(); @@ -280,4 +284,71 @@ void initial_noc_placement(const t_noc_opts& noc_opts, int seed) { // Run the simulated annealing optimizer for NoC routers noc_routers_anneal(noc_opts); + + auto& device_ctx = g_vpr_ctx.device(); + + for (auto router_blk_id : router_blk_ids) { + g_vpr_ctx.mutable_placement().block_locs[router_blk_id].is_fixed = true; + + vtr::vector block_visited(cluster_ctx.clb_nlist.blocks().size(), false); + + std::queue q; + q.push(router_blk_id); + block_visited[router_blk_id] = true; + + const auto& noc_loc = g_vpr_ctx.placement().block_locs[router_blk_id].loc; + + const int height = device_ctx.grid.height(); + const int width = device_ctx.grid.width(); + + RegionRectCoord rect_coord{std::max(0, noc_loc.x - 20), + std::max(0, noc_loc.y - 20), + std::min(width-1, noc_loc.x + 20), + std::min(height-1, noc_loc.y + 20), 0}; + Region region; + region.set_region_rect(rect_coord); + + while (!q.empty()) { + ClusterBlockId current_block_id = q.front(); + q.pop(); + + auto block_type = cluster_ctx.clb_nlist.block_type(current_block_id); + if (std::strcmp(block_type->name, "io") != 0) { + auto& constraint = g_vpr_ctx.mutable_floorplanning().cluster_constraints[current_block_id]; + constraint.add_to_part_region(region); + } + + for (ClusterPinId pin_id : cluster_ctx.clb_nlist.block_pins(current_block_id)) { + ClusterNetId net_id = cluster_ctx.clb_nlist.pin_net(pin_id); + + if (cluster_ctx.clb_nlist.net_is_ignored(net_id)) { + continue; + } + + if (cluster_ctx.clb_nlist.net_sinks(net_id).size() >= 32) { + continue; + } + + if (cluster_ctx.clb_nlist.pin_type(pin_id) == PinType::DRIVER) { + //ignore 
nets that are globally routed + + for (auto sink_pin_id : cluster_ctx.clb_nlist.net_sinks(net_id)) { + auto sink_block_id = cluster_ctx.clb_nlist.pin_block(sink_pin_id); + if (!block_visited[sink_block_id]) { + block_visited[sink_block_id] = true; + q.push(sink_block_id); + } + } + } else { //else the pin is sink --> only care about its driver + ClusterPinId source_pin = cluster_ctx.clb_nlist.net_driver(net_id); + auto source_blk_id = cluster_ctx.clb_nlist.pin_block(source_pin); + if (!block_visited[source_blk_id]) { + block_visited[source_blk_id] = true; + q.push(source_blk_id); + } + } + } + } + + } } \ No newline at end of file diff --git a/vpr/src/place/initial_placement.cpp b/vpr/src/place/initial_placement.cpp index 30e3cc190ae..8c75deb0713 100644 --- a/vpr/src/place/initial_placement.cpp +++ b/vpr/src/place/initial_placement.cpp @@ -1123,6 +1123,7 @@ void initial_placement(const t_placer_opts& placer_opts, if (noc_opts.noc) { // NoC routers are placed before other blocks initial_noc_placement(noc_opts, placer_opts.seed); + propagate_place_constraints(); } //Assign scores to blocks and placement macros according to how difficult they are to place From 59c712c8f3618e721d1d87407326d6376aa40ab6 Mon Sep 17 00:00:00 2001 From: Oleksandr Date: Fri, 15 Mar 2024 21:16:25 +0200 Subject: [PATCH 040/257] add Interactive Path Analysis (iteration 2, with multiselection feature), except draw_basic.cpp/h changes, will be updated separately due to conflict with layered draw ability --- .../libtatum/tatum/TimingReporter.cpp | 33 +- .../libtatum/tatum/TimingReporter.hpp | 9 +- vpr/CMakeLists.txt | 10 +- vpr/src/base/SetupVPR.cpp | 9 + vpr/src/base/SetupVPR.h | 1 + vpr/src/base/read_options.cpp | 10 + vpr/src/base/read_options.h | 4 + vpr/src/base/vpr_api.cpp | 5 +- vpr/src/base/vpr_api.h | 1 + vpr/src/base/vpr_context.h | 78 + vpr/src/base/vpr_types.h | 7 + vpr/src/draw/draw.cpp | 15 +- vpr/src/draw/draw.h | 4 +- vpr/src/server/bytearray.h | 82 + vpr/src/server/commconstants.h 
| 38 + vpr/src/server/convertutils.cpp | 77 + vpr/src/server/convertutils.h | 15 + vpr/src/server/gateio.cpp | 264 ++ vpr/src/server/gateio.h | 149 + vpr/src/server/gtkcomboboxhelper.cpp | 50 + vpr/src/server/gtkcomboboxhelper.h | 12 + vpr/src/server/pathhelper.cpp | 96 + vpr/src/server/pathhelper.h | 31 + vpr/src/server/serverupdate.cpp | 44 + vpr/src/server/serverupdate.h | 23 + vpr/src/server/task.cpp | 111 + vpr/src/server/task.h | 73 + vpr/src/server/taskresolver.cpp | 165 ++ vpr/src/server/taskresolver.h | 51 + vpr/src/server/telegrambuffer.cpp | 74 + vpr/src/server/telegrambuffer.h | 48 + vpr/src/server/telegramframe.h | 19 + vpr/src/server/telegramheader.cpp | 76 + vpr/src/server/telegramheader.h | 61 + vpr/src/server/telegramoptions.h | 162 + vpr/src/server/telegramparser.cpp | 79 + vpr/src/server/telegramparser.h | 29 + vpr/src/server/zlibutils.cpp | 79 + vpr/src/server/zlibutils.h | 10 + vpr/thirdparty/sockpp/.editorconfig | 17 + vpr/thirdparty/sockpp/.gitattributes | 34 + vpr/thirdparty/sockpp/.gitignore | 45 + vpr/thirdparty/sockpp/.hgeol | 32 + vpr/thirdparty/sockpp/.travis.yml | 83 + vpr/thirdparty/sockpp/CHANGELOG.md | 123 + vpr/thirdparty/sockpp/CMakeLists.txt | 223 ++ vpr/thirdparty/sockpp/CONTRIBUTING.md | 21 + vpr/thirdparty/sockpp/Doxyfile | 2606 +++++++++++++++++ vpr/thirdparty/sockpp/LICENSE | 30 + vpr/thirdparty/sockpp/README.md | 261 ++ vpr/thirdparty/sockpp/buildtst.sh | 52 + .../sockpp/cmake/sockppConfig.cmake | 14 + vpr/thirdparty/sockpp/cmake/version.h.in | 59 + vpr/thirdparty/sockpp/conanfile.py | 66 + vpr/thirdparty/sockpp/devenv.sh | 9 + vpr/thirdparty/sockpp/doc/CMakeLists.txt | 66 + vpr/thirdparty/sockpp/doc/Doxyfile.cmake | 2304 +++++++++++++++ vpr/thirdparty/sockpp/examples/CMakeLists.txt | 47 + .../sockpp/examples/linux/CMakeLists.txt | 67 + .../sockpp/examples/linux/canrecv.cpp | 94 + .../sockpp/examples/linux/cantime.cpp | 98 + .../sockpp/examples/tcp/CMakeLists.txt | 86 + .../sockpp/examples/tcp/tcp6echo.cpp | 99 + 
.../sockpp/examples/tcp/tcp6echosvr.cpp | 109 + .../sockpp/examples/tcp/tcpecho.cpp | 97 + .../sockpp/examples/tcp/tcpechomt.cpp | 124 + .../sockpp/examples/tcp/tcpechosvr.cpp | 110 + .../sockpp/examples/tcp/tcpechotest.cpp | 122 + .../sockpp/examples/udp/CMakeLists.txt | 82 + .../sockpp/examples/udp/udp6echo.cpp | 89 + .../sockpp/examples/udp/udpecho.cpp | 89 + .../sockpp/examples/udp/udpechosvr.cpp | 121 + .../sockpp/examples/unix/CMakeLists.txt | 87 + .../sockpp/examples/unix/undgramecho.cpp | 97 + .../sockpp/examples/unix/undgramechosvr.cpp | 93 + .../sockpp/examples/unix/unecho.cpp | 87 + .../sockpp/examples/unix/unechosvr.cpp | 111 + .../sockpp/examples/unix/unechotest.cpp | 114 + .../sockpp/include/sockpp/acceptor.h | 279 ++ .../sockpp/include/sockpp/can_address.h | 200 ++ .../sockpp/include/sockpp/can_frame.h | 103 + .../sockpp/include/sockpp/can_socket.h | 206 ++ .../sockpp/include/sockpp/connector.h | 268 ++ .../sockpp/include/sockpp/datagram_socket.h | 421 +++ .../sockpp/include/sockpp/exception.h | 129 + .../sockpp/include/sockpp/inet6_address.h | 250 ++ .../sockpp/include/sockpp/inet_address.h | 242 ++ .../sockpp/include/sockpp/platform.h | 112 + vpr/thirdparty/sockpp/include/sockpp/result.h | 126 + .../sockpp/include/sockpp/sock_address.h | 203 ++ vpr/thirdparty/sockpp/include/sockpp/socket.h | 581 ++++ .../sockpp/include/sockpp/stream_socket.h | 367 +++ .../sockpp/include/sockpp/tcp6_acceptor.h | 69 + .../sockpp/include/sockpp/tcp6_connector.h | 66 + .../sockpp/include/sockpp/tcp6_socket.h | 65 + .../sockpp/include/sockpp/tcp_acceptor.h | 69 + .../sockpp/include/sockpp/tcp_connector.h | 65 + .../sockpp/include/sockpp/tcp_socket.h | 65 + .../sockpp/include/sockpp/udp6_socket.h | 65 + .../sockpp/include/sockpp/udp_socket.h | 65 + .../sockpp/include/sockpp/unix_acceptor.h | 115 + .../sockpp/include/sockpp/unix_address.h | 194 ++ .../sockpp/include/sockpp/unix_connector.h | 65 + .../sockpp/include/sockpp/unix_dgram_socket.h | 68 + 
.../include/sockpp/unix_stream_socket.h | 68 + vpr/thirdparty/sockpp/src/CMakeLists.txt | 88 + vpr/thirdparty/sockpp/src/acceptor.cpp | 110 + vpr/thirdparty/sockpp/src/connector.cpp | 136 + vpr/thirdparty/sockpp/src/datagram_socket.cpp | 81 + vpr/thirdparty/sockpp/src/exception.cpp | 96 + vpr/thirdparty/sockpp/src/inet6_address.cpp | 137 + vpr/thirdparty/sockpp/src/inet_address.cpp | 131 + .../sockpp/src/linux/can_address.cpp | 104 + .../sockpp/src/linux/can_socket.cpp | 93 + vpr/thirdparty/sockpp/src/result.cpp | 59 + vpr/thirdparty/sockpp/src/socket.cpp | 333 +++ vpr/thirdparty/sockpp/src/stream_socket.cpp | 277 ++ .../sockpp/src/unix/unix_address.cpp | 82 + .../sockpp/tests/unit/CMakeLists.txt | 100 + .../sockpp/tests/unit/catch2_version.h | 48 + .../sockpp/tests/unit/test_acceptor.cpp | 130 + .../sockpp/tests/unit/test_connector.cpp | 66 + .../tests/unit/test_datagram_socket.cpp | 98 + .../sockpp/tests/unit/test_inet6_address.cpp | 173 ++ .../sockpp/tests/unit/test_inet_address.cpp | 144 + .../sockpp/tests/unit/test_result.cpp | 71 + .../sockpp/tests/unit/test_socket.cpp | 351 +++ .../sockpp/tests/unit/test_stream_socket.cpp | 100 + .../sockpp/tests/unit/test_tcp_socket.cpp | 154 + .../sockpp/tests/unit/test_unix_address.cpp | 157 + .../tests/unit/test_unix_dgram_socket.cpp | 71 + .../tests/unit/test_unix_stream_socket.cpp | 71 + .../sockpp/tests/unit/unit_tests.cpp | 71 + vpr/thirdparty/sockpp/travis_build.sh | 19 + .../sockpp/travis_install_catch2.sh | 20 + 135 files changed, 18256 insertions(+), 13 deletions(-) create mode 100644 vpr/src/server/bytearray.h create mode 100644 vpr/src/server/commconstants.h create mode 100644 vpr/src/server/convertutils.cpp create mode 100644 vpr/src/server/convertutils.h create mode 100644 vpr/src/server/gateio.cpp create mode 100644 vpr/src/server/gateio.h create mode 100644 vpr/src/server/gtkcomboboxhelper.cpp create mode 100644 vpr/src/server/gtkcomboboxhelper.h create mode 100644 vpr/src/server/pathhelper.cpp create mode 
100644 vpr/src/server/pathhelper.h create mode 100644 vpr/src/server/serverupdate.cpp create mode 100644 vpr/src/server/serverupdate.h create mode 100644 vpr/src/server/task.cpp create mode 100644 vpr/src/server/task.h create mode 100644 vpr/src/server/taskresolver.cpp create mode 100644 vpr/src/server/taskresolver.h create mode 100644 vpr/src/server/telegrambuffer.cpp create mode 100644 vpr/src/server/telegrambuffer.h create mode 100644 vpr/src/server/telegramframe.h create mode 100644 vpr/src/server/telegramheader.cpp create mode 100644 vpr/src/server/telegramheader.h create mode 100644 vpr/src/server/telegramoptions.h create mode 100644 vpr/src/server/telegramparser.cpp create mode 100644 vpr/src/server/telegramparser.h create mode 100644 vpr/src/server/zlibutils.cpp create mode 100644 vpr/src/server/zlibutils.h create mode 100644 vpr/thirdparty/sockpp/.editorconfig create mode 100644 vpr/thirdparty/sockpp/.gitattributes create mode 100644 vpr/thirdparty/sockpp/.gitignore create mode 100644 vpr/thirdparty/sockpp/.hgeol create mode 100644 vpr/thirdparty/sockpp/.travis.yml create mode 100644 vpr/thirdparty/sockpp/CHANGELOG.md create mode 100644 vpr/thirdparty/sockpp/CMakeLists.txt create mode 100644 vpr/thirdparty/sockpp/CONTRIBUTING.md create mode 100644 vpr/thirdparty/sockpp/Doxyfile create mode 100644 vpr/thirdparty/sockpp/LICENSE create mode 100644 vpr/thirdparty/sockpp/README.md create mode 100755 vpr/thirdparty/sockpp/buildtst.sh create mode 100644 vpr/thirdparty/sockpp/cmake/sockppConfig.cmake create mode 100644 vpr/thirdparty/sockpp/cmake/version.h.in create mode 100644 vpr/thirdparty/sockpp/conanfile.py create mode 100755 vpr/thirdparty/sockpp/devenv.sh create mode 100644 vpr/thirdparty/sockpp/doc/CMakeLists.txt create mode 100644 vpr/thirdparty/sockpp/doc/Doxyfile.cmake create mode 100644 vpr/thirdparty/sockpp/examples/CMakeLists.txt create mode 100644 vpr/thirdparty/sockpp/examples/linux/CMakeLists.txt create mode 100644 
vpr/thirdparty/sockpp/examples/linux/canrecv.cpp create mode 100644 vpr/thirdparty/sockpp/examples/linux/cantime.cpp create mode 100644 vpr/thirdparty/sockpp/examples/tcp/CMakeLists.txt create mode 100644 vpr/thirdparty/sockpp/examples/tcp/tcp6echo.cpp create mode 100644 vpr/thirdparty/sockpp/examples/tcp/tcp6echosvr.cpp create mode 100644 vpr/thirdparty/sockpp/examples/tcp/tcpecho.cpp create mode 100644 vpr/thirdparty/sockpp/examples/tcp/tcpechomt.cpp create mode 100644 vpr/thirdparty/sockpp/examples/tcp/tcpechosvr.cpp create mode 100644 vpr/thirdparty/sockpp/examples/tcp/tcpechotest.cpp create mode 100644 vpr/thirdparty/sockpp/examples/udp/CMakeLists.txt create mode 100644 vpr/thirdparty/sockpp/examples/udp/udp6echo.cpp create mode 100644 vpr/thirdparty/sockpp/examples/udp/udpecho.cpp create mode 100644 vpr/thirdparty/sockpp/examples/udp/udpechosvr.cpp create mode 100644 vpr/thirdparty/sockpp/examples/unix/CMakeLists.txt create mode 100644 vpr/thirdparty/sockpp/examples/unix/undgramecho.cpp create mode 100644 vpr/thirdparty/sockpp/examples/unix/undgramechosvr.cpp create mode 100644 vpr/thirdparty/sockpp/examples/unix/unecho.cpp create mode 100644 vpr/thirdparty/sockpp/examples/unix/unechosvr.cpp create mode 100644 vpr/thirdparty/sockpp/examples/unix/unechotest.cpp create mode 100644 vpr/thirdparty/sockpp/include/sockpp/acceptor.h create mode 100644 vpr/thirdparty/sockpp/include/sockpp/can_address.h create mode 100644 vpr/thirdparty/sockpp/include/sockpp/can_frame.h create mode 100644 vpr/thirdparty/sockpp/include/sockpp/can_socket.h create mode 100644 vpr/thirdparty/sockpp/include/sockpp/connector.h create mode 100644 vpr/thirdparty/sockpp/include/sockpp/datagram_socket.h create mode 100644 vpr/thirdparty/sockpp/include/sockpp/exception.h create mode 100644 vpr/thirdparty/sockpp/include/sockpp/inet6_address.h create mode 100644 vpr/thirdparty/sockpp/include/sockpp/inet_address.h create mode 100644 vpr/thirdparty/sockpp/include/sockpp/platform.h create mode 100644 
vpr/thirdparty/sockpp/include/sockpp/result.h create mode 100644 vpr/thirdparty/sockpp/include/sockpp/sock_address.h create mode 100644 vpr/thirdparty/sockpp/include/sockpp/socket.h create mode 100644 vpr/thirdparty/sockpp/include/sockpp/stream_socket.h create mode 100644 vpr/thirdparty/sockpp/include/sockpp/tcp6_acceptor.h create mode 100644 vpr/thirdparty/sockpp/include/sockpp/tcp6_connector.h create mode 100644 vpr/thirdparty/sockpp/include/sockpp/tcp6_socket.h create mode 100644 vpr/thirdparty/sockpp/include/sockpp/tcp_acceptor.h create mode 100644 vpr/thirdparty/sockpp/include/sockpp/tcp_connector.h create mode 100644 vpr/thirdparty/sockpp/include/sockpp/tcp_socket.h create mode 100644 vpr/thirdparty/sockpp/include/sockpp/udp6_socket.h create mode 100644 vpr/thirdparty/sockpp/include/sockpp/udp_socket.h create mode 100644 vpr/thirdparty/sockpp/include/sockpp/unix_acceptor.h create mode 100644 vpr/thirdparty/sockpp/include/sockpp/unix_address.h create mode 100644 vpr/thirdparty/sockpp/include/sockpp/unix_connector.h create mode 100644 vpr/thirdparty/sockpp/include/sockpp/unix_dgram_socket.h create mode 100644 vpr/thirdparty/sockpp/include/sockpp/unix_stream_socket.h create mode 100644 vpr/thirdparty/sockpp/src/CMakeLists.txt create mode 100644 vpr/thirdparty/sockpp/src/acceptor.cpp create mode 100644 vpr/thirdparty/sockpp/src/connector.cpp create mode 100644 vpr/thirdparty/sockpp/src/datagram_socket.cpp create mode 100644 vpr/thirdparty/sockpp/src/exception.cpp create mode 100644 vpr/thirdparty/sockpp/src/inet6_address.cpp create mode 100644 vpr/thirdparty/sockpp/src/inet_address.cpp create mode 100644 vpr/thirdparty/sockpp/src/linux/can_address.cpp create mode 100644 vpr/thirdparty/sockpp/src/linux/can_socket.cpp create mode 100644 vpr/thirdparty/sockpp/src/result.cpp create mode 100644 vpr/thirdparty/sockpp/src/socket.cpp create mode 100644 vpr/thirdparty/sockpp/src/stream_socket.cpp create mode 100644 vpr/thirdparty/sockpp/src/unix/unix_address.cpp create 
mode 100644 vpr/thirdparty/sockpp/tests/unit/CMakeLists.txt create mode 100644 vpr/thirdparty/sockpp/tests/unit/catch2_version.h create mode 100644 vpr/thirdparty/sockpp/tests/unit/test_acceptor.cpp create mode 100644 vpr/thirdparty/sockpp/tests/unit/test_connector.cpp create mode 100644 vpr/thirdparty/sockpp/tests/unit/test_datagram_socket.cpp create mode 100644 vpr/thirdparty/sockpp/tests/unit/test_inet6_address.cpp create mode 100644 vpr/thirdparty/sockpp/tests/unit/test_inet_address.cpp create mode 100644 vpr/thirdparty/sockpp/tests/unit/test_result.cpp create mode 100644 vpr/thirdparty/sockpp/tests/unit/test_socket.cpp create mode 100644 vpr/thirdparty/sockpp/tests/unit/test_stream_socket.cpp create mode 100644 vpr/thirdparty/sockpp/tests/unit/test_tcp_socket.cpp create mode 100644 vpr/thirdparty/sockpp/tests/unit/test_unix_address.cpp create mode 100644 vpr/thirdparty/sockpp/tests/unit/test_unix_dgram_socket.cpp create mode 100644 vpr/thirdparty/sockpp/tests/unit/test_unix_stream_socket.cpp create mode 100644 vpr/thirdparty/sockpp/tests/unit/unit_tests.cpp create mode 100755 vpr/thirdparty/sockpp/travis_build.sh create mode 100755 vpr/thirdparty/sockpp/travis_install_catch2.sh diff --git a/libs/EXTERNAL/libtatum/libtatum/tatum/TimingReporter.cpp b/libs/EXTERNAL/libtatum/libtatum/tatum/TimingReporter.cpp index 609b0c0b03e..bda7eb4e8b1 100644 --- a/libs/EXTERNAL/libtatum/libtatum/tatum/TimingReporter.cpp +++ b/libs/EXTERNAL/libtatum/libtatum/tatum/TimingReporter.cpp @@ -99,6 +99,15 @@ void TimingReporter::report_timing_setup(std::ostream& os, report_timing(os, paths); } +void TimingReporter::report_timing_setup(std::vector& paths, + std::ostream& os, + const SetupTimingAnalyzer& setup_analyzer, + size_t npaths, bool usePathElementSeparator) const { + paths = path_collector_.collect_worst_setup_timing_paths(timing_graph_, setup_analyzer, npaths); + + report_timing(os, paths, usePathElementSeparator); +} + void TimingReporter::report_timing_hold(std::string 
filename, const HoldTimingAnalyzer& hold_analyzer, size_t npaths) const { @@ -114,6 +123,16 @@ void TimingReporter::report_timing_hold(std::ostream& os, report_timing(os, paths); } +void TimingReporter::report_timing_hold(std::vector& paths, + std::ostream& os, + const HoldTimingAnalyzer& hold_analyzer, + size_t npaths, + bool usePathElementSeparator) const { + paths = path_collector_.collect_worst_hold_timing_paths(timing_graph_, hold_analyzer, npaths); + + report_timing(os, paths, usePathElementSeparator); +} + void TimingReporter::report_skew_setup(std::string filename, const SetupTimingAnalyzer& setup_analyzer, size_t nworst) const { @@ -195,7 +214,7 @@ void TimingReporter::report_unconstrained_hold(std::ostream& os, */ void TimingReporter::report_timing(std::ostream& os, - const std::vector& paths) const { + const std::vector& paths, bool usePathElementSeparator) const { tatum::OsFormatGuard flag_guard(os); os << "#Timing report of worst " << paths.size() << " path(s)\n"; @@ -206,14 +225,14 @@ void TimingReporter::report_timing(std::ostream& os, size_t i = 0; for(const auto& path : paths) { os << "#Path " << ++i << "\n"; - report_timing_path(os, path); + report_timing_path(os, path, usePathElementSeparator); os << "\n"; } os << "#End of timing report\n"; } -void TimingReporter::report_timing_path(std::ostream& os, const TimingPath& timing_path) const { +void TimingReporter::report_timing_path(std::ostream& os, const TimingPath& timing_path, bool usePathElementSeparator) const { std::string divider = "--------------------------------------------------------------------------------"; TimingPathInfo path_info = timing_path.path_info(); @@ -252,7 +271,7 @@ void TimingReporter::report_timing_path(std::ostream& os, const TimingPath& timi arr_path = report_timing_clock_launch_subpath(os, path_helper, timing_path.clock_launch_path(), path_info.launch_domain(), path_info.type()); - arr_path = report_timing_data_arrival_subpath(os, path_helper, 
timing_path.data_arrival_path(), path_info.launch_domain(), path_info.type(), arr_path); + arr_path = report_timing_data_arrival_subpath(os, path_helper, timing_path.data_arrival_path(), path_info.launch_domain(), path_info.type(), arr_path, usePathElementSeparator); { //Final arrival time @@ -584,7 +603,8 @@ Time TimingReporter::report_timing_data_arrival_subpath(std::ostream& os, const TimingSubPath& subpath, DomainId domain, TimingType timing_type, - Time path) const { + Time path, + bool usePathElementSeparator) const { { //Input constraint @@ -615,7 +635,7 @@ Time TimingReporter::report_timing_data_arrival_subpath(std::ostream& os, //Launch data for(const TimingPathElem& path_elem : subpath.elements()) { - + if (usePathElementSeparator) os << "el{\n"; //Ask the application for a detailed breakdown of the edge delays auto delay_breakdown = name_resolver_.edge_delay_breakdown(path_elem.incomming_edge(), delay_type); if (!delay_breakdown.components.empty()) { @@ -644,6 +664,7 @@ Time TimingReporter::report_timing_data_arrival_subpath(std::ostream& os, path = path_elem.tag().time(); path_helper.update_print_path(os, point, path); + if (usePathElementSeparator) os << "el}\n"; } return path; } diff --git a/libs/EXTERNAL/libtatum/libtatum/tatum/TimingReporter.hpp b/libs/EXTERNAL/libtatum/libtatum/tatum/TimingReporter.hpp index 1569aa6d704..a3a381fe6a7 100644 --- a/libs/EXTERNAL/libtatum/libtatum/tatum/TimingReporter.hpp +++ b/libs/EXTERNAL/libtatum/libtatum/tatum/TimingReporter.hpp @@ -64,9 +64,11 @@ class TimingReporter { public: void report_timing_setup(std::string filename, const tatum::SetupTimingAnalyzer& setup_analyzer, size_t npaths=REPORT_TIMING_DEFAULT_NPATHS) const; void report_timing_setup(std::ostream& os, const tatum::SetupTimingAnalyzer& setup_analyzer, size_t npaths=REPORT_TIMING_DEFAULT_NPATHS) const; + void report_timing_setup(std::vector& paths, std::ostream& os, const tatum::SetupTimingAnalyzer& setup_analyzer, size_t 
npaths=REPORT_TIMING_DEFAULT_NPATHS, bool usePathElementSeparator=false) const; void report_timing_hold(std::string filename, const tatum::HoldTimingAnalyzer& hold_analyzer, size_t npaths=REPORT_TIMING_DEFAULT_NPATHS) const; void report_timing_hold(std::ostream& os, const tatum::HoldTimingAnalyzer& hold_analyzer, size_t npaths=REPORT_TIMING_DEFAULT_NPATHS) const; + void report_timing_hold(std::vector& paths, std::ostream& os, const tatum::HoldTimingAnalyzer& hold_analyzer, size_t npaths=REPORT_TIMING_DEFAULT_NPATHS, bool usePathElementSeparator=false) const; void report_skew_setup(std::string filename, const tatum::SetupTimingAnalyzer& setup_analyzer, size_t nworst=REPORT_TIMING_DEFAULT_NPATHS) const; void report_skew_setup(std::ostream& os, const tatum::SetupTimingAnalyzer& setup_analyzer, size_t nworst=REPORT_TIMING_DEFAULT_NPATHS) const; @@ -94,9 +96,9 @@ class TimingReporter { }; private: - void report_timing(std::ostream& os, const std::vector& paths) const; + void report_timing(std::ostream& os, const std::vector& paths, bool usePathElementSeparator = false) const; - void report_timing_path(std::ostream& os, const TimingPath& path) const; + void report_timing_path(std::ostream& os, const TimingPath& path, bool usePathElementSeparator = false) const; void report_unconstrained(std::ostream& os, const NodeType type, const detail::TagRetriever& tag_retriever) const; @@ -123,7 +125,8 @@ class TimingReporter { const TimingSubPath& subpath, DomainId domain, TimingType timing_type, - Time path) const; + Time path, + bool usePathElementSeparator = false) const; Time report_timing_data_required_element(std::ostream& os, detail::ReportTimingPathHelper& path_helper, diff --git a/vpr/CMakeLists.txt b/vpr/CMakeLists.txt index 371d11f39bc..68222ee5f89 100644 --- a/vpr/CMakeLists.txt +++ b/vpr/CMakeLists.txt @@ -15,6 +15,12 @@ set(VPR_PGO_DATA_DIR "." 
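The new `usePathElementSeparator` flag threaded through the reporter wraps each reported path element in `el{` / `el}` marker lines so a client can map report text back to individual path elements. A minimal self-contained sketch of that wrapping idea (function and names here are illustrative, not the real tatum API):

```cpp
#include <cassert>
#include <sstream>
#include <string>
#include <vector>

// When the separator is enabled, every path element in the textual timing
// report is bracketed by "el{" / "el}" lines, mirroring what
// report_timing_data_arrival_subpath() emits in this patch.
std::string render_path(const std::vector<std::string>& elements,
                        bool use_element_separator) {
    std::ostringstream os;
    for (const std::string& el : elements) {
        if (use_element_separator) os << "el{\n";
        os << el << "\n";
        if (use_element_separator) os << "el}\n";
    }
    return os.str();
}
```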
CACHE PATH "Where to store and retrieve PGO data") #Handle graphics setup set(GRAPHICS_DEFINES "") +#sockpp +set(SOCKPP_BUILD_SHARED OFF CACHE BOOL "Override default value" FORCE) +set(SOCKPP_BUILD_STATIC ON CACHE BOOL "Override default value" FORCE) +add_subdirectory(thirdparty/sockpp) +set(THIRDPARTY_INCLUDE_DIRS thirdparty/sockpp/include) + if (VPR_USE_EZGL STREQUAL "on") message(STATUS "EZGL: graphics enabled") set( @@ -59,7 +65,7 @@ add_library(libvpr STATIC ) -target_include_directories(libvpr PUBLIC ${LIB_INCLUDE_DIRS}) +target_include_directories(libvpr PUBLIC ${LIB_INCLUDE_DIRS} ${THIRDPARTY_INCLUDE_DIRS}) #VPR_ANALYTIC_PLACE is inisitalized in the root CMakeLists #Check Eigen dependency @@ -89,6 +95,8 @@ target_link_libraries(libvpr libargparse libpugixml librrgraph + sockpp-static + -lz ) #link graphics library only when graphics set to on diff --git a/vpr/src/base/SetupVPR.cpp b/vpr/src/base/SetupVPR.cpp index a93b648f87b..9b0f1d06f3e 100644 --- a/vpr/src/base/SetupVPR.cpp +++ b/vpr/src/base/SetupVPR.cpp @@ -38,6 +38,8 @@ static void SetupAnnealSched(const t_options& Options, static void SetupRouterOpts(const t_options& Options, t_router_opts* RouterOpts); static void SetupNocOpts(const t_options& Options, t_noc_opts* NocOpts); +static void SetupServerOpts(const t_options& Options, + t_server_opts* ServerOpts); static void SetupRoutingArch(const t_arch& Arch, t_det_routing_arch* RoutingArch); static void SetupTiming(const t_options& Options, const bool TimingEnabled, t_timing_inf* Timing); static void SetupSwitches(const t_arch& Arch, @@ -99,6 +101,7 @@ void SetupVPR(const t_options* Options, t_router_opts* RouterOpts, t_analysis_opts* AnalysisOpts, t_noc_opts* NocOpts, + t_server_opts* ServerOpts, t_det_routing_arch* RoutingArch, std::vector** PackerRRGraphs, std::vector& Segments, @@ -144,6 +147,7 @@ void SetupVPR(const t_options* Options, SetupAnalysisOpts(*Options, *AnalysisOpts); SetupPowerOpts(*Options, PowerOpts, Arch); SetupNocOpts(*Options, 
NocOpts); + SetupServerOpts(*Options, ServerOpts); if (readArchFile == true) { vtr::ScopedStartFinishTimer t("Loading Architecture Description"); @@ -744,6 +748,11 @@ static void SetupNocOpts(const t_options& Options, t_noc_opts* NocOpts) { return; } +static void SetupServerOpts(const t_options& Options, t_server_opts* ServerOpts) { + ServerOpts->is_server_mode_enabled = Options.is_server_mode_enabled; + ServerOpts->port_num = Options.server_port_num; +} + static void find_ipin_cblock_switch_index(const t_arch& Arch, int& wire_to_arch_ipin_switch, int& wire_to_arch_ipin_switch_between_dice) { for (auto cb_switch_name_index = 0; cb_switch_name_index < (int)Arch.ipin_cblock_switch_name.size(); cb_switch_name_index++) { int ipin_cblock_switch_index = UNDEFINED; diff --git a/vpr/src/base/SetupVPR.h b/vpr/src/base/SetupVPR.h index 7f7bb7105ea..97499eb8614 100644 --- a/vpr/src/base/SetupVPR.h +++ b/vpr/src/base/SetupVPR.h @@ -20,6 +20,7 @@ void SetupVPR(const t_options* Options, t_router_opts* RouterOpts, t_analysis_opts* AnalysisOpts, t_noc_opts* NocOpts, + t_server_opts* ServerOpts, t_det_routing_arch* RoutingArch, std::vector** PackerRRGraphs, std::vector& Segments, diff --git a/vpr/src/base/read_options.cpp b/vpr/src/base/read_options.cpp index fc01cd4bb96..3c035ed4a1a 100644 --- a/vpr/src/base/read_options.cpp +++ b/vpr/src/base/read_options.cpp @@ -1326,6 +1326,16 @@ argparse::ArgumentParser create_arg_parser(std::string prog_name, t_options& arg .action(argparse::Action::STORE_TRUE) .default_value("off"); + stage_grp.add_argument(args.is_server_mode_enabled, "--server") + .help("Run in server mode") + .action(argparse::Action::STORE_TRUE) + .default_value("off"); + + stage_grp.add_argument(args.server_port_num, "--port") + .help("Server port number") + .default_value("60555") + .show_in(argparse::ShowIn::HELP_ONLY); + stage_grp.epilog( "If none of the stage options are specified, all stages are run.\n" "Analysis is always run after routing, unless the 
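The server options flow is deliberately simple: `read_options` registers `--server` (default off) and `--port` (default 60555), and `SetupServerOpts` copies the parsed values into `t_server_opts`. A stripped-down sketch of that copy, with the `argparse::ArgValue` wrappers replaced by plain fields for illustration:

```cpp
#include <cassert>

// Simplified stand-ins for t_options / t_server_opts; the real structs wrap
// each field in argparse::ArgValue.
struct t_options_sketch {
    bool is_server_mode_enabled = false; // --server, default "off"
    int server_port_num = 60555;         // --port, default "60555"
};

struct t_server_opts_sketch {
    bool is_server_mode_enabled = false;
    int port_num = -1;
};

// Mirrors SetupServerOpts(): a straight field-by-field copy.
void setup_server_opts(const t_options_sketch& options,
                       t_server_opts_sketch* server_opts) {
    server_opts->is_server_mode_enabled = options.is_server_mode_enabled;
    server_opts->port_num = options.server_port_num;
}
```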
implementation\n" diff --git a/vpr/src/base/read_options.h b/vpr/src/base/read_options.h index e6476ba151e..631248d615f 100644 --- a/vpr/src/base/read_options.h +++ b/vpr/src/base/read_options.h @@ -75,6 +75,10 @@ struct t_options { argparse::ArgValue allow_dangling_combinational_nodes; argparse::ArgValue terminate_if_timing_fails; + /* Server options */ + argparse::ArgValue is_server_mode_enabled; + argparse::ArgValue server_port_num; + /* Atom netlist options */ argparse::ArgValue absorb_buffer_luts; argparse::ArgValue const_gen_inference; diff --git a/vpr/src/base/vpr_api.cpp b/vpr/src/base/vpr_api.cpp index 47733286088..d153a26aa39 100644 --- a/vpr/src/base/vpr_api.cpp +++ b/vpr/src/base/vpr_api.cpp @@ -291,6 +291,7 @@ void vpr_init_with_options(const t_options* options, t_vpr_setup* vpr_setup, t_a &vpr_setup->RouterOpts, &vpr_setup->AnalysisOpts, &vpr_setup->NocOpts, + &vpr_setup->ServerOpts, &vpr_setup->RoutingArch, &vpr_setup->PackerRRGraph, vpr_setup->Segments, @@ -1045,7 +1046,7 @@ void vpr_init_graphics(const t_vpr_setup& vpr_setup, const t_arch& arch, bool is /* Startup X graphics */ init_graphics_state(vpr_setup.ShowGraphics, vpr_setup.GraphPause, vpr_setup.RouterOpts.route_type, vpr_setup.SaveGraphics, - vpr_setup.GraphicsCommands, is_flat); + vpr_setup.GraphicsCommands, is_flat, vpr_setup.ServerOpts.is_server_mode_enabled, vpr_setup.ServerOpts.port_num); if (vpr_setup.ShowGraphics || vpr_setup.SaveGraphics || !vpr_setup.GraphicsCommands.empty()) alloc_draw_structs(&arch); } @@ -1257,6 +1258,7 @@ void vpr_setup_vpr(t_options* Options, t_router_opts* RouterOpts, t_analysis_opts* AnalysisOpts, t_noc_opts* NocOpts, + t_server_opts* ServerOpts, t_det_routing_arch* RoutingArch, std::vector** PackerRRGraph, std::vector& Segments, @@ -1281,6 +1283,7 @@ void vpr_setup_vpr(t_options* Options, RouterOpts, AnalysisOpts, NocOpts, + ServerOpts, RoutingArch, PackerRRGraph, Segments, diff --git a/vpr/src/base/vpr_api.h b/vpr/src/base/vpr_api.h index 
b4c89e25051..038d8e20068 100644 --- a/vpr/src/base/vpr_api.h +++ b/vpr/src/base/vpr_api.h @@ -177,6 +177,7 @@ void vpr_setup_vpr(t_options* Options, t_router_opts* RouterOpts, t_analysis_opts* AnalysisOpts, t_noc_opts* NocOpts, + t_server_opts* ServerOpts, t_det_routing_arch* RoutingArch, std::vector** PackerRRGraph, std::vector& Segments, diff --git a/vpr/src/base/vpr_context.h b/vpr/src/base/vpr_context.h index 18420590f2e..5a39c3be30c 100644 --- a/vpr/src/base/vpr_context.h +++ b/vpr/src/base/vpr_context.h @@ -32,6 +32,10 @@ #include "noc_storage.h" #include "noc_traffic_flows.h" #include "noc_routing.h" +#include "gateio.h" +#include "taskresolver.h" +#include "tatum/report/TimingPath.hpp" + /** * @brief A Context is collection of state relating to a particular part of VPR @@ -549,6 +553,75 @@ struct NocContext : public Context { std::unique_ptr noc_flows_router; }; +/** + * @brief State relating to server mode + * + * This should contain only data structures that + * relate to server state. 
+ */ +class ServerContext : public Context { + public: + const server::GateIO& gateIO() const { return gate_io_; } + server::GateIO& mutable_gateIO() { return gate_io_; } + + const server::TaskResolver& task_resolver() const { return task_resolver_; } + server::TaskResolver& mutable_task_resolver() { return task_resolver_; } + + void set_crit_paths(const std::vector& crit_paths) { crit_paths_ = crit_paths; } + const std::vector& crit_paths() const { return crit_paths_; } + + void set_critical_path_num(int critical_path_num) { critical_path_num_ = critical_path_num; } + int critical_path_num() const { return critical_path_num_; } + + void set_path_type(const std::string& path_type) { path_type_ = path_type; } + const std::string& path_type() const { return path_type_; } + + void set_crit_path_elements(std::map> crit_path_element_indexes) { crit_path_element_indexes_ = crit_path_element_indexes; } + std::map> crit_path_element_indexes() const { return crit_path_element_indexes_; } + + void set_draw_crit_path_contour(bool draw_crit_path_contour) { draw_crit_path_contour_ = draw_crit_path_contour; } + bool draw_crit_path_contour() const { return draw_crit_path_contour_; } + + private: + server::GateIO gate_io_; + server::TaskResolver task_resolver_; + + /** + * @brief Stores the critical path items. + * + * This value is used when rendering the critical path by the selected index. + * Once calculated upon request, it provides the value for a specific critical path + * to be rendered upon user request. + */ + std::vector crit_paths_; + + /** + * @brief Stores the number of critical paths items. + * + * This value is used to generate a critical path report with a certain number of items, + * which will be sent back to the client upon request. + */ + int critical_path_num_ = 1; + + /** + * @brief Stores the critical path type. + * + * This value is used to generate a specific type of critical path report and send + * it back to the client upon request. 
+ */ + std::string path_type_ = "setup"; + + /** + * @brief Stores the selected critical path elements. + * + * This value is used to render the selected critical path elements upon client request. + * The std::map key plays role of path index, where the element indexes are stored as std::set. + */ + std::map> crit_path_element_indexes_; + + bool draw_crit_path_contour_ = false; +}; + /** * @brief This object encapsulates VPR's state. * @@ -632,6 +705,9 @@ class VprContext : public Context { const PackingMultithreadingContext& packing_multithreading() const { return packing_multithreading_; } PackingMultithreadingContext& mutable_packing_multithreading() { return packing_multithreading_; } + const ServerContext& server() const { return server_; } + ServerContext& mutable_server() { return server_; } + private: DeviceContext device_; @@ -648,6 +724,8 @@ class VprContext : public Context { FloorplanningContext constraints_; NocContext noc_; + ServerContext server_; + PackingMultithreadingContext packing_multithreading_; }; diff --git a/vpr/src/base/vpr_types.h b/vpr/src/base/vpr_types.h index d36c0cf887d..58b217bc55c 100644 --- a/vpr/src/base/vpr_types.h +++ b/vpr/src/base/vpr_types.h @@ -1844,6 +1844,12 @@ struct t_TokenPair { struct t_lb_type_rr_node; /* Defined in pack_types.h */ +/// @brief Stores settings for VPR server mode +struct t_server_opts { + bool is_server_mode_enabled = false; + int port_num = -1; +}; + ///@brief Store settings for VPR struct t_vpr_setup { bool TimingEnabled; ///* PackerRRGraph; std::vector Segments; ///graphics_commands = graphics_commands; draw_state->is_flat = is_flat; + if (enable_server) { + /* Set up a server and its callback to be triggered at 100ms intervals by the timer's timeout event. 
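`ServerContext` follows the existing VPR context convention: a `const` accessor paired with a `mutable_*` accessor, with the context owned by `VprContext`. A stripped-down sketch of that convention:

```cpp
#include <cassert>

// Minimal stand-in for the new ServerContext (one field shown).
class ServerContextSketch {
  public:
    int critical_path_num() const { return critical_path_num_; }
    void set_critical_path_num(int n) { critical_path_num_ = n; }

  private:
    int critical_path_num_ = 1; // default used for the critical path report
};

// Mirrors the VprContext pattern: server() for readers, mutable_server()
// for the few call sites allowed to modify server state.
class VprContextSketch {
  public:
    const ServerContextSketch& server() const { return server_; }
    ServerContextSketch& mutable_server() { return server_; }

  private:
    ServerContextSketch server_;
};
```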
*/ + server::GateIO& gate_io = g_vpr_ctx.mutable_server().mutable_gateIO(); + if (!gate_io.isRunning()) { + gate_io.start(port_num); + g_timeout_add(/*interval_ms*/ 100, server::update, &application); + } + } #else //Suppress unused parameter warnings (void)show_graphics_val; diff --git a/vpr/src/draw/draw.h b/vpr/src/draw/draw.h index 1c39f12f49b..8785ae78c3f 100644 --- a/vpr/src/draw/draw.h +++ b/vpr/src/draw/draw.h @@ -56,7 +56,9 @@ void init_graphics_state(bool show_graphics_val, enum e_route_type route_type, bool save_graphics, std::string graphics_commands, - bool is_flat); + bool is_flat, + bool server, + int port_num); /* Allocates the structures needed to draw the placement and routing.*/ void alloc_draw_structs(const t_arch* arch); diff --git a/vpr/src/server/bytearray.h b/vpr/src/server/bytearray.h new file mode 100644 index 00000000000..d532c33b66a --- /dev/null +++ b/vpr/src/server/bytearray.h @@ -0,0 +1,82 @@ +#ifndef BYTEARRAY_H +#define BYTEARRAY_H + +#include +#include +#include +#include + +namespace comm { + +/** + * @brief ByteArray as a simple wrapper over std::vector +*/ +class ByteArray : public std::vector { +public: + static const std::size_t DEFAULT_SIZE_HINT = 1024; + + ByteArray(const char* data) + : std::vector(reinterpret_cast(data), + reinterpret_cast(data + std::strlen(data))) + {} + + ByteArray(const char* data, std::size_t size) + : std::vector(reinterpret_cast(data), + reinterpret_cast(data + size)) + {} + + ByteArray(std::size_t sizeHint = DEFAULT_SIZE_HINT) { + reserve(sizeHint); + } + + template + ByteArray(Iterator first, Iterator last): std::vector(first, last) {} + + void append(const ByteArray& appendix) { + insert(end(), appendix.begin(), appendix.end()); + } + + void append(uint8_t b) { + push_back(b); + } + + std::size_t findSequence(const char* sequence, std::size_t sequenceSize) { + const std::size_t mSize = size(); + if (mSize >= sequenceSize) { + for (std::size_t i = 0; i <= mSize - sequenceSize; ++i) { + bool found 
= true; + for (std::size_t j = 0; j < sequenceSize; ++j) { + if (at(i + j) != sequence[j]) { + found = false; + break; + } + } + if (found) { + return i; + } + } + } + return std::size_t(-1); + } + + std::string to_string() const { + return std::string(reinterpret_cast(this->data()), this->size()); + } + + uint32_t calcCheckSum() { + return calcCheckSum(*this); + } + + template + static uint32_t calcCheckSum(const T& iterable) { + uint32_t sum = 0; + for (uint8_t c : iterable) { + sum += static_cast(c); + } + return sum; + } +}; + +} // namespace comm + +#endif // BYTEARRAY_H diff --git a/vpr/src/server/commconstants.h b/vpr/src/server/commconstants.h new file mode 100644 index 00000000000..6845b80e32f --- /dev/null +++ b/vpr/src/server/commconstants.h @@ -0,0 +1,38 @@ +#ifndef COMMCONSTS_H +#define COMMCONSTS_H + +namespace comm { + +constexpr const char* KEY_JOB_ID = "JOB_ID"; +constexpr const char* KEY_CMD = "CMD"; +constexpr const char* KEY_OPTIONS = "OPTIONS"; +constexpr const char* KEY_DATA = "DATA"; +constexpr const char* KEY_STATUS = "STATUS"; +constexpr const char* ECHO_DATA = "ECHO"; + +const unsigned char ZLIB_COMPRESSOR_ID = 'z'; +const unsigned char NONE_COMPRESSOR_ID = '\x0'; + +constexpr const char* OPTION_PATH_NUM = "path_num"; +constexpr const char* OPTION_PATH_TYPE = "path_type"; +constexpr const char* OPTION_DETAILS_LEVEL = "details_level"; +constexpr const char* OPTION_IS_FLOAT_ROUTING = "is_flat_routing"; +constexpr const char* OPTION_PATH_ELEMENTS = "path_elements"; +constexpr const char* OPTION_HIGHTLIGHT_MODE = "hight_light_mode"; +constexpr const char* OPTION_DRAW_PATH_CONTOUR = "draw_path_contour"; + +constexpr const char* CRITICAL_PATH_ITEMS_SELECTION_NONE = "none"; + +// please don't change values as they are involved in socket communication +constexpr const char* KEY_SETUP_PATH_LIST = "setup"; +constexpr const char* KEY_HOLD_PATH_LIST = "hold"; +// + +enum CMD { + CMD_GET_PATH_LIST_ID=0, + CMD_DRAW_PATH_ID +}; + +} // namespace comm + 
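The two `ByteArray` helpers above are a naive subsequence search (returning `size_t(-1)` when the sequence is absent) and an additive byte checksum used by the telegram framing. A condensed free-function version of the same logic:

```cpp
#include <cassert>
#include <cstdint>
#include <cstring>
#include <vector>

// Equivalent of ByteArray::findSequence(): scan for the first occurrence of
// a byte sequence; std::size_t(-1) signals "not found".
std::size_t find_sequence(const std::vector<uint8_t>& buf,
                          const char* seq, std::size_t seq_size) {
    if (buf.size() < seq_size) return std::size_t(-1);
    for (std::size_t i = 0; i <= buf.size() - seq_size; ++i) {
        if (std::memcmp(buf.data() + i, seq, seq_size) == 0) return i;
    }
    return std::size_t(-1);
}

// Equivalent of ByteArray::calcCheckSum(): a plain sum of byte values.
uint32_t calc_checksum(const std::vector<uint8_t>& buf) {
    uint32_t sum = 0;
    for (uint8_t c : buf) sum += c;
    return sum;
}
```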
+#endif diff --git a/vpr/src/server/convertutils.cpp b/vpr/src/server/convertutils.cpp new file mode 100644 index 00000000000..be73c4590e3 --- /dev/null +++ b/vpr/src/server/convertutils.cpp @@ -0,0 +1,77 @@ +#include "convertutils.h" +#include +#include + +std::optional tryConvertToInt(const std::string& str) +{ + std::optional result; + + std::istringstream iss(str); + int intValue; + if (iss >> intValue) { + // Check if there are no any characters left in the stream + char remaining; + if (!(iss >> remaining)) { + result = intValue; + } + } + return result; +} + +namespace { +std::string getPrettyStrFromFloat(float value) +{ + std::ostringstream ss; + ss << std::fixed << std::setprecision(2) << value; // Set precision to 2 digit after the decimal point + return ss.str(); +} +} // namespace + +std::string getPrettyDurationStrFromMs(int64_t durationMs) +{ + std::string result; + if (durationMs >= 1000) { + result = getPrettyStrFromFloat(durationMs/1000.0f) + " sec"; + } else { + result = std::to_string(durationMs); + result += " ms"; + } + return result; +} + +std::string getPrettySizeStrFromBytesNum(int64_t bytesNum) +{ + std::string result; + if (bytesNum >= 1024*1024*1024) { + result = getPrettyStrFromFloat(bytesNum/float(1024*1024*1024)) + "Gb"; + } else if (bytesNum >= 1024*1024) { + result = getPrettyStrFromFloat(bytesNum/float(1024*1024)) + "Mb"; + } else if (bytesNum >= 1024) { + result = getPrettyStrFromFloat(bytesNum/float(1024)) + "Kb"; + } else { + result = std::to_string(bytesNum) + "bytes"; + } + return result; +} + + +std::string getTruncatedMiddleStr(const std::string& src, std::size_t num) { + std::string result; + static std::size_t minimalStringSizeToTruncate = 20; + if (num < minimalStringSizeToTruncate) { + num = minimalStringSizeToTruncate; + } + static std::string middlePlaceHolder("..."); + const std::size_t srcSize = src.size(); + if (srcSize > num) { + int prefixNum = num/2; + int suffixNum = num/2 - middlePlaceHolder.size(); + 
result.append(std::move(src.substr(0, prefixNum))); + result.append(middlePlaceHolder); + result.append(std::move(src.substr(srcSize - suffixNum))); + } else { + result = src; + } + + return result; +} diff --git a/vpr/src/server/convertutils.h b/vpr/src/server/convertutils.h new file mode 100644 index 00000000000..f06f84114f0 --- /dev/null +++ b/vpr/src/server/convertutils.h @@ -0,0 +1,15 @@ +#ifndef CONVERTUTILS_H +#define CONVERTUTILS_H + +#include +#include +#include + +const std::size_t DEFAULT_PRINT_STRING_MAX_NUM = 100; + +std::optional tryConvertToInt(const std::string&); +std::string getPrettyDurationStrFromMs(int64_t durationMs); +std::string getPrettySizeStrFromBytesNum(int64_t bytesNum); +std::string getTruncatedMiddleStr(const std::string& src, std::size_t num = DEFAULT_PRINT_STRING_MAX_NUM); + +#endif // CONVERTUTILS_H diff --git a/vpr/src/server/gateio.cpp b/vpr/src/server/gateio.cpp new file mode 100644 index 00000000000..4c8c2fe9278 --- /dev/null +++ b/vpr/src/server/gateio.cpp @@ -0,0 +1,264 @@ +#include "gateio.h" +#include "telegramparser.h" +#include "telegrambuffer.h" +#include "commconstants.h" +#include "convertutils.h" + +#include "sockpp/tcp6_acceptor.h" + +#include + +namespace server { + +GateIO::GateIO() +{ + m_isRunning.store(false); +} + +GateIO::~GateIO() +{ + stop(); +} + +void GateIO::start(int portNum) +{ + if (!m_isRunning.load()) { + m_portNum = portNum; + std::cout << "starting server from thread=" << std::this_thread::get_id() << std::endl; + m_isRunning.store(true); + m_thread = std::thread(&GateIO::startListening, this); + } +} + +void GateIO::stop() +{ + if (m_isRunning.load()) { + m_isRunning.store(false); + if (m_thread.joinable()) { + m_thread.join(); + } + } +} + +void GateIO::takeRecievedTasks(std::vector& tasks) +{ + std::unique_lock lock(m_tasksMutex); + if (m_receivedTasks.size() > 0) { + m_logger.queue(LogLevel::Debug, "take", m_receivedTasks.size(), "num of received tasks"); + } + std::swap(tasks, 
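The convertutils helpers print sizes with two decimals and a unit suffix, and shorten long strings by cutting out the middle. A behavior sketch, simplified to the Kb case and without the 20-character minimum the real `getTruncatedMiddleStr` enforces:

```cpp
#include <cassert>
#include <cstdint>
#include <iomanip>
#include <sstream>
#include <string>

// Sketch of getPrettySizeStrFromBytesNum(); the real helper also handles
// the Mb and Gb ranges with the same two-decimal formatting.
std::string pretty_size(int64_t bytes) {
    std::ostringstream ss;
    ss << std::fixed << std::setprecision(2);
    if (bytes >= 1024) ss << bytes / 1024.0 << "Kb";
    else ss << bytes << "bytes";
    return ss.str();
}

// Sketch of getTruncatedMiddleStr(): keep a prefix and suffix, replace the
// middle with "..." so the result is exactly `num` characters long.
std::string truncate_middle(const std::string& src, std::size_t num) {
    static const std::string placeholder = "...";
    if (src.size() <= num) return src;
    std::size_t prefix = num / 2;
    std::size_t suffix = num / 2 - placeholder.size();
    return src.substr(0, prefix) + placeholder + src.substr(src.size() - suffix);
}
```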
m_receivedTasks); +} + +void GateIO::moveTasksToSendQueue(std::vector& tasks) +{ + std::unique_lock lock(m_tasksMutex); + for (TaskPtr& task: tasks) { + if (task->hasError()) { + m_logger.queue(LogLevel::Debug, "task id=", task->jobId(), "finished with error", task->error(), "moving it to send queue"); + } else { + m_logger.queue(LogLevel::Debug, "task id=", task->jobId(), "finished with success, moving it to send queue"); + } + + m_sendTasks.push_back(std::move(task)); + } +} + +void GateIO::startListening() +{ +#ifdef ENABLE_CLIENT_ALIVE_TRACKER + std::unique_ptr clientAliveTrackerPtr = + std::make_unique(std::chrono::milliseconds{5000}, std::chrono::milliseconds{20000}); +#else + std::unique_ptr clientAliveTrackerPtr; +#endif + + static const std::string echoData{comm::ECHO_DATA}; + + comm::TelegramBuffer telegramBuff; + std::vector telegramFrames; + + sockpp::initialize(); + sockpp::tcp6_acceptor tcpServer(m_portNum); + tcpServer.set_non_blocking(true); + + const std::size_t chunkMaxBytesNum = 2*1024*1024; // 2Mb + + if (tcpServer) { + m_logger.queue(LogLevel::Info, "open server, port=", m_portNum); + } else { + m_logger.queue(LogLevel::Info, "fail to open server, port=", m_portNum); + } + + std::optional clientOpt; + + /// comm event loop + while(m_isRunning.load()) { + bool isCommunicationProblemDetected = false; + + /// check for the client connection + if (!clientOpt) { + sockpp::inet6_address peer; + sockpp::tcp6_socket client = tcpServer.accept(&peer); + if (client) { + m_logger.queue(LogLevel::Info, "client", client.address().to_string() , "connection accepted"); + client.set_non_blocking(true); + clientOpt = std::move(client); + + if (clientAliveTrackerPtr) { + clientAliveTrackerPtr->reset(); + } + } + } + + if (clientOpt) { + sockpp::tcp6_socket& client = clientOpt.value(); + + /// handle sending response + { + std::unique_lock lock(m_tasksMutex); + + if (!m_sendTasks.empty()) { + const TaskPtr& task = m_sendTasks.at(0); + try { + std::size_t 
bytesToSend = std::min(chunkMaxBytesNum, task->responseBuffer().size()); + std::size_t bytesActuallyWritten = client.write_n(task->responseBuffer().data(), bytesToSend); + if (bytesActuallyWritten <= task->origReponseBytesNum()) { + task->chopNumSentBytesFromResponseBuffer(bytesActuallyWritten); + m_logger.queue(LogLevel::Detail, + "sent chunk:", getPrettySizeStrFromBytesNum(bytesActuallyWritten), + "from", getPrettySizeStrFromBytesNum(task->origReponseBytesNum()), + "left:", getPrettySizeStrFromBytesNum(task->responseBuffer().size())); + if (clientAliveTrackerPtr) { + clientAliveTrackerPtr->onClientActivity(); + } + } + } catch(...) { + m_logger.queue(LogLevel::Detail, "error while writing chunk"); + isCommunicationProblemDetected = true; + } + + if (task->isResponseFullySent()) { + m_logger.queue(LogLevel::Info, + "sent:", task->telegramHeader().info(), task->info()); + } + } + + // remove reported tasks + std::size_t tasksBeforeRemoving = m_sendTasks.size(); + + auto partitionIter = std::partition(m_sendTasks.begin(), m_sendTasks.end(), + [](const TaskPtr& task) { return !task->isResponseFullySent(); }); + m_sendTasks.erase(partitionIter, m_sendTasks.end()); + bool removingTookPlace = tasksBeforeRemoving != m_sendTasks.size(); + if (!m_sendTasks.empty() && removingTookPlace) { + m_logger.queue(LogLevel::Detail, "left tasks num to send ", m_sendTasks.size()); + } + } // release lock + + /// handle receiving + std::string receivedMessage; + receivedMessage.resize(chunkMaxBytesNum); + std::size_t bytesActuallyReceived{0}; + try { + bytesActuallyReceived = client.read_n(&receivedMessage[0], chunkMaxBytesNum); + } catch(...) 
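The send path writes at most one fixed-size chunk (2 MB in this patch) per loop iteration and chops the sent prefix off the response buffer, so a single huge response cannot starve the rest of the event loop. The byte accounting, reduced to counts and assuming each `write_n` call sends its full chunk:

```cpp
#include <algorithm>
#include <cassert>
#include <cstddef>

// How many event-loop iterations a response of `response_bytes` needs when at
// most `chunk_max` bytes are written per iteration (idealized: no partial or
// failed writes).
std::size_t count_send_iterations(std::size_t response_bytes,
                                  std::size_t chunk_max) {
    std::size_t iterations = 0;
    while (response_bytes > 0) {
        std::size_t sent = std::min(chunk_max, response_bytes);
        response_bytes -= sent; // chopNumSentBytesFromResponseBuffer()
        ++iterations;
    }
    return iterations;
}
```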
{ + m_logger.queue(LogLevel::Error, "fail to recieving"); + isCommunicationProblemDetected = true; + } + + if ((bytesActuallyReceived > 0) && (bytesActuallyReceived <= chunkMaxBytesNum)) { + m_logger.queue(LogLevel::Detail, "received chunk:", getPrettySizeStrFromBytesNum(bytesActuallyReceived)); + telegramBuff.append(comm::ByteArray{receivedMessage.c_str(), bytesActuallyReceived}); + if (clientAliveTrackerPtr) { + clientAliveTrackerPtr->onClientActivity(); + } + } + + /// handle telegrams + telegramFrames.clear(); + telegramBuff.takeTelegramFrames(telegramFrames); + for (const comm::TelegramFramePtr& telegramFrame: telegramFrames) { + // process received data + std::string message{telegramFrame->data.to_string()}; + bool isEchoTelegram = false; + if (clientAliveTrackerPtr) { + if ((message.size() == echoData.size()) && (message == echoData)) { + m_logger.queue(LogLevel::Detail, "received", echoData); + clientAliveTrackerPtr->onClientActivity(); + isEchoTelegram = true; + } + } + + if (!isEchoTelegram) { + m_logger.queue(LogLevel::Detail, "received composed", getPrettySizeStrFromBytesNum(message.size()), ":", getTruncatedMiddleStr(message)); + std::optional jobIdOpt = comm::TelegramParser::tryExtractFieldJobId(message); + std::optional cmdOpt = comm::TelegramParser::tryExtractFieldCmd(message); + std::optional optionsOpt; + comm::TelegramParser::tryExtractFieldOptions(message, optionsOpt); + if (jobIdOpt && cmdOpt && optionsOpt) { + TaskPtr task = std::make_unique(jobIdOpt.value(), cmdOpt.value(), optionsOpt.value()); + const comm::TelegramHeader& header = telegramFrame->header; + m_logger.queue(LogLevel::Info, + "received:", header.info(), task->info(/*skipDuration*/true)); + std::unique_lock lock(m_tasksMutex); + m_receivedTasks.push_back(std::move(task)); + } else { + m_logger.queue(LogLevel::Error, "broken telegram detected, fail extract options from", message); + } + } + } + + // forward telegramBuffer errors + std::vector telegramBufferErrors; + 
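The receive path appends each socket chunk to a buffer and then extracts every complete frame it can (`takeTelegramFrames`), since a TCP read may deliver a partial telegram or several at once. The real `TelegramBuffer` parses a binary `TelegramHeader`; this sketch uses `'\n'` as the frame terminator purely to show the accumulate-then-extract pattern:

```cpp
#include <cassert>
#include <string>
#include <vector>

// Accumulate raw chunks; hand out only complete frames, keeping any trailing
// partial frame buffered for the next read.
struct FrameBuffer {
    std::string pending;

    void append(const std::string& chunk) { pending += chunk; }

    std::vector<std::string> take_frames() {
        std::vector<std::string> frames;
        std::size_t pos;
        while ((pos = pending.find('\n')) != std::string::npos) {
            frames.push_back(pending.substr(0, pos));
            pending.erase(0, pos + 1);
        }
        return frames;
    }
};
```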
telegramBuff.takeErrors(telegramBufferErrors); + for (const std::string& error: telegramBufferErrors) { + m_logger.queue(LogLevel::Info, error); + } + + /// handle client alive tracker + if (clientAliveTrackerPtr) { + if (clientAliveTrackerPtr->isTimeToSentEcho()) { + comm::TelegramHeader echoHeader = comm::TelegramHeader::constructFromData(echoData); + std::string message = echoHeader.buffer().to_string(); + message.append(echoData); + try { + std::size_t bytesActuallySent = client.write(message); + if (bytesActuallySent == message.size()) { + m_logger.queue(LogLevel::Detail, "sent", echoData); + clientAliveTrackerPtr->onEchoSent(); + } + } catch(...) { + m_logger.queue(LogLevel::Debug, "failed to send", echoData); + isCommunicationProblemDetected = true; + } + } + } + + /// handle client alive + if (clientAliveTrackerPtr) { + if (clientAliveTrackerPtr->isClientTimeout()) { + m_logger.queue(LogLevel::Error, "client didn't respond for too long"); + isCommunicationProblemDetected = true; + } + } + + /// handle communication problem + if (isCommunicationProblemDetected) { + clientOpt = std::nullopt; + if (!telegramBuff.empty()) { + m_logger.queue(LogLevel::Debug, "clear telegramBuff"); + telegramBuff.clear(); + } + } + } + + std::this_thread::sleep_for(std::chrono::milliseconds{LOOP_INTERVAL_MS}); + } +} + +void GateIO::printLogs() +{ + m_logger.flush(); +} + +} // namespace server diff --git a/vpr/src/server/gateio.h b/vpr/src/server/gateio.h new file mode 100644 index 00000000000..0cc299023dc --- /dev/null +++ b/vpr/src/server/gateio.h @@ -0,0 +1,149 @@ +#ifndef GATEIO_H +#define GATEIO_H + +#include "task.h" + +#include +#include +#include +#include +#include +#include +#include +#include + +namespace server { + +/** + * @brief Implements the socket communication layer with the outside world. + * Operable only with a single client.
As soon as client connection is detected + * it begins listening on the specified port number for incoming client requests, + * collects and encapsulates them into tasks. + * The incoming tasks are extracted and handled by the top-level logic (TaskResolver). + * Once the tasks are resolved by the TaskResolver, they are returned + * to be sent back to the client as a response. + * + * Note: + * - gateio is not started automatically upon creation; you have to use the 'start' method with the port number. + * - The gateio runs in a separate thread to ensure smooth IO behavior. + * - The socket is initialized in a non-blocking mode to function properly in a multithreaded environment. +*/ +class GateIO +{ + class ClientAliveTracker { + public: + ClientAliveTracker(const std::chrono::milliseconds& echoIntervalMs, const std::chrono::milliseconds& clientTimeoutMs) + : m_echoIntervalMs(echoIntervalMs), m_clientTimeoutMs(clientTimeoutMs) { + reset(); + } + ClientAliveTracker()=default; + + void onClientActivity() { + m_lastClientActivityTime = std::chrono::high_resolution_clock::now(); + } + + void onEchoSent() { + m_lastEchoSentTime = std::chrono::high_resolution_clock::now(); + } + + bool isTimeToSentEcho() const { + return (durationSinceLastClientActivityMs() > m_echoIntervalMs) && (durationSinceLastEchoSentMs() > m_echoIntervalMs); + } + bool isClientTimeout() const { return durationSinceLastClientActivityMs() > m_clientTimeoutMs; } + + void reset() { + onClientActivity(); + } + + private: + std::chrono::high_resolution_clock::time_point m_lastClientActivityTime; + std::chrono::high_resolution_clock::time_point m_lastEchoSentTime; + std::chrono::milliseconds m_echoIntervalMs; + std::chrono::milliseconds m_clientTimeoutMs; + + std::chrono::milliseconds durationSinceLastClientActivityMs() const { + auto now = std::chrono::high_resolution_clock::now(); + return std::chrono::duration_cast(now - m_lastClientActivityTime); + } + std::chrono::milliseconds 
durationSinceLastEchoSentMs() const { + auto now = std::chrono::high_resolution_clock::now(); + return std::chrono::duration_cast(now - m_lastEchoSentTime); + } + }; + + enum class LogLevel: int { + Error, + Info, + Detail, + Debug + }; + + class TLogger { + public: + TLogger() { + m_logLevel = static_cast(LogLevel::Info); + } + ~TLogger() {} + + template + void queue(LogLevel logLevel, Args&&... args) { + if (static_cast(logLevel) <= m_logLevel) { + std::unique_lock lock(m_logStreamMutex); + if (logLevel == LogLevel::Error) { + m_logStream << "ERROR:"; + } + ((m_logStream << ' ' << std::forward(args)), ...); + m_logStream << "\n"; + } + } + + void flush() { + std::unique_lock lock(m_logStreamMutex); + if (!m_logStream.str().empty()) { + std::cout << m_logStream.str(); + m_logStream.str(""); + } + } + + private: + std::stringstream m_logStream; + std::mutex m_logStreamMutex; + std::atomic m_logLevel; + }; + +public: + explicit GateIO(); + ~GateIO(); + + const int LOOP_INTERVAL_MS = 100; + + bool isRunning() const { return m_isRunning.load(); } + + void takeRecievedTasks(std::vector&); + void moveTasksToSendQueue(std::vector&); + + void printLogs(); // called from main thread + + void start(int portNum); + void stop(); + +private: + int m_portNum = -1; + + std::atomic m_isRunning; // is true when started + + std::thread m_thread; // thread to execute socket IO work + + std::mutex m_tasksMutex; + std::vector m_receivedTasks; // tasks from client (requests) + std::vector m_sendTasks; // task to client (reponses) + + TLogger m_logger; + + void startListening(); // thread worker function +}; + +} // namespace server + +#endif // GATEIO_H + diff --git a/vpr/src/server/gtkcomboboxhelper.cpp b/vpr/src/server/gtkcomboboxhelper.cpp new file mode 100644 index 00000000000..a96d4b93347 --- /dev/null +++ b/vpr/src/server/gtkcomboboxhelper.cpp @@ -0,0 +1,50 @@ +#include "gtkcomboboxhelper.h" +#include + +namespace { + +/** + * @brief Helper function to retrieve the count of items 
in a GTK combobox. + */ +gint get_items_count(gpointer combo_box) { + GtkComboBoxText* combo = GTK_COMBO_BOX_TEXT(combo_box); + + // Get the model of the combo box + GtkTreeModel* model = gtk_combo_box_get_model(GTK_COMBO_BOX(combo)); + + // Get the number of items (indexes) in the combo box + gint count = gtk_tree_model_iter_n_children(model, NULL); + return count; +} + +} // namespace + +/** + * @brief Helper function to retrieve the index of an item by its text. + * Returns -1 if the item with the specified text is absent. + */ +gint get_item_index_by_text(gpointer combo_box, const gchar* target_item) { + gint result_index = -1; + GtkComboBoxText* combo = GTK_COMBO_BOX_TEXT(combo_box); + + // Get the model of the combo box + GtkTreeModel* model = gtk_combo_box_get_model(GTK_COMBO_BOX(combo)); + + gchar* current_item_text = nullptr; + + for (gint index=0; index + +/** + * @brief Helper function to retrieve the index of an item by its text. + * Returns -1 if the item with the specified text is absent. + */ +gint get_item_index_by_text(gpointer combo_box, const gchar* target_item); + +#endif // GTKCOMBOBOXHELPER_H diff --git a/vpr/src/server/pathhelper.cpp b/vpr/src/server/pathhelper.cpp new file mode 100644 index 00000000000..3d8cafe9457 --- /dev/null +++ b/vpr/src/server/pathhelper.cpp @@ -0,0 +1,96 @@ +#include "pathhelper.h" +#include "globals.h" +#include "vpr_net_pins_matrix.h" +#include "VprTimingGraphResolver.h" +#include "tatum/TimingReporter.hpp" + +#include "draw_types.h" +#include "draw_global.h" +#include "net_delay.h" +#include "concrete_timing_info.h" + +#include "timing_info_fwd.h" +#include "AnalysisDelayCalculator.h" +#include "vpr_types.h" + +#include +#include + +namespace server { + +namespace { + +/** + * @brief helper function to calculate the setup critical path with specified parameters. 
+ */ +CritPathsResult generate_setup_timing_report(const SetupTimingInfo& timing_info, const AnalysisDelayCalculator& delay_calc, const t_analysis_opts& analysis_opts, bool is_flat, bool usePathElementSeparator) { + auto& timing_ctx = g_vpr_ctx.timing(); + auto& atom_ctx = g_vpr_ctx.atom(); + + VprTimingGraphResolver resolver(atom_ctx.nlist, atom_ctx.lookup, *timing_ctx.graph, delay_calc, is_flat); + resolver.set_detail_level(analysis_opts.timing_report_detail); + + tatum::TimingReporter timing_reporter(resolver, *timing_ctx.graph, *timing_ctx.constraints); + + std::vector paths; + std::stringstream ss; + timing_reporter.report_timing_setup(paths, ss, *timing_info.setup_analyzer(), analysis_opts.timing_report_npaths, usePathElementSeparator); + return CritPathsResult{paths, ss.str()}; +} + +/** + * @brief helper function to calculate the hold critical path with specified parameters. + */ +CritPathsResult generate_hold_timing_report(const HoldTimingInfo& timing_info, const AnalysisDelayCalculator& delay_calc, const t_analysis_opts& analysis_opts, bool is_flat, bool usePathElementSeparator) { + auto& timing_ctx = g_vpr_ctx.timing(); + auto& atom_ctx = g_vpr_ctx.atom(); + + VprTimingGraphResolver resolver(atom_ctx.nlist, atom_ctx.lookup, *timing_ctx.graph, delay_calc, is_flat); + resolver.set_detail_level(analysis_opts.timing_report_detail); + + tatum::TimingReporter timing_reporter(resolver, *timing_ctx.graph, *timing_ctx.constraints); + + std::vector paths; + std::stringstream ss; + timing_reporter.report_timing_hold(paths, ss, *timing_info.hold_analyzer(), analysis_opts.timing_report_npaths, usePathElementSeparator); + return CritPathsResult{paths, ss.str()}; +} + +} // namespace + +/** + * @brief Unified helper function to calculate the critical path with specified parameters. 
+ */ +CritPathsResult calcCriticalPath(const std::string& type, int critPathNum, e_timing_report_detail detailsLevel, bool is_flat_routing, bool usePathElementSeparator) +{ + // shortcuts + auto& atom_ctx = g_vpr_ctx.atom(); + + //Load the net delays + const Netlist<>& router_net_list = is_flat_routing ? (const Netlist<>&)g_vpr_ctx.atom().nlist : (const Netlist<>&)g_vpr_ctx.clustering().clb_nlist; + const Netlist<>& net_list = router_net_list; + + NetPinsMatrix net_delay = make_net_pins_matrix(net_list); + load_net_delay_from_routing(net_list, + net_delay); + + //Do final timing analysis + auto analysis_delay_calc = std::make_shared(atom_ctx.nlist, atom_ctx.lookup, net_delay, is_flat_routing); + + e_timing_update_type timing_update_type = e_timing_update_type::AUTO; // FULL, INCREMENTAL, AUTO + auto timing_info = make_setup_hold_timing_info(analysis_delay_calc, timing_update_type); + timing_info->update(); + + t_analysis_opts analysis_opt; + analysis_opt.timing_report_detail = detailsLevel; + analysis_opt.timing_report_npaths = critPathNum; + + if (type == "setup") { + return generate_setup_timing_report(*timing_info, *analysis_delay_calc, analysis_opt, is_flat_routing, usePathElementSeparator); + } else if (type == "hold") { + return generate_hold_timing_report(*timing_info, *analysis_delay_calc, analysis_opt, is_flat_routing, usePathElementSeparator); + } + return CritPathsResult{std::vector(), ""}; +} + +} // namespace server diff --git a/vpr/src/server/pathhelper.h b/vpr/src/server/pathhelper.h new file mode 100644 index 00000000000..211696f54d0 --- /dev/null +++ b/vpr/src/server/pathhelper.h @@ -0,0 +1,31 @@ +#ifndef PATHHELPER_H +#define PATHHELPER_H + +#include +#include +#include + +#include "tatum/report/TimingPath.hpp" +#include "vpr_types.h" + +namespace server { + +/** + * @brief Structure to retain the calculation result of the critical path. + * + * It contains the critical path list and the generated report as a string. 
+*/ +struct CritPathsResult { + bool isValid() const { return !report.empty(); } + std::vector paths; + std::string report; +}; + +/** + * @brief Unified helper function to calculate the critical path with specified parameters. + */ +CritPathsResult calcCriticalPath(const std::string& type, int critPathNum, e_timing_report_detail detailsLevel, bool is_flat_routing, bool usePathElementSeparator); + +} // namespace server + +#endif // PATHHELPER_H diff --git a/vpr/src/server/serverupdate.cpp b/vpr/src/server/serverupdate.cpp new file mode 100644 index 00000000000..09b94443414 --- /dev/null +++ b/vpr/src/server/serverupdate.cpp @@ -0,0 +1,44 @@ +#include "serverupdate.h" +#include "gateio.h" +#include "taskresolver.h" +#include "globals.h" +#include "ezgl/application.hpp" + +#ifndef NO_GRAPHICS + +namespace server { + +gboolean update(gpointer data) { + bool isRunning = g_vpr_ctx.server().gateIO().isRunning(); + if (isRunning) { + // shortcuts + ezgl::application* app = static_cast(data); + GateIO& gate_io = g_vpr_ctx.mutable_server().mutable_gateIO(); + TaskResolver& task_resolver = g_vpr_ctx.mutable_server().mutable_task_resolver(); + + std::vector tasksBuff; + + gate_io.takeRecievedTasks(tasksBuff); + task_resolver.addTasks(tasksBuff); + + bool process_task = task_resolver.update(app); + + tasksBuff.clear(); + task_resolver.takeFinished(tasksBuff); + + gate_io.moveTasksToSendQueue(tasksBuff); + gate_io.printLogs(); + + // Call the redraw method of the application if any of task was processed + if (process_task) { + app->refresh_drawing(); + } + } + + // Return TRUE to keep the timer running, or FALSE to stop it + return isRunning; +} + +} // namespace server + +#endif // NO_GRAPHICS diff --git a/vpr/src/server/serverupdate.h b/vpr/src/server/serverupdate.h new file mode 100644 index 00000000000..17301988b45 --- /dev/null +++ b/vpr/src/server/serverupdate.h @@ -0,0 +1,23 @@ +#ifndef SERVERUPDATE_H +#define SERVERUPDATE_H + +#include + +#ifndef NO_GRAPHICS + 
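The `update` callback above follows the GLib timeout contract noted in its final comment: returning TRUE keeps the periodic timer firing, returning FALSE cancels it, which is why it simply returns `isRunning`. A minimal, GTK-free sketch of that contract (the driver function, its name, and the tick budget are hypothetical, for illustration only):

```cpp
#include <functional>

// Repeatedly invoke a periodic callback, mimicking the GLib timeout contract
// used by server::update(): the timer keeps firing while the callback
// returns true, and stops as soon as it returns false.
bool run_until_stopped(const std::function<bool()>& callback, int max_ticks) {
    for (int tick = 0; tick < max_ticks; ++tick) {
        if (!callback()) {
            return true;  // callback requested cancellation (returned false)
        }
    }
    return false;  // tick budget exhausted while the callback still wanted to run
}
```

In the real code the driver is GLib's timeout source rather than a loop, so once the server stops, `update` returns FALSE and the timer is torn down automatically.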
+namespace server { + +/** + * @brief Main server update callback. + * + * This function is a periodic callback invoked at a fixed interval to manage and handle incoming client requests. + * It acts as the central control point for processing client interactions and orchestrating server-side operations + * within the specified time intervals. + */ +gboolean update(gpointer); + +} // namespace server + +#endif // NO_GRAPHICS + +#endif // SERVERUPDATE_H diff --git a/vpr/src/server/task.cpp b/vpr/src/server/task.cpp new file mode 100644 index 00000000000..0ad89688ac5 --- /dev/null +++ b/vpr/src/server/task.cpp @@ -0,0 +1,111 @@ +#include "task.h" + +#include + +#include "telegrambuffer.h" + +#include "convertutils.h" +#include "commconstants.h" +#include "zlibutils.h" + +namespace server { + +Task::Task(int jobId, int cmd, const std::string& options) +: m_jobId(jobId), m_cmd(cmd), m_options(options) +{ + m_creationTime = std::chrono::high_resolution_clock::now(); +} + +void Task::chopNumSentBytesFromResponseBuffer(std::size_t bytesSentNum) +{ + if (m_responseBuffer.size() >= bytesSentNum) { + m_responseBuffer.erase(0, bytesSentNum); + } else { + m_responseBuffer.clear(); + } + if (m_responseBuffer.empty()) { + m_isResponseFullySent = true; + } +} + +bool Task::optionsMatch(const class std::unique_ptr& other) +{ + if (other->options().size() != m_options.size()) { + return false; + } + return other->options() == m_options; +} + +void Task::fail(const std::string& error) { + m_isFinished = true; + m_error = error; + bakeResponse(); + } +void Task::success(const std::string& result) { + m_result = result; + m_isFinished = true; + bakeResponse(); + } + +std::string Task::info(bool skipDuration) const { + std::stringstream ss; + ss << "task[" + << "id=" << std::to_string(m_jobId) + << ",cmd=" << std::to_string(m_cmd); + if (!skipDuration) { + ss << ",exists=" << getPrettyDurationStrFromMs(timeMsElapsed()); + } + ss << "]"; + return ss.str(); + } + +int64_t 
Task::timeMsElapsed() const + { + auto now = std::chrono::high_resolution_clock::now(); + return std::chrono::duration_cast(now - m_creationTime).count(); +} + +void Task::bakeResponse() +{ + std::stringstream ss; + ss << "{"; + + ss << "\"" << comm::KEY_JOB_ID << "\":\"" << m_jobId << "\","; + ss << "\"" << comm::KEY_CMD << "\":\"" << m_cmd << "\","; + ss << "\"" << comm::KEY_OPTIONS << "\":\"" << m_options << "\","; + if (hasError()) { + ss << "\"" << comm::KEY_DATA << "\":\"" << m_error << "\","; + } else { + ss << "\"" << comm::KEY_DATA << "\":\"" << m_result << "\","; + } + int status = hasError() ? 0 : 1; + ss << "\"" << comm::KEY_STATUS << "\":\"" << status << "\""; + + ss << "}"; + + std::optional bodyOpt; + uint8_t compressorId = comm::NONE_COMPRESSOR_ID; +#ifndef FORCE_DISABLE_ZLIB_TELEGRAM_COMPRESSION + bodyOpt = tryCompress(ss.str()); + if (bodyOpt) { + compressorId = comm::ZLIB_COMPRESSOR_ID; + } +#endif + if (!bodyOpt) { + // fail to compress, use raw + compressorId = comm::NONE_COMPRESSOR_ID; + bodyOpt = std::move(ss.str()); + } + + std::string body = bodyOpt.value(); + m_telegramHeader = comm::TelegramHeader::constructFromData(body, compressorId); + + m_responseBuffer.append(m_telegramHeader.buffer().begin(), m_telegramHeader.buffer().end()); + m_responseBuffer.append(std::move(body)); + body.clear(); + m_origReponseBytesNum = m_responseBuffer.size(); +} + + +} // namespace server + diff --git a/vpr/src/server/task.h b/vpr/src/server/task.h new file mode 100644 index 00000000000..a14fd4228f6 --- /dev/null +++ b/vpr/src/server/task.h @@ -0,0 +1,73 @@ +#ifndef TASK_H +#define TASK_H + +#include +#include +#include + +#include "telegramheader.h" + +namespace server { + +/** + * @brief Implements the server task. + * + * This structure aids in encapsulating the client request, request result, and result status. + * It generates a JSON data structure to be sent back to the client as a response. 
+ */ +class Task { +public: + Task(int jobId, int cmd, const std::string& options = ""); + + Task(const Task&) = delete; + Task& operator=(const Task&) = delete; + + int jobId() const { return m_jobId; } + int cmd() const { return m_cmd; } + + void chopNumSentBytesFromResponseBuffer(std::size_t bytesSentNum); + + bool optionsMatch(const class std::unique_ptr& other); + + const std::string& responseBuffer() const { return m_responseBuffer; } + + bool isFinished() const { return m_isFinished; } + bool hasError() const { return !m_error.empty(); } + const std::string& error() const { return m_error; } + + std::size_t origReponseBytesNum() const { return m_origReponseBytesNum; } + + bool isResponseFullySent() const { return m_isResponseFullySent; } + + void fail(const std::string& error); + void success(const std::string& result = ""); + + std::string info(bool skipDuration = false) const; + + const comm::TelegramHeader& telegramHeader() const { return m_telegramHeader; } + + const std::string& options() const { return m_options; } + +private: + int m_jobId = -1; + int m_cmd = -1; + std::string m_options; + std::string m_result; + std::string m_error; + bool m_isFinished = false; + comm::TelegramHeader m_telegramHeader; + std::string m_responseBuffer; + std::size_t m_origReponseBytesNum = 0; + bool m_isResponseFullySent = false; + + std::chrono::high_resolution_clock::time_point m_creationTime; + + int64_t timeMsElapsed() const; + + void bakeResponse(); +}; +using TaskPtr = std::unique_ptr; + +} // namespace server + +#endif // TASK_H diff --git a/vpr/src/server/taskresolver.cpp b/vpr/src/server/taskresolver.cpp new file mode 100644 index 00000000000..56729dbcf0f --- /dev/null +++ b/vpr/src/server/taskresolver.cpp @@ -0,0 +1,165 @@ +#include "taskresolver.h" + +#include "commconstants.h" +#include "globals.h" +#include "pathhelper.h" +#include "telegramoptions.h" +#include "telegramparser.h" +#include "gtkcomboboxhelper.h" + +#include + +namespace server { + +void 
TaskResolver::addTask(TaskPtr& newTask) +{ + // pre-process task before adding, where we could quickly detect failure scenarios + for (const auto& task: m_tasks) { + if (task->cmd() == newTask->cmd()) { + if (task->optionsMatch(newTask)) { + std::string msg = "similar task is already in execution, rejecting new " + newTask->info() + " and waiting for old " + task->info() + " to finish"; + newTask->fail(msg); + } else { + // handle case when task has same cmd but different options + if (newTask->jobId() > task->jobId()) { + std::string msg = "old " + task->info() + " is overridden by a new " + newTask->info(); + task->fail(msg); + } + } + } + } + + // add task + m_tasks.push_back(std::move(newTask)); +} + +void TaskResolver::addTasks(std::vector<TaskPtr>& tasks) +{ + for (TaskPtr& task: tasks) { + addTask(task); + } +} + +void TaskResolver::takeFinished(std::vector<TaskPtr>& result) +{ + for (auto it=m_tasks.begin(); it != m_tasks.end();) { + TaskPtr& task = *it; + if (task->isFinished()) { + result.push_back(std::move(task)); + it = m_tasks.erase(it); + } else { + ++it; + } + } +} + +e_timing_report_detail TaskResolver::getDetailsLevelEnum(const std::string& pathDetailsLevelStr) const { + e_timing_report_detail detailsLevel = e_timing_report_detail::NETLIST; + if (pathDetailsLevelStr == "netlist") { + detailsLevel = e_timing_report_detail::NETLIST; + } else if (pathDetailsLevelStr == "aggregated") { + detailsLevel = e_timing_report_detail::AGGREGATED; + } else if (pathDetailsLevelStr == "detailed") { + detailsLevel = e_timing_report_detail::DETAILED_ROUTING; + } else if (pathDetailsLevelStr == "debug") { + detailsLevel = e_timing_report_detail::DEBUG; + } else { + std::cerr << "unhandled option " << pathDetailsLevelStr << std::endl; + } + return detailsLevel; +} + +bool TaskResolver::update(ezgl::application* app) +{ + bool has_processed_task = false; + for (auto& task: m_tasks) { + if (!task->isFinished()) { + switch(task->cmd()) { + case comm::CMD_GET_PATH_LIST_ID: { +
processGetPathListTask(app, task); + has_processed_task = true; + break; + } + case comm::CMD_DRAW_PATH_ID: { + processDrawCriticalPathTask(app, task); + has_processed_task = true; + break; + } + default: break; + } + } + } + + return has_processed_task; +} + +void TaskResolver::processGetPathListTask(ezgl::application*, const TaskPtr& task) +{ + TelegramOptions options{task->options(), {comm::OPTION_PATH_NUM, comm::OPTION_PATH_TYPE, comm::OPTION_DETAILS_LEVEL, comm::OPTION_IS_FLOAT_ROUTING}}; + if (!options.hasErrors()) { + ServerContext& server_ctx = g_vpr_ctx.mutable_server(); // shortcut + + server_ctx.set_crit_path_elements(std::map>{}); // reset selection if path list options has changed + + // read options + const int nCriticalPathNum = options.getInt(comm::OPTION_PATH_NUM, 1); + const std::string pathType = options.getString(comm::OPTION_PATH_TYPE); + const std::string detailsLevel = options.getString(comm::OPTION_DETAILS_LEVEL); + const bool isFlat = options.getBool(comm::OPTION_IS_FLOAT_ROUTING, false); + + // calculate critical path depending on options and store result in server context + CritPathsResult crit_paths_result = calcCriticalPath(pathType, nCriticalPathNum, getDetailsLevelEnum(detailsLevel), isFlat, /*usePathElementSeparator*/true); + + // setup context + server_ctx.set_path_type(pathType); + server_ctx.set_critical_path_num(nCriticalPathNum); + server_ctx.set_crit_paths(crit_paths_result.paths); + + if (crit_paths_result.isValid()) { + std::string msg{crit_paths_result.report}; + task->success(msg); + } else { + std::string msg{"Critical paths report is empty"}; + std::cerr << msg << std::endl; + task->fail(msg); + } + } else { + std::string msg{"options errors in get crit path list telegram: " + options.errorsStr()}; + std::cerr << msg << std::endl; + task->fail(msg); + } +} + +void TaskResolver::processDrawCriticalPathTask(ezgl::application* app, const TaskPtr& task) +{ + TelegramOptions options{task->options(), 
{comm::OPTION_PATH_ELEMENTS, comm::OPTION_HIGHTLIGHT_MODE, comm::OPTION_DRAW_PATH_CONTOUR}}; + if (!options.hasErrors()) { + ServerContext& server_ctx = g_vpr_ctx.mutable_server(); // shortcut + + const std::map> path_elements = options.getMapOfSets(comm::OPTION_PATH_ELEMENTS); + const std::string highLightMode = options.getString(comm::OPTION_HIGHTLIGHT_MODE); + const bool drawPathContour = options.getBool(comm::OPTION_DRAW_PATH_CONTOUR, false); + + // set critical path elements to render + server_ctx.set_crit_path_elements(path_elements); + server_ctx.set_draw_crit_path_contour(drawPathContour); + + // update gtk UI + GtkComboBox* toggle_crit_path = GTK_COMBO_BOX(app->get_object("ToggleCritPath")); + gint highLightModeIndex = get_item_index_by_text(toggle_crit_path, highLightMode.c_str()); + if (highLightModeIndex != -1) { + gtk_combo_box_set_active(toggle_crit_path, highLightModeIndex); + task->success(); + } else { + std::string msg{"cannot find ToggleCritPath qcombobox index for item " + highLightMode}; + std::cerr << msg << std::endl; + task->fail(msg); + } + } else { + std::string msg{"options errors in highlight crit path telegram: " + options.errorsStr()}; + std::cerr << msg << std::endl; + task->fail(msg); + } +} + +} // namespace server diff --git a/vpr/src/server/taskresolver.h b/vpr/src/server/taskresolver.h new file mode 100644 index 00000000000..afb1ddcd266 --- /dev/null +++ b/vpr/src/server/taskresolver.h @@ -0,0 +1,51 @@ +#ifndef TASKRESOLVER_H +#define TASKRESOLVER_H + +#include "task.h" +#include "vpr_types.h" + +#include + +namespace ezgl { + class application; +} + +namespace server { + +/** + * @brief Resolve server task. + * + * Process and resolve server task, store result and status for processed task. 
+*/ + +class TaskResolver { +public: + TaskResolver()=default; + ~TaskResolver()=default; + + int tasksNum() const { return m_tasks.size(); } + + /* add tasks to process */ + void addTask(TaskPtr&); + void addTasks(std::vector&); + + /* process tasks */ + bool update(ezgl::application*); + + /* extract finished tasks */ + void takeFinished(std::vector&); + + const std::vector& tasks() const { return m_tasks; } + +private: + std::vector m_tasks; + + void processGetPathListTask(ezgl::application*, const TaskPtr&); + void processDrawCriticalPathTask(ezgl::application*, const TaskPtr&); + + e_timing_report_detail getDetailsLevelEnum(const std::string& pathDetailsLevelStr) const; +}; + +} // namespace server + +#endif // TASKRESOLVER_H diff --git a/vpr/src/server/telegrambuffer.cpp b/vpr/src/server/telegrambuffer.cpp new file mode 100644 index 00000000000..fa811a9e667 --- /dev/null +++ b/vpr/src/server/telegrambuffer.cpp @@ -0,0 +1,74 @@ +#include "telegrambuffer.h" + +namespace comm { + +void TelegramBuffer::append(const ByteArray& bytes) +{ + m_rawBuffer.append(bytes); +} + +bool TelegramBuffer::checkRawBuffer() +{ + std::size_t signatureStartIndex = m_rawBuffer.findSequence(TelegramHeader::SIGNATURE, TelegramHeader::SIGNATURE_SIZE); + if (signatureStartIndex != std::size_t(-1)) { + if (signatureStartIndex != 0) { + m_rawBuffer.erase(m_rawBuffer.begin(), m_rawBuffer.begin()+signatureStartIndex); + } + return true; + } + return false; +} + +void TelegramBuffer::takeTelegramFrames(std::vector& result) +{ + if (m_rawBuffer.size() <= TelegramHeader::size()) { + return; + } + + bool mayContainFullTelegram = true; + while(mayContainFullTelegram) { + mayContainFullTelegram = false; + if (!m_headerOpt) { + if (checkRawBuffer()) { + TelegramHeader header(m_rawBuffer); + if (header.isValid()) { + m_headerOpt = std::move(header); + } + } + } + + if (m_headerOpt) { + const TelegramHeader& header = m_headerOpt.value(); + std::size_t wholeTelegramSize = TelegramHeader::size() + 
header.bodyBytesNum(); + if (m_rawBuffer.size() >= wholeTelegramSize) { + ByteArray data(m_rawBuffer.begin() + TelegramHeader::size(), m_rawBuffer.begin() + wholeTelegramSize); + uint32_t actualCheckSum = data.calcCheckSum(); + if (actualCheckSum == header.bodyCheckSum()) { + TelegramFramePtr telegramFramePtr = std::make_shared<TelegramFrame>(TelegramFrame{header, std::move(data)}); + data.clear(); + result.push_back(telegramFramePtr); + } else { + m_errors.push_back("wrong checksum " + std::to_string(actualCheckSum) + " for " + header.info() + ", dropping this chunk"); + } + m_rawBuffer.erase(m_rawBuffer.begin(), m_rawBuffer.begin() + wholeTelegramSize); + m_headerOpt.reset(); + mayContainFullTelegram = true; + } + } + } +} + +std::vector<TelegramFramePtr> TelegramBuffer::takeTelegramFrames() +{ + std::vector<TelegramFramePtr> result; + takeTelegramFrames(result); + return result; +} + +void TelegramBuffer::takeErrors(std::vector<std::string>& errors) +{ + errors.clear(); + std::swap(errors, m_errors); +} + +} // namespace comm diff --git a/vpr/src/server/telegrambuffer.h b/vpr/src/server/telegrambuffer.h new file mode 100644 index 00000000000..fa8f25f0d6d --- /dev/null +++ b/vpr/src/server/telegrambuffer.h @@ -0,0 +1,48 @@ +#ifndef TELEGRAMBUFFER_H +#define TELEGRAMBUFFER_H + +#include "bytearray.h" +#include "telegramframe.h" + +#include +#include +#include +#include + +namespace comm { + +/** + * @brief Implements a telegram buffer as a wrapper over ByteArray + * + * It aggregates received bytes and returns only well-formed frames, separated by the telegram delimiter byte.
+*/ +class TelegramBuffer +{ + static const std::size_t DEFAULT_SIZE_HINT = 1024; + +public: + TelegramBuffer(std::size_t sizeHint = DEFAULT_SIZE_HINT): m_rawBuffer(sizeHint) {} + ~TelegramBuffer()=default; + + bool empty() { return m_rawBuffer.empty(); } + + void clear() { m_rawBuffer.clear(); } + + void append(const ByteArray&); + void takeTelegramFrames(std::vector&); + std::vector takeTelegramFrames(); + void takeErrors(std::vector&); + + const ByteArray& data() const { return m_rawBuffer; } + +private: + ByteArray m_rawBuffer; + std::vector m_errors; + std::optional m_headerOpt; + + bool checkRawBuffer(); +}; + +} // namespace comm + +#endif // TELEGRAMBUFFER_H diff --git a/vpr/src/server/telegramframe.h b/vpr/src/server/telegramframe.h new file mode 100644 index 00000000000..b942efa10ec --- /dev/null +++ b/vpr/src/server/telegramframe.h @@ -0,0 +1,19 @@ +#ifndef TELEGRAMFRAME_H +#define TELEGRAMFRAME_H + +#include "telegramheader.h" +#include "bytearray.h" + +#include + +namespace comm { + +struct TelegramFrame { + TelegramHeader header; + ByteArray data; +}; +using TelegramFramePtr = std::shared_ptr; + +} // namespace comm + +#endif // TELEGRAMFRAME_H diff --git a/vpr/src/server/telegramheader.cpp b/vpr/src/server/telegramheader.cpp new file mode 100644 index 00000000000..cb95e4ced5e --- /dev/null +++ b/vpr/src/server/telegramheader.cpp @@ -0,0 +1,76 @@ +#include "telegramheader.h" +#include "convertutils.h" + +#include + +namespace comm { + +TelegramHeader::TelegramHeader(uint32_t length, uint32_t checkSum, uint8_t compressorId) + : m_bodyBytesNum(length) + , m_bodyCheckSum(checkSum) + , m_compressorId(compressorId) +{ + m_buffer.resize(TelegramHeader::size()); + + // Write signature into a buffer + std::memcpy(m_buffer.data(), TelegramHeader::SIGNATURE, TelegramHeader::SIGNATURE_SIZE); + + // Write the length into the buffer in big-endian byte order + std::memcpy(m_buffer.data() + TelegramHeader::LENGTH_OFFSET, &length, TelegramHeader::LENGTH_SIZE); + + // 
Write the checksum into the buffer in big-endian byte order + std::memcpy(m_buffer.data() + TelegramHeader::CHECKSUM_OFFSET, &checkSum, TelegramHeader::CHECKSUM_SIZE); + + // Write compressor id + std::memcpy(m_buffer.data() + TelegramHeader::COMPRESSORID_OFFSET, &compressorId, TelegramHeader::COMPRESSORID_SIZE); + + m_isValid = true; +} + +TelegramHeader::TelegramHeader(const ByteArray& buffer) +{ + m_buffer.resize(TelegramHeader::size()); + + bool hasError = false; + + if (buffer.size() >= TelegramHeader::size()) { + // Check the signature to ensure that this is a valid header + if (std::memcmp(buffer.data(), TelegramHeader::SIGNATURE, TelegramHeader::SIGNATURE_SIZE)) { + hasError = true; + } + + // Read the length from the buffer in big-endian byte order + std::memcpy(&m_bodyBytesNum, buffer.data() + TelegramHeader::LENGTH_OFFSET, TelegramHeader::LENGTH_SIZE); + + // Read the checksum from the buffer in big-endian byte order + std::memcpy(&m_bodyCheckSum, buffer.data() + TelegramHeader::CHECKSUM_OFFSET, TelegramHeader::CHECKSUM_SIZE); + + // Read the checksum from the buffer in big-endian byte order + std::memcpy(&m_compressorId, buffer.data() + TelegramHeader::COMPRESSORID_OFFSET, TelegramHeader::COMPRESSORID_SIZE); + + if (m_bodyBytesNum == 0) { + hasError = false; + } + if (m_bodyCheckSum == 0) { + hasError = false; + } + } + + if (!hasError) { + m_isValid = true; + } +} + +std::string TelegramHeader::info() const { + std::stringstream ss; + ss << "header" << (m_isValid?"":"(INVALID)") << "[" + << "l=" << getPrettySizeStrFromBytesNum(m_bodyBytesNum) + << "/s=" << m_bodyCheckSum; + if (m_compressorId) { + ss << "/c=" << m_compressorId; + } + ss << "]"; + return ss.str(); +} + +} // namespace comm diff --git a/vpr/src/server/telegramheader.h b/vpr/src/server/telegramheader.h new file mode 100644 index 00000000000..eb9cac30e1c --- /dev/null +++ b/vpr/src/server/telegramheader.h @@ -0,0 +1,61 @@ +#ifndef TELEGRAMHEADER_H +#define TELEGRAMHEADER_H + +#include 
"bytearray.h" + +#include +#include + +namespace comm { + +class TelegramHeader { +public: + static constexpr const char SIGNATURE[] = "IPA"; + static constexpr size_t SIGNATURE_SIZE = sizeof(SIGNATURE); + static constexpr size_t LENGTH_SIZE = sizeof(uint32_t); + static constexpr size_t CHECKSUM_SIZE = LENGTH_SIZE; + static constexpr size_t COMPRESSORID_SIZE = 1; + + static constexpr size_t LENGTH_OFFSET = SIGNATURE_SIZE; + static constexpr size_t CHECKSUM_OFFSET = LENGTH_OFFSET + LENGTH_SIZE; + static constexpr size_t COMPRESSORID_OFFSET = CHECKSUM_OFFSET + CHECKSUM_SIZE; + + TelegramHeader()=default; + explicit TelegramHeader(uint32_t length, uint32_t checkSum, uint8_t compressorId = 0); + explicit TelegramHeader(const ByteArray& body); + ~TelegramHeader()=default; + + template + static comm::TelegramHeader constructFromData(const T& body, uint8_t compressorId = 0) { + uint32_t bodyCheckSum = ByteArray::calcCheckSum(body); + return comm::TelegramHeader{static_cast(body.size()), bodyCheckSum, compressorId}; + } + + static constexpr size_t size() { + return SIGNATURE_SIZE + LENGTH_SIZE + CHECKSUM_SIZE + COMPRESSORID_SIZE; + } + + bool isValid() const { return m_isValid; } + + const ByteArray& buffer() const { return m_buffer; } + + uint32_t bodyBytesNum() const { return m_bodyBytesNum; } + uint32_t bodyCheckSum() const { return m_bodyCheckSum; } + uint8_t compressorId() const { return m_compressorId; } + + bool isBodyCompressed() const { return m_compressorId != 0; } + + std::string info() const; + +private: + bool m_isValid = false; + ByteArray m_buffer; + + uint32_t m_bodyBytesNum = 0; + uint32_t m_bodyCheckSum = 0; + uint8_t m_compressorId = 0; +}; + +} // namespace comm + +#endif // TELEGRAMHEADER_H diff --git a/vpr/src/server/telegramoptions.h b/vpr/src/server/telegramoptions.h new file mode 100644 index 00000000000..1b37b15f8b2 --- /dev/null +++ b/vpr/src/server/telegramoptions.h @@ -0,0 +1,162 @@ +#ifndef TELEGRAMOPTIONS_H +#define TELEGRAMOPTIONS_H + 
+#include "convertutils.h"
+
+#include <map>
+#include <optional>
+#include <set>
+#include <sstream>
+#include <string>
+#include <unordered_map>
+#include <vector>
+
+namespace server {
+
+/**
+ * @brief Parser for option strings.
+ *
+ * Parses a string of options in the format "TYPE:KEY1:VALUE1;TYPE:KEY2:VALUE2",
+ * for example "int:path_num:11;string:path_type:debug;int:details_level:3;bool:is_flat_routing:0".
+ * It provides a simple interface to check for the presence of values and to access them.
+*/
+
+class TelegramOptions {
+private:
+    enum {
+        INDEX_TYPE=0,
+        INDEX_NAME,
+        INDEX_VALUE,
+        TOTAL_INDEXES_NUM
+    };
+
+    struct Option {
+        std::string type;
+        std::string value;
+    };
+
+public:
+    TelegramOptions(const std::string& data, const std::vector<std::string>& expectedKeys) {
+        // parse data string
+        std::vector<std::string> options = splitString(data, ';');
+        for (const std::string& optionStr: options) {
+            std::vector<std::string> fragments = splitString(optionStr, ':');
+            if (fragments.size() == TOTAL_INDEXES_NUM) {
+                std::string name = fragments[INDEX_NAME];
+                Option option{fragments[INDEX_TYPE], fragments[INDEX_VALUE]};
+                if (isDataTypeSupported(option.type)) {
+                    m_options[name] = option;
+                } else {
+                    m_errors.emplace_back("bad type for option [" + optionStr + "]");
+                }
+            } else {
+                m_errors.emplace_back("bad option [" + optionStr + "]");
+            }
+        }
+
+        // check keys presence
+        checkKeysPresence(expectedKeys);
+    }
+
+    ~TelegramOptions() {}
+
+    bool hasErrors() const { return !m_errors.empty(); }
+
+    std::map<int, std::set<int>> getMapOfSets(const std::string& name) {
+        std::map<int, std::set<int>> result;
+        std::string dataStr = getString(name);
+        if (!dataStr.empty()) {
+            std::vector<std::string> paths = splitString(dataStr, '|');
+            for (const std::string& path: paths) {
+                std::vector<std::string> pathStruct = splitString(path, '#');
+                if (pathStruct.size() == 2) {
+                    std::string pathIndexStr = pathStruct[0];
+                    std::string pathElementIndexesStr = pathStruct[1];
+                    std::vector<std::string> pathElementIndexes = splitString(pathElementIndexesStr, ',');
+                    std::set<int> elements;
+                    for (const std::string& pathElementIndex: pathElementIndexes) {
+                        if (std::optional<int> optValue = 
tryConvertToInt(pathElementIndex.c_str())) {
+                            elements.insert(optValue.value());
+                        }
+                    }
+                    if (std::optional<int> optPathIndex = tryConvertToInt(pathIndexStr.c_str())) {
+                        result[optPathIndex.value()] = elements;
+                    }
+                } else {
+                    m_errors.emplace_back("wrong path data structure = " + path);
+                }
+            }
+        }
+        return result;
+    }
+
+    std::string getString(const std::string& name) {
+        std::string result;
+        if (auto it = m_options.find(name); it != m_options.end()) {
+            result = it->second.value;
+        }
+        return result;
+    }
+
+    int getInt(const std::string& name, int failValue) {
+        if (std::optional<int> opt = tryConvertToInt(m_options[name].value)) {
+            return opt.value();
+        } else {
+            m_errors.emplace_back("cannot get int value for option " + name);
+            return failValue;
+        }
+    }
+
+    bool getBool(const std::string& name, bool failValue) {
+        if (std::optional<int> opt = tryConvertToInt(m_options[name].value)) {
+            return opt.value() != 0;
+        } else {
+            m_errors.emplace_back("cannot get bool value for option " + name);
+            return failValue;
+        }
+    }
+
+    std::string errorsStr() const {
+        std::string result;
+        for (const std::string& error: m_errors) {
+            result += error + ";";
+        }
+        return result;
+    }
+
+private:
+    std::unordered_map<std::string, Option> m_options;
+    std::vector<std::string> m_errors;
+
+    std::vector<std::string> splitString(const std::string& input, char delimiter)
+    {
+        std::vector<std::string> tokens;
+        std::istringstream tokenStream(input);
+        std::string token;
+
+        while (std::getline(tokenStream, token, delimiter)) {
+            tokens.push_back(token);
+        }
+
+        return tokens;
+    }
+
+    bool isDataTypeSupported(const std::string& type) {
+        static std::set<std::string> supportedTypes{"int", "string", "bool"};
+        return supportedTypes.count(type) != 0;
+    }
+
+    bool checkKeysPresence(const std::vector<std::string>& keys) {
+        bool result = true;
+        for (const std::string& key: keys) {
+            if (m_options.find(key) == m_options.end()) {
+                m_errors.emplace_back("cannot find required option " + key);
+                result = false;
+            }
+        }
+        return result;
+    }
+};
+
+} // namespace server
+
+#endif // 
TELEGRAMOPTIONS_H
diff --git a/vpr/src/server/telegramparser.cpp b/vpr/src/server/telegramparser.cpp
new file mode 100644
index 00000000000..07bef029091
--- /dev/null
+++ b/vpr/src/server/telegramparser.cpp
@@ -0,0 +1,79 @@
+#include "telegramparser.h"
+#include "convertutils.h"
+#include "commconstants.h"
+
+
+namespace comm {
+
+bool TelegramParser::tryExtractJsonValueStr(const std::string& jsonString, const std::string& key, std::optional<std::string>& result)
+{
+    // Find the position of the key
+    size_t keyPos = jsonString.find("\"" + key + "\":");
+
+    if (keyPos == std::string::npos) {
+        // Key not found
+        return false;
+    }
+
+    // Find the position of the value after the key
+    size_t valuePosStart = jsonString.find("\"", keyPos + key.length() + std::string("\":\"").size());
+
+    if (valuePosStart == std::string::npos) {
+        // Value not found
+        return false;
+    }
+
+    // Find the position of the closing quote for the value
+    size_t valueEnd = jsonString.find("\"", valuePosStart + std::string("\"").size());
+
+    if (valueEnd == std::string::npos) {
+        // Closing quote not found
+        return false;
+    }
+
+    // Extract the value substring
+    result = jsonString.substr(valuePosStart + 1, (valueEnd - valuePosStart) - 1);
+    return true;
+}
+
+std::optional<int> TelegramParser::tryExtractFieldJobId(const std::string& message)
+{
+    std::optional<int> result;
+    std::optional<std::string> strOpt;
+    if (tryExtractJsonValueStr(message, comm::KEY_JOB_ID, strOpt)) {
+        result = tryConvertToInt(strOpt.value());
+    }
+    return result;
+}
+
+std::optional<int> TelegramParser::tryExtractFieldCmd(const std::string& message)
+{
+    std::optional<int> result;
+    std::optional<std::string> strOpt;
+    if (tryExtractJsonValueStr(message, comm::KEY_CMD, strOpt)) {
+        result = tryConvertToInt(strOpt.value());
+    }
+    return result;
+}
+
+bool TelegramParser::tryExtractFieldOptions(const std::string& message, std::optional<std::string>& result)
+{
+    return tryExtractJsonValueStr(message, comm::KEY_OPTIONS, result);
+}
+
+bool TelegramParser::tryExtractFieldData(const 
std::string& message, std::optional<std::string>& result)
+{
+    return tryExtractJsonValueStr(message, comm::KEY_DATA, result);
+}
+
+std::optional<int> TelegramParser::tryExtractFieldStatus(const std::string& message)
+{
+    std::optional<int> result;
+    std::optional<std::string> strOpt;
+    if (tryExtractJsonValueStr(message, comm::KEY_STATUS, strOpt)) {
+        result = tryConvertToInt(strOpt.value());
+    }
+    return result;
+}
+
+} // namespace comm
diff --git a/vpr/src/server/telegramparser.h b/vpr/src/server/telegramparser.h
new file mode 100644
index 00000000000..6f9eca4f37b
--- /dev/null
+++ b/vpr/src/server/telegramparser.h
@@ -0,0 +1,29 @@
+#ifndef TELEGRAMPARSER_H
+#define TELEGRAMPARSER_H
+
+#include <optional>
+#include <string>
+
+namespace comm {
+
+/**
+ * @brief Minimal JSON field extractor based on plain string searches (not a full JSON parser).
+ *
+ * This module provides helper methods to extract values for keys such as "JOB_ID", "CMD", or "OPTIONS"
+ * from a JSON schema structured as follows: {JOB_ID:num, CMD:enum, OPTIONS:string}.
+ */
+class TelegramParser {
+public:
+    static std::optional<int> tryExtractFieldJobId(const std::string& message);
+    static std::optional<int> tryExtractFieldCmd(const std::string& message);
+    static bool tryExtractFieldOptions(const std::string& message, std::optional<std::string>& result);
+    static bool tryExtractFieldData(const std::string& message, std::optional<std::string>& result);
+    static std::optional<int> tryExtractFieldStatus(const std::string& message);
+
+private:
+    static bool tryExtractJsonValueStr(const std::string& jsonString, const std::string& key, std::optional<std::string>& result);
+};
+
+} // namespace comm
+
+#endif // TELEGRAMPARSER_H
diff --git a/vpr/src/server/zlibutils.cpp b/vpr/src/server/zlibutils.cpp
new file mode 100644
index 00000000000..0cddc55a183
--- /dev/null
+++ b/vpr/src/server/zlibutils.cpp
@@ -0,0 +1,79 @@
+#include "zlibutils.h"
+
+#include <cstring> // for memset
+#include <zlib.h>
+
+std::string tryCompress(const std::string& decompressed)
+{
+    z_stream zs;
+    memset(&zs, 0, sizeof(zs));
+
+    if (deflateInit(&zs, Z_BEST_COMPRESSION) != 
Z_OK) {
+        return "";
+    }
+
+    zs.next_in = reinterpret_cast<Bytef*>(const_cast<char*>(decompressed.data()));
+    zs.avail_in = static_cast<uInt>(decompressed.size());
+
+    int retCode;
+    char resultBuffer[32768];
+    std::string result;
+
+    do {
+        zs.next_out = reinterpret_cast<Bytef*>(resultBuffer);
+        zs.avail_out = sizeof(resultBuffer);
+
+        retCode = deflate(&zs, Z_FINISH);
+
+        if (result.size() < zs.total_out) {
+            result.append(resultBuffer, zs.total_out - result.size());
+        }
+    } while (retCode == Z_OK);
+
+    deflateEnd(&zs);
+
+    if (retCode != Z_STREAM_END) {
+        return "";
+    }
+
+    return result;
+}
+
+std::string tryDecompress(const std::string& compressed)
+{
+    z_stream zs;
+    memset(&zs, 0, sizeof(zs));
+
+    if (inflateInit(&zs) != Z_OK) {
+        return "";
+    }
+
+    zs.next_in = reinterpret_cast<Bytef*>(const_cast<char*>(compressed.data()));
+    zs.avail_in = static_cast<uInt>(compressed.size());
+
+    int retCode;
+    char resultBuffer[32768];
+    std::string result;
+
+    do {
+        zs.next_out = reinterpret_cast<Bytef*>(resultBuffer);
+        zs.avail_out = sizeof(resultBuffer);
+
+        retCode = inflate(&zs, Z_NO_FLUSH);
+
+        if (result.size() < zs.total_out) {
+            result.append(resultBuffer, zs.total_out - result.size());
+        }
+
+    } while (retCode == Z_OK);
+
+    inflateEnd(&zs);
+
+    if (retCode != Z_STREAM_END) {
+        return "";
+    }
+
+    return result;
+}
+
+
diff --git a/vpr/src/server/zlibutils.h b/vpr/src/server/zlibutils.h
new file mode 100644
index 00000000000..7ef1873e6d1
--- /dev/null
+++ b/vpr/src/server/zlibutils.h
@@ -0,0 +1,10 @@
+#ifndef ZLIBUTILS_H
+#define ZLIBUTILS_H
+
+#include <string>
+
+std::string tryCompress(const std::string& decompressed);
+std::string tryDecompress(const std::string& compressed);
+
+#endif // ZLIBUTILS_H
diff --git a/vpr/thirdparty/sockpp/.editorconfig b/vpr/thirdparty/sockpp/.editorconfig
new file mode 100644
index 00000000000..3418f791d89
--- /dev/null
+++ b/vpr/thirdparty/sockpp/.editorconfig
@@ -0,0 +1,17 @@
+# This file is for unifying the coding style for different editors and IDEs
+# http://EditorConfig.org
+
+root = true
+
+[*]
+charset = utf-8
+indent_style = tab
+indent_size = 4
+end_of_line = lf
+insert_final_newline = true + +[*.{cpp,c,h}] +trim_trailing_whitespace = true + +[*.md] +trim_trailing_whitespace = false diff --git a/vpr/thirdparty/sockpp/.gitattributes b/vpr/thirdparty/sockpp/.gitattributes new file mode 100644 index 00000000000..70f7a9bc58a --- /dev/null +++ b/vpr/thirdparty/sockpp/.gitattributes @@ -0,0 +1,34 @@ +# .gitattributes for the 'sockpp' repository + +# Use default native line endings +* text=auto + +# Specify file type for line handling +*.cpp text +*.c text +*.h text +*.hpp text +*.md text +*.txt text +*.ac text +*.yml text +*.cfg text +*.config text +*.html text +Makefile text +*.jpg binary +*.png binary +*.tif binary +*.project text +*.properties text +*.py text +*.txt text +*.url text +*.xml text + +# Windows-specific text files +*.bat text eol=crlf + +# Linux-specific text files +*.sh text eol=lf + diff --git a/vpr/thirdparty/sockpp/.gitignore b/vpr/thirdparty/sockpp/.gitignore new file mode 100644 index 00000000000..e443ae50530 --- /dev/null +++ b/vpr/thirdparty/sockpp/.gitignore @@ -0,0 +1,45 @@ +# Directories for temporaries & build targets +obj/ +lib/ + +# Compiled Object files +*.slo +*.lo +*.o +*.obj + +# Precompiled Headers +*.gch +*.pch + +# Compiled Dynamic libraries +*.so +*.dylib +*.dll + +# Fortran module files +*.mod +*.smod + +# Compiled Static libraries +*.lai +*.la +*.a +*.lib + +# Executables +*.exe +*.out +*.app + +# Preliminary Notes +*.fodt + +# Mercurial Files +.hg/ +.hgignore +.hgtags + +# SlickEdit files +*.vpj + diff --git a/vpr/thirdparty/sockpp/.hgeol b/vpr/thirdparty/sockpp/.hgeol new file mode 100644 index 00000000000..90d14eef7bb --- /dev/null +++ b/vpr/thirdparty/sockpp/.hgeol @@ -0,0 +1,32 @@ +# .hgeol for the 'sockpp' repository + +[patterns] +**.cpp = native +**.c = native +**.h = native +**.hpp = native +**.md = native +**.txt = native +**.ac = native +**.yml = native +**.cfg = native +**.config = native +**.html = native +Makefile = native +**.jpg = BIN +**.png = BIN +**.tif = BIN +**.project 
= native +**.properties = native +**.py = native +**.txt = native +**.url = native +**.xml = native + +# Windows-specific text files +**.bat = CRLF +**.vcproj = CRLF + +# Linux-specific text files +*.sh = LF + diff --git a/vpr/thirdparty/sockpp/.travis.yml b/vpr/thirdparty/sockpp/.travis.yml new file mode 100644 index 00000000000..0a0528534ce --- /dev/null +++ b/vpr/thirdparty/sockpp/.travis.yml @@ -0,0 +1,83 @@ +language: cpp +sudo: required +dist: xenial +os: linux + +before_install: + - ./travis_install_catch2.sh + +matrix: + include: + - compiler: gcc + addons: + apt: + sources: + - ubuntu-toolchain-r-test + packages: + - g++-5 + env: COMPILER=g++-5 + - compiler: gcc + addons: + apt: + sources: + - ubuntu-toolchain-r-test + packages: + - g++-6 + env: COMPILER=g++-6 + - compiler: gcc + addons: + apt: + sources: + - ubuntu-toolchain-r-test + packages: + - g++-7 + env: COMPILER=g++-7 + - compiler: gcc + addons: + apt: + sources: + - ubuntu-toolchain-r-test + packages: + - g++-8 + env: COMPILER=g++-8 + - compiler: clang + addons: + apt: + sources: + - ubuntu-toolchain-r-test + - llvm-toolchain-precise-3.8 + packages: + - clang-3.8 + env: COMPILER=clang++-3.8 + - compiler: clang + addons: + apt: + sources: + - ubuntu-toolchain-r-test + packages: + - clang-4.0 + env: COMPILER=clang++-4.0 + - compiler: clang + addons: + apt: + sources: + - ubuntu-toolchain-r-test + - llvm-toolchain-xenial-7 + packages: + - clang-7 + env: COMPILER=clang++-7 + - compiler: clang + addons: + apt: + sources: + - ubuntu-toolchain-r-test + - llvm-toolchain-xenial-8 + packages: + - clang-8 + env: COMPILER=clang++-8 + exclude: + - compiler: gcc + + +script: if [ "$COMPILER" == "" ]; then CXX=g++ ./travis_build.sh; else CXX=$COMPILER ./travis_build.sh; fi + diff --git a/vpr/thirdparty/sockpp/CHANGELOG.md b/vpr/thirdparty/sockpp/CHANGELOG.md new file mode 100644 index 00000000000..410d2d478e5 --- /dev/null +++ b/vpr/thirdparty/sockpp/CHANGELOG.md @@ -0,0 +1,123 @@ +# Change Log for _sockpp_ + 
+## [Version 1.0.0](https://github.com/fpagliughi/sockpp/compare/v0.8.3..v1.0.0) - (2023-12-17) + +This is a release of the previous 0.8.x line as the initial, stable API. + +## [Version 0.8.3](https://github.com/fpagliughi/sockpp/compare/v0.8.2..v0.8.3) - (2023-12-11) + +- [#64](https://github.com/fpagliughi/sockpp/pull/84) Added support for Catch2 v3.x for unit tests. (v2.x still supported) + + +## [Version 0.8.2](https://github.com/fpagliughi/sockpp/compare/v0.8.1..v0.8.2) - (2023-12-05) + +- [#89](https://github.com/fpagliughi/sockpp/issue/89) Fixed generator expression for older CMake +- [#91](https://github.com/fpagliughi/sockpp/issue/91) Fixed uniform_int_distribution<> in UNIX socket example + + +## [Version 0.8.1](https://github.com/fpagliughi/sockpp/compare/v0.8.0..v0.8.1) - (2023-01-30) + +- Cherry picked most of the non-TLS commits in PR [#17](https://github.com/fpagliughi/sockpp/pull/17) + - Connector timeouts + - Stateless reads & writes for streaming sockets w/ functions returning `ioresult` + - Some small bug fixes + - No shutdown on invalid sockets +- [#38](https://github.com/fpagliughi/sockpp/issues/38) Made system libs public for static builds to fix Windows +- [#73](https://github.com/fpagliughi/sockpp/issue/73) Clone a datagram (UDP) socket +- [#74](https://github.com/fpagliughi/sockpp/issue/74) Added `` to properly get `timeval` in *nix builds. +- [#56](https://github.com/fpagliughi/sockpp/issue/56) handling unix paths with maximum length (no NUL term) +- Fixed outstanding build warnings on Windows when using MSVC + +## [Version 0.8.0](https://github.com/fpagliughi/sockpp/compare/v0.7.1..v0.8.0) - (2023-01-17) + +- [Breaking] Library initializer now uses a static singleton created via `socket_initializer::initialize()` call, which can be called repeatedly with no ill effect. Also added global `socketpp::initialize()` function as shortcut. +- Improvements to CMake to better follow modern standards. 
+ - CMake required version bumped up to 3.12 + - Generating CMake files for downstream projects (config, target, version) + - Windows builds default to shared DLL, not static library + - Lots of cleanup + +## [Version 0.7.1](https://github.com/fpagliughi/sockpp/compare/v0.7..v0.7.1) + +Released: 2022-01-24 + +- [Experimental] **SocketCAN**, CAN bus support on Linux +- [#37](https://github.com/fpagliughi/sockpp/pull/37) socket::get_option() not returning length on Windows +- [#39](https://github.com/fpagliughi/sockpp/pull/39) Using *SSIZE_T* for *ssize_t* in Windows +- [#53](https://github.com/fpagliughi/sockpp/pull/53) Add Conan support +- [#55](https://github.com/fpagliughi/sockpp/pull/55) Fix Android strerror +- [#60](https://github.com/fpagliughi/sockpp/pull/60) Add missing move constructor for connector template. +- Now `acceptor::open()` uses the *SO_REUSEPORT* option instead of *SO_REUSEADDR* on non-Windows systems. Also made reuse optional. + +## Version 0.7 + +- Base `socket` class + - `shutdown()` added + - `create()` added + - `bind()` moved into base socket (from `acceptor`) +- Unix-domain socket pairs (stream and datagram) +- Non-blocking I/O +- Scatter/Gather I/O +- `stream_socket` cloning. +- Set and get socket options using template types. +- `stream_socket::read_n()` and `write_n()` now properly handle EINTR return. +- `to_timeval()` can convert from any `std::chrono::duration` type. +- `socket::close()` and `shutdown()` check for errors, set last error, and return a bool. +- _tcpechomt.cpp_: Example of a client sharing a socket between read and write threads - using `clone()`. +- Windows enhancements: + - Implemented socket timeouts on Windows + - Fixed bug in Windows socket cloning. + - Fixed bug in Windows `socket::last_error_string`. 
+ - Unit tests working on Windows +- More unit tests + +## Version 0.6 + +- UDP support + - The base `datagram_socket` added to the Windows build + - The `datagram_socket` cleaned up for proper parameter and return types. + - New `datagram_socket_tmpl` template class for defining UDP sockets for the different address families. + - New datagram classes for IPv4 (`udp_socket`), IPv6 (`udp6_socket`), and Unix-domain (`unix_dgram_socket`) +- Windows support + - Windows support was broken in release v0.5. It is now fixed, and includes the UDP features. +- Proper move semantics for stream sockets and connectors. +- Separate tcp socket header files for each address family (`tcp_socket.h`, `tcp6_socket.h`, etc). +- Proper implementation of Unix-domain streaming socket. +- CMake auto-generates a version header file, _version.h_ +- CI dropped tests for gcc-4.9, and added support for clang-7 and 8. + +## Version 0.5 + +- (Breaking change) Updated the hierarchy of network address classes, now derived from a common base class. + - Removed `sock_address_ref` class. Now a C++ reference to `sock_address` will replace it (i.e. `sock_address&`). + - `sock_address` is now an abstract base class. + - All the network address classes now derive from `sock_address` + - Consolidates a number of overloaded functions that took different forms of addresses to just take a `const sock_address&` + - Adds a new `sock_address_any` class that can contain any address, and is used by base classes that need a generic address. +- The `acceptor` and `connector` classes are still concrete, generic classes, but now a template derives from each of them to specialize. +- The connector and acceptor classes for each address family (`tcp_connector`, `tcp_acceptor`, `tcp6_connector`, etc) are now typedef'ed to template specializations. +- The `acceptor::bind()` and `acceptor::listen()` methods are now public. +- CMake build now honors the `CMAKE_BUILD_TYPE` flag. 
+
+## Version 0.4
+
+The work in this branch is proceeding to add support for IPv6 and refactor the class hierarchies to better support the different address families without so much redundant code.
+
+ - IPv6 support: `inet6_address`, `tcp6_acceptor`, `tcp_connector`, etc.
+ - (Breaking change) The `sock_address` class now contains storage for any type of address and follows copy semantics. Previously it was a non-owning reference class. That reference class now exists as `sock_address_ref`.
+ - Generic base classes are being re-implemented to use _sock_address_ and _sock_address_ref_ as generic addresses.
+ - (Breaking change) In the `socket` class(es) the `bool address(address&)` and `bool peer_address(addr&)` forms of getting the socket addresses have been removed in favor of the ones that simply return the address.
+ - Added `get_option()` and `set_option()` methods to the base `socket` class.
+ - The GNU Make build system (Makefile) was deprecated and removed.
+
+## Version 0.3
+
+ - Socket class hierarchy now splits out for streaming and datagram sockets.
+ - Support for UNIX-domain sockets.
+ - New modern CMake build system.
+ - GNU Make system marked for deprecation.
+
+## Version 0.2
+
+ - Initial working version for IPv4.
+ - API using boolean return values for pass/fail functions instead of syscall-style integers.
\ No newline at end of file
diff --git a/vpr/thirdparty/sockpp/CMakeLists.txt b/vpr/thirdparty/sockpp/CMakeLists.txt
new file mode 100644
index 00000000000..b6f828f0e97
--- /dev/null
+++ b/vpr/thirdparty/sockpp/CMakeLists.txt
@@ -0,0 +1,223 @@
+# CMakeLists.txt
+#
+# Top-level CMake build file for the 'sockpp' library.
+#
+# ---------------------------------------------------------------------------
+# This file is part of the "sockpp" C++ socket library.
+#
+# Copyright (c) 2017-2023 Frank Pagliughi
+# All rights reserved.
+# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions are +# met: +# +# 1. Redistributions of source code must retain the above copyright notice, +# this list of conditions and the following disclaimer. +# +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in the +# documentation and/or other materials provided with the distribution. +# +# 3. Neither the name of the copyright holder nor the names of its +# contributors may be used to endorse or promote products derived from this +# software without specific prior written permission. +# +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS +# IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, +# THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR +# PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR +# CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, +# EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, +# PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR +# PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF +# LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING +# NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS +# SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
+# --------------------------------------------------------------------------- + +# --- CMake required version --- + +cmake_minimum_required(VERSION 3.12) + +# --- Project setup --- + +project(sockpp VERSION "1.0.0") + +# --- Build Options --- + +option(SOCKPP_BUILD_SHARED "Build shared library" ON) +option(SOCKPP_BUILD_STATIC "Build static library" OFF) +option(SOCKPP_BUILD_EXAMPLES "Build example applications" OFF) +option(SOCKPP_BUILD_TESTS "Build unit tests" OFF) +option(SOCKPP_BUILD_DOCUMENTATION "Create Doxygen reference documentation" OFF) +option(SOCKPP_BUILD_CAN "Build the Linux SocketCAN components" OFF) + +# --- Setting naming variables --- + +set(SOCKPP_SHARED_LIBRARY sockpp) +set(SOCKPP_STATIC_LIBRARY sockpp-static) +set(SOCKPP_OBJECT_LIBRARY sockpp-objs) + +set(SOCKPP_INCLUDE_DIR ${PROJECT_SOURCE_DIR}/include) +set(SOCKPP_GENERATED_DIR ${CMAKE_CURRENT_BINARY_DIR}/generated) + +# --- Generate a version header --- + +configure_file( + ${PROJECT_SOURCE_DIR}/cmake/version.h.in + ${SOCKPP_GENERATED_DIR}/include/sockpp/version.h + @ONLY +) + +# --- Common library sources, etc --- + +add_subdirectory(src) + +# --- System libraries --- + +if(WIN32) + set(CMAKE_WINDOWS_EXPORT_ALL_SYMBOLS ON) + set(LIBS_SYSTEM ws2_32) +endif() + +# --- Collect the targets names --- + +if(${SOCKPP_BUILD_SHARED}) + list(APPEND SOCKPP_TARGETS ${SOCKPP_SHARED_LIBRARY}) +endif() + +if(${SOCKPP_BUILD_STATIC}) + list(APPEND SOCKPP_TARGETS ${SOCKPP_STATIC_LIBRARY}) +endif() + +# --- Create the libraries and export them --- + +if(NOT SOCKPP_TARGETS) + message(FATAL_ERROR "No targets are specified") +endif() + +if(${SOCKPP_BUILD_SHARED}) + message(STATUS "Creating shared library: ${SOCKPP_SHARED_LIBRARY}") + add_library(${SOCKPP_SHARED_LIBRARY} SHARED $) + + target_compile_features(${SOCKPP_SHARED_LIBRARY} PUBLIC cxx_std_14) + + target_include_directories(${SOCKPP_SHARED_LIBRARY} + PUBLIC + $ + $ + PRIVATE + ${SOCKPP_GENERATED_DIR}/include + ) + + 
target_link_libraries(${SOCKPP_SHARED_LIBRARY} PUBLIC ${LIBS_SYSTEM}) + + set_target_properties(${SOCKPP_SHARED_LIBRARY} PROPERTIES + VERSION ${PROJECT_VERSION} + SOVERSION ${PROJECT_VERSION_MAJOR} + CXX_EXTENSIONS OFF + ) + + list(APPEND TARGET_FILES ${SOCKPP_SHARED_LIBRARY}) +endif() + +if(${SOCKPP_BUILD_STATIC}) + message(STATUS "Creating static library: ${SOCKPP_STATIC_LIBRARY}") + add_library(${SOCKPP_STATIC_LIBRARY} STATIC $) + + target_compile_features(${SOCKPP_STATIC_LIBRARY} PUBLIC cxx_std_14) + + target_include_directories(${SOCKPP_STATIC_LIBRARY} + PUBLIC + $ + $ + PRIVATE + ${SOCKPP_GENERATED_DIR}/include + ) + + target_link_libraries(${SOCKPP_STATIC_LIBRARY} PUBLIC ${LIBS_SYSTEM}) + + set_target_properties(${SOCKPP_STATIC_LIBRARY} PROPERTIES + CXX_EXTENSIONS OFF + ) + + # On *nix systems, the static library can have the same base filename + # as the shared library, thus 'libsockpp.a' for the static lib. + # On Windows they need different names to tell the static lib from the + # DLL import library. 
+ if(UNIX) + set_target_properties(${SOCKPP_STATIC_LIBRARY} PROPERTIES + OUTPUT_NAME ${SOCKPP_SHARED_LIBRARY} + ) + endif() + + list(APPEND TARGET_FILES ${SOCKPP_STATIC_LIBRARY}) +endif() + +# --- Install Targets --- + +include(GNUInstallDirs) + +install(TARGETS ${TARGET_FILES} + EXPORT sockpp-targets + ARCHIVE DESTINATION ${CMAKE_INSTALL_LIBDIR} + LIBRARY DESTINATION ${CMAKE_INSTALL_LIBDIR} + RUNTIME DESTINATION ${CMAKE_INSTALL_BINDIR} +) + +install(EXPORT sockpp-targets + FILE + sockppTargets.cmake + NAMESPACE + Sockpp:: + DESTINATION + ${CMAKE_INSTALL_LIBDIR}/cmake/sockpp +) + +include(CMakePackageConfigHelpers) + +write_basic_package_version_file( + ${SOCKPP_GENERATED_DIR}/cmake/sockppConfigVersion.cmake + VERSION ${PROJECT_VERSION} + COMPATIBILITY AnyNewerVersion +) + +install(DIRECTORY include/ ${SOCKPP_GENERATED_DIR}/include/ + DESTINATION ${CMAKE_INSTALL_INCLUDEDIR} +) + +install( + FILES + ${PROJECT_SOURCE_DIR}/cmake/sockppConfig.cmake + ${SOCKPP_GENERATED_DIR}/cmake/sockppConfigVersion.cmake + DESTINATION + ${CMAKE_INSTALL_LIBDIR}/cmake/sockpp +) + +# --- Documentation --- + +if(SOCKPP_BUILD_DOCUMENTATION) + add_subdirectory(doc) +endif() + +# --- Default library for examples and unit tests --- + +if(SOCKPP_BUILD_SHARED) + set(SOCKPP_LIB ${SOCKPP_SHARED_LIBRARY}) +else() + set(SOCKPP_LIB ${SOCKPP_STATIC_LIBRARY}) +endif() + +# --- Example applications --- + +if(SOCKPP_BUILD_EXAMPLES) + add_subdirectory(examples) +endif() + +# --- Unit Tests --- + +if(SOCKPP_BUILD_TESTS) + add_subdirectory(tests/unit) +endif() + diff --git a/vpr/thirdparty/sockpp/CONTRIBUTING.md b/vpr/thirdparty/sockpp/CONTRIBUTING.md new file mode 100644 index 00000000000..f532f6b08ef --- /dev/null +++ b/vpr/thirdparty/sockpp/CONTRIBUTING.md @@ -0,0 +1,21 @@ +# Contributing to _sockpp_ + +Thank you for your interest in the _sockpp_ library! + +Contributions are accepted and much appreciated. You can contribute updates, bug fixes, and bug reports through the GitHub site for the project. 
+
+1. New and unstable development is done in the `develop` branch. Please make all pull requests against the `develop` branch.
+
+1. Please follow the naming and format conventions of the existing code.
+
+1. New features should be zero cost. Existing applications that do not use the feature(s) should not pay a cost in speed or size due to the new additions.
+
+1. Prefer smaller, targeted pull requests (PR's).
+    1. Put each different new feature in a separate PR.
+    1. Separate bug fixes and new features in individual PR's.
+
+1. Include unit tests for new features.
+
+1. Please indicate the system, OS, and compiler used for development and whether there are any known incompatibilities with other supported systems.
+
+1. Please only contribute code for which you have legal right to ownership. **Do not** contribute any code written at an employer site or on equipment owned by an employer, if you have not been given explicit, written consent by the employer to contribute to open-source projects.
diff --git a/vpr/thirdparty/sockpp/Doxyfile b/vpr/thirdparty/sockpp/Doxyfile
new file mode 100644
index 00000000000..0fa5d4df61e
--- /dev/null
+++ b/vpr/thirdparty/sockpp/Doxyfile
@@ -0,0 +1,2606 @@
+# Doxyfile 1.9.3
+
+# This file describes the settings to be used by the documentation system
+# doxygen (www.doxygen.org) for a project.
+#
+# All text after a double hash (##) is considered a comment and is placed in
+# front of the TAG it is preceding.
+#
+# All text after a single hash (#) is considered a comment and will be ignored.
+# The format is:
+# TAG = value [value, ...]
+# For lists, items can also be appended using:
+# TAG += value [value, ...]
+# Values that contain spaces should be placed between quotes (\" \").
+ +#--------------------------------------------------------------------------- +# Project related configuration options +#--------------------------------------------------------------------------- + +# This tag specifies the encoding used for all characters in the configuration +# file that follow. The default is UTF-8 which is also the encoding used for all +# text before the first occurrence of this tag. Doxygen uses libiconv (or the +# iconv built into libc) for the transcoding. See +# https://www.gnu.org/software/libiconv/ for the list of possible encodings. +# The default value is: UTF-8. + +DOXYFILE_ENCODING = UTF-8 + +# The PROJECT_NAME tag is a single word (or a sequence of words surrounded by +# double-quotes, unless you are using Doxywizard) that should identify the +# project for which the documentation is generated. This name is used in the +# title of most generated pages and in a few other places. +# The default value is: My Project. + +PROJECT_NAME = sockpp + +# The PROJECT_NUMBER tag can be used to enter a project or revision number. This +# could be handy for archiving the generated documentation or if some version +# control system is used. + +PROJECT_NUMBER = + +# Using the PROJECT_BRIEF tag one can provide an optional one line description +# for a project that appears at the top of each page and should give viewer a +# quick idea about the purpose of the project. Keep the description short. + +PROJECT_BRIEF = "Modern C++ socket library wrapper" + +# With the PROJECT_LOGO tag one can specify a logo or an icon that is included +# in the documentation. The maximum height of the logo should not exceed 55 +# pixels and the maximum width should not exceed 200 pixels. Doxygen will copy +# the logo to the output directory. + +PROJECT_LOGO = + +# The OUTPUT_DIRECTORY tag is used to specify the (relative or absolute) path +# into which the generated documentation will be written. 
If a relative path is +# entered, it will be relative to the location where doxygen was started. If +# left blank the current directory will be used. + +OUTPUT_DIRECTORY = doc + +# If the CREATE_SUBDIRS tag is set to YES then doxygen will create 4096 sub- +# directories (in 2 levels) under the output directory of each output format and +# will distribute the generated files over these directories. Enabling this +# option can be useful when feeding doxygen a huge amount of source files, where +# putting all generated files in the same directory would otherwise causes +# performance problems for the file system. +# The default value is: NO. + +CREATE_SUBDIRS = NO + +# If the ALLOW_UNICODE_NAMES tag is set to YES, doxygen will allow non-ASCII +# characters to appear in the names of generated files. If set to NO, non-ASCII +# characters will be escaped, for example _xE3_x81_x84 will be used for Unicode +# U+3044. +# The default value is: NO. + +ALLOW_UNICODE_NAMES = NO + +# The OUTPUT_LANGUAGE tag is used to specify the language in which all +# documentation generated by doxygen is written. Doxygen will use this +# information to generate all constant output in the proper language. +# Possible values are: Afrikaans, Arabic, Armenian, Brazilian, Catalan, Chinese, +# Chinese-Traditional, Croatian, Czech, Danish, Dutch, English (United States), +# Esperanto, Farsi (Persian), Finnish, French, German, Greek, Hungarian, +# Indonesian, Italian, Japanese, Japanese-en (Japanese with English messages), +# Korean, Korean-en (Korean with English messages), Latvian, Lithuanian, +# Macedonian, Norwegian, Persian (Farsi), Polish, Portuguese, Romanian, Russian, +# Serbian, Serbian-Cyrillic, Slovak, Slovene, Spanish, Swedish, Turkish, +# Ukrainian and Vietnamese. +# The default value is: English. 
+ +OUTPUT_LANGUAGE = English + +# If the BRIEF_MEMBER_DESC tag is set to YES, doxygen will include brief member +# descriptions after the members that are listed in the file and class +# documentation (similar to Javadoc). Set to NO to disable this. +# The default value is: YES. + +BRIEF_MEMBER_DESC = YES + +# If the REPEAT_BRIEF tag is set to YES, doxygen will prepend the brief +# description of a member or function before the detailed description +# +# Note: If both HIDE_UNDOC_MEMBERS and BRIEF_MEMBER_DESC are set to NO, the +# brief descriptions will be completely suppressed. +# The default value is: YES. + +REPEAT_BRIEF = YES + +# This tag implements a quasi-intelligent brief description abbreviator that is +# used to form the text in various listings. Each string in this list, if found +# as the leading text of the brief description, will be stripped from the text +# and the result, after processing the whole list, is used as the annotated +# text. Otherwise, the brief description is used as-is. If left blank, the +# following values are used ($name is automatically replaced with the name of +# the entity):The $name class, The $name widget, The $name file, is, provides, +# specifies, contains, represents, a, an and the. + +ABBREVIATE_BRIEF = + +# If the ALWAYS_DETAILED_SEC and REPEAT_BRIEF tags are both set to YES then +# doxygen will generate a detailed section even if there is only a brief +# description. +# The default value is: NO. + +ALWAYS_DETAILED_SEC = NO + +# If the INLINE_INHERITED_MEMB tag is set to YES, doxygen will show all +# inherited members of a class in the documentation of that class as if those +# members were ordinary class members. Constructors, destructors and assignment +# operators of the base classes will not be shown. +# The default value is: NO. + +INLINE_INHERITED_MEMB = NO + +# If the FULL_PATH_NAMES tag is set to YES, doxygen will prepend the full path +# before files name in the file list and in the header files. 
If set to NO the +# shortest path that makes the file name unique will be used +# The default value is: YES. + +FULL_PATH_NAMES = YES + +# The STRIP_FROM_PATH tag can be used to strip a user-defined part of the path. +# Stripping is only done if one of the specified strings matches the left-hand +# part of the path. The tag can be used to show relative paths in the file list. +# If left blank the directory from which doxygen is run is used as the path to +# strip. +# +# Note that you can specify absolute paths here, but also relative paths, which +# will be relative from the directory where doxygen is started. +# This tag requires that the tag FULL_PATH_NAMES is set to YES. + +STRIP_FROM_PATH = + +# The STRIP_FROM_INC_PATH tag can be used to strip a user-defined part of the +# path mentioned in the documentation of a class, which tells the reader which +# header file to include in order to use a class. If left blank only the name of +# the header file containing the class definition is used. Otherwise one should +# specify the list of include paths that are normally passed to the compiler +# using the -I flag. + +STRIP_FROM_INC_PATH = + +# If the SHORT_NAMES tag is set to YES, doxygen will generate much shorter (but +# less readable) file names. This can be useful is your file systems doesn't +# support long names like on DOS, Mac, or CD-ROM. +# The default value is: NO. + +SHORT_NAMES = NO + +# If the JAVADOC_AUTOBRIEF tag is set to YES then doxygen will interpret the +# first line (until the first dot) of a Javadoc-style comment as the brief +# description. If set to NO, the Javadoc-style will behave just like regular Qt- +# style comments (thus requiring an explicit @brief command for a brief +# description.) +# The default value is: NO. + +JAVADOC_AUTOBRIEF = YES + +# If the JAVADOC_BANNER tag is set to YES then doxygen will interpret a line +# such as +# /*************** +# as being the beginning of a Javadoc-style comment "banner". 
If set to NO, the +# Javadoc-style will behave just like regular comments and it will not be +# interpreted by doxygen. +# The default value is: NO. + +JAVADOC_BANNER = NO + +# If the QT_AUTOBRIEF tag is set to YES then doxygen will interpret the first +# line (until the first dot) of a Qt-style comment as the brief description. If +# set to NO, the Qt-style will behave just like regular Qt-style comments (thus +# requiring an explicit \brief command for a brief description.) +# The default value is: NO. + +QT_AUTOBRIEF = NO + +# The MULTILINE_CPP_IS_BRIEF tag can be set to YES to make doxygen treat a +# multi-line C++ special comment block (i.e. a block of //! or /// comments) as +# a brief description. This used to be the default behavior. The new default is +# to treat a multi-line C++ comment block as a detailed description. Set this +# tag to YES if you prefer the old behavior instead. +# +# Note that setting this tag to YES also means that rational rose comments are +# not recognized any more. +# The default value is: NO. + +MULTILINE_CPP_IS_BRIEF = NO + +# By default Python docstrings are displayed as preformatted text and doxygen's +# special commands cannot be used. By setting PYTHON_DOCSTRING to NO the +# doxygen's special commands can be used and the contents of the docstring +# documentation blocks is shown as doxygen documentation. +# The default value is: YES. + +PYTHON_DOCSTRING = YES + +# If the INHERIT_DOCS tag is set to YES then an undocumented member inherits the +# documentation from any documented member that it re-implements. +# The default value is: YES. + +INHERIT_DOCS = YES + +# If the SEPARATE_MEMBER_PAGES tag is set to YES then doxygen will produce a new +# page for each member. If set to NO, the documentation of a member will be part +# of the file/class/namespace that contains it. +# The default value is: NO. + +SEPARATE_MEMBER_PAGES = NO + +# The TAB_SIZE tag can be used to set the number of spaces in a tab. 
Doxygen +# uses this value to replace tabs by spaces in code fragments. +# Minimum value: 1, maximum value: 16, default value: 4. + +TAB_SIZE = 4 + +# This tag can be used to specify a number of aliases that act as commands in +# the documentation. An alias has the form: +# name=value +# For example adding +# "sideeffect=@par Side Effects:^^" +# will allow you to put the command \sideeffect (or @sideeffect) in the +# documentation, which will result in a user-defined paragraph with heading +# "Side Effects:". Note that you cannot put \n's in the value part of an alias +# to insert newlines (in the resulting output). You can put ^^ in the value part +# of an alias to insert a newline as if a physical newline was in the original +# file. When you need a literal { or } or , in the value part of an alias you +# have to escape them by means of a backslash (\), this can lead to conflicts +# with the commands \{ and \} for these it is advised to use the version @{ and +# @} or use a double escape (\\{ and \\}) + +ALIASES = + +# Set the OPTIMIZE_OUTPUT_FOR_C tag to YES if your project consists of C sources +# only. Doxygen will then generate output that is more tailored for C. For +# instance, some of the names that are used will be different. The list of all +# members will be omitted, etc. +# The default value is: NO. + +OPTIMIZE_OUTPUT_FOR_C = NO + +# Set the OPTIMIZE_OUTPUT_JAVA tag to YES if your project consists of Java or +# Python sources only. Doxygen will then generate output that is more tailored +# for that language. For instance, namespaces will be presented as packages, +# qualified scopes will look different, etc. +# The default value is: NO. + +OPTIMIZE_OUTPUT_JAVA = NO + +# Set the OPTIMIZE_FOR_FORTRAN tag to YES if your project consists of Fortran +# sources. Doxygen will then generate output that is tailored for Fortran. +# The default value is: NO. 
+ +OPTIMIZE_FOR_FORTRAN = NO + +# Set the OPTIMIZE_OUTPUT_VHDL tag to YES if your project consists of VHDL +# sources. Doxygen will then generate output that is tailored for VHDL. +# The default value is: NO. + +OPTIMIZE_OUTPUT_VHDL = NO + +# Set the OPTIMIZE_OUTPUT_SLICE tag to YES if your project consists of Slice +# sources only. Doxygen will then generate output that is more tailored for that +# language. For instance, namespaces will be presented as modules, types will be +# separated into more groups, etc. +# The default value is: NO. + +OPTIMIZE_OUTPUT_SLICE = NO + +# Doxygen selects the parser to use depending on the extension of the files it +# parses. With this tag you can assign which parser to use for a given +# extension. Doxygen has a built-in mapping, but you can override or extend it +# using this tag. The format is ext=language, where ext is a file extension, and +# language is one of the parsers supported by doxygen: IDL, Java, JavaScript, +# Csharp (C#), C, C++, Lex, D, PHP, md (Markdown), Objective-C, Python, Slice, +# VHDL, Fortran (fixed format Fortran: FortranFixed, free formatted Fortran: +# FortranFree, unknown formatted Fortran: Fortran. In the later case the parser +# tries to guess whether the code is fixed or free formatted code, this is the +# default for Fortran type files). For instance to make doxygen treat .inc files +# as Fortran files (default is PHP), and .f files as C (default is Fortran), +# use: inc=Fortran f=C. +# +# Note: For files without extension you can use no_extension as a placeholder. +# +# Note that for custom extensions you also need to set FILE_PATTERNS otherwise +# the files are not read by doxygen. When specifying no_extension you should add +# * to the FILE_PATTERNS. +# +# Note see also the list of default file extension mappings. 
+ +EXTENSION_MAPPING = + +# If the MARKDOWN_SUPPORT tag is enabled then doxygen pre-processes all comments +# according to the Markdown format, which allows for more readable +# documentation. See https://daringfireball.net/projects/markdown/ for details. +# The output of markdown processing is further processed by doxygen, so you can +# mix doxygen, HTML, and XML commands with Markdown formatting. Disable only in +# case of backward compatibilities issues. +# The default value is: YES. + +MARKDOWN_SUPPORT = YES + +# When the TOC_INCLUDE_HEADINGS tag is set to a non-zero value, all headings up +# to that level are automatically included in the table of contents, even if +# they do not have an id attribute. +# Note: This feature currently applies only to Markdown headings. +# Minimum value: 0, maximum value: 99, default value: 5. +# This tag requires that the tag MARKDOWN_SUPPORT is set to YES. + +TOC_INCLUDE_HEADINGS = 5 + +# When enabled doxygen tries to link words that correspond to documented +# classes, or namespaces to their corresponding documentation. Such a link can +# be prevented in individual cases by putting a % sign in front of the word or +# globally by setting AUTOLINK_SUPPORT to NO. +# The default value is: YES. + +AUTOLINK_SUPPORT = YES + +# If you use STL classes (i.e. std::string, std::vector, etc.) but do not want +# to include (a tag file for) the STL sources as input, then you should set this +# tag to YES in order to let doxygen match functions declarations and +# definitions whose arguments contain STL classes (e.g. func(std::string); +# versus func(std::string) {}). This also make the inheritance and collaboration +# diagrams that involve STL classes more complete and accurate. +# The default value is: NO. + +BUILTIN_STL_SUPPORT = NO + +# If you use Microsoft's C++/CLI language, you should set this option to YES to +# enable parsing support. +# The default value is: NO. 
+ +CPP_CLI_SUPPORT = NO + +# Set the SIP_SUPPORT tag to YES if your project consists of sip (see: +# https://www.riverbankcomputing.com/software/sip/intro) sources only. Doxygen +# will parse them like normal C++ but will assume all classes use public instead +# of private inheritance when no explicit protection keyword is present. +# The default value is: NO. + +SIP_SUPPORT = NO + +# For Microsoft's IDL there are propget and propput attributes to indicate +# getter and setter methods for a property. Setting this option to YES will make +# doxygen to replace the get and set methods by a property in the documentation. +# This will only work if the methods are indeed getting or setting a simple +# type. If this is not the case, or you want to show the methods anyway, you +# should set this option to NO. +# The default value is: YES. + +IDL_PROPERTY_SUPPORT = YES + +# If member grouping is used in the documentation and the DISTRIBUTE_GROUP_DOC +# tag is set to YES then doxygen will reuse the documentation of the first +# member in the group (if any) for the other members of the group. By default +# all members of a group must be documented explicitly. +# The default value is: NO. + +DISTRIBUTE_GROUP_DOC = NO + +# If one adds a struct or class to a group and this option is enabled, then also +# any nested class or struct is added to the same group. By default this option +# is disabled and one has to add nested compounds explicitly via \ingroup. +# The default value is: NO. + +GROUP_NESTED_COMPOUNDS = NO + +# Set the SUBGROUPING tag to YES to allow class member groups of the same type +# (for instance a group of public functions) to be put as a subgroup of that +# type (e.g. under the Public Functions section). Set it to NO to prevent +# subgrouping. Alternatively, this can be done per class using the +# \nosubgrouping command. +# The default value is: YES. 
+ +SUBGROUPING = YES + +# When the INLINE_GROUPED_CLASSES tag is set to YES, classes, structs and unions +# are shown inside the group in which they are included (e.g. using \ingroup) +# instead of on a separate page (for HTML and Man pages) or section (for LaTeX +# and RTF). +# +# Note that this feature does not work in combination with +# SEPARATE_MEMBER_PAGES. +# The default value is: NO. + +INLINE_GROUPED_CLASSES = NO + +# When the INLINE_SIMPLE_STRUCTS tag is set to YES, structs, classes, and unions +# with only public data fields or simple typedef fields will be shown inline in +# the documentation of the scope in which they are defined (i.e. file, +# namespace, or group documentation), provided this scope is documented. If set +# to NO, structs, classes, and unions are shown on a separate page (for HTML and +# Man pages) or section (for LaTeX and RTF). +# The default value is: NO. + +INLINE_SIMPLE_STRUCTS = NO + +# When TYPEDEF_HIDES_STRUCT tag is enabled, a typedef of a struct, union, or +# enum is documented as struct, union, or enum with the name of the typedef. So +# typedef struct TypeS {} TypeT, will appear in the documentation as a struct +# with name TypeT. When disabled the typedef will appear as a member of a file, +# namespace, or class. And the struct will be named TypeS. This can typically be +# useful for C code in case the coding convention dictates that all compound +# types are typedef'ed and only the typedef is referenced, never the tag name. +# The default value is: NO. + +TYPEDEF_HIDES_STRUCT = NO + +# The size of the symbol lookup cache can be set using LOOKUP_CACHE_SIZE. This +# cache is used to resolve symbols given their name and scope. Since this can be +# an expensive process and often the same symbol appears multiple times in the +# code, doxygen keeps a cache of pre-resolved symbols. If the cache is too small +# doxygen will become slower. If the cache is too large, memory is wasted. 
The +# cache size is given by this formula: 2^(16+LOOKUP_CACHE_SIZE). The valid range +# is 0..9, the default is 0, corresponding to a cache size of 2^16=65536 +# symbols. At the end of a run doxygen will report the cache usage and suggest +# the optimal cache size from a speed point of view. +# Minimum value: 0, maximum value: 9, default value: 0. + +LOOKUP_CACHE_SIZE = 0 + +# The NUM_PROC_THREADS specifies the number threads doxygen is allowed to use +# during processing. When set to 0 doxygen will based this on the number of +# cores available in the system. You can set it explicitly to a value larger +# than 0 to get more control over the balance between CPU load and processing +# speed. At this moment only the input processing can be done using multiple +# threads. Since this is still an experimental feature the default is set to 1, +# which effectively disables parallel processing. Please report any issues you +# encounter. Generating dot graphs in parallel is controlled by the +# DOT_NUM_THREADS setting. +# Minimum value: 0, maximum value: 32, default value: 1. + +NUM_PROC_THREADS = 1 + +#--------------------------------------------------------------------------- +# Build related configuration options +#--------------------------------------------------------------------------- + +# If the EXTRACT_ALL tag is set to YES, doxygen will assume all entities in +# documentation are documented, even if no documentation was available. Private +# class members and static file members will be hidden unless the +# EXTRACT_PRIVATE respectively EXTRACT_STATIC tags are set to YES. +# Note: This will also disable the warnings about undocumented members that are +# normally produced when WARNINGS is set to YES. +# The default value is: NO. + +EXTRACT_ALL = NO + +# If the EXTRACT_PRIVATE tag is set to YES, all private members of a class will +# be included in the documentation. +# The default value is: NO. 
+ +EXTRACT_PRIVATE = NO + +# If the EXTRACT_PRIV_VIRTUAL tag is set to YES, documented private virtual +# methods of a class will be included in the documentation. +# The default value is: NO. + +EXTRACT_PRIV_VIRTUAL = NO + +# If the EXTRACT_PACKAGE tag is set to YES, all members with package or internal +# scope will be included in the documentation. +# The default value is: NO. + +EXTRACT_PACKAGE = NO + +# If the EXTRACT_STATIC tag is set to YES, all static members of a file will be +# included in the documentation. +# The default value is: NO. + +EXTRACT_STATIC = NO + +# If the EXTRACT_LOCAL_CLASSES tag is set to YES, classes (and structs) defined +# locally in source files will be included in the documentation. If set to NO, +# only classes defined in header files are included. Does not have any effect +# for Java sources. +# The default value is: YES. + +EXTRACT_LOCAL_CLASSES = YES + +# This flag is only useful for Objective-C code. If set to YES, local methods, +# which are defined in the implementation section but not in the interface are +# included in the documentation. If set to NO, only methods in the interface are +# included. +# The default value is: NO. + +EXTRACT_LOCAL_METHODS = NO + +# If this flag is set to YES, the members of anonymous namespaces will be +# extracted and appear in the documentation as a namespace called +# 'anonymous_namespace{file}', where file will be replaced with the base name of +# the file that contains the anonymous namespace. By default anonymous namespace +# are hidden. +# The default value is: NO. + +EXTRACT_ANON_NSPACES = NO + +# If this flag is set to YES, the name of an unnamed parameter in a declaration +# will be determined by the corresponding definition. By default unnamed +# parameters remain unnamed in the output. +# The default value is: YES. + +RESOLVE_UNNAMED_PARAMS = YES + +# If the HIDE_UNDOC_MEMBERS tag is set to YES, doxygen will hide all +# undocumented members inside documented classes or files. 
If set to NO these +# members will be included in the various overviews, but no documentation +# section is generated. This option has no effect if EXTRACT_ALL is enabled. +# The default value is: NO. + +HIDE_UNDOC_MEMBERS = NO + +# If the HIDE_UNDOC_CLASSES tag is set to YES, doxygen will hide all +# undocumented classes that are normally visible in the class hierarchy. If set +# to NO, these classes will be included in the various overviews. This option +# has no effect if EXTRACT_ALL is enabled. +# The default value is: NO. + +HIDE_UNDOC_CLASSES = NO + +# If the HIDE_FRIEND_COMPOUNDS tag is set to YES, doxygen will hide all friend +# declarations. If set to NO, these declarations will be included in the +# documentation. +# The default value is: NO. + +HIDE_FRIEND_COMPOUNDS = NO + +# If the HIDE_IN_BODY_DOCS tag is set to YES, doxygen will hide any +# documentation blocks found inside the body of a function. If set to NO, these +# blocks will be appended to the function's detailed documentation block. +# The default value is: NO. + +HIDE_IN_BODY_DOCS = NO + +# The INTERNAL_DOCS tag determines if documentation that is typed after a +# \internal command is included. If the tag is set to NO then the documentation +# will be excluded. Set it to YES to include the internal documentation. +# The default value is: NO. + +INTERNAL_DOCS = NO + +# With the correct setting of option CASE_SENSE_NAMES doxygen will better be +# able to match the capabilities of the underlying filesystem. In case the +# filesystem is case sensitive (i.e. it supports files in the same directory +# whose names only differ in casing), the option must be set to YES to properly +# deal with such files in case they appear in the input. 
For filesystems that +# are not case sensitive the option should be be set to NO to properly deal with +# output files written for symbols that only differ in casing, such as for two +# classes, one named CLASS and the other named Class, and to also support +# references to files without having to specify the exact matching casing. On +# Windows (including Cygwin) and MacOS, users should typically set this option +# to NO, whereas on Linux or other Unix flavors it should typically be set to +# YES. +# The default value is: system dependent. + +CASE_SENSE_NAMES = YES + +# If the HIDE_SCOPE_NAMES tag is set to NO then doxygen will show members with +# their full class and namespace scopes in the documentation. If set to YES, the +# scope will be hidden. +# The default value is: NO. + +HIDE_SCOPE_NAMES = NO + +# If the HIDE_COMPOUND_REFERENCE tag is set to NO (default) then doxygen will +# append additional text to a page's title, such as Class Reference. If set to +# YES the compound reference will be hidden. +# The default value is: NO. + +HIDE_COMPOUND_REFERENCE= NO + +# If the SHOW_HEADERFILE tag is set to YES then the documentation for a class +# will show which file needs to be included to use the class. +# The default value is: YES. + +SHOW_HEADERFILE = YES + +# If the SHOW_INCLUDE_FILES tag is set to YES then doxygen will put a list of +# the files that are included by a file in the documentation of that file. +# The default value is: YES. + +SHOW_INCLUDE_FILES = YES + +# If the SHOW_GROUPED_MEMB_INC tag is set to YES then Doxygen will add for each +# grouped member an include statement to the documentation, telling the reader +# which file to include in order to use the member. +# The default value is: NO. + +SHOW_GROUPED_MEMB_INC = NO + +# If the FORCE_LOCAL_INCLUDES tag is set to YES then doxygen will list include +# files with double quotes in the documentation rather than with sharp brackets. +# The default value is: NO. 
+ +FORCE_LOCAL_INCLUDES = NO + +# If the INLINE_INFO tag is set to YES then a tag [inline] is inserted in the +# documentation for inline members. +# The default value is: YES. + +INLINE_INFO = YES + +# If the SORT_MEMBER_DOCS tag is set to YES then doxygen will sort the +# (detailed) documentation of file and class members alphabetically by member +# name. If set to NO, the members will appear in declaration order. +# The default value is: YES. + +SORT_MEMBER_DOCS = YES + +# If the SORT_BRIEF_DOCS tag is set to YES then doxygen will sort the brief +# descriptions of file, namespace and class members alphabetically by member +# name. If set to NO, the members will appear in declaration order. Note that +# this will also influence the order of the classes in the class list. +# The default value is: NO. + +SORT_BRIEF_DOCS = NO + +# If the SORT_MEMBERS_CTORS_1ST tag is set to YES then doxygen will sort the +# (brief and detailed) documentation of class members so that constructors and +# destructors are listed first. If set to NO the constructors will appear in the +# respective orders defined by SORT_BRIEF_DOCS and SORT_MEMBER_DOCS. +# Note: If SORT_BRIEF_DOCS is set to NO this option is ignored for sorting brief +# member documentation. +# Note: If SORT_MEMBER_DOCS is set to NO this option is ignored for sorting +# detailed member documentation. +# The default value is: NO. + +SORT_MEMBERS_CTORS_1ST = NO + +# If the SORT_GROUP_NAMES tag is set to YES then doxygen will sort the hierarchy +# of group names into alphabetical order. If set to NO the group names will +# appear in their defined order. +# The default value is: NO. + +SORT_GROUP_NAMES = NO + +# If the SORT_BY_SCOPE_NAME tag is set to YES, the class list will be sorted by +# fully-qualified names, including namespaces. If set to NO, the class list will +# be sorted only by class name, not including the namespace part. +# Note: This option is not very useful if HIDE_SCOPE_NAMES is set to YES. 
+# Note: This option applies only to the class list, not to the alphabetical +# list. +# The default value is: NO. + +SORT_BY_SCOPE_NAME = NO + +# If the STRICT_PROTO_MATCHING option is enabled and doxygen fails to do proper +# type resolution of all parameters of a function it will reject a match between +# the prototype and the implementation of a member function even if there is +# only one candidate or it is obvious which candidate to choose by doing a +# simple string match. By disabling STRICT_PROTO_MATCHING doxygen will still +# accept a match between prototype and implementation in such cases. +# The default value is: NO. + +STRICT_PROTO_MATCHING = NO + +# The GENERATE_TODOLIST tag can be used to enable (YES) or disable (NO) the todo +# list. This list is created by putting \todo commands in the documentation. +# The default value is: YES. + +GENERATE_TODOLIST = YES + +# The GENERATE_TESTLIST tag can be used to enable (YES) or disable (NO) the test +# list. This list is created by putting \test commands in the documentation. +# The default value is: YES. + +GENERATE_TESTLIST = YES + +# The GENERATE_BUGLIST tag can be used to enable (YES) or disable (NO) the bug +# list. This list is created by putting \bug commands in the documentation. +# The default value is: YES. + +GENERATE_BUGLIST = YES + +# The GENERATE_DEPRECATEDLIST tag can be used to enable (YES) or disable (NO) +# the deprecated list. This list is created by putting \deprecated commands in +# the documentation. +# The default value is: YES. + +GENERATE_DEPRECATEDLIST= YES + +# The ENABLED_SECTIONS tag can be used to enable conditional documentation +# sections, marked by \if ... \endif and \cond +# ... \endcond blocks. + +ENABLED_SECTIONS = + +# The MAX_INITIALIZER_LINES tag determines the maximum number of lines that the +# initial value of a variable or macro / define can have for it to appear in the +# documentation. 
If the initializer consists of more lines than specified here +# it will be hidden. Use a value of 0 to hide initializers completely. The +# appearance of the value of individual variables and macros / defines can be +# controlled using \showinitializer or \hideinitializer command in the +# documentation regardless of this setting. +# Minimum value: 0, maximum value: 10000, default value: 30. + +MAX_INITIALIZER_LINES = 30 + +# Set the SHOW_USED_FILES tag to NO to disable the list of files generated at +# the bottom of the documentation of classes and structs. If set to YES, the +# list will mention the files that were used to generate the documentation. +# The default value is: YES. + +SHOW_USED_FILES = YES + +# Set the SHOW_FILES tag to NO to disable the generation of the Files page. This +# will remove the Files entry from the Quick Index and from the Folder Tree View +# (if specified). +# The default value is: YES. + +SHOW_FILES = YES + +# Set the SHOW_NAMESPACES tag to NO to disable the generation of the Namespaces +# page. This will remove the Namespaces entry from the Quick Index and from the +# Folder Tree View (if specified). +# The default value is: YES. + +SHOW_NAMESPACES = YES + +# The FILE_VERSION_FILTER tag can be used to specify a program or script that +# doxygen should invoke to get the current version for each file (typically from +# the version control system). Doxygen will invoke the program by executing (via +# popen()) the command command input-file, where command is the value of the +# FILE_VERSION_FILTER tag, and input-file is the name of an input file provided +# by doxygen. Whatever the program writes to standard output is used as the file +# version. For an example see the documentation. + +FILE_VERSION_FILTER = + +# The LAYOUT_FILE tag can be used to specify a layout file which will be parsed +# by doxygen. The layout file controls the global structure of the generated +# output files in an output format independent way. 
To create the layout file +# that represents doxygen's defaults, run doxygen with the -l option. You can +# optionally specify a file name after the option, if omitted DoxygenLayout.xml +# will be used as the name of the layout file. See also section "Changing the +# layout of pages" for information. +# +# Note that if you run doxygen from a directory containing a file called +# DoxygenLayout.xml, doxygen will parse it automatically even if the LAYOUT_FILE +# tag is left empty. + +LAYOUT_FILE = + +# The CITE_BIB_FILES tag can be used to specify one or more bib files containing +# the reference definitions. This must be a list of .bib files. The .bib +# extension is automatically appended if omitted. This requires the bibtex tool +# to be installed. See also https://en.wikipedia.org/wiki/BibTeX for more info. +# For LaTeX the style of the bibliography can be controlled using +# LATEX_BIB_STYLE. To use this feature you need bibtex and perl available in the +# search path. See also \cite for info how to create references. + +CITE_BIB_FILES = + +#--------------------------------------------------------------------------- +# Configuration options related to warning and progress messages +#--------------------------------------------------------------------------- + +# The QUIET tag can be used to turn on/off the messages that are generated to +# standard output by doxygen. If QUIET is set to YES this implies that the +# messages are off. +# The default value is: NO. + +QUIET = NO + +# The WARNINGS tag can be used to turn on/off the warning messages that are +# generated to standard error (stderr) by doxygen. If WARNINGS is set to YES +# this implies that the warnings are on. +# +# Tip: Turn warnings on while writing the documentation. +# The default value is: YES. + +WARNINGS = YES + +# If the WARN_IF_UNDOCUMENTED tag is set to YES then doxygen will generate +# warnings for undocumented members. 
If EXTRACT_ALL is set to YES then this flag +# will automatically be disabled. +# The default value is: YES. + +WARN_IF_UNDOCUMENTED = YES + +# If the WARN_IF_DOC_ERROR tag is set to YES, doxygen will generate warnings for +# potential errors in the documentation, such as documenting some parameters in +# a documented function twice, or documenting parameters that don't exist or +# using markup commands wrongly. +# The default value is: YES. + +WARN_IF_DOC_ERROR = YES + +# If WARN_IF_INCOMPLETE_DOC is set to YES, doxygen will warn about incomplete +# function parameter documentation. If set to NO, doxygen will accept that some +# parameters have no documentation without warning. +# The default value is: YES. + +WARN_IF_INCOMPLETE_DOC = YES + +# This WARN_NO_PARAMDOC option can be enabled to get warnings for functions that +# are documented, but have no documentation for their parameters or return +# value. If set to NO, doxygen will only warn about wrong parameter +# documentation, but not about the absence of documentation. If EXTRACT_ALL is +# set to YES then this flag will automatically be disabled. See also +# WARN_IF_INCOMPLETE_DOC +# The default value is: NO. + +WARN_NO_PARAMDOC = NO + +# If the WARN_AS_ERROR tag is set to YES then doxygen will immediately stop when +# a warning is encountered. If the WARN_AS_ERROR tag is set to FAIL_ON_WARNINGS +# then doxygen will continue running as if WARN_AS_ERROR tag is set to NO, but +# at the end of the doxygen process doxygen will return with a non-zero status. +# Possible values are: NO, YES and FAIL_ON_WARNINGS. +# The default value is: NO. + +WARN_AS_ERROR = NO + +# The WARN_FORMAT tag determines the format of the warning messages that doxygen +# can produce. The string should contain the $file, $line, and $text tags, which +# will be replaced by the file and line number from which the warning originated +# and the warning text. 
Optionally the format may contain $version, which will
+# be replaced by the version of the file (if it could be obtained via
+# FILE_VERSION_FILTER).
+# The default value is: $file:$line: $text.
+
+WARN_FORMAT = "$file:$line: $text"
+
+# The WARN_LOGFILE tag can be used to specify a file to which warning and error
+# messages should be written. If left blank the output is written to standard
+# error (stderr). In case the file specified cannot be opened for writing the
+# warning and error messages are written to standard error. When the file - is
+# specified the warning and error messages are written to standard output
+# (stdout).
+
+WARN_LOGFILE =
+
+#---------------------------------------------------------------------------
+# Configuration options related to the input files
+#---------------------------------------------------------------------------
+
+# The INPUT tag is used to specify the files and/or directories that contain
+# documented source files. You may enter file names like myfile.cpp or
+# directories like /usr/src/myproject. Separate the files or directories with
+# spaces. See also FILE_PATTERNS and EXTENSION_MAPPING.
+# Note: If this tag is empty the current directory is searched.
+
+INPUT = include/sockpp
+
+# This tag can be used to specify the character encoding of the source files
+# that doxygen parses. Internally doxygen uses the UTF-8 encoding. Doxygen uses
+# libiconv (or the iconv built into libc) for the transcoding. See the libiconv
+# documentation (see:
+# https://www.gnu.org/software/libiconv/) for the list of possible encodings.
+# The default value is: UTF-8.
+
+INPUT_ENCODING = UTF-8
+
+# If the value of the INPUT tag contains directories, you can use the
+# FILE_PATTERNS tag to specify one or more wildcard patterns (like *.cpp and
+# *.h) to filter out the source-files in the directories.
+# +# Note that for custom extensions or not directly supported extensions you also +# need to set EXTENSION_MAPPING for the extension otherwise the files are not +# read by doxygen. +# +# Note the list of default checked file patterns might differ from the list of +# default file extension mappings. +# +# If left blank the following patterns are tested:*.c, *.cc, *.cxx, *.cpp, +# *.c++, *.java, *.ii, *.ixx, *.ipp, *.i++, *.inl, *.idl, *.ddl, *.odl, *.h, +# *.hh, *.hxx, *.hpp, *.h++, *.l, *.cs, *.d, *.php, *.php4, *.php5, *.phtml, +# *.inc, *.m, *.markdown, *.md, *.mm, *.dox (to be provided as doxygen C +# comment), *.py, *.pyw, *.f90, *.f95, *.f03, *.f08, *.f18, *.f, *.for, *.vhd, +# *.vhdl, *.ucf, *.qsf and *.ice. + +FILE_PATTERNS = *.h \ + *.hh \ + *.hxx \ + *.hpp + +# The RECURSIVE tag can be used to specify whether or not subdirectories should +# be searched for input files as well. +# The default value is: NO. + +RECURSIVE = YES + +# The EXCLUDE tag can be used to specify files and/or directories that should be +# excluded from the INPUT source files. This way you can easily exclude a +# subdirectory from a directory tree whose root is specified with the INPUT tag. +# +# Note that relative paths are relative to the directory from which doxygen is +# run. + +EXCLUDE = + +# The EXCLUDE_SYMLINKS tag can be used to select whether or not files or +# directories that are symbolic links (a Unix file system feature) are excluded +# from the input. +# The default value is: NO. + +EXCLUDE_SYMLINKS = NO + +# If the value of the INPUT tag contains directories, you can use the +# EXCLUDE_PATTERNS tag to specify one or more wildcard patterns to exclude +# certain files from those directories. 
+#
+# Note that the wildcards are matched against the file with absolute path, so to
+# exclude all test directories for example use the pattern */test/*
+
+EXCLUDE_PATTERNS =
+
+# The EXCLUDE_SYMBOLS tag can be used to specify one or more symbol names
+# (namespaces, classes, functions, etc.) that should be excluded from the
+# output. The symbol name can be a fully qualified name, a word, or if the
+# wildcard * is used, a substring. Examples: ANamespace, AClass,
+# ANamespace::AClass, ANamespace::*Test
+#
+# Note that the wildcards are matched against the file with absolute path, so to
+# exclude all test directories use the pattern */test/*
+
+EXCLUDE_SYMBOLS =
+
+# The EXAMPLE_PATH tag can be used to specify one or more files or directories
+# that contain example code fragments that are included (see the \include
+# command).
+
+EXAMPLE_PATH =
+
+# If the value of the EXAMPLE_PATH tag contains directories, you can use the
+# EXAMPLE_PATTERNS tag to specify one or more wildcard patterns (like *.cpp and
+# *.h) to filter out the source-files in the directories. If left blank all
+# files are included.
+
+EXAMPLE_PATTERNS =
+
+# If the EXAMPLE_RECURSIVE tag is set to YES then subdirectories will be
+# searched for input files to be used with the \include or \dontinclude commands
+# irrespective of the value of the RECURSIVE tag.
+# The default value is: NO.
+
+EXAMPLE_RECURSIVE = NO
+
+# The IMAGE_PATH tag can be used to specify one or more files or directories
+# that contain images that are to be included in the documentation (see the
+# \image command).
+
+IMAGE_PATH =
+
+# The INPUT_FILTER tag can be used to specify a program that doxygen should
+# invoke to filter for each input file. Doxygen will invoke the filter program
+# by executing (via popen()) the command:
+#
+# <filter> <input-file>
+#
+# where <filter> is the value of the INPUT_FILTER tag, and <input-file> is the
+# name of an input file. Doxygen will then use the output that the filter
+# program writes to standard output.
If FILTER_PATTERNS is specified, this tag +# will be ignored. +# +# Note that the filter must not add or remove lines; it is applied before the +# code is scanned, but not when the output code is generated. If lines are added +# or removed, the anchors will not be placed correctly. +# +# Note that for custom extensions or not directly supported extensions you also +# need to set EXTENSION_MAPPING for the extension otherwise the files are not +# properly processed by doxygen. + +INPUT_FILTER = + +# The FILTER_PATTERNS tag can be used to specify filters on a per file pattern +# basis. Doxygen will compare the file name with each pattern and apply the +# filter if there is a match. The filters are a list of the form: pattern=filter +# (like *.cpp=my_cpp_filter). See INPUT_FILTER for further information on how +# filters are used. If the FILTER_PATTERNS tag is empty or if none of the +# patterns match the file name, INPUT_FILTER is applied. +# +# Note that for custom extensions or not directly supported extensions you also +# need to set EXTENSION_MAPPING for the extension otherwise the files are not +# properly processed by doxygen. + +FILTER_PATTERNS = + +# If the FILTER_SOURCE_FILES tag is set to YES, the input filter (if set using +# INPUT_FILTER) will also be used to filter the input files that are used for +# producing the source files to browse (i.e. when SOURCE_BROWSER is set to YES). +# The default value is: NO. + +FILTER_SOURCE_FILES = NO + +# The FILTER_SOURCE_PATTERNS tag can be used to specify source filters per file +# pattern. A pattern will override the setting for FILTER_PATTERN (if any) and +# it is also possible to disable source filtering for a specific pattern using +# *.ext= (so without naming a filter). +# This tag requires that the tag FILTER_SOURCE_FILES is set to YES. 
+ +FILTER_SOURCE_PATTERNS = + +# If the USE_MDFILE_AS_MAINPAGE tag refers to the name of a markdown file that +# is part of the input, its contents will be placed on the main page +# (index.html). This can be useful if you have a project on for instance GitHub +# and want to reuse the introduction page also for the doxygen output. + +USE_MDFILE_AS_MAINPAGE = + +#--------------------------------------------------------------------------- +# Configuration options related to source browsing +#--------------------------------------------------------------------------- + +# If the SOURCE_BROWSER tag is set to YES then a list of source files will be +# generated. Documented entities will be cross-referenced with these sources. +# +# Note: To get rid of all source code in the generated output, make sure that +# also VERBATIM_HEADERS is set to NO. +# The default value is: NO. + +SOURCE_BROWSER = NO + +# Setting the INLINE_SOURCES tag to YES will include the body of functions, +# classes and enums directly into the documentation. +# The default value is: NO. + +INLINE_SOURCES = NO + +# Setting the STRIP_CODE_COMMENTS tag to YES will instruct doxygen to hide any +# special comment blocks from generated source code fragments. Normal C, C++ and +# Fortran comments will always remain visible. +# The default value is: YES. + +STRIP_CODE_COMMENTS = YES + +# If the REFERENCED_BY_RELATION tag is set to YES then for each documented +# entity all documented functions referencing it will be listed. +# The default value is: NO. + +REFERENCED_BY_RELATION = NO + +# If the REFERENCES_RELATION tag is set to YES then for each documented function +# all documented entities called/used by that function will be listed. +# The default value is: NO. + +REFERENCES_RELATION = NO + +# If the REFERENCES_LINK_SOURCE tag is set to YES and SOURCE_BROWSER tag is set +# to YES then the hyperlinks from functions in REFERENCES_RELATION and +# REFERENCED_BY_RELATION lists will link to the source code. 
Otherwise they will
+# link to the documentation.
+# The default value is: YES.
+
+REFERENCES_LINK_SOURCE = YES
+
+# If SOURCE_TOOLTIPS is enabled (the default) then hovering a hyperlink in the
+# source code will show a tooltip with additional information such as prototype,
+# brief description and links to the definition and documentation. Since this
+# will make the HTML file larger and loading of large files a bit slower, you
+# can opt to disable this feature.
+# The default value is: YES.
+# This tag requires that the tag SOURCE_BROWSER is set to YES.
+
+SOURCE_TOOLTIPS = YES
+
+# If the USE_HTAGS tag is set to YES then the references to source code will
+# point to the HTML generated by the htags(1) tool instead of doxygen's built-in
+# source browser. The htags tool is part of GNU's global source tagging system
+# (see https://www.gnu.org/software/global/global.html). You will need version
+# 4.8.6 or higher.
+#
+# To use it do the following:
+# - Install the latest version of global
+# - Enable SOURCE_BROWSER and USE_HTAGS in the configuration file
+# - Make sure the INPUT points to the root of the source tree
+# - Run doxygen as normal
+#
+# Doxygen will invoke htags (and that will in turn invoke gtags), so these
+# tools must be available from the command line (i.e. in the search path).
+#
+# The result: instead of the source browser generated by doxygen, the links to
+# source code will now point to the output of htags.
+# The default value is: NO.
+# This tag requires that the tag SOURCE_BROWSER is set to YES.
+
+USE_HTAGS = NO
+
+# If the VERBATIM_HEADERS tag is set to YES then doxygen will generate a
+# verbatim copy of the header file for each class for which an include is
+# specified. Set to NO to disable this.
+# See also: Section \class.
+# The default value is: YES.
+ +VERBATIM_HEADERS = YES + +#--------------------------------------------------------------------------- +# Configuration options related to the alphabetical class index +#--------------------------------------------------------------------------- + +# If the ALPHABETICAL_INDEX tag is set to YES, an alphabetical index of all +# compounds will be generated. Enable this if the project contains a lot of +# classes, structs, unions or interfaces. +# The default value is: YES. + +ALPHABETICAL_INDEX = YES + +# In case all classes in a project start with a common prefix, all classes will +# be put under the same header in the alphabetical index. The IGNORE_PREFIX tag +# can be used to specify a prefix (or a list of prefixes) that should be ignored +# while generating the index headers. +# This tag requires that the tag ALPHABETICAL_INDEX is set to YES. + +IGNORE_PREFIX = + +#--------------------------------------------------------------------------- +# Configuration options related to the HTML output +#--------------------------------------------------------------------------- + +# If the GENERATE_HTML tag is set to YES, doxygen will generate HTML output +# The default value is: YES. + +GENERATE_HTML = YES + +# The HTML_OUTPUT tag is used to specify where the HTML docs will be put. If a +# relative path is entered the value of OUTPUT_DIRECTORY will be put in front of +# it. +# The default directory is: html. +# This tag requires that the tag GENERATE_HTML is set to YES. + +HTML_OUTPUT = html + +# The HTML_FILE_EXTENSION tag can be used to specify the file extension for each +# generated HTML page (for example: .htm, .php, .asp). +# The default value is: .html. +# This tag requires that the tag GENERATE_HTML is set to YES. + +HTML_FILE_EXTENSION = .html + +# The HTML_HEADER tag can be used to specify a user-defined HTML header file for +# each generated HTML page. If the tag is left blank doxygen will generate a +# standard header. 
+#
+# To get valid HTML, the header file must include any scripts and style sheets
+# that doxygen needs, which depend on the configuration options used (e.g.
+# the setting GENERATE_TREEVIEW). It is highly recommended to start with a
+# default header using
+# doxygen -w html new_header.html new_footer.html new_stylesheet.css
+# YourConfigFile
+# and then modify the file new_header.html. See also section "Doxygen usage"
+# for information on how to generate the default header that doxygen normally
+# uses.
+# Note: The header is subject to change so you typically have to regenerate the
+# default header when upgrading to a newer version of doxygen. For a description
+# of the possible markers and block names see the documentation.
+# This tag requires that the tag GENERATE_HTML is set to YES.
+
+HTML_HEADER =
+
+# The HTML_FOOTER tag can be used to specify a user-defined HTML footer for each
+# generated HTML page. If the tag is left blank doxygen will generate a standard
+# footer. See HTML_HEADER for more information on how to generate a default
+# footer and what special commands can be used inside the footer. See also
+# section "Doxygen usage" for information on how to generate the default footer
+# that doxygen normally uses.
+# This tag requires that the tag GENERATE_HTML is set to YES.
+
+HTML_FOOTER =
+
+# The HTML_STYLESHEET tag can be used to specify a user-defined cascading style
+# sheet that is used by each HTML page. It can be used to fine-tune the look of
+# the HTML output. If left blank doxygen will generate a default style sheet.
+# See also section "Doxygen usage" for information on how to generate the style
+# sheet that doxygen normally uses.
+# Note: It is recommended to use HTML_EXTRA_STYLESHEET instead of this tag, as
+# it is more robust and this tag (HTML_STYLESHEET) will in the future become
+# obsolete.
+# This tag requires that the tag GENERATE_HTML is set to YES.
+ +HTML_STYLESHEET = + +# The HTML_EXTRA_STYLESHEET tag can be used to specify additional user-defined +# cascading style sheets that are included after the standard style sheets +# created by doxygen. Using this option one can overrule certain style aspects. +# This is preferred over using HTML_STYLESHEET since it does not replace the +# standard style sheet and is therefore more robust against future updates. +# Doxygen will copy the style sheet files to the output directory. +# Note: The order of the extra style sheet files is of importance (e.g. the last +# style sheet in the list overrules the setting of the previous ones in the +# list). For an example see the documentation. +# This tag requires that the tag GENERATE_HTML is set to YES. + +HTML_EXTRA_STYLESHEET = + +# The HTML_EXTRA_FILES tag can be used to specify one or more extra images or +# other source files which should be copied to the HTML output directory. Note +# that these files will be copied to the base HTML output directory. Use the +# $relpath^ marker in the HTML_HEADER and/or HTML_FOOTER files to load these +# files. In the HTML_STYLESHEET file, use the file name only. Also note that the +# files will be copied as-is; there are no commands or markers available. +# This tag requires that the tag GENERATE_HTML is set to YES. + +HTML_EXTRA_FILES = + +# The HTML_COLORSTYLE_HUE tag controls the color of the HTML output. Doxygen +# will adjust the colors in the style sheet and background images according to +# this color. Hue is specified as an angle on a color-wheel, see +# https://en.wikipedia.org/wiki/Hue for more information. For instance the value +# 0 represents red, 60 is yellow, 120 is green, 180 is cyan, 240 is blue, 300 +# purple, and 360 is red again. +# Minimum value: 0, maximum value: 359, default value: 220. +# This tag requires that the tag GENERATE_HTML is set to YES. 
+ +HTML_COLORSTYLE_HUE = 220 + +# The HTML_COLORSTYLE_SAT tag controls the purity (or saturation) of the colors +# in the HTML output. For a value of 0 the output will use gray-scales only. A +# value of 255 will produce the most vivid colors. +# Minimum value: 0, maximum value: 255, default value: 100. +# This tag requires that the tag GENERATE_HTML is set to YES. + +HTML_COLORSTYLE_SAT = 100 + +# The HTML_COLORSTYLE_GAMMA tag controls the gamma correction applied to the +# luminance component of the colors in the HTML output. Values below 100 +# gradually make the output lighter, whereas values above 100 make the output +# darker. The value divided by 100 is the actual gamma applied, so 80 represents +# a gamma of 0.8, The value 220 represents a gamma of 2.2, and 100 does not +# change the gamma. +# Minimum value: 40, maximum value: 240, default value: 80. +# This tag requires that the tag GENERATE_HTML is set to YES. + +HTML_COLORSTYLE_GAMMA = 80 + +# If the HTML_TIMESTAMP tag is set to YES then the footer of each generated HTML +# page will contain the date and time when the page was generated. Setting this +# to YES can help to show when doxygen was last run and thus if the +# documentation is up to date. +# The default value is: NO. +# This tag requires that the tag GENERATE_HTML is set to YES. + +HTML_TIMESTAMP = YES + +# If the HTML_DYNAMIC_MENUS tag is set to YES then the generated HTML +# documentation will contain a main index with vertical navigation menus that +# are dynamically created via JavaScript. If disabled, the navigation index will +# consists of multiple levels of tabs that are statically embedded in every HTML +# page. Disable this option to support browsers that do not have JavaScript, +# like the Qt help browser. +# The default value is: YES. +# This tag requires that the tag GENERATE_HTML is set to YES. 
+ +HTML_DYNAMIC_MENUS = YES + +# If the HTML_DYNAMIC_SECTIONS tag is set to YES then the generated HTML +# documentation will contain sections that can be hidden and shown after the +# page has loaded. +# The default value is: NO. +# This tag requires that the tag GENERATE_HTML is set to YES. + +HTML_DYNAMIC_SECTIONS = NO + +# With HTML_INDEX_NUM_ENTRIES one can control the preferred number of entries +# shown in the various tree structured indices initially; the user can expand +# and collapse entries dynamically later on. Doxygen will expand the tree to +# such a level that at most the specified number of entries are visible (unless +# a fully collapsed tree already exceeds this amount). So setting the number of +# entries 1 will produce a full collapsed tree by default. 0 is a special value +# representing an infinite number of entries and will result in a full expanded +# tree by default. +# Minimum value: 0, maximum value: 9999, default value: 100. +# This tag requires that the tag GENERATE_HTML is set to YES. + +HTML_INDEX_NUM_ENTRIES = 100 + +# If the GENERATE_DOCSET tag is set to YES, additional index files will be +# generated that can be used as input for Apple's Xcode 3 integrated development +# environment (see: +# https://developer.apple.com/xcode/), introduced with OSX 10.5 (Leopard). To +# create a documentation set, doxygen will generate a Makefile in the HTML +# output directory. Running make will produce the docset in that directory and +# running make install will install the docset in +# ~/Library/Developer/Shared/Documentation/DocSets so that Xcode will find it at +# startup. See https://developer.apple.com/library/archive/featuredarticles/Doxy +# genXcode/_index.html for more information. +# The default value is: NO. +# This tag requires that the tag GENERATE_HTML is set to YES. + +GENERATE_DOCSET = NO + +# This tag determines the name of the docset feed. 
A documentation feed provides +# an umbrella under which multiple documentation sets from a single provider +# (such as a company or product suite) can be grouped. +# The default value is: Doxygen generated docs. +# This tag requires that the tag GENERATE_DOCSET is set to YES. + +DOCSET_FEEDNAME = "Doxygen generated docs" + +# This tag determines the URL of the docset feed. A documentation feed provides +# an umbrella under which multiple documentation sets from a single provider +# (such as a company or product suite) can be grouped. +# This tag requires that the tag GENERATE_DOCSET is set to YES. + +DOCSET_FEEDURL = + +# This tag specifies a string that should uniquely identify the documentation +# set bundle. This should be a reverse domain-name style string, e.g. +# com.mycompany.MyDocSet. Doxygen will append .docset to the name. +# The default value is: org.doxygen.Project. +# This tag requires that the tag GENERATE_DOCSET is set to YES. + +DOCSET_BUNDLE_ID = org.doxygen.Project + +# The DOCSET_PUBLISHER_ID tag specifies a string that should uniquely identify +# the documentation publisher. This should be a reverse domain-name style +# string, e.g. com.mycompany.MyDocSet.documentation. +# The default value is: org.doxygen.Publisher. +# This tag requires that the tag GENERATE_DOCSET is set to YES. + +DOCSET_PUBLISHER_ID = org.doxygen.Publisher + +# The DOCSET_PUBLISHER_NAME tag identifies the documentation publisher. +# The default value is: Publisher. +# This tag requires that the tag GENERATE_DOCSET is set to YES. + +DOCSET_PUBLISHER_NAME = Publisher + +# If the GENERATE_HTMLHELP tag is set to YES then doxygen generates three +# additional HTML index files: index.hhp, index.hhc, and index.hhk. The +# index.hhp is a project file that can be read by Microsoft's HTML Help Workshop +# on Windows. In the beginning of 2021 Microsoft took the original page, with +# a.o. 
the download links, offline (the HTML help workshop was already many years
+# in maintenance mode). You can download the HTML help workshop from the web
+# archives at Installation executable (see:
+# http://web.archive.org/web/20160201063255/http://download.microsoft.com/downlo
+# ad/0/A/9/0A939EF6-E31C-430F-A3DF-DFAE7960D564/htmlhelp.exe).
+#
+# The HTML Help Workshop contains a compiler that can convert all HTML output
+# generated by doxygen into a single compiled HTML file (.chm). Compiled HTML
+# files are now used as the Windows 98 help format, and will replace the old
+# Windows help format (.hlp) on all Windows platforms in the future. Compressed
+# HTML files also contain an index, a table of contents, and you can search for
+# words in the documentation. The HTML workshop also contains a viewer for
+# compressed HTML files.
+# The default value is: NO.
+# This tag requires that the tag GENERATE_HTML is set to YES.
+
+GENERATE_HTMLHELP = NO
+
+# The CHM_FILE tag can be used to specify the file name of the resulting .chm
+# file. You can add a path in front of the file if the result should not be
+# written to the html output directory.
+# This tag requires that the tag GENERATE_HTMLHELP is set to YES.
+
+CHM_FILE =
+
+# The HHC_LOCATION tag can be used to specify the location (absolute path
+# including file name) of the HTML help compiler (hhc.exe). If non-empty,
+# doxygen will try to run the HTML help compiler on the generated index.hhp.
+# The file has to be specified with full path.
+# This tag requires that the tag GENERATE_HTMLHELP is set to YES.
+
+HHC_LOCATION =
+
+# The GENERATE_CHI flag controls if a separate .chi index file is generated
+# (YES) or that it should be included in the main .chm file (NO).
+# The default value is: NO.
+# This tag requires that the tag GENERATE_HTMLHELP is set to YES.
+
+GENERATE_CHI = NO
+
+# The CHM_INDEX_ENCODING is used to encode HtmlHelp index (hhk), content (hhc)
+# and project file content.
+# This tag requires that the tag GENERATE_HTMLHELP is set to YES. + +CHM_INDEX_ENCODING = + +# The BINARY_TOC flag controls whether a binary table of contents is generated +# (YES) or a normal table of contents (NO) in the .chm file. Furthermore it +# enables the Previous and Next buttons. +# The default value is: NO. +# This tag requires that the tag GENERATE_HTMLHELP is set to YES. + +BINARY_TOC = NO + +# The TOC_EXPAND flag can be set to YES to add extra items for group members to +# the table of contents of the HTML help documentation and to the tree view. +# The default value is: NO. +# This tag requires that the tag GENERATE_HTMLHELP is set to YES. + +TOC_EXPAND = NO + +# If the GENERATE_QHP tag is set to YES and both QHP_NAMESPACE and +# QHP_VIRTUAL_FOLDER are set, an additional index file will be generated that +# can be used as input for Qt's qhelpgenerator to generate a Qt Compressed Help +# (.qch) of the generated HTML documentation. +# The default value is: NO. +# This tag requires that the tag GENERATE_HTML is set to YES. + +GENERATE_QHP = NO + +# If the QHG_LOCATION tag is specified, the QCH_FILE tag can be used to specify +# the file name of the resulting .qch file. The path specified is relative to +# the HTML output folder. +# This tag requires that the tag GENERATE_QHP is set to YES. + +QCH_FILE = + +# The QHP_NAMESPACE tag specifies the namespace to use when generating Qt Help +# Project output. For more information please see Qt Help Project / Namespace +# (see: +# https://doc.qt.io/archives/qt-4.8/qthelpproject.html#namespace). +# The default value is: org.doxygen.Project. +# This tag requires that the tag GENERATE_QHP is set to YES. + +QHP_NAMESPACE = org.doxygen.Project + +# The QHP_VIRTUAL_FOLDER tag specifies the namespace to use when generating Qt +# Help Project output. For more information please see Qt Help Project / Virtual +# Folders (see: +# https://doc.qt.io/archives/qt-4.8/qthelpproject.html#virtual-folders). 
+# The default value is: doc. +# This tag requires that the tag GENERATE_QHP is set to YES. + +QHP_VIRTUAL_FOLDER = doc + +# If the QHP_CUST_FILTER_NAME tag is set, it specifies the name of a custom +# filter to add. For more information please see Qt Help Project / Custom +# Filters (see: +# https://doc.qt.io/archives/qt-4.8/qthelpproject.html#custom-filters). +# This tag requires that the tag GENERATE_QHP is set to YES. + +QHP_CUST_FILTER_NAME = + +# The QHP_CUST_FILTER_ATTRS tag specifies the list of the attributes of the +# custom filter to add. For more information please see Qt Help Project / Custom +# Filters (see: +# https://doc.qt.io/archives/qt-4.8/qthelpproject.html#custom-filters). +# This tag requires that the tag GENERATE_QHP is set to YES. + +QHP_CUST_FILTER_ATTRS = + +# The QHP_SECT_FILTER_ATTRS tag specifies the list of the attributes this +# project's filter section matches. Qt Help Project / Filter Attributes (see: +# https://doc.qt.io/archives/qt-4.8/qthelpproject.html#filter-attributes). +# This tag requires that the tag GENERATE_QHP is set to YES. + +QHP_SECT_FILTER_ATTRS = + +# The QHG_LOCATION tag can be used to specify the location (absolute path +# including file name) of Qt's qhelpgenerator. If non-empty doxygen will try to +# run qhelpgenerator on the generated .qhp file. +# This tag requires that the tag GENERATE_QHP is set to YES. + +QHG_LOCATION = + +# If the GENERATE_ECLIPSEHELP tag is set to YES, additional index files will be +# generated, together with the HTML files, they form an Eclipse help plugin. To +# install this plugin and make it available under the help contents menu in +# Eclipse, the contents of the directory containing the HTML and XML files needs +# to be copied into the plugins directory of eclipse. The name of the directory +# within the plugins directory should be the same as the ECLIPSE_DOC_ID value. +# After copying Eclipse needs to be restarted before the help appears. +# The default value is: NO. 
+# This tag requires that the tag GENERATE_HTML is set to YES. + +GENERATE_ECLIPSEHELP = NO + +# A unique identifier for the Eclipse help plugin. When installing the plugin +# the directory name containing the HTML and XML files should also have this +# name. Each documentation set should have its own identifier. +# The default value is: org.doxygen.Project. +# This tag requires that the tag GENERATE_ECLIPSEHELP is set to YES. + +ECLIPSE_DOC_ID = org.doxygen.Project + +# If you want full control over the layout of the generated HTML pages it might +# be necessary to disable the index and replace it with your own. The +# DISABLE_INDEX tag can be used to turn on/off the condensed index (tabs) at top +# of each HTML page. A value of NO enables the index and the value YES disables +# it. Since the tabs in the index contain the same information as the navigation +# tree, you can set this option to YES if you also set GENERATE_TREEVIEW to YES. +# The default value is: NO. +# This tag requires that the tag GENERATE_HTML is set to YES. + +DISABLE_INDEX = NO + +# The GENERATE_TREEVIEW tag is used to specify whether a tree-like index +# structure should be generated to display hierarchical information. If the tag +# value is set to YES, a side panel will be generated containing a tree-like +# index structure (just like the one that is generated for HTML Help). For this +# to work a browser that supports JavaScript, DHTML, CSS and frames is required +# (i.e. any modern browser). Windows users are probably better off using the +# HTML help feature. Via custom style sheets (see HTML_EXTRA_STYLESHEET) one can +# further fine tune the look of the index (see "Fine-tuning the output"). As an +# example, the default style sheet generated by doxygen has an example that +# shows how to put an image at the root of the tree instead of the PROJECT_NAME. 
+# Since the tree basically has the same information as the tab index, you could +# consider setting DISABLE_INDEX to YES when enabling this option. +# The default value is: NO. +# This tag requires that the tag GENERATE_HTML is set to YES. + +GENERATE_TREEVIEW = NO + +# When both GENERATE_TREEVIEW and DISABLE_INDEX are set to YES, then the +# FULL_SIDEBAR option determines if the side bar is limited to only the treeview +# area (value NO) or if it should extend to the full height of the window (value +# YES). Setting this to YES gives a layout similar to +# https://docs.readthedocs.io with more room for contents, but less room for the +# project logo, title, and description. If either GENERATE_TREEVIEW or +# DISABLE_INDEX is set to NO, this option has no effect. +# The default value is: NO. +# This tag requires that the tag GENERATE_HTML is set to YES. + +FULL_SIDEBAR = NO + +# The ENUM_VALUES_PER_LINE tag can be used to set the number of enum values that +# doxygen will group on one line in the generated HTML documentation. +# +# Note that a value of 0 will completely suppress the enum values from appearing +# in the overview section. +# Minimum value: 0, maximum value: 20, default value: 4. +# This tag requires that the tag GENERATE_HTML is set to YES. + +ENUM_VALUES_PER_LINE = 4 + +# If the treeview is enabled (see GENERATE_TREEVIEW) then this tag can be used +# to set the initial width (in pixels) of the frame in which the tree is shown. +# Minimum value: 0, maximum value: 1500, default value: 250. +# This tag requires that the tag GENERATE_HTML is set to YES. + +TREEVIEW_WIDTH = 250 + +# If the EXT_LINKS_IN_WINDOW option is set to YES, doxygen will open links to +# external symbols imported via tag files in a separate window. +# The default value is: NO. +# This tag requires that the tag GENERATE_HTML is set to YES. + +EXT_LINKS_IN_WINDOW = NO + +# If the OBFUSCATE_EMAILS tag is set to YES, doxygen will obfuscate email +# addresses. 
+# The default value is: YES. +# This tag requires that the tag GENERATE_HTML is set to YES. + +OBFUSCATE_EMAILS = YES + +# If the HTML_FORMULA_FORMAT option is set to svg, doxygen will use the pdf2svg +# tool (see https://github.com/dawbarton/pdf2svg) or inkscape (see +# https://inkscape.org) to generate formulas as SVG images instead of PNGs for +# the HTML output. These images will generally look nicer at scaled resolutions. +# Possible values are: png (the default) and svg (looks nicer but requires the +# pdf2svg or inkscape tool). +# The default value is: png. +# This tag requires that the tag GENERATE_HTML is set to YES. + +HTML_FORMULA_FORMAT = png + +# Use this tag to change the font size of LaTeX formulas included as images in +# the HTML documentation. When you change the font size after a successful +# doxygen run you need to manually remove any form_*.png images from the HTML +# output directory to force them to be regenerated. +# Minimum value: 8, maximum value: 50, default value: 10. +# This tag requires that the tag GENERATE_HTML is set to YES. + +FORMULA_FONTSIZE = 10 + +# Use the FORMULA_TRANSPARENT tag to determine whether or not the images +# generated for formulas are transparent PNGs. Transparent PNGs are not +# supported properly for IE 6.0, but are supported on all modern browsers. +# +# Note that when changing this option you need to delete any form_*.png files in +# the HTML output directory before the changes have effect. +# The default value is: YES. +# This tag requires that the tag GENERATE_HTML is set to YES. + +FORMULA_TRANSPARENT = YES + +# The FORMULA_MACROFILE can contain LaTeX \newcommand and \renewcommand commands +# to create new LaTeX commands to be used in formulas as building blocks. See +# the section "Including formulas" for details. 
+
+FORMULA_MACROFILE =
+
+# Enable the USE_MATHJAX option to render LaTeX formulas using MathJax (see
+# https://www.mathjax.org) which uses client side JavaScript for the rendering
+# instead of using pre-rendered bitmaps. Use this if you do not have LaTeX
+# installed or if you want the formulas to look prettier in the HTML output.
+# When enabled you may also need to install MathJax separately and configure
+# the path to it using the MATHJAX_RELPATH option.
+# The default value is: NO.
+# This tag requires that the tag GENERATE_HTML is set to YES.
+
+USE_MATHJAX = NO
+
+# With MATHJAX_VERSION it is possible to specify the MathJax version to be used.
+# Note that the different versions of MathJax have different requirements with
+# regards to the different settings, so it is possible that also other MathJax
+# settings have to be changed when switching between the different MathJax
+# versions.
+# Possible values are: MathJax_2 and MathJax_3.
+# The default value is: MathJax_2.
+# This tag requires that the tag USE_MATHJAX is set to YES.
+
+MATHJAX_VERSION = MathJax_2
+
+# When MathJax is enabled you can set the default output format to be used for
+# the MathJax output. For more details about the output format see MathJax
+# version 2 (see:
+# http://docs.mathjax.org/en/v2.7-latest/output.html) and MathJax version 3
+# (see:
+# http://docs.mathjax.org/en/latest/web/components/output.html).
+# Possible values are: HTML-CSS (which is slower, but has the best
+# compatibility. This is the name for MathJax version 2, for MathJax version 3
+# this will be translated into chtml), NativeMML (i.e. MathML. Only supported
+# for MathJax 2. For MathJax version 3 chtml will be used instead.), chtml (This
+# is the name for MathJax version 3, for MathJax version 2 this will be
+# translated into HTML-CSS) and SVG.
+# The default value is: HTML-CSS.
+# This tag requires that the tag USE_MATHJAX is set to YES.
+
+MATHJAX_FORMAT = HTML-CSS
+
+# When MathJax is enabled you need to specify the location relative to the HTML
+# output directory using the MATHJAX_RELPATH option. The destination directory
+# should contain the MathJax.js script. For instance, if the mathjax directory
+# is located at the same level as the HTML output directory, then
+# MATHJAX_RELPATH should be ../mathjax. The default value points to the MathJax
+# Content Delivery Network so you can quickly see the result without installing
+# MathJax. However, it is strongly recommended to install a local copy of
+# MathJax from https://www.mathjax.org before deployment. The default value is:
+# - in case of MathJax version 2: https://cdn.jsdelivr.net/npm/mathjax@2
+# - in case of MathJax version 3: https://cdn.jsdelivr.net/npm/mathjax@3
+# This tag requires that the tag USE_MATHJAX is set to YES.
+
+MATHJAX_RELPATH = https://cdn.jsdelivr.net/npm/mathjax@2
+
+# The MATHJAX_EXTENSIONS tag can be used to specify one or more MathJax
+# extension names that should be enabled during MathJax rendering. For example
+# for MathJax version 2 (see https://docs.mathjax.org/en/v2.7-latest/tex.html
+# #tex-and-latex-extensions):
+# MATHJAX_EXTENSIONS = TeX/AMSmath TeX/AMSsymbols
+# For example for MathJax version 3 (see
+# http://docs.mathjax.org/en/latest/input/tex/extensions/index.html):
+# MATHJAX_EXTENSIONS = ams
+# This tag requires that the tag USE_MATHJAX is set to YES.
+
+MATHJAX_EXTENSIONS =
+
+# The MATHJAX_CODEFILE tag can be used to specify a file with javascript pieces
+# of code that will be used on startup of the MathJax code. See the MathJax site
+# (see:
+# http://docs.mathjax.org/en/v2.7-latest/output.html) for more details. For an
+# example see the documentation.
+# This tag requires that the tag USE_MATHJAX is set to YES.
+
+MATHJAX_CODEFILE =
+
+# When the SEARCHENGINE tag is enabled doxygen will generate a search box for
+# the HTML output.
The underlying search engine uses javascript and DHTML and
+# should work on any modern browser. Note that when using HTML help
+# (GENERATE_HTMLHELP), Qt help (GENERATE_QHP), or docsets (GENERATE_DOCSET)
+# there is already a search function so this one should typically be disabled.
+# For large projects the javascript based search engine can be slow, then
+# enabling SERVER_BASED_SEARCH may provide a better solution. It is possible to
+# search using the keyboard; to jump to the search box use <access key> + S
+# (what the <access key> is depends on the OS and browser, but it is typically
+# <CTRL>, <ALT>/