Fix Makefile for tf2.16.1 support #74

Open · wants to merge 2 commits into master
34 changes: 28 additions & 6 deletions README.md
@@ -77,20 +77,42 @@ Repeat compilation.

If only building for native systems, it is possible to significantly reduce the complexity of the build by removing Bazel (and Docker). This simple approach builds only what is needed, removes build-time dependency fetching, increases the speed, and uses upstream Debian packages.

To prepare your system, you'll need the following packages (all available on Debian Bookworm / Ubuntu 24.04):
```
sudo apt install libabsl-dev libusb-1.0-0-dev xxd
```

Next, build [FlatBuffers](https://github.com/google/flatbuffers) v23.5.26, the version required by TensorFlow v2.16.1, from source:

```
git clone --depth 1 --branch v23.5.26 https://github.com/google/flatbuffers.git
cd flatbuffers/
mkdir build && cd build
cmake .. \
  -DFLATBUFFERS_BUILD_SHAREDLIB=ON \
  -DFLATBUFFERS_BUILD_TESTS=OFF \
  -DCMAKE_BUILD_TYPE=Release \
  -DCMAKE_INSTALL_PREFIX=/usr/local
make -j$(nproc)
sudo make install
```
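TensorFlow pins FlatBuffers tightly, so it is worth confirming that the installed `flatc` reports exactly 23.5.26 (e.g. by running `flatc --version`). As a rough sketch of the kind of check you might script, assuming the usual "flatc version X.Y.Z" output format:

```python
import re

# Hedged sketch: check that a `flatc --version` style string matches the
# pinned release. The "flatc version X.Y.Z" output format is an assumption.
def flatc_matches(version_output: str, pinned: str = "23.5.26") -> bool:
    m = re.search(r"flatc version (\d+\.\d+\.\d+)", version_output)
    return bool(m) and m.group(1) == pinned
```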

Next, you'll need to clone the [TensorFlow repo](https://github.com/tensorflow/tensorflow) at the desired checkout (using TF head isn't advised). If you are planning to use the libcoral or pycoral libraries, this should match the version in those repos' WORKSPACE files. For example, if you are using TF2.16.1, we can find that [tag in the TF repo](https://github.com/tensorflow/tensorflow/tree/v2.16.1) and then check out that tag:
```
git clone --depth 1 --branch v2.16.1 https://github.com/tensorflow/tensorflow
```
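If you want to double-check that the tag you cloned matches the commit pinned by libcoral or pycoral, a small script can pull the pin out of their workspace files. This is only a sketch: the `TENSORFLOW_COMMIT` variable name is an assumption about how those repos record their pin.

```python
import re

# Hedged sketch: extract a pinned TensorFlow commit from WORKSPACE /
# workspace.bzl text. TENSORFLOW_COMMIT is an assumed variable name.
def pinned_tf_commit(workspace_text: str):
    m = re.search(r'TENSORFLOW_COMMIT\s*=\s*"([0-9a-f]{40})"', workspace_text)
    return m.group(1) if m else None
```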

To build the library:
```
git clone https://github.com/google-coral/libedgetpu.git
cd libedgetpu
TFROOT=<Directory of Tensorflow> make -f makefile_build/Makefile -j$(nproc)
```
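The Makefile drops the result under `out/direct/<cpu>/libedgetpu.so.1.0`, where the CPU directory is derived from `uname -m`. As a rough illustration (directory names assumed from the build rules; "k8" is the conventional Bazel-style name for x86_64), the path for the running machine can be derived like this:

```python
import platform

# Sketch: map `uname -m` style machine names to the assumed CPU directory
# used by the Makefile; fall back to the raw machine name if unmapped.
_CPU_DIRS = {"x86_64": "k8", "aarch64": "aarch64", "armv7l": "armv7a"}

def built_library_path(build_dir: str = "out/direct") -> str:
    machine = platform.machine()
    cpu = _CPU_DIRS.get(machine, machine)
    return f"{build_dir}/{cpu}/libedgetpu.so.1.0"
```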

To build packages for Debian/Ubuntu:
```
debuild -us -uc -tc -b -d
```

## Support
29 changes: 20 additions & 9 deletions makefile_build/Makefile
@@ -14,6 +14,16 @@ CC=gcc
CXX=g++
FLATC=flatc

ARCH := $(shell uname -m)
ifeq ($(ARCH),armv7l)
CPU := armv7a
else ifeq ($(ARCH),aarch64)
CPU := aarch64
else ifeq ($(ARCH),x86_64)
CPU := k8
endif


LIBEDGETPU_CFLAGS := \
-fPIC \
-Wall \
@@ -22,7 +32,7 @@ LIBEDGETPU_CFLAGS := \
LIBEDGETPU_CXXFLAGS := \
-fPIC \
-Wall \
-std=c++17 \
-DDARWINN_PORT_DEFAULT

LIBEDGETPU_LDFLAGS := \
@@ -32,7 +42,7 @@ LIBEDGETPU_LDFLAGS := \
-Wl,--version-script=$(BUILDROOT)/tflite/public/libedgetpu.lds \
-fuse-ld=gold \
-lflatbuffers \
-labsl_flags_usage \
-labsl_flags_internal \
-labsl_flags_reflection \
-labsl_flags_marshalling \
@@ -59,7 +69,6 @@ LIBEDGETPU_INCLUDES := \
$(BUILDDIR)/$(BUILDROOT)
LIBEDGETPU_INCLUDES := $(addprefix -I,$(LIBEDGETPU_INCLUDES))

LIBEDGETPU_COBJS := $(call TOBUILDDIR,$(patsubst %.c,%.o,$(LIBEDGETPU_CSRCS)))

LIBEDGETPU_CCSRCS := \
@@ -140,6 +149,8 @@ LIBEDGETPU_CCSRCS := \
$(BUILDROOT)/tflite/edgetpu_c.cc \
$(BUILDROOT)/tflite/edgetpu_delegate_for_custom_op.cc \
$(BUILDROOT)/tflite/edgetpu_delegate_for_custom_op_tflite_plugin.cc \
$(TFROOT)/tensorflow/lite/c/common_internal.cc \
$(TFROOT)/tensorflow/lite/array.cc \
$(TFROOT)/tensorflow/lite/util.cc
LIBEDGETPU_CCOBJS := $(call TOBUILDDIR,$(patsubst %.cc,%.o,$(LIBEDGETPU_CCSRCS)))

@@ -202,13 +213,13 @@ $(LIBEDGETPU_STD_CCOBJS) : $(BUILDDIR)/%-throttled.o: %.cc
@$(CXX) -DTHROTTLE_EDGE_TPU $(LIBEDGETPU_CXXFLAGS) $(LIBEDGETPU_INCLUDES) -c $< -MD -MT $@ -MF $(@:%o=%d) -o $@

libedgetpu: | firmware $(LIBEDGETPU_FLATC_OBJS) $(LIBEDGETPU_COBJS) $(LIBEDGETPU_CCOBJS) $(LIBEDGETPU_MAX_CCOBJS)
@mkdir -p $(BUILDDIR)/direct/$(CPU)
@echo "Building libedgetpu.so"
@$(CXX) $(LIBEDGETPU_CCFLAGS) $(LIBEDGETPU_LDFLAGS) $(LIBEDGETPU_COBJS) $(LIBEDGETPU_CCOBJS) $(LIBEDGETPU_MAX_CCOBJS) -o $(BUILDDIR)/direct/$(CPU)/libedgetpu.so.1.0
@ln -sf $(BUILDDIR)/direct/$(CPU)/libedgetpu.so.1.0 $(BUILDDIR)/direct/$(CPU)/libedgetpu.so.1

libedgetpu-throttled: | firmware $(LIBEDGETPU_FLATC_OBJS) $(LIBEDGETPU_COBJS) $(LIBEDGETPU_CCOBJS) $(LIBEDGETPU_STD_CCOBJS)
@mkdir -p $(BUILDDIR)/throttled/$(CPU)
@echo "Building throttled libedgetpu.so"
@$(CXX) $(LIBEDGETPU_CCFLAGS) $(LIBEDGETPU_LDFLAGS) $(LIBEDGETPU_COBJS) $(LIBEDGETPU_CCOBJS) $(LIBEDGETPU_STD_CCOBJS) -o $(BUILDDIR)/throttled/$(CPU)/libedgetpu.so.1.0
@ln -sf $(BUILDDIR)/throttled/$(CPU)/libedgetpu.so.1.0 $(BUILDDIR)/throttled/$(CPU)/libedgetpu.so.1
32 changes: 27 additions & 5 deletions makefile_build/README.md
@@ -2,18 +2,40 @@

If only building for native systems, it is possible to significantly reduce the complexity of the build by removing Bazel (and Docker). This simple approach builds only what is needed, removes build-time dependency fetching, increases the speed, and uses upstream Debian packages.

To prepare your system, you'll need the following packages (all available on Debian Bookworm / Ubuntu 24.04):
```
sudo apt install libabsl-dev libusb-1.0-0-dev xxd
```

Next, build [FlatBuffers](https://github.com/google/flatbuffers) v23.5.26, the version required by TensorFlow v2.16.1, from source:

```
git clone --depth 1 --branch v23.5.26 https://github.com/google/flatbuffers.git
cd flatbuffers/
mkdir build && cd build
cmake .. \
  -DFLATBUFFERS_BUILD_SHAREDLIB=ON \
  -DFLATBUFFERS_BUILD_TESTS=OFF \
  -DCMAKE_BUILD_TYPE=Release \
  -DCMAKE_INSTALL_PREFIX=/usr/local
make -j$(nproc)
sudo make install
```

Next, you'll need to clone the [TensorFlow repo](https://github.com/tensorflow/tensorflow) at the desired checkout (using TF head isn't advised). If you are planning to use the libcoral or pycoral libraries, this should match the version in those repos' WORKSPACE files. For example, if you are using TF2.16.1, we can find that [tag in the TF repo](https://github.com/tensorflow/tensorflow/tree/v2.16.1) and then check out that tag:
```
git clone --depth 1 --branch v2.16.1 https://github.com/tensorflow/tensorflow
```

To build the library:
```
git clone https://github.com/google-coral/libedgetpu.git
cd libedgetpu
TFROOT=<Directory of Tensorflow> make -f makefile_build/Makefile -j$(nproc)
```

To build packages for Debian/Ubuntu:
```
debuild -us -uc -tc -b -d
```