
Support Apple Silicon #9

Merged 6 commits into main on Sep 23, 2021
Conversation

jonatanklosko
Member

See #8.

@jonatanklosko
Member Author

jonatanklosko commented Sep 23, 2021

@dannote could you try running XLA_BUILD=true USE_BAZEL_VERSION=4.2.1 mix compile on this branch?

d="$${d/llvm\/include\/llvm/llvm}"
d="$${d/llvm\/include\/llvm-c/llvm-c}"
d="$${d/llvm\\/include\\/llvm/llvm}"
d="$${d/llvm\\/include\\/llvm-c/llvm-c}"
@jonatanklosko (Member Author) commented on the diff:

@seanmor5 my hypothesis is that the escape in \/ is first interpreted by Bazel, and for Bazel \/ is an invalid escape. On the other hand, \\ is a valid escape and appears as \ in the actual script.
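A minimal illustration of the substitution in question. In the Makefile fragment above, `$$` escapes to a single `$` for the shell, and with the `\\` variant Bazel emits a single backslash, so the script that actually runs contains the pattern below (the example path is made up):

```shell
# Bash pattern substitution as it looks after Bazel's own escaping is applied.
# The slashes inside the pattern "llvm/include/llvm" are backslash-escaped so
# they are not parsed as ${var/pattern/replacement} delimiters.
d="llvm/include/llvm/IR/Module.h"
d="${d/llvm\/include\/llvm/llvm}"
echo "$d"   # prints llvm/IR/Module.h
```

With the single-backslash variant, Bazel sees `\/` before the shell does and rejects it as an invalid escape, which matches the hypothesis above.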

@jonatanklosko
Member Author

jonatanklosko commented Sep 23, 2021

I bumped the tensorflow version as per #8 (comment).

@dannote

dannote commented Sep 23, 2021

The escape sequences are good now, and the build succeeded. I'll try to rebuild it from scratch to be sure.

@dannote

dannote commented Sep 23, 2021

Target //tensorflow/compiler/xla/extension:xla_extension up-to-date:
  bazel-bin/tensorflow/compiler/xla/extension/xla_extension.tar.gz
INFO: Elapsed time: 1726.330s, Critical Path: 134.30s
INFO: 9549 processes: 1867 internal, 7682 local.
INFO: Build completed successfully, 9549 total actions
INFO: Build completed successfully, 9549 total actions

All is good, thank you!

@dannote

dannote commented Sep 23, 2021

Nitpick: USE_BAZEL_VERSION is a Bazelisk variable, not a Bazel one.

@seanmor5
Contributor

@dannote Are you able to build EXLA with the compiled XLA package?

@jonatanklosko
Member Author

jonatanklosko commented Sep 23, 2021

We changed the cache location, so the current exla xla dependency won't pick up the version you built.

@dannote you can try running this in an iex session started with XLA_BUILD=true:

Mix.install([
  {:exla, "~> 0.1.0-dev", github: "elixir-nx/nx", sparse: "exla"},
  {:nx, "~> 0.1.0-dev", github: "elixir-nx/nx", sparse: "nx", override: true},
  {:xla, "~> 0.1.1", github: "elixir-nx/xla", branch: "jk-m1", override: true}
], force: true)

defmodule Test do
  import Nx.Defn

  @defn_compiler EXLA
  defn norm(x), do: Nx.dot(x, x)
end

Test.norm(Nx.tensor([1, 1]))

(I will make a release shortly so the archives can build)

> Nitpick: USE_BAZEL_VERSION is a Bazelisk variable, not a Bazel one.

Ah right, thanks :)

@dannote

dannote commented Sep 23, 2021

@seanmor5 I tried the following:

diff --git a/exla/mix.exs b/exla/mix.exs
index b1eb089..419e84a 100644
--- a/exla/mix.exs
+++ b/exla/mix.exs
@@ -40,7 +40,7 @@ defmodule EXLA.MixProject do
   defp deps do
     [
       {:nx, path: "../nx"},
-      {:xla, "~> 0.1.0", runtime: false},
+      {:xla, "~> 0.1.0", runtime: false, github: "elixir-nx/xla", branch: "jk-m1"},
       {:elixir_make, "~> 0.6", runtime: false},
       {:benchee, "~> 1.0", only: :dev},
       {:ex_doc, "~> 0.23", only: :dev}
diff --git a/exla/mix.lock b/exla/mix.lock
index 1a4fd73..7fb7ff9 100644
--- a/exla/mix.lock
+++ b/exla/mix.lock
@@ -7,5 +7,5 @@
   "makeup": {:hex, :makeup, "1.0.5", "d5a830bc42c9800ce07dd97fa94669dfb93d3bf5fcf6ea7a0c67b2e0e4a7f26c
   "makeup_elixir": {:hex, :makeup_elixir, "0.15.1", "b5888c880d17d1cc3e598f05cdb5b5a91b7b17ac4eaf5f297
   "nimble_parsec": {:hex, :nimble_parsec, "1.1.0", "3a6fca1550363552e54c216debb6a9e95bd8d32348938e13de
-  "xla": {:hex, :xla, "0.1.0", "c8ca0b6fc8442bcb96196d20e214ba7c90780d88aa33299133f66fd0786938ec", [:m
+  "xla": {:git, "https://github.com/elixir-nx/xla.git", "b65225885738478343753924b6dee684a81f61db", [b
 }

But it failed with:

** (Mix) failed to extract xla archive, reason: {"/Users/dannote/Library/Caches/xla/0.1.1/cache/build/xla_extension-aarch64-darwin-cpu.tar.gz", :enoent}

It looks like it put the cache in the wrong location.

@dannote

dannote commented Sep 23, 2021

@jonatanklosko Aha, I'll try once again

@jonatanklosko
Member Author

Hmm, the location looks good; that's where the build should end up when compiled on this branch. Is there anything in /Users/dannote/Library/Caches/xla?

@jonatanklosko
Member Author

By the way, compiling inside the exla project is actually even better than using iex.

@dannote

dannote commented Sep 23, 2021

Same with iex:

==> exla
Unpacking /Users/dannote/Library/Caches/xla/0.1.1/cache/build/xla_extension-aarch64-darwin-cpu.tar.gz into /Users/dannote/Library/Caches/mix/installs/elixir-1.12.0-erts-12.0.3/e775e5b67aef19d0dc7646c8d1a3c7b7/deps/exla/exla/cache
could not compile dependency :exla, "mix compile" failed. You can recompile this dependency with "mix deps.compile exla", update it with "mix deps.update exla" or clean it with "mix deps.clean exla"
** (Mix.Error) failed to extract xla archive, reason: {"/Users/dannote/Library/Caches/xla/0.1.1/cache/build/xla_extension-aarch64-darwin-cpu.tar.gz", :enoent}
    (mix 1.12.0) lib/mix.ex:454: Mix.raise/2
    /Users/dannote/Library/Caches/mix/installs/elixir-1.12.0-erts-12.0.3/e775e5b67aef19d0dc7646c8d1a3c7b7/deps/exla/exla/mix.exs:86: EXLA.MixProject.compile/1
    (mix 1.12.0) lib/mix/task.ex:458: Mix.Task.run_alias/5
    (mix 1.12.0) lib/mix/tasks/compile.all.ex:90: Mix.Tasks.Compile.All.run_compiler/2
    (mix 1.12.0) lib/mix/tasks/compile.all.ex:70: Mix.Tasks.Compile.All.compile/4
    (mix 1.12.0) lib/mix/tasks/compile.all.ex:57: Mix.Tasks.Compile.All.with_logger_app/2
    (mix 1.12.0) lib/mix/tasks/compile.all.ex:35: Mix.Tasks.Compile.All.run/1
    (mix 1.12.0) lib/mix/task.ex:394: anonymous fn/3 in Mix.Task.run_task/3
iex(1)> 

I have it in /Users/dannote/Library/Caches/xla/cache/build/xla_extension-aarch64-darwin-cpu.tar.gz
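Side by side, the two layouts from the messages above (with `$HOME` standing in for `/Users/dannote`); the extractor expects a version segment that the build step did not create:

```shell
# Paths taken from the error and the report above; only the "0.1.1"
# version segment differs between them.
actual="$HOME/Library/Caches/xla/cache/build/xla_extension-aarch64-darwin-cpu.tar.gz"
expected="$HOME/Library/Caches/xla/0.1.1/cache/build/xla_extension-aarch64-darwin-cpu.tar.gz"
[ "$actual" = "$expected" ] || echo "mismatch: versioned directory missing"
```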

@jonatanklosko
Member Author

Ah, I see, so that's something to fix. In the meantime you can create a 0.1.1 directory and move the cache there, so the location matches the one in the error.

@dannote

dannote commented Sep 23, 2021

I did:

mkdir -p ~/Library/Caches/xla/cache/0.1.1/cache/build
cp ~/Library/Caches/xla/cache/build/xla_extension-aarch64-darwin-cpu.tar.gz ~/Library/Caches/xla/cache/0.1.1/cache/build

Now I'm stuck at:

==> exla
Unpacking /Users/dannote/Library/Caches/xla/0.1.1/cache/build/xla_extension-aarch64-darwin-cpu.tar.gz into /Users/dannote/Development/nx/exla/cache
mkdir -p /Users/dannote/Development/nx/exla/_build/dev/lib/exla/priv
ln -sf /Users/dannote/Development/nx/exla/cache/xla_extension/lib /Users/dannote/Development/nx/exla/_build/dev/lib/exla/priv/lib
c++ -fPIC -I/Users/dannote/.asdf/installs/erlang/24.0.3/erts-12.0.3/include -isystem cache/xla_extension/include -O3 -Wall -Wextra -Wno-unused-parameter -Wno-missing-field-initializers -Wno-comment -shared -std=c++14 c_src/exla/exla.cc c_src/exla/exla_nif_util.cc c_src/exla/exla_client.cc -o /Users/dannote/Development/nx/exla/_build/dev/lib/exla/priv/libexla.so -L/Users/dannote/Development/nx/exla/_build/dev/lib/exla/priv/lib -lxla_extension -flat_namespace -undefined suppress
In file included from c_src/exla/exla.cc:5:
c_src/exla/exla_log_sink.h:63:12: warning: variable 'msg' is used uninitialized whenever switch case is taken [-Wsometimes-uninitialized]
      case absl::LogSeverity::kFatal:
           ^~~~~~~~~~~~~~~~~~~~~~~~~
c_src/exla/exla_log_sink.h:74:39: note: uninitialized use occurs here
    enif_send(env_, &sink_pid_, NULL, msg);
                                      ^~~
c_src/exla/exla_log_sink.h:49:21: note: initialize the variable 'msg' to silence this warning
    ERL_NIF_TERM msg;
                    ^
                     = 0
In file included from c_src/exla/exla.cc:4:
c_src/exla/exla_client.h:29:20: warning: private field 'device_' is not used [-Wunused-private-field]
  xla::PjRtDevice* device_;
                   ^
c_src/exla/exla_client.h:30:15: warning: private field 'client_' is not used [-Wunused-private-field]
  ExlaClient* client_;
              ^
3 warnings generated.
c_src/exla/exla_client.cc:21:3: warning: ignoring return value of function declared with 'warn_unused_result' attribute [-Wunused-result]
  buffer_->BlockHostUntilReady();
  ^~~~~~~~~~~~~~~~~~~~~~~~~~~~
c_src/exla/exla_client.cc:112:54: warning: field 'fingerprint_' will be initialized after field 'client_' [-Wreorder-ctor]
                                                     fingerprint_(std::move(fingerprint)),
                                                     ^
In file included from c_src/exla/exla_client.cc:1:
c_src/exla/exla_client.h:29:20: warning: private field 'device_' is not used [-Wunused-private-field]
  xla::PjRtDevice* device_;
                   ^
c_src/exla/exla_client.h:30:15: warning: private field 'client_' is not used [-Wunused-private-field]
  ExlaClient* client_;
              ^
4 warnings generated.
ld: warning: dylib (/Users/dannote/Development/nx/exla/_build/dev/lib/exla/priv/lib/libxla_extension.so) was built for newer macOS version (12.0) than being linked (11.3)
install_name_tool -change bazel-out/darwin-opt/bin/tensorflow/compiler/xla/extension/libxla_extension.so @loader_path/lib/libxla_extension.so /Users/dannote/Development/nx/exla/_build/dev/lib/exla/priv/libexla.so
Compiling 15 files (.ex)

19:24:10.402 [warn]  The on_load function for module Elixir.EXLA.NIF returned:
{:error,
 {:load_failed,
  'Failed to load NIF library: \'dlopen(/Users/dannote/Development/nx/exla/_build/dev/lib/exla/priv/libexla.so, 0x0002): Library not loaded: bazel-out/darwin_arm64-opt/bin/tensorflow/compiler/xla/extension/libxla_extension.so\n  Referenced from: /Users/dannote/Development/nx/exla/_build/dev/lib/exla/priv/libexla.so\n  Reason: tried: \'bazel-out/darwin_arm64-opt/bin/tensorflow/compiler/xla/extension/libxla_extension.so\' (no such file), \'/usr/local/lib/libxla_extension.so\' (no such file), \'/usr/lib/libxla_extension.so\' (no such file)\''}}

Maybe that's because I am on the macOS beta.

@jonatanklosko
Member Author

Looks similar to what @josevalim fixed in elixir-nx/nx#463.

You can try modifying the path here: https://github.com/elixir-nx/nx/blob/9ae7b1cf8eca59fcae85d4b211637b4b966f61ca/exla/Makefile#L25 to bazel-out/darwin_arm64-opt/bin/tensorflow/compiler/xla/extension/libxla_extension.so

@josevalim
Contributor

@dannote i fixed the exla issue here: elixir-nx/nx@ff619a8

@dannote

dannote commented Sep 23, 2021

@jonatanklosko Yes, that worked. I tried the code above in XLA_BUILD=true iex -S mix and ended up with:

19:35:19.989 [warn] domain=elixir.xla file=tensorflow/core/platform/profile_utils/cpu_utils.cc line=128   Failed to get CPU frequency: 0 Hz
 
19:35:20.018 [info] domain=elixir.xla file=tensorflow/compiler/xla/service/service.cc line=171   XLA service 0x15a81cfb0 initialized for platform Host (this does not guarantee that XLA will be used). Devices:

19:35:20.018 [info] domain=elixir.xla file=tensorflow/compiler/xla/service/service.cc line=179     StreamExecutor device (0): Host, Default Version

@jonatanklosko
Member Author

Absolutely beautiful!! Can you also try mix test in the exla repo, just so we know where we stand?

@dannote

dannote commented Sep 23, 2021

$ XLA_BUILD=true mix test

Generated exla app
To run multi-device tests, set XLA_FLAGS=--xla_force_host_platform_device_count=8

19:45:32.677 [warn] domain=elixir.xla file=tensorflow/core/platform/profile_utils/cpu_utils.cc line=128   Failed to get CPU frequency: 0 Hz

19:45:32.679 [info] domain=elixir.xla file=tensorflow/compiler/xla/service/service.cc line=171   XLA service 0x131f05d80 initialized for platform Host (this does not guarantee that XLA will be used). Devices:

19:45:32.679 [info] domain=elixir.xla file=tensorflow/compiler/xla/service/service.cc line=179     StreamExecutor device (0): Host, Default Version
Excluding tags: [:platform, :multi_device]
Including tags: [platform: :host]

....................................................................................................................................................................................................................................................................................................................

Finished in 7.0 seconds (7.0s async, 0.00s sync)
1 doctest, 308 tests, 0 failures, 1 excluded

Randomized with seed 684543

FYI it will fail without XLA_BUILD=true with the following error:

** (RuntimeError) none of the precompiled archives matches your target
  Expected:
    * xla_extension-aarch64-darwin-cpu.tar.gz
  Found:
    * xla_extension-x86_64-darwin-cpu.tar.gz
    * xla_extension-x86_64-linux-cpu.tar.gz
    * xla_extension-x86_64-linux-cuda102.tar.gz
    * xla_extension-x86_64-linux-cuda110.tar.gz
    * xla_extension-x86_64-linux-cuda111.tar.gz
    * xla_extension-x86_64-linux-tpu.tar.gz
    * xla_extension-x86_64-windows-cpu.tar.gz

You can compile XLA locally by setting an environment variable: XLA_BUILD=true
    (xla 0.1.1) lib/xla.ex:162: XLA.download_matching!/1
    (xla 0.1.1) lib/xla.ex:32: XLA.archive_path!/0
    mix.exs:69: EXLA.MixProject.compile/1
    (mix 1.12.0) lib/mix/task.ex:458: Mix.Task.run_alias/5
    (mix 1.12.0) lib/mix/tasks/compile.all.ex:90: Mix.Tasks.Compile.All.run_compiler/2
    (mix 1.12.0) lib/mix/tasks/compile.all.ex:70: Mix.Tasks.Compile.All.compile/4
    (mix 1.12.0) lib/mix/tasks/compile.all.ex:57: Mix.Tasks.Compile.All.with_logger_app/2
    (mix 1.12.0) lib/mix/tasks/compile.all.ex:35: Mix.Tasks.Compile.All.run/1

@jonatanklosko
Member Author

jonatanklosko commented Sep 23, 2021

Amazing, looks like bumping tensorflow is smooth this time!

> FYI it will fail without XLA_BUILD=true with the following error:

Yup, that's expected. We always reflect the current environment variables, and if XLA_BUILD is not set the behaviour is to look for a precompiled binary online (which is not available in this case). You'll want to export XLA_BUILD=true in a config file like .bashrc, so it's always in place :)
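The lookup can be sketched roughly like this (filenames taken from the error output above; the real matching lives in `XLA.download_matching!/1`, so treat this as an illustration, not the actual implementation):

```shell
# Rough sketch of the precompiled-archive lookup: build the expected filename
# from the target triple and fall back to a source build when it is missing.
target="aarch64-darwin-cpu"
available="xla_extension-x86_64-darwin-cpu.tar.gz xla_extension-x86_64-linux-cpu.tar.gz"
expected="xla_extension-${target}.tar.gz"
case " $available " in
  *" $expected "*) echo "downloading $expected" ;;
  *)               echo "no precompiled archive; set XLA_BUILD=true" ;;
esac
```

Exporting XLA_BUILD=true from your shell config, as suggested above, makes the fallback branch the default on this machine.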


I will publish the new version so the archives can build, then we can verify it works on Linux too, and then bump the dependency on the exla side!

Thanks for all the help @dannote ❤️

@jonatanklosko jonatanklosko merged commit 903e349 into main Sep 23, 2021
@jonatanklosko jonatanklosko deleted the jk-m1 branch September 23, 2021 16:53
@dannote

dannote commented Sep 23, 2021

Great! 🍾 Thank you, too!

@zacky1972

I got the following error:

18:06:11.511 [warn]  The on_load function for module Elixir.EXLA.NIF returned:
{:error,
 {:load_failed,
  'Failed to load NIF library: \'dlopen(/Users/zacky/github/nx_nif/_build/dev/lib/exla/priv/libexla.so, 2): Symbol not found: __ZN3xla12EqTotalOrderENS_5XlaOpES0_N4absl12lts_202103244SpanIKxEE\n  Referenced from: /Users/zacky/github/nx_nif/_build/dev/lib/exla/priv/libexla.so\n  Expected in: flat namespace\n in /Users/zacky/github/nx_nif/_build/dev/lib/exla/priv/libexla.so\''}}

I tested the file type of libexla.so:

% file _build/dev/lib/exla/priv/libexla.so 
_build/dev/lib/exla/priv/libexla.so: Mach-O 64-bit dynamically linked shared library arm64

@zacky1972

Warnings are here:

In file included from c_src/exla/exla.cc:5:
c_src/exla/exla_log_sink.h:63:12: warning: variable 'msg' is used uninitialized whenever switch case is taken [-Wsometimes-uninitialized]
      case absl::LogSeverity::kFatal:
           ^~~~~~~~~~~~~~~~~~~~~~~~~
c_src/exla/exla_log_sink.h:74:39: note: uninitialized use occurs here
    enif_send(env_, &sink_pid_, NULL, msg);
                                      ^~~
c_src/exla/exla_log_sink.h:49:21: note: initialize the variable 'msg' to silence this warning
    ERL_NIF_TERM msg;
                    ^
                     = 0
In file included from c_src/exla/exla.cc:4:
c_src/exla/exla_client.h:29:20: warning: private field 'device_' is not used [-Wunused-private-field]
  xla::PjRtDevice* device_;
                   ^
c_src/exla/exla_client.h:30:15: warning: private field 'client_' is not used [-Wunused-private-field]
  ExlaClient* client_;
              ^
3 warnings generated.
c_src/exla/exla_client.cc:21:3: warning: ignoring return value of function declared with 'warn_unused_result' attribute [-Wunused-result]
  buffer_->BlockHostUntilReady();
  ^~~~~~~~~~~~~~~~~~~~~~~~~~~~
c_src/exla/exla_client.cc:112:54: warning: field 'fingerprint_' will be initialized after field 'client_' [-Wreorder-ctor]
                                                     fingerprint_(std::move(fingerprint)),
                                                     ^
In file included from c_src/exla/exla_client.cc:1:
c_src/exla/exla_client.h:29:20: warning: private field 'device_' is not used [-Wunused-private-field]
  xla::PjRtDevice* device_;
                   ^
c_src/exla/exla_client.h:30:15: warning: private field 'client_' is not used [-Wunused-private-field]
  ExlaClient* client_;
              ^
4 warnings generated.

@jonatanklosko
Member Author

@zacky1972 just to be clear, are you specifying any XLA_TARGET or compiling just with XLA_BUILD=true?

@zacky1972

zacky1972 commented Sep 24, 2021

Compiling just with XLA_BUILD=true.
Should I set XLA_TARGET?

@jonatanklosko
Member Author

Should I set XLA_TARGET?

No, I was just wondering if you used any and that could be the issue :)

@atomkirk

atomkirk commented Aug 3, 2023

I'm running this on an M1 Max:

Mix.install([
  {:axon, "~> 0.5.1"},
  {:exla, "~> 0.5.3"},
  {:nx, "~> 0.5.3"},
  {:req, "~> 0.3.1"}
])
XLA_BUILD=true elixir lstm.ex

and got:

Loading: 
Loading: 
Loading: 
Loading: 
Loading: 
Loading: 
Loading: 
Loading: 
Loading: 
Loading: 0 packages loaded
ERROR: /Users/adamkirk/.cache/xla_extension/tf-d5b57ca93e506df258271ea00fc29cf98383a374/tensorflow/compiler/xla/extension/BUILD:134:8: //tensorflow/compiler/xla/extension:xla_extension: no such attribute 'extension' in '_real_pkg_tar' rule
ERROR: /Users/adamkirk/.cache/xla_extension/tf-d5b57ca93e506df258271ea00fc29cf98383a374/tensorflow/compiler/xla/extension/BUILD:134:8: //tensorflow/compiler/xla/extension:xla_extension: no such attribute 'mode' in '_real_pkg_tar' rule
ERROR: Skipping '//tensorflow/compiler/xla/extension:xla_extension': Error evaluating '//tensorflow/compiler/xla/extension:xla_extension': error loading package 'tensorflow/compiler/xla/extension': Package 'tensorflow/compiler/xla/extension' contains errors
WARNING: Target pattern parsing failed.
ERROR: Error evaluating '//tensorflow/compiler/xla/extension:xla_extension': error loading package 'tensorflow/compiler/xla/extension': Package 'tensorflow/compiler/xla/extension' contains errors
INFO: Elapsed time: 49.444s
INFO: 0 processes.
FAILED: Build did NOT complete successfully (1 packages loaded)
make: *** [/Users/adamkirk/Library/Caches/xla/0.4.4/cache/build/xla_extension-aarch64-darwin-cpu.tar.gz] Error 1
could not compile dependency :xla, "mix compile" failed. Errors may have been logged above. You can recompile this dependency with "mix deps.compile xla", update it with "mix deps.update xla" or clean it with "mix deps.clean xla"
** (Mix.Error) Could not compile with "make" (exit status: 2).
You need to have gcc and make installed. Try running the
commands "gcc --version" and / or "make --version". If these programs
are not installed, you will be prompted to install them.

    (mix 1.14.2) lib/mix.ex:513: Mix.raise/2
    (elixir_make 0.7.7) lib/elixir_make/compiler.ex:53: ElixirMake.Compiler.compile/1
    (mix 1.14.2) lib/mix/task.ex:421: anonymous fn/3 in Mix.Task.run_task/4
    (mix 1.14.2) lib/mix/tasks/compile.all.ex:92: Mix.Tasks.Compile.All.run_compiler/2
    (mix 1.14.2) lib/mix/tasks/compile.all.ex:72: Mix.Tasks.Compile.All.compile/4
    (mix 1.14.2) lib/mix/tasks/compile.all.ex:59: Mix.Tasks.Compile.All.with_logger_app/2
    (mix 1.14.2) lib/mix/tasks/compile.all.ex:33: Mix.Tasks.Compile.All.run/1
    (mix 1.14.2) lib/mix/task.ex:421: anonymous fn/3 in Mix.Task.run_task/4

I'm having trouble understanding from the issues if my machine should be supported…

@jonatanklosko
Member Author

@atomkirk is there a reason you set XLA_BUILD=true? We now have a precompiled binary compatible with M1 that should be downloaded automatically.

If you indeed intend to build from source, then see this section and make sure you have the right environment, in particular the Bazel version.

@atomkirk

atomkirk commented Aug 3, 2023

Yes, it installs fine if I don't set XLA_BUILD=true, but Axon doesn't appear to be using my GPU. Something in one of these issues made me think I needed to compile it myself.

Maybe this PR is required before it'll actually use the GPU? elixir-nx/nx#1247

@jonatanklosko
Member Author

If you mean the M1 GPU then yeah it's not supported yet, but that PR should make it happen :)

@josevalim
Contributor

@atomkirk when we can support Metal it will just happen behind the scenes; all you will need is a mix deps.update exla. For now you have to wait :)

@atomkirk

atomkirk commented Aug 3, 2023

Ok, I ordered a jetson nano just now so I can keep learning. 🎉 Thanks!

@atomkirk

atomkirk commented Aug 6, 2023

For anyone finding this via Google and thinking of doing the same: the Jetson Nano only supports up to Ubuntu 18, and XLA dropped support for it, so it's a dead end.
