Added interactive installer package in pkg/installer #4055
base: master
Conversation
jsfakian
commented
Jul 4, 2024
- This patch introduces a new interactive installation mode for EVE-OS. It allows users to choose the installation parameters in a text-based interface, which is more user-friendly than editing grub.cfg.
- We store the installation parameters in a JSON file (installer.json), which we can use for bulk installations with similar configurations.
Dear all, this is a huge PR; unfortunately, I could not split it into parts. Please ping me with any questions or concerns.
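For illustration, the installer.json described above might look roughly like this. The keys shown are hypothetical, modeled on EVE's existing grub install parameters; they are not taken from this PR:

```json
{
  "eve_install_disk": "/dev/sda",
  "eve_persist_disk": "/dev/sdb",
  "eve_install_server": "zedcloud.example.com"
}
```

The point of the file is that a first interactive run records the answers, and subsequent bulk installs can reuse the same file unattended.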
Force-pushed from 9fb1f16 to 7ad44db.
# change path to the "002-interactive-installer" in "pkg/mkimage-raw-efi/config"
- name: interactive-installer
  image: INTERACTIVE_INSTALLER_TAG
  command: ["/bin/sh", "-c", "/sbin/run-installer.sh </dev/console >/dev/console 2>&1"]
I don't quite get how this works. Does this mean the interactive installer will run on every single boot? This is in the rootfs.yml.in, which makes it part of every rootfs in the onboot serial path. Is that intended? Don't we usually have distinct installer images and running rootfs images?
We want this because we will include a maintenance mode afterwards. So, we need to have it running for each boot.
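A hypothetical sketch of how a run-on-every-boot installer could gate its interactive TUI on the kernel command line, so that normal boots skip it and only install/maintenance boots enter it. The parameter names and the function are illustrative, not taken from the PR:

```shell
# Decide what the installer service should do based on the kernel cmdline
# (in real use, the argument would be "$(cat /proc/cmdline)").
# "eve_install" / "eve_maintenance" are made-up marker parameters.
boot_mode() {
    case "$1" in
        *eve_install*|*eve_maintenance*) echo interactive ;;
        *) echo normal ;;
    esac
}

boot_mode "console=tty0 eve_install"    # prints interactive
boot_mode "console=tty0 root=/dev/sda"  # prints normal
```

With a gate like this, shipping the container in every rootfs is cheap: on a normal boot it exits immediately.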
# FIXME: 002-installer? why not linuxkit build?
WORKDIR /eco/media/root-rw/root/containers/onboot/002-installer
# FIXME: 003-installer? why not linuxkit build?
WORKDIR /eco/media/root-rw/root/containers/onboot/003-installer
Oy, this is hideous. And not at all the fault of this PR. Why do we do this? I will open a separate issue to address this.
Oy, this is hideous. And not at all the fault of this PR. Why do we do this? I will open a separate issue to address this.
Because of cross-container dependencies, and linuxkit doesn't provide a way to do it better. (I do not remember the exact dependency, though.)
I opened #4056 to track it. I hope that Roman (referenced there) knows and we can come up with a cleaner solution.
pkg/grub/rootfs-installer.cfg
Outdated
set_global dom0_console "console=tty0 console=hvc0 console=ttymxc2,115200 earlycon=ec_imx6q,0x30880000,115200"
fi
fi
# Nvidia's Jetson Xavier NX developer kit (variant identified by the Asset-Tag)
All the custom configs here are already defined in the regular rootfs.cfg file. You can move them to a common config file and include it from both rootfs.cfg and rootfs-installer.cfg, so we don't have duplicated definitions, which are really hard to maintain.
I think he addressed this.
We cannot use GPL code in the installer, only permissively licensed code compatible with Apache 2.0.
continue
fi
key=$(echo "$key" | tr '[:upper:]' '[:lower:]')
assoc_array["$key"]="$value"
Using arrays here and in other places implies bash. So far we have based all of our scripts on sh (we don't even install bash). Including bash is not a big deal, but in this case you could just store the key pairs KEY=VALUE in a file and then use a small regex to parse that file in the code below, at least while full handling of config files is not implemented in the installer.
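The sh-only alternative suggested here might look roughly like this: keep KEY=VALUE pairs in a flat file instead of a bash associative array, and grep values out on demand. The file name, keys, and helper function are illustrative, not from the PR:

```shell
# Build a throwaway KEY=VALUE config file (stand-in for installer parameters).
conf=$(mktemp)
cat > "$conf" <<'EOF'
EVE_INSTALL_DISK=/dev/sda
EVE_PERSIST_DISK=/dev/sdb
EOF

# Look up a value by key: lowercase the requested key for a canonical form,
# then match case-insensitively and strip everything up to the first '='.
get_param() {
    key=$(echo "$1" | tr '[:upper:]' '[:lower:]')
    grep -i "^${key}=" "$conf" | sed 's/^[^=]*=//'
}

get_param eve_install_disk   # prints /dev/sda
```

This is plain POSIX sh, so it works in the existing busybox environment without pulling in bash.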
pkg/interactive-installer/Dockerfile
Outdated
RUN eve-alpine-deploy.sh
ENV RUSTUP_HOME=/usr/local
ENV CARGO_HOME=/usr/local/cargo
RUN curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh -s -- -y --default-toolchain none
There is really no way to fetch the binary packages with ADD?
We try not to allow network access for containers during builds, as much as possible. Quoting the explanation from @eriknordmark:
We build as much as possible that way to ensure that any source is done using ADD commands so that they make it into the SPDX SBoM file.
If networking is enabled then someone might later add some curl/wget/git pull way to add source which would not be included in the SBoM.
Original thread: #3916 (comment)
Oh yeah, I missed this one. Nice catch, @rene, and thank you.
I will add it in the build.yml.
Added.
If possible, we should just have rust installed as part of eve/alpine.
If we cannot do it as part of eve/alpine, then we should do something like:
ADD https://static.rust-lang.org/dist/rust-1.79.0-x86_64-unknown-linux-musl.tar.xz
RUN # do the install command here based on https://forge.rust-lang.org/infra/other-installation-methods.html#standalone-installers
That both uses ADD and points to a very specific release; we cannot have the general curl | sh, which is non-reproducible.
FYI, ADD works even with network: no.
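A minimal sketch of what that pinned-ADD approach could look like in the Dockerfile, assuming the standalone-installer layout from the rust-lang page referenced above (the extraction path and install prefix are assumptions, not what the PR ultimately used):

```dockerfile
FROM lfedge/eve-alpine:9fb9b9cbf7d90066a70e4704d04a6fe248ff52bb AS build
# Pinned tarball fetched via ADD so it shows up in the SPDX SBoM;
# no network access is needed at RUN time.
ADD https://static.rust-lang.org/dist/rust-1.79.0-x86_64-unknown-linux-musl.tar.xz /tmp/rust.tar.xz
RUN tar -C /tmp -xf /tmp/rust.tar.xz \
 && /tmp/rust-1.79.0-x86_64-unknown-linux-musl/install.sh --prefix=/usr/local \
 && rm -rf /tmp/rust*
```

Unlike `curl | sh`, this pins one exact release, so the build is reproducible and the source is traceable.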
I think this is partially addressed. The network: yes has been addressed; it looks like it all works now with network: no, so thank you for that, @jsfakian.
For the other part, can we avoid having to install rust here explicitly? This may be the first package built on rust, but it will not be the last. We should not have to start tracking down each place we add it.
Can we have a separate PR adding rust to eve-alpine, and then just add it as part of BUILD_PKGS? eve-alpine is currently mostly based on alpine:3.16 (I think someone is looking at updating it), but we have the mechanism to pull in from other repositories, e.g. edge, here.
Let's do this if we can, and not add more technical debt by having each package that needs rust install it.
I opened #4145 to add rust to our base eve-alpine. If it works and goes in quickly, you will be able to just use a more recent version of FROM lfedge/eve-alpine, then add rust to the BUILD_PKGS and have it.
I have already tried that, but the code does not compile. It needs a newer version of Rust. If we upgrade to Alpine 3.20 it will be fine. I can try to change the code to make it work though.
pkg/interactive-installer/Dockerfile
Outdated
WORKDIR /usr/local/my-installer
COPY . .
RUN cargo install --debug --path .
It occurs to me now that if we have network: no (as we should), then rust might be available from alpine packages, or we can use ADD. But that won't solve cargo install, which requires the network. Instead, run cargo vendor and commit the result to git, so you can build offline.
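For reference, `cargo vendor` downloads all dependencies into a local `vendor/` directory and prints the config needed to use it. Committing `vendor/` plus a `.cargo/config.toml` along these lines lets `cargo build --offline` succeed with no network (directory names here match cargo's defaults):

```toml
[source.crates-io]
replace-with = "vendored-sources"

[source.vendored-sources]
directory = "vendor"
```

This is what makes the network: no build setting workable for a rust package.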
How did you solve this? I see network: no in the build.yml, so you must have gotten it to work. Was it the vendor_part_* stuff? What is the process I need to follow if I need to add a new package? Can we document that in the README?
Force-pushed from 7ad44db to e0bc581, then from e0bc581 to a6ad472.
@rene which package is GPL?
pkg/interactive-installer/Dockerfile
Outdated
@@ -0,0 +1,25 @@
# syntax=docker/dockerfile-upstream:1.5.0-rc2-labs
FROM lfedge/eve-alpine:9fb9b9cbf7d90066a70e4704d04a6fe248ff52bb AS build
Suggested change:
FROM lfedge/eve-alpine:1f7685f95a475c6bbe682f0b976f12180b6c8726 AS build
or am I outdated?
Force-pushed from a6ad472 to 83f5fbd.
Force-pushed from e511c15 to 3ea0659.
pkg/interactive-installer/README.md
Outdated
## Description

The EVE-OS interactive installer provides a user-friendly text-based user interface (TUI) to configure the parameters of the installation. The primary goal of this installer mode is to facilitate the installation of EVE-OS on-edge devices while providing flexibility, and automation.
The EVE-OS interactive installer provides a user-friendly environment for installing EVE-OS. Utilizing a TUI, allows users to configure installation parameters with ease. The primary goal of this installer is to simplify the installation process on edge devices while offering flexibility and automation, similar to the installation wizards used for setting up new Wi-Fi routers.
I commented earlier - maybe comment was lost?
You have two paragraphs which are almost identical - please make it into one paragraph.
I misunderstood the first time, thank you.
{
  "destination": "/opt/interactive-installer",
  "type": "bind",
  "source": "/containers/onboot/002-interactive-installer/tmp/upper",
Why tmp/upper and not lower like the other bind mounts?
First, I tried using "lower" as in the other bind mounts, but it did not work. I also need to figure out why. Maybe @deitch has an idea.
This thing really is ugly, isn't it? All of these explicit mounts into specific paths? What is the purpose of mounting it?
but it did not work. I also need to figure out why
Can you describe what it does, what you expected, and how it failed?
I need to access installer.json from the pkg/mkimage-raw-efi container to expose the parameters the user inserted. I searched for the file ("installer.json"), and it is not in /containers/onboot/002-interactive-installer/lower, as I expected; maybe because it is an onboot container, not a service one!
Force-pushed from 3ea0659 to 672b3ea.
ENV PATH "$PATH:/usr/local/cargo/bin"

WORKDIR /usr/local/my-installer
COPY . .
Is this a left-over?
If you are referring to "COPY . ." it is not a leftover. It copies all local files to the container.
But it even copies the Dockerfile into the container, doesn't it? And the implication of that is that if you add something at the end of the Dockerfile, the next build of the container will start from here again.
Also, you copy the vendor directory again at https://github.com/lf-edge/eve/pull/4055/files#diff-bfb0a10960fc27681a077dedc90580981dd639615e9e16de2b6917e942a951b1R33 .
This is an intermediate stage; hence, all unnecessary files will be discarded. However, I changed it to copy only the required files.
Force-pushed from 672b3ea to 5e33297.
pkg/interactive-installer/Dockerfile
Outdated
RUN cat /usr/local/my-installer/vendor_part_* > vendor.tar.gz \
    && tar xf vendor.tar.gz \
    && rm vendor.tar.gz vendor_part_*
Isn't it possible to just use a pipe?
Suggested change:
RUN cat /usr/local/my-installer/vendor_part_* | tar xf -
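The split-then-concatenate pattern behind this suggestion can be demonstrated with a tiny round-trip; the file names here are throwaway stand-ins for the real vendor_part_* chunks:

```shell
# Create a small tarball, split it into chunks, then reassemble and
# extract in one pipe, with no intermediate tarball written to disk.
workdir=$(mktemp -d)
cd "$workdir"
echo "hello" > payload.txt
tar czf vendor.tar.gz payload.txt
split -b 4096 vendor.tar.gz vendor_part_   # mimic the vendor_part_* chunks
rm vendor.tar.gz payload.txt

cat vendor_part_* | tar xzf -              # the piped form suggested above
cat payload.txt                            # prints hello
```

The pipe avoids materializing vendor.tar.gz in a layer, which also keeps the image a little smaller.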
Force-pushed from 629e511 to e057f38.
@eriknordmark, @deitch, @rene, @rucoder, I think I covered most (if not all) of the comments. Should we merge, or do we need more changes?
I added some comments, so I will sum up here.
- README: thank you for updating with how it works. It needs a bit more, primarily anything that isn't clearly obvious, like how you got the vendored dependencies in place.
- rust installation: can we avoid doing that explicitly here? This is the first, but not the last, to require rust; we don't want to create new technical debt where each package installs its own rust.
- interactive installer on each boot: from what I read, the interactive installer will run on each boot of installed EVE, not just installer EVE. I don't think we want that; I think we want it solely on the installer image. But correct me if I'm wrong.
Thank you @deitch.
Force-pushed from e057f38 to 58f97d7, then from 58f97d7 to 4fc1140.
- This patch introduces a new interactive installation mode for EVE-OS. It allows users to choose the installation parameters in a text-based user interface, which is more user-friendly compared to editing grub.cfg. - We store the installation parameters in a JSON file (installer.json), which we can then use for bulk installations with similar configurations. Signed-off-by: Ioannis Sfakianakis <[email protected]>
Force-pushed from 4fc1140 to e57e23e.
@jsfakian, is this PR still in progress? If so, should we mark it as WIP and convert it to a draft?
Hey @OhmSpectator, this PR is still in progress. @deitch is finishing up the separation of the installer and live image. We are debugging now. When it is finished, I will change this PR according to @deitch's PR. We don't need to change anything, IMHO.
Actually, I hope it will become much simpler. It will be entirely self-contained in
#4248 is in. You should be able to rebase on that one. But... you should not have to change anything except what is in If you need to change how