
Bump up XEN version to 4.18.0 #3534

Closed
wants to merge 2 commits

Conversation

rucoder
Contributor

@rucoder rucoder commented Oct 26, 2023

Highlights:

  • QEMU 8.0.4
  • libgcc instead of musl

TODO:

  • port CPU pinning patches (@OhmSpectator is working on it)
  • Some patches have to be cleaned up: commit messages need to be rewritten based on the original PR descriptions, since the original patch files were not generated with git format-patch and lack the required information
  • Update the unit tests to match the new domain config file
  • Test XEN on QEMU and HW
  • Test KVM on QEMU and HW
  • Test whether apk add ninja-build does the trick instead of building ninja from source
    • ninja-build is not real ninja, but 'samurai' works fine
  • XEN 4.18.0-rc5 is out. Up-merge to it
  • XEN 4.18 is finally out. Up-merge to it
  • Investigate musl vs glibc

@eriknordmark Looking at the XEN release page at https://downloads.xenproject.org/release/xen/, I would expect XEN 4.18.0 in December this year.

@deitch
Contributor

deitch commented Oct 26, 2023

> I had to build ninja from source since Alpine dropped support for it in 3.8

It looks like it has support. apk add ninja-build (plus some weird compatibility stuff with apk add ninja-is-really-ninja). Does that not work?

@rucoder
Contributor Author

rucoder commented Oct 26, 2023

> > I had to build ninja from source since Alpine dropped support for it in 3.8
>
> It looks like it has support. apk add ninja-build (plus some weird compatibility stuff with apk add ninja-is-really-ninja). Does that not work?

oh, thanks for finding it! I'll give it a try.

@@ -20,7 +20,7 @@ RUN [ -f "$(basename ${XEN_SOURCE})" ] || tar --absolute-names -xz < /xen.tar.gz
WORKDIR /xen/xen
COPY *.patch arch /tmp/
RUN cp /tmp/"$(uname -m)"/*.patch /tmp/
RUN for p in /tmp/*.patch ; do patch -p1 < "$p" || exit 1 ; done
#RUN for p in /tmp/*.patch ; do patch -p1 < "$p" || exit 1 ; done
Contributor

Should we instead delete the content of the patch directories? Or will we need some of them?

Member

We still need them, but it's just a temporary workaround to avoid applying the not-yet-ported patches.

Member

(it should be converted to a draft PR)

@@ -108,8 +103,6 @@ const qemuConfTemplate = `# This file is automatically generated by domainmgr
driver = "intel-iommu"
caching-mode = "on"
{{ end }}
[realtime]
Contributor

According to QEMU's documentation:

The -realtime mlock=on|off argument has been replaced by the -overcommit mem-lock=on|off argument.

Even if mem-lock=off is the default, it can make sense to leave this option explicit in the config, or at least explain the removal in your commit message.

Contributor

Essentially, this option enables or disables the mlock() call (https://www.man7.org/linux/man-pages/man2/mlock.2.html). This is probably workload-specific.
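For context, the change in the -readconfig-style template would look roughly like this (a sketch only; it assumes QEMU's usual mapping of command-line option groups to config-file sections):

```ini
# Old form, removed from newer QEMU along with -realtime mlock=on|off:
#   [realtime]
#     mlock = "off"

# Replacement per QEMU's deprecation notes (-overcommit mem-lock=on|off):
[overcommit]
  mem-lock = "off"
```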

@@ -54,12 +54,7 @@ const qemuConfTemplate = `# This file is automatically generated by domainmgr
[machine]
type = "{{.Machine}}"
dump-guest-core = "off"
{{- if .DomainStatus.CPUs }}
Contributor

@rene rene Oct 27, 2023

Why are these being removed? I think it will affect the unit tests, so you need to update them as well (same for the realtime config). The removal reason should also be explained in the commit message.

Edit: I see... you didn't port @OhmSpectator's patches yet.

ENV PKGS_arm64 libfdt

RUN eve-alpine-deploy.sh

ADD https://github.com/ninja-build/ninja.git#release /ninja
WORKDIR /ninja
RUN python3 ./configure.py --bootstrap
Contributor

Do we really need all these layers? Is this build too heavy? I think you could squash all these commands into a single RUN.

Contributor Author

@rene QEMU now uses ninja. I couldn't find an APK for it in Alpine, so it was faster to build it from source. Now @deitch pointed out that it is called ninja-build. I still need to give it a try.

Contributor

@rene Are you sure intermediate layers are stored? It really makes it less readable to have these long blocks of RUN all connected with &&

Contributor

Yes @christoph-zededa, that's the definition of the RUN command in Docker. @rucoder, I understand that you need to build ninja, that's fine; my point is that you can just group the commands (using &&) instead of using multiple RUN commands, which creates one layer per command.
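As a sketch of the suggested squashing (the install step is an assumption for illustration, not taken from the PR):

```dockerfile
# Fetch the ninja sources (BuildKit git ADD, as in the PR), then bootstrap
# and install in a single RUN so only one image layer is produced
ADD https://github.com/ninja-build/ninja.git#release /ninja
WORKDIR /ninja
RUN python3 ./configure.py --bootstrap && \
    cp ./ninja /usr/local/bin/ninja
```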

Contributor

@rene since Docker is using BuildKit, I have not seen them.

@eriknordmark eriknordmark marked this pull request as draft October 27, 2023 11:29
@rucoder
Contributor Author

rucoder commented Oct 27, 2023

Just to clarify: this is a WIP and very much a draft PR for experimenting with the latest XEN/QEMU, so I can share my work with other people. I've updated the description.

@OhmSpectator
Member

The new QEMU version natively supports CPU Pinning options, making the CPU pinning patches unnecessary. Here is the branch that adapts Pillar to use these options:
https://github.com/OhmSpectator/eve/tree/feature/use-native-qemu-cpu-affinity
It's done on top of this PR.

@rucoder
Contributor Author

rucoder commented Oct 29, 2023

> > I had to build ninja from source since Alpine dropped support for it in 3.8
>
> It looks like it has support. apk add ninja-build (plus some weird compatibility stuff with apk add ninja-is-really-ninja). Does that not work?

@deitch ninja-build is not ninja :) but the samurai package does the trick
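For reference, the two approaches discussed could be compared like this (a sketch; the package behavior is as reported in the thread, not verified against a specific Alpine release):

```dockerfile
# ninja-build: Alpine's package of that name, which per the thread is not
# the "real" ninja that QEMU's meson-based build expects
# RUN apk add --no-cache ninja-build

# samurai: a ninja-compatible build tool that "does the trick" per the thread,
# avoiding the source bootstrap entirely
RUN apk add --no-cache samurai
```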

- I had to build ninja from source since Alpine dropped support for it in 3.8

Highlights:
- QEMU 8.0.4
- libgcc instead of musl

Signed-off-by: Mikhail Malyshev <[email protected]>
@rucoder rucoder changed the title Bump up XEN version to 4.18.0-rc4 Bump up XEN version to 4.18.0 Dec 8, 2023
@rouming
Contributor

rouming commented Jul 15, 2024

@rucoder still plan to work on that? or can be closed?

@rucoder
Contributor Author

rucoder commented Jul 15, 2024

> @rucoder still plan to work on that? or can be closed?

@rouming eventually... a lot of work was done, so I'd like to finish it

@rouming
Contributor

rouming commented Oct 1, 2024

Closing this in favor of the already merged #4186

@rouming rouming closed this Oct 1, 2024
8 participants