run: handle relabeling bind mounts ourselves #6132
Conversation
[APPROVALNOTIFIER] This PR is APPROVED. This pull request has been approved by: nalind. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files.
6c3b52f to fe8b66d
Handle requested relabeling of bind mounts (i.e., the "z" and "Z" flags) directly, instead of letting the runtime handle the relabeling. Signed-off-by: Nalin Dahyabhai <[email protected]>
switch spec.Mounts[i].Type {
default:
	continue
case "bind", "rbind":
	// all good, keep going
}
Suggested change:
-switch spec.Mounts[i].Type {
-default:
-	continue
-case "bind", "rbind":
-	// all good, keep going
-}
+if spec.Mounts[i].Type == "bind" || spec.Mounts[i].Type == "rbind" {
Do you think an `if` is easier to read? We are only handling one case.
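To make the shape of this check concrete, here is a small, self-contained sketch of the bind-mount filtering being discussed. The `Mount` struct and the `wantsRelabel` helper are hypothetical stand-ins for `spec.Mounts[i]` and the PR's actual logic; the sketch also shows how the "z" (shared) and "Z" (exclusive) relabel flags might be detected once a mount passes the type check.

```go
package main

import "fmt"

// Mount mirrors the shape of an OCI spec mount entry (type plus options).
// It is a hypothetical stand-in for spec.Mounts[i] in the diff above.
type Mount struct {
	Type    string
	Options []string
}

// wantsRelabel reports whether a mount is a bind mount that requested
// relabeling, and whether the requested label is shared ("z") rather
// than exclusive ("Z"). Non-bind mounts are skipped, matching the
// switch/continue logic in the diff.
func wantsRelabel(m Mount) (relabel, shared bool) {
	if m.Type != "bind" && m.Type != "rbind" {
		return false, false
	}
	for _, opt := range m.Options {
		switch opt {
		case "z":
			return true, true
		case "Z":
			return true, false
		}
	}
	return false, false
}

func main() {
	m := Mount{Type: "bind", Options: []string{"rw", "z"}}
	relabel, shared := wantsRelabel(m)
	fmt.Println(relabel, shared) // true true
}
```

Whether this reads better as a `switch` or an `if` is largely taste; the `switch` form makes it easier to add more mount types later, which may be why the PR kept it.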
DISTRO_NV: "${FEDORA_NAME}"
IMAGE_NAME: "${FEDORA_CACHE_IMAGE_NAME}"
STORAGE_DRIVER: 'vfs'
BUILDAH_RUNTIME: runc
RUNTIME_N: " using runc"
We dropped all runc testing in podman not too long ago. It was a huge maintenance headache.
Every image update now has yet another big failure point that causes regressions. As the person who currently manages all the updates, I am strongly against making my life harder with these things.
What failure point do you mean? I wouldn't expect that installing the package, which should already be available, would be more than a one-time change. Beyond that, I'm not clear on what it is that you think I'm asking you to take on here.
Almost every CI image update contains one or more regressions in our dependencies: kernel, systemd, golang, crun, glibc... well, you get the point. By testing with runc, every runc update is now a new potential failure point that blocks image updates and requires me (or whoever does the image update) to investigate the test failure, report the bug, and/or add a workaround or skip the test. Can that be done? Sure, but we decided over a year ago that we no longer want to test runc (containers/podman#22706), and I, at least, am not interested in doing a ton of extra work for it.
Then those can be marked as not required for the "Total Success" check. But the idea of fixing this bug that runc exposes, and then not taking available steps to help ensure that we won't encounter something similar again, doesn't sit well with me.
Sure, that would not block the image update, but then you have a permanently red check until someone takes the time to fix it. My general experience with such things has not been good at all, as people just quickly ignore it. And overall it does not remove the time someone has to invest in the fix.
So to me the question is: do we want to spend time dealing with runc issues or not? As I remember the discussion a year ago, the answer was no. If you think we should, then we likely need to discuss that as a team again.
We could have caught this bug earlier. If we need to have a conversation about that, then I think we should have the conversation.
PR needs rebase. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
What type of PR is this?
/kind bug
What this PR does / why we need it:
Handle requested relabeling of bind mounts (i.e., the "z" and "Z" flags) directly, instead of letting the runtime handle the relabeling.
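One plausible way to sketch "handling the relabeling ourselves" is to consume the "z"/"Z" flags before the mount spec reaches the runtime, so the runtime no longer sees a relabel request. The helper below is hypothetical (the PR's real code would additionally apply the SELinux label to the host path, e.g. via the opencontainers/selinux `label.Relabel` API); this sketch only shows the option-stripping half.

```go
package main

import "fmt"

// stripRelabelOptions removes the "z"/"Z" flags from a bind mount's
// option list, returning the cleaned options and the flag that was
// found ("" if none). Once the caller has relabeled the source path
// itself, dropping the flag keeps the runtime from repeating (or
// mishandling) the work. Hypothetical helper, not the PR's actual code.
func stripRelabelOptions(opts []string) (cleaned []string, flag string) {
	for _, o := range opts {
		if o == "z" || o == "Z" {
			flag = o
			continue
		}
		cleaned = append(cleaned, o)
	}
	return cleaned, flag
}

func main() {
	cleaned, flag := stripRelabelOptions([]string{"rw", "Z", "nosuid"})
	fmt.Println(cleaned, flag) // [rw nosuid] Z
}
```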
How to verify it
We now run integration tests using both crun and runc on Fedora, which exercises their relabeling logic, or the lack thereof.
Which issue(s) this PR fixes:
Fixes #6071.
Special notes for your reviewer:
Does this PR introduce a user-facing change?