ERRO[0025] unlinkat /var/tmp/buildah2410054376/mounts3022885724/bind626918239: device or resource busy #5988
Poking through the debug log and the code, I'm thinking perhaps this problem is stemming from within containers/storage, based on […]
Stumbled across what appears to be the same issue in a build (also using the VFS storage driver). To me it seems the problem starts to appear with buildah version 1.37.6; this is with […], and everything works as expected with […].
Interesting, and thanks for providing details. Knowing this behavior crept in via a patch release is actually really helpful. I just checked, and it was 1.37.5 that fixed the issue for me, which makes sense based on your experience. Checking the git history, there are only 17 commits between 1.37.5 and 1.37.6. Of these, almost half are merge or changelog update commits. So that narrows things down quite a bit!
Based on the string […], there are several conditionals that would all emit a similar message, but I think this is coming from the 4th one, dealing with a failure from `convertToOverlay()`:

```go
func GetBindMount(...cut...
	...cut...
	overlayDir := ""
	if mountedImage != "" || mountIsReadWrite(newMount) {
		if newMount, overlayDir, err = convertToOverlay(newMount, store, mountLabel, tmpDir, 0, 0); err != nil {
			return newMount, "", "", "", err
		}
	}
	succeeded = true
	return newMount, mountedImage, intermediateMount, overlayDir, nil
}
```
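As a rough illustration of the condition being discussed (this is a hypothetical stand-in, not buildah's actual implementation), a check like `mountIsReadWrite` presumably scans the mount's options and treats the mount as writable unless `ro` is present:

```go
package main

import "fmt"

// mountIsReadWriteSketch is a hypothetical analogue of the check in
// GetBindMount above: a bind mount is considered read-write unless its
// option list explicitly includes "ro".
func mountIsReadWriteSketch(options []string) bool {
	for _, opt := range options {
		if opt == "ro" {
			return false
		}
	}
	// No explicit "ro" option: default to read-write.
	return true
}

func main() {
	fmt.Println(mountIsReadWriteSketch([]string{"bind"}))       // rw: overlay conversion would trigger
	fmt.Println(mountIsReadWriteSketch([]string{"bind", "ro"})) // ro: conversion would be skipped
}
```

Under that reading, any rw bind mount (the default when no `ro` option is given) would take the `convertToOverlay` path, which matches the reports in this thread that adding `ro` avoids the failure.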
Didn't get to look at the details of the commit, but it sounds very plausible to me. At least I can confirm that removing the […]
And I noticed that I may have shortened the output in my earlier comment a bit much, so in case it could be helpful, the apparently problematic line in my build is: […]
All good data points, thanks again for sharing. For VFS I don't think it matters whether the source is another stage or within the context dir; both should just resolve to directories on the "host" side. SELinux could be to blame; however, the way I was reproducing it, nested within […]
Something interesting one of my colleagues noticed: if you do a […]. By my reading of […]
@nalind I think we need your expert eyes on this; it also affects […]
Read-write bind mounts get converted into overlays to match the expectation that writes to them get discarded. This was part of the patch set that we backported to multiple branches for CVE-2024-11218.
Thanks for taking a look at this Nalin, I appreciate it. So it is as I/we feared: overlay is being forced. My understanding/belief is this would also be reproducible if the VFS driver was configured in […]. As a fix, is it possible to detect if VFS is being used in […]?
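In spirit, the fix being asked about here could look something like the following sketch. This is purely illustrative — a hypothetical guard function, not buildah's real code or its eventual fix — showing the idea of skipping overlay conversion when the graph driver reports `vfs`:

```go
package main

import "fmt"

// needsOverlayConversion is a hypothetical guard sketching the idea
// floated in this thread: only convert a bind mount into an overlay
// when the graph driver can actually back an overlay mount.
func needsOverlayConversion(driver string, mountedImage string, readWrite bool) bool {
	if driver == "vfs" {
		// vfs cannot provide overlay mounts; fall back to a plain bind
		// (accepting that writes would then not be discarded).
		return false
	}
	// Otherwise, mirror the quoted condition: convert when the mount is
	// backed by an image or is read-write.
	return mountedImage != "" || readWrite
}

func main() {
	fmt.Println(needsOverlayConversion("overlay", "", true)) // true
	fmt.Println(needsOverlayConversion("vfs", "", true))     // false
}
```

Note the trade-off baked into the sketch: under VFS, skipping conversion would mean writes to the bind mount are no longer discarded, so a real fix would have to decide whether that behavior change is acceptable for CVE-2024-11218 purposes.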
Following this, as we are running into this issue as well.
Just for the record, I discovered Konflux has a fork in https://github.com/konflux-ci/buildah-container/ - notice the buildah submodule there is ~4 months old at this time.
Don't we want to encourage people to provide an "emptydir" (in kube terms), i.e. a transient non-overlayfs volume? Or honestly use […]
This is perfectly valid and I agree this is probably a better way to run nested builds. However, two things: […]
Yes I agree (though I'm not the one writing the patches for this, so it's easy to do 😄). One observation I would have: I think few of us have nested builds near the top of mind; it's certainly not in my day-to-day usage. But probably one thing that would make sense (tying in with my comment above) is to do "reverse dependency testing" by running the Konflux buildah task against proposed updates to buildah. The Konflux buildah task is a beast, but it is how many things get built for production, so we certainly need it to continue to work.
This is a really good suggestion. While Konflux may be the eventual destination, there's no reason why the current tests couldn't have caught this. The suite runs chroot tests and it runs VFS tests; it must simply be missing a test that tries to rw-mount from a previous stage. Edit: There's possibly a secondary avenue as well - […]
Ref: containers/buildah#5988 Having this test in the AIO build is merely a convenience, as it will exercise both buildah and podman packages as they appear in their respective purpose-built images. Signed-off-by: Chris Evich <[email protected]>
A friendly reminder that this issue had no activity for 30 days.
X-ref: #6126
When building inside a rootless container using buildah's `vfs` storage driver and `chroot` isolation (as is very often done to build images in CI environments), specifying read/write bind volumes from other stages results in an error. This behavior does not reproduce using buildah 1.37 or earlier. Also verified this same behavior using a vanilla `registry.fedoraproject.org/fedora-minimal` image + `dnf5 install buildah`. That is to say, I think it's a buildah problem, not a buildah image problem.

Reproduction (host) environment:

- `quay.io/buildah/upstream:latest` container image (`buildah version 1.40.0-dev (image-spec 1.1.0, runtime-spec 1.2.0)`)
- `quay.io/buildah/stable:v1.38` container image
- `quay.io/buildah/stable:v1.37` container image

Steps to reproduce:

1. Place a `Containerfile` somewhere in the user's home dir
2. Run `podman run -it --rm -v ./Containerfile:/root/Containerfile:ro,Z quay.io/buildah/stable:v1.38 buildah --storage-driver=vfs build --isolation=chroot /root`
3. Repeat with `quay.io/buildah/stable:v1.37` (or any other earlier version)

Unexpected results: […]

Expected results (from v1.37): […]

Debug output from v1.38 is below (v1.40.0-dev output is substantially similar):
buildah_v1.38_debug.log.txt

Note: Also attempted with the following `Containerfile`, with similar results: […]