[BUG] Podman REST API returns a list of containers with partial/missing "Mounts" attribute values #24878


Open
D3vil0p3r opened this issue Dec 19, 2024 · 25 comments · May be fixed by #25697
Labels
HTTP API Bug is in RESTful API kind/bug Categorizes issue or PR as related to a bug.

Comments

@D3vil0p3r

D3vil0p3r commented Dec 19, 2024

Issue Description

Podman REST API container list returns a list of containers with partial/missing Mounts attribute values.

Steps to reproduce the issue

Start the Podman user service:

systemctl --user start podman

To simplify the reproduction, let's use the Podman Python bindings (podman-py):

import podman

client = podman.from_env()
client.ping()
image = client.images.pull('docker.io/athenaos/base:latest')
docker_create_function = client.containers.create
docker_args = {"image": image,
               "name": "exegol-default",
               "mounts": [
                    {'target': '/opt/resources', 'source': '/home/athena/Desktop', 'type': 'bind'},
                    {'target': '/workspaces', 'source': '/home/athena/Documents', 'type': 'bind'}
                ]}

container = docker_create_function(**docker_args)
print(container.attrs.get("Mounts", []))

print("Now use client.containers.list:")
docker_containers = client.containers.list(all=True, filters={"name": "exegol-"})

for container in docker_containers:
    print(container.attrs.get("Mounts", []))

Run this script. It creates an exegol-default container with two bind mounts.

One note: when the container is created, the container's mounts attribute has the correct values. When the container is listed (i.e. when the /containers/json API is invoked), it does not.

Then, enable the API service:

podman system service tcp:localhost:8080 --time=0 &

and run:

curl -k http://localhost:8080/v4.0.0/libpod/containers/json?all=true | jq

It returns only the mount targets, omitting the rest of the mount information:

...
    "Labels": null,
    "Mounts": [
      "/opt/resources",
      "/workspaces"
    ],
    "Names": [
      "exegol-default"
    ],
...

Describe the results you received

...
    "Labels": null,
    "Mounts": [
      "/opt/resources",
      "/workspaces"
    ],
    "Names": [
      "exegol-default"
    ],
...

Describe the results you expected

I expect the Podman API to return all the elements of Mounts, so not only the target but also the source, type, and so on. I expect a result like the one the Docker API gives when running:

curl --cacert ca.pem --cert client-cert.pem --key client-key.pem https://localhost:1234/containers/json?all=true | jq

With the Docker API the output is correct:

...
    "Mounts": [
      {
        "Type": "bind",
        "Source": "/home/athena/Exegol/exegol-resources",
        "Destination": "/opt/resources",
        "Mode": "",
        "RW": true,
        "Propagation": "rprivate"
      },
      {
        "Type": "bind",
        "Source": "/home/athena",
        "Destination": "/workspaces",
        "Mode": "",
        "RW": true,
        "Propagation": "rprivate"
      }
    ]
...

podman info output

host:
  arch: amd64
  buildahVersion: 1.38.0
  cgroupControllers:
  - cpu
  - memory
  - pids
  cgroupManager: systemd
  cgroupVersion: v2
  conmon:
    package: conmon-1:2.1.12-1
    path: /usr/bin/conmon
    version: 'conmon version 2.1.12, commit: e8896631295ccb0bfdda4284f1751be19b483264'
  cpuUtilization:
    idlePercent: 82.38
    systemPercent: 6.71
    userPercent: 10.91
  cpus: 4
  databaseBackend: sqlite
  distribution:
    distribution: athena
    version: Rolling release
  eventLogger: journald
  freeLocks: 2047
  hostname: athenaos
  idMappings:
    gidmap:
    - container_id: 0
      host_id: 1000
      size: 1
    - container_id: 1
      host_id: 100000
      size: 65536
    uidmap:
    - container_id: 0
      host_id: 1000
      size: 1
    - container_id: 1
      host_id: 100000
      size: 65536
  kernel: 6.6.65-1-lts
  linkmode: dynamic
  logDriver: journald
  memFree: 264495104
  memTotal: 3966132224
  networkBackend: netavark
  networkBackendInfo:
    backend: netavark
    dns:
      package: aardvark-dns-1.13.1-1
      path: /usr/lib/podman/aardvark-dns
      version: aardvark-dns 1.13.1
    package: netavark-1.13.1-1
    path: /usr/lib/podman/netavark
    version: netavark 1.13.1
  ociRuntime:
    name: runc
    package: runc-1.2.3-1
    path: /usr/bin/runc
    version: |-
      runc version 1.2.3
      spec: 1.2.0
      go: go1.23.4
      libseccomp: 2.5.5
  os: linux
  pasta:
    executable: /usr/bin/pasta
    package: passt-2024_11_27.c0fbc7e-1
    version: |
      pasta 2024_11_27.c0fbc7e
      Copyright Red Hat
      GNU General Public License, version 2 or later
        <https://www.gnu.org/licenses/old-licenses/gpl-2.0.html>
      This is free software: you are free to change and redistribute it.
      There is NO WARRANTY, to the extent permitted by law.
  remoteSocket:
    exists: true
    path: /run/user/1000/podman/podman.sock
  rootlessNetworkCmd: pasta
  security:
    apparmorEnabled: false
    capabilities: CAP_CHOWN,CAP_DAC_OVERRIDE,CAP_FOWNER,CAP_FSETID,CAP_KILL,CAP_NET_BIND_SERVICE,CAP_SETFCAP,CAP_SETGID,CAP_SETPCAP,CAP_SETUID,CAP_SYS_CHROOT
    rootless: true
    seccompEnabled: true
    seccompProfilePath: /etc/containers/seccomp.json
    selinuxEnabled: false
  serviceIsRemote: false
  slirp4netns:
    executable: ""
    package: ""
    version: ""
  swapFree: 6514012160
  swapTotal: 8053059584
  uptime: 7h 45m 8.00s (Approximately 0.29 days)
  variant: ""
plugins:
  authorization: null
  log:
  - k8s-file
  - none
  - passthrough
  - journald
  network:
  - bridge
  - macvlan
  - ipvlan
  volume:
  - local
registries: {}
store:
  configFile: /home/athena/.config/containers/storage.conf
  containerStore:
    number: 1
    paused: 0
    running: 0
    stopped: 1
  graphDriverName: overlay
  graphOptions: {}
  graphRoot: /home/athena/.local/share/containers/storage
  graphRootAllocated: 631544741888
  graphRootUsed: 30339878912
  graphStatus:
    Backing Filesystem: btrfs
    Native Overlay Diff: "true"
    Supports d_type: "true"
    Supports shifting: "false"
    Supports volatile: "true"
    Using metacopy: "false"
  imageCopyTmpDir: /var/tmp
  imageStore:
    number: 3
  runRoot: /run/user/1000/containers
  transientStore: false
  volumePath: /home/athena/.local/share/containers/storage/volumes
version:
  APIVersion: 5.3.1
  Built: 1732225906
  BuiltTime: Thu Nov 21 22:51:46 2024
  GitCommit: 4cbdfde5d862dcdbe450c0f1d76ad75360f67a3c
  GoVersion: go1.23.3
  Os: linux
  OsArch: linux/amd64
  Version: 5.3.1

Podman in a container

No

Privileged Or Rootless

Rootless

Upstream Latest Release

Yes

Additional environment details

Arch Linux

Additional information

Initial issue reported to containers/podman-py#488

@D3vil0p3r D3vil0p3r added the kind/bug Categorizes issue or PR as related to a bug. label Dec 19, 2024
@rhatdan
Member

rhatdan commented Dec 20, 2024

This is calling into the podman ps api.

$ podman ps -a
CONTAINER ID  IMAGE                            COMMAND     CREATED        STATUS      PORTS       NAMES
c42e7d2b4e7a  docker.io/library/alpine:latest  /bin/sh     4 minutes ago  Created                 exegol-default
$ podman ps -a --format '{{ .Mounts }}'
[/opt/resources /workspaces]

Not sure if that is correct, but this is the way it is coded.

@rhatdan
Member

rhatdan commented Dec 20, 2024

As opposed to the podman inspect api.

podman inspect exegol-default   --format '{{ .Mounts }}'
[{bind  /tmp/foobar /opt/resources   [nosuid nodev rbind] true rprivate } {bind  /tmp/workspaces /workspaces   [nosuid nodev rbind] true rprivate }]

@D3vil0p3r
Author

I guess that, unlike podman inspect, podman ps invokes the same problematic /containers/json API. Do you know which source file in the repository implements the /containers/json API?

@rhatdan
Member

rhatdan commented Dec 20, 2024

Sorry, perhaps @Luap99 or @mheon can help.

@D3vil0p3r
Author

I just tested the Docker API's container list call, and it works correctly:

curl --cacert ca.pem --cert client-cert.pem --key client-key.pem https://localhost:1234/containers/json?all=true | jq

Output:

...
    "Mounts": [
      {
        "Type": "bind",
        "Source": "/home/athena/Exegol/exegol-resources",
        "Destination": "/opt/resources",
        "Mode": "",
        "RW": true,
        "Propagation": "rprivate"
      },
      {
        "Type": "bind",
        "Source": "/home/athena",
        "Destination": "/workspaces",
        "Mode": "",
        "RW": true,
        "Propagation": "rprivate"
      }
    ]
...

I think the Podman container listing API should be fixed to return the same consistent information.

@rhatdan
Member

rhatdan commented Dec 20, 2024

I tend to agree with you.

It looks like the Docker API does the correct thing and looks up the mounts on the server side.

@mheon mheon added the HTTP API Bug is in RESTful API label Dec 20, 2024
@mheon
Member

mheon commented Dec 20, 2024

Should be a simple enough fix, but most of us are on PTO so I doubt it'll be touched until the new year.

@D3vil0p3r
Author

Should be a simple enough fix, but most of us are on PTO so I doubt it'll be touched until the new year.

In terms of time effort, in your opinion how much time would it take to work on the code and fix it?

@mheon
Member

mheon commented Dec 21, 2024

Probably a few hours? From what I can see, the results from podman inspect at the command line are correct and have a fully populated mounts struct, and the REST API uses the same underlying code, so the problem is likely somewhere in the handler code, which is stripping the full Mount structs.

@rhatdan
Member

rhatdan commented Dec 23, 2024

The containers.list function is calling

docker_containers = client.containers.list(all=True, filters={"name": "exegol-"})

Which is the same thing podman --remote ps --all would call. It looks like in both Docker and Podman the ps command only outputs the Destination of Mounts, which is probably why the API only returns it in the libpod API. The Docker API for the same call returns the entire mount object. I think it would be smarter to merge the two calls into a single function in the code, since the only difference I can easily see is around the additional filters that libpod provides. I don't really know why the compat API and the libpod API required different implementations.

@D3vil0p3r
Author

Thanks @rhatdan. Sorry for the ping @mheon, is this issue planned to be fixed in January? Thanks for letting me know.

@Luap99
Member

Luap99 commented Jan 7, 2025

I don't think this is a simple fix, nor is it the correct expectation that these types match.
The inspect type was modelled after the docker inspect output; the other types are Podman-specific and do not match Docker. The REST API does not use the same code at all: these are two entirely different types that are parsed/populated differently.

And as this payload is part of the stable API we cannot change it without breaking users so such API changes could only be done in a major version.

If you want the Docker-like output, you should call the Docker compat API endpoints, not the libpod ones.
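As a hedged illustration of that advice (assuming the service started above is listening on localhost:8080): the Docker-compatible endpoints live at the same base URL as the libpod ones, just without the /libpod path segment, so the Docker-shaped payload can be reached by rewriting the URL:

```python
def to_compat(libpod_url: str) -> str:
    """Rewrite a libpod endpoint URL to its Docker-compatible counterpart
    by dropping the '/libpod' path segment (first occurrence only)."""
    return libpod_url.replace("/libpod/", "/", 1)

libpod = "http://localhost:8080/v4.0.0/libpod/containers/json?all=true"
print(to_compat(libpod))
# http://localhost:8080/v4.0.0/containers/json?all=true
```

The rewritten URL corresponds to `curl http://localhost:8080/v4.0.0/containers/json?all=true`, which should return full mount objects like the Docker output shown earlier in the thread.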

@D3vil0p3r
Author

So, it means that through the Podman API, the only mount info we can get is the "Destination"? That seems limited. I think adding the Type, Source, Mode, RW, and Propagation attributes to each Mount entry would be worthwhile for the developers and libraries building on the Podman API.

@Luap99
Member

Luap99 commented Jan 7, 2025

You can inspect the individual container to get all the data.
I am not saying the Podman API design makes much sense, but we cannot just change types in an external API.

@D3vil0p3r
Author

D3vil0p3r commented Jan 7, 2025

Mmh... Currently I am working on adding Podman support to the Exegol project, where the code lists all containers through the podman-py API call mentioned above. A workaround would be to collect only the names of the listed containers and then fetch the data for each one with a per-container call.

I agree that any API change should go into a major release, but my concern is just to start working on it. I don't know where the related code to change and test is hosted, whether in this repo or another open one. If you link me the related files, I can try to work on it.
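A minimal sketch of that workaround, with stubbed data; in real code `inspect_fn` would wrap a per-container inspect call (with podman-py, something like `client.containers.get(name).attrs`; both the helper name and that call site are illustrative, not the project's actual code):

```python
def full_mounts(list_payload, inspect_fn):
    """Collect full Mounts structs for every container in a
    /libpod/containers/json payload via one inspect call each."""
    result = {}
    for entry in list_payload:
        name = entry["Names"][0]
        # The list payload only carries destination strings, so an extra
        # round-trip per container is needed to recover the full structs.
        result[name] = inspect_fn(name).get("Mounts", [])
    return result

# Stub standing in for the real inspect API call:
inspected = {"exegol-default": {"Mounts": [
    {"Type": "bind", "Source": "/home/athena/Desktop",
     "Destination": "/opt/resources", "RW": True}]}}
listed = [{"Names": ["exegol-default"], "Mounts": ["/opt/resources"]}]
print(full_mounts(listed, lambda name: inspected[name]))
```

The obvious cost is N+1 API calls for N containers, which is exactly the inefficiency this issue is about.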

@mabulgu

mabulgu commented Feb 1, 2025

If no one has already taken this, can I assign it to myself, @mheon? I came from this issue and it might be something interesting to work on. Unless, of course, you have a tight deadline on it, as I will need some time to go through the REST interface first.

@mabulgu

mabulgu commented Feb 2, 2025

/assign

@mheon Please let me know if you have another plan for this one so that I can drop it.

edit: By another plan, I mean not touching the external API for the reasons @Luap99 mentioned and marking this as "won't do" or similar.

@D3vil0p3r
Author

D3vil0p3r commented Feb 15, 2025

@mabulgu any news on this? PS: I think the change could also be made, but only in a major release. IMHO this fix is a must-have, because the topic described is a real issue: the current values in "Mounts" are programmatically almost useless unless combined with the other values the Docker API produces. The only info I get from the API is the destination directory of each mount; I cannot get the associated source, the type, the mode, the permissions... So currently the Mounts field is not effective. Sorry for pressing the point.

@mabulgu

mabulgu commented Feb 15, 2025

/unassign

Unfortunately, I don't have the planned capacity to work on this anymore.

@D3vil0p3r AFAIU you expect a quick solution, but this is an OSS project and not everyone works on it as part of their full-time job; that's why I said "Unless, of course, you have a tight deadline on those ..." to the maintainers while assigning the issue to myself.

I think the touch could be also done but only for a major release. I think that this fix (because imho the described topic is an issue) is a "must have". The current values in "Mounts" are programmatically almost useless if not combined with the other values as produced by docker API. Why I say this: because the only info I get from API is the destination directory of mount, but I cannot get: the source it is associated, the type, the mode, the grants... So currently that Mount field is not effective. Sorry for my point.

Good point. I think you can also draft a PR with what you expect to have here, and since this is something you reported, you are probably the most motivated person to resolve it :) Just an idea ;) Good luck!

@D3vil0p3r
Author

Yes @mabulgu, I will try to create a PR and work on it. Can I kindly ask where I can find the source files/repositories related to this "Mount" part of the Podman API?

@mabulgu

mabulgu commented Feb 15, 2025

@D3vil0p3r I would start digging from here:

func ListContainers(w http.ResponseWriter, r *http.Request) {

Notice the call into abi, and from there into the ps package, for listing containers. I believe you can get more help from the maintainers, as my direction might be misleading.

@D3vil0p3r
Author

D3vil0p3r commented Feb 15, 2025

Thanks. By looking at the code, what I can say currently is that the issue is in

and

Mounts []string `json:"mounts,omitempty"`

so Mounts is incorrectly defined as an array of strings.

Its value is taken from:

Mounts: ctr.UserVolumes(),

and UserVolumes returns only the "destination" volume directories:

func (c *Container) UserVolumes() []string {

I think the solution to this issue should be an implementation like the one podman inspect uses, in container_inspect.go:

Mounts []InspectMount `json:"Mounts"`

where InspectMount type is correctly defined in the same file.

@Luap99
Member

Luap99 commented Feb 17, 2025

Just to be clear, we cannot just change the API types; they must be stable. We could decide to break the API with 6.0 for this, but there is not even a plan for a 6.0 yet, so it would take a long time before we could accept such a change.

And IMO I am not sure the assumption is correct: container list and inspect return completely different types. Changing the single Mounts field is not going to resolve that difference; there are a ton of other fields with different types as well.
So if this should actually be fixed, then the entire types should match, so that a container list basically just returns an array of inspect outputs. But that should be discussed too, as there are downsides: most likely it will be slower if we have to provide the full inspect info each time.

@D3vil0p3r
Author

D3vil0p3r commented Feb 17, 2025

we cannot just change the API types, they must be stable.

I agree with your point, and I thank you for explaining it.

I am not saying that container list and inspect data must be the same, especially since they deal with different data types. My point is only about "Mounts", which is currently not effective. For example, if I need mount information from the container list API (one call) and I also want the source or the read-write flag, I am forced to make N inspect API calls, one for each container in the list. Another personal reason: looking at this output:

    "Mounts": [
      "/opt/resources",
      "/workspaces"
    ],

how can a user tell whether those paths correspond to "Destination" or "Source"? Probably by reading the docs, if they detail this, or just empirically, but I don't find that approach efficient. It is just my opinion as an average user.
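Until the payload changes, client code has to resolve that ambiguity itself. A sketch of a normalizer (assuming, as established earlier in this thread, that the bare strings in the libpod payload are destinations):

```python
def normalize_mounts(mounts):
    """Coerce both payload shapes into a list of dicts:
    libpod list payload:     ["/opt/resources", ...]  (bare destination strings)
    compat/inspect payload:  [{"Destination": ...}]   (full mount objects)
    """
    normalized = []
    for m in mounts:
        if isinstance(m, str):
            # libpod shape: only the destination is known.
            normalized.append({"Destination": m})
        else:
            normalized.append(dict(m))
    return normalized

print(normalize_mounts(["/opt/resources", "/workspaces"]))
# [{'Destination': '/opt/resources'}, {'Destination': '/workspaces'}]
```

This only papers over the problem, of course: the source, type, mode, and RW fields are simply absent from the libpod list payload and cannot be reconstructed client-side.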

@D3vil0p3r D3vil0p3r linked a pull request Mar 26, 2025 that will close this issue
@D3vil0p3r
Author

D3vil0p3r commented Mar 27, 2025

I opened the following PR: #25697

Currently, out of over 80 tests, only 5 are failing:

sys podman debian-13 root host sqlite

...
<+042ms> # # podman __completeNoDesc  rm arg arg
<+251ms> # time="2025-03-27T00:01:36Z" level=warning msg="Could not find mount at destination \"/image1\" when parsing user volumes for container 0f2ba48bcafe3947ae78a0b0c81d82f829ff3b9c2459370ca09e574df22747ad"
         # time="2025-03-27T00:01:36Z" level=warning msg="Could not find mount at destination \"/image2\" when parsing user volumes for container 0f2ba48bcafe3947ae78a0b0c81d82f829ff3b9c2459370ca09e574df22747ad"
         # created-c-t408-yclhpsxv
         # running-c-t408-yclhpsxv
         # pause-c-t408-yclhpsxv
         # exited-c-t408-yclhpsxv
         # d664e1c39456-infra
         # a712002bbf3e-infra
         # 352d0f7601c4-infra
         # c30b7ca8b40d-infra
         # running-p-t408-yclhpsxv-con
         # degraded-p-t408-yclhpsxv-con
         # exited-p-t408-yclhpsxv-con
         # 316500eb65ca-infra
         # liveness-exec-t435-zqhpefme-unhealthy-liveness-ctr-t435-zqhpefme-unhealthy
         # d4afab7f683b-infra
         # p-t437-ucpnms2l-c-t437-ucpnms2l
         # 455e3062919d-infra
         # p-t436-bxirogve-c-t436-bxirogve
         # p-t436-bxirogve-c-not-mounted-t436-bxirogve
         # :4
         # Completion ended with directive: ShellCompDirectiveNoFileComp
         # #/vvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvv
         # #| FAIL: Command succeeded, but issued unexpected warnings
         # #\^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
         # # [teardown]

sys podman debian-13 rootless host sqlite

...
<+050ms> # $ podman __completeNoDesc  network disconnect arg
<+138ms> # [Debug] [Error] looking up volume 3b2825b748d22b590738f80608eb53334fb13c71fee980a03cfc8606402a8f8c in container 09166dc62d37dfc3387bb8b59e9c05ef12af1ffb4392b94c64a42ce8aba06a02 config: no such volume
         # :4
         # Completion ended with directive: ShellCompDirectiveNoFileComp
         # #/vvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvv
         # #|     FAIL: network disconnect: actual container listed in suggestions
         # #| expected: '.*-c-t408-sceqk8xv
         # ' (using expr)
         # #|   actual: '[Debug] [Error] looking up volume 3b2825b748d22b590738f80608eb53334fb13c71fee980a03cfc8606402a8f8c in container 09166dc62d37dfc3387bb8b59e9c05ef12af1ffb4392b94c64a42ce8aba06a02 config: no such volume'
         # #|         > ':4'
         # #|         > 'Completion ended with directive: ShellCompDirectiveNoFileComp'
         # #\^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
         # # [teardown]

sys podman fedora-41 root host sqlite

...
<+099ms> # # podman __completeNoDesc  network reload
<+252ms> # time="2025-03-26T18:57:48-05:00" level=warning msg="Could not find mount at destination \"/image\" when parsing user volumes for container 97880d8120d9e3d43279ffafb47258db43f023404dde383ef8fc15d4479b0e7c"
         # created-c-t408-sdg6r3tc
         # running-c-t408-sdg6r3tc
         # pause-c-t408-sdg6r3tc
         # exited-c-t408-sdg6r3tc
         # 8597b542beea-infra
         # ed268883dcf0-infra
         # fe64fd137605-infra
         # 2357d27fe86d-infra
         # running-p-t408-sdg6r3tc-con
         # degraded-p-t408-sdg6r3tc-con
         # exited-p-t408-sdg6r3tc-con
         # 33cf8009e7da-infra
         # liveness-exec-t435-ijlwpq0v-healthy-liveness-ctr-t435-ijlwpq0v-healthy
         # 25d070c186b5-infra
         # p-t436-se8eqjuc-c-t436-se8eqjuc
         # 38ce1f12224d-infra
         # p-t437-beskcj3f-c-t437-beskcj3f
         # :4
         # Completion ended with directive: ShellCompDirectiveNoFileComp
         # #/vvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvv
         # #| FAIL: Command succeeded, but issued unexpected warnings
         # #\^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
         # # [teardown]

sys podman fedora-41 rootless host boltdb

...
<+050ms> # $ podman __completeNoDesc  restart
<+229ms> # time="2025-03-26T18:59:06-05:00" level=warning msg="Could not find mount at destination \"/test2\" when parsing user volumes for container 1e5b5d15239e5839d4bb4b76d9e50490363458f067ef703fa1c8af0f86a120db"
         # time="2025-03-26T18:59:06-05:00" level=warning msg="Could not find mount at destination \"/test1\" when parsing user volumes for container 1e5b5d15239e5839d4bb4b76d9e50490363458f067ef703fa1c8af0f86a120db"
         # time="2025-03-26T18:59:06-05:00" level=warning msg="Could not find mount at destination \"/test_same\" when parsing user volumes for container 1e5b5d15239e5839d4bb4b76d9e50490363458f067ef703fa1c8af0f86a120db"
         # created-c-t408-bzgieod5
         # running-c-t408-bzgieod5
         # pause-c-t408-bzgieod5
         # exited-c-t408-bzgieod5
         # 95cbd3a83a93-infra
         # 6ce0c5a2641a-infra
         # 1244f3e8fc04-infra
         # 2af92b728162-infra
         # running-p-t408-bzgieod5-con
         # degraded-p-t408-bzgieod5-con
         # exited-p-t408-bzgieod5-con
         # 6a39fcfa9967-infra
         # liveness-exec-t435-aeegygvf-unhealthy-liveness-ctr-t435-aeegygvf-unhealthy
         # cc1f93968f00-infra
         # p-t436-hdsnqipr-c-t436-hdsnqipr
         # p-t436-hdsnqipr-c-not-mounted-t436-hdsnqipr
         # :4
         # Completion ended with directive: ShellCompDirectiveNoFileComp
         # #/vvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvv
         # #| FAIL: Command succeeded, but issued unexpected warnings
         # #\^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
         # # [teardown]

sys podman rawhide root host sqlite

...
[+0535s] not ok 5 |012| podman images - bare manifest list in 1853ms
         # tags: ci:parallel
         # (from function `bail-now' in file test/system/[helpers.bash, line 187](https://github.com/containers/podman/blob/dd476f0a5ded2fed156ae4f2bb5abff65f884ae8/test/system/helpers.bash#L187),
         #  from function `die' in file test/system/[helpers.bash, line 970](https://github.com/containers/podman/blob/dd476f0a5ded2fed156ae4f2bb5abff65f884ae8/test/system/helpers.bash#L970),
         #  from function `run_podman' in file test/system/[helpers.bash, line 572](https://github.com/containers/podman/blob/dd476f0a5ded2fed156ae4f2bb5abff65f884ae8/test/system/helpers.bash#L572),
         #  in test file test/system/[012-manifest.bats, line 80](https://github.com/containers/podman/blob/dd476f0a5ded2fed156ae4f2bb5abff65f884ae8/test/system/012-manifest.bats#L80))
         #   `run_podman images --format '{{.ID}}' --no-trunc' failed
         #
<+     > # # podman inspect --format {{.ID}} quay.io/libpod/testimage:20241011
<+103ms> # b82e560ed57b77a897379e160371adcf1b000ca885e69c62cbec674777a83850
         #
<+067ms> # # podman manifest create m-t5-f5ujzv0p:1.0
<+076ms> # 020bc71395759b7d1174a6d779cf2dfa24fab8ba7c839659144698720b166896
         #
<+023ms> # # podman manifest inspect --verbose 020bc71395759b7d1174a6d779cf2dfa24fab8ba7c839659144698720b166896
<+079ms> # {
         #     "schemaVersion": 2,
         #     "mediaType": "application/vnd.docker.distribution.manifest.list.v2+json",
         #     "manifests": []
         # }
         #
<+030ms> # # podman manifest inspect -v 020bc71395759b7d1174a6d779cf2dfa24fab8ba7c839659144698720b166896
<+071ms> # {
         #     "schemaVersion": 2,
         #     "mediaType": "application/vnd.docker.distribution.manifest.list.v2+json",
         #     "manifests": []
         # }
         #
<+036ms> # # podman images --format {{.ID}} --no-trunc
<+862ms> # Error: locating image "368019224a99af9960e555215256ddb9b8e4e8d763387b1ae2bd24420a17b328" for loading instance list: locating image with ID "368019224a99af9960e555215256ddb9b8e4e8d763387b1ae2bd24420a17b328": image not known
<+008ms> # [ rc=125 (** EXPECTED 0 **) ]
         # #/vvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvv
         # #| FAIL: exit code is 125; expected 0
         # #\^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
         # [ teardown - ignore 'image not known' errors below ]
         #
<+024ms> # # podman manifest rm m-t5-f5ujzv0p:1.0 localhost:27387/m-t5-f5ujzv0p:1.0
<+085ms> # Untagged: localhost/m-t5-f5ujzv0p:1.0
         # Deleted: 020bc71395759b7d1174a6d779cf2dfa24fab8ba7c839659144698720b166896
         # Error: localhost:27387/m-t5-f5ujzv0p:1.0: image not known
<+005ms> # [ rc=1 ]
         # # [teardown]

and

...
<+026ms> # # podman __completeNoDesc  inspect b8
<+391ms> # time="2025-03-26T18:57:54-05:00" level=warning msg="Could not find mount at destination \"/image1\" when parsing user volumes for container ab3b9497dd4de439253301aaaec428cb73c4413ab4529abb2dbda7f2185eca51"
         # time="2025-03-26T18:57:54-05:00" level=warning msg="Could not find mount at destination \"/image2\" when parsing user volumes for container ab3b9497dd4de439253301aaaec428cb73c4413ab4529abb2dbda7f2185eca51"
         # b84189e81ffc-infra
         # b82e560ed57b
         # b84189e81ffc
         # :4
         # Completion ended with directive: ShellCompDirectiveNoFileComp
         # #/vvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvv
         # #| FAIL: Command succeeded, but issued unexpected warnings
         # #\^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
         # # [teardown]

If you have some hint, please let me know.
