I noticed that in the example provided in stable-diffusion.cpp, a context is created and used for a single image generation (either image-to-image or text-to-image). I was wondering whether it is possible to reuse the same context for multiple generations without recreating it each time.
I tried reusing the context for multiple generations but ran into a segmentation fault in both image-to-image and text-to-image. For example, I copied and pasted `results = txt2img(...);` after the original result-generation code section.
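For reference, below is roughly the reuse pattern I have in mind, as a minimal sketch. The `txt2img()` argument list shown follows an older revision of `stable_diffusion.h` and may not match the current header, and the `generate_once()` helper is just illustrative; the context itself is assumed to be created once, as in `examples/cli/main.cpp`:

```c
// Sketch of reusing one sd_ctx_t across several generations.
// NOTE: the exact txt2img() parameter list changes between
// stable-diffusion.cpp revisions; the one below follows an older
// stable_diffusion.h and has to be matched to your checkout.
#include <stdio.h>
#include <stdlib.h>
#include "stable-diffusion.h"

// ctx is created ONCE by the caller (as in examples/cli/main.cpp) and must
// stay alive across calls; free_sd_ctx() is only called after the last one.
static void generate_once(sd_ctx_t* ctx, const char* prompt, int64_t seed) {
    sd_image_t* results = txt2img(ctx,
                                  prompt,
                                  "",        /* negative prompt */
                                  -1,        /* clip_skip */
                                  7.0f,      /* cfg_scale */
                                  512, 512,  /* width, height */
                                  EULER_A,   /* sample_method */
                                  20,        /* sample_steps */
                                  seed,
                                  1          /* batch_count */);
    if (results == NULL) {
        fprintf(stderr, "txt2img failed\n");
        return;
    }
    // ... save or consume results[0] here ...
    // Free each returned image (and the array) between calls,
    // but do NOT free the ctx itself.
    free(results[0].data);
    results[0].data = NULL;
    free(results);
}

/* intended usage:
 *   sd_ctx_t* ctx = new_sd_ctx(...);   // once
 *   generate_once(ctx, "a photo of a cat", 42);
 *   generate_once(ctx, "a photo of a dog", 43);
 *   free_sd_ctx(ctx);                  // once, at the end
 */
```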
Could you provide an example of a context that has been reused for multiple generations, or is this approach not supported?