ComfyUI upscaling: best models and workflows (a Reddit roundup)
The reason I haven't raised issues on any of the repos is that I'm not sure where the problem actually lives: ComfyUI, Ultimate Upscale, or some other custom node entirely. It simply does nothing; I check, and my GPU and CPU are doing no extra work, and nothing is downloading.

I found a tile model but could not figure it out, as lllite seems to require the input image to match the output, so I'm unsure how it works for upscaling with tile.

This is done after the refined image is upscaled and encoded into a latent. The resolution is okay, but if possible I would like to get something better.

The hires script is overriding the KSampler's denoise, so you're actually using 0.56 denoise, which is quite high and gives it just enough freedom to totally screw up your image.

Jan 8, 2024 · Learn how to upscale images using ComfyUI and the 4x-UltraSharp model for crystal-clear enhancements.

"Upscaling with model" is an operation on normal images, and we can use a corresponding model such as 4x_NMKD-Siax_200k.pth or 4x_foolhardy_Remacri.pth. Upscaling on larger tiles will be less detailed / more blurry, and you will need more denoise, which in turn will start altering the result too much.

And for some reason it sometimes needs to download the AutoencoderKL model.

vs. (AOM3 model) -> (SWAP VAE NOW!) -> Image -> Upscale -> Refine with other model + nice VAE.

I gave up on latent upscale. Images are too blurry and lack detail; it's like upscaling any regular image with some traditional method.

Tried the llite custom nodes with lllite models and was impressed.

Best method to upscale faces after doing a faceswap with ReActor? It's a 128px model, so the output faces after faceswapping are blurry and low-res.

Latent upscale looks much more detailed, but gets rid of the detail of the original image.
If I wanted to do an upscale with something like ESRGAN, which requires working in image space, it suggests that it matters whether I do (AOM3 model) -> Image -> Upscale -> Refine with other model + nice VAE on the final step only.

This is the 'latent chooser' node; it works but is slightly unreliable. 0.80 is usually mutated but sometimes looks great.

Instructions to use any base model have been added to the script's shared post.

I want to upscale my image with a model, and then select its final size. Specifically, the padded image is sent to the ControlNet as pixels (the "image" input), and the padded image is also VAE-encoded and sent to the sampler as the latent image.

a1111: That workflow consists of video frames at 15fps into VAE encode and CNs, a few LoRAs, AnimateDiff v3, lineart and scribble-sparsectrl CNs, a basic KSampler with low CFG, a small upscale, AD detailer to fix the face (with lineart and depth CNs in SEGS, the same LoRAs, and AnimateDiff), upscale with model, interpolation, and combining to 30fps. I'm sure I'm just doing something wrong when implementing the CN.

Search for "upscale" and click Install for the models you want.

ReActor has built-in CodeFormer and GFPGAN, but all the advice I've read says to avoid them.

ComfyUI upscaling is best for a dozen or so upscales; alas, it would take all week to do 100+.

The idea is simple: use the refiner as a model for upscaling instead of using a 1.5 model. It only generates its preview.

So in those other UIs I can use my favorite upscaler (like NMKD's 4x superscalers), but I'm not forced to have it only multiply by 4x.

Please share your tips, tricks, and workflows for using this software to create your AI art.

It turns out lovely results, but I'm finding that when I get to the upscale stage, the face changes to something very similar every time.
You can use it on any picture; you will need ComfyUI_UltimateSDUpscale.

The FACE_MODEL output from the ReActor node can be used with the Save Face Model node to create an insightface model, which can then be used as a ReActor input instead of an image.

This is what I have so far (using the custom nodes to reduce the visual clutter).

Newcomers should familiarize themselves with easier-to-understand workflows first: despite the attempt at a clear structure, a workflow with this many nodes can be complex to follow in detail.

I'm trying to find a way of upscaling the SD video up from its 1024x576 resolution.

Due to the complexity of the workflow, a basic understanding of ComfyUI and ComfyUI Manager is recommended.

0.6 denoise and either: CNet strength 0.5, euler, sgm_uniform; or CNet strength 0.9, euler.

From the ComfyUI_examples, there are two different two-pass ("hires fix") methods: one is latent scaling, the other is non-latent scaling.

A few examples of my ComfyUI workflow to make very detailed 2K images of real people (cosplayers in my case) using LoRAs and with fast renders (10 minutes on a laptop RTX 3060).

ssitu/ComfyUI_UltimateSDUpscale: ComfyUI nodes for the Ultimate Stable Diffusion Upscale script by Coyote-A.

After borrowing many ideas, and learning ComfyUI: if a caption file exists (e.g. from one of our SOTA batch captioners like LLaVA), it will be used as the prompt.

Look at this workflow. Jul 6, 2024 · What is ComfyUI? ComfyUI is a node-based GUI for Stable Diffusion.

It's possible that MoonDream is competitive if the user spends a lot of time crafting the perfect prompt, but if the prompt is simply "Caption the image" or "Describe the image", Florence2 wins.

But sometimes it just does the upscale; other times it finds this model and I see the steps to load it.

I share many results, and many people ask me to share how.

For comparison, in a1111 I drop the ReActor output image into the img2img tab, keep the same latent size, use a tile ControlNet model, choose the Ultimate SD Upscale script, and scale it up.

Because the upscale model of choice can only output a 4x image and they want 2x.
I ran some tests this morning: basic latent upscale, basic upscaling via model in pixel space, with tile ControlNet, with SD Ultimate Upscale, with LDSR, with SUPIR, and whatnot. From what I've generated so far, the model upscale handles edges slightly better than the Ultimate Upscale.

I get good results using stepped upscalers, the Ultimate SD upscaler, and such.

Moreover, batch folder processing has been added.

For SD 1.5 I'd go for Photon, RealisticVision or epiCRealism.

Welcome to the unofficial ComfyUI subreddit.

It's high quality, and it's easy to control the amount of detail added using control scale and restore cfg, but it slows down at higher scales faster than Ultimate SD Upscale does.

The following allows you to use the A1111 models etc. within ComfyUI, to prevent having to manage two installations of model files, LoRAs, etc.

I liked the ability in MJ to choose an image from the batch and upscale just that image.

Upscale x1.5 ~ x2: no need for a model; this can be a cheap latent upscale. Sample again at denoise=0.5; you don't need that many steps. From there you can use a 4x upscale model and sample again at low denoise if you want higher resolution.

Like many XL users out there, I'm also new to ComfyUI and very much just a beginner in this regard.

LOL, yeah, I push the denoising on Ultimate Upscale too, quite often, just saying "I'll fix it in Photoshop". Also, both have a denoise value that drastically changes the result.

So I'm happy to announce today: my tutorial and workflow are available.

The same seed is probably not necessary and can cause bad artifacting via the "burn-in" problem when you stack same-seed samplers.
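A minimal sketch of the stepped-upscale idea mentioned above (my own illustration, not code from any workflow named here): grow the size in roughly 1.5x hops instead of one big jump, so each resample pass only has to cover a modest change. The rounding to multiples of 8 is my assumption, to keep sizes friendly to SD latents.

```python
def stepped_sizes(start: int, target: int, factor: float = 1.5) -> list[int]:
    """Return the sequence of widths visited when upscaling in steps."""
    sizes = [start]
    while sizes[-1] < target:
        # Round each intermediate size to a multiple of 8, capped at the target.
        nxt = min(target, int(round(sizes[-1] * factor / 8)) * 8)
        sizes.append(nxt)
    return sizes

print(stepped_sizes(512, 2048))  # -> [512, 768, 1152, 1728, 2048]
```

Each intermediate size is where you would re-sample at low denoise before the next hop.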
With a denoise setting of 0.25 I get a good blending of the face without changing the image too much.

Adding in the Iterative Mixing KSampler from the early work on DemoFusion produces far more spatially consistent results, as shown in the second image. 0.45 is the minimum and fairly jagged; 0.65 seems to be the best.

With it, I either can't get rid of visible seams, or the image is too constrained by the low denoise and so lacks detail. New to ComfyUI, so not an expert.

SD 1.5 combined with ControlNet tile and the Foolhardy upscale model works well here.

I'm not very experienced with ComfyUI, so any ideas on how I can set up a robust workstation utilizing common tools like img2img, txt2img, the refiner, model merging, LoRAs, etc. would be welcome.

The latest version can be downloaded here.

I find Upscale useful, but as I often upscale to 6144 x 6144, GigaPixel has the batch speed and capacity to make 100+ upscales worthwhile.

This model yields way better results.

Now go back to img2img, mask the important parts of your image, and upscale that.

I haven't been able to replicate this in Comfy.

I added a switch toggle for the group on the right.

got prompt ... Prompt executed in X.XX seconds

I'm using a simplified version of Murphylanga's ultimate tile upscale.

And when purely upscaling, the best upscaler is called LDSR.

Also, I converted the base model to Juggernaut-XL-v9. Thanks.

Usually I use two of my workflows: I've so far achieved this with the Ultimate SD image upscale, using the 4x-Ultramix_restore upscale model.

FWIW, I was using it WITH the PatchModelAddDownscale node to generate with RV 5.1 and LCM for 12 samples at 768x1152, then using a 2x image upscale model, and consistently getting the best skin and hair details I've ever seen.

ComfyUI's upscale-with-model node doesn't have an output size option like other upscale nodes, so one has to manually downscale the image to the appropriate size.
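The manual downscale after a fixed-ratio upscale model is just arithmetic; here is a small sketch (a hypothetical helper, not a ComfyUI node) of the fractional value you would feed an "Upscale Image By" node placed after, say, a 4x model:

```python
def post_model_factor(src_px: int, model_scale: int, target_px: int) -> float:
    """Fractional 'upscale by' value to apply after a fixed-ratio model
    so the final image lands on target_px (e.g. a 4x model when you want 2x)."""
    return target_px / (src_px * model_scale)

# 512px source through a 4x model, but we only want 1024px out:
print(post_model_factor(512, 4, 1024))  # -> 0.5
```

The same arithmetic explains the common "they want 2x but the model only does 4x" complaint: a follow-up factor of 0.5 fixes it.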
Where a 2x upscale at 30 steps took me ~2 minutes, a 4x upscale took 15, and this is with tiling, so my VRAM usage was moderate in all cases.

In A1111, you can do hires fix with any model upscaler you want, like 4xUltraSharp, and you can also choose the dimensions and denoising strength. Is there a way to do this in ComfyUI? I know of the Hires Script node, but when you choose the upscaler model on that one, you can't choose the denoising strength or number of steps.

Download this first and put it into the folder inside ComfyUI called custom_nodes, then restart ComfyUI. You should see a new button on the left tab (the last one); click that, then click "missing custom nodes" and install the one listed. After you have installed it, restart ComfyUI once more and it should work.

Generates an SD1.5 image and upscales it to 4x the original resolution (512 x 512 to 2048 x 2048) using Upscale with Model, Tile ControlNet, Tiled KSampler, Tiled VAE Decode, and colour matching.

Image upscale is less detailed, but more faithful to the image you upscale.

I was just using Sytan's workflow with a few changes to some of the settings, and I replaced the last part of his workflow with a two-step upscale using the refiner model via Ultimate SD Upscale, like you mentioned.

Please keep posted images SFW.

It's an SD 1.5 model, and can be applied to Automatic easily.

Upscale Latent By: 1.5-ish new size; Seed: 12345 (same seed); CFG: 3 (same CFG); Steps: 5 (same); Denoise: this is where you have to test.

Use the file after you have removed the « txt » extension.

Near the top there is system information for VRAM, RAM, what device was used (graphics card), and version information for ComfyUI.

The issue is likely caused by a quirk in the way MultiAreaConditioning works: its sizes are defined in pixels. This means that your prompt (a.k.a. positive image conditioning) is no longer a simple text description of what should be contained in the total area of the image; it is now a specific description that applies in the area defined by the coordinates starting from x:0px y:320px to x:768px y:…
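Since "Denoise: this is where you have to test" is the only free variable in that second-pass recipe, one way to organize the testing is a small grid that holds seed, CFG, and steps fixed and sweeps only the denoise. This is my own sketch; the key names are illustrative, not ComfyUI parameters.

```python
# Fixed second-pass settings from the recipe above; only denoise varies.
base = {"seed": 12345, "cfg": 3, "steps": 5, "upscale": 1.5}

def denoise_grid(values):
    """One run config per candidate denoise, everything else held constant."""
    return [{**base, "denoise": d} for d in values]

for run in denoise_grid([0.3, 0.45, 0.55, 0.65]):
    print(run)
```

Rendering the same seed at each denoise makes the comparison an apples-to-apples one.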
Upscaling: increasing the resolution and sharpness at the same time. The downside is that it takes a very long time.

That's because latent upscale turns the base image into noise (blur). The latent upscale in ComfyUI is crude as hell, basically just a "stretch this image" type of upscale.

Nodes! eeeee! The best part about it, though: because you can move nodes around and connect them however you want, you can also tell it to save out an image at any point along the way, which is great, because I often forget that stuff.

For example, if you start with a 512x512 empty latent image, then apply a 4x model, apply "upscale by" 0.5 to get a 1024x1024 final image (512 * 4 * 0.5 = 1024).

Jan 5, 2024 · Click on Install Models on the ComfyUI Manager menu.

ComfyUI: the most powerful and modular diffusion model GUI, API, and backend with a graph/nodes interface (comfyanonymous/ComfyUI).

There are also "face detailer" workflows for faces specifically.

All images are generated using both the SDXL Base model and the Refiner model, each automatically configured to perform a certain number of diffusion steps according to the "Base/Refiner Step Ratio" formula defined in the dedicated widget.

I rarely use upscale-by-model on its own because of the odd artifacts you can get.

"Latent upscale" is an operation in latent space, and I don't know any way to use the model mentioned above (a .pth upscaler) in latent space.

Within ComfyUI, use the extra_model_paths.yaml file.

If you want a better grounding at making your own ComfyUI systems, consider checking out my tutorials.

After 2 days of testing, I found Ultimate SD Upscale to be detrimental here.
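For reference, ComfyUI ships an extra_model_paths.yaml.example file in its root directory. A section pointing at an existing A1111 install looks roughly like the sketch below; the path is a placeholder, and the exact key names should be checked against the shipped example file before relying on them.

```yaml
# Sketch of an extra_model_paths.yaml section for sharing an A1111 install.
# base_path is a placeholder; verify key names against the shipped example.
a111:
    base_path: /path/to/stable-diffusion-webui/

    checkpoints: models/Stable-diffusion
    vae: models/VAE
    loras: |
        models/Lora
        models/LyCORIS
    upscale_models: |
        models/ESRGAN
        models/RealESRGAN
        models/SwinIR
    embeddings: embeddings
    controlnet: models/ControlNet
```

Rename the example file to extra_model_paths.yaml (dropping the ".example") and restart ComfyUI for it to take effect.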
An alternative method is: make sure you are using the KSampler (Efficient) version, or another sampler node that has the 'sampler state' setting, for the first (low-resolution) pass.

Create a new ComfyUI install. I have created a comfyuiSUPIR just for SUPIR, and in the new ComfyUI, link the model folders with the full path for the base models folder and the checkpoint folder (at least) in comfy/extra-model…

Like, I can understand that using the Ultimate Upscale one could add more details by adding steps/noise or whatever you'd like to tweak on the node.

Some commonly used blocks are loading a checkpoint model, entering a prompt, specifying a sampler, etc.

There is no tiling in the default A1111 hires fix. Instead, I use Tiled KSampler with a 0.x denoise.

I tried all the possible upscalers in ComfyUI (LDSR, latent upscale, several models such as NMKV, the Ultimate SD Upscale node, "hires fix" (yuck!), the Iterative Latent Upscale via pixel space node (mouthful)), and even bought a license from Topaz to compare the results with FastStone (which is great, btw, for this type of work).

model: base SD v1.5

I think I have a reasonable workflow that allows you to test your prompts and settings and then "flip a switch", put in the image numbers you want to upscale, and rerun the workflow.

Good for depth and open pose; so far so good.

If you use Iterative Upscale, it might be better to approach it by adding noise using techniques like noise injection or an unsampler hook.
An all-in-one workflow would be awesome.

Even with ControlNets, if you simply upscale and then de-noise latents, you'll get weird artifacts, like the face in the bottom right instead of a teddy bear.

I usually use 4x-UltraSharp for realistic videos and 4x-AnimeSharp for anime videos.

You just have to use the "upscale by" node with the bicubic method and a fractional value (0.5 if you want to divide by 2) after upscaling by a model.

For the samplers I've used dpmpp_2a (as this works with the Turbo model) but unsample with dpmpp_2m; for me this gives the best results.

Import times for custom nodes: this list will show which custom nodes loaded (or failed to load).

After generating my images I usually do Hires.fix, but since I'm using XL I skip that and go straight to img2img and do an SD Upscale by 2x.

In the saved workflow it's at 4, with 10 steps (Turbo model), which is like a 60% denoise.

That is using an actual SD model to do the upscaling, which, afaik, doesn't yet exist in ComfyUI. Does anyone have any suggestions? Would it be better to do an iterative upscale?

This information tells us what hardware ComfyUI sees and is using.

The realistic model that worked the best for me is JuggernautXL; even the base 1024x1024 images were coming out nicely.

ComfyUI breaks down a workflow into rearrangeable elements, so you can easily make your own.

So your workflow should look like this: KSampler (1) -> VAE Decode -> Upscale Image (using Model) -> Upscale Image By (to downscale the 4x result to the desired size) -> VAE Encode -> KSampler (2).
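A toy trace of that node chain (plain Python tracking image sizes only, not real ComfyUI code) shows why the "Upscale Image By" step sits between the 4x model and the second sampler. The 8x pixel-to-latent stride is the usual SD VAE factor; everything else is illustrative.

```python
LATENT_STRIDE = 8  # standard SD VAE: one latent cell per 8x8 pixel block

def vae_decode(latent):          # latent (h, w) -> pixel (h, w)
    return (latent[0] * LATENT_STRIDE, latent[1] * LATENT_STRIDE)

def upscale_with_model(img, model_scale=4):   # fixed-ratio upscale model
    return (img[0] * model_scale, img[1] * model_scale)

def upscale_image_by(img, factor):            # fractional resize node
    return (int(img[0] * factor), int(img[1] * factor))

def vae_encode(img):             # pixel (h, w) -> latent (h, w)
    return (img[0] // LATENT_STRIDE, img[1] // LATENT_STRIDE)

latent = (64, 64)                        # KSampler (1) output, a 512x512 image
img = vae_decode(latent)                 # (512, 512)
img = upscale_with_model(img)            # (2048, 2048): the 4x model overshoots
img = upscale_image_by(img, 0.5)         # (1024, 1024): back to the desired size
latent2 = vae_encode(img)                # (128, 128), fed to KSampler (2)
print(latent2)  # -> (128, 128)
```

Without the downscale step, KSampler (2) would be handed a 2048px encode, which is slower and more seam-prone to sample.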
See the extra_model_paths.yaml.example: if you are looking to share models between SD installs, it might look something like that.

Jan 13, 2024 · TLDR: Both seem to do better and worse in different parts of the image, so potentially combining the best of both (Photoshop, seg/masking) can improve your upscales.

For upscaling there are many options.

I'm trying to combine the Ultimate SD Upscale with a Blur ControlNet like I do in Automatic1111, but I keep getting errors in ComfyUI.

Warning: the workflow does not save images generated by the SDXL Base model.

Messing around with upscale-by-model is pointless for hires fix.

There is an example as part of the install.

It has more settings to deal with than Ultimate Upscale, and it's very important to follow all of the recommended settings in the wiki.

If you want to use RealESRGAN_x4plus_anime_6B, you need to work in pixel space and forget any latent upscale.

Florence2 (large, not FT, in more_detailed_captioning mode) beats MoonDream v1 and v2 in out-of-the-box captioning.

Do you all prefer separate workflows or one massive all-encompassing workflow?

* If you are going for fine details, don't upscale in 1024x1024 tiles on an SD15 model, unless the model is specifically trained on such large sizes.

On ComfyUI Manager, go to "Pip install packages".

Mar 22, 2024 · You have two different ways you can perform a "Hires Fix" natively in ComfyUI: Latent Upscale; Upscaling Model. You can download the workflows over on the Prompting Pixels website.

Best detailer + upscaler nodes and models? Curious if anyone knows the most modern, best ComfyUI solutions for these problems. Detailing/Refiner: keeping the same resolution but re-rendering it with a neural network to get a sharper, clearer image.

Hi everyone, I've been using SD / ComfyUI for a few weeks now and I find myself overwhelmed with the number of ways to do upscaling.
You can construct an image generation workflow by chaining different blocks (called nodes) together.

Hello, fellow ComfyUI users, this is my workflow for testing different methods of improving image resolution.

For a dozen days, I've been working on a simple but efficient workflow for upscaling.

Hello, I did some testing of KSampler schedulers used during an upscale pass in ComfyUI.

It's not necessarily an inferior model; 1.5 is in a mature state where almost all the models and LoRAs are based on it, so you get better quality and speed with it.

r/StableDiffusion • Finally, AUTOMATIC1111 has fixed the high VRAM issue in pre-release version 1.6.0-RC; it's taking only 7.5GB VRAM even when swapping in the refiner. Use the --medvram-sdxl flag when starting.

I like doing a basic first-pass latent upscale before that.

I'm using a workflow that is, in short, SDXL >> ImageUpscaleWithModel (using a 1.5 model) >> FaceDetailer; see the workflow for more info.

Upscale to 2x and 4x in multi-steps, both with and without a sampler (all images are saved). Multiple LoRAs can be added and easily turned on/off (currently configured for up to three LoRAs, but it can easily take more).

SDXL is fine as the upscale model at these low denoise values (0.15-0.20).

If you want actual detail in a reasonable amount of time, you'll need a second pass with a second sampler.

Downloading the model: it's best if you download the model using the ComfyUI Manager itself; it creates the correct path and doesn't create any mess.

The best method, as said below, is to upscale the image with a model (then downscale if necessary to the desired size, because most upscalers do x4 and that's often too big to process), then send it back to VAE encode and sample it again.

Which options on the encoder and decoder nodes would work best for this kind of system? I mean tile sizes for the encoder and decoder (512 or 1024?) and the diffusion dtype of the SUPIR model loader; should I leave it on auto, or any ideas? Thank you again and keep up the good work.
Which models to download: i. stage_a.safetensors; ii. the CLIP model (its name is simply model.safetensors).

(You may also want to try an upscale model > latent upscale, but that's just my personal preference, really.)

If you let it get creative (i.e. higher denoise), it adds appropriate details.

Basically, if I find the SDXL Turbo preview close enough to what I have in mind, I one-click the group toggle node and use the normal SDXL model to iterate on SDXL Turbo's result, effectively iterating with a second KSampler at a denoise strength of around 0.4.

You can't use that model for generations/KSampler; it's still only useful for swapping.

The right upscaler will always depend on the model and style of image you are generating; UltraSharp works well for a lot of things, but sometimes has artifacts for me with very photographic or very stylized anime models.

For example, I can load an image, select a model (4xUltraSharp, for example), and select the final resolution (from 1024 to 1500, for example). If this can be solved, I think it would help lots of other people who might be running into this issue without knowing it.

And I'm sometimes too busy scrutinizing the city, landscape, object, vehicle or creature in which I'm trying to encourage insane detail to see what hallucinations it has manifested in the sky.

The issue I think people run into is that they think the latent upscale is the same as the Latent Upscale from Auto1111. This is not the case.

But basically txt2img, img2img, and 4x upscale with a few different upscalers.

Since you have only 6GB VRAM, I would choose tile ControlNet + SD Ultimate Upscale.

Though, from what someone else stated, it comes down to use case.

In other UIs, one can upscale by any model (say, 4xSharp) and there is an additional control on how much that model will multiply (often a slider from 1 to 4 or more).

A step-by-step guide to mastering image quality.
A pixel upscale using a model like UltraSharp is a bit better (and slower), but it'll still be fake detail when examined closely.

Appreciate you just looking into it.

Edit: I changed models a couple of times, restarted Comfy a couple of times… and it started working again… OP: So, this morning, when I left for…

To get the absolute best upscales requires a variety of techniques, and often requires regional upscaling at some points.

I wanted to know what difference they make, and they do! Credit to Sytan's SDXL workflow, which I reverse-engineered, mostly because I'm new to ComfyUI and wanted to figure it all out.

You could also try a standard checkpoint with, say, 13 and 30.

The only approach I've seen so far is using the Hires fix node, where its latent input comes from AI upscale > downscale image nodes.

It is a simple way to compare these methods; it is a bit messy, as I have no artistic cell in my body.

Hi, does anyone know if there's an Upscale Model Blend node, like with A1111? Being able to get a mix of models in A1111 is great, where two models…

That's because of the model upscale. There's "latent upscale by", but I don't want to upscale the latent image.

Ty, I will try this. ComfyUI is amazing.