
ComfyUI best upscale models: a Reddit roundup

This way it replicates the SD Upscale/Ultimate SD Upscale scripts from A1111. If you let it get creative (i.e. use a higher denoise), it adds appropriate details.

Hey folks, lately I have been getting into the whole ComfyUI thing and trying different things out. I was just using Sytan's workflow with a few changes to some of the settings, and I replaced the last part of his workflow with a two-step upscale using the refiner model via Ultimate SD Upscale, like you mentioned. I'm trying to combine Ultimate SD Upscale with a Blur ControlNet like I do in Automatic1111, but I keep getting errors in ComfyUI. Taking the output of a KSampler and running it through a latent upscaling node results in major artifacts (lots of horizontal and vertical lines, and blurring). Always wanted to integrate one myself. Moreover, batch folder processing has been added.

"Upscaling with model" is an operation on normal images, using a corresponding model such as 4x_NMKD-Siax_200k.pth. For a latent upscale, attach a "latent_image" input; in this case it's the "upscale latent". I can understand that with Ultimate Upscale one could add more details by adding steps/noise or whatever else you'd like to tweak on the node. In the saved workflow it's at 4, with 10 steps (Turbo model), which is like a 60% denoise. There are also "face detailer" workflows for faces specifically. The aspect ratio of 16:9 is the same in the empty latent and anywhere else that image sizes are used.

Reactor has built-in CodeFormer and GFPGAN, but all the advice I've read says to avoid them. You could also try a standard checkpoint with, say, 13 and 30 steps. It's a lot faster than tiling, but outputs aren't detailed. If you want to use RealESRGAN_x4plus_anime_6B, you need to work in pixel space and forget any latent upscale.

You create nodes and "wire" them together. All of this can be done in Comfy with a few nodes. Here is an example of how to use upscale models like ESRGAN.
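The distinction above between "latent upscale" and "upscaling with model" comes down to which tensor you resize. A minimal sketch, assuming the standard SD-family 8x VAE compression factor (the helper names are mine, not ComfyUI's):

```python
# Sketch: latent upscale vs. pixel-space model upscale.
# SD-family VAEs compress 8x8 pixel blocks into one latent cell, so a latent
# tensor is 1/8 the pixel resolution per side (assumption: SD1.5/SDXL VAE).

def pixel_to_latent(w, h):
    """Pixel-space size -> latent-space size."""
    return w // 8, h // 8

def latent_upscale(w, h, factor):
    """'Upscale latent by' resizes the latent grid directly (interpolation, no model)."""
    return int(w * factor), int(h * factor)

def model_upscale(w, h, model_scale=4):
    """'Upscale image (using model)' runs a fixed-ratio pixel-space model
    like 4x_NMKD-Siax_200k."""
    return w * model_scale, h * model_scale

lw, lh = pixel_to_latent(512, 512)   # (64, 64) latent
print(latent_upscale(lw, lh, 1.5))   # (96, 96) latent -> decodes to 768x768
print(model_upscale(512, 512))       # (2048, 2048) pixels
```

This is why a model like RealESRGAN_x4plus_anime_6B only applies on the pixel side: it transforms decoded images, not latent grids.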
* If you are going for fine details, don't upscale in 1024x1024 tiles on an SD1.5 model unless the model is specifically trained on such large sizes. Upscaling on larger tiles will be less detailed and more blurry.

ComfyUI uses a flowchart diagram model. It's especially amazing with SD1.5. For the samplers I've used dpmpp_2a (as this works with the Turbo model) but unsample with dpmpp_2m; for me this gives the best results. There is no tiling in the default A1111 hires fix. Now I have made a workflow that has an upscaler in it, and it works fine; the only thing is that it upscales everything, and that is not worth the wait with most outputs. On denoise, 0.45 is the minimum and fairly jagged, while 0.80 is usually mutated but sometimes looks great. Though, from what someone else stated, it comes down to use case.

Welcome to the unofficial ComfyUI subreddit. Please share your tips, tricks, and workflows for using this software to create your AI art. The upscale model database categories are: Universal Models, Official Research Models, Art/Pixel Art, Model Collections, and Pretrained Models.

It turns out lovely results, but I'm finding that when I get to the upscale stage the face changes to something very similar every time. Since I'm using XL I skip hires fix and go straight to img2img, and do an SD Upscale by 2x. I believe it should work with 8GB VRAM, provided your SDXL model and upscale model are not super huge. This model yields way better results. I rarely use upscale-by-model on its own because of the odd artifacts you can get.

To get the models, click on Install Models in the ComfyUI Manager menu. Download this first, put it into the folder inside ComfyUI called custom_nodes, and restart ComfyUI. You should then see a new button on the left tab (the last one); click it, then click Install Missing Custom Nodes and install the one you need. Once you have installed it, restart ComfyUI once more and it should work.
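The tiling advice above can be made concrete with a little arithmetic. This sketch computes overlapping tile origins the way tiled upscalers conceptually do; the 512px tile and 64px overlap are illustrative values of mine, not settings quoted in the thread:

```python
def tile_origins(size, tile, overlap):
    """Return start offsets covering a side of length `size` with
    `tile`-sized windows that overlap by `overlap` pixels."""
    if tile >= size:
        return [0]
    stride = tile - overlap
    starts = list(range(0, size - tile, stride))
    starts.append(size - tile)  # final tile flush with the edge
    return starts

# A 2048px side covered by 512px tiles (native SD1.5 territory) with 64px overlap:
print(tile_origins(2048, 512, 64))  # [0, 448, 896, 1344, 1536]
```

Smaller tiles keep each diffusion pass near the model's training resolution, which is the point of the "don't use 1024x1024 tiles on SD1.5" advice; the overlap exists so tile seams can be blended away.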
Adding in the Iterative Mixing KSampler from the early work on DemoFusion produces far more spatially consistent results, as shown in the second image.

Does anyone have any suggestions? Would it be better to do an iterative upscale, and how about my choice of upscale model? I have almost 20 different upscale models, and I really have no idea which might be best. Hope someone can advise.

For comparison, in A1111 I drop the ReActor output image in the img2img tab, keep the same latent size, use a tile ControlNet model, choose the Ultimate SD Upscale script, and scale it. A pixel upscale using a model like UltraSharp is a bit better (and slower), but it'll still be fake detail when examined closely. SD1.5 combined with ControlNet tile and the Foolhardy upscale model works well. So in those other UIs I can use my favorite upscaler (like NMKD's 4x Superscalers), but I'm not forced to have them only multiply by 4x. Good for depth and OpenPose; so far so good.

Now go back to img2img, mask the important parts of your images, and upscale that. One does an image upscale and the other a latent upscale; an all-in-one workflow would be awesome. Upscaling on larger tiles will be less detailed and more blurry, and you will need more denoise, which in turn will start altering the result too much.

"Latent upscale" is an operation in latent space, and I don't know any way to use the model mentioned above in latent space. Like many XL users out there, I'm also new to ComfyUI and very much just a beginner in this regard. I love to go with an SDXL model for the initial image and a good 1.5 for the diffusion after scaling. The realistic model that worked the best for me is JuggernautXL; even the base 1024x1024 images were coming out nicely. The downside is that it takes a very long time. There's "latent upscale by", but I don't want to upscale the latent image.
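Iterative upscaling, raised as a question above, reaches the target resolution in several modest jumps instead of one big one, re-diffusing at each stage. A sketch of the size schedule; the 1.5x step factor is a common choice of mine for illustration, not a value from the thread:

```python
def iterative_schedule(start, target, step=1.5):
    """Return the intermediate side lengths from `start` up to `target`,
    growing by `step` each round and clamping the last jump."""
    sizes = [start]
    while sizes[-1] < target:
        sizes.append(min(target, int(sizes[-1] * step)))
    return sizes

print(iterative_schedule(512, 2048))  # [512, 768, 1152, 1728, 2048]
```

Each intermediate size gets its own low-denoise sampling pass, which is why iterative approaches tend to stay spatially consistent compared with a single 4x leap.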
ComfyUI is the most powerful and modular diffusion model GUI, API, and backend, with a graph/nodes interface. I'd say it allows a very high level of access and customization, more than A1111, but with added complexity. But it's weird. Also, both have a denoise value that drastically changes the result.

Hi, guys. What's the best method to upscale faces after doing a faceswap with ReActor? It's a 128px model, so the output faces after faceswapping are blurry and low-res. At a denoise setting of 0.25 I get a good blending of the face without changing the image too much.

That's exactly what I ended up planning. I'm a newbie to ComfyUI, so I set up Searge's workflow, then copied the official ComfyUI i2v workflow into it, and I pass into the node whatever image I like. If a caption file exists (e.g. from a SOTA batch captioner like LLaVA), it will be used as the prompt.

I'm not very experienced with ComfyUI, so any ideas on how I can set up a robust workstation utilizing common tools like img2img, txt2img, refiner, model merging, LoRAs, etc.? For example, I can load an image, select a model (4xUltraSharp, for example), and select the final resolution (from 1024 to 1500, for example). The 4x upscalers I've tried aren't great with it; I suspect the starting detail is too low.

Put upscale models in the models/upscale_models folder, then use the UpscaleModelLoader node to load them and the ImageUpscaleWithModel node to use them. I generate an image that I like, then mute the first KSampler, unmute Ultimate SD Upscale, and upscale from that.

I tried all the possible upscalers in ComfyUI (LDSR, latent upscale, several models such as NMKD, the Ultimate SD Upscale node, "hires fix" (yuck!), the Iterative Latent Upscale via pixel space node (mouthful)), and even bought a license from Topaz to compare the results with FastStone (which is great, by the way, for this type of work).
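The folder-and-node recipe above (files in models/upscale_models, UpscaleModelLoader wired into ImageUpscaleWithModel) can be sketched as a small graph description. The node class names come from the thread; the dict layout below is my own simplification for illustration, not ComfyUI's exact workflow JSON schema:

```python
# Illustrative sketch of the model-upscale sub-graph described above.
# Each entry maps a node id to its type and inputs; a list value like
# ["1", 0] means "output 0 of node 1". Node "0" stands in for whatever
# upstream node produced the image (e.g. a VAE Decode).
workflow = {
    "1": {"class_type": "UpscaleModelLoader",
          "inputs": {"model_name": "4x_foolhardy_Remacri.pth"}},
    "2": {"class_type": "ImageUpscaleWithModel",
          "inputs": {"upscale_model": ["1", 0], "image": ["0", 0]}},
}

# The loader node feeds the upscaler node, mirroring the wiring in the text.
print(workflow["2"]["inputs"]["upscale_model"])  # ['1', 0]
```

The takeaway is just the shape of the wiring: one node owns the model file, a second node applies it to an image, and everything else connects around those two.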
I get good results using stepped upscalers, Ultimate SD Upscale, and the like. I'm new to the channel and to ComfyUI, and I come looking for a solution to an upscaling problem.

So latent upscaling gives really nice results, but it is really slow on my 2060 Super. And when purely upscaling, the best upscaler is called LDSR. I don't usually bother going over 4K, though; you get diminishing returns on render times with only 8GB VRAM.

That's because of the model upscale. I'm curious about my best option, operation, workflow, and upscale model. The idea is simple: use the refiner as a model for upscaling instead of using a 1.5 model. Is there a way to "pause the flow" to the latent upscale until a switch is flipped, so that one could do a latent upscale only on the images one favors? It has more settings to deal with than Ultimate Upscale, and it's very important to follow all of the recommended settings in the wiki. I decided to pit the two head to head; here are the results, workflow pasted below (I did not bind it to image metadata because I am using a very custom, weird setup).

For upscaling I mainly used the chaiNNer application with models from the Upscale Wiki Model Database, but I also used the fast stable diffusion Automatic1111 Google Colab and the Replicate website's super-resolution collection. I found a tile model but could not figure it out, as lllite seems to require the input image to match the output, so I'm unsure how it works for scaling with tile.

A few examples of my ComfyUI workflow to make very detailed 2K images of real people (cosplayers in my case) using LoRAs and with fast renders (10 minutes on a laptop RTX 3060): Upscale Latent By: 1.5-ish new size; Seed: 12345 (same seed); CFG: 3 (same CFG); Steps: 5 (same); Denoise: this is where you have to test, and 0.65 seems to be the best. I also converted the base model to Juggernaut-XL-v9.
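The second-pass settings quoted in the posts (upscale latent by 1.5, keep seed, CFG, and steps, vary only denoise) are easy to park in one place. The dict below is just a readable summary of the values from the thread, including the denoise observations (0.45 jagged, around 0.65 best, 0.80 mutated); nothing here is a new recommendation of mine:

```python
# Second-pass settings as described in the thread (Turbo-style low step count).
second_pass = {
    "upscale_latent_by": 1.5,
    "seed": 12345,      # same seed as the first pass
    "cfg": 3,           # same CFG
    "steps": 5,         # same step count
    "denoise": 0.65,    # the one knob to experiment with
}

# Denoise observations reported in the discussion:
denoise_notes = {
    0.45: "minimum, fairly jagged",
    0.65: "seems to be the best",
    0.80: "usually mutated but sometimes looks great",
}

# A 768px side after the 1.5x latent upscale:
print(int(768 * second_pass["upscale_latent_by"]))  # 1152
```

Keeping seed, CFG, and steps fixed isolates denoise as the only variable, which is what makes the 0.45/0.65/0.80 comparison meaningful.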
If you want a better grounding in making your own ComfyUI systems, consider checking out my tutorials. I haven't been able to replicate this in Comfy.

So from VAE Decode you need an "Upscale Image (using model)" node, plus a "Load Upscale Model" node under loaders. Connect the Load Upscale Model node to the Upscale Image (using model) node after VAE Decode, then send that image to your preview/save image node. You can also run a regular AI upscale and then a downscale (4x * 0.5) with an ESRGAN model. I am curious both which nodes are the best for this, and which models.

I tried the llite custom nodes with lllite models and was impressed. Image upscale is less detailed, but more faithful to the image you upscale. If you use Iterative Upscale, it might be better to approach it by adding noise, using techniques like noise injection or an unsampler hook. Getting the absolute best upscales requires a variety of techniques, and often regional upscaling at some points. Thanks.

Still working on the whole thing, but I got the idea down. Which options on the encoder and decoder nodes would work best for this kind of system? I mean tile sizes for the encoder and decoder (512 or 1024?), and the diffusion dtype of the SUPIR model loader: should I leave it as auto, or any ideas? Thank you again and keep up the good work.

Curious if anyone knows the most modern, best ComfyUI solutions for these problems? Detailing/refining: keeping the same resolution but re-rendering with a neural network to get a sharper, clearer image. So my question is, is there a way to upscale an already existing image in Comfy, or do I need to do that in A1111? The images are small (in the 250-pixel range), and I assume most everything is 512 and higher based on SD1.5, but I have some really old images I'd like to add detail to.

Hi, does anyone know if there's an Upscale Model Blend node, like with A1111? Being able to get a mix of two models in A1111 is great. From what I've generated so far, the model upscale edges out the Ultimate Upscale slightly.
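The "run a 4x model then downscale (4x * 0.5)" trick mentioned above is simple arithmetic; a sketch, with the helper name being mine:

```python
def resize_factor(model_scale, desired_scale):
    """Factor for an 'Upscale Image By' step placed after a fixed-ratio
    upscale model, e.g. a 4x ESRGAN model followed by 0.5 gives 2x overall."""
    return desired_scale / model_scale

print(resize_factor(4, 2))    # 0.5   (4x model * 0.5 = 2x overall)
print(resize_factor(4, 1.5))  # 0.375 (4x model * 0.375 = 1.5x overall)
```

This is the standard workaround for models that only come in a fixed 4x ratio: let the model add detail at 4x, then resample down to the scale you actually wanted.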
In other UIs, one can upscale by any model (say, 4xSharp), and there is an additional control on how much that model will multiply (often a slider from 1 to 4 or more). Then add another node under loaders: the "Load Upscale Model" node.

This generates an SD1.5 image and upscales it to 4x the original resolution (512 x 512 to 2048 x 2048) using Upscale with Model, Tile ControlNet, Tiled KSampler, Tiled VAE Decode, and colour matching. You can use it on any picture; you will need ComfyUI_UltimateSDUpscale.

Even with ControlNets, if you simply upscale and then de-noise latents, you'll get weird artifacts, like the face in the bottom right instead of a teddy bear. With a higher denoise, it adds appropriate details. Then output everything to Video Combine.

So your workflow should look like this: KSampler (1) -> VAE Decode -> Upscale Image (using Model) -> Upscale Image By (to downscale the 4x result to the desired size) -> VAE Encode -> KSampler (2).

So I made an upscale test workflow that uses the exact same latent input and destination size. It uses ControlNet tile with Ultimate SD Upscale. Model: base SD v1.5, see the workflow for more info. Edit: I changed models a couple of times, restarted Comfy a couple of times, and it started working again. Messing around with upscale-by-model is pointless for hires fix. Latent upscale looks much more detailed, but gets rid of the detail of the original image. Do you all prefer separate workflows or one massive, all-encompassing workflow? The same seed is probably not necessary and can cause bad artifacting via the "burn-in" problem when you stack same-seed samplers.
I'm sure I'm just doing something wrong when implementing the CN. Using RV 5.1 and LCM for 12 samples at 768x1152, then a 2x image upscale model, I'm consistently getting the best skin and hair details I've ever seen.

In this ComfyUI tutorial we look at my favorite upscaler, the Ultimate SD Upscaler; it doesn't seem to get as much attention as it deserves. Here is a workflow that I use currently with Ultimate SD Upscale: basically txt2img, img2img, and a 4x upscale with a few different upscalers, such as 4x-UltraSharp.pth or 4x_foolhardy_Remacri.pth. For SD1.5 I'd go for Photon, RealisticVision, or epiCRealism. After generating my images I usually do hires fix.

For the best results, diffuse again with a low denoise, tiled or via Ultimate SD Upscale (without scaling!), e.g. with a 2x upscaler model or an ESRGAN model, and upscale from that. I usually use 4x-UltraSharp for realistic videos and 4x-AnimeSharp for anime videos. Usually I use two of my workflows.

So I was looking through the ComfyUI nodes today and noticed that there is a new one, called SD_4XUpscale_Conditioning, which adds support for x4-upscaler-ema.safetensors (the SD 4x upscale model).

Finally, AUTOMATIC1111 has fixed the high VRAM issue in pre-release version 1.0-RC; it's taking only 7.5GB of VRAM while swapping the refiner too. Use the --medvram-sdxl flag when starting. I'm using a workflow that is, in short, SDXL >> ImageUpscaleWithModel (using a 1.5 model) >> FaceDetailer.
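The per-content model choices mentioned above (4x-UltraSharp for realistic material, 4x-AnimeSharp for anime) fit a tiny lookup; a sketch, where the Remacri fallback for unlisted styles is my own assumption, though that model is also named in the thread:

```python
# Model picks reported in the thread, keyed by content style.
MODEL_BY_STYLE = {
    "realistic": "4x-UltraSharp.pth",
    "anime": "4x-AnimeSharp.pth",
}

def pick_upscale_model(style):
    # Assumption: fall back to Remacri for anything not listed.
    return MODEL_BY_STYLE.get(style, "4x_foolhardy_Remacri.pth")

print(pick_upscale_model("anime"))  # 4x-AnimeSharp.pth
```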
That's because latent upscale turns the base image into noise (blur). If you want actual detail in a reasonable amount of time, you'll need a second pass with a second sampler.

TL;DR: Both seem to do better and worse in different parts of the image, so potentially combining the best of both (Photoshop, segmentation/masking) can improve your upscales.

I want to upscale my image with a model and then select its final size. Upscale x1.5 works with a 1.5 model and can be applied to Automatic easily. FWIW, I was using it with the PatchModelAddDownscale node; you don't need that many steps. From there you can use a 4x upscale model and run sampling again at a low denoise if you want higher resolution. Custom nodes are Impact Pack for wildcards, rgthree because it's the shit, and Ultimate SD Upscale.