ComfyUI workflow JSON examples: a Reddit digest

The downloadable file contains a JSON to import into ComfyUI with two ready-to-use workflows: one built around Portrait Master, dedicated to portraits, and one where you enter the positive and negative prompts manually. Just download it, drag it inside ComfyUI, and you'll have the same workflow you see above. For more details on using the workflow, check out the full guide; instructions and a listing of the necessary resources are in Note files. Note that the workflow uses the Masquerade custom nodes, but they're a bit broken, so I can't be totally sure. I downloaded the JSON, but I don't have the images you set up as an example.

ControlNet inpaint example. The idea is that it creates a tall canvas and renders four vertical sections separately, combining them as it goes. Workflow in JSON format. It's solvable; I've been working on a workflow for this for about two weeks, trying to perfect it for ComfyUI, but no matter what you do there is usually some kind of artifacting. It's a challenging problem to solve.

Can your ComfyUI-serverless be adapted to work if the ComfyUI workflow is hosted on RunPod, Kaggle, Google Colab, or some other site? Any help would be appreciated.

Since I started using ComfyUI I have downloaded tons of workflows, but only around 10% of them work. While I was kicking around in LtDrData's documentation today, I noticed the ComfyUI Workflow Component. Hello everyone, I have some exciting updates to share for One Button Prompt: it now officially supports ComfyUI, and there is a new Prompt Variant mode.

Upcoming tutorial: SDXL LoRA, plus using a 1.5 LoRA with SDXL, and upscaling. Future tutorials planned: prompting practices, post-processing images, batch trickery, and networking ComfyUI.

For each of the sequences I generated about ten and chose the one I liked best. Plus, you want to upscale in latent space if possible. Check out ComfyUI here: https://github.com/comfyanonymous/ComfyUI

You can use folders in your model directories too, e.g. cascade/clip_model.safetensors vs 1.5/clip_some_other_model.safetensors; it makes them easier to remember.

This workflow can use LoRAs, ControlNets, negative prompting with the KSampler, dynamic thresholding, inpainting, and more. It's not meant to overwhelm anyone with complex, cutting-edge tech, but rather to show the power of building modules/groups as blocks and merging them into a workflow through muting. While I have you, can I ask where best to insert the base LoRA in your workflow?

You can then load or drag the following image in ComfyUI to get the workflow: Flux Schnell.
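That drag-and-drop trick works because the stock ComfyUI frontend embeds the full graph in the PNG's metadata (Reddit strips it on upload, which is why re-uploaded images stop loading workflows). As a minimal sketch, assuming an image saved by the stock frontend, you can pull the JSON out yourself with Pillow; the filenames are placeholders:

```python
# Sketch: recover the embedded workflow from a ComfyUI-generated PNG.
# Assumes the stock ComfyUI frontend saved the file; it writes the graph
# into PNG text chunks named "workflow" (UI graph) and "prompt" (API graph).
import json
from PIL import Image

def extract_workflow(png_path: str):
    info = Image.open(png_path).info        # PNG text chunks land here
    raw = info.get("workflow")              # missing if metadata was stripped
    return json.loads(raw) if raw else None

wf = extract_workflow("example_output.png")  # hypothetical filename
if wf:
    with open("recovered_workflow.json", "w") as f:
        json.dump(wf, f, indent=2)          # re-importable via the Load menu
```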
If you look at the ComfyUI examples for area composition, you can see that they're just using the nodes Conditioning (Set Mask / Set Area) -> Conditioning Combine -> positive input on the KSampler.

This tool also lets you export your workflows in a “launcher.json” file format, which lets anyone using the ComfyUI Launcher import your workflow with 100% reproducibility. It's simple and straight to the point.

ComfyUI tip: add a node to your workflow quickly via double-clicking. For example, if you want to use FaceDetailer, just type "Face".

In the ComfyUI Manager, select Install Model, then scroll down to the ControlNet models and download the second ControlNet tile model (the description specifically says you need it for tile upscaling). The proper way to use it is with the new SDTurboScheduler node, but it might also work with the regular schedulers.

I'm here to share my current workflow for switching between prompts. It does not work with SDXL for me at the moment.

The best workflow examples are the GitHub examples pages. Check the ComfyUI image examples in the link; then you finally have an idea of what's going on, and you can move on to ControlNets, IPAdapters, detailers, CLIP Vision, and the rest.

Discover the ultimate workflow with ComfyUI in this hands-on tutorial, where I guide you through integrating custom nodes and refining images with advanced tools. Here you can download my ComfyUI workflow with 4 inputs. So in this workflow each of them will run on your input image, and you can select the one that produces the best results. But as a base to start from, it'll work.

It would be really nice if there were a workflow folder under Comfy as a default save/load spot. Load the .json workflow file from the C:\Downloads\ComfyUI\workflows folder.

Save this image, then load it or drag it onto ComfyUI to get the workflow; the entire Comfy workflow is there for you to use. The video is just too fast.

When rendering human creations, I still find significantly better results with 1.5 models like epicRealism or Juggernaut, but I know once more models come out with the SDXL base, we'll see incredible results. [Load VAE] and [Load LoRA] are not plugged in in this config for DreamShaper.

The new version uses two ControlNet inputs: a 9x9 grid of OpenPose faces, and a single OpenPose face. Users can drag and drop nodes to design advanced AI art pipelines, and also take advantage of libraries of existing workflows. This is essentially the same example workflow that exists (along with many others) on Kosinkadink's AnimateDiff Evolved GitHub.
A repository of well-documented, easy-to-follow workflows for ComfyUI: cubiq/ComfyUI_Workflows.

I'm making changes to several nodes in a workflow, but only specific ones are rerunning, for example the KSampler node.

AP Workflow 7.0 for ComfyUI: now with support for Stable Diffusion Video, a better Upscaler, a new Caption Generator, and a new Inpainter (with inpainting/outpainting).

It covers the following topics: merge two images together with this ComfyUI workflow. You can download this WebP animated image and load it or drag it onto ComfyUI to get the workflow. I think it is just the same as the 1.5 one.

ComfyUI examples: ComfyUI Fooocus Inpaint with Segmentation workflow; SDXL default ComfyUI workflow; Flux Dev. From the ComfyUI_examples, there are two different 2-pass (HiRes fix) methods: one is latent scaling, the other is non-latent scaling. Now there's also a `PatchModelAddDownscale` node.

A group that allows the user to perform a multitude of blends between image sources, as well as add custom effects to images.

A few months ago I suggested the possibility of creating a frictionless mechanism to turn ComfyUI workflows (no matter how complex) into simple and customizable front-ends for end users. Creating such a workflow with the default core nodes of ComfyUI is not possible at the moment. I provide one example JSON to demonstrate how it works. The drawback of ComfyUI is that it cannot change the topology of the workflow once it has already started running.

I also combined ELLA in the workflow to make it easier to get what I want. Has anyone else messed around with GLIGEN much? Thanks.

Mainly it's a workflow designed to make or change an initial image to send to our sampler. Two workflows included. It looks freaking amazing! Anyhow, here is a screenshot and the .json of the file I just used. Prompt: "A couple in a …"

I was just using Sytan's workflow with a few changes to some of the settings, and I replaced the last part of his workflow with a two-step upscale using the refiner model via Ultimate SD Upscale, like you mentioned.
ComfyUI won't load my workflow JSON.

I improved on my previous expressions workflow for ComfyUI by replacing the attention-couple nodes with area composition ones. Also, it's possible to share the setup as a project of some kind and share this workflow with others for fine-tuning.

Features:
Ability to change default paths (loaded from the paths.json file; use paths-example.json as a template, and verify/edit the paths to your model folders).
Ability to change default values of UI settings (loaded from the settings.json file; use settings-example.json as a template).
Ability to save full metadata for generated images (as JSON or embedded in PNG, disabled by default).
Ability to load prompt information from JSON and PNG files.
A node/graph/flowchart interface to experiment in.

AP Workflow 6.0 for ComfyUI: now with support for SD 1.5 and HiRes Fix, IPAdapter, Prompt Enricher via local LLMs (and OpenAI), and a new Object Swapper + Face Swapper, FreeU v2, XY Plot, ControlNet and ControlLoRAs, SDXL Base + Refiner, Hand Detailer, Face Detailer, Upscalers, ReVision, etc. I did Install Missing Custom Nodes, Update All, and so on, but there are many issues every time I load the workflows, and it looks pretty complicated to solve.

However, without the reference_only ControlNet this works poorly. I notice the names of the settings in the Krita JSON don't match what's in Comfy's JSON at all, so I can't simply copy them across.

You can find the workflows and more image examples below: ComfyUI SUPIR Upscale Workflow. Animate your still images with this AutoCinemagraph ComfyUI workflow (0:07).
Animation using ComfyUI: workflow by Future Thinker.

If you have the SDXL 0.9 leaked repo, you can read the README.md file yourself and see that the refiner is in fact intended as img2img, basically as you see being done in the ComfyUI example workflow someone posted.

The workflow is saved as a JSON file. The second workflow is called "advanced", and it uses an experimental way to combine prompts for the sampler. It's meant to be as fast as possible to get the best clips and later upscale them (for 12 GB of VRAM the max is about 720p resolution). It's perfect for animating hair while keeping the rest of the face still, as you can see in the examples.

Upload your JSON workflow so that others can test it for you. You can pull PNGs from Automatic1111 for the creation of some Comfy workflows, but as far as I can tell it doesn't work with ControlNet or ADetailer images, sadly.

Stable Cascade model locations:
Stage A >> \models\vae\SD Cascade stage_a.safetensors (73.7 MB)
Stage B >> \models\unet\SD Cascade stage_b_bf16.safetensors (3.13 GB)
Stage C >> \models\unet\SD Cascade …
Do you have ComfyUI Manager?
It's just not intended as an upscale from the resolution used in the base-model stage. The good thing is that no upscale is needed.

If necessary, updates of the workflow will be made available on GitHub. If you find it confusing, please post here for help or create an issue on GitHub. This workflow needs a bunch of custom nodes and models that are a pain to track down: ComfyUI Path Helper, MarasIT Nodes, KJNodes, Mikey Nodes, AnimateDiff, AnimateDiff Evolved, IPAdapter plus.

If you drag in a PNG made with ComfyUI, you'll see the workflow in ComfyUI with the nodes, etc. Yup, all images generated in the main ComfyUI frontend have the workflow embedded into the image like that (right now anything that uses the ComfyUI API doesn't have that, though).

ComfyUI was generating normal images just fine. It will load a workflow from JSON via the Load menu, but not drag and drop. No errors in the shell on drag and drop; nothing on the page updates at all. Tried multiple PNG and JSON files, including multiple known-good ones. Tried another browser (both FF and Chrome). Pulled the latest from GitHub. I removed all custom nodes. Still have the problem. I haven't decided if I want to go through the frustration of trying this again after spending a full day trying to get the last .json to work.

Thanks for the tips on Comfy! I'm enjoying it a lot so far. The ComfyUI workflow is just a bit easier to drag and drop and get going right away. ComfyUI is a completely different conceptual approach to generative art. It encapsulates the difficulties and idiosyncrasies of Python programming by breaking the problem down. Each workflow runs in its own isolated environment, which prevents your workflows from suddenly breaking when updating a workflow's custom nodes, ComfyUI, etc.

I would also love to see some repo of actual JSON or images (since Comfy does build the workflow from the image if everything necessary is installed). Save your workflow using this format, which is different from the normal JSON workflows.

Pick an image that you want to inpaint. Andy Lau is ready for inpainting. Actually, natsort is not involved in Junction at all. Other than that, there were a few mistakes in version 3.1 that are now corrected.

For your all-in-one workflow, use the Generate tab. That way the Comfy Workflow tab in Swarm will be your version of ComfyUI, with your custom nodes.
This is the link to the workflow. This is just a slightly modified ComfyUI workflow from an example provided in the examples repo. You can load these images in ComfyUI to get the full workflow.

Here's the big issue AI-only techniques face for filmmaking: because there are an infinite number of things that can happen in front of a virtual camera, there are an infinite number of variables and scenarios that generative models will face.

Does anyone else here use this Photoshop plugin? I managed to set up the sdxl_turbo_txt2img_api JSON file that is described in the documentation. But all of the other API workflows listed in the Custom ComfyUI Workflow dropdown in the plugin window within Photoshop are non-functional, giving variations of "ComfyUI Node type is not found" errors.

With the ComfyUI Workflow Manager, can I easily change or modify where my JSON workflows are stored and saved? Yes, we just enabled this feature.

Hi, is there a tutorial on how to do a workflow with face restoration in ComfyUI? I downloaded the Impact Pack, but I really don't know where to go from there.

It would require many specific image-manipulation nodes to cut an image region, pass it through the model, and paste it back. Upscaling ComfyUI workflow. Or what I started doing tonight: disconnect my upscale section but put a Load Image box at the start of the upscale, generate a batch of images with a fixed seed, and if I like one of them, load it at the start of the upscale and regenerate; because the seed hasn't changed, it skips ahead. And the video in the post shows a rather simple layout that proves out the building blocks of a mute-based, context-building workflow.

EZ way: just download this one and run it like another checkpoint ;) https://civitai.com/models/628682/flux-1-checkpoint

I've also added a `TaraApiKeySaver`. It's pretty easy to prune a workflow in JSON before sending it to ComfyUI; rgthree does it, and I've written CLI tools to do the same.
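A minimal sketch of that kind of pre-flight JSON editing, assuming the API-format export (a flat map of node ids to class_type/inputs); the filename, prompt text, and the idea of randomizing every sampler seed are illustrative, not anyone's canonical tool:

```python
# Sketch: tweak an API-format ComfyUI workflow before queueing it.
# API-format JSON maps node ids to {"class_type": ..., "inputs": {...}}.
import json
import random

with open("workflow_api.json") as f:      # exported via "Save (API Format)"
    wf = json.load(f)

for node in wf.values():
    if node.get("class_type") == "KSampler":
        node["inputs"]["seed"] = random.randint(0, 2**32 - 1)
    if node.get("class_type") == "CLIPTextEncode":
        # Careful: this hits positive AND negative encoders alike;
        # match on the node id instead if you need just one of them.
        node["inputs"]["text"] = "a cabin in a snowy forest"

with open("workflow_api_edited.json", "w") as f:
    json.dump(wf, f, indent=2)
```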
So I downloaded the workflow picture and dragged it into ComfyUI, but it doesn't load anything; it looks like the metadata is not complete. When I download it, it downloads as WebP, without the workflow. I was not aware that Reddit strips off the metadata of the PNG. The example pictures do load a workflow, but they don't have a label or text that indicates whether it's version 3.1 or not.

You can find the Flux Dev diffusion model weights here. WAS suite has some workflow stuff in its GitHub links somewhere as well. But mine do include workflows, for the most part in the video description.

ControlNet and T2I-Adapter ComfyUI workflow examples. Note that in these examples the raw image is passed directly to the ControlNet/T2I adapter. Each ControlNet/T2I adapter needs the image that is passed to it to be in a specific format, like depth maps, canny maps, and so on, depending on the specific model, if you want good results.

I played with HiDiffusion in ComfyUI with SD 1.5 models, and it easily generated 2K images without any distortion, which is better than Kohya deep shrink.

Download the following inpainting workflow. The first one is very similar to the old workflow and is just called "simple": download the .json file, change your input images and your prompts, and you are good to go! ControlNet Depth ComfyUI workflow.

I was confused by the fact that in several YouTube videos by Sebastian Kamph and Olivio Sarikas I saw them simply drop PNGs into an empty ComfyUI. It might seem daunting at first, but you actually don't need to fully learn how these are connected.

An example of what this workflow can make; hopefully this will be useful to you. (I also fixed the JSON with a better sampler layout.) It is a simple way to compare these methods; it is a bit messy, as I have no artistic cell in my body. Not a specialist, just a knowledgeable beginner. It's the kind of thing that's a bit fiddly to use, so using someone else's workflow might be of limited use to you.

Img2img examples: these are examples demonstrating how to do img2img. Merging two images: a collection of simple but powerful ComfyUI workflows for Stable Diffusion with curated default settings. Its default workflow works out of the box, and I definitely appreciate all the examples for different workflows. Support for SD 1.x, 2.x, SDXL, LoRA, and upscaling makes ComfyUI flexible. Official list of SDXL resolutions (as defined in the SDXL paper): like 1024, 1280, 2048, 1536.

The Ultimate SD Upscale is one of the nicest things in Auto1111: it first upscales your image using a GAN or any other old-school upscaler, then cuts it into tiles small enough to be digestible by SD, typically 512x512, with the pieces overlapping each other. There is also an UltimateSDUpscale node suite (as an extension).
In addition, I provide some sample images that can be imported into the program. Yes, I've experienced that when the JSON file is not good.

For example, we take a simple prompt, create a list, verify it against the guideline, improve it, and then send it to `TaraPrompter` to actually generate the final prompt that we can send. This means using natural-language descriptions to automatically produce the corresponding JSON configurations. Here is an example: "Help me create a ComfyUI workflow that takes an input image, uses SAM to identify and inpaint watermarks for removal, then applies various methods to upscale the watermark-free image."

That being said, even for making apps, I believe using ComfyScript is better than directly modifying JSON, especially if the workflow is complex. You can write workflows in code instead of separate files, use control flow directly, call Python libraries, and cache results across different workflows.

ComfyUI Tattoo Workflow | ComfyUI Workflow | OpenArt.

I recently discovered the existence of the GLIGEN nodes in ComfyUI and thought I would share some of the images I made using them (more in the Civitai post link).

I'm trying to understand how to control the animation from the notes of the author. It seems that if you reduce the linear_key_frame_influence_value of the Batch Creative Interpolation node, to, say, 0.85 or even 0.50, the graph will show lines that are more "spaced out", meaning that the frames are more distributed.

Learned from the following video: "Stable Cascade in ComfyUI Made Simple" (6m56s, posted Feb 19, 2024, by the How Do? channel on YouTube).

Put the flux1-dev.sft file in your ComfyUI/models/unet/ folder.
However, when I change values in some other nodes, like the Canny Edge node or the DW Pose Estimator, they don't rerun.

When I saw a certain Reddit thread, I was immediately inspired to test and create my own PIXART-Σ (PixArt-Sigma) ComfyUI workflow. The trick of this method is to use the new SD3 ComfyUI nodes for loading t5xxl_fp8_e4m3fn.safetensors (5 GB, from the infamous SD3, instead of the 20 GB default from PixArt).

Simple workflow for using the new Stable Video Diffusion model in ComfyUI for image-to-video generation. I used the workflow kindly provided by the user u/LumaBrik, mainly playing with parameters like CFG guidance, augmentation level, and motion bucket. Achieves high FPS using frame interpolation (with RIFE).

Expression code: modified from ComfyUI-AdvancedLivePortrait. For the face-crop model, see comfyui-ultralytics-yolo; download face_yolov8m.pt or face_yolov8n.pt into models/ultralytics/bbox/.

Hi everyone, I've been using SD/ComfyUI for a few weeks now, and I find myself overwhelmed with the number of ways to do upscaling.

If you haven't already, install ComfyUI and Comfy Manager; you can find instructions on their pages. Download this workflow and drop it into ComfyUI, or use one of the workflows others in the community made. When the workflow opens, download the dependent nodes by pressing "Install Missing Custom Nodes" in Comfy Manager. Input your choice of checkpoint and LoRA in their respective nodes in Group A. Click New Fixed Random in the Seed node in Group A. Mute the two Save Image nodes in Group E. Click Queue Prompt to generate a batch of 4 image previews in Group B.

Here are the models that you will need to run this workflow: the LooseControl model, a ControlNet checkpoint, v3_sd15_adapter.ckpt, and v3_sd15_mm.ckpt. For ease, you can download these models from here. This workflow requires quite a few custom nodes and models to run: PhotonLCM_v10.safetensors, sd15_t2v_beta, sd15_lora_beta, plus the node packs ComfyUI-Custom-Scripts, ComfyUI-Impact-Pack, ComfyUI-Image-Selector, and rgthree-comfy.

So, I started to get into AnimateDiff vid2vid using ComfyUI yesterday and am starting to get the hang of it; where I keep running into issues is identifying key frames for prompt travel.

I use a Google Colab VM to run ComfyUI. You can save the workflow as a JSON file and load it again from that file. It is not much of an inconvenience when I'm at my main PC, but when I'm working from a work PC or a tablet it is an inconvenience to obtain my previous workflow, so every time I reconnect I have to load a pre-saved workflow to continue where I started.

Do you want to save the image? Choose a Save Image node and you'll find the outputs in the folders, or you can right-click and save that way too. People are running bots which generate art all the time and post it automatically to Discord and other places. I tried to find a good inpaint workflow and just found a bunch of wild workflows that wanted a million nodes and had a bunch of different functions.

If you want to automate it, I'm pretty sure there are Python packages that can do it, maybe even a tool that can read information out of a file, for example a ComfyUI workflow JSON file. I am thinking of the scenario where you have generated, say, a thousand images with a randomized prompt and low-quality settings, have selected the 100 best, and want to create high-quality versions.

Krita's JSON settings: first, I generated a series of images in a 9:16 aspect ratio, some in ComfyUI with SDXL and others in Midjourney.
In case you ever wanted to see what would happen if you went from Prompt A to Prompt B with multiple steps in between, now you can! (The workflow was intended to be attached to the screenshot at the bottom of this post; instead, here's a link.)

ComfyUI's inpainting and masking ain't perfect. Simply download the .json file, change your input images and your prompts, and you are good to go! Inpainting workflow.

Img2img works by loading an image, like this example image, converting it to latent space with the VAE, and then sampling on it with a denoise lower than 1.0. The denoise controls the amount of noise added to the image. You can use more steps to increase the quality.

You can just use someone else's workflow for 0.9 (just search YouTube for "SDXL 0.9 workflow"; the one from Olivio Sarikas' video works just fine), replace the models with 1.5 models and 1.5 base models, and modify the latent image dimensions and upscale values.

Hey all, I'm attempting to replicate my workflow from 1111 and SD 1.5. You can achieve the same thing in A1111; Comfy is just awesome because you can save the workflow 100% and share it with others.

With some nervous trepidation, I release my first node for ComfyUI: an implementation of the DemoFusion iterative mixing sampling process. Is there a way to load the workflow from an image? [If for some reason you want to run something that is less than 16 frames long, all you need is this part of the workflow.]

We have four main sections: Masks, IPAdapters, Prompts, and Outputs. Grab the ComfyUI workflow JSON here. It's a bit messy, but if you want to use it as a reference, it might help you.

Now I've enabled Developer mode in Comfy and have managed to save the workflow in JSON API format, but I need help setting up the API. In ComfyUI, go into Settings and enable the dev mode options; that will give you a Save (API Format) option on the main menu. That actually does create a JSON, but note it is different from the normal workflow JSON. I just tried a few things, and it looks like the only way I'm able to make this work is to use the Save (API Format) button in Comfy and then upload the resulting JSON.
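Once you have that API-format file, "setting up the API" mostly means POSTing it to a running ComfyUI server. A bare-bones sketch, assuming a stock server at the default local address (this mirrors ComfyUI's own basic API example):

```python
# Sketch: queue an API-format workflow on a local ComfyUI server.
# Assumes ComfyUI is running with default settings at 127.0.0.1:8188.
import json
import urllib.request

with open("workflow_api.json") as f:       # from "Save (API Format)"
    prompt_graph = json.load(f)

payload = json.dumps({"prompt": prompt_graph}).encode("utf-8")
req = urllib.request.Request(
    "http://127.0.0.1:8188/prompt",
    data=payload,
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    job = json.loads(resp.read())
print(job)  # includes a "prompt_id" you can use to look up results
```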
for example, is "I want to compose a very K12sysadmin is for K12 techs. Just load your image, and prompt and go. The only references I've been able to find makes mention of this inpainting model, using raw python or auto1111. In ComfyUI go into settings and enable dev mode options. Here is an example of 3 characters each with its own pose, outfit, features, and expression : /r/StableDiffusion is back open after the protest of Reddit killing open API access, which will bankrupt app developers, hamper moderation, and exclude blind users from the site. Just wanted to share that I have updated comfy_api_simplified package, and now it can be used to send images, run workflows and receive images from the running ComfyUI server. Comfy UI is actually very good, it has many capabilities that are simply beyond other interfaces. That actually does create a json, but the json Hey all- I'm attempting to replicate my workflow from 1111 and SD1. 2/Run the step 1 Workflow ONCE - all you need to change is put in where the original frames are and the dimensions of the output that you wish to have. You signed out in another tab or window. json file - use paths-example. Step 2: Upload an image. 0 and upscalers It's a complex workflow with a lot of variables, I annotated the workflow trying to explain what is going on. In the original post is a youtube link where everything is explained while zooming in on the workflow in Comfyui. It didn't work out. safetensors and 1. Where can one get such things? It would be nice to An All-in-One FluxDev workflow in ComfyUI that combines various techniques for generating images with the FluxDev model, including img-to-img and text-to-img. Installing ComfyUI. SDXL most definitely doesn't work with the old control net. safetensors 73. it is VERY memory efficient and has a great deal of flexibility especially where a user has need of a complex set of instructions I'm looking for a workflow (or tutorial) that enables removal of an object or region (generative fill) in an image. Or open it in Visual Code and that can tell you if it ok or not. Think about mass producing stuff, like game assets. For Flux schnell you can get the checkpoint here that you can put in your: ComfyUI/models/checkpoints/ directory. Oh, and if you would like to try out the workflow, check out the comments! I couldn't put it in the description as my account awaits verification. json. Search the sub for what you need and download the . This probably isn't the completely recommended workflow though as it has POS_G and NEG_G prompt windows but none for POS_L, POS_R, NEG_L, and NEG_R which is part of SDXL's trained prompting format. Merge 2 images together with this ComfyUI workflow. We would like to show you a description here but the site won’t allow us. Upscaling ComfyUI workflow. Please keep posted images SFW. If you want to automate it, I'm pretty sure there are Python packages that can do it, maybe even a tool that can read information out of a file, like for example ComfyUI workflow json file. Much appreciated if you can post the json workflow or a picture generated from this workflow so it can be easier to setup. Either you maintain a ComfyUI install with every custom node on the planet installed (don't do this), or you steal some code that consumes the JSON and draws the workflow & Here are approx. A video snapshot is a variant on this theme. Then there's a full render of the image with a prompt that describes the whole thing. Tried another browser (both FF and Chrome. 
Example: an All-in-One FluxDev workflow in ComfyUI that combines various techniques for generating images with the FluxDev model, including img2img and txt2img (Ling-APE/ComfyUI-All-in-One-FluxDev). For Flux Schnell you can get the checkpoint here; put it in your ComfyUI/models/checkpoints/ directory.

The ComfyUI workflow is here; if anyone sees any flaws in my workflow, please let me know. For example: ffmpeg -i my-cool-video.mp4 -vf fps=10/1 frame%03d.png (this extracts frames from a video at 10 fps, which you can then feed through a batch workflow).

ComfyUI provides a powerful yet intuitive way to harness Stable Diffusion through a flowchart interface. It is very memory-efficient and has a great deal of flexibility, especially where a user needs a complex set of instructions. It is much more coherent and relies heavily on the IPAdapter source image, as you can see in the gallery.

The way ComfyUI is built, every image or video saves the workflow in its metadata, which means that once an image has been generated with ComfyUI, you can simply drag and drop it to get that complete workflow. OP probably thinks that ComfyUI has the workflow included with the PNG, and it does. I looked into the code: when you save your workflow you are actually "downloading" the JSON file, so it goes to your default browser download folder (on Ubuntu, that's Downloads).

I'm looking for a workflow (or tutorial) that enables removal of an object or region (generative fill) in an image. I've tried using an empty positive prompt (as suggested in demos) and describing the content to be replaced, without success.

This probably isn't the fully recommended workflow, though, as it has POS_G and NEG_G prompt windows but none for POS_L, POS_R, NEG_L, and NEG_R, which are part of SDXL's trained prompting format. EDIT: For example, this workflow shows the use of the other prompt windows.

Oh, and if you would like to try out the workflow, check out the comments! I couldn't put it in the description, as my account awaits verification.

Honestly, the real way this needs to work is for every custom node author to publish a JSON file that describes the functionality of each node's inputs/outputs and the general functionality of the node(s). This JSON file could then be processed automatically across multiple repos to construct an overall map of everything. As it stands, either you maintain a ComfyUI install with every custom node on the planet installed (don't do this), or you steal some code that consumes the JSON and draws the workflow.
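A running ComfyUI server already exposes something close to that map: GET /object_info dumps every registered node class (including custom nodes installed on that server) with its input and output specs. A small sketch under that assumption:

```python
# Sketch: fetch the node-definition map from a running ComfyUI server.
# GET /object_info lists every registered node class with its inputs/outputs,
# so you can check what a downloaded workflow JSON needs before loading it.
import json
import urllib.request

with urllib.request.urlopen("http://127.0.0.1:8188/object_info") as resp:
    object_info = json.loads(resp.read())

ksampler = object_info["KSampler"]  # a core node that should always exist
print(list(ksampler["input"]["required"]))  # e.g. model, seed, steps, cfg, ...
```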
(I've also edited the post to include a link to the workflow.) That's awesome! ComfyUI had been one of the two repos I keep installed: the SD-UX fork of Auto, and this.

Here are approximately 150 workflow examples of things I created with ComfyUI and AI models from Civitai; I've moved my workflow host. This repo contains examples of what is achievable with ComfyUI; go to the GitHub repos for the example workflows. All the images in this repo contain metadata, which means they can be loaded into ComfyUI with the Load button (or dragged onto the window) to get the full workflow that was used to create the image. Many of the workflow examples can be copied either visually or by downloading a shared file containing the workflow.

Like many XL users out there, I'm also new to ComfyUI and very much just a beginner in this regard. I couldn't decipher it either, but I think I found something that works. I'm new to ComfyUI: does the sample image work as a "workflow save", as if it were a JSON with all the nodes? I just find it weird that in the official example the nodes are not the same as when you try to add them yourself. It is just the same as SD 1.5 but with 1024x1024 latent noise. Resolutions of 512x512, 600x400, and 800x400 are the limit of what I've tested; I don't know how it will work at higher resolutions.

Flux Schnell is a distilled 4-step model. SDXL Turbo is an SDXL model that can generate consistent images in a single step. Think Diffusion's Stable Diffusion ComfyUI top 10 cool workflows.

SECOND UPDATE (HOLY COW I LOVE COMFYUI edition): look at that beauty! Spaghetti no more.

Hello fellow ComfyUI users, this is my workflow for testing different methods to improve image resolution. I hope that having a comparison was useful nevertheless.

*Edit*: the KSampler is where the image generation takes place, and it outputs a latent image. The workflow is basically an image loader combined with a whole bunch of little modules for doing various tasks, like building a prompt with an image, generating a color gradient, or batch-loading images. 😋

All of these were generated using this simple Comfy workflow. Mixing ControlNets: standard A1111 inpainting works mostly the same as the ComfyUI example you provided.

ComfyUI - Ultimate Starter Workflow + Tutorial. Heya, I've been working on this workflow for like a month, and it's finally ready, so I also made a tutorial on how to use it. I've got three tutorials that can teach you how to set up a decent ComfyUI inpaint workflow.
What it's great for: once you've achieved the artwork you're looking for, it's time to delve deeper and use inpainting. Cross-post from r/comfyui: new IPAdapter workflow. I've been especially digging the detail in the clothing more than anything else.

Search the sub for what you need and download the .json file from CivitAI. Is this the best way to install ControlNet? Because when I tried doing it manually, … SDXL most definitely doesn't work with the old ControlNet. In 1111, using image-to-image, you can batch-load all frames of a video, batch-load ControlNet images, or even masks, and as long as they share the same name as the main video frames, they will be associated with the image during batch processing. Making a bit of progress this week in ComfyUI.
