So just lora1, lora2, lora3, etc. The easiest way to make a control node is to do the conversion, then drag a line out from that new point. The weights are also interpreted differently.

Having been generating very large batches for character training (per this tutorial, which worked really well for me the first time), it occurs to me that the lack of interactivity of the process might make it an ideal use case for ComfyUI, and the lower overhead of ComfyUI might make it a bit quicker.

I'm starting to make my way towards ComfyUI from A1111. Start with simple workflows.

Welcome to the unofficial ComfyUI subreddit.

Here's an example of building a prompt from a randomly assembled string.

AP Workflow for ComfyUI, now featuring the SUPIR next-gen upscaler, IPAdapter Plus v2 nodes, a brand new Prompt Enricher, DALL-E 3 image generation, an advanced XYZ Plot, 2 types of automatic image selectors, and the capability to automatically generate captions for an image directory.

Can someone please explain or provide a picture of how to connect 2 positive prompts to a model? 1st prompt: (Studio Ghibli style, Art by Hayao Miyazaki:1.2), Anime Style, Manga Style, Hand drawn, cinematic, Sharp focus, humorous illustration, big depth of field, Masterpiece, concept art, trending on ArtStation, Vivid colors, Simplified style, trending on CGSociety.

The most powerful and modular diffusion model GUI, API and backend with a graph/nodes interface. - comfyanonymous/ComfyUI

Thanks! Now if I'm doing video, ComfyUI all the way.

Jan 10, 2024 · XYZ grids are a fascinating tool for AI art, as they can compare different checkpoints, loras, cfg scale, steps, etc. Configure the input parameters according to your requirements. Are there any example workflows of this? There are a couple of custom node packages with XY grids that are easy enough to use with a single sampler, but I can't wrap my head around how to add the refiner.
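For what it's worth, the grid itself is easy to reason about: an XYZ grid is just the Cartesian product of the axis values, one render per cell. A minimal Python sketch; the checkpoint names, CFG scales, and step counts below are made-up placeholders:

```python
from itertools import product

# Hypothetical axis values; swap in your own checkpoints, CFG scales, and step counts.
checkpoints = ["modelA.safetensors", "modelB.safetensors"]
cfg_scales = [5.0, 7.5]
steps = [20, 30]

# Every cell of the XYZ grid is one (checkpoint, cfg, steps) combination.
grid = list(product(checkpoints, cfg_scales, steps))
for ckpt, cfg, n in grid:
    print(f"queue render: {ckpt} cfg={cfg} steps={n}")

print(len(grid))  # 2 * 2 * 2 = 8 renders
```

Adding a fourth axis (e.g. LoRA weight) is just one more list passed to `product`, which is why grid sizes blow up so quickly.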
In A1111 you can also do all this stuff, but then you are trapped in dragging images from A to B and manually adjusting the steps.

Mainly using WAS suite (ignore the multiple CLIPs thing I'm doing; the screenshot is just one I had hanging around). If a box is in red, then it's missing.

Replied in your other thread, but I'll share my comment here for anyone else looking for the answer: upload your text file to the input folder inside ComfyUI and use this as the file path: input/example.txt

For those of you familiar with FL Studio, and specifically with Patcher, you might know what I'm about to describe. Also, if this is new and exciting to you, feel free to post.

Nov 20, 2024 · The main comment to that post makes some very valid points that IMHO still stand to date.

Updated ComfyUI Workflow: SDXL (Base+Refiner) + XY Plot + Control-LoRAs + ReVision + ControlNet XL OpenPose + Upscaler. I have updated the workflow submitted last week, cleaning up the layout a bit and adding many functions I wanted to learn better.

I was going to make a post regarding your tutorial ComfyUI Fundamentals - Masking - Inpainting.

One thing that really bugs me is that I used to love the "X/Y" graph, because if I set the batch to 2, 3, 4 etc. images, it would show ALL of them on the grid PNG, not just the first one.

Detail, consistency and creativity with more complex scenes is where you start seeing differences.

Again, would really appreciate any of your Comfy 101 materials, resources, and creators, as well as your advice re…
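A prompts-from-file setup like the one described above usually boils down to one prompt per line. A small sketch; `load_prompts` is a hypothetical helper, and the temporary file stands in for input/example.txt:

```python
import tempfile
from pathlib import Path


def load_prompts(path):
    """Read one prompt per line, skipping blanks and '#' comment lines.
    Hypothetical helper, not a ComfyUI API."""
    lines = Path(path).read_text(encoding="utf-8").splitlines()
    return [ln.strip() for ln in lines if ln.strip() and not ln.lstrip().startswith("#")]


# A throwaway file standing in for ComfyUI's input/example.txt
with tempfile.TemporaryDirectory() as d:
    f = Path(d) / "example.txt"
    f.write_text("# character prompts\na portrait photo\n\na landscape painting\n", encoding="utf-8")
    print(load_prompts(f))  # ['a portrait photo', 'a landscape painting']
```

Keeping the path relative (input/example.txt) matches the advice above, since ComfyUI resolves input paths against its own folder.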
InvokeAI is a leading creative engine for Stable Diffusion models, empowering professionals, artists, and enthusiasts to generate and create visual media using the latest AI-driven technologies.

May 13, 2024 · ComfyUI is an AI image-generation tool built on Stable Diffusion. It has been gaining momentum very quickly recently, but the barrier to entry is somewhat high: users need some understanding of how Stable Diffusion and the various underlying technologies actually work.

Launch ComfyUI and start using the SuperPrompter node in your workflows! (Alternatively, you can just paste the GitHub address into the ComfyUI Manager's Git installation option.) 📋 Usage: Add the SuperPrompter node to your ComfyUI workflow.

But one of the really cool things it has is a separate tab for a "Control Surface". Can anybody provide hints or advice? It would be welcome.

The thing I am worried about is that the updates are not up to date, or that custom nodes from standalone ComfyUI don't exist in the ComfyUI extension.

I've been elbows deep in Automatic1111 since spring and having a blast. Not sure why one would, but I have noticed it's doable in Comfy and broken in…

I still don't understand everything happening behind generating AI images in ComfyUI using the KSampler and its choices. I remember the previous argument that the speed benefit of ComfyUI is negated by A1111.

ComfyUI Manager will identify what is missing and download it for you. A checkpoint is your main model, and then LoRAs add smaller models to vary the output in specific ways.

I want to load it into ComfyUI, push a button, and come back in several hours to a hard drive full of images. I've been googling around for a couple of hours and I haven't found a great solution for this.

The file path for input is relative to the ComfyUI folder; no absolute path is required.

However, I decided to give it a try, because the old toolbox has become complex and annoying as more and more options and stuff were squeezed onto it.

Download & drop any image from the website into ComfyUI, and ComfyUI will load that image's entire workflow.
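The drag-and-drop trick works because ComfyUI saves the workflow graph as JSON inside the PNG's metadata text chunks (to my knowledge under keys like "workflow"). A stdlib-only sketch that pulls tEXt chunks out of a PNG; the tiny synthetic PNG is built in-code so the example is self-contained:

```python
import json
import struct
import zlib


def png_text_chunks(data: bytes) -> dict:
    """Return {keyword: value} for every tEXt chunk in a PNG byte string."""
    assert data[:8] == b"\x89PNG\r\n\x1a\n", "not a PNG"
    out, pos = {}, 8
    while pos < len(data):
        length, ctype = struct.unpack(">I4s", data[pos:pos + 8])
        body = data[pos + 8:pos + 8 + length]
        if ctype == b"tEXt":
            key, _, value = body.partition(b"\x00")
            out[key.decode("latin-1")] = value.decode("latin-1")
        pos += 12 + length  # 4 length + 4 type + body + 4 CRC
    return out


def chunk(ctype: bytes, body: bytes) -> bytes:
    """Assemble one PNG chunk: length, type, body, CRC over type+body."""
    crc = zlib.crc32(ctype + body)
    return struct.pack(">I", len(body)) + ctype + body + struct.pack(">I", crc)


# Synthetic 1x1 grayscale PNG with an embedded workflow, standing in for a
# real ComfyUI output file (which carries the full node graph the same way).
workflow_json = json.dumps({"nodes": []})
fake_png = (
    b"\x89PNG\r\n\x1a\n"
    + chunk(b"IHDR", struct.pack(">IIBBBBB", 1, 1, 8, 0, 0, 0, 0))
    + chunk(b"tEXt", b"workflow\x00" + workflow_json.encode("latin-1"))
    + chunk(b"IEND", b""))
print(png_text_chunks(fake_png))  # {'workflow': '{"nodes": []}'}
```

This is also why re-saving a generated image through another editor often breaks the trick: editors tend to drop the text chunks.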
I love downloading new nodes and trying them out.

I'm using a 10 GB card, but I find that to run a text2img2vid pipeline like you are, I need to launch ComfyUI with the --novram --disable-smart-memory parameters to force it to unload models as it moves through the pipeline.

ComfyUI Manager issue.

So my thought is that you set the batch count to 3, for example, and then use a node that changes the weight for the LoRA on each batch. Put it on the Y value if you want a variable weight value on the grid.

Take your two models and do a weighted sum merge in the Merge Checkpoints tab and create a checkpoint at .75, then test by prompting the image you are looking for (e.g., "Dog with lake in the background") through an X/Y script with Checkpoint name, listing your checkpoints; it should print out a nice picture showing the image gradually changing.

I have a text file full of prompts.

This tool enables you to enhance your image generation workflow by leveraging the power of language models.

Lose the <> brackets (the brackets are in your prompt); you are just replacing a simple text/name. The 08: I assume you want the weight to be 0.8, so write 0.8.

If you get a result with 2M Kar…

Restart seems to be a great way to get the same results as the respective 2M samplers at half the steps (although it takes twice as long if you don't adjust settings accordingly).

There's something I don't get about inpainting in ComfyUI: why do the inpainting models behave so differently than in A1111?

A lot of people are just discovering this technology and want to show off what they created. Try Civitai.

Please share your tips, tricks, and workflows for using this software to create your AI art.

Hello u/Ferniclestix, great tutorials, I've watched most of them; really helpful to learn the ComfyUI basics. Just for reference, thanks!
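The change-the-LoRA-weight-per-batch idea above amounts to generating one prompt per weight. A sketch using the A1111-style `<lora:name:weight>` tag syntax; "myLora" and the base prompt are placeholders:

```python
# One prompt per swept weight; "myLora" and the base prompt are placeholders,
# and <lora:name:weight> is the A1111-style tag convention referenced above.
base_prompt = "Dog with lake in the background"
weights = [0.5, 0.75, 1.0]  # one batch item per weight

prompts = [f"{base_prompt} <lora:myLora:{w}>" for w in weights]
for p in prompts:
    print(p)
```

Feeding these three prompts through one queue is equivalent to the "batch count 3, change the weight each time" setup, just made explicit.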
Updated ComfyUI Workflow: SDXL (Base+Refiner) + XY Plot + Control-LoRAs + ReVision + ControlNet XL OpenPose + Upscaler.

ComfyUI custom nodes - merge, grid (aka xyz-plot) and others - Nolasaurus/ComfyUI-nodes-xyz_plot

There is making a batch using the Empty Latent Image node's batch_size widget, and there is making a batch in the control panel.

Firstly, the config file seems to be in AppData/Roaming/ComfyUI, not the ComfyUI installation directory, and it is called extra_models_config.yaml, not extra_model_paths.yaml like it used to be.

Instead of doing a 90% / 10% checkpoint merge, I can just do a prompt, saving me a ton of GB, and I could also just keep the ratio in a wildcard file instead of several hundred. And then it gets comfy compared to A1111, where you just can't one-click reproduce workflows.

Much Python installing with the server restart. But I'm always exploring better ways to do this. I'm quite new to ComfyUI.

Also: changed to an Image -> Save Image WAS node.

To get around that, you can right-click on the KSampler and use "convert (xyz) to input".

For anyone else missing the XYZ grid from Auto1111 in Comfy, or hating the complexity of the custom-nodes plugin, here's an elegant solution to compare all your checkpoint models using vanilla ComfyUI with a minimal change to your workflow.

And then you have your images, and maybe you think, "hmm, this one looks great, but maybe I want to change xyz a little bit." It's super easy to get it to grab random words each time from a list; to get it to step through them one by one is more difficult.
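On the merging point: a weighted sum merge at ratio alpha is just per-weight linear interpolation, merged = (1 - alpha) * A + alpha * B. A sketch with plain dicts of floats standing in for real checkpoint state dicts:

```python
def weighted_sum_merge(a, b, alpha):
    """merged = (1 - alpha) * A + alpha * B, applied weight by weight.
    Plain dicts of floats stand in for real checkpoint state dicts."""
    assert a.keys() == b.keys(), "checkpoints must share the same parameter names"
    return {k: (1.0 - alpha) * a[k] + alpha * b[k] for k in a}


model_a = {"w1": 1.0, "w2": 0.0}
model_b = {"w1": 0.0, "w2": 1.0}
print(weighted_sum_merge(model_a, model_b, 0.9))  # 90/10 blend toward model B
```

This is also why the "keep the ratio in a wildcard file" idea works: the ratio is a single scalar, so sweeping it costs nothing compared to storing a merged checkpoint per ratio.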
If it makes no difference, you need to go closer to 1; if it weights too heavily on your image, you need to go lower.

There are a few ways to make XYZ grids, and we will be covering two methods in this workflow with the platforms Auto1111 and ComfyUI.

I want to test some basic LoRA weight comparisons, like in the WebUI where you do an XYZ plot. (I don't need the plot, just individual images so I can compare myself.)

Circular repetitions, AnimateLCM with IP-Adapters and Masks, ComfyUI Workflow by Purz.

Faces at this point are the easy mode of image generation.

Ever since I saw that little ComfyUI avatar in the SDXL Discord's live chat last week, I had an interest in checking out the app, especially since it was so stable at launch.

And remember, SDXL does not play well with 1.5, so that may give you a lot of your errors.

I assume there must be a way with this X,Y,Z version, but every time I try to have it com…

I think the only place it would excel is if you had to batch-automate inpainting a single image repeatedly, a limitation of both A1111 and Forge.

I know I can stop at a denoise level less than 1, which will leave some noise, and if I do that I can alter the prompts and finish it to make it somehow more specific to my needs.

Dec 5, 2024 · After noticing the new UI without the floating toolbar and the top menu, my first reaction was to instinctively revert to the old interface. However, the old toolbox has become complex and annoying as more options were squeezed onto it.

For ComfyUI, my difficulty in using it is the smoothness and stability of the UI.
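Stopping at a denoise below 1 maps onto sampler step ranges: roughly speaking, denoise controls what fraction of the noise schedule is actually run, and a second pass can pick up where the first left off. A sketch of that arithmetic, as an approximation of the A1111-style behavior rather than the exact implementation:

```python
def split_steps(total_steps, denoise):
    """Approximate img2img-style behavior: with denoise < 1, the sampler skips
    the earliest (most destructive) part of the schedule and runs the rest.
    Returns (start_at_step, end_at_step)."""
    assert 0.0 <= denoise <= 1.0
    run = round(total_steps * denoise)
    return total_steps - run, total_steps


print(split_steps(30, 0.6))  # (12, 30): skip 12 steps, run the last 18
```

In ComfyUI terms, this is the kind of split you would feed to an advanced sampler node's start/end step inputs when finishing a partially denoised latent with an altered prompt.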
Mostly built for SDXL and uses ComfyUI as its backend, making it as fast and as VRAM-efficient as ComfyUI; it has few options, but includes mostly what matters, and comes with a list of effective and easy-to-use styles.

But Forge is a different story.

Basically it doesn't open after downloading (v.22, the latest one available).

I love how simple the ComfyUI install was.

Img2img and inpainting: no way I would ever use ComfyUI over Forge or A1111.

A few examples of my ComfyUI workflow to make very detailed 2K images of real people (cosplayers in my case) using LoRAs and with fast renders (10 minutes on a laptop RTX 3060). Workflow Included.

Upon launching Comfy, you will be met with a preset workflow.

Anyway, block weight is just <thisisyourLora:1>; the :1 at the end is the weight. I usually start at <thisisyourLora:0.5>.

It's late and I'm on my phone, so I'll try to check your link in the morning.

ComfyUI has absolutely no security baked in (neither from the local/execution standpoint, nor from the remote/network authentication standpoint), and the custom node manager makes things easier and smoother on one hand, but even more dangerous on the other.

Yep. /r/StableDiffusion is back open after the protest of Reddit killing open API access, which will bankrupt app developers, hamper moderation, and exclude blind users from the site.

IF there is anything you would like me to cover for a ComfyUI tutorial, let me know.
I would like to take chaiNNer as a comparison: I like its smoothness and stability, while for ComfyUI, which only uses a web UI, the difficulty is that the operation always interrupts my train of thought.

It is heaps faster than A1111, and in some cases even ComfyUI itself.

Here's the thing: ComfyUI is very intimidating at first, so I completely understand why people are put off by it. Then find example workflows.

I was eager to try out the different new samplers, and here are my findings.

For instance, (word:1.1) in ComfyUI is much stronger than (word:1.1) in A1111.

GitHub - if-ai/ComfyUI-IF_AI_tools: ComfyUI-IF_AI_tools is a set of custom nodes for ComfyUI that allows you to generate prompts using a local Large Language Model (LLM) via Ollama.

What are your favorite custom nodes (or node packs) and what do you use them…

The Empty Latent Image will run however many you enter through each step of the workflow.

You can then change the value one time, for both of them. (I will be sorting out workflows for tutorials at a later date in the YouTube description for each; many can be found in r/comfyui, where I first posted most of these.)

I think the noise is also generated differently: A1111 uses the GPU by default and ComfyUI uses the CPU by default, which makes using the same seed give different results.

I'm sure the solution is much easier than I'm making it, but everything I try just makes ComfyUI crash on startup.

Fernicles SDTools V3 - ComfyUI nodes. First off, it's a good idea to get the custom nodes off Git, specifically WAS Suite, Derfu's Nodes, and Davemane's nodes. Via the ComfyUI custom node manager, I searched for WAS and installed it.

ComfyUI node suite for composition: stream webcams or media files in and out, animation, flow control, making masks, shapes and textures like Houdini and Substance Designer, read MIDI devices.
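That CPU-vs-GPU noise point is worth dwelling on: a seed only reproduces an image when the same noise generator runs on it, which is why the same seed diverges across tools even with identical settings. A toy illustration, with Python's own RNG standing in for the latent-noise source:

```python
import random


def make_noise(seed, n):
    """Dedicated, seeded generator: same seed + same implementation -> same noise."""
    rng = random.Random(seed)
    return [rng.gauss(0.0, 1.0) for _ in range(n)]


same = make_noise(42, 4) == make_noise(42, 4)       # identical generator: reproducible
different = make_noise(42, 4) == make_noise(43, 4)  # different seed: different noise
print(same, different)  # True False
```

Swap the generator implementation (say, a GPU RNG for a CPU one) and the "same" case stops holding, which is exactly the A1111-vs-ComfyUI situation described above.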
Convert your model to an input in the Load Checkpoint node. Link this ckpt to a primitive node. Then link both KSamplers to a single control node.

AP Workflow 3.2 for ComfyUI (XY Plot, ControlNet/Control-LoRAs, Fine-tuned SDXL models, SDXL Base+Refiner, ReVision, Detailer, 2 Upscalers, Prompt Builder, etc.)

You can also easily upload & share your own ComfyUI workflows, so that others can build on top of them! :) Why I built this: I just started learning ComfyUI, and really like how it saves the workflow info within each image it generates.

Basically, in Patcher, you can string plugins together in much the same way as ComfyUI.

I am now just setting up ComfyUI and I have issues (already, LOL) with opening the ComfyUI Manager from Civitai.

First: added an IO -> Save Text File WAS node and hooked it up to the prompt. Restarted the ComfyUI server and refreshed the web page.

See if that works :) There are also a few custom nodes for image resizing that may help, but I don't have the names of those in…

You can do a model merge for sure.

Idea: a custom loop button on the side menu (how many times you want to loop it, like Auto Queue with a cap), and also a controller node by which the loop count can be controlled by values coming from inside the workflow.

I switched to ComfyUI from A1111 last year and haven't looked back; in fact, I can't remember the last time I used A1111. I have yet to find anything that I could do in A1111 that I can't do in ComfyUI, including XYZ Plots.

Also, the file looks way different.

╰─> To install Python packages system-wide, try apt install python3-xyz, where xyz is the package you are trying to install.

Release: AP Workflow 9.

A great tutorial for folks!
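The apt message quoted above is Debian's PEP 668 "externally-managed-environment" guard: pip refuses to install into the system Python. The usual fix is to give ComfyUI its own virtual environment instead; the stdlib `venv` module is what `python -m venv` drives under the hood, sketched here against a throwaway directory:

```python
import os
import tempfile
import venv

# Create an isolated environment (the usual fix for the externally-managed-
# environment error above) instead of installing into the system Python.
env_dir = os.path.join(tempfile.mkdtemp(), "comfy-env")
venv.EnvBuilder(with_pip=False).create(env_dir)  # with_pip=True also bootstraps pip

bin_dir = "Scripts" if os.name == "nt" else "bin"
print(os.path.isdir(os.path.join(env_dir, bin_dir)))  # True
```

From a shell, the equivalent is `python3 -m venv venv`, then `source venv/bin/activate`, then `pip install -r requirements.txt` inside the ComfyUI folder.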
I don't know if you plan to do a tutorial on it later, but explaining how emphasis works in prompting, and the difference between how ComfyUI does it vs. other tools like Auto1111, would help a lot of people migrating over to Comfy understand why their prompts might not be working in the way they expect.

If you wish to install a non-Debian-packaged Python package, …

Switch to SwarmUI if you suffer from ComfyUI, or the easiest way to use SDXL (Tutorial | Guide): I've tested SwarmUI and it's actually really nice, and it also works stably in a free Google Colab.

On Automatic, your best bet would be Regional Prompter to check the XY plot; I can't help with it, as I've never used it.

I'm trying to find the source of this image/page where they used an xyz prompt to prompt them individually. Let's say with…
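On the emphasis question: the (text:weight) syntax attaches a weight to a span of the prompt, and tools differ in how they apply that weight to the conditioning, which is one reason the same number can land harder in ComfyUI than in Auto1111 (as noted earlier in this thread). A deliberately simplified parser sketch, with no nesting and no backslash escapes, just to show the mechanics:

```python
import re


def parse_emphasis(prompt):
    """Split a prompt into (text, weight) pieces for the (text:weight) syntax.
    Simplified sketch: real parsers also handle nesting, escapes, and bare
    parentheses, and tools then apply the weights differently."""
    pieces, pos = [], 0
    for m in re.finditer(r"\(([^():]+):([0-9.]+)\)", prompt):
        if m.start() > pos:
            pieces.append((prompt[pos:m.start()], 1.0))
        pieces.append((m.group(1), float(m.group(2))))
        pos = m.end()
    if pos < len(prompt):
        pieces.append((prompt[pos:], 1.0))
    return pieces


print(parse_emphasis("a (cat:1.2) by the lake"))
# [('a ', 1.0), ('cat', 1.2), (' by the lake', 1.0)]
```

The parsing step is the easy, shared part; the divergence between tools happens afterwards, in how each one scales the conditioning with these weights.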