SD 1.5 + AnimateDiff v3 in ComfyUI. Tried out the slightly different prompting method in this video, to good results. His tutorial/demo: look into StyleAligned batch align. You can see the KSampler & Eff. Loader SDXL in the screenshot.

Considering AnimateDiff: this already tries to solve the problem of temporal consistency. It also works with SD 1.5 (and using lower denoising strengths can produce smoother animations), so let me show you the difference.

Running this with a few LoRAs to get better color and less detail: SparseCtrl scribble but fed with lineart, as well as a lineart ControlNet, the ADv3 adapter LoRA after AnimateDiff, then FreeU_V2 into a simple KSampler.

I'm using 100 motion bucket and 0.01 aug level, which I'm not sure is best for these. Thanks! I have found increasing the sampler steps decreases the blur a lot, and I also mess around with augmentation level. If it's too much movement, like a dance, I haven't figured out how to fix the blur yet.

Best AnimateAnyone implementation: as the title says, which AnimateAnyone implementation gives the best results? I see there's AnimateAnyone-Evolved, Moore-AnimateAnyone, and a ThinkDiffusion fork of Evolved. Are there any major differences between the three? And related, is MagicAnimate any good?

Left is the initial image. Middle is looped 75 times at 0.25 denoising, encoded and decoded each loop. Right is the same, with the same seeds, but kept in latent the whole time.

The top image is before the FaceDetailer, and the bottom is after. I still have a long way to go to get the face 'fixed' to where I want it, but the pressing thing for me right now is losing the contrast/colors of the untouched image. Hardest part is always the eyes.

Image Realistic Composite & Refine ComfyUI Workflow. Great job! I do something very similar and find creating composites to be the most powerful way to gain control and bring your vision to life.

To disable/mute a node (or group of nodes), select them and press Ctrl+M. Ctrl+B bypasses a node. The little grey dot on the upper left of a node will minimize it if clicked. To duplicate parts of a workflow from one area to another, select the nodes as usual and Ctrl+C to copy, but use Ctrl+Shift+V to paste.

I'm not sure if this is what you want: fix the seed of the initial image, and when you adjust a subsequent seed (such as in the upscale or FaceDetailer node), the workflow will resume from the point of alteration. It does cache node results, so you could try a few things.

TDComfyUI — TouchDesigner interface for the ComfyUI API. With this component you can run a ComfyUI workflow in TouchDesigner. TouchDesigner is a visual programming environment aimed at the creation of multimedia applications. The idea is cool; I liked TD.

But when the Krita plugin happened I switched to that. The Krita plugin is great, but the nodal-soup part isn't there, so I can't change some things. When Canvas_Tab came out, it was awesome. What else is out there for drawing/painting a latent to be fed into ComfyUI, other than the Photoshop one(s)? Two that I can think of, both linked here.

Has anyone else hit this ceiling? I've only managed to find a few people that were able to produce workflows which exceed the limitations of the execution method. I think it would be interesting to see what other types of workflows are causing this issue, because in my case I was only able to trigger it when my workflow exceeded 250 nodes.

It seems pretty obvious that it's the best program for maximum customization. My progress is slow, but I'll eventually get there — I've been trying one or two new things a day. But a lot of the time, when I try to use custom nodes or even just new checkpoints — almost every time I try to use it beyond the basic workflow — things seem to break. That's definitely not a 'you' problem. I use a script that updates ComfyUI and checks all the custom nodes.

Been playing around with ComfyUI and got really frustrated with trying to remember what base model a LoRA uses and its trigger words, so I wrote a custom node that shows a LoRA's trigger words, examples, and what base model it uses. Does anyone have a way of getting LoRA trigger words in ComfyUI? I was using civitai helper on A1111 and don't know if there's anything similar for getting that information. Currently just going on civitai and looking up the pages manually, but hoping there's an easier way. I just have mine in a plain txt file and Ctrl+F for whatever I'm looking for, and copy-paste the trigger words/best settings whenever I download LoRAs and models — it's getting quite long now, and I only have a couple dozen things downloaded. (Edit 9/13: someone made something to help read LoRA meta and civitai info.)

If you make a LoRA with a trigger word, make that trigger word the name of the LoRA. If you only have one folder in the training dataset, the LoRA's filename is the trigger word. But if you train a LoRA with several folders to teach it multiple characters/concepts, the name of each folder is the trigger word (i.e. if the training data has two folders, 20_bluefish and 20_redfish, then bluefish and redfish are the trigger words), CMIIW. Not sure how Comfy handles LoRAs that have been trained on different characters/styles etc. with different trigger words! But adding trigger words in the prompt for a LoRA in ComfyUI does nothing special: the model interprets those words like any other words in the prompt.

Achieving random/semi-random LoRAs + trigger words: please tell me there's a custom node somewhere that allows dynamically loading LoRAs, something similar to Dynamic Prompts but for LoRAs (Dynamic Prompts automatically supports LoRAs in A1111, since a LoRA is called via text). I want to show trigger words from a LoRA based on the LoRA model; ideally it would be possible to type which words trigger the LoRA, as I show below.
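One way to get at those trigger words programmatically — a minimal sketch, not any particular custom node's implementation: LoRAs trained with kohya-style trainers usually carry their training metadata in the .safetensors header, including an ss_tag_frequency field whose most frequent tags are often the trigger words. The file path here is hypothetical, and the ss_* keys are only present if the trainer wrote them.

```python
import json
import struct

def read_lora_metadata(path):
    """Read the JSON header of a .safetensors file and return its metadata dict."""
    with open(path, "rb") as f:
        header_len = struct.unpack("<Q", f.read(8))[0]  # first 8 bytes = header size
        header = json.loads(f.read(header_len))
    return header.get("__metadata__", {})

def guess_trigger_words(metadata, top_n=10):
    """kohya-style trainers store per-folder tag counts as a JSON string in
    ss_tag_frequency; the most frequent tags are usually the trigger words."""
    freq = json.loads(metadata.get("ss_tag_frequency", "{}"))
    counts = {}
    for folder_tags in freq.values():
        for tag, n in folder_tags.items():
            counts[tag] = counts.get(tag, 0) + n
    return sorted(counts, key=counts.get, reverse=True)[:top_n]

meta = read_lora_metadata("models/loras/my_lora.safetensors")  # hypothetical path
print("base model:", meta.get("ss_sd_model_name", "unknown"))
print("likely trigger words:", guess_trigger_words(meta))
```

Wrapped in a node class, those two functions are roughly all the civitai-helper-style tooling mentioned above needs.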
Can someone guide me to the best all-in-one workflow that includes base model, refiner model, hi-res fix, and one LoRA? It'll be perfect if it includes upscale too (though I can upscale in an extra step in the Extras tab of Automatic1111).

Best simple but capable workflow? What is the best workflow that people have used with the most capability, without using custom nodes? Use custom nodes — everyone uses custom nodes. The 'best' is the one you like the most.

Hi, I'm looking for input and suggestions on how I can improve my output image results using tips and tricks as well as various workflow setups. I'm not very experienced with ComfyUI, so any ideas on how I can set up a robust workstation utilizing common tools like img2img, txt2img, and a refiner?

Need help: ComfyUI cannot reproduce the image. Following the official guide, I installed all missing models and could get the prompt queued, but the resulting image is not something that I expected. I have attached the images and workflow. Unfortunately, I can't see the numbers for your final sampling step at the moment. I vaguely remember Comfy himself mentioning some fix (not sure if it was to this problem, though), so have you tried to run the update script recently? If not, that might be the solution.

On Event/On Trigger: this option is currently unused. But if you choose increment/decrement, which would be the delta (rate of change), you can't change how much it increments/decrements. Bypass (accessible via right click -> Bypass) functions similarly to "never", but with a distinction: instead of the node being ignored completely, its inputs are simply passed through. For instance, in a setup with multiple LoRA loaders, you can bypass one and its inputs will simply pass through. You can set the noise seed manually by right-clicking on the sampler and choosing "convert seed to input". To connect them together, just right-click on the target node and use "convert X to input".

There must be an output or ComfyUI won't run. This is usually a Save Image node, but a Preview Image node would also work if you don't intend to save what you are testing. ComfyUI wants all input at the start, so there is no way to pause in the middle without queueing up the layout again after making changes.

If I understood you right, you may use groups with upscaling, face restoration, etc. Or, more easily, there are several custom node sets that include toggle switches to direct workflow. To get the kind of button functionality you want, you would need a different UI mod of some kind that sits above ComfyUI. Thanks tons!

Assuming you're using a fixed seed, you could link the output to a preview and a save node, then press Ctrl+M with the save node to disable it until you want to use it; re-enable and hit queue prompt again when you want to save. Mute the output upscale image with Ctrl+M and use a fixed seed. You could also split the layouts and have one that produces only 512x512. I'm also trying to figure out how to batch render several images, but then only upscale the one that I want. Also use select from latent.

Beginners' guide for ComfyUI 😊 We discussed the fundamental ComfyUI workflow in this post 😊 You can express your creativity with ComfyUI. (Jarvislabs.ai) Learn how to use ComfyUI, a powerful GUI for Stable Diffusion, with this full guide. Find tips, tricks, and refiners to enhance your image quality.

SDXL Best Workflow in ComfyUI. Smarter folk than me must have figured stuff out, so I'm just wondering if you'd be so kind as to share your ComfyUI workspace for SDXL 1.0, which also has a good upscaling node? Best versatile SDXL checkpoint? I dunno about "best", but DreamShaper XL or Juggernaut XL are both solid choices for general-purpose work.

A few examples of my ComfyUI workflow to make very detailed 2K images of real people (cosplayers in my case) using LoRAs and with fast renders (10 minutes on a laptop RTX 3060).

Basically, hi-res fix is not needed for SDXL (even SD 1.5 arguably doesn't need it in ComfyUI, because most of us run a series of samplers and stepped upscale practices anyway). On SDXL it works better above 1024x1024 and will even have trouble going below that; there is a list floating around with the resolutions SDXL is best used at.
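For reference, a minimal sketch of that idea. The list below is the commonly cited set of roughly one-megapixel SDXL training resolutions (treat it as an assumption rather than an official spec), and the helper just snaps a target aspect ratio to the nearest bucket:

```python
# Commonly cited SDXL training resolutions (all ~1 megapixel).
SDXL_RESOLUTIONS = [
    (1024, 1024), (1152, 896), (896, 1152), (1216, 832), (832, 1216),
    (1344, 768), (768, 1344), (1536, 640), (640, 1536),
]

def nearest_sdxl_resolution(width, height):
    """Return the SDXL bucket whose aspect ratio best matches width/height."""
    target = width / height
    return min(SDXL_RESOLUTIONS, key=lambda wh: abs(wh[0] / wh[1] - target))

print(nearest_sdxl_resolution(1920, 1080))  # -> (1344, 768)
```

Generating at one of these sizes and upscaling afterwards avoids the distortions SDXL shows below 1024x1024.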
From what I gather, only A1111 and its derivatives can correctly append metadata like prompts, CFG scale, and used checkpoints/LoRAs. I'm mainly using ComfyUI on my home computer for generating images, and when I upload them, the prompts get automatically detected and displayed, but not the resources used. ComfyUI: add resources manually?

Here is how it works: gather the images for your LoRA database in a single folder. Make sure the images are all in PNG. Copy that folder's path and write it down in the widget of the Load node. Plug the image output of the Load node into the Tagger, and the other two outputs into the inputs of the Save node.

What is the best way to add a custom theme to ComfyUI? I've been working on creating a custom theme for ComfyUI for a few days now. After a ComfyUI update, which seems to appear every day, my User.css in the ComfyUI web folder was overwritten by the update.

Problem: my first pain point was textual embeddings. In Automatic1111 you can browse them from within the program; in Comfy, you have to remember your embeddings or go to the folder. This lets you sit your embeddings to the side.
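A tiny workaround sketch for the embeddings pain point, assuming a default install layout (adjust the path to your own): scan the embeddings folder and print each file in the embedding:name form that ComfyUI prompts expect.

```python
from pathlib import Path

# ComfyUI loads textual embeddings from models/embeddings and you reference
# them in a prompt as "embedding:filename" (without the extension).
EMBEDDINGS_DIR = Path("ComfyUI/models/embeddings")  # adjust to your install

for f in sorted(EMBEDDINGS_DIR.iterdir()):
    if f.suffix in {".pt", ".safetensors", ".bin"}:
        print(f"embedding:{f.stem}")
```

Pipe the output into a text file and you have the same copy-paste list people keep by hand.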
Dear all, I am new to ComfyUI and just installed it. I also installed ComfyUI Manager.

To install ComfyUI with Comfy CLI, use these commands (everything after # is a comment):

python -m venv venv  # create virtual environment
venv\Scripts\activate  # activate the environment (run this on Windows)
source venv/bin/activate  # activate the environment (run this on Linux / Mac)
pip install comfy-cli  # install Comfy CLI

Pretty easy.

STEP 1: Open the venv folder, then type on its path. Return to the default folder and type on its path too, then remove it and type "cmd" instead. Press Enter; it opens a command prompt. In that command prompt, type this: python -m venv [venv folder path]

Then open a command prompt and type this: pip install -r — make sure there is a space after that — then drag the requirements_win.txt file into the command prompt (if you're on Windows; otherwise, I assume you should grab the other file, requirements.txt). Dragging it will copy its path into the command prompt.

It says by default "masterpiece best quality girl" — how does CLIP interpret "best quality" as one concept rather than two? That's not really how it works. SD interprets the whole prompt as one concept, and the closer tokens are together, the more they will influence each other. Commas are just extra tokens; the default does not use commas. I see what you're doing there with the "banana".

There is an LLMPromptGenerator node for creating consistent or random prompts from given prompts or descriptions. (It has a great system prompt.) This is a more creative node; it doesn't give structured outputs. Start with words like "generate" and "create". You can rely on them because the output is 100% JSON, and you can extract it using a JsonToText node.

New features — Anthropic API support: harness the power of Anthropic's advanced language models like Claude 3 (Opus, Sonnet, and Haiku) to generate high-quality prompts and image descriptions. OpenAI API integration: leverage the cutting-edge capabilities of OpenAI's GPT-4 and GPT-3.5 models, including the vision-enabled variants, for enhanced prompt generation.
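As a hedged illustration of what such a prompt-generator node does under the hood — this is a generic sketch against the Anthropic Python SDK, not the actual node's code; the model name and system prompt are assumptions:

```python
import anthropic  # pip install anthropic; reads ANTHROPIC_API_KEY from the environment

client = anthropic.Anthropic()

def generate_image_prompt(subject: str) -> str:
    """Ask Claude to expand a short subject into a detailed SD-style prompt."""
    msg = client.messages.create(
        model="claude-3-haiku-20240307",  # assumption: any Claude 3 model works here
        max_tokens=200,
        system="You write single-line, comma-separated Stable Diffusion prompts.",
        messages=[{"role": "user", "content": f"Generate a prompt for: {subject}"}],
    )
    return msg.content[0].text.strip()

print(generate_image_prompt("retro sci-fi city at dusk"))
```

The OpenAI variant is structurally the same call against its own SDK; only the client and model name change.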
I am beginning to work with ComfyUI, moving from A1111. I know there are so, so many workflows published to civitai and other sites; I am hoping to find a way to dive in and start working with ComfyUI without wasting much time on mediocre/redundant workflows, and am hoping someone can help me by pointing me toward a resource for finding some of the better-developed Comfy workflows.

Step one: hook up IPAdapter x2. Step two: set one to compositional and one to style weight. Step three: feed your source into the compositional and your style into the style. Press go 😉

The NerdyRodent YouTube dude just posted a video yesterday about a Visual Style Prompting node. Might be a good tool for this, but I haven't played with it enough yet.

1) Somehow transform the parts of the first image and place them in the correct spatial location in the next view, and use this as a latent in an img2img setting (but it would have a lot of "non-filled" parts). 2) Given that I have the og geometry… 3) my fav: …

I'm working on a new frontend to ComfyUI where you can interact with the generation using a traditional user interface instead of the graph-based UI. I just finished adding prompt queue and history support today. I'm already finding it quite useful personally and hope to improve it in the coming weeks. I've seen a thread in this sub from a talented fellow developer who made nxde_ai. I use ComfyUI extensively; I am developing a custom node and make tutorials about ComfyUI.

ComfyUI is also trivial to extend with custom nodes. Also, ComfyUI's internal APIs are, like, horrendous. You could write this as a Python extension, but I don't know how.

Then you generate an accessible, unique Comfy URL to connect a websocket to and pass prompts via the API. I came across the SaveImageWebsocket node — is there a way to make something like that without creating a custom node, as I visualise it?

I've been exploring the ComfyUI API and trying to integrate it into my own application. I want to send a prompt to the API and receive the generated image as a result. I've tried a few approaches, such as using the /history and /view endpoints to retrieve the address of the image on the hard drive.
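Those endpoints are enough on their own. Here is a minimal sketch (polling rather than websockets) that queues a workflow exported from ComfyUI via "Save (API Format)" and downloads the finished images — the filename workflow_api.json and the default 127.0.0.1:8188 address are assumptions:

```python
import json
import time
import urllib.request

COMFY = "http://127.0.0.1:8188"

def queue_prompt(workflow):
    """POST an API-format workflow to /prompt and return its prompt_id."""
    data = json.dumps({"prompt": workflow}).encode()
    req = urllib.request.Request(f"{COMFY}/prompt", data=data,
                                 headers={"Content-Type": "application/json"})
    return json.loads(urllib.request.urlopen(req).read())["prompt_id"]

def wait_for_images(prompt_id):
    """Poll /history until the job appears, then fetch outputs via /view."""
    while True:
        hist = json.loads(urllib.request.urlopen(f"{COMFY}/history/{prompt_id}").read())
        if prompt_id in hist:
            break
        time.sleep(1)
    images = []
    for node_output in hist[prompt_id]["outputs"].values():
        for img in node_output.get("images", []):
            url = (f"{COMFY}/view?filename={img['filename']}"
                   f"&subfolder={img['subfolder']}&type={img['type']}")
            images.append(urllib.request.urlopen(url).read())
    return images

workflow = json.load(open("workflow_api.json"))  # exported via "Save (API Format)"
png_bytes = wait_for_images(queue_prompt(workflow))[0]
open("result.png", "wb").write(png_bytes)
```

The SaveImageWebsocket route avoids the disk round-trip entirely, but this polling version needs nothing beyond the standard library.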
My current workflow involves going back and forth between a regional sampler and an upscaler. The best result I have gotten so far is from the regional sampler from Impact Pack, but it doesn't support SDE or UniPC samplers, unfortunately.

Basically, you feed "CR Text Cycler" into "CR Text List" and then feed that into whatever you would normally use for your prompt (CLIP text encoder, some loader, etc.). You get really consistent results while doing batch processing.

Hello everyone, I am having trouble generating an image (meme) that should include two characters. Every generated picture comes out with the characters merged, blending some features from each other. Try an attention-couple example workflow, or IPAdapter with an attention mask; check Latent Vision's tutorial on YouTube.

I trained a model in DreamBooth and would like to use it in ComfyUI. To train, you can install the DreamBooth custom node through the Manager. For using a pre-trained model, you can use it just like any existing model.

As pointed out in the HandRefiner paper, MeshGraphormer is designed to generate 3D meshes from "correctly shaped" hands, so it's only natural that it can't handle "melted hands" generated by AI. There is a need to train MeshGraphormer with a dataset that includes images of melted hands.

IC-Light — for manipulating the illumination of images. GitHub repo and ComfyUI node by kijai (only SD1.5 for the moment).

Hi-Diffusion is quite impressive; a ComfyUI extension is now available. I played with Hi-Diffusion in ComfyUI with SD 1.5 models and it easily generated 2K images without any distortion, which is better than kohya deep shrink.

ComfyUI optimization — I was wondering if anyone else has faced this. In ComfyUI, which attention is best for speed with no quality loss, xformers or PyTorch cross attention? Does changing attention really make a difference? I've only seen a one-second difference in speed; kindly suggest other optimization options.

Longer generation while switching checkpoints: when I'm switching checkpoints, generation time goes from 1.75 s/it to 114+ s/it. This doesn't happen every time; sometimes if I queue different models one after another, the second model takes a longer time. It's pretty slow. I get around 1.5 s/it with ComfyUI and around 7 it/s with A1111, using an RTX 3060 12 GB card — does this sound right? You are comparing it/s with s/it; that's very confusing.

Best ComfyUI cloud listings: RunDiffusion.com (personal favorite), ThinkDiffusion, RunComfy, Flowt.ai. Let me know if there are more affordable ones.

Yes, you can actually run two instances at once by having a second tab of ComfyUI up, and loading the second workflow on the second tab. So you can, say, queue your first task in the first workflow, then switch to the second tab and load the second. Note that when you close the UIs, the last one you closed is the one that will pop up the next time you get on Comfy.

Info nodes: hi all, sorry if this seems obvious or has been posted before, but I'm wondering if there's any way to get some basic info nodes — for example, one that shows the image metadata like PNG Info in A1111, or better still, one that shows the LoRA info so I can see what the trigger words and training data were, etc. When you create a ComfyUI workflow and use it to generate an image, that image contains the workflow information that created it. You should be able to drop images into ComfyUI as well, and it will load up the workflow you were using for that generated image.
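A sketch of reading that embedded information outside ComfyUI: the workflow graph is stored in PNG text chunks (conventionally named "prompt" and "workflow"), which Pillow exposes through the image's info mapping. The filename here is hypothetical.

```python
from PIL import Image  # pip install Pillow

# ComfyUI writes its graph and prompt into PNG text chunks named
# "workflow" and "prompt"; Pillow surfaces them via the image's info dict.
img = Image.open("ComfyUI_00001_.png")  # hypothetical output filename
workflow_json = img.info.get("workflow")
prompt_json = img.info.get("prompt")

if workflow_json:
    print(workflow_json[:200])  # the same PNG can be dragged back into ComfyUI
else:
    print("no embedded workflow found")
```

This is also why dropping a generated PNG onto the ComfyUI canvas reloads the exact workflow that made it.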
Right now my workflows are either a tangled mess of spaghetti I have to constantly zoom in and out of to change a single parameter somewhere, or I spend more time tidying up and putting all relevant things neatly close to each other — and then I can't easily rearrange things anymore except by scrolling even more.

I am trying to shift to ComfyUI because it makes standardization of my process easier. Workflows are much more easily reproducible and versionable. ComfyUI is much better suited for studio use than other GUIs available now. VFX artists are also typically very familiar with node-based UIs, as they are very common in that space.

If you want to go deep with maximum control and lots of learning, then Comfy. If you want it easy with lots of extensions, then Forge. Sometimes I want something that doesn't require as much tinkering; in that case you can fire up Enfugue and start the fun. You can use the same environment and models with Enfugue.

There are big holes in the basic functionality of ComfyUI, and as a result there are nodes that basically can't really do much or are very narrow in scope. Not really much different to existing stuff, though.

Hack/tip: use the WAS custom node, which lets you combine text together, and then you can send it to the CLIP text field.

But I can only get it to accept replacement text from one text file. Like if I have a.txt and b.txt and c.txt, it will only see the replacement text in a.txt; the prompt goes through saying literally "b, c". A text box node? My next idea is to load the text output into a sort of text box and then edit that text.

I've been using the Dynamic Prompts custom nodes more and more, and I've only just now started dealing with variables. I have a workflow that uses wildcards to generate two sets of images, a before and an after. I need the wildcards to give the same values for both, and I've tried using a seed input to feed the same seed to both, but that doesn't work. TIA.
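A sketch of why a shared seed can still work, independent of any particular node pack: if each expansion builds its own random generator from the same seed, two templates with the same wildcard structure resolve to the same choices. The {a|b} syntax mirrors Dynamic Prompts; the example templates are made up.

```python
import random
import re

def expand(template, seed):
    """Expand {a|b|c} wildcards deterministically from a seed."""
    rng = random.Random(seed)  # fresh generator per call => repeatable picks
    return re.sub(r"\{([^{}]+)\}",
                  lambda m: rng.choice(m.group(1).split("|")),
                  template)

seed = 42
print(expand("photo of a {young|old} {man|woman}, before", seed))
print(expand("photo of a {young|old} {man|woman}, after", seed))  # same picks
```

The usual failure mode is that both prompts share one generator, so the second expansion continues the random sequence instead of restarting it.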
In this video I cover Cascade, the Lightning implementation, Layer Diffusion, SD3, Zeroscope, YOLO-World, FLATTEN, Comfy Launcher, OneDiff, Proteus 0.4, and Latte! I hope you find it useful! Great episode, as always.

ComfyUI node for Stable Audio Diffusion v1.

By mixing the previously introduced nodes, you can create a nice retro-looking tiled texture using only a prompt. Created it separately in preparation for the #comfy101 tiled feature.

Hi Reddit! In October, we launched https://comfyworkflows.com to make it easier for people to share and discover ComfyUI workflows. We learned that downloading other workflows and trying to run them often doesn't work because of missing custom nodes, unknown model files, etc. Hope this helps you guys as much as it's helping me.

I am using ComfyUI on RunPod. I also use the cheapest option, with a pod that can be terminated. I typically use the T4 option, which costs £0.20 an hour — it's shown as 2.… I have, once, left the damned thing running overnight, thinking I had stopped an active session when I had not. I'm more careful now. You can swap machines in and out, but you'll always have certain machines in the lineup.

I was thinking about AWS for ComfyUI, since Google has problems with payments. I didn't go the AWS way, but found some information with NISP. So if anyone uses AWS, would you be kind enough to tell me which…

A question — auto-start from node: is it possible with custom nodes to create a job and add it to the queue? We have a very specific case where we use ComfyUI through a local network and want to trigger a job when a TCP message comes in. Your best bet is to set up an external queue system and spin up ComfyUI instances in the cloud when requests are added to the external queue. Scaling and GPUs can get overwhelmingly expensive, so you'll have to add additional safeguards.
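For the local-network case, no custom node is strictly needed — a small external script can listen for the TCP message and queue a saved workflow over ComfyUI's HTTP API. A minimal sketch; the port, addresses, and workflow_api.json filename are assumptions:

```python
import json
import socketserver
import urllib.request

COMFY = "http://127.0.0.1:8188"
WORKFLOW = json.load(open("workflow_api.json"))  # exported in API format

class TriggerHandler(socketserver.StreamRequestHandler):
    def handle(self):
        self.rfile.readline()  # any incoming line is treated as the trigger
        data = json.dumps({"prompt": WORKFLOW}).encode()
        req = urllib.request.Request(f"{COMFY}/prompt", data=data,
                                     headers={"Content-Type": "application/json"})
        reply = urllib.request.urlopen(req).read()  # contains the prompt_id
        self.wfile.write(reply + b"\n")

# Listen on the LAN; any TCP message to port 9000 queues one job.
with socketserver.TCPServer(("0.0.0.0", 9000), TriggerHandler) as srv:
    srv.serve_forever()
```

Something like echo go | nc <host> 9000 from another machine on the network would then queue one job per message; an external queue in front of this is still the right call once volume grows.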