Reddit ComfyUI workflows: collected tips, starting workflows, and community notes.


  • Welcome to the unofficial ComfyUI subreddit. Please share your tips, tricks, and workflows for using this software to create your AI art, and please keep posted images SFW. A lot of people are just discovering this technology and want to show off what they created; belittling their efforts will get you banned. And above all, BE NICE. More info: https://rtech.support/docs
  • In this guide I will try to help you get started and give you some starting workflows to work with. We are aiming to collect a list of 10 cool ComfyUI workflows that you can simply download and try out for yourself.
  • I use the workflow(s) added when you install a node package to get a feel for what the package has to offer, and I use the ComfyUI Manager to look at the various custom nodes available and see what interests me. Eventually you'll find your favorites, which enhance how you want ComfyUI to work for you. ComfyUI could also ship workflow screenshots, like the examples repo does, to demonstrate possible usage and the variety of extensions.
  • A warning on downloaded workflows: I downloaded tons of them, but only around 10% work. I ran Install Missing Custom Nodes, Update All, and so on, but there are many issues every time I load them, and they look pretty complicated to solve. I'd venture to say that 90% of the workflows out there are in this state.
  • MoonRide workflow v1: I played for a few days with ComfyUI and SDXL 1.0, did some experiments, and came up with a reasonably simple yet pretty flexible and powerful workflow I use myself. My primary goal was to fully utilise the two-stage architecture of SDXL, so I have the base and refiner models working as stages in latent space.
  • Here are approx. 150 workflow examples of things I created with ComfyUI and AI models from Civitai. I moved my workflow host to https://openart.ai/profile/neuralunk?sort=most_liked. Hope you like some of them.
  • My actual workflow file is a little messed up at the moment, and I don't like sharing workflow files that people can't understand. My process is a bit particular to my needs, and the whole power of ComfyUI is that you create something that fits your needs. But let me know if you need help replicating some of the concepts in my process.
  • Hey everyone, I'm looking to set up a ComfyUI workflow to colorize, animate, and upscale manga pages, and I'd welcome thoughts from others to help guide me on the right path: colorize the manga pages, and use Canny ControlNet to isolate the text elements (speech bubbles, Japanese action characters, etc.) from each panel so they aren't altered.
  • In ComfyUI, go into Settings and enable the dev mode options. That gives you a Save (API Format) option on the main menu; save your workflow in this format, which is different from the normal JSON workflows, when you want to run it programmatically. Note that when you save a workflow you are actually "downloading" the JSON file, so it goes to your browser's default download folder.
  • Is there a way to load the workflow from an image within ComfyUI? Yes. Well, I feel dumb: I was unaware that the metadata of the generated files contains the entire workflow, so dropping a generated image onto the canvas restores the graph that made it. Thanks for the responses. (A sketch of reading the embedded graph back out programmatically follows this list.)
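Since pulling the graph back out of an image comes up often, here is a minimal sketch, assuming Pillow is installed and the PNG was written by ComfyUI's stock SaveImage node, which stores the UI graph as JSON in the "workflow" PNG text chunk (and the API-format graph under "prompt"). The filenames are placeholders.

    import json
    from PIL import Image

    def extract_workflow(png_path):
        """Return the UI-format workflow embedded in a ComfyUI PNG, or None."""
        img = Image.open(png_path)
        raw = img.info.get("workflow")  # PNG text chunk written by SaveImage
        return json.loads(raw) if raw else None

    wf = extract_workflow("ComfyUI_00001_.png")  # placeholder filename
    if wf:
        with open("recovered_workflow.json", "w") as f:
            json.dump(wf, f, indent=2)  # drag this file back into ComfyUI to load it

Dragging the original image straight onto the ComfyUI canvas does the same thing interactively; a script like this is mainly useful for harvesting workflows from images in bulk.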
  • A few months ago, I suggested the possibility of creating a frictionless mechanism to turn ComfyUI workflows (no matter how complex) into simple, customizable front-ends for end-users. This is an interesting implementation of that idea, with a lot of potential. For example, it would be very cool if one could place the node numbers on a grid (of customizable size) to define the positions of the controls.
  • There is also a tool that works by converting your workflow.json files into an executable Python script that can run without launching the ComfyUI server; it encapsulates the difficulties and idiosyncrasies of Python programming by breaking the problem down into units. Potential use cases include streamlining the process of creating a lean app or pipeline deployment that uses a ComfyUI workflow, and creating programmatic experiments for various prompt/parameter values. (A sketch of the plain HTTP alternative follows this list.)
  • Not a specialist, just a knowledgeable beginner, but the process of building and rebuilding my own workflows with the new things I've learned has taught me a lot.
  • Hey Reddit! I built a free website where you can share and discover thousands of ComfyUI workflows: https://comfyworkflows.com/. How it works: download any image from the site and drop it into ComfyUI to load its workflow.
  • AnimateDiff in ComfyUI is an amazing way to generate AI videos (for 12 GB VRAM the maximum is about 720p resolution). [If for some reason you want to run something that is less than 16 frames long, all you need is this part of the workflow.] Also worth a look: the SDXL Default workflow, a great starting point for using txt2img with SDXL.
  • To install the ReActor node, try installing it directly via the ComfyUI Manager: open the Manager, click Install Custom Nodes, and search for "reactor". Once installed, download the required files and add them to the appropriate folders.
  • Downloaded a workflow that works very well for me, but it only works with Illustrious; with Pony it ignores large parts of the prompt, even though Pony LoRAs work with it under Illustrious.
  • I am very interested in shifting from automatic1111 to working with ComfyUI. I have seen a couple of templates on GitHub and some more on Civitai; can anyone recommend the best source for ComfyUI templates? Is there a good set for doing standard tasks from automatic1111? Is there a version of Ultimate SD Upscale that has been ported to ComfyUI?
  • Usually, or almost always, I like to inpaint the face, or, depending on the image I am making, I know what I want to inpaint: there is always something with a high probability of needing inpainting. So I do it automatically by using Grounding DINO with Segment Anything, keep it ready in the workflow (a workflow specific to the picture I am making), and feed the result into the Impact Pack nodes.
  • I would like to further modify the ComfyUI workflow for the aforementioned "Portal" scene in a way that lets me use single images in ControlNet, the same way that repo does (by frame-labeled filenames, etc.).
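For the "lean app or pipeline" use case above, you don't strictly need a converter: a running ComfyUI instance exposes an HTTP endpoint you can queue graphs against, as in the API script examples that ship with ComfyUI. A minimal sketch, assuming ComfyUI is listening on its default address (127.0.0.1:8188) and that workflow_api.json was exported with the dev-mode Save (API Format) option:

    import json
    from urllib import request

    def queue_workflow(workflow):
        """POST an API-format graph to ComfyUI's /prompt endpoint for execution."""
        payload = json.dumps({"prompt": workflow}).encode("utf-8")
        req = request.Request(
            "http://127.0.0.1:8188/prompt",  # default ComfyUI server address
            data=payload,
            headers={"Content-Type": "application/json"},
        )
        with request.urlopen(req) as resp:
            print(resp.read().decode())  # response includes the queued prompt_id

    with open("workflow_api.json") as f:  # exported via Save (API Format)
        queue_workflow(json.load(f))

Looping over this call while editing prompt or seed values in the loaded dict is the simplest way to run the "programmatic experiments" mentioned above.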
INITIAL COMFYUI SETUP and BASIC WORKFLOW

  • ComfyUI is a completely different conceptual approach to generative art. Start by loading up your standard workflow: checkpoint, KSampler, positive prompt, negative prompt, etc. Then add in the parts for a LoRA, a ControlNet, and an IPAdapter. (A sketch of what this basic chain looks like in API format follows below.)
  • From the vid2vid guide: run the step 1 workflow ONCE; all you need to change is where the original frames are and the dimensions of the output that you wish to have.
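To make the "standard workflow" above concrete, here is a hedged sketch of what the corresponding API-format export tends to look like: checkpoint, positive and negative prompts, empty latent, KSampler, VAE decode, save. The node ids, checkpoint filename, and prompt text are placeholders; your own Save (API Format) export will differ in details but follow this shape.

    import json

    # Each key is a node id; links are ["<source node id>", <output index>].
    graph = {
        "1": {"class_type": "CheckpointLoaderSimple",
              "inputs": {"ckpt_name": "sd_xl_base_1.0.safetensors"}},  # placeholder
        "2": {"class_type": "CLIPTextEncode",  # positive prompt
              "inputs": {"text": "a watercolor fox", "clip": ["1", 1]}},
        "3": {"class_type": "CLIPTextEncode",  # negative prompt
              "inputs": {"text": "blurry, lowres", "clip": ["1", 1]}},
        "4": {"class_type": "EmptyLatentImage",
              "inputs": {"width": 1024, "height": 1024, "batch_size": 1}},
        "5": {"class_type": "KSampler",
              "inputs": {"model": ["1", 0], "positive": ["2", 0],
                         "negative": ["3", 0], "latent_image": ["4", 0],
                         "seed": 42, "steps": 20, "cfg": 7.0,
                         "sampler_name": "euler", "scheduler": "normal",
                         "denoise": 1.0}},
        "6": {"class_type": "VAEDecode",
              "inputs": {"samples": ["5", 0], "vae": ["1", 2]}},
        "7": {"class_type": "SaveImage",
              "inputs": {"images": ["6", 0], "filename_prefix": "ComfyUI"}},
    }

    print(json.dumps(graph, indent=2))  # same shape as a Save (API Format) file

A dict like this can be fed straight to the queue_workflow sketch earlier; adding a LoRA means splicing a LoraLoader node between the checkpoint and everything that consumes its MODEL and CLIP outputs, and a ControlNet or IPAdapter hooks in the same way on the conditioning and model links.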