What is ComfyUI? GitHub examples


ComfyUI breaks down a workflow into rearrangeable elements so you can easily make your own. By chaining different blocks (called nodes) together, you can construct an image generation workflow. Mar 13, 2023 · You can get an example of the json_data_object by enabling Dev Mode in the ComfyUI settings, and then clicking the newly added export button. This is a simple custom node for ComfyUI which makes it easier to generate images of actual couples. Here is an example of how to use the Canny ControlNet. Here is an example of how to use the Inpaint ControlNet; the example input image can be found here. This node also allows the use of LoRAs just by typing <lora:SDXL/16mm_film_style.safetensors:0.7> to load a LoRA with 70% strength. Reload the ComfyUI page after the update. Note that we use a denoise value of less than 1.0. # The original idea has been adapted and extended to fit the current project's needs. Below is an example video generated using AnimateLCM-FaceID.safetensors. If you have another Stable Diffusion UI you might be able to reuse the dependencies. The examples below are accompanied by a tutorial in my YouTube video. The first thing I always check when I want to install something is the GitHub page of the program I want. The KLing AI API node is built on top of KLing AI. The KSampler is the core of any workflow and can be used to perform text-to-image and image-to-image generation tasks. This example contains 4 images composited together. This is a side project to experiment with using workflows as components. Put the model file in the folder ComfyUI > models > checkpoints. Download it and rename it to lcm_lora_sdxl.safetensors. # Many thanks to 2kpr for the original concept and implementation of memory-efficient offloading. For some workflow examples, and to see what ComfyUI can do, you can check out: ComfyUI Examples. I think the old repo isn't good enough to maintain.
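The Dev Mode export mentioned above produces an API-format workflow JSON that can be queued over ComfyUI's HTTP endpoint. A minimal sketch (assuming a default local server at 127.0.0.1:8188; the tiny workflow dict is a placeholder for a real export, not a runnable graph):

```python
import json
import urllib.request

def build_payload(workflow: dict, client_id: str) -> dict:
    """Wrap an API-format workflow export in the body expected by POST /prompt."""
    return {"prompt": workflow, "client_id": client_id}

def queue_prompt(workflow: dict, client_id: str = "example-client",
                 server: str = "127.0.0.1:8188") -> dict:
    """Send the workflow to a running ComfyUI instance and return its JSON response."""
    data = json.dumps(build_payload(workflow, client_id)).encode("utf-8")
    req = urllib.request.Request(f"http://{server}/prompt", data=data,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())

# Shape of the request body (the workflow contents come from the Dev Mode export):
payload = build_payload({"3": {"class_type": "KSampler", "inputs": {}}}, "example-client")
```

The response from /prompt contains a prompt_id you can use to track execution.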
Welcome to the ComfyUI Serving Toolkit, a powerful tool for serving image generation workflows in Discord and other platforms (soon). Could this repository also add this feature? This way, we wouldn't need to search for workflows each time, but could instead find the relevant functionality directly within the ComfyUI interface. YOU NEED TO REMOVE comfyui_controlnet_preprocessors BEFORE USING THIS REPO. These are examples demonstrating the ConditioningSetArea node. - comfyanonymous/ComfyUI May 9, 2025 · What is ComfyUI? ComfyUI_examples Audio Examples ACE Step Model. Chroma. The most powerful and modular diffusion model GUI, API and backend with a graph/nodes interface. In the above example the first frame will be cfg 1.0, the middle frame 1.75, and the last frame 2.0 (the cfg set in the sampler). Wan 2.1 Models. The important thing with this model is to give it long descriptive prompts. Launch ComfyUI by running python main.py. Here, you'll find step-by-step instructions, in-depth explanations of key concepts, and practical examples that demystify the complex processes within ComfyUI. This is what the workflow looks like in ComfyUI: ComfyUI Unique3D is a set of custom nodes that run AiuniAI/Unique3D inside ComfyUI - jtydhr88/ComfyUI-Unique3D. Mar 2, 2025 · ComfyUI: An intuitive interface that makes interacting with your workflows a breeze. The Wan 2.1 model, open-sourced by Alibaba in February 2025, is a benchmark model in the field of video generation, covering various tasks including text-to-video (T2V) and image-to-video (I2V). The SDXL base checkpoint can be used like any regular checkpoint in ComfyUI. Nvidia Cosmos is a family of “World Models”. - comfyanonymous/ComfyUI May 12, 2025 · Flux. HiDream.
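The video examples above ramp cfg linearly from the first frame to the last. A sketch of that interpolation in plain Python — illustrative only, not ComfyUI's implementation:

```python
def frame_cfgs(min_cfg: float, max_cfg: float, num_frames: int) -> list[float]:
    """Linearly interpolate a cfg value for each frame of a video batch."""
    if num_frames == 1:
        return [min_cfg]
    step = (max_cfg - min_cfg) / (num_frames - 1)
    return [min_cfg + step * i for i in range(num_frames)]

# The first frame gets the low cfg, the last frame the sampler's cfg:
ramp = frame_cfgs(1.0, 2.0, 5)   # [1.0, 1.25, 1.5, 1.75, 2.0]
```

Frames further from the init frame thus get a gradually higher cfg, matching the behavior described in the text.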
It stitches together an AI-generated horizontal panorama of a landscape depicting different seasons. Area composition with Anything-V3 + a second pass with AbyssOrangeMix2_hard.safetensors. If they are not already in your ComfyUI/models/text_encoders/ directory you can find them at this link. You can load these images in ComfyUI to get the full workflow. ImageAssistedCFGGuider: Samples the conditioning, then adds in … Click Manager > Update All. 1 background image and 3 subjects. I noticed that it seems necessary to add the corresponding workflows to the example_workflows directory. A simple interface demo showing how you can link Gradio and ComfyUI together. Here is an example for outpainting: Redux SD3 Examples, SD3.5b. This way frames further away from the init frame get a gradually higher cfg. Welcome to the ComfyUI Community Docs! This is the community-maintained repository of documentation related to ComfyUI, a powerful and modular stable diffusion GUI and backend. For example, if `FUNCTION = "execute"` then it will run Example().execute(). Then I may discover that ComfyUI on Windows works only with Nvidia cards and AMD needs directml. See what ComfyUI can do with the example workflows. Mar 4, 2024 · In ComfyUI you can see reinvented details (the wiper blades and door handle are quite different from the real photo). In the real photo the car has protective white paper on the hood that disappears in the ComfyUI image but is visible in the Replicate one; the wheels are covered by plastic that you can see in the Replicate upscale, but not in the ComfyUI one. You can serve on Discord. ComfyUI follows a weekly release cycle every Friday, with three interconnected repositories: ComfyUI Core (releases a new stable version that serves as the foundation for the desktop release), ComfyUI Desktop (builds a new release using the latest stable core version), and ComfyUI Frontend (weekly frontend updates are merged into the core). Sep 3, 2024 · If your interface has a fixed form, you need to extract the prompt in the form of an API export from the corresponding workflow, modify that prompt, and then send the API request.
This toolkit is designed to simplify the process of serving your ComfyUI workflow, making image generation bots easier than ever before. The a1111 UI is actually doing something like this (but across all the tokens): (masterpiece:0.98) (best:1.06) (quality:1.14) (girl:0.81). Since general shapes like poses and subjects are denoised in the first sampling steps, this lets us for example position subjects with specific poses anywhere on the image while keeping a great amount of consistency. If you don't have t5xxl_fp16.safetensors, you can find it in this repo. ComfyUI-TeaCache is easy to use: simply connect the TeaCache node with the ComfyUI native nodes for seamless usage. DocVQA allows you to ask questions about the content of document images, and the model will provide answers based on the visual and textual information in the document. From understanding its unique node-based interface to mastering intricate workflows for generating stunning images, each tutorial is crafted to enhance your proficiency. For example, if you for some reason do not want the advanced features of PCTextEncode, use NODE(CLIPTextEncode) in the prompt and you'll still get scheduling with ComfyUI's regular TE node. Download the text encoder files: clip_l_hidream.safetensors. Here's a simple workflow in ComfyUI to do this with basic latent upscaling: Non-latent Upscaling. The sample txt_2_img is given. HiDream I1 is a state-of-the-art image diffusion model. So I can't test it right now. Apr 22, 2024 · ComfyUI is a web-based Stable Diffusion interface optimized for workflow customization. It's a modular framework designed to enhance the user experience and productivity when working with ComfyUI. See what ComfyUI can do with the example workflows. There is a high possibility that the existing components created may not be compatible. ComfyUI nodes and helper nodes for different tasks. Builds a new release using the latest stable core version; ComfyUI Frontend. Includes example workflows.
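The difference between the two UIs is that a1111 rescales token weights so their mean is 1, while ComfyUI uses them exactly as written. An illustrative reimplementation of that averaging idea — not code from either UI:

```python
def a1111_effective_weights(weights: list[float]) -> list[float]:
    """a1111 rescales token weights so their mean is 1; ComfyUI uses them as-is."""
    mean = sum(weights) / len(weights)
    return [w / mean for w in weights]

# (masterpiece:1.2) (best:1.3) (quality:1.4) girl  ->  "girl" has implicit weight 1.0
effective = a1111_effective_weights([1.2, 1.3, 1.4, 1.0])
rounded = [round(w, 2) for w in effective]   # ~ [0.98, 1.06, 1.14, 0.82]
```

Note the last value comes out as ~0.816, which the figures above truncate to 0.81.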
For these examples I have renamed the files by adding stable_cascade_ in front of the filename, for example: stable_cascade_canny.safetensors, stable_cascade_inpainting.safetensors. The only important thing is that for optimal performance the resolution should be set to 1024x1024 or other resolutions with the same amount of pixels but a different aspect ratio. Either the model passes instructions when there is no prompt, or ConditioningZeroOut doesn't work and zero doesn't mean zero. There are always a readme and instructions. Features: a nodes/graph/flowchart interface to experiment and create complex Stable Diffusion workflows without needing to code anything. The aim of this page is to get you up and running with ComfyUI, running your first gen, and providing some suggestions for the next steps to explore. A repository of well-documented, easy-to-follow workflows for ComfyUI - cubiq/ComfyUI_Workflows. This repository showcases an example of how to create a ComfyUI app that can generate custom profile pictures for your social media. Step 2: Update ComfyUI. Communicate with ComfyUI via API and Websocket. Some commonly used blocks are loading a checkpoint model, entering a prompt, specifying a sampler, etc. ComfyUI is a no-code interface designed to simplify interactions with complex AI models, particularly those used in image and video generation. GeometricCFGGuider: Samples the two conditionings, then blends between them using a user-chosen alpha. Contribute to comfyanonymous/ComfyUI_examples development by creating an account on GitHub. LCM models are special models that are meant to be sampled in very few steps. For Flux or other models, the first step is to get the text encoders (clip_l.safetensors and t5xxl) if you don't have them already. Instead, you can use Impact/Inspire Pack's KSampler with Negative Cond Placeholder. This extension, as an extension of the Proof of Concept, lacks many features, is unstable, and has many parts that do not function properly. Feb 21, 2025 · ComfyUI is a node-based GUI for Stable Diffusion.
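The stable_cascade_ renaming step can be scripted. A small sketch — the model directory in the comment is an assumption about a typical install, not a path from the text:

```python
from pathlib import Path

def prefixed_name(filename: str, prefix: str = "stable_cascade_") -> str:
    """Return the filename with the prefix added, unless it already starts with it."""
    return filename if filename.startswith(prefix) else prefix + filename

# e.g. renaming downloaded files in place (path is hypothetical):
# for f in Path("ComfyUI/models/controlnet").glob("*.safetensors"):
#     f.rename(f.with_name(prefixed_name(f.name)))
```

Making the helper idempotent avoids double-prefixing files on a second run.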
A custom node is defined using a Python class, which must include these four things: CATEGORY, which specifies where in the add-new-node menu the custom node will be located; INPUT_TYPES, a class method defining what inputs the node will take (see later for details of the dictionary returned); RETURN_TYPES, which defines what outputs the node will produce; and FUNCTION, the name of the function to execute. Apr 14, 2025 · Contribute to cubiq/ComfyUI_IPAdapter_plus development by creating an account on GitHub. comfyui-example. This image contains 4 different areas: night, evening, day, morning. Some code bits are inspired by other modules, some are custom-built for ease of use and incorporation with PonyXL v6. Nov 29, 2023 · Yes, I want to build a GUI using Vue that grabs images created in the input or output folders, and then lets the users call the API by filling out JSON templates that use the assets already in the ComfyUI library. Usually it's a good idea to lower the weight to at least 0.8. Use an LLM to generate the code you need, paste it in the node and voila!! You have your custom node which does exactly what you need. For example: 896x1152 or 1536x640 are good resolutions. Download it, rename it to lcm_lora_sdxl.safetensors, and put it in your ComfyUI/models/loras directory. Examples of ComfyUI workflows. Our API is designed to help developers focus on creating innovative AI experiences without the burden of managing GPU infrastructure. Then I may discover that ComfyUI on Windows works only with Nvidia cards and AMD needs directml, which is slow, etc. But it takes 670 seconds to render one example image of a galaxy in a bottle. Wan 2.1 ComfyUI install guidance, workflow and example. If you want to draw two different characters together without blending their features, you could try checking out this custom node.
Examples of ComfyUI workflows. Follow their code on GitHub. Flux.1-dev: An open-source text-to-image model that powers your conversions. You don't need to know how to write Python code yourself. SDXL Examples. This is a custom node for ComfyUI that allows you to use the KLing AI API directly in ComfyUI. The noise parameter is an experimental exploitation of the IPAdapter models. Windows (ComfyUI portable): python -m pip install -r ComfyUI\custom_nodes\ComfyUI-KLingAI-API\requirements.txt. The nodes are all called "Simple String Repository". Dec 2, 2024 · I pulled the latest ComfyUI version and ran the inference with the default "Load CLIP" node with t5xxl_fp16.safetensors. The dev model gives me what looks like random RGB noise. ComfyUI offers this option through the "Latent From Batch" node. Follow the ComfyUI manual installation instructions for Windows and Linux. This project is designed to demonstrate the integration and utilization of the ComfyDeploy SDK within a Next.js application. It also demonstrates how you can run comfy workflows behind a user interface - synthhaven/learn_comfyui_apps. Many optimizations: only re-executes the parts of the workflow that change between executions. Implementation of MDM, MotionDiffuse and ReMoDiffuse into ComfyUI - Fannovel16/ComfyUI-MotionDiff. Here is how you use it in ComfyUI (you can drag this into ComfyUI to get the workflow): noise_augmentation controls how closely the model will try to follow the image concept. ComfyUI has a lot of custom nodes but you will still have a special use case for which there are no custom nodes available.
Say, for example, you want to upscale an image, and you may want to use different models to do the upscale. All the images on this page contain metadata, which means they can be loaded into ComfyUI with the Load button (or dragged onto the window) to get the full workflow that was used to create the image. Feb 18, 2025 · Using CFG means doing two passes through the model on each step, so it's a lot slower and costs more memory. I'm running it using an RTX 4070 Ti SUPER and the system has 128GB of RAM. A very common practice is to generate a batch of 4 images and pick the best one to be upscaled and maybe apply some inpainting to it. So you'd expect to get no images. This ComfyUI node setup demonstrates how the Stable Diffusion conditioning mechanism functions. Contribute to zhongpei/comfyui-example development by creating an account on GitHub. Here is an example you can drag into ComfyUI for inpainting; a reminder that you can right-click images in the "Load Image" node and "Open in MaskEditor". I go to the ComfyUI GitHub and read the specification and installation instructions. I know there are obviously alternatives like Forge, but being able to make complex custom API workflows and then run them is the draw. This is a rework of comfyui_controlnet_preprocessors based on ControlNet auxiliary models by 🤗. Aug 7, 2024 · I have the same problem on a MacBook Pro M3 Max running macOS Sonoma. There are three variations based on the number of potentially selected strings (Small for 3, no suffix for 5, and Large for 10), and each node has a "compact" version which is worse for automated workflows but more comfortable if you intend to set all selection parameters (1 required, 2 optional) manually. Follow the ComfyUI manual installation instructions for Windows and Linux. The easiest way to update ComfyUI is through the ComfyUI Manager. Flux.1-schnell.
This fork includes support for Document Visual Question Answering (DocVQA) using the Florence2 model. May 12, 2025 · Flux. Nvidia Cosmos Models. Please check: a workflow to generate a cartoonish picture using a model and then upscale it and turn it into a realistic one by applying a different checkpoint and optionally different prompts. LLM Agent Framework in ComfyUI; includes MCP server and Omost. See what ComfyUI can do with the example workflows. LCM Examples. Examples of what is achievable with ComfyUI. Lightricks LTX-Video Model. Hypernetwork Examples. OUTPUT_NODE ([`bool`]): If this node is an output node that outputs a result/image from the graph. This guide is about how to set up ComfyUI on your Windows computer to run Flux. Definition and Features. Aug 2, 2024 · ComfyUI noob here; I have downloaded the fresh ComfyUI Windows portable, and downloaded t5xxl_fp16.safetensors and the VAE to run FLUX.1. ComfyUI Workflow. Weekly frontend updates are merged into the core. Mar 6, 2025 · TeaCache has now been integrated into ComfyUI and is compatible with the ComfyUI native nodes. Contribute to gonzalu/ComfyUI_YFG_Comical development by creating an account on GitHub. /interrupt. ComfyUI Custom Nodes. You will first need: Text encoder and VAE. ScaledCFGGuider: Samples the two conditionings, then adds it using a method similar to "Add Trained Difference" from merging models. The video was rendered correctly without noise at the beginning. LCM loras are loras that can be used to convert a regular model to an LCM model. The LCM SDXL lora can be downloaded from here. Download the Lumina 2.0 checkpoint model. Or providing additional visual hints through nodes such as the Apply Style Model, Apply ControlNet or unCLIP Conditioning node. The example below shows how to use the KSampler in an image-to-image task, by connecting a model, a positive and negative embedding, and a latent image.
Wan 2.1 is a family of video models. Below is an example video generated using AnimateLCM-FaceID. It covers the following topics: the most powerful and modular diffusion model GUI, API and backend with a graph/nodes interface. For the easy-to-use single-file versions that you can easily use in ComfyUI, see below: FP8 Checkpoint Version. A very short example: (masterpiece:1.2) (best:1.3) (quality:1.4) girl. Txt2Img is achieved by passing an empty image to the sampler node with maximum denoise. I'm working on adding sequential CFG support where it's not done as a batch for less memory use though, which ends up faster as you don't have to use block swap etc. But the new ComfyUI version broke the "CLIPLoader (GGUF)" node. With the schnell model or the fp8 checkpoint I can kinda barely see the image I'm supposed to be getting, but it's super noisy. The lower the value, the more it will follow the concept. ComfyUI currently supports specifically the 7B and 14B text-to-video diffusion models and the 7B and 14B image-to-video diffusion models. Flux is a family of diffusion models by black forest labs. Contribute to kijai/ComfyUI-FramePackWrapper development by creating an account on GitHub. Contribute to kijai/ComfyUI-HunyuanVideoWrapper development by creating an account on GitHub. For more information, see the KLing AI API Documentation. Nov 29, 2023 · There's a basic workflow included in this repo and a few examples in the examples directory. At its core, ComfyUI serves as a bridge between the user and the underlying AI algorithms, making these powerful tools accessible to a much wider audience.
Maybe the "CLIPLoader (GGUF)" node was the cause of the problem. A full list of relevant nodes can be found in the sidebar. This repo contains examples of what is achievable with ComfyUI. All the images in this repo contain metadata, which means they can be loaded into ComfyUI with the Load button (or dragged onto the window) to get the full workflow that was used to create the image. If I wanted to do transitions like in the example above in ComfyUI, I would have to make a few times more nodes just to handle that prompt. This repository provides the official ComfyUI native node for InfiniteYou with FLUX.1. Abstract: Achieving flexible and high-fidelity identity-preserved image generation remains formidable, particularly with advanced Diffusion Transformers (DiTs) like FLUX. All old workflows can still be used. For example, you can use text like a dog, [full body:fluffy:0.3] to use the prompt a dog, full body during the first 30% of sampling and a dog, fluffy during the last 70%. Stable Zero123 is a diffusion model that, given an image with an object and a simple background, can generate images of that object from different angles. Files to Download. Examples of ComfyUI workflows. Examples of such are guiding the process towards certain compositions using the Conditioning (Set Area), Conditioning (Set Mask), or GLIGEN Textbox Apply node.
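The [before:after:t] scheduling syntax above can be illustrated with a tiny parser. This is a simplified sketch of the idea, not ComfyUI's actual prompt-schedule implementation:

```python
import re

def prompt_at(prompt: str, progress: float) -> str:
    """Resolve [before:after:t] schedules for a given sampling progress in [0, 1]."""
    def pick(match: re.Match) -> str:
        before, after, t = match.group(1), match.group(2), float(match.group(3))
        # Use the first text before the switch point, the second after it.
        return before if progress < t else after
    return re.sub(r"\[([^:\[\]]*):([^:\[\]]*):([0-9.]+)\]", pick, prompt)

early = prompt_at("a dog, [full body:fluffy:0.3]", 0.1)   # "a dog, full body"
late = prompt_at("a dog, [full body:fluffy:0.3]", 0.5)    # "a dog, fluffy"
```

A real sampler would evaluate this once per step, re-encoding the prompt whenever the resolved text changes.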
This repo contains examples of what is achievable with ComfyUI. Download the ace_step_v1_3.5b.safetensors model. Install the ComfyUI dependencies. Note that --force-fp16 will only work if you installed the latest pytorch nightly. Hypernetworks are patches applied on the main MODEL, so to use them put them in the models/hypernetworks directory and use the Hypernetwork Loader node like this: you can apply multiple hypernetworks by chaining multiple Hypernetwork Loader nodes in sequence. The prompt used is sourced from OpenAI's Sora: "A stylish woman walks down a Tokyo street filled with warm glowing neon and animated city signage." Contribute to 4rmx/comfyui-api-ws development by creating an account on GitHub. To use it you will need one of the t5xxl text encoder model files that you can find in this repo; fp16 is recommended, and if you don't have that much memory the fp8_scaled files are recommended. Create an account on ComfyDeploy and set up your workflow. Communicate with ComfyUI via API and Websocket. ComfyICU provides a robust REST API that allows you to seamlessly integrate and execute your custom ComfyUI workflows in production environments. Our API is designed to help developers focus on creating innovative AI experiences without the burden of managing GPU infrastructure. The name of the workflow sent in the inputs should be the same as the name of the file (without the .json extension). You will first need: Text encoder and VAE. comfyanonymous has 12 repositories available. Follow their code on GitHub. THESE TWO CONFLICT WITH EACH OTHER.
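Communicating with ComfyUI over websocket means watching the JSON status messages the server pushes (by default at ws://127.0.0.1:8188/ws?clientId=...). A sketch of just the client-side message handling, kept free of any websocket library so it stays self-contained — the connection itself would be opened with a library such as websocket-client:

```python
import json

def is_prompt_done(raw_message: str, prompt_id: str) -> bool:
    """ComfyUI sends an 'executing' message with node == None when a prompt finishes."""
    msg = json.loads(raw_message)
    if msg.get("type") != "executing":
        return False
    data = msg.get("data", {})
    return data.get("node") is None and data.get("prompt_id") == prompt_id

# Two message shapes the server can send while a prompt runs:
done = is_prompt_done('{"type": "executing", "data": {"node": null, "prompt_id": "abc"}}', "abc")
still = is_prompt_done('{"type": "progress", "data": {"value": 3, "max": 20}}', "abc")
```

In a real client you would loop over incoming frames, ignore binary preview frames, and stop once this predicate returns True.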
Apr 7, 2025 · Note for Windows users: there's a standalone build available on the ComfyUI page, which bundles Python and dependencies for a more straightforward setup. This is a model that is modified from Flux and has had some changes in the architecture. The Wan2.1 model, open-sourced by Alibaba in February 2025, is a benchmark model in the field of video generation. Aug 2, 2024 · Good; I used CFG but it made the image blurry, and I used the regular KSampler node. The total number of steps is 16. It is licensed under the Apache 2.0 license and offers two versions: 14B (14 billion parameters) and 1.3B (1.3 billion parameters). Let's try the model without the clip. 3D Examples: Stable Zero123. Feb 8, 2025 · Run Lumina Image 2.0 on ComfyUI. Step 1: Download the Lumina model. ComfyUI Manager and Custom-Scripts: these tools come pre-installed to enhance the functionality and customization of your applications. # This implementation is inspired by and based on the work of 2kpr. Note that in ComfyUI txt2img and img2img are the same node. LTX-Video is a very efficient video model by Lightricks. Regular KSampler is incompatible with FLUX. Here is an example. - comfyanonymous/ComfyUI ReActorBuildFaceModel Node got a "face_model" output to provide a blended face model directly to the main Node: Basic workflow 💾. Face Masking feature is available now; just add the "ReActorMaskHelper" Node to the workflow and connect it as shown below. There are many workflows included in the examples directory. - teward/ComfyUI-Helper-Nodes. Contribute to kijai/ComfyUI-MimicMotionWrapper development by creating an account on GitHub. ConditioningZeroOut is supposed to ignore the prompt no matter what is written, so you'd expect to get no images. But you do get images.