Using ComfyUI on ModelScope to Play with AIGC

Summary: ComfyUI is a powerful, highly modular interface and backend for AIGC image and video generation.

1. Introduction


ComfyUI is a powerful, highly modular interface and backend for AIGC image and video generation.

image.png

On the ComfyUI front end, users design and execute AIGC text-to-image or text-to-video pipelines through a node/flowchart-based interface.


For more information, visit ComfyUI's open-source code repository.




2. Best Practices

2.1 Environment setup and installation:

  1. Python 3.10 or later
  2. PyTorch 1.12 or later (2.0 or later recommended)
  3. CUDA 11.4 or later recommended
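
A quick way to confirm that the environment meets these requirements before continuing (a minimal sketch using only the standard sys module and the installed torch build):

import sys
import torch

# Compare the running environment against the requirements listed above.
print("Python:", sys.version.split()[0])          # expect >= 3.10
print("PyTorch:", torch.__version__)              # expect >= 1.12, ideally >= 2.0
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("CUDA version:", torch.version.cuda)    # expect >= 11.4
    print("GPU:", torch.cuda.get_device_name(0))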

This article runs everything on the free GPU compute that the ModelScope community provides:

image.png


Developers can also use the official ModelScope images to run it in the cloud or on their own devices.


GPU environment images (Python 3.10):

registry.cn-beijing.aliyuncs.com/modelscope-repo/modelscope:ubuntu22.04-cuda12.1.0-py310-torch2.1.2-tf2.14.0-1.13.1
registry.cn-hangzhou.aliyuncs.com/modelscope-repo/modelscope:ubuntu22.04-cuda12.1.0-py310-torch2.1.2-tf2.14.0-1.13.1
registry.us-west-1.aliyuncs.com/modelscope-repo/modelscope:ubuntu22.04-cuda12.1.0-py310-torch2.1.2-tf2.14.0-1.13.1
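
If you run one of these images on your own machine rather than in the ModelScope notebook environment, a typical way to start it looks like the following (a sketch only; it assumes Docker with the NVIDIA container toolkit is installed, and it forwards ComfyUI's default port 8188):

docker run --gpus all -it -p 8188:8188 \
  registry.cn-beijing.aliyuncs.com/modelscope-repo/modelscope:ubuntu22.04-cuda12.1.0-py310-torch2.1.2-tf2.14.0-1.13.1 \
  /bin/bash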



Download and deploy ComfyUI

Clone the code and install its dependencies:

#@title Environment Setup
OPTIONS = {}
UPDATE_COMFY_UI = True  #@param {type:"boolean"}
OPTIONS['UPDATE_COMFY_UI'] = UPDATE_COMFY_UI
WORKSPACE = "/mnt/workspace/ComfyUI"
%cd /mnt/workspace/
![ ! -d $WORKSPACE ] && echo "-= Initial setup ComfyUI =-" && git clone https://github.com/comfyanonymous/ComfyUI
%cd $WORKSPACE
if OPTIONS['UPDATE_COMFY_UI']:
  !echo "-= Updating ComfyUI =-"
  !git pull
!echo "-= Install dependencies =-"
!pip install -r requirements.txt



Download some classic text-to-image models (SD base models, LoRAs, ControlNets, and so on) and place them in the corresponding subdirectories under the models directory. Pick whichever models you want to use and download them.

# Checkpoints
### SDXL
### I recommend these workflow examples: https://comfyanonymous.github.io/ComfyUI_examples/sdxl/
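### NOTE: if you uncomment any of the download lines below, keep the URL wrapped in double quotes
### (as in the active lines); otherwise the shell treats the "&" in the query string as a
### background operator and the download breaks.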
#!wget -c "https://modelscope.cn/api/v1/models/AI-ModelScope/stable-diffusion-xl-base-1.0/repo?Revision=master&FilePath=sd_xl_base_1.0.safetensors" -P ./models/checkpoints/
#!wget -c "https://modelscope.cn/api/v1/models/AI-ModelScope/stable-diffusion-xl-refiner-1.0/repo?Revision=master&FilePath=sd_xl_refiner_1.0.safetensors" -P ./models/checkpoints/
# SDXL ReVision
#!wget -c https://huggingface.co/comfyanonymous/clip_vision_g/resolve/main/clip_vision_g.safetensors -P ./models/clip_vision/
# SD1.5
#!wget -c "https://modelscope.cn/api/v1/models/AI-ModelScope/stable-diffusion-v1-5/repo?Revision=master&FilePath=v1-5-pruned-emaonly.ckpt" -P ./models/checkpoints/
# SD2
#!wget -c "https://modelscope.cn/api/v1/models/AI-ModelScope/stable-diffusion-2-1-base/repo?Revision=master&FilePath=v2-1_512-ema-pruned.safetensors" -P ./models/checkpoints/
#!wget -c "https://modelscope.cn/api/v1/models/AI-ModelScope/stable-diffusion-2-1/repo?Revision=master&FilePath=v2-1_768-ema-pruned.safetensors" -P ./models/checkpoints/
# Some SD1.5 anime style
#!wget -c https://modelscope.cn/api/v1/models/AI-ModelScope/Orange-Mixs/repo?Revision=master&FilePath=Models%2FAbyssOrangeMix2%2FAbyssOrangeMix2_hard.safetensors -P ./models/checkpoints/
#!wget -c https://modelscope.cn/api/v1/models/AI-ModelScope/Orange-Mixs/repo?Revision=master&FilePath=Models%2FAbyssOrangeMix3%2FAOM3A1_orangemixs.safetensors -P ./models/checkpoints/
#!wget -c https://modelscope.cn/api/v1/models/AI-ModelScope/Orange-Mixs/repo?Revision=master&FilePath=Models%2FAbyssOrangeMix3%2FAOM3A3_orangemixs.safetensors -P ./models/checkpoints/
!wget -c "https://modelscope.cn/api/v1/models/AI-ModelScope/anything-v3.0/repo?Revision=master&FilePath=Anything-V3.0-pruned-fp16.safetensors" -P ./models/checkpoints/
# Waifu Diffusion 1.5 (anime style SD2.x 768-v)
#!wget -c https://modelscope.cn/api/v1/models/AI-ModelScope/wd-1-5-beta3/repo?Revision=master&FilePath=wd-illusion-fp16.safetensors -P ./models/checkpoints/
# unCLIP models
#!wget -c https://modelscope.cn/api/v1/models/AI-ModelScope/illuminatiDiffusionV1_v11_unCLIP/repo?Revision=master&FilePath=illuminatiDiffusionV1_v11-unclip-h-fp16.safetensors -P ./models/checkpoints/
#!wget -c https://modelscope.cn/api/v1/models/AI-ModelScope/wd-1.5-beta2_unCLIP/repo?Revision=master&FilePath=wd-1-5-beta2-aesthetic-unclip-h-fp16.safetensors -P ./models/checkpoints/
# VAE
!wget -c "https://modelscope.cn/api/v1/models/AI-ModelScope/sd-vae-ft-mse-original/repo?Revision=master&FilePath=vae-ft-mse-840000-ema-pruned.safetensors" -P ./models/vae/
#!wget -c https://modelscope.cn/api/v1/models/AI-ModelScope/Orange-Mixs/repo?Revision=master&FilePath=VAEs%2Forangemix.vae.pt -P ./models/vae/
#!wget -c https://huggingface.co/hakurei/waifu-diffusion-v1-4/resolve/main/vae/kl-f8-anime2.ckpt -P ./models/vae/
# Loras
#!wget -c https://civitai.com/api/download/models/10350 -O ./models/loras/theovercomer8sContrastFix_sd21768.safetensors #theovercomer8sContrastFix SD2.x 768-v
#!wget -c https://civitai.com/api/download/models/10638 -O ./models/loras/theovercomer8sContrastFix_sd15.safetensors #theovercomer8sContrastFix SD1.x
#!wget -c https://modelscope.cn/api/v1/models/AI-ModelScope/stable-diffusion-xl-base-1.0/repo?Revision=master&FilePath=sd_xl_offset_example-lora_1.0.safetensors -P ./models/loras/ #SDXL offset noise lora
# T2I-Adapter
#!wget -c "https://modelscope.cn/api/v1/models/AI-ModelScope/T2I-Adapter/repo?Revision=master&FilePath=models%2Ft2iadapter_depth_sd14v1.pth -P ./models/controlnet/"
#!wget -c https://modelscope.cn/api/v1/models/AI-ModelScope/T2I-Adapter/repo?Revision=master&FilePath=models%2F -P ./models/controlnet/
#!wget -c https://modelscope.cn/api/v1/models/AI-ModelScope/T2I-Adapter/repo?Revision=master&FilePath=models%2Ft2iadapter_sketch_sd14v1.pth -P ./models/controlnet/
#!wget -c https://modelscope.cn/api/v1/models/AI-ModelScope/T2I-Adapter/repo?Revision=master&FilePath=models%2Ft2iadapter_keypose_sd14v1.pth -P ./models/controlnet/
#!wget -c https://modelscope.cn/api/v1/models/AI-ModelScope/T2I-Adapter/repo?Revision=master&FilePath=models%2Ft2iadapter_openpose_sd14v1.pth -P ./models/controlnet/
#!wget -c https://modelscope.cn/api/v1/models/AI-ModelScope/T2I-Adapter/repo?Revision=master&FilePath=models%2Ft2iadapter_color_sd14v1.pth -P ./models/controlnet/
#!wget -c https://modelscope.cn/api/v1/models/AI-ModelScope/T2I-Adapter/repo?Revision=master&FilePath=models%2Ft2iadapter_canny_sd14v1.pth -P ./models/controlnet/
# T2I Styles Model
#!wget -c https://modelscope.cn/api/v1/models/AI-ModelScope/T2I-Adapter/repo?Revision=master&FilePath=models%2Ft2iadapter_style_sd14v1.pth -P ./models/style_models/
# CLIPVision model (needed for styles model)
#!wget -c https://modelscope.cn/api/v1/models/AI-ModelScope/clip-vit-large-patch14/repo?Revision=master&FilePath=pytorch_model.bin -O ./models/clip_vision/clip_vit14.bin
# ControlNet
#!wget -c https://modelscope.cn/api/v1/models/AI-ModelScope/ControlNet-v1-1_fp16_safetensors/repo?Revision=master&FilePath=control_v11e_sd15_ip2p_fp16.safetensors -P ./models/controlnet/
#!wget -c https://modelscope.cn/api/v1/models/AI-ModelScope/ControlNet-v1-1_fp16_safetensors/repo?Revision=master&FilePath=control_v11e_sd15_shuffle_fp16.safetensors -P ./models/controlnet/
#!wget -c https://modelscope.cn/api/v1/models/AI-ModelScope/ControlNet-v1-1_fp16_safetensors/repo?Revision=master&FilePath=control_v11p_sd15_canny_fp16.safetensors -P ./models/controlnet/
#!wget -c https://modelscope.cn/api/v1/models/AI-ModelScope/ControlNet-v1-1_fp16_safetensors/repo?Revision=master&FilePath=control_v11f1p_sd15_depth_fp16.safetensors -P ./models/controlnet/
#!wget -c https://modelscope.cn/api/v1/models/AI-ModelScope/ControlNet-v1-1_fp16_safetensors/repo?Revision=master&FilePath=control_v11p_sd15_inpaint_fp16.safetensors -P ./models/controlnet/
#!wget -c https://modelscope.cn/api/v1/models/AI-ModelScope/ControlNet-v1-1_fp16_safetensors/repo?Revision=master&FilePath=control_v11p_sd15_lineart_fp16.safetensors -P ./models/controlnet/
#!wget -c https://modelscope.cn/api/v1/models/AI-ModelScope/ControlNet-v1-1_fp16_safetensors/repo?Revision=master&FilePath=control_v11p_sd15_mlsd_fp16.safetensors -P ./models/controlnet/
#!wget -c https://modelscope.cn/api/v1/models/AI-ModelScope/ControlNet-v1-1_fp16_safetensors/repo?Revision=master&FilePath=control_v11p_sd15_normalbae_fp16.safetensors -P ./models/controlnet/
#!wget -c https://modelscope.cn/api/v1/models/AI-ModelScope/ControlNet-v1-1_fp16_safetensors/repo?Revision=master&FilePath=control_v11p_sd15_openpose_fp16.safetensors -P ./models/controlnet/
!wget -c "https://modelscope.cn/api/v1/models/AI-ModelScope/ControlNet-v1-1_fp16_safetensors/repo?Revision=master&FilePath=control_v11p_sd15_scribble_fp16.safetensors" -P ./models/controlnet/
#!wget -c https://modelscope.cn/api/v1/models/AI-ModelScope/ControlNet-v1-1_fp16_safetensors/repo?Revision=master&FilePath=control_v11p_sd15_seg_fp16.safetensors -P ./models/controlnet/
#!wget -c https://modelscope.cn/api/v1/models/AI-ModelScope/ControlNet-v1-1_fp16_safetensors/repo?Revision=master&FilePath=control_v11p_sd15_softedge_fp16.safetensors -P ./models/controlnet/
#!wget -c https://modelscope.cn/api/v1/models/AI-ModelScope/ControlNet-v1-1_fp16_safetensors/repo?Revision=master&FilePath=control_v11p_sd15s2_lineart_anime_fp16.safetensors -P ./models/controlnet/
#!wget -c https://modelscope.cn/api/v1/models/AI-ModelScope/ControlNet-v1-1_fp16_safetensors/repo?Revision=master&FilePath=control_v11u_sd15_tile_fp16.safetensors -P ./models/controlnet/
# ControlNet SDXL
#!wget -c https://modelscope.cn/api/v1/models/AI-ModelScope/control-lora/repo?Revision=master&FilePath=control-LoRAs-rank256%2Fcontrol-lora-canny-rank256.safetensors -P ./models/controlnet/
#!wget -c https://modelscope.cn/api/v1/models/AI-ModelScope/control-lora/repo?Revision=master&FilePath=control-LoRAs-rank256%2Fcontrol-lora-depth-rank256.safetensors -P ./models/controlnet/
#!wget -c https://modelscope.cn/api/v1/models/AI-ModelScope/control-lora/repo?Revision=master&FilePath=control-LoRAs-rank256%2Fcontrol-lora-recolor-rank256.safetensors -P ./models/controlnet/
#!wget -c https://modelscope.cn/api/v1/models/AI-ModelScope/control-lora/repo?Revision=master&FilePath=control-LoRAs-rank256%2Fcontrol-lora-sketch-rank256.safetensors -P ./models/controlnet/
# Controlnet Preprocessor nodes by Fannovel16
#!cd custom_nodes && git clone https://github.com/Fannovel16/comfy_controlnet_preprocessors; cd comfy_controlnet_preprocessors && python install.py
# GLIGEN
#!wget -c https://modelscope.cn/api/v1/models/AI-ModelScope/GLIGEN_pruned_safetensors/repo?Revision=master&FilePath=gligen_sd14_textbox_pruned_fp16.safetensors -P ./models/gligen/
# ESRGAN upscale model
#!wget -c https://modelscope.cn/api/v1/models/AI-ModelScope/RealESRGAN_x4plus/repo?Revision=master&FilePath=RealESRGAN_x4plus.pth -P ./models/upscale_models/
#!wget -c https://modelscope.cn/api/v1/models/AI-ModelScope/Real-ESRGAN/repo?Revision=master&FilePath=RealESRGAN_x2.pth -P ./models/upscale_models/
#!wget -c https://modelscope.cn/api/v1/models/AI-ModelScope/Real-ESRGAN/repo?Revision=master&FilePath=RealESRGAN_x.pth -P ./models/upscale_models/
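
After the downloads finish, a quick way to confirm that each file landed in the expected subdirectory (a minimal check; it assumes you are still inside the ComfyUI workspace from the setup step):

import os

# List what ended up in each model subdirectory under the ComfyUI workspace.
for sub in ("checkpoints", "vae", "controlnet", "loras"):
    path = os.path.join("models", sub)
    files = os.listdir(path) if os.path.isdir(path) else []
    print(f"{sub}: {files}")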



The ModelScope community also hosts a wealth of video-generation models, LoRAs, base models, and identity-related models such as FaceChain and InstantID. If you only need a single model file, open the model's page, go to its files, click the file you want, and then use the Download button in the upper-right corner or copy the download link.
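
These per-file links follow the same URL pattern used throughout this article, so a copied link can also be fetched directly from the notebook (the namespace, model name, file name, and target directory below are placeholders for whatever you copied):

!wget -c "https://modelscope.cn/api/v1/models/<namespace>/<model_name>/repo?Revision=master&FilePath=<file_name>" -P ./models/checkpoints/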

image.png


Run ComfyUI through cloudflared

!wget "https://modelscope.oss-cn-beijing.aliyuncs.com/resource/cloudflared-linux-amd64.deb"
!dpkg -i cloudflared-linux-amd64.deb
import subprocess
import threading
import time
import socket
import urllib.request
def iframe_thread(port):
  while True:
      time.sleep(0.5)
      sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
      result = sock.connect_ex(('127.0.0.1', port))
      if result == 0:
        break
      sock.close()
  print("\nComfyUI finished loading, trying to launch cloudflared (if it gets stuck here cloudflared is having issues)\n")
  p = subprocess.Popen(["cloudflared", "tunnel", "--url", "http://127.0.0.1:{}".format(port)], stdout=subprocess.PIPE, stderr=subprocess.PIPE)
  for line in p.stderr:
    l = line.decode()
    if "trycloudflare.com " in l:
      print("This is the URL to access ComfyUI:", l[l.find("http"):], end='')
    #print(l, end='')
threading.Thread(target=iframe_thread, daemon=True, args=(8188,)).start()
!python main.py --dont-print-server



This brings up a ComfyUI service:

image.png


Click Queue Prompt and it runs stably:

image.png
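
The Queue Prompt button posts the current graph to the ComfyUI backend, so the same run can also be triggered programmatically once the server is up. A minimal sketch, assuming the server is reachable at 127.0.0.1:8188 from a separate process (or via the trycloudflare URL) and that the graph was exported from the UI as workflow_api.json using "Save (API Format)":

import json
import urllib.request

# Load a workflow exported in API format and queue it, which is what Queue Prompt does.
with open("workflow_api.json") as f:
    prompt = json.load(f)

req = urllib.request.Request(
    "http://127.0.0.1:8188/prompt",
    data=json.dumps({"prompt": prompt}).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
print(urllib.request.urlopen(req).read().decode())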


2.2 Loading a ComfyUI workflow

Refer to the project's official examples page to download more ComfyUI workflows.


This article takes ControlNet as an example and downloads controlnet_example from the official site:

image.png


The workflow image looks like this:

image.png


Click Load and upload the image:

image.png


That easily reproduces a ControlNet-enabled workflow. The finished result:

image.png


Note: after uploading the image, check that the model files referenced in the workflow match the names of the models you downloaded; a field such as ckpt_name can be re-linked to the stored model file name with a single click.
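
This works because ComfyUI embeds the workflow graph as JSON in the PNG metadata of the images it saves (and of the example images on the project site). A small sketch for inspecting that metadata, assuming Pillow is installed and the example image is saved locally as controlnet_example.png (a hypothetical file name):

import json
from PIL import Image

# ComfyUI stores the graph in the PNG text chunks ("workflow" and "prompt").
img = Image.open("controlnet_example.png")
workflow_text = img.info.get("workflow")
if workflow_text:
    workflow = json.loads(workflow_text)
    print("Nodes in the embedded workflow:", len(workflow.get("nodes", [])))
else:
    print("No embedded workflow found in this image.")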


2.3 Using ComfyUI Manager and installing custom nodes (AnimateDiff as an example)

Install ComfyUI Manager and AnimateDiff:

#@title Environment Setup
OPTIONS = {}
UPDATE_COMFY_UI = True  #@param {type:"boolean"}
INSTALL_COMFYUI_MANAGER = True  #@param {type:"boolean"}
INSTALL_ANIMATEDIFF = True  #@param {type:"boolean"}
INSTALL_CUSTOM_NODES_DEPENDENCIES = True  #@param {type:"boolean"}
OPTIONS['UPDATE_COMFY_UI'] = UPDATE_COMFY_UI
OPTIONS['INSTALL_COMFYUI_MANAGER'] = INSTALL_COMFYUI_MANAGER
OPTIONS['INSTALL_ANIMATEDIFF'] = INSTALL_ANIMATEDIFF
OPTIONS['INSTALL_CUSTOM_NODES_DEPENDENCIES'] = INSTALL_CUSTOM_NODES_DEPENDENCIES
WORKSPACE = "/mnt/workspace/ComfyUI"
%cd /mnt/workspace/
![ ! -d $WORKSPACE ] && echo "-= Initial setup ComfyUI =-" && git clone https://github.com/comfyanonymous/ComfyUI
%cd $WORKSPACE
if OPTIONS['UPDATE_COMFY_UI']:
  !echo "-= Updating ComfyUI =-"
  !git pull
# Install ComfyUI Manager
if OPTIONS['INSTALL_COMFYUI_MANAGER']:
  %cd custom_nodes
  ![ ! -d ComfyUI-Manager ] && echo "-= Initial setup ComfyUI-Manager =-" && git clone https://github.com/ltdrdata/ComfyUI-Manager
  %cd ComfyUI-Manager
  !git pull
# Install AnimateDiff (it can also be installed from the ComfyUI Manager UI)
if OPTIONS['INSTALL_ANIMATEDIFF']:
  %cd ../
  ![ ! -d ComfyUI-AnimateDiff-Evolved ] && echo "-= Initial setup AnimateDiff =-" && git clone https://github.com/Kosinkadink/ComfyUI-AnimateDiff-Evolved
  %cd ComfyUI-AnimateDiff-Evolved
  !git pull
%cd $WORKSPACE
if OPTIONS['INSTALL_CUSTOM_NODES_DEPENDENCIES']:
  !pwd
  !echo "-= Install custom nodes dependencies =-"
  ![ -f "custom_nodes/ComfyUI-Manager/scripts/colab-dependencies.py" ] && python "custom_nodes/ComfyUI-Manager/scripts/colab-dependencies.py"


Download the model files AnimateDiff needs from the ModelScope community:

#@markdown ###Download standard resources
### SDXL
### I recommend these workflow examples: https://comfyanonymous.github.io/ComfyUI_examples/sdxl/
OPTIONS = {}
#@markdown **Models**
SDXL_1_0_BASE_AND_REFINER = True  #@param {type:"boolean"}
OPTIONS['SDXL_1_0_BASE_AND_REFINER'] = SDXL_1_0_BASE_AND_REFINER
if OPTIONS['SDXL_1_0_BASE_AND_REFINER']:
  !wget -c "https://modelscope.cn/api/v1/models/AI-ModelScope/stable-diffusion-xl-base-1.0/repo?Revision=master&FilePath=sd_xl_base_1.0.safetensors" -P ./models/checkpoints/
  !wget -c "https://modelscope.cn/api/v1/models/AI-ModelScope/stable-diffusion-xl-refiner-1.0/repo?Revision=master&FilePath=sd_xl_refiner_1.0.safetensors" -P ./models/checkpoints/
SD_1_5_MODEL = True  #@param {type:"boolean"}
OPTIONS['SD_1_5_MODEL'] = SD_1_5_MODEL
if OPTIONS['SD_1_5_MODEL']:
  !wget -c "https://modelscope.cn/api/v1/models/AI-ModelScope/stable-diffusion-v1-5/repo?Revision=master&FilePath=v1-5-pruned-emaonly.ckpt" -P ./models/checkpoints/
#@markdown **VAEs**
SDXL_1_0_VAE = True  #@param {type:"boolean"}
OPTIONS['SDXL_1_0_VAE'] = SDXL_1_0_VAE
if OPTIONS['SDXL_1_0_VAE']:
  !wget -c "https://modelscope.cn/api/v1/models/AI-ModelScope/sdxl-vae-fp16-fix/repo?Revision=master&FilePath=diffusion_pytorch_model.safetensors" -O ./models/vae/sdxl-vae-fp16-fix.safetensors #sdxl-vae-fp16-fix.safetensors
SD_1_5_VAE = True  #@param {type:"boolean"}
OPTIONS['SD_1_5_VAE'] = SD_1_5_VAE
if OPTIONS['SD_1_5_VAE']:
  !wget -c "https://modelscope.cn/api/v1/models/AI-ModelScope/sd-vae-ft-mse-original/repo?Revision=master&FilePath=vae-ft-mse-840000-ema-pruned.safetensors" -P ./models/vae/
#@markdown **Controlnets**
SDXL_1_0_CONTROLNETS = True  #@param {type:"boolean"}
OPTIONS['SDXL_1_0_CONTROLNETS'] = SDXL_1_0_CONTROLNETS
if OPTIONS['SDXL_1_0_CONTROLNETS']:
  !wget -c "https://modelscope.cn/api/v1/models/AI-ModelScope/control-lora/repo?Revision=master&FilePath=control-LoRAs-rank256%2Fcontrol-lora-canny-rank256.safetensors" -P ./models/controlnet/
  !wget -c "https://modelscope.cn/api/v1/models/AI-ModelScope/control-lora/repo?Revision=master&FilePath=control-LoRAs-rank256%2Fcontrol-lora-depth-rank256.safetensors" -P ./models/controlnet/
  !wget -c "https://modelscope.cn/api/v1/models/AI-ModelScope/control-lora/repo?Revision=master&FilePath=control-LoRAs-rank256%2Fcontrol-lora-recolor-rank256.safetensors" -P ./models/controlnet/
  !wget -c "https://modelscope.cn/api/v1/models/AI-ModelScope/control-lora/repo?Revision=master&FilePath=control-LoRAs-rank256%2Fcontrol-lora-sketch-rank256.safetensors" -P ./models/controlnet/
SD_1_5_CONTROLNETS = True  #@param {type:"boolean"}
OPTIONS['SD_1_5_CONTROLNETS'] = SD_1_5_CONTROLNETS
if OPTIONS['SD_1_5_CONTROLNETS']:
  !wget -c "https://modelscope.cn/api/v1/models/AI-ModelScope/ControlNet-v1-1_fp16_safetensors/repo?Revision=master&FilePath=control_v11e_sd15_ip2p_fp16.safetensors" -P ./models/controlnet/
  !wget -c "https://modelscope.cn/api/v1/models/AI-ModelScope/ControlNet-v1-1_fp16_safetensors/repo?Revision=master&FilePath=control_v11e_sd15_shuffle_fp16.safetensors" -P ./models/controlnet/
  !wget -c "https://modelscope.cn/api/v1/models/AI-ModelScope/ControlNet-v1-1_fp16_safetensors/repo?Revision=master&FilePath=control_v11p_sd15_canny_fp16.safetensors" -P ./models/controlnet/
  !wget -c "https://modelscope.cn/api/v1/models/AI-ModelScope/ControlNet-v1-1_fp16_safetensors/repo?Revision=master&FilePath=control_v11f1p_sd15_depth_fp16.safetensors" -P ./models/controlnet/
  !wget -c "https://modelscope.cn/api/v1/models/AI-ModelScope/ControlNet-v1-1_fp16_safetensors/repo?Revision=master&FilePath=control_v11p_sd15_inpaint_fp16.safetensors" -P ./models/controlnet/
  !wget -c "https://modelscope.cn/api/v1/models/AI-ModelScope/ControlNet-v1-1_fp16_safetensors/repo?Revision=master&FilePath=control_v11p_sd15_lineart_fp16.safetensors" -P ./models/controlnet/
  !wget -c "https://modelscope.cn/api/v1/models/AI-ModelScope/ControlNet-v1-1_fp16_safetensors/repo?Revision=master&FilePath=control_v11p_sd15_mlsd_fp16.safetensors" -P ./models/controlnet/
  !wget -c "https://modelscope.cn/api/v1/models/AI-ModelScope/ControlNet-v1-1_fp16_safetensors/repo?Revision=master&FilePath=control_v11p_sd15_normalbae_fp16.safetensors" -P ./models/controlnet/
  !wget -c "https://modelscope.cn/api/v1/models/AI-ModelScope/ControlNet-v1-1_fp16_safetensors/repo?Revision=master&FilePath=control_v11p_sd15_openpose_fp16.safetensors" -P ./models/controlnet/
  !wget -c "https://modelscope.cn/api/v1/models/AI-ModelScope/ControlNet-v1-1_fp16_safetensors/repo?Revision=master&FilePath=control_v11p_sd15_scribble_fp16.safetensors" -P ./models/controlnet/
  !wget -c "https://modelscope.cn/api/v1/models/AI-ModelScope/ControlNet-v1-1_fp16_safetensors/repo?Revision=master&FilePath=control_v11p_sd15_seg_fp16.safetensors" -P ./models/controlnet/
  !wget -c "https://modelscope.cn/api/v1/models/AI-ModelScope/ControlNet-v1-1_fp16_safetensors/repo?Revision=master&FilePath=control_v11p_sd15_softedge_fp16.safetensors" -P ./models/controlnet/
  !wget -c "https://modelscope.cn/api/v1/models/AI-ModelScope/ControlNet-v1-1_fp16_safetensors/repo?Revision=master&FilePath=control_v11p_sd15s2_lineart_anime_fp16.safetensors" -P ./models/controlnet/
  !wget -c "https://modelscope.cn/api/v1/models/AI-ModelScope/ControlNet-v1-1_fp16_safetensors/repo?Revision=master&FilePath=control_v11u_sd15_tile_fp16.safetensors" -P ./models/controlnet/
#@markdown **AnimateDiff**
AD_MOTION_MODELS = True  #@param {type:"boolean"}
OPTIONS['AD_MOTION_MODELS'] = AD_MOTION_MODELS
if OPTIONS['AD_MOTION_MODELS']:
  !wget -c "https://modelscope.cn/api/v1/models/Shanghai_AI_Laboratory/animatediff/repo?Revision=master&FilePath=mm_sd_v14.ckpt" -P ./custom_nodes/ComfyUI-AnimateDiff-Evolved/models/
  !wget -c "https://modelscope.cn/api/v1/models/Shanghai_AI_Laboratory/animatediff/repo?Revision=master&FilePath=mm_sd_v15.ckpt" -P ./custom_nodes/ComfyUI-AnimateDiff-Evolved/models/
  !wget -c "https://modelscope.cn/api/v1/models/Shanghai_AI_Laboratory/animatediff/repo?Revision=master&FilePath=mm_sd_v15_v2.ckpt" -P ./custom_nodes/ComfyUI-AnimateDiff-Evolved/models/
AD_MOTION_LORAS = True  #@param {type:"boolean"}
OPTIONS['AD_MOTION_LORAS'] = AD_MOTION_LORAS
if OPTIONS['AD_MOTION_LORAS']:
  !wget -c "https://modelscope.cn/api/v1/models/Shanghai_AI_Laboratory/animatediff/repo?Revision=master&FilePath=v2_lora_PanLeft.ckpt" -P ./custom_nodes/ComfyUI-AnimateDiff-Evolved/motion_lora/
  !wget -c "https://modelscope.cn/api/v1/models/Shanghai_AI_Laboratory/animatediff/repo?Revision=master&FilePath=v2_lora_PanRight.ckpt" -P ./custom_nodes/ComfyUI-AnimateDiff-Evolved/motion_lora/
  !wget -c "https://modelscope.cn/api/v1/models/Shanghai_AI_Laboratory/animatediff/repo?Revision=master&FilePath=v2_lora_RollingAnticlockwise.ckpt" -P ./custom_nodes/ComfyUI-AnimateDiff-Evolved/motion_lora/
  !wget -c "https://modelscope.cn/api/v1/models/Shanghai_AI_Laboratory/animatediff/repo?Revision=master&FilePath=v2_lora_RollingClockwise.ckpt" -P ./custom_nodes/ComfyUI-AnimateDiff-Evolved/motion_lora/
  !wget -c "https://modelscope.cn/api/v1/models/Shanghai_AI_Laboratory/animatediff/repo?Revision=master&FilePath=v2_lora_TiltDown.ckpt" -P ./custom_nodes/ComfyUI-AnimateDiff-Evolved/motion_lora/
  !wget -c "https://modelscope.cn/api/v1/models/Shanghai_AI_Laboratory/animatediff/repo?Revision=master&FilePath=v2_lora_TiltUp.ckpt" -P ./custom_nodes/ComfyUI-AnimateDiff-Evolved/motion_lora/
  !wget -c "https://modelscope.cn/api/v1/models/Shanghai_AI_Laboratory/animatediff/repo?Revision=master&FilePath=v2_lora_ZoomIn.ckpt" -P ./custom_nodes/ComfyUI-AnimateDiff-Evolved/motion_lora/
  !wget -c "https://modelscope.cn/api/v1/models/Shanghai_AI_Laboratory/animatediff/repo?Revision=master&FilePath=v2_lora_ZoomOut.ckpt" -P ./custom_nodes/ComfyUI-AnimateDiff-Evolved/motion_lora/


Run ComfyUI through cloudflared in the same way as above:

!wget "https://modelscope.oss-cn-beijing.aliyuncs.com/resource/cloudflared-linux-amd64.deb"
!dpkg -i cloudflared-linux-amd64.deb
%cd /mnt/workspace/ComfyUI
import subprocess
import threading
import time
import socket
import urllib.request
def iframe_thread(port):
  while True:
      time.sleep(0.5)
      sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
      result = sock.connect_ex(('127.0.0.1', port))
      if result == 0:
        break
      sock.close()
  print("\nComfyUI finished loading, trying to launch cloudflared (if it gets stuck here cloudflared is having issues)\n")
  p = subprocess.Popen(["cloudflared", "tunnel", "--url", "http://127.0.0.1:{}".format(port)], stdout=subprocess.PIPE, stderr=subprocess.PIPE)
  for line in p.stderr:
    l = line.decode()
    if "trycloudflare.com " in l:
      print("This is the URL to access ComfyUI:", l[l.find("http"):], end='')
    #print(l, end='')
threading.Thread(target=iframe_thread, daemon=True, args=(8188,)).start()
!python main.py --dont-print-server



Open the page and you will see the ComfyUI Manager module:

image.png


Click Manager, then install the custom nodes you need (Install Custom Nodes) or the ones a workflow is missing (Install Missing Custom Nodes), for example AnimateDiff Evolved:

image.png


Import the following workflow and it is ready to use:

image.png


Links to the hands-on notebooks:

ComfyUI Best Practice


ComfyUI + Manager Best Practice


You are welcome to try ComfyUI on ModelScope and share more fun models and workflows!


Click below to jump right in:

Overview · ModelScope Community (modelscope.cn)
