hanamizuki / ComfyUI · Commit 799f510d

Authored 2 years ago by comfyanonymous

Add some links to notebook for the t2i styles model.

Parent: 8515d963
1 changed file: notebooks/comfyui_colab.ipynb (+5 additions, −0 deletions)
@@ -89,6 +89,11 @@
 "#!wget -c https://huggingface.co/TencentARC/T2I-Adapter/resolve/main/models/t2iadapter_color_sd14v1.pth -P ./models/t2i_adapter/\n",
 "#!wget -c https://huggingface.co/TencentARC/T2I-Adapter/resolve/main/models/t2iadapter_canny_sd14v1.pth -P ./models/t2i_adapter/\n",
 "\n",
+"# T2I Styles Model\n",
+"#!wget -c https://huggingface.co/TencentARC/T2I-Adapter/resolve/main/models/t2iadapter_style_sd14v1.pth -P ./models/style_models/\n",
+"\n",
+"# CLIPVision model (needed for styles model)\n",
+"#!wget -c https://huggingface.co/openai/clip-vit-large-patch14/resolve/main/pytorch_model.bin -O ./models/clip_vision/clip_vit14.bin\n",
 "\n",
 "\n",
 "# ControlNet\n",
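The two downloads this commit adds use different wget flags: `-P` drops the file into a directory under its remote name, while `-O` writes it to an explicit path. A minimal sketch of that naming rule in Python (the helper name is mine, not from the notebook):

```python
from urllib.parse import urlparse
from pathlib import PurePosixPath

def wget_destination(url, directory=None, output=None):
    """Mirror wget's naming: -P <dir> keeps the remote filename,
    -O <file> writes to the exact path given."""
    if output is not None:
        return PurePosixPath(output)
    name = PurePosixPath(urlparse(url).path).name
    return PurePosixPath(directory) / name

# The two (commented-out) downloads added by this commit:
style = wget_destination(
    "https://huggingface.co/TencentARC/T2I-Adapter/resolve/main/models/t2iadapter_style_sd14v1.pth",
    directory="./models/style_models/")
clip = wget_destination(
    "https://huggingface.co/openai/clip-vit-large-patch14/resolve/main/pytorch_model.bin",
    output="./models/clip_vision/clip_vit14.bin")
print(style)  # models/style_models/t2iadapter_style_sd14v1.pth
print(clip)   # models/clip_vision/clip_vit14.bin
```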
%% Cell type:markdown id: tags:

Git clone the repo and install the requirements (ignore the pip errors about protobuf).
%% Cell type:code id: tags:

```
!git clone https://github.com/comfyanonymous/ComfyUI
%cd ComfyUI
!pip install xformers -r requirements.txt
!sed -i 's/v1-inference.yaml/v1-inference_fp16.yaml/g' webshit/index.html
```
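The `sed` one-liner above rewrites every `v1-inference.yaml` reference in the UI page to the fp16 variant (`webshit/index.html` is the path this revision of the repo uses for its web UI). A rough Python equivalent of that substitution, as a sketch, in case you'd rather not shell out:

```python
from pathlib import Path

def point_ui_at_fp16_config(index_html):
    """Replace every v1-inference.yaml reference with the fp16 variant,
    like the notebook's `sed -i` one-liner."""
    page = Path(index_html)
    page.write_text(page.read_text().replace("v1-inference.yaml",
                                             "v1-inference_fp16.yaml"))

# point_ui_at_fp16_config("webshit/index.html")
```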
%% Cell type:markdown id: tags:

Download some models/checkpoints/VAEs or custom ComfyUI nodes (uncomment the commands for the ones you want).

%% Cell type:code id: tags:

```
# Checkpoints

# SD1.5
!wget -c https://huggingface.co/runwayml/stable-diffusion-v1-5/resolve/main/v1-5-pruned-emaonly.ckpt -P ./models/checkpoints/

# SD2
#!wget -c https://huggingface.co/stabilityai/stable-diffusion-2-1-base/resolve/main/v2-1_512-ema-pruned.safetensors -P ./models/checkpoints/
#!wget -c https://huggingface.co/stabilityai/stable-diffusion-2-1/resolve/main/v2-1_768-ema-pruned.safetensors -P ./models/checkpoints/

# Some SD1.5 anime style
#!wget -c https://huggingface.co/WarriorMama777/OrangeMixs/resolve/main/Models/AbyssOrangeMix2/AbyssOrangeMix2_hard.safetensors -P ./models/checkpoints/
#!wget -c https://huggingface.co/WarriorMama777/OrangeMixs/resolve/main/Models/AbyssOrangeMix3/AOM3A1.safetensors -P ./models/checkpoints/
#!wget -c https://huggingface.co/WarriorMama777/OrangeMixs/resolve/main/Models/AbyssOrangeMix3/AOM3A3.safetensors -P ./models/checkpoints/
#!wget -c https://huggingface.co/Linaqruf/anything-v3.0/resolve/main/anything-v3-fp16-pruned.safetensors -P ./models/checkpoints/

# Waifu Diffusion 1.5 (anime style SD2.x 768-v)
#!wget -c https://huggingface.co/waifu-diffusion/wd-1-5-beta2/resolve/main/checkpoints/wd-1-5-beta2-fp16.safetensors -P ./models/checkpoints/

# VAE
!wget -c https://huggingface.co/stabilityai/sd-vae-ft-mse-original/resolve/main/vae-ft-mse-840000-ema-pruned.safetensors -P ./models/vae/
#!wget -c https://huggingface.co/WarriorMama777/OrangeMixs/resolve/main/VAEs/orangemix.vae.pt -P ./models/vae/

# Loras
#!wget -c --content-disposition https://civitai.com/api/download/models/10350 -P ./models/loras/ #theovercomer8sContrastFix SD2.x 768-v
#!wget -c --content-disposition https://civitai.com/api/download/models/10638 -P ./models/loras/ #theovercomer8sContrastFix SD1.x

# T2I-Adapter
#!wget -c https://huggingface.co/TencentARC/T2I-Adapter/resolve/main/models/t2iadapter_depth_sd14v1.pth -P ./models/t2i_adapter/
#!wget -c https://huggingface.co/TencentARC/T2I-Adapter/resolve/main/models/t2iadapter_seg_sd14v1.pth -P ./models/t2i_adapter/
#!wget -c https://huggingface.co/TencentARC/T2I-Adapter/resolve/main/models/t2iadapter_sketch_sd14v1.pth -P ./models/t2i_adapter/
#!wget -c https://huggingface.co/TencentARC/T2I-Adapter/resolve/main/models/t2iadapter_keypose_sd14v1.pth -P ./models/t2i_adapter/
#!wget -c https://huggingface.co/TencentARC/T2I-Adapter/resolve/main/models/t2iadapter_openpose_sd14v1.pth -P ./models/t2i_adapter/
#!wget -c https://huggingface.co/TencentARC/T2I-Adapter/resolve/main/models/t2iadapter_color_sd14v1.pth -P ./models/t2i_adapter/
#!wget -c https://huggingface.co/TencentARC/T2I-Adapter/resolve/main/models/t2iadapter_canny_sd14v1.pth -P ./models/t2i_adapter/

# T2I Styles Model
#!wget -c https://huggingface.co/TencentARC/T2I-Adapter/resolve/main/models/t2iadapter_style_sd14v1.pth -P ./models/style_models/

# CLIPVision model (needed for styles model)
#!wget -c https://huggingface.co/openai/clip-vit-large-patch14/resolve/main/pytorch_model.bin -O ./models/clip_vision/clip_vit14.bin

# ControlNet
#!wget -c https://huggingface.co/webui/ControlNet-modules-safetensors/resolve/main/control_depth-fp16.safetensors -P ./models/controlnet/
#!wget -c https://huggingface.co/webui/ControlNet-modules-safetensors/resolve/main/control_scribble-fp16.safetensors -P ./models/controlnet/
#!wget -c https://huggingface.co/webui/ControlNet-modules-safetensors/resolve/main/control_openpose-fp16.safetensors -P ./models/controlnet/

# Controlnet Preprocessor nodes by Fannovel16
#!cd custom_nodes && git clone https://github.com/Fannovel16/comfy_controlnet_preprocessors; cd comfy_controlnet_preprocessors && python install.py
```
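Since most of the download lines above are commented out, it's easy to start the server without the files a workflow needs. A small inventory sketch (my own, not part of the notebook) that lists what actually landed in the standard ComfyUI model folders:

```python
from pathlib import Path

def inventory(root="."):
    """List downloaded model files under the standard ComfyUI folders."""
    folders = ["models/checkpoints", "models/vae", "models/loras",
               "models/t2i_adapter", "models/style_models",
               "models/clip_vision", "models/controlnet"]
    return {f: sorted(p.name for p in Path(root, f).glob("*") if p.is_file())
            for f in folders if Path(root, f).is_dir()}

for folder, files in inventory().items():
    print(folder, "->", files or "(empty)")
```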
%% Cell type:markdown id: tags:

### Run ComfyUI with localtunnel (Recommended Way)

Use the **fp16** model configs for more speed.

%% Cell type:code id: tags:
```
!npm install -g localtunnel

import subprocess
import threading
import time
import socket

def iframe_thread(port):
  while True:
    time.sleep(0.5)
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    result = sock.connect_ex(('127.0.0.1', port))
    sock.close()
    if result == 0:
      break
  print("\nComfyUI finished loading, trying to launch localtunnel (if it gets stuck here localtunnel is having issues)")
  p = subprocess.Popen(["lt", "--port", "{}".format(port)], stdout=subprocess.PIPE)
  for line in p.stdout:
    print(line.decode(), end='')

threading.Thread(target=iframe_thread, daemon=True, args=(8188,)).start()

!python main.py --dont-print-server
```
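The cell above polls until something accepts TCP connections on the port before launching the tunnel. The same probe, pulled out as a standalone helper with a timeout (the function name and defaults are my own):

```python
import socket
import time

def wait_for_port(port, host="127.0.0.1", timeout=60.0, interval=0.5):
    """Return True once a TCP server accepts connections on (host, port),
    False if the timeout expires first."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
            if sock.connect_ex((host, port)) == 0:
                return True
        time.sleep(interval)
    return False
```

With this, launching the tunnel becomes a plain `if wait_for_port(8188): ...` instead of an open-coded loop.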
%% Cell type:markdown id: tags:

### Run ComfyUI with colab iframe (use only in case the previous way with localtunnel doesn't work)

Use the **fp16** model configs for more speed.

You should see the UI appear in an iframe. If you get a 403 error, your Firefox settings or an extension are interfering.

If you want to open it in another window, use the link.

Note that some UI features like live image previews won't work because the colab iframe blocks websockets.

%% Cell type:code id: tags:
```
import threading
import time
import socket

def iframe_thread(port):
  while True:
    time.sleep(0.5)
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    result = sock.connect_ex(('127.0.0.1', port))
    sock.close()
    if result == 0:
      break
  from google.colab import output
  output.serve_kernel_port_as_iframe(port, height=1024)
  print("to open it in a window you can open this link here:")
  output.serve_kernel_port_as_window(port)

threading.Thread(target=iframe_thread, daemon=True, args=(8188,)).start()

!python main.py --dont-print-server
```