This project is mirrored from https://github.com/comfyanonymous/ComfyUI.git.
Pull mirror updated on .
- March 07, 2023
- March 06, 2023
  - It needs the CLIPVision model, so I added CLIPVisionLoader and CLIPVisionEncode. Put the CLIP vision model in models/clip_vision and the t2i style model in models/style_models. Use StyleModelLoader to load it, StyleModelApply to apply it, and ConditioningAppend to append the conditioning it outputs to a positive conditioning. (comfyanonymous)
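The directory layout that commit describes can be sketched as below. This is only an illustration: the filenames are hypothetical placeholders, and a real setup would copy actual downloaded model files into these folders.

```shell
# Create the model directories the new loader nodes read from
# (paths relative to the ComfyUI checkout).
mkdir -p ComfyUI/models/clip_vision ComfyUI/models/style_models

# Placeholder files standing in for real model downloads;
# the names below are illustrative, not canonical.
touch ComfyUI/models/clip_vision/clip_vision_model.safetensors
touch ComfyUI/models/style_models/t2i_style_model.safetensors
```

CLIPVisionLoader then picks up files from models/clip_vision, and StyleModelLoader from models/style_models.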
- March 05, 2023
- March 04, 2023
  - Specify the recommended way to run ComfyUI in Colab. (comfyanonymous)
  - from the CLIPLoader node. (comfyanonymous)
  - Use a simpler way to detect whether the model is v-prediction. (comfyanonymous)
- March 03, 2023
  - --use-pytorch-cross-attention to use it. (comfyanonymous)
- March 01, 2023
- February 28, 2023