- August 14, 2023

By AUTOMATIC1111
By AUTOMATIC1111
By AUTOMATIC1111
By AUTOMATIC1111
By AUTOMATIC1111
By AUTOMATIC1111
By AUTOMATIC1111
By Kohaku-Blueleaf
By AUTOMATIC1111
By AUTOMATIC1111
By Kohaku-Blueleaf
By AUTOMATIC1111
By AUTOMATIC1111
By Ikko Eltociear Ashimine: existance -> existence
- August 13, 2023

By AUTOMATIC1111: add a new callback for scripts to be used before processing
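A minimal sketch of how a script might use such a hook, assuming it is exposed as a `before_process(p, *args)` method on script classes alongside the existing `process(p, *args)`; the name and signature here are assumptions, so check modules/scripts.py for the actual API:

```python
# Illustrative sketch only. Assumes the new hook is a `before_process(p, *args)`
# method on script classes, mirroring the existing `process(p, *args)` hook;
# verify the real name and signature in modules/scripts.py.
import modules.scripts as scripts


class ExampleBeforeProcessScript(scripts.Script):
    def title(self):
        return "Example: before-processing hook"

    def show(self, is_img2img):
        return scripts.AlwaysVisible

    def before_process(self, p, *args):
        # Runs before processing of `p` begins, so changes made here
        # (e.g. normalizing the prompt) are visible to every later stage.
        p.prompt = p.prompt.strip()

    def process(self, p, *args):
        # Existing hook; runs after before_process, once processing has started.
        pass
```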
By brkirch
By brkirch
By brkirch
By brkirch
By brkirch: For MPS, using a tensor created with `torch.empty()` can cause `torch.baddbmm()` to include NaNs in the tensor it returns, even though `beta=0`. With a tensor of shape [1, 1, 1], the performance difference between `torch.empty()` and `torch.zeros()` is negligible anyway, so it is better to simply use `torch.zeros()` here and avoid creating issues unnecessarily.
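A small illustration of the fix described above (the tensor shapes are made up for the example): zero-initializing the `input` argument of `torch.baddbmm()` avoids the uninitialized memory from `torch.empty()`, which on MPS can contain NaNs that leak into the result even with `beta=0`.

```python
import torch

q = torch.randn(2, 4, 8)  # (batch, rows, inner)
k = torch.randn(2, 8, 4)  # (batch, inner, cols)

# Potentially problematic on MPS: torch.empty() returns uninitialized memory,
# which may contain NaNs that torch.baddbmm() can propagate despite beta=0.
bias_bad = torch.empty(1, 1, 1, dtype=q.dtype, device=q.device)

# Safe: explicitly zero-initialized; at shape [1, 1, 1] the cost is negligible.
bias_ok = torch.zeros(1, 1, 1, dtype=q.dtype, device=q.device)

attn_scores = torch.baddbmm(bias_ok, q, k, beta=0, alpha=1.0)  # shape (2, 4, 4)
```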
By brkirch
By brkirch: Even if this makes the chunks much smaller, performance is not significantly affected. It will usually reduce memory usage, and should also help with the poor performance seen when free memory is low.
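A hedged sketch of the general idea, not the repository's actual code: derive the chunk size from the memory that is currently free, so that low free memory produces smaller chunks rather than an allocation failure. The CUDA query is used here only because it has a simple API; the commit above targets MPS.

```python
import torch


def pick_chunk_size(total_rows: int, bytes_per_row: int, fraction: float = 0.5) -> int:
    """Choose how many rows to process per chunk from the currently free memory."""
    if torch.cuda.is_available():
        free_bytes, _total_bytes = torch.cuda.mem_get_info()
    else:
        free_bytes = 2 * 1024 ** 3  # assumed 2 GiB fallback when no query API exists

    budget = int(free_bytes * fraction)
    # Even if this yields much smaller chunks, per-chunk overhead stays small,
    # which is why performance is not significantly affected.
    return max(1, min(total_rows, budget // max(1, bytes_per_row)))
```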
By AUTOMATIC1111
By catboxanon
By catboxanon
By catboxanon
By catboxanon
By AUTOMATIC1111: Lora: output warnings in the UI rather than failing for loras that do not fit the model; switch to logging for error output in the console
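A hedged sketch of the approach (function and variable names are illustrative, not webui's actual ones): collect per-lora errors so the UI can show them as warnings, and send the details to a `logging` logger instead of letting the exception abort generation.

```python
import logging

logger = logging.getLogger(__name__)


def apply_lora_weight(module, lora_name, delta, errors):
    """Add a lora delta to a module's weight; record failures instead of raising."""
    try:
        if module.weight.shape != delta.shape:
            raise ValueError(
                f"shape mismatch: model {tuple(module.weight.shape)} "
                f"vs lora {tuple(delta.shape)}"
            )
        module.weight.data += delta
    except Exception as exc:
        # Count errors per lora so the UI can surface a warning afterwards,
        # and keep the detail in the console log rather than failing outright.
        errors[lora_name] = errors.get(lora_name, 0) + 1
        logger.warning("Failed to apply lora %s: %s", lora_name, exc)
```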
By catboxanon
By catboxanon
By catboxanon: More significant VRAM reduction.
By catboxanon: Significantly reduces VRAM. This makes encoding more in line with how decoding currently functions.
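A hedged sketch of the idea behind the VRAM reduction, assuming a diffusers-style `AutoencoderKL` interface rather than webui's own VAE wrapper: encode images one at a time, the way decoding already processes its output, so peak memory scales with a single image instead of the whole batch.

```python
import torch


def encode_one_at_a_time(vae, images: torch.Tensor) -> torch.Tensor:
    """images: (batch, 3, H, W) in [-1, 1]; returns latents of shape (batch, C, h, w)."""
    latents = []
    with torch.no_grad():
        for image in images:
            # Re-add the batch dimension, encode a single image, then drop it again.
            posterior = vae.encode(image.unsqueeze(0)).latent_dist
            latents.append(posterior.sample().squeeze(0))
    return torch.stack(latents)
```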
By catboxanon
By AUTOMATIC1111
By AUTOMATIC1111: write out the correct model name in the infotext, rather than the refiner model
By AUTOMATIC1111
- August 12, 2023

By AUTOMATIC1111
By AUTOMATIC1111
By AUTOMATIC1111: add a way for scripts to register a callback for before/after just a single component's creation
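A hedged sketch of how a script might use such per-component hooks, assuming they are exposed as `Script.on_before_component(callback, elem_id=...)` and `Script.on_after_component(callback, elem_id=...)`; the names, registration point, and callback signature are assumptions, so check modules/scripts.py for the actual API.

```python
import modules.scripts as scripts


class ExampleComponentScript(scripts.Script):
    def title(self):
        return "Example: single-component callback"

    def show(self, is_img2img):
        return scripts.AlwaysVisible

    def ui(self, is_img2img):
        # Register interest in just one Gradio component instead of filtering
        # inside a callback that fires for every component that gets created.
        self.on_after_component(self._after_prompt, elem_id="txt2img_prompt")
        return []

    def _after_prompt(self, *args, **kwargs):
        # Assumed signature: called once, right after the matching component
        # is created; the arguments carry the component and its creation kwargs.
        print("txt2img prompt component was created")
```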