add `--lowram` parameter
load models into VRAM instead of RAM (for machines with more VRAM than RAM, such as the free Google Colab tier)