IC-Light | Video Relighting | AnimateDiff

In this ComfyUI IC-Light workflow, you can easily relight your “Human” character video using a light map. Based on your prompts and the elements in your light map, such as shapes and neon lights, the tool regenerates the video with new lighting.

Workflow: https://drive.google.com/drive/folders/16Aq1mqZKP-h8vApaN4FX5at3acidqPUv

YouTube Tutorial Link: https://youtu.be/q__YTKxtQAE


HOW TO USE:

Upload Source Video.

Upload Light Map Video or Single Light Map Image.

Enter the Frames Load Cap and other settings; the Light Map video should use the same settings.

Enter prompts that describe your new lighting, such as sunlight or neon lights.

Select your model. (A realistic model is preferred.)

Change the Light Map Composing and other settings if needed.

Hit Render.

Outputs will be saved in ComfyUI > Outputs.

------------------------------------------------------------------------------------------------------------------------------------

Inputs_1 - Settings

Here we have 5 Settings:

Sampler Steps: Determines the total number of steps the KSampler takes to render an image. It should not be changed. [Default value: 26]

Detail Enhancer: Increases the fine details in the final render. [Use a value between 0.1 and 1]

Seed: Controls the generation seed for every KSampler.

Sampler CFG: Controls the CFG value of the KSamplers.

Refiner Upscale: Works like the Highres Fix value. [Use between 1.1 and 1.6 for best results]

Prompts

Positive Prompt: Enter prompts that best describe your image with the new lighting.

Negative Prompt: It is preconfigured to give the best results. Feel free to edit it.

CLIP Text Encode nodes: These encode the text to maximize quality. Leave the setting at “full”.

Models and Loras

Checkpoint: Choose any realistic SD 1.5 model for accurate results, or any SD 1.5 model you like for stylistic results.

LoRAs: [Optional] Choose any LoRAs from the given list if you wish. Do not use them at full strength; use around 0.5–0.7 for the best effect.

Input Source Video

Upload Source Video: Click and upload the human character video whose lighting you want to change.

- It should be under 100 MB; ComfyUI will fail to upload larger files. (A pre-flight check sketch follows this list.)

- It should be no longer than 15–20 seconds; longer videos may fail to render.

- It should be 720p or lower.

- Use the Skip Frames nodes if you need to skip some starting frames. [The Light Map video will also skip the same number of frames.]
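
If you want to verify a clip before uploading, here is a minimal pre-flight check sketch, assuming OpenCV (opencv-python) is installed; the file name and exact thresholds are illustrative, not part of the workflow:

    import os
    import cv2

    def check_source_video(path, max_mb=100, max_seconds=20, max_height=720):
        # Size check: ComfyUI may fail to upload files over ~100 MB
        size_mb = os.path.getsize(path) / (1024 * 1024)
        cap = cv2.VideoCapture(path)
        fps = cap.get(cv2.CAP_PROP_FPS) or 30.0  # fall back if FPS is unreadable
        frames = cap.get(cv2.CAP_PROP_FRAME_COUNT)
        width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
        height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
        cap.release()
        duration = frames / fps
        print(f"{size_mb:.1f} MB, {duration:.1f} s, {width}x{height}")
        assert size_mb <= max_mb, "over 100 MB - ComfyUI may fail to upload it"
        assert duration <= max_seconds, "longer than 20 s - may fail to render"
        # "720p or lower" read here as: the shorter side is at most 720 px
        assert min(width, height) <= max_height, "above 720p - downscale first"

    check_source_video("source.mp4")  # "source.mp4" is a placeholder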


Fit Image Size Limiter: Caps the rendering resolution. Whether the video is landscape or portrait, the maximum dimension will always be less than or equal to the set value (see the sketch after this list).

- Use a value between 800 and 1200 for best results. [This will impact VRAM usage.]

IMPORTANT: Use a Frames Load Cap of 10 to test first.

- Use about 200–300 frames at a 1000–1200 fit size if you have 24 GB of VRAM.

- Use 0 to render all frames. [Not recommended for longer videos.]
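
A minimal sketch of the fit-size math as I read it; the actual comfyui-fitsize node may round differently (snapping to multiples of 8 for SD 1.5 latents is my assumption here):

    def fit_size(width, height, cap=1000, multiple=8):
        # Scale the longer side down to the cap, preserving aspect ratio;
        # never upscale.
        scale = min(1.0, cap / max(width, height))
        w, h = int(width * scale), int(height * scale)
        # Snap down to the nearest multiple of 8 for the SD latent space
        return w - w % multiple, h - h % multiple

    print(fit_size(1920, 1080))  # landscape -> (1000, 560)
    print(fit_size(720, 1280))   # portrait  -> (560, 1000)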

Mask and Depth Settings

Mask: It uses Robust Video Matting; the default settings are fine.

Depth ControlNet: It uses the latest DepthAnything v2 models.

- Strength and End Percent are set at 75% for optimal results.

- Use CoAdapter Depth (https://huggingface.co/TencentARC/T2I-Adapter/blob/main/models/coadapter-depth-sd15v1.pth) for best results.

Light Map

Upload Light Map: Click and upload the light map video you want.

- It will auto-scale to the source video's dimensions.

- Make sure it is equal to or longer than the source video in length, otherwise it will give an error. (A quick frame-count check follows this list.)
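
A tiny sketch for checking frame counts up front; the file names are placeholders and OpenCV is assumed:

    import cv2

    def frame_count(path):
        # Count frames via the container metadata
        cap = cv2.VideoCapture(path)
        n = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
        cap.release()
        return n

    src, lm = frame_count("source.mp4"), frame_count("lightmap.mp4")
    assert lm >= src, f"light map has {lm} frames but source has {src}"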

Light Map ControlNet: The light map is also used as a light ControlNet, using this model: https://civitai.com/models/80536?modelVersionId=85428

CN Strength and End Percent: Use low values here; higher values may cause overexposure or sharp light transitions.

Single Light Map

To use a single image as the light map, unmute these nodes and connect the reroute node to the “Pick one Input” node.

AnimateDiff

Load AnimateDiff Model: You can use any motion model for different effects.

Other AnimateDiff nodes: You need some knowledge of AnimateDiff to change the other settings. [You can find documentation here: https://github.com/Kosinkadink/ComfyUI-AnimateDiff-Evolved/tree/main]

Settings SMZ: This node improves the quality of the model pipeline; all settings are predefined to work well.

Composing of Light Map and IC Conditioning

The adjustment nodes at the top (in grey) control the IC Light conditioning, reducing its contrast and controlling brightness.

Generate New Background: When disabled, it will use the original image inputs and try to map details similar to the source video's background, following the background prompts (if present) in the positive prompt box, e.g.:

[1girl, sunlight, sunset, white shirt, black short jeans, interior, room]

When Generate New Background is enabled, it will generate a new background, taking the depth into account, e.g.:

[1girl, sunlight, sunset, nature in the background, sky]


Also, the Depth ControlNet's Strength and End Percent were decreased to 45% to leave an open area in the background.

Light Map on Top: When True, the light map is layered on top of the source video and is more dominant. When False, the source is on top, more dominant, and brighter.

Subject Affecting Area: Two blending modes work best (see the sketch after this list):

- Multiply: Darkens the shadow areas according to the light map on top or bottom.

- Screen: Brightens the shadow areas according to the light map on top or bottom.

- Blend Factor controls the intensity.
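
For reference, the standard math behind these two modes, with a blend factor mixed in; a sketch on images normalized to [0, 1] (the node's exact implementation may differ):

    import numpy as np

    def multiply_blend(base, light, factor=1.0):
        # Multiply darkens: any light-map pixel below 1.0 pulls the base down
        out = base * light
        return base + factor * (out - base)

    def screen_blend(base, light, factor=1.0):
        # Screen brightens: inverted multiply of the inverted images
        out = 1.0 - (1.0 - base) * (1.0 - light)
        return base + factor * (out - base)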

Overall Adjustments: Controls the brightness, contrast, gamma, and tint of the final processed light map from above.

Image Remap: Use this node to control the global brightness and darkness of the whole image (a sketch of the remap math follows this list).



- A higher Min value will brighten the scene.

- Lower Max values will make the scene darker and can turn the brighter areas into morphing objects, like the QR Code Monster ControlNet does.

- Mostly, set the Min value to 0.1 or 0.2 to light up a scene a little.

- A Min value of 0 will produce pitch-black shadows for the black pixels of the light map.
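
How I read the remap: a linear map from [0, 1] to [Min, Max] (the actual node may clamp or curve differently):

    import numpy as np

    def image_remap(img, min_v=0.0, max_v=1.0):
        # min_v > 0 lifts pure black, brightening shadows;
        # max_v < 1 pulls down highlights, darkening the scene.
        return min_v + img * (max_v - min_v)

    ramp = np.linspace(0.0, 1.0, 5)        # sample gray ramp
    print(image_remap(ramp, min_v=0.2))    # [0.2, 0.4, 0.6, 0.8, 1.0]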

KSamplers (Raw and Refine)

IC Raw KSampler: Unlike other samplers, it starts at step 8 instead of zero, due to the IC Light conditioning (the frames are denoised from the 8th step).

For example, with an End Step of 20, a Start Step of:

- 0 will have no light map effect.

- 5 will have a 50 percent effect.

- 10 will have a 100 percent effect.

- So, about 3–8 is a good range to test from (see the sketch below).

When Generate New Background is TRUE, you can go lower than 5 for better results.
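
Reading off the examples above, the effect appears to scale linearly with the start step up to half the end step. A rule-of-thumb sketch inferred from those numbers, not an exact formula from the author:

    def lightmap_effect(start_step, end_step=20):
        # 0 -> 0%, 5 -> 50%, 10 -> 100% for an end step of 20
        return min(start_step / (end_step / 2), 1.0)

    for s in (0, 3, 5, 8, 10):
        print(s, f"{lightmap_effect(s):.0%}")  # 0%, 30%, 50%, 80%, 100%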

KSampler Refine: Works like an img2img refiner after the IC Raw sampler.

For an End Step of 25, a Start Step of:

- 10 and below will work like the raw sampler and give you morphing objects.

- 15 will work like a proper refiner.

- 20 will not work properly.

- Above 20 will produce messed-up results.

- So, the default of 16 is good.

Face Fix

Upscale for Face Fix: If the faces are not satisfactory after the face fix, you can upscale by about 1.2 to 1.6 for better faces.

Positive Prompt: Here you can write the prompts for the face. It is set to “smiling” by default; you can change it.

Face Denoise: Use around 0.35–0.45. At higher values, faces may render incorrectly and sliding-face issues may arise.

Saving

Video Combine: This exports all the frames in a video format. If this node fails while combining, there are too many frames and it is running out of RAM; reduce the Frames Load Cap if that happens.

- It will save into ComfyUI > Outputs by default.

Change Output Path: Unmute this node if you want to save the output to a custom location.

INSTALLATION:

Custom Nodes:

1. https://github.com/daxcay/ComfyUI-JDCN.git

2. https://github.com/bronkula/comfyui-fitsize.git

3. https://github.com/kijai/ComfyUI-IC-Light.git

4. https://github.com/kijai/ComfyUI-KJNodes.git

5. https://github.com/mcmonkeyprojects/sd-dynamic-thresholding.git

6. https://github.com/cubiq/ComfyUI_essentials.git

7. https://github.com/giriss/comfy-image-saver.git

8. https://github.com/M1kep/ComfyLiterals.git

9. https://github.com/theUpsider/ComfyUI-Logic.git

10. https://github.com/Kosinkadink/ComfyUI-VideoHelperSuite.git

11. https://github.com/Kosinkadink/ComfyUI-AnimateDiff-Evolved.git

12. https://github.com/Kosinkadink/ComfyUI-Advanced-ControlNet.git

13. https://github.com/shiimizu/ComfyUI_smZNodes.git

14. https://github.com/Suzie1/ComfyUI_Comfyroll_CustomNodes.git

15. https://github.com/Nourepide/ComfyUI-Allor.git

16. https://github.com/WASasquatch/was-node-suite-comfyui.git

17. https://github.com/Fannovel16/ComfyUI-Video-Matting.git

18. https://github.com/Fannovel16/comfyui_controlnet_aux.git

19. https://github.com/comfyanonymous/ComfyUI_experiments.git

20. https://github.com/ltdrdata/ComfyUI-Impact-Pack.git
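
If you prefer scripting over cloning one by one (or using ComfyUI Manager), here is a minimal sketch; it assumes git is on PATH and that you run it from the ComfyUI root directory:

    import subprocess
    from pathlib import Path

    # The custom node repos listed above
    REPOS = [
        "https://github.com/daxcay/ComfyUI-JDCN.git",
        "https://github.com/bronkula/comfyui-fitsize.git",
        "https://github.com/kijai/ComfyUI-IC-Light.git",
        "https://github.com/kijai/ComfyUI-KJNodes.git",
        "https://github.com/mcmonkeyprojects/sd-dynamic-thresholding.git",
        "https://github.com/cubiq/ComfyUI_essentials.git",
        "https://github.com/giriss/comfy-image-saver.git",
        "https://github.com/M1kep/ComfyLiterals.git",
        "https://github.com/theUpsider/ComfyUI-Logic.git",
        "https://github.com/Kosinkadink/ComfyUI-VideoHelperSuite.git",
        "https://github.com/Kosinkadink/ComfyUI-AnimateDiff-Evolved.git",
        "https://github.com/Kosinkadink/ComfyUI-Advanced-ControlNet.git",
        "https://github.com/shiimizu/ComfyUI_smZNodes.git",
        "https://github.com/Suzie1/ComfyUI_Comfyroll_CustomNodes.git",
        "https://github.com/Nourepide/ComfyUI-Allor.git",
        "https://github.com/WASasquatch/was-node-suite-comfyui.git",
        "https://github.com/Fannovel16/ComfyUI-Video-Matting.git",
        "https://github.com/Fannovel16/comfyui_controlnet_aux.git",
        "https://github.com/comfyanonymous/ComfyUI_experiments.git",
        "https://github.com/ltdrdata/ComfyUI-Impact-Pack.git",
    ]

    dest = Path("custom_nodes")  # assumes the current directory is the ComfyUI root
    dest.mkdir(exist_ok=True)
    for url in REPOS:
        name = url.rstrip("/").removesuffix(".git").rsplit("/", 1)[-1]
        target = dest / name
        if not target.exists():  # skip nodes that are already installed
            subprocess.run(["git", "clone", url, str(target)], check=True)
    # Afterwards, install each node's requirements.txt and restart ComfyUI.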

ControlNet Models:

https://civitai.com/models/80536?modelVersionId=85428

https://huggingface.co/TencentARC/T2I-Adapter/blob/main/models/coadapter-depth-sd15v1.pth

Target Location: ComfyUI\models\controlnet

IC Model:

https://huggingface.co/lllyasviel/ic-light/tree/main

Put in ComfyUI > models > unet

AnimateDiff Motion Module:

https://civitai.com/models/139237?modelVersionId=154097

Location: ComfyUI\models\animatediff_models

Other Models

They should auto-download, but just in case:

SAM: https://dl.fbaipublicfiles.com/segment_anything/sam_vit_b_01ec64.pth

FaceDetect: https://huggingface.co/dustysys/ddetailer/resolve/main/mmdet/bbox/mmdet_anime-face_yolov3.pth

SAM Location: ComfyUI\models\sams

FaceDetect Location: ComfyUI\models\mmdets\bbox

About This Workflow’s Author

Jerry Davos

YouTube Channel: https://www.youtube.com/@jerrydavos

Patreon: https://www.patreon.com/jerrydavos

Contacts

Email: davos.jerry@gmail.com

Discord: https://discord.gg/z9rgJyfPWJ