
Commit 9c42a2c

Author: Tu Bui
Commit message: cleanup code
Parent: 33f6ab1

File tree: 13 files changed, +101866 -241 lines


Diff for: README.md

+18-222
@@ -1,234 +1,30 @@
-# ControlNet
+# RoSteALS

-Official implementation of [Adding Conditional Control to Text-to-Image Diffusion Models](https://arxiv.org/abs/2302.05543).
+Official implementation of [RoSteALS: Robust Steganography using Autoencoder Latent Space]().

-ControlNet is a neural network structure to control diffusion models by adding extra conditions.
+### Environment

-![img](github_page/he.png)
+We tested with pytorch 1.11, torchvision 0.12 and cuda 11.3, but other pytorch version probably works, too. To reproduce the environment, please check [dependencies](dependencies).

-It copys the weights of neural network blocks into a "locked" copy and a "trainable" copy.
+# Training
+## Data Preparation
+TODO: instructions to download and prepare the MIRFlickR dataset.

-The "trainable" one learns your condition. The "locked" one preserves your model.
+Update the data path in the config file at [models/VQ4_mir.yaml](models/VQ4_mir.yaml).

-Thanks to this, training with small dataset of image pairs will not destroy the production-ready diffusion models.
+## Train
+```
+python train.py --config models/VQ4_mir.yaml --secret_len 100 --max_image_weight_ratio 10 --batch_size 4 -o saved_models

-The "zero convolution" is 1×1 convolution with both weight and bias initialized as zeros.
+```
+where batch_size=4 is enough to fit a 24GB GPU.

-Before training, all zero convolutions output zeros, and ControlNet will not cause any distortion.
+# Inference
+TODO: upload trained model, inference demo

-No layer is trained from scratch. You are still fine-tuning. Your original model is safe.
+# Acknowledgement
+The code is inspired from [Stable Diffusion](https://github.com/CompVis/stable-diffusion) and [ControlNet](https://github.com/lllyasviel/ControlNet).

-This allows training on small-scale or even personal devices.
-
-This is also friendly to merge/replacement/offsetting of models/weights/blocks/layers.
-
-### FAQ
-
-**Q:** But wait, if the weight of a conv layer is zero, the gradient will also be zero, and the network will not learn anything. Why "zero convolution" works?
-
-**A:** This is not true. [See an explanation here](docs/faq.md).
-
-# Stable Diffusion + ControlNet
-
-By repeating the above simple structure 14 times, we can control stable diffusion in this way:
-
-![img](github_page/sd.png)
-
-Note that the way we connect layers is computational efficient. The original SD encoder does not need to store gradients (the locked original SD Encoder Block 1234 and Middle). The required GPU memory is not much larger than original SD, although many layers are added. Great!
-
-# Production-Ready Pretrained Models
-
-First create a new conda environment
-
-conda env create -f environment.yaml
-conda activate control
-
-All models and detectors can be downloaded from [our Hugging Face page](https://huggingface.co/lllyasviel/ControlNet). Make sure that SD models are put in "ControlNet/models" and detectors are put in "ControlNet/annotator/ckpts". Make sure that you download all necessary pretrained weights and detector models from that Hugging Face page, including HED edge detection model, Midas depth estimation model, Openpose, and so on.
-
-We provide 9 Gradio apps with these models.
-
-All test images can be found at the folder "test_imgs".
-
-### News
-
-2023/02/12 - Now you can play with any community model by [Transferring the ControlNet](https://github.com/lllyasviel/ControlNet/discussions/12).
-
-2023/02/11 - [Low VRAM mode](docs/low_vram.md) is added. Please use this mode if you are using 8GB GPU(s) or if you want larger batch size.
-
-## ControlNet with Canny Edge
-
-Stable Diffusion 1.5 + ControlNet (using simple Canny edge detection)
-
-python gradio_canny2image.py
-
-The Gradio app also allows you to change the Canny edge thresholds. Just try it for more details.
-
-Prompt: "bird"
-![p](github_page/p1.png)
-
-Prompt: "cute dog"
-![p](github_page/p2.png)
-
-## ControlNet with M-LSD Lines
-
-Stable Diffusion 1.5 + ControlNet (using simple M-LSD straight line detection)
-
-python gradio_hough2image.py
-
-The Gradio app also allows you to change the M-LSD thresholds. Just try it for more details.
-
-Prompt: "room"
-![p](github_page/p3.png)
-
-Prompt: "building"
-![p](github_page/p4.png)
-
-## ControlNet with HED Boundary
-
-Stable Diffusion 1.5 + ControlNet (using soft HED Boundary)
-
-python gradio_hed2image.py
-
-The soft HED Boundary will preserve many details in input images, making this app suitable for recoloring and stylizing. Just try it for more details.
-
-Prompt: "oil painting of handsome old man, masterpiece"
-![p](github_page/p5.png)
-
-Prompt: "Cyberpunk robot"
-![p](github_page/p6.png)
-
-## ControlNet with User Scribbles
-
-Stable Diffusion 1.5 + ControlNet (using Scribbles)
-
-python gradio_scribble2image.py
-
-Note that the UI is based on Gradio, and Gradio is somewhat difficult to customize. Right now you need to draw scribbles outside the UI (using your favorite drawing software, for example, MS Paint) and then import the scribble image to Gradio.
-
-Prompt: "turtle"
-![p](github_page/p7.png)
-
-Prompt: "hot air balloon"
-![p](github_page/p8.png)
-
-### Interactive Interface
-
-We actually provide an interactive interface
-
-python gradio_scribble2image_interactive.py
-
-However, because gradio is very [buggy](https://github.com/gradio-app/gradio/issues/3166) and difficult to customize, right now, user need to first set canvas width and heights and then click "Open drawing canvas" to get a drawing area. Please do not upload image to that drawing canvas. Also, the drawing area is very small; it should be bigger. But I failed to find out how to make it larger. Again, gradio is really buggy.
-
-The below dog sketch is drawn by me. Perhaps we should draw a better dog for showcase.
-
-Prompt: "dog in a room"
-![p](github_page/p20.png)
-
-## ControlNet with Fake Scribbles
-
-Stable Diffusion 1.5 + ControlNet (using fake scribbles)
-
-python gradio_fake_scribble2image.py
-
-Sometimes we are lazy, and we do not want to draw scribbles. This script use the exactly same scribble-based model but use a simple algorithm to synthesize scribbles from input images.
-
-Prompt: "bag"
-![p](github_page/p9.png)
-
-Prompt: "shose" (Note that "shose" is a typo; it should be "shoes". But it still seems to work.)
-![p](github_page/p10.png)
-
-## ControlNet with Human Pose
-
-Stable Diffusion 1.5 + ControlNet (using human pose)
-
-python gradio_pose2image.py
-
-Apparently, this model deserves a better UI to directly manipulate pose skeleton. However, again, Gradio is somewhat difficult to customize. Right now you need to input an image and then the Openpose will detect the pose for you.
-
-Prompt: "Chief in the kitchen"
-![p](github_page/p11.png)
-
-Prompt: "An astronaut on the moon"
-![p](github_page/p12.png)
-
-## ControlNet with Semantic Segmentation
-
-Stable Diffusion 1.5 + ControlNet (using semantic segmentation)
-
-python gradio_seg2image.py
-
-This model use ADE20K's segmentation protocol. Again, this model deserves a better UI to directly draw the segmentations. However, again, Gradio is somewhat difficult to customize. Right now you need to input an image and then a model called Uniformer will detect the segmentations for you. Just try it for more details.
-
-Prompt: "House"
-![p](github_page/p13.png)
-
-Prompt: "River"
-![p](github_page/p14.png)
-
-## ControlNet with Depth
-
-Stable Diffusion 1.5 + ControlNet (using depth map)
-
-python gradio_depth2image.py
-
-Great! Now SD 1.5 also have a depth control. FINALLY. So many possibilities (considering SD1.5 has much more community models than SD2).
-
-Note that different from Stability's model, the ControlNet receive the full 512×512 depth map, rather than 64×64 depth. Note that Stability's SD2 depth model use 64*64 depth maps. This means that the ControlNet will preserve more details in the depth map.
-
-This is always a strength because if users do not want to preserve more details, they can simply use another SD to post-process an i2i. But if they want to preserve more details, ControlNet becomes their only choice. Again, SD2 uses 64×64 depth, we use 512×512.
-
-Prompt: "Stormtrooper's lecture"
-![p](github_page/p15.png)
-
-## ControlNet with Normal Map
-
-Stable Diffusion 1.5 + ControlNet (using normal map)
-
-python gradio_normal2image.py
-
-This model use normal map. Rightnow in the APP, the normal is computed from the midas depth map and a user threshold (to determine how many area is background with identity normal face to viewer, tune the "Normal background threshold" in the gradio app to get a feeling).
-
-Prompt: "Cute toy"
-![p](github_page/p17.png)
-
-Prompt: "Plaster statue of Abraham Lincoln"
-![p](github_page/p18.png)
-
-Compared to depth model, this model seems to be a bit better at preserving the geometry. This is intuitive: minor details are not salient in depth maps, but are salient in normal maps. Below is the depth result with same inputs. You can see that the hairstyle of the man in the input image is modified by depth model, but preserved by the normal model.
-
-Prompt: "Plaster statue of Abraham Lincoln"
-![p](github_page/p19.png)
-
-## ControlNet with Anime Line Drawing
-
-We also trained a relatively simple ControlNet for anime line drawings. This tool may be useful for artistic creations. (Although the image details in the results is a bit modified, since it still diffuse latent images.)
-
-This model is not available right now. We need to evaluate the potential risks before releasing this model. Nevertheless, you may be interested in [transferring the ControlNet to any community model](https://github.com/lllyasviel/ControlNet/discussions/12).
-
-![p](github_page/p21.png)
-
-# Annotate Your Own Data
-
-We provide simple python scripts to process images.
-
-[See a gradio example here](docs/annotator.md).
-
-# Train with Your Own Data
-
-Training a ControlNet is as easy as (or even easier than) training a simple pix2pix.
-
-[See the steps here](docs/train.md).

 # Citation
-
-@misc{zhang2023adding,
-title={Adding Conditional Control to Text-to-Image Diffusion Models},
-author={Lvmin Zhang and Maneesh Agrawala},
-year={2023},
-eprint={2302.05543},
-archivePrefix={arXiv},
-primaryClass={cs.CV}
-}
-
-[Arxiv Link](https://arxiv.org/abs/2302.05543)
+TODO: update
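
The new README pins the tested environment (pytorch 1.11, torchvision 0.12, cuda 11.3) and notes that `batch_size=4` fits a 24GB GPU. As a quick sanity check before launching `train.py`, a minimal sketch like the one below can report the local versions and GPU memory; the authoritative package pins live in the repo's `dependencies` folder, and these version strings are just the tested reference, not hard requirements.

```python
# Minimal environment sanity check (a sketch; the authoritative pins are in the repo's `dependencies` folder).
import torch
import torchvision

print(f"torch       : {torch.__version__}")        # tested with 1.11.x
print(f"torchvision : {torchvision.__version__}")  # tested with 0.12.x
print(f"CUDA (torch): {torch.version.cuda}")       # tested with 11.3
print(f"GPU available: {torch.cuda.is_available()}")

if torch.cuda.is_available():
    # The README suggests batch_size=4 is enough for a 24GB card, so report what is available.
    props = torch.cuda.get_device_properties(0)
    print(f"GPU: {props.name}, {props.total_memory / 1024**3:.1f} GB")
```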

Diff for: cldm/ae.py

+11-8
@@ -526,7 +526,7 @@ def get_input(self, batch, return_first_stage=False, bs=None):
         # if self.training and self.fixed_input:
         if self.fixed_input:
             if self.fixed_x is None: # first iteration
-                print('Warmup training - using fixed input image for now!')
+                print('[TRAINING] Warmup - using fixed input image for now!')
                 self.fixed_x = x.detach().clone()[:bs]
                 self.fixed_img = image.detach().clone()[:bs]
                 self.fixed_input_recon = image_rec.detach().clone()[:bs]
@@ -572,17 +572,20 @@ def shared_step(self, batch):
         bit_acc = loss_dict["bit_acc"]

         bit_acc_ = bit_acc.item()
-        if (bit_acc_ > 0.98) and (not self.fixed_input) and (not self.secret_warmup): # ramp up image loss at late training stage
+
+        if (bit_acc_ > 0.98) and (not self.fixed_input) and self.noise.is_activated():
             self.loss_layer.activate_ramp(self.global_step)
+
+        if (bit_acc_ > 0.95) and (not self.fixed_input): # ramp up image loss at late training stage
             if hasattr(self, 'noise') and (not self.noise.is_activated()):
                 self.noise.activate(self.global_step)

-        if (bit_acc_ > 0.95) and (not self.fixed_input) and self.secret_warmup:
-            if self.secret_baselen == self.secret_len: # warm up done
-                self.secret_warmup = False
-            else:
-                print(f'[TRAINING] secret length warmup: {self.secret_baselen} -> {self.secret_baselen*2}')
-                self.secret_baselen *= 2
+        # if (bit_acc_ > 0.95) and (not self.fixed_input) and self.secret_warmup:
+        #     if self.secret_baselen == self.secret_len: # warm up done
+        #         self.secret_warmup = False
+        #     else:
+        #         print(f'[TRAINING] secret length warmup: {self.secret_baselen} -> {self.secret_baselen*2}')
+        #         self.secret_baselen *= 2
+
         if (bit_acc_ > 0.9) and self.fixed_input: # execute only once
             print(f'[TRAINING] High bit acc ({bit_acc_}) achieved, switch to full image dataset training.')
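
The `shared_step` change replaces the secret-length warmup with a two-threshold schedule: once bit accuracy passes 0.95 the noise/transform pipeline is switched on, and only after it passes 0.98 with noise already active does the image-loss ramp start. A standalone sketch of that gating logic, using hypothetical `NoiseStub` and `LossStub` stand-ins for the real TransformNet and image-loss modules, might look like this:

```python
# Sketch of the staged schedule implied by the diff; NoiseStub and LossStub are
# hypothetical stand-ins, not the repo's classes.
class NoiseStub:
    def __init__(self):
        self.step0 = 0
    def is_activated(self):
        return self.step0 > 0
    def activate(self, step):
        self.step0 = step

class LossStub:
    def __init__(self):
        self.ramp_start = None
    def activate_ramp(self, step):
        if self.ramp_start is None:
            self.ramp_start = step

def update_schedule(bit_acc, global_step, noise, loss_layer, fixed_input=False):
    """Mirror the thresholds in shared_step: noise at >0.95, image-loss ramp at >0.98."""
    if bit_acc > 0.98 and not fixed_input and noise.is_activated():
        loss_layer.activate_ramp(global_step)
    if bit_acc > 0.95 and not fixed_input and not noise.is_activated():
        noise.activate(global_step)

noise, loss = NoiseStub(), LossStub()
for step, acc in enumerate([0.90, 0.96, 0.99]):
    update_schedule(acc, step, noise, loss)
print(noise.step0, loss.ramp_start)  # noise activates at step 1, ramp at step 2
```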

Diff for: cldm/transformations.py

+3-3
@@ -20,14 +20,14 @@ def __init__(self, rnd_bri=0.3, rnd_hue=0.1, do_jpeg=False, jpeg_quality=50, rnd
         self.contrast_low, self.contrast_high = contrast
         self.do_jpeg = do_jpeg
         self.ramp = ramp
-        self.step0 = 0
+        self.register_buffer('step0', torch.tensor(0)) # large number
         if imagenetc_level > 0:
             self.imagenetc = ImagenetCTransform(max_severity=imagenetc_level)

     def activate(self, global_step):
         if self.step0 == 0:
             print(f'[TRAINING] Activating TransformNet at step {global_step}')
-            self.step0 = global_step
+            self.step0 = torch.tensor(global_step)

     def is_activated(self):
         return self.step0 > 0
@@ -41,7 +41,7 @@ def forward(self, x, global_step, p=0.9):
             return x
         x = x * 0.5 + 0.5 # [-1, 1] -> [0, 1]
         batch_size, sh, device = x.shape[0], x.size(), x.device
-        ramp_fn = lambda ramp: np.min([(global_step-self.step0) / ramp, 1.])
+        ramp_fn = lambda ramp: np.min([(global_step-self.step0.cpu().item()) / ramp, 1.])

         rnd_bri = ramp_fn(self.ramp) * self.rnd_bri
         rnd_hue = ramp_fn(self.ramp) * self.rnd_hue
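
Keeping `step0` in a registered buffer means the activation step is saved in the module's `state_dict` and follows `.to(device)` calls instead of living in a plain Python attribute, and `ramp_fn` then scales each distortion strength linearly from 0 to 1 over `ramp` steps after activation. A small illustrative sketch of that buffer-plus-ramp pattern (not the repo's TransformNet, just the idea with a single strength value):

```python
# Illustrative sketch of a buffer-backed linear ramp, assuming a plain nn.Module.
import torch
import torch.nn as nn

class RampedStrength(nn.Module):
    def __init__(self, max_strength=0.3, ramp=1000):
        super().__init__()
        self.max_strength = max_strength
        self.ramp = ramp
        # Buffer, not a Parameter: stored in state_dict and moved by .to(device), never trained.
        self.register_buffer('step0', torch.tensor(0))

    def activate(self, global_step):
        if self.step0 == 0:
            self.step0 = torch.tensor(global_step)

    def is_activated(self):
        return self.step0 > 0

    def strength(self, global_step):
        if not self.is_activated():
            return 0.0
        # Linear ramp from 0 to max_strength over `ramp` steps after activation.
        frac = min((global_step - self.step0.item()) / self.ramp, 1.0)
        return frac * self.max_strength

m = RampedStrength()
m.activate(100)
print(m.strength(100), m.strength(600), m.strength(2000))  # 0.0, 0.15, 0.3
```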
