Statements (52)
| Predicate | Object |
|---|---|
| gptkbp:instanceOf | gptkb:model |
| gptkbp:application | image generation, image-to-image translation |
| gptkbp:architecture | gptkb:convolutional_neural_network |
| gptkbp:basedOn | gptkb:Stable_Diffusion |
| gptkbp:citation | Zhang, Lvmin, et al. 'Adding Conditional Control to Text-to-Image Diffusion Models.' arXiv preprint arXiv:2302.05543 (2023). |
| gptkbp:compatibleWith | gptkb:Stable_Diffusion_1.5, gptkb:Stable_Diffusion_2.1 |
| gptkbp:controls | gptkb:Canny_edge, gptkb:MLSD, pose estimation, segmentation, depth map, lineart, normal map, openpose, reference image, scribble |
| gptkbp:developedBy | gptkb:Lvmin_Zhang, gptkb:Mingyuan_Zhang |
| gptkbp:enables | fine-grained control over image generation |
| gptkbp:input | gptkb:illustrator, text prompt, conditioning map |
| gptkbp:language | gptkb:Python |
| gptkbp:license | gptkb:CreativeML_Open_RAIL-M |
| gptkbp:notableFeature | adds trainable copy of network blocks, preserves original model weights, supports multiple control types |
| gptkbp:notableFor | animation, AI art, creative design, image editing, image inpainting, image restoration, depth-to-image, image outpainting, virtual try-on, edge-to-image, line art colorization, photo-to-anime, pose transfer, semantic segmentation to image, sketch-to-image |
| gptkbp:openSource | true |
| gptkbp:platform | gptkb:PyTorch |
| gptkbp:releaseYear | 2023 |
| gptkbp:repository | https://github.com/lllyasviel/ControlNet |
| gptkbp:usedFor | controlling diffusion models |
| gptkbp:bfsParent | gptkb:IEC_61158 |
| gptkbp:bfsLayer | 6 |
| https://www.w3.org/2000/01/rdf-schema#label | ControlNet |
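The gptkbp:notableFeature values (a trainable copy of network blocks, preserved original weights, gated contributions) describe ControlNet's core mechanism: the pretrained model is frozen, a trainable copy processes the conditioning input, and zero-initialized connections ("zero convolutions") gate the copy's output so that, at initialization, the combined model reproduces the frozen model exactly. The following is a minimal sketch of that idea using toy scalar "blocks" instead of real convolutional layers; all class names and values here are hypothetical and are not taken from the ControlNet repository.

```python
# Toy sketch of the ControlNet trainable-copy + zero-convolution scheme.
# Scalars stand in for feature maps; real ControlNet uses conv layers.

class Block:
    """A toy 'network block': multiplies its input by a weight."""
    def __init__(self, weight):
        self.weight = weight

    def forward(self, x):
        return self.weight * x


class ZeroConv:
    """Zero-initialized connection (here a scalar gain starting at 0),
    so its output is 0 until training moves the gain off zero."""
    def __init__(self):
        self.gain = 0.0

    def forward(self, x):
        return self.gain * x


class ControlNetBlock:
    def __init__(self, frozen_weight):
        self.frozen = Block(frozen_weight)   # original weights, never updated
        self.copy = Block(frozen_weight)     # trainable copy of the block
        self.zero_in = ZeroConv()            # injects the conditioning map
        self.zero_out = ZeroConv()           # gates the copy's contribution

    def forward(self, x, condition):
        base = self.frozen.forward(x)
        control = self.copy.forward(x + self.zero_in.forward(condition))
        return base + self.zero_out.forward(control)


block = ControlNetBlock(frozen_weight=2.0)
# At initialization both zero convs output 0, so the conditioning input
# has no effect and the block behaves exactly like the frozen model:
print(block.forward(3.0, condition=5.0))  # 6.0, same as the frozen block alone
# After some training the zero convs learn nonzero gains and the
# conditioning map starts to steer the output:
block.zero_in.gain, block.zero_out.gain = 0.5, 0.1
print(block.forward(3.0, condition=5.0))  # 6.0 + 0.1 * 2.0 * (3.0 + 2.5) = 7.1
```

This zero-initialization is what lets the table claim both "preserves original model weights" and "fine-grained control": training only adjusts the copy and the gates, never the base model.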