<!DOCTYPE html>
<html lang="en">
<head>
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>Learning to Cartoonize Using White-box Cartoon Representations</title>
<link rel="stylesheet" type="text/css" href="./index_files/pixl-bk.css">
<link rel="stylesheet" type="text/css" href="./index_files/pixl-fonts.css">
</head>
<body>
<div class="crumb">
<a href="https://github.com/lllyasviel">Style2Paints Research</a> →
<span>[Wang et al. 2020]</span>
</div>
<div class="content">
<div class="paperheader">
<div class="papertitle"> Learning to Cartoonize Using White-box Cartoon Representations </div>
<br>
<div class="pubinfo"> Computer Vision and Pattern Recognition (CVPR), June 2020 </div>
<br>
<div class="authors"> <a href="https://github.com/SystemErrorWang">Xinrui Wang</a> and Jinze Yu </div>
</div>
<div class="paperimg"><img src="./paper/shinjuku.jpg"></div>
<div class="longcaption">Example of image cartoonization with our method: left is a frame in the animation "Garden of words", right is a real-world photo processed by our proposed method.</div>
<div class="header">Abstract</div>
<div class="abstract">
This paper presents an approach for image cartoonization. By observing the cartoon painting behavior and consulting artists, we propose to separately identify three white-box representations from images: the surface representation that contains a smooth surface of cartoon images, the structure representation that refers to the sparse color-blocks and flattened global content in the celluloid-style workflow, and the texture representation that reflects high-frequency texture, contours, and details in cartoon images. A Generative Adversarial Network (GAN) framework is used to learn the extracted representations and to cartoonize images.
<br>
The learning objectives of our method are separately based on each extracted representation, making our framework controllable and adjustable. This enables our approach to meet artists’ requirements in different styles and diverse use cases. Qualitative comparisons and quantitative analyses, as well as user studies, have been conducted to validate the effectiveness of this approach, and our method outperforms previous methods in all comparisons. Finally, the ablation study demonstrates the influence of each component in our framework.
</div>
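<div class="header">Representation Extraction (Illustrative Sketch)</div>
<p>
The abstract describes three separately extracted white-box representations. The sketch below is an illustration only, not the authors’ released code: it assumes a guided filter for the surface representation, superpixel mean-color flattening for the structure representation, and a random channel mix for the texture representation. All function names and parameters are hypothetical.
</p>
<pre>
# Illustrative sketch (not the authors' implementation) of the three
# representations named in the abstract. Assumed operators: guided filter
# (surface), Felzenszwalb superpixels + mean color (structure), random
# channel mix (texture). Requires numpy, opencv-contrib-python, scikit-image.
import numpy as np
import cv2                                   # cv2.ximgproc needs opencv-contrib-python
from skimage.segmentation import felzenszwalb

def surface_representation(img, radius=5, eps=2e-2):
    """Edge-preserving smoothing keeps the smooth surface of the image."""
    f = img.astype(np.float32) / 255.0
    out = cv2.ximgproc.guidedFilter(guide=f, src=f, radius=radius, eps=eps)
    return (out * 255).astype(np.uint8)

def structure_representation(img, scale=100, sigma=0.8, min_size=50):
    """Sparse color blocks: fill each superpixel with its mean color."""
    segments = felzenszwalb(img, scale=scale, sigma=sigma, min_size=min_size)
    out = np.zeros_like(img, dtype=np.float32)
    for label in np.unique(segments):
        mask = segments == label
        out[mask] = img[mask].mean(axis=0)   # mean color of the superpixel
    return out.astype(np.uint8)

def texture_representation(img, rng=None):
    """Randomly weighted single-channel mix that discards color but keeps
    high-frequency texture, contours, and details."""
    rng = np.random.default_rng() if rng is None else rng
    w = np.abs(rng.uniform(-1.0, 1.0, size=3))
    w /= w.sum() + 1e-8
    gray = (img.astype(np.float32) * w).sum(axis=-1, keepdims=True)
    return np.repeat(gray, 3, axis=-1).astype(np.uint8)
</pre>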
<div class="header">Files</div>
<ul>
<li> <a href="./paper/06791.pdf">Paper</a> (9 MB PDF)</li>
<li> <a href="./paper/06791-supp.pdf">Supplementary Material</a> (15 MB PDF)</li>
</ul>
<div class="header">See Also</div>
<ul>
<li> <a href="https://github.com/SystemErrorWang/White-box-Cartoonization">Source Code</a> - Only inference code available now, training code will be updated later.</li>
<li> <a href="https://www.bilibili.com/video/av56708333">Demo Video</a> - Generated with early version of our work in bilibili.com.</li>
</ul>
<div class="header">Citation</div>
<p>
Xinrui Wang and Jinze Yu<br>
"Learning to Cartoonize Using White-box Cartoon Representations."<br>
<i>IEEE Conference on Computer Vision and Pattern Recognition</i>, June 2020.
</p>
</div>
</body></html>