Survey on Sketch-to-photo Translation

dc.coverage DOI: 10.1145/3606694
dc.creator Donoso, Diego
dc.creator Saavedra, Jose M.
dc.date 2024
dc.date.accessioned 2026-01-05T21:14:29Z
dc.date.available 2026-01-05T21:14:29Z
dc.description Sketch-based understanding is involved in human communication and cognitive development, making it essential to visual perception. A specific task in this domain is sketch-to-photo translation, where a model produces realistic images from simple drawings. To this end, large paired training datasets are commonly required, which is impractical in real applications. Thus, this work studies conditional generative models for sketch-to-photo translation, overcoming the lack of training datasets with a self-supervised approach that produces sketch–photo pairs from a target catalog. Our study shows the benefit of cycle-consistency loss and UNet architectures, which, together with the proposed dataset generation, improve performance in real applications such as eCommerce. Our results also reveal the weakness of conditional DDPMs at generating images that resemble the input sketch, even though they achieve a high FID score.
dc.identifier https://investigadores.uandes.cl/en/publications/dfffd473-54d0-446a-ba09-f6ad70f09b3f
dc.identifier.uri https://repositorio.uandes.cl/handle/uandes/66138
dc.language eng
dc.rights info:eu-repo/semantics/restrictedAccess
dc.source vol. 56 (2024), nr. 1, p. 1–25; date: 2024-01-31
dc.subject Generative models
dc.subject conditional GANs
dc.subject deep learning
dc.subject sketch-to-photo translation
dc.title Survey on Sketch-to-photo Translation
dc.type Article