VETE: improving visual embeddings through text descriptions for eCommerce search engines

dc.coverageDOI: 10.1007/s11042-023-14595-8
dc.creatorMartínez, Guillermo
dc.creatorSaavedra, Jose M.
dc.creatorMurrugara-Llerena, Nils
dc.date2023
dc.date.accessioned05-01-2026 18:14
dc.date.available05-01-2026 18:14
dc.description<p>A search engine is a critical component in the success of eCommerce. Searching for a particular product can be frustrating when users want specific product features that cannot be easily represented by a simple text search or catalog filter. Due to advances in artificial intelligence and deep learning, content-based visual search engines are being included in eCommerce search bars. A visual search is instantaneous (just take a picture and search) and fully expressive of image details. However, visual search in eCommerce still suffers from a large semantic gap. Traditionally, visual search models are trained in a supervised manner with large collections of images that do not represent the semantics of a target eCommerce catalog well. Therefore, we propose VETE (Visual Embedding modulated by TExt) to boost visual embeddings in eCommerce by leveraging textual information of products in the target catalog. Our proposal improves the baseline visual space for global and fine-grained categories in real-world eCommerce data. We achieved an average improvement of 3.48% for catalog-like queries and 3.70% for noisy ones.</p>eng
dc.identifierhttps://investigadores.uandes.cl/en/publications/1abc07d1-d043-47f8-ab8b-9c48e67f8757
dc.languageeng
dc.rightsinfo:eu-repo/semantics/restrictedAccess
dc.sourcevol.82 (2023) nr.26 p.41343-41379
dc.subjectContent-based image retrieval
dc.subjectSelf-supervised representation learning
dc.subjectVisual and text embeddings
dc.titleVETE: improving visual embeddings through text descriptions for eCommerce search engineseng
dc.typeArticleeng
dc.typeArtículospa