{"id":73935,"date":"2023-01-12T15:43:02","date_gmt":"2023-01-12T06:43:02","guid":{"rendered":"https:\/\/www.waseda.jp\/inst\/research\/?p=73935"},"modified":"2023-01-12T15:43:02","modified_gmt":"2023-01-12T06:43:02","slug":"fashion-intelligence-systeman-outfit-interpretation-utilizing-images-and-rich-abstruct-tags%ef%bc%88published-in-expert-systems-with-applications-november-2022%ef%bc%89","status":"publish","type":"post","link":"https:\/\/www.waseda.jp\/inst\/research\/news\/73935","title":{"rendered":"Fashion Intelligence System: An Outfit Interpretation Utilizing Images and Rich Abstract Tags\uff08Published in Expert Systems with Applications, November 2022\uff09"},"content":{"rendered":"<table class=\"table table-bordered table-colored-tbhd\" style=\"height: 550px; width: 100%; border-collapse: collapse; border-style: solid;\" border=\"1\">\n<tbody>\n<tr style=\"height: 78px;\">\n<td style=\"width: 16.7893%; height: 78px;\">Journal Title<br \/>\n\/\u63b2\u8f09\u30b8\u30e3\u30fc\u30ca\u30eb\u540d<\/td>\n<td style=\"width: 71.3712%; height: 78px;\">Expert Systems with Applications<\/td>\n<\/tr>\n<tr style=\"height: 65px;\">\n<td style=\"width: 16.7893%; height: 80px;\">Publication Year and Month<br \/>\n\/\u63b2\u8f09\u5e74\u6708<\/td>\n<td style=\"width: 71.3712%; height: 80px;\">November, 2022<\/td>\n<\/tr>\n<tr style=\"height: 55px;\">\n<td style=\"width: 16.7893%; height: 79px;\">Paper Title<br \/>\n\/\u8ad6\u6587\u30bf\u30a4\u30c8\u30eb<\/td>\n<td style=\"width: 71.3712%; height: 79px;\">Fashion Intelligence System: An Outfit Interpretation Utilizing Images and Rich Abstract Tags<\/td>\n<\/tr>\n<tr style=\"height: 85px;\">\n<td style=\"width: 16.7893%; height: 85px;\">DOI<br \/>\n\/\u8ad6\u6587DOI<\/td>\n<td style=\"width: 71.3712%; height: 85px;\"><a href=\"https:\/\/doi.org\/10.1016\/j.eswa.2022.119167\">10.1016\/j.eswa.2022.119167<\/a><\/td>\n<\/tr>\n<tr style=\"height: 59px;\">\n<td style=\"width: 16.7893%; height: 80px;\">Author of Waseda 
University<br \/>\n\/\u672c\u5b66\u306e\u8457\u8005<\/td>\n<td style=\"width: 71.3712%; height: 80px;\">SHIMIZU, Ryotaro (2nd Year Doctoral Student, Faculty of Science and Engineering, School of Creative Science and Engineering): First Author, Corresponding Author<\/td>\n<\/tr>\n<tr style=\"height: 68px;\">\n<td style=\"width: 16.7893%; height: 86px;\">Related Websites<br \/>\n\/\u95a2\u9023Web<\/td>\n<td style=\"width: 71.3712%; height: 86px;\">&#8211;<\/td>\n<\/tr>\n<tr style=\"height: 138px;\">\n<td style=\"width: 16.7893%; height: 148px;\">Abstract<br \/>\n\/\u6284\u9332<\/td>\n<td style=\"width: 71.3712%; height: 148px;\">In recent years, it has become common for consumers to familiarize themselves with the latest fashion trends through the internet and engage in their own fashion-inspired shopping activities. Therefore, making fashion-inspired shopping and browsing activities (internet surfing in the fashion domain) comfortable is essential because it leads to interactions in the fashion industry. However, fashion is a fuzzy and complex domain that contains many abstract elements, and this ambiguity and complexity can hinder users\u2019 deep interest in the fashion industry. Therefore, we define a novel technology and domain called \u201cfashion intelligence\u201d and propose a system based on a visual-semantic embedding method for automatically learning and interpreting fashion and obtaining answers to users\u2019 questions. Our proposed method can embed the abundant abstract tag information in the same projective space as outfit images. Mapping of images and tags in a projective space helps search for outfit images using fashion-specific abstract words. In addition, visually estimating the degree of relevance between images and tags helps interpret abstract words. 
As a result, this research helps decrease fashion-specific ambiguity and complexity and supports the marketing activities and fashion choices of both experts and non-experts.<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n","protected":false},"excerpt":{"rendered":"<p>Journal Title \/\u63b2\u8f09\u30b8\u30e3\u30fc\u30ca\u30eb\u540d Expert Systems with Applications Publication Year and Month \/\u63b2\u8f09\u5e74\u6708 November, 2022 Paper [&hellip;]<\/p>\n","protected":false},"author":3,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"footnotes":""},"categories":[],"tags":[218,217],"class_list":["post-73935","post","type-post","status-publish","format-standard","hentry","tag-impact-en","tag-impact"],"acf":[],"_links":{"self":[{"href":"https:\/\/www.waseda.jp\/inst\/research\/wp-json\/wp\/v2\/posts\/73935","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.waseda.jp\/inst\/research\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.waseda.jp\/inst\/research\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.waseda.jp\/inst\/research\/wp-json\/wp\/v2\/users\/3"}],"replies":[{"embeddable":true,"href":"https:\/\/www.waseda.jp\/inst\/research\/wp-json\/wp\/v2\/comments?post=73935"}],"version-history":[{"count":1,"href":"https:\/\/www.waseda.jp\/inst\/research\/wp-json\/wp\/v2\/posts\/73935\/revisions"}],"predecessor-version":[{"id":73936,"href":"https:\/\/www.waseda.jp\/inst\/research\/wp-json\/wp\/v2\/posts\/73935\/revisions\/73936"}],"wp:attachment":[{"href":"https:\/\/www.waseda.jp\/inst\/research\/wp-json\/wp\/v2\/media?parent=73935"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.waseda.jp\/inst\/research\/wp-json\/wp\/v2\/categories?post=73935"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.waseda.jp\/inst\/research\/wp-json\/wp\/v2\/tags?post=73935"}],"curies":[{"name":"wp",
"href":"https:\/\/api.w.org\/{rel}","templated":true}]}}
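The abstract above describes a visual-semantic embedding: outfit images and abstract tags are mapped into one shared ("projective") space, so that searching for outfits by an abstract fashion word reduces to a nearest-neighbour lookup. The sketch below illustrates that retrieval step only; it is a minimal illustration under assumed dimensions and random stand-in projections, not the authors' trained model.

```python
# Minimal sketch of tag-to-image retrieval in a visual-semantic
# embedding (VSE) space. In the paper's setting the two projections
# would be learned (e.g. with a ranking loss) so that matching
# image/tag pairs land close together; here they are random stand-ins,
# and all dimensions and data are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

D_IMG, D_TAG, D_EMB = 512, 128, 64   # feature dims (assumed)

# Stand-ins for learned linear projections into the shared space.
W_img = rng.standard_normal((D_IMG, D_EMB)) / np.sqrt(D_IMG)
W_tag = rng.standard_normal((D_TAG, D_EMB)) / np.sqrt(D_TAG)

def embed(x, W):
    """Project features into the shared space and L2-normalise."""
    z = x @ W
    return z / np.linalg.norm(z, axis=-1, keepdims=True)

# Toy gallery: 5 outfit-image feature vectors, plus one query tag
# (standing in for an abstract word such as "casual").
images = rng.standard_normal((5, D_IMG))
tag = rng.standard_normal((D_TAG,))

img_emb = embed(images, W_img)   # shape (5, D_EMB), unit rows
tag_emb = embed(tag, W_tag)      # shape (D_EMB,), unit vector

# Cosine similarity is the dot product of unit vectors;
# ranking images by it implements tag-based outfit search.
scores = img_emb @ tag_emb       # shape (5,)
ranking = np.argsort(-scores)
print("images ranked by relevance to the tag:", ranking)
```

The same space supports the reverse direction described in the abstract: scoring an image against every tag estimates the degree of relevance between an outfit and each abstract word.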