AttribIt: content creation with semantic attributes

S. Chaudhuri, E. Kalogerakis, S. Giguere, T. Funkhouser
{"title":"Attribit: content creation with semantic attributes","authors":"S. Chaudhuri, E. Kalogerakis, S. Giguere, T. Funkhouser","doi":"10.1145/2501988.2502008","DOIUrl":null,"url":null,"abstract":"We present AttribIt, an approach for people to create visual content using relative semantic attributes expressed in linguistic terms. During an off-line processing step, AttribIt learns semantic attributes for design components that reflect the high-level intent people may have for creating content in a domain (e.g. adjectives such as \"dangerous\", \"scary\" or \"strong\") and ranks them according to the strength of each learned attribute. Then, during an interactive design session, a person can explore different combinations of visual components using commands based on relative attributes (e.g. \"make this part more dangerous\"). Novel designs are assembled in real-time as the strengths of selected attributes are varied, enabling rapid, in-situ exploration of candidate designs. We applied this approach to 3D modeling and web design. Experiments suggest this interface is an effective alternative for novices performing tasks with high-level design goals.","PeriodicalId":294436,"journal":{"name":"Proceedings of the 26th annual ACM symposium on User interface software and technology","volume":"60 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2013-10-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"127","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 26th annual ACM symposium on User interface software and technology","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/2501988.2502008","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 127

Abstract

We present AttribIt, an approach for people to create visual content using relative semantic attributes expressed in linguistic terms. During an off-line processing step, AttribIt learns semantic attributes for design components that reflect the high-level intent people may have for creating content in a domain (e.g. adjectives such as "dangerous", "scary" or "strong") and ranks them according to the strength of each learned attribute. Then, during an interactive design session, a person can explore different combinations of visual components using commands based on relative attributes (e.g. "make this part more dangerous"). Novel designs are assembled in real time as the strengths of selected attributes are varied, enabling rapid, in-situ exploration of candidate designs. We applied this approach to 3D modeling and web design. Experiments suggest this interface is an effective alternative for novices performing tasks with high-level design goals.
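The interaction loop the abstract describes — rank candidate components by a learned attribute strength, then answer a command like "make this part more dangerous" by swapping in the next-stronger component — can be sketched as follows. This is a minimal illustration only: the linear attribute-scoring model (in the spirit of relative-attribute ranking methods), the `AttributeRanker` class, and the toy feature vectors are all assumptions for exposition, not AttribIt's published implementation.

```python
import numpy as np

# Minimal sketch (an assumption, not AttribIt's code): each component is
# described by a feature vector, and each attribute (e.g. "dangerous") has
# a learned weight vector w, so that w . features orders components from
# weakest to strongest in that attribute -- a linear relative-attribute model.

class AttributeRanker:
    def __init__(self, weights):
        # weights: dict mapping attribute name -> learned weight vector
        self.weights = weights

    def rank(self, attribute, components):
        """Sort components from weakest to strongest in the attribute."""
        w = self.weights[attribute]
        return sorted(components, key=lambda c: float(np.dot(w, c["features"])))

    def step(self, attribute, components, current, direction=+1):
        """Handle 'make this part more/less <attribute>' by moving one
        step along the attribute ranking (direction=+1 means 'more')."""
        ranked = self.rank(attribute, components)
        i = next(k for k, c in enumerate(ranked) if c is current)
        j = min(max(i + direction, 0), len(ranked) - 1)
        return ranked[j]

# Toy usage with made-up wing components for an assembled 3D model:
components = [
    {"name": "wing_a", "features": np.array([0.2, 0.1])},
    {"name": "wing_b", "features": np.array([0.9, 0.4])},
    {"name": "wing_c", "features": np.array([0.5, 0.8])},
]
ranker = AttributeRanker({"dangerous": np.array([1.0, 0.5])})
swap = ranker.step("dangerous", components, components[0], +1)
print(swap["name"])  # -> "wing_c", the next-more-"dangerous" wing here
```

In the full system the selected attribute's strength is varied continuously and the design is reassembled in real time; the sketch above captures only the discrete "advance to the next-stronger component" core of that loop.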