
Latest publications: Proceedings of the 32nd Annual ACM Symposium on User Interface Software and Technology

M-Hair: Creating Novel Tactile Feedback by Augmenting the Body Hair to Respond to Magnetic Field
Roger Boldu, Sambhav Jain, J. P. F. Cortés, Haimo Zhang, Suranga Nanayakkara
In this paper, we present M-Hair, a novel method for providing tactile feedback by stimulating only the body hair without touching the skin. It works by applying passive magnetic materials to the body hair, which is actuated by external magnetic fields. Our user study suggested that the value of the M-hair mechanism is in inducing affective sensations such as pleasantness, rather than effectively discriminating features such as shape, size, and direction. This work invites future research to use this method in applications that induce emotional responses or affective states, and as a research tool for investigations of this novel sensation.
DOI: 10.1145/3332165.3347955 · Published 2019-10-17
Citations: 12
Videostrates
C. Klokmose, C. Rémy, Janus Bager Kristensen, Rolf Bagge, Michel Beaudouin-Lafon, W. Mackay
We present Videostrates, a concept and a toolkit for creating real-time collaborative video editing tools. Videostrates supports both live and recorded video composition with a declarative HTML-based notation, combining both simple and sophisticated editing tools that can be used collaboratively. Videostrates is programmable and unleashes the power of the modern web platform for video manipulation. We demonstrate its potential through three use scenarios: collaborative video editing with multiple tools and devices; orchestration of multiple live streams that are recorded and broadcast to a popular streaming platform; and programmatic creation of video using WebGL and shaders for blue screen effects. These scenarios only scratch the surface of Videostrates' potential, which opens up a design space for novel collaborative video editors with fully programmable interfaces.
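To make the idea of a declarative, HTML-based composition concrete, here is a minimal sketch that renders a clip list into an HTML-like notation. The markup vocabulary (`<composition>`, `data-start`, `data-duration`) is hypothetical, invented for illustration; the real Videostrates notation is defined by the toolkit itself.

```python
from dataclasses import dataclass

@dataclass
class Clip:
    src: str        # media source URL
    start: float    # seconds into the composition
    duration: float # seconds of playback

def to_notation(clips):
    """Render a clip list as a declarative, HTML-like composition."""
    body = "\n".join(
        f'  <video src="{c.src}" data-start="{c.start}" '
        f'data-duration="{c.duration}"></video>'
        for c in clips
    )
    return f"<composition>\n{body}\n</composition>"

timeline = [Clip("intro.mp4", 0.0, 5.0), Clip("talk.mp4", 5.0, 60.0)]
notation = to_notation(timeline)
```

The point of a declarative notation like this is that the document, not imperative editing code, is the shared artifact — which is what makes real-time collaborative editing on the web platform natural.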
DOI: 10.1145/3332165.3347912 · Published 2019-10-17
Citations: 7
SensorSnaps
A. Dementyev, Tomás Vega Gálvez, A. Olwal
Adding electronics to textiles can be time-consuming and requires technical expertise. We introduce SensorSnaps, low-power wireless sensor nodes that seamlessly integrate into caps of fabric snap fasteners. SensorSnaps provide a new technique to quickly and intuitively augment any location on the clothing with sensing capabilities. SensorSnaps securely attach and detach from ubiquitous commercial snap fasteners. Using inertial measurement units, the SensorSnaps detect tap and rotation gestures, as well as track body motion. We optimized the power consumption for SensorSnaps to work continuously for 45 minutes and up to 4 hours in capacitive touch standby mode. We present applications in which the SensorSnaps are used as gestural interfaces for a music player controller, cursor control, and motion tracking suit. The user study showed that SensorSnap could be attached in around 71 seconds, similar to attaching off-the-shelf snaps, and participants found the gestures easy to learn and perform. SensorSnaps could allow anyone to effortlessly add sophisticated sensing capacities to ubiquitous snap fasteners.
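As a rough illustration of IMU-based tap detection of the kind the abstract mentions, the sketch below flags a tap when the acceleration magnitude spikes above a threshold. The threshold value and single-sample logic are assumptions for illustration; the actual SensorSnaps firmware is not described at this level in the abstract.

```python
import math

def detect_tap(accel_g, threshold_g=2.5):
    """Return the index of the first sample whose acceleration
    magnitude (in g) exceeds the threshold, or None if none does.
    Illustrative only: real tap detectors also debounce and window."""
    for i, (ax, ay, az) in enumerate(accel_g):
        if math.sqrt(ax * ax + ay * ay + az * az) > threshold_g:
            return i
    return None

# Resting near 1 g, then a brief spike from a finger tap.
samples = [(0.0, 0.0, 1.0)] * 5 + [(0.5, 0.2, 3.1)] + [(0.0, 0.0, 1.0)] * 5
tap_index = detect_tap(samples)
```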
DOI: 10.1145/3332165.3347913 · Published 2019-10-17
Citations: 5
LabelAR
Michael J. Laielli, James Smith, Giscard Biamby, Trevor Darrell, B. Hartmann
Computer vision is applied in an ever expanding range of applications, many of which require custom training data to perform well. We present a novel interface for rapid collection of labeled training images to improve CV-based object detectors. LabelAR leverages the spatial tracking capabilities of an AR-enabled camera, allowing users to place persistent bounding volumes that stay centered on real-world objects. The interface then guides the user to move the camera to cover a wide variety of viewpoints. We eliminate the need for post hoc labeling of images by automatically projecting 2D bounding boxes around objects in the images as they are captured from AR-marked viewpoints. In a user study with 12 participants, LabelAR significantly outperforms existing approaches in terms of the trade-off between detection performance and collection time.
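The core geometric step the abstract describes — turning an AR-tracked 3D bounding volume into a 2D image-space box at capture time — can be sketched with a standard pinhole projection. The intrinsics and the unit-cube example below are made up for illustration; LabelAR's own pipeline is not reproduced here.

```python
import numpy as np

def project_box(corners_world, K, R, t):
    """Project the 8 corners of a 3D bounding volume into the image
    plane and return the enclosing 2D box (x_min, y_min, x_max, y_max)."""
    cam = R @ corners_world.T + t.reshape(3, 1)  # world -> camera frame, 3x8
    uv = K @ cam                                 # pinhole projection
    uv = uv[:2] / uv[2]                          # perspective divide, 2x8
    x_min, y_min = uv.min(axis=1)
    x_max, y_max = uv.max(axis=1)
    return x_min, y_min, x_max, y_max

# Unit cube centred 5 m in front of an identity-pose camera.
corners = np.array([[x, y, z] for x in (-0.5, 0.5)
                              for y in (-0.5, 0.5)
                              for z in (4.5, 5.5)])
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])
box = project_box(corners, K, np.eye(3), np.zeros(3))
```

Because the box is computed from the tracked pose at every captured viewpoint, no human labeling pass over the images is needed afterwards.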
DOI: 10.1145/3332165.3347927 · Published 2019-10-17
Citations: 13
BubBowl: Display Vessel Using Electrolysis Bubbles in Drinkable Beverages
Ayaka Ishii, I. Siio
Research was conducted regarding a display that presents digital information using bubbles. Conventional bubble displays require moving parts, because it is common to use air taken from outside of the water to represent pixels. However, it is difficult to increase the number of pixels at a low cost. We propose a liquid-surface display using pixels of bubble clusters generated from electrolysis, and present the cup-type device BubBowl, which generates a 10×10 pixel dot matrix pattern on the surface of a beverage. Our technique requires neither a gas supply from the outside nor moving parts. Using the proposed electrolysis method, a higher-resolution display can easily be realized using a PCB with a higher density of matrix electrodes.Moreover, the method is simple and practical, and can be utilized in daily life, such as for presenting information using bubbles on the surface of coffee in a cup.
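A 10×10 matrix display like the one described is ultimately driven by deciding which (row, column) electrodes to energise. The sketch below maps a text glyph onto that electrode set; this driver-side representation is hypothetical — the paper describes the display hardware, not this API.

```python
def electrode_pixels(glyph):
    """Map a 10x10 glyph ('#' = bubble on, ' ' = off) to the set of
    (row, col) matrix electrodes to energise."""
    assert len(glyph) == 10 and all(len(row) == 10 for row in glyph)
    return {(r, c)
            for r, row in enumerate(glyph)
            for c, ch in enumerate(row) if ch == "#"}

# A centred plus sign: rows 4-5 fully on, columns 4-5 on elsewhere.
glyph = ["##########" if r in (4, 5) else "    ##    " for r in range(10)]
pixels = electrode_pixels(glyph)
```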
DOI: 10.1145/3332165.3347923 · Published 2019-10-17
Citations: 12
Session details: Session 9B: 3D and VR Input
David Lindbauer
DOI: 10.1145/3368386 · Published 2019-10-17
Citations: 0
Eye&Head: Synergetic Eye and Head Movement for Gaze Pointing and Selection
Ludwig Sidenmark, Hans-Werner Gellersen
Eye gaze involves the coordination of eye and head movement to acquire gaze targets, but existing approaches to gaze pointing are based on eye-tracking in abstraction from head motion. We propose to leverage the synergetic movement of eye and head, and identify design principles for Eye&Head gaze interaction. We introduce three novel techniques that build on the distinction of head-supported versus eyes-only gaze, to enable dynamic coupling of gaze and pointer, hover interaction, visual exploration around pre-selections, and iterative and fast confirmation of targets. We demonstrate Eye&Head interaction on applications in virtual reality, and evaluate our techniques against baselines in pointing and confirmation studies. Our results show that Eye&Head techniques enable novel gaze behaviours that provide users with more control and flexibility in fast gaze pointing and selection.
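The distinction the techniques build on — head-supported versus eyes-only gaze — can be caricatured as a simple classifier over movement amplitudes. The threshold below is a made-up assumption for illustration; the paper derives its design principles from measured eye-head coordination, not from a one-line rule.

```python
def classify_gaze_shift(eye_amplitude_deg, head_amplitude_deg,
                        head_threshold_deg=2.0):
    """Label a gaze shift by whether meaningful head movement
    accompanied it. Hypothetical threshold, for illustration only."""
    if head_amplitude_deg >= head_threshold_deg:
        return "head-supported"
    return "eyes-only"

# Large shifts typically recruit the head; small ones stay eyes-only.
large = classify_gaze_shift(25.0, 18.0)
small = classify_gaze_shift(8.0, 0.5)
```

Once a shift is labeled, an interface can, for example, decouple the pointer from gaze during eyes-only exploration and re-couple it when the head follows.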
DOI: 10.1145/3332165.3347921 · Published 2019-10-17
Citations: 72
milliMorph -- Fluid-Driven Thin Film Shape-Change Materials for Interaction Design
Qiuyu Lu, Jifei Ou, João Wilbert, André Haben, Haipeng Mi, H. Ishii
This paper presents a design space, a fabrication system and applications of creating fluidic chambers and channels at millimeter scale for tangible actuated interfaces. The ability to design and fabricate millifluidic chambers allows one to create high frequency actuation, sequential control of flows and high resolution design on thin film materials. We propose a four dimensional design space of creating these fluidic chambers, a novel heat sealing system that enables easy and precise millifluidics fabrication, and application demonstrations of the fabricated materials for haptics, ambient devices and robotics. As shape-change materials are increasingly integrated in designing novel interfaces, milliMorph enriches the library of fluid-driven shape-change materials, and demonstrates new design opportunities that is unique at millimeter scale for product and interaction design.
DOI: 10.1145/3332165.3347956 · Published 2019-10-17
Citations: 39
SCALE
Takatoshi Yoshida, Xiaoyan Shen, Koichi Yoshino, Ken Nakagaki, H. Ishii
DOI: 10.1145/3332165.3347935 · Published 2019-10-17
Citations: 7
Redirected Jumping: Perceptual Detection Rates for Curvature Gains
Sungchul Jung, C. Borst, S. Hoermann, R. Lindeman
Redirected walking (RDW) techniques provide a way to explore a virtual space that is larger than the available physical space by imperceptibly manipulating the virtual world view or motions. These manipulations may introduce conflicts between real and virtual cues (e.g., visual-vestibular conflicts), which can be disturbing when detectable by users. The empirically established detection thresholds of rotation manipulation for RDW still require a large physical tracking space and are therefore impractical for general-purpose Virtual Reality (VR) applications. We investigate Redirected Jumping (RDJ) as a new locomotion metaphor for redirection to partially address this limitation, and because jumping is a common interaction for environments like games. We investigated the detection rates for different curvature gains during RDJ. The probability of users detecting RDJ appears substantially lower than that of RDW, meaning designers can get away with greater manipulations with RDJ than with RDW. We postulate that the substantial vertical (up/down) movement present when jumping introduces increased vestibular noise compared to normal walking, thereby supporting greater rotational manipulations. Our study suggests that the potential combination of metaphors (e.g., walking and jumping) could further reduce the required physical space for locomotion in VR. We also summarize some differences in user jumping approaches and provide motion sickness measures in our study.
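A curvature gain injects a small rotation per unit of travel so the physical path bends while the virtual path feels straight. The sketch below applies that idea to a single jump using the common arc-length convention from the RDW literature; the radius value and per-jump formulation are illustrative assumptions, not the paper's exact parameterisation.

```python
import math

def redirect_jump(virtual_heading_deg, jump_distance_m, curvature_radius_m):
    """Rotate the virtual heading during one jump so the user's physical
    trajectory curves while the virtual trajectory stays straight.
    A path of length d along a circle of radius r turns by d / r radians."""
    delta_deg = math.degrees(jump_distance_m / curvature_radius_m)
    return (virtual_heading_deg + delta_deg) % 360.0

# A 1 m jump against a 7.5 m curvature radius injects about 7.6 degrees.
heading = redirect_jump(0.0, 1.0, 7.5)
```

Lower detection probabilities for RDJ would translate into tolerating smaller radii (larger per-jump rotations) than walking allows.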
DOI: 10.1145/3332165.3347868 · Published 2019-10-17
Citations: 10