Image Processing: Algorithms and Systems XXI Conference Overview and Papers Program
Pub Date: 2023-01-16 | DOI: 10.2352/ei.2023.35.9.ipas-a09
Image Processing: Algorithms and Systems continues the tradition of its predecessor conference, Nonlinear Image Processing and Pattern Analysis, in exploring new image processing algorithms. Specifically, the conference aims to highlight the importance of the interaction between transform-, model-, and learning-based approaches for creating effective algorithms and building modern imaging systems for new and emerging applications. It also echoes the growing call to integrate theoretical research on image processing algorithms with more applied research on image processing systems.
{"title":"Image Processing: Algorithms and Systems XXI Conference Overview and Papers Program","authors":"","doi":"10.2352/ei.2023.35.9.ipas-a09","DOIUrl":"https://doi.org/10.2352/ei.2023.35.9.ipas-a09","url":null,"abstract":"Abstract Image Processing: Algorithms and Systems continues the tradition of the past conference, Nonlinear Image Processing and Pattern Analysis, in exploring new image processing algorithms. Specifically, the conference aims at highlighting the importance of the interaction between transform-, model-, and learning-based approaches for creating effective algorithms and building modern imaging systems for new and emerging applications. It also reverberates the growing call for integration of the theoretical research on image processing algorithms with the more applied research on image processing systems.","PeriodicalId":73514,"journal":{"name":"IS&T International Symposium on Electronic Imaging","volume":"10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-01-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135695208","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Intelligent Robotics and Industrial Applications using Computer Vision 2023 Conference Overview and Papers Program
Pub Date: 2023-01-16 | DOI: 10.2352/ei.2023.35.5.iriacv-a05
This conference brings together real-world practitioners and researchers in intelligent robots and computer vision to share recent applications and developments. Topics of interest include the integration of imaging sensors, supporting hardware, computers, and algorithms for intelligent robots, manufacturing inspection, characterization, and/or control. The decreasing cost of computational power and vision sensors has motivated the rapid proliferation of machine vision technology in a variety of industries, including aluminum, automotive, forest products, textiles, glass, steel, metal casting, aircraft, chemicals, food, fishing, agriculture, archaeological products, medical products, and artistic products. Other industries, such as semiconductor and electronics manufacturing, have employed machine vision technology for several decades. Machine vision in support of handling robots is another main topic. Within intelligent robotics, another approach is sensor fusion: combining multi-modal sensors (audio, location, image, and video data, along with other 3D capture devices) for signal processing, machine learning, and computer vision. There is a need for accurate, fast, and robust detection of objects and their position in space. Object surfaces, backgrounds, and illumination are uncontrolled, and in most cases the objects of interest sit within a bulk of many others. For both new and existing industrial users of machine vision, there are numerous innovative methods to improve productivity, quality, and compliance with product standards. Several broad problem areas have received significant attention in recent years. For example, some industries are collecting enormous amounts of image data from product monitoring systems, and new, efficient methods are required to extract insight and perform process diagnostics from this historical record. Regarding the physical scale of the measurements, microscopy techniques are nearing resolution limits in fields such as semiconductors, biology, and other nano-scale technologies. Techniques such as resolution enhancement, model-based methods, and statistical imaging may provide the means to extend these systems beyond current capabilities. Finally, obtaining real-time and robust measurements in-line or at-line in harsh industrial environments remains a challenge for machine vision researchers, especially when the manufacturer cannot make significant changes to its facility or process.
{"title":"Intelligent Robotics and Industrial Applications using Computer Vision 2023 Conference Overview and Papers Program","authors":"","doi":"10.2352/ei.2023.35.5.iriacv-a05","DOIUrl":"https://doi.org/10.2352/ei.2023.35.5.iriacv-a05","url":null,"abstract":"Abstract This conference brings together real-world practitioners and researchers in intelligent robots and computer vision to share recent applications and developments. Topics of interest include the integration of imaging sensors supporting hardware, computers, and algorithms for intelligent robots, manufacturing inspection, characterization, and/or control. The decreased cost of computational power and vision sensors has motivated the rapid proliferation of machine vision technology in a variety of industries, including aluminum, automotive, forest products, textiles, glass, steel, metal casting, aircraft, chemicals, food, fishing, agriculture, archaeological products, medical products, artistic products, etc. Other industries, such as semiconductor and electronics manufacturing, have been employing machine vision technology for several decades. Machine vision supporting handling robots is another main topic. With respect to intelligent robotics another approach is sensor fusion – combining multi-modal sensors in audio, location, image and video data for signal processing, machine learning and computer vision, and additionally other 3D capturing devices. There is a need for accurate, fast, and robust detection of objects and their position in space. Their surface, background, and illumination are uncontrolled, and in most cases the objects of interest are within a bulk of many others. For both new and existing industrial users of machine vision, there are numerous innovative methods to improve productivity, quality, and compliance with product standards. There are several broad problem areas that have received significant attention in recent years. For example, some industries are collecting enormous amounts of image data from product monitoring systems. New and efficient methods are required to extract insight and to perform process diagnostics based on this historical record. Regarding the physical scale of the measurements, microscopy techniques are nearing resolution limits in fields such as semiconductors, biology, and other nano-scale technologies. Techniques such as resolution enhancement, model-based methods, and statistical imaging may provide the means to extend these systems beyond current capabilities. Furthermore, obtaining real-time and robust measurements in-line or at-line in harsh industrial environments is a challenge for machine vision researchers, especially when the manufacturer cannot make significant changes to their facility or process.","PeriodicalId":73514,"journal":{"name":"IS&T International Symposium on Electronic Imaging","volume":"16 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-01-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135695223","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Immersive security personnel training module for active shooter events
Pub Date: 2023-01-16 | DOI: 10.2352/ei.2023.35.12.ervr-217
Sharad Sharma, JeeWoong Park, Brendan Tran Morris
There is a need to prepare for emergencies such as active shooter events. Emergency response training drills and exercises are necessary because we cannot predict when emergencies will occur. There has been progress in understanding human behavior, unpredictability, human motion synthesis, crowd dynamics, and their relationships with active shooter events, but challenges remain. This paper presents an immersive security personnel training module for active shooter events in an indoor building. We have created an experimental platform for conducting active shooter training drills that gives a fully immersive feel of the situation and allows one to perform virtual evacuation drills. The training module incorporates four sub-modules: 1) a situational assessment module, 2) an individual officer intervention module, 3) a team response module, and 4) a rescue task force module. We have developed an immersive virtual reality training module for active shooter events using an Oculus headset for course of action, visualization, and situational awareness, as shown in Fig. 1. The module aims to gather information about the emergency situation inside the building. The dispatched officer verifies the active shooter situation, finds a safe zone in the building, secures the people in that area, and determines the number and location of persons in possible jeopardy. Upon completing the initial assessment, the first officer on scene advises communications and requests resources as deemed necessary. This determines whether to take immediate action alone or with another officer, or to wait until additional resources are available. After gathering this information, the officer relays it to their superior through a communication device.
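The four-stage flow just described lends itself to a simple pipeline. Purely as an illustration, the minimal Python sketch below sequences the sub-modules over shared drill state; the paper's actual module is a VR application built for an Oculus headset, and every class name, field, and value here is a hypothetical stand-in rather than the authors' implementation.

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical sketch of sequencing the four training sub-modules described
# above as a simple pipeline; names and values are illustrative only.

@dataclass
class DrillState:
    shooter_confirmed: bool = False
    safe_zone_found: bool = False
    civilians_at_risk: int = 0
    resources_requested: List[str] = field(default_factory=list)

class SituationalAssessment:
    def run(self, state: DrillState) -> DrillState:
        # Verify the active shooter situation, find a safe zone, and count
        # persons in possible jeopardy.
        state.shooter_confirmed = True
        state.safe_zone_found = True
        state.civilians_at_risk = 12  # placeholder drill value
        return state

class IndividualOfficerIntervention:
    def run(self, state: DrillState) -> DrillState:
        # Decide whether to act immediately or request additional resources.
        if state.civilians_at_risk > 10:
            state.resources_requested.append("rescue task force")
        return state

class TeamResponse:
    def run(self, state: DrillState) -> DrillState:
        # Coordinate a team entry once another officer is available.
        return state

class RescueTaskForce:
    def run(self, state: DrillState) -> DrillState:
        # Evacuate the secured civilians from the safe zone.
        state.civilians_at_risk = 0
        return state

def run_drill() -> DrillState:
    state = DrillState()
    for module in (SituationalAssessment(), IndividualOfficerIntervention(),
                   TeamResponse(), RescueTaskForce()):
        state = module.run(state)
    return state

if __name__ == "__main__":
    print(run_drill())
```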
{"title":"Immersive security personnel training module for active shooter events","authors":"Sharad Sharma, JeeWoong Park, Brendan Tran Morris","doi":"10.2352/ei.2023.35.12.ervr-217","DOIUrl":"https://doi.org/10.2352/ei.2023.35.12.ervr-217","url":null,"abstract":"There is a need to prepare for emergencies such as active shooter events. Emergency response training drills and exercises are necessary to train for such events as we are unable to predict when emergencies do occur. There has been progress in understanding human behavior, unpredictability, human motion synthesis, crowd dynamics, and their relationships with active shooter events, but challenges remain. This paper presents an immersive security personnel training module for active shooter events in an indoor building. We have created an experimental platform for conducting active shooter drills for training that gives a fully immersive feel of the situation and allow one to perform virtual evacuation drills. The security personnel training module also incorporates four sub-modules namely 1) Situational assessment module, 2) Individual officer intervention module, 3) Team Response Module, and 4) Rescue Task Force module. We have developed an immersive virtual reality training module for active shooter events using an Oculus for course of action, visualization, and situational awareness for active shooter events as shown in Fig.1. The immersive security personnel training module aims to get information about the emergency situation inside the building. The dispatched officer will verify the active shooter situation in the building. The security personnel should find a safe zone in the building and secure the people in that area. The security personnel should also find the number and location of persons in possible jeopardy. Upon completion of the initial assessment, the first security personnel shall advise communications and request resources as deemed necessary. This will allow determining whether to take immediate action alone or with another officer or wait until additional resources are available. After successfully gathering the information, the personnel needs to update the info to their officer through a communication device.","PeriodicalId":73514,"journal":{"name":"IS&T International Symposium on Electronic Imaging","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-01-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135694149","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Engineering Reality of Virtual Reality 2023 Conference Overview and Papers Program
Pub Date: 2023-01-16 | DOI: 10.2352/ei.2023.35.12.ervr-a12
Virtual and augmented reality systems are evolving. In addition to research, the trend toward content building continues, and practitioners find that technologies and disciplines must be tailored and integrated for specific visualization and interactive applications. This conference serves as a forum where advances and practical advice toward both creative activity and scientific investigation are presented and discussed. Research results can be presented and applications demonstrated.
{"title":"Engineering Reality of Virtual Reality 2023 Conference Overview and Papers Program","authors":"","doi":"10.2352/ei.2023.35.12.ervr-a12","DOIUrl":"https://doi.org/10.2352/ei.2023.35.12.ervr-a12","url":null,"abstract":"Abstract Virtual and augmented reality systems are evolving. In addition to research, the trend toward content building continues and practitioners find that technologies and disciplines must be tailored and integrated for specific visualization and interactive applications. This conference serves as a forum where advances and practical advice toward both creative activity and scientific investigation are presented and discussed. Research results can be presented and applications can be demonstrated.","PeriodicalId":73514,"journal":{"name":"IS&T International Symposium on Electronic Imaging","volume":"68 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-01-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135695217","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Conditional synthetic food image generation
Pub Date: 2023-01-16 | DOI: 10.2352/ei.2023.35.7.image-268
Wenjin Fu, Yue Han, Jiangpeng He, Sriram Baireddy, Mridul Gupta, Fengqing Zhu
Generative Adversarial Networks (GANs) have been widely investigated for image synthesis owing to their powerful representation-learning ability. In this work, we explore StyleGAN and its application to synthetic food image generation. Despite the impressive performance of GANs for natural image generation, food images suffer from high intra-class diversity and inter-class similarity, resulting in overfitting and visual artifacts in synthetic images. Therefore, we aim to explore the capability and improve the performance of GAN methods for food image generation. Specifically, we first choose StyleGAN3 as the baseline method to generate synthetic food images and analyze its performance. We then identify two issues that can degrade performance on food images during training: (1) inter-class feature entanglement when training on multiple food classes and (2) loss of high-resolution detail during image downsampling. To address both issues, we propose to train on one food category at a time to avoid feature entanglement and to leverage image patches cropped from high-resolution datasets to retain fine details. We evaluate our method on the Food-101 dataset and show improved quality of generated synthetic food images compared with the baseline. Finally, we demonstrate the great potential of improving the performance of downstream tasks, such as food image classification, by including high-quality synthetic training samples in data augmentation.
{"title":"Conditional synthetic food image generation","authors":"Wenjin Fu, Yue Han, Jiangpeng He, Sriram Baireddy, Mridul Gupta, Fengqing Zhu","doi":"10.2352/ei.2023.35.7.image-268","DOIUrl":"https://doi.org/10.2352/ei.2023.35.7.image-268","url":null,"abstract":"Generative Adversarial Networks (GAN) have been widely investigated for image synthesis based on their powerful representation learning ability. In this work, we explore the StyleGAN and its application of synthetic food image generation. Despite the impressive performance of GAN for natural image generation, food images suffer from high intra-class diversity and inter-class similarity, resulting in overfitting and visual artifacts for synthetic images. Therefore, we aim to explore the capability and improve the performance of GAN methods for food image generation. Specifically, we first choose StyleGAN3 as the baseline method to generate synthetic food images and analyze the performance. Then, we identify two issues that can cause performance degradation on food images during the training phase: (1) inter-class feature entanglement during multi-food classes training and (2) loss of high-resolution detail during image downsampling. To address both issues, we propose to train one food category at a time to avoid feature entanglement and leverage image patches cropped from high-resolution datasets to retain fine details. We evaluate our method on the Food-101 dataset and show improved quality of generated synthetic food images compared with the baseline. Finally, we demonstrate the great potential of improving the performance of downstream tasks, such as food image classification by including high-quality synthetic training samples in the data augmentation.","PeriodicalId":73514,"journal":{"name":"IS&T International Symposium on Electronic Imaging","volume":"43 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-01-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135693974","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Importance of OSINT/SOCMINT for modern disaster management evaluation - Australia, Haiti, Japan
Pub Date: 2023-01-16 | DOI: 10.2352/ei.2023.35.3.mobmu-354
Nazneen Mansoor, Klaus Schwarz, Reiner Creutzburg
Open-source intelligence (OSINT) and social media intelligence (SOCMINT) are becoming increasingly popular with investigative and government agencies, intelligence services, media companies, and corporations. OSINT and SOCMINT technologies use sophisticated techniques and special tools to efficiently analyze continually growing sources of information. Worldwide, there is a great need for training and further education in the OSINT field. This report describes the importance of open-source and social media intelligence for evaluating disaster management. It also gives an overview of government work in Australia, Haiti, and Japan on disaster management using various OSINT tools and platforms. This provides decision support for the use of OSINT and SOCMINT tools and allows the training needs of investigators to be better estimated.
{"title":"Importance of OSINT/SOCMINT for modern disaster management evaluation - Australia, Haiti, Japan","authors":"Nazneen Mansoor, Klaus Schwarz, Reiner Creutzburg","doi":"10.2352/ei.2023.35.3.mobmu-354","DOIUrl":"https://doi.org/10.2352/ei.2023.35.3.mobmu-354","url":null,"abstract":"Open-source technologies (OSINT) and Social Media Intelligence (SOCMINT) are becoming increasingly popular with investigative and government agencies, intelligence services, media companies, and corporations. These OSINT and SOCMINT technologies use sophisticated techniques and special tools to efficiently analyze the continually growing sources of information. There is a great need for training and further education in the OSINT field worldwide. This report describes the importance of open source or social media intelligence for evaluating disaster management. It also gives an overview of the government work in Australia, Haiti, and Japan for disaster management using various OSINT tools and platforms. Thus, decision support for using OSINT and SOCMINT tools is given, and the necessary training needs for investigators can be better estimated.","PeriodicalId":73514,"journal":{"name":"IS&T International Symposium on Electronic Imaging","volume":"21 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-01-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135694711","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Am I safe? A preliminary examination of how everyday people interpret covid data visualizations
Pub Date: 2023-01-16 | DOI: 10.2352/ei.2023.35.10.hvei-251
Bernice Rogowitz, Paul Borrel
In recent years, international COVID data have been collected by several reputable organizations and made available to the worldwide community. This has produced a wellspring of different visualizations. Many different measures can be selected (e.g., cases, deaths, hospitalizations), and for each measure, designers and policy makers face a myriad of choices of how to represent the data. Data from individual countries may be presented on linear or log scales; daily, weekly, or cumulative; alone or in the context of other countries; scaled to a common grid or to their own range; raw or per capita; and so on. It is well known that the data representation can influence the interpretation of data. But what visual features in these different representations affect our judgments? To explore this question, we conducted an experiment in which we asked participants to look at time-series data plots and assess how safe they would feel if they were traveling to one of the countries represented, and how confident they were in their judgment. Observers rated 48 visualizations of the same data, rendered differently along 6 controlled dimensions. Our initial results provide insight into how characteristics of the visual representation affect human judgments of time-series data. We also discuss how these results could influence how public policy and news organizations choose to represent data to the public.
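To make the design space concrete, the sketch below (not the study's actual stimuli) renders one fabricated cumulative case-count series under two of the controlled dimensions mentioned above: linear versus log y-axes, and raw counts versus per-capita scaling. The series and population are invented for illustration.

```python
import numpy as np
import matplotlib.pyplot as plt

# Illustrative sketch only (not the study's stimuli): one fabricated
# cumulative case-count series rendered under two of the design choices
# discussed above -- linear vs. log y-axis, and raw counts vs. per capita.

rng = np.random.default_rng(0)
days = np.arange(120)
cases = np.maximum(rng.normal(1000, 200, size=120), 1).cumsum()  # cumulative
population = 10_000_000  # hypothetical country population

fig, axes = plt.subplots(2, 2, figsize=(8, 6), sharex=True)
panels = [
    (axes[0, 0], cases, "linear", "raw, linear scale"),
    (axes[0, 1], cases, "log", "raw, log scale"),
    (axes[1, 0], cases / population * 1e5, "linear", "per 100k, linear scale"),
    (axes[1, 1], cases / population * 1e5, "log", "per 100k, log scale"),
]
for ax, series, scale, title in panels:
    ax.plot(days, series)
    ax.set_yscale(scale)
    ax.set_title(title)
    ax.set_xlabel("day")
fig.tight_layout()
plt.show()
```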
{"title":"Am I safe? A preliminary examination of how everyday people interpret covid data visualizations","authors":"Bernice Rogowitz, Paul Borrel","doi":"10.2352/ei.2023.35.10.hvei-251","DOIUrl":"https://doi.org/10.2352/ei.2023.35.10.hvei-251","url":null,"abstract":"During these past years, international COVID data have been collected by several reputable organizations and made available to the worldwide community. This has resulted in a wellspring of different visualizations. Many different measures can be selected (e.g., cases, deaths, hospitalizations). And for each measure, designers and policy makers can make a myriad of different choices of how to represent the data. Data from individual countries may be presented on linear or log scales, daily, weekly, or cumulative, alone or in the context of other countries, scaled to a common grid, or scaled to their own range, raw or per capita, etc. It is well known that the data representation can influence the interpretation of data. But, what visual features in these different representations affect our judgments? To explore this idea, we conducted an experiment where we asked participants to look at time-series data plots and assess how safe they would feel if they were traveling to one of the countries represented, and how confident they are of their judgment. Observers rated 48 visualizations of the same data, rendered differently along 6 controlled dimensions. Our initial results provide insight into how characteristics of the visual representation affect human judgments of time series data. We also discuss how these results could impact how public policy and news organizations choose to represent data to the public.","PeriodicalId":73514,"journal":{"name":"IS&T International Symposium on Electronic Imaging","volume":"2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-01-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135695027","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Improvement of vehicles accident detection using object tracking with U-Net
Pub Date: 2023-01-16 | DOI: 10.2352/ei.2023.35.3.mobmu-363
Kirsnaragavan Arudpiragasam, Taraka Rama Krishna Kanth Kannuri, Klaus Schwarz, Michael Hartmann, Reiner Creutzburg
Over the past decade, researchers have suggested many methods for detecting anomalies. However, none of these studies has applied frame reconstruction together with object tracking (OT) to detect anomalies. Therefore, this study focuses on road accident detection using a combination of OT and U-Net variants with skip, residual-skip, and attention connections. The U-Net is trained to reconstruct frames from the UCF-Crime dataset. Furthermore, YOLOv4 and DeepSORT are used for object detection and tracking within the frames. Finally, the Mahalanobis distance and the reconstruction error (RCE) are determined using a Kalman filter and the U-Net model.
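As a rough illustration of how these two scores can flag an accident frame, the Python sketch below computes the squared Mahalanobis distance between a Kalman-predicted track state and a new detection (the gating distance popularized by DeepSORT) and a U-Net reconstruction error. The `reconstruct` callable, the 4-dimensional measurement assumption, and both thresholds are our own stand-ins, not the authors' implementation.

```python
import numpy as np

# Rough sketch of the two anomaly scores named above, under our own
# simplifying assumptions; `reconstruct` stands in for the trained U-Net
# and the thresholds are illustrative, not the authors' values.

def squared_mahalanobis(predicted: np.ndarray, cov: np.ndarray,
                        detection: np.ndarray) -> float:
    """Squared Mahalanobis distance between a Kalman-predicted bounding-box
    state and a new detection (the gating distance used by DeepSORT).
    Large values flag abrupt, physically implausible motion."""
    innovation = detection - predicted
    return float(innovation @ np.linalg.inv(cov) @ innovation)

def reconstruction_error(frame: np.ndarray, reconstruct) -> float:
    """Mean squared error between a frame and its U-Net reconstruction;
    frames unlike the normal-traffic training data reconstruct poorly."""
    return float(np.mean((frame - reconstruct(frame)) ** 2))

def is_accident(d2: float, rce: float,
                tau_m: float = 9.4877, tau_r: float = 0.01) -> bool:
    # tau_m: chi-square 0.95 quantile for a 4-dim measurement (as in
    # DeepSORT gating); tau_r is data-dependent and set by validation.
    return d2 > tau_m or rce > tau_r
```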
{"title":"Improvement of vehicles accident detection using object tracking with U-Net","authors":"Kirsnaragavan Arudpiragasam, Taraka Rama Krishna Kanth Kannuri, Klaus Schwarz, Michael Hartmann, Reiner Creutzburg","doi":"10.2352/ei.2023.35.3.mobmu-363","DOIUrl":"https://doi.org/10.2352/ei.2023.35.3.mobmu-363","url":null,"abstract":"Over the past decade, researchers have suggested many methods to find anomalies. However, none of the studies has applied frame reconstruction with Object Tracking (OT) to detect anomalies. Therefore, this study focuses on road accident detection using a combination of OT and U-Net associated with variants such as skip, skip residual and attention connections. The U-Net algorithm is developed for reconstructing the images using the UFC-Crime dataset. Furthermore, YOLOV4 and DeepSort are used for object detection and tracking within the frames. Finally, the Mahalanobis distance and the reconstruction error (RCE) are determined using a Kalman filter and the U-Net model.","PeriodicalId":73514,"journal":{"name":"IS&T International Symposium on Electronic Imaging","volume":"26 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-01-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135695029","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Human Vision and Electronic Imaging 2023 Conference Overview and Papers Program
Pub Date: 2023-01-16 | DOI: 10.2352/ei.2023.35.10.hvei-a10
The conference on Human Vision and Electronic Imaging explores the role of human perception and cognition in the design, analysis, and use of electronic media systems. Over the years, it has brought together researchers, technologists, and artists from all over the world for a rich and lively exchange of ideas. We believe that understanding the human observer is fundamental to the advancement of electronic media systems, and that advances in these systems and applications drive new research into the perception and cognition of the human observer. Every year, we introduce new topics through our Special Sessions, centered on areas driving innovation at the intersection of perception and emerging media technologies.
{"title":"Human Vision and Electronic Imaging 2023 Conference Overview and Papers Program","authors":"","doi":"10.2352/ei.2023.35.10.hvei-a10","DOIUrl":"https://doi.org/10.2352/ei.2023.35.10.hvei-a10","url":null,"abstract":"Abstract The conference on Human Vision and Electronic Imaging explores the role of human perception and cognition in the design, analysis, and use of electronic media systems. Over the years, it has brought together researchers, technologists, and artists, from all over the world, for a rich and lively exchange of ideas. We believe that understanding the human observer is fundamental to the advancement of electronic media systems, and that advances in these systems and applications drive new research into the perception and cognition of the human observer. Every year, we introduce new topics through our Special Sessions, centered on areas driving innovation at the intersection of perception and emerging media technologies.","PeriodicalId":73514,"journal":{"name":"IS&T International Symposium on Electronic Imaging","volume":"13 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-01-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135695218","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Comparison of AR and VR memory palace quality in second-language vocabulary acquisition (Invited)
Pub Date: 2023-01-16 | DOI: 10.2352/ei.2023.35.10.hvei-220
Nicko R. Caluya, Xiaoyang Tian, Damon M. Chandler
The method of loci (the memory palace technique) is a learning strategy that uses visualizations of spatial environments to enhance memory. One particularly popular application is language learning: the method can aid long-term retention of vocabulary by allowing users to associate location and other spatial information with particular words or concepts, thus recruiting spatial memory to assist the memory typically associated with language. Augmented reality (AR) and virtual reality (VR) have been shown to potentially provide even greater memory enhancement due to their superior visualization abilities. However, a direct comparison of the two technologies in terms of language-learning enhancement has not yet been conducted. In this presentation, we report the results of a study designed to compare AR and VR when using the method of loci to learn vocabulary in a second language.
{"title":"Comparison of AR and VR memory palace quality in second-language vocabulary acquisition (Invited)","authors":"Nicko R. Caluya, Xiaoyang Tian, Damon M. Chandler","doi":"10.2352/ei.2023.35.10.hvei-220","DOIUrl":"https://doi.org/10.2352/ei.2023.35.10.hvei-220","url":null,"abstract":"The method of loci (memory palace technique) is a learning strategy that uses visualizations of spatial environments to enhance memory. One particularly popular use of the method of loci is for language learning, in which the method can help long-term memory of vocabulary by allowing users to associate location and other spatial information with particular words/concepts, thus making use of spatial memory to assist memory typically associated with language. Augmented reality (AR) and virtual reality (VR) have been shown to potentially provide even better memory enhancement due to their superior visualization abilities. However, a direct comparison of the two techniques in terms of language-learning enhancement has not yet been investigated. In this presentation, we present the results of a study designed to compare AR and VR when using the method of loci for learning vocabulary from a second language.","PeriodicalId":73514,"journal":{"name":"IS&T International Symposium on Electronic Imaging","volume":"110 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-01-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135695028","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}