Aha! I know where I am: the contribution of visuospatial cues to reorientation in urban environments
Pub Date: 2021-01-03 | DOI: 10.1080/13875868.2020.1865359 | Pages: 197-234
E. Charalambous, S. Hanna, A. Penn
ABSTRACT Reorientation depends greatly on perceived geometric information, which constantly changes during navigation in urban environments. Environmental novelty, as a driver of exploratory behavior, is likely to engender this spatial Aha! moment of reorientation. The paper investigates the contribution of two qualitatively different types of novelty, corresponding to distinct visuospatial cues: (a) situations that cause surprise, e.g., a sudden change in spaciousness; versus (b) situations that engender mystery, e.g., a change in the complexity of visuospatial information and the promise of gaining new information. Visibility graph analysis is used to quantify and examine these hypotheses in relation to participants' exploratory behavior and brain dynamics (EEG) during virtual navigation. The findings suggest that reorientation is a spatial boundary effect, associated primarily with a change in visuospatial complexity.
{"title":"Aha! I know where I am: the contribution of visuospatial cues to reorientation in urban environments","authors":"E. Charalambous, S. Hanna, A. Penn","doi":"10.1080/13875868.2020.1865359","DOIUrl":"https://doi.org/10.1080/13875868.2020.1865359","url":null,"abstract":"ABSTRACT Reorientation depends greatly on the perceived geometric information, which constantly changes during navigation in urban environments. Environmental novelty, as a driver of exploratory behavior, is likely to engender this spatial Aha! moment. The paper investigates the contribution of two qualitatively different types of novelty, corresponding to distinct visuospatial cues: (a) situations that cause surprise, e.g., a sudden change in spaciousness; versus (b) situations that engender mystery, e.g., a change in the complexity of visuospatial information and the promise of gaining new information. Visibility graph analysis is used to quantify and examine these hypotheses in relation to participants’ exploratory behavior and brain dynamics (EEG) during virtual navigation. The findings suggest that reorientation is a spatial boundary effect, associated primarily with a change in visuospatial complexity.","PeriodicalId":46199,"journal":{"name":"Spatial Cognition and Computation","volume":"15 1","pages":"197 - 234"},"PeriodicalIF":1.9,"publicationDate":"2021-01-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"89328095","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Unraveling the contribution of left-right language on spatial perspective taking.
Pub Date: 2021-01-01 (Epub: 2020-10-05) | DOI: 10.1080/13875868.2020.1825442 | Pages: 1-38
Open access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7985953/pdf/nihms-1632477.pdf
Linda Abarbanell, Peggy Li
We examine whether acquiring left/right language affects children's ability to take a non-egocentric left-right perspective. In Experiment 1, we tested 10- to 13-year-old Tseltal (Mayan) and Spanish-speaking children from the same community on a task that required them to retrieve a coin they had previously seen hidden in one of four boxes to the left/right/front/back of a toy sheep, after the entire array was rotated out of view. Their performance on the left/right boxes correlated positively with their comprehension and use of left-right language. In Experiment 2, we found that training Tseltal-speaking children to apply left-right lexical labels to represent the location of the coin improved performance, but improvement was more robust among a second group of children trained to use gestures instead.
{"title":"Unraveling the contribution of left-right language on spatial perspective taking.","authors":"Linda Abarbanell, Peggy Li","doi":"10.1080/13875868.2020.1825442","DOIUrl":"10.1080/13875868.2020.1825442","url":null,"abstract":"<p><p>We examine whether acquiring left/right language affects children's ability to take a non-egocentric left-right perspective. In Experiment 1, we tested 10-13 year-old Tseltal (Mayan) and Spanish-speaking children from the same community on a task that required they retrieve a coin they previously seen hidden in one of four boxes to the left/right/front/back of a toy sheep after the entire array was rotated out of view. Their performance on the left/right boxes correlated positively with their comprehension and use of left-right language. In Experiment 2, we found that training Tseltal-speaking children to apply left-right lexical labels to represent the location of the coin improved performance, but improvement was more robust among a second group of children trained to use gestures instead.</p>","PeriodicalId":46199,"journal":{"name":"Spatial Cognition and Computation","volume":"21 1","pages":"1-38"},"PeriodicalIF":1.9,"publicationDate":"2021-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7985953/pdf/nihms-1632477.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"25517568","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Integration of sketch maps in community mapping activities
Pub Date: 2020-11-08 | DOI: 10.1080/13875868.2020.1841202 | Pages: 114-142
A. Z. Zardiny, F. Hakimpour
ABSTRACT Drawing sketch maps is one of the most widely used tools for recording observations in community mapping. However, because sketches are not to scale and features are not precisely located, they are not spatially accurate. This raises an important question: can the use of sketch maps in community mapping lead to an acceptable result? This article addresses the question by investigating sketch maps drawn by children in a simulated community mapping activity. To make the sketches useful, they must be matched and integrated. Although much research has been conducted on data matching in sketch maps, the integration of data extracted from sketch maps has received less attention. This article therefore focuses on the integration of sketch maps and proposes a solution for examining the maps more accurately, while revising and customizing existing matching solutions. The output of the data analysis is an integrated sketch map. The matching accuracy between the integrated sketch map and data extracted from OpenStreetMap (OSM) is about 94.8%. In addition, the output contains features that are not present in the OSM data, which means it can be used for descriptive and geometric enrichment of metric maps. These results were obtained from a simulated community mapping under strict conditions; in a real community mapping, one can therefore expect even higher accuracy from the proposed matching and integration algorithm.
{"title":"Integration of sketch maps in community mapping activities","authors":"A. Z. Zardiny, F. Hakimpour","doi":"10.1080/13875868.2020.1841202","DOIUrl":"https://doi.org/10.1080/13875868.2020.1841202","url":null,"abstract":"ABSTRACT Drawing sketch maps is one of the most widely used tools for observation recording in community mapping. However, because sketches are not to scale and features are not precisely located, they are not spatially accurate. With this in mind, consider an important question. Can the use of sketch maps in a community mapping lead to an acceptable result? This article addresses this question by investigating the sketch maps drawn by children in a simulated community mapping. To make the sketches useful, they must be matched and integrated together. Although much research has been conducted about data matching in sketch maps, the integration of data extracted from sketch maps has been less considered. Therefore, this article focuses on the integration of sketch maps and proposes a solution in order to examine the maps more accurately while revising and customizing the existing matching solutions. The output of the data analysis is an integrated sketch map. The accuracy of the matching between the integrated sketch map and the data extracted from OpenStreetMap (OSM) is about 94.8%. In addition, the output contains features that are not present in the OSM data, which means that this output can be used for descriptive and geometric enrichment of metric maps. These results are the output of a simulated community mapping under some strict conditions. Therefore in a real community mapping, one can expect higher accuracy in using the proposed algorithm for matching and integration of the data in sketch maps.","PeriodicalId":46199,"journal":{"name":"Spatial Cognition and Computation","volume":"42 1","pages":"114 - 142"},"PeriodicalIF":1.9,"publicationDate":"2020-11-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"84729973","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Guided navigation from multiple viewpoints using qualitative spatial reasoning
Pub Date: 2020-11-03 | DOI: 10.1080/13875868.2020.1857386 | Pages: 143-172
D. Perico, P. Santos, Reinaldo A. C. Bianchi
ABSTRACT Navigation is an essential ability for mobile agents to be completely autonomous and able to perform complex actions. However, the problem of navigation for agents with limited (or no) perception of the world, or devoid of a fully defined motion model, has received little attention in AI and Robotics research. One way to tackle this problem is guided navigation, in which other autonomous agents, endowed with perception, combine their distinct viewpoints to infer the localization of a sensory-deprived agent and the appropriate commands to guide it through a particular path. Given the limited knowledge about the physical and perceptual characteristics of the guided agent, this task should be conducted at a level of abstraction that allows a generic motion model and high-level commands applicable by any type of autonomous agent, including humans. The main task considered in this work is the following: given a group of autonomous agents perceiving their common environment with independent, egocentric and local vision sensors, develop and evaluate algorithms capable of producing high-level commands (involving qualitative directions, e.g., move left, go straight ahead) that guide a sensory-deprived robot to a goal location. To accomplish this, the present paper assumes relations from the qualitative spatial reasoning formalism StarVars, whose inference method is also used to build a model of the domain. The paper presents two qualitative-probabilistic algorithms for guided navigation using a particle filter and qualitative spatial relations. In the first algorithm, the particle filter runs on a qualitative representation of the domain, whereas the second algorithm transforms the numerical output of a standard particle filter into qualitative relations to guide the sensory-deprived robot. The proposed methods were evaluated in experiments on a 2D humanoid robot simulator, and a proof of concept executing the algorithms on a group of real humanoid robots is also presented. The results demonstrate the success of the guided navigation models proposed in this work.
{"title":"Guided navigation from multiple viewpoints using qualitative spatial reasoning","authors":"D. Perico, P. Santos, Reinaldo A. C. Bianchi","doi":"10.1080/13875868.2020.1857386","DOIUrl":"https://doi.org/10.1080/13875868.2020.1857386","url":null,"abstract":"ABSTRACT Navigation is an essential ability for mobile agents to be completely autonomous and able to perform complex actions. However, the problem of navigation for agents with limited (or no) perception of the world, or devoid of a fully defined motion model, has received little attention from research in AI and Robotics. One way to tackle this problem is to use guided navigation, in which other autonomous agents, endowed with perception, can combine their distinct viewpoints to infer the localization and the appropriate commands to guide a sensory deprived agent through a particular path. Due to the limited knowledge about the physical and perceptual characteristics of the guided agent, this task should be conducted on a level of abstraction allowing the use of a generic motion model, and high-level commands, that can be applied by any type of autonomous agents, including humans. The main task considered in this work is, given a group of autonomous agents perceiving their common environment with their independent, egocentric and local vision sensors, the development and evaluation of algorithms capable of producing a set of high-level commands (involving qualitative directions: e.g. move left, go straight ahead) capable of guiding a sensory deprived robot to a goal location. In order to accomplish this, the present paper assumes relations from the qualitative spatial reasoning formalism called StarVars, whose inference method is also used to build a model of the domain. This paper presents two qualitative-probabilistic algorithms for guided navigation using a particle filter and qualitative spatial relations. In the first algorithm, the particle filter is run upon a qualitative representation of the domain, whereas the second algorithm transforms the numerical output of a standard particle filter into qualitative relations to guide a sensory deprived robot. The proposed methods were evaluated with experiments carried out on a 2D humanoid robot simulator. A proof of concept executing the algorithms on a group of real humanoid robots is also presented. The results obtained demonstrate the success of the guided navigation models proposed in this work.","PeriodicalId":46199,"journal":{"name":"Spatial Cognition and Computation","volume":"1 1","pages":"143 - 172"},"PeriodicalIF":1.9,"publicationDate":"2020-11-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"75805274","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Computer models of saliency alone fail to predict subjective visual attention to landmarks during observed navigation
Pub Date: 2020-10-12 | DOI: 10.1080/13875868.2020.1830993 | Pages: 39-66
Demet Yesiltepe, A. O. Torun, A. Coutrot, M. Hornberger, H. Spiers, R. Dalton
ABSTRACT This study aimed to understand whether computer models of saliency can explain landmark saliency. In an online survey, participants watched videos from a spatial navigation video game (Sea Hero Quest), were asked to pay attention to the environments through which the boat was moving, and rated the perceived saliency of each landmark. In addition, state-of-the-art computational saliency models were used to quantify landmark saliency objectively. No significant relationship was found between the objective and subjective saliency measures. This indicates that, during passive observation of a navigated environment, current automated models of saliency fail to predict subjective reports of visual attention to landmarks.
{"title":"Computer models of saliency alone fail to predict subjective visual attention to landmarks during observed navigation","authors":"Demet Yesiltepe, A. O. Torun, A. Coutrot, M. Hornberger, H. Spiers, R. Dalton","doi":"10.1080/13875868.2020.1830993","DOIUrl":"https://doi.org/10.1080/13875868.2020.1830993","url":null,"abstract":"ABSTRACT This study aimed to understand whether or not computer models of saliency could explain landmark saliency. An online survey was conducted and participants were asked to watch videos from a spatial navigation video game (Sea Hero Quest). Participants were asked to pay attention to the environments within which the boat was moving and to rate the perceived saliency of each landmark. In addition, state-of-the-art computer saliency models were used to objectively quantify landmark saliency. No significant relationship was found between objective and subjective saliency measures. This indicates that during passive observation of an environment while being navigated, current automated models of saliency fail to predict subjective reports of visual attention to landmarks.","PeriodicalId":46199,"journal":{"name":"Spatial Cognition and Computation","volume":"1 1","pages":"39 - 66"},"PeriodicalIF":1.9,"publicationDate":"2020-10-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"78376324","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Spatial adaptation: modeling a key spatial ability
Pub Date: 2020-10-12 | DOI: 10.1080/13875868.2020.1830994 | Pages: 89-113
A. Lovett, Holger Schultheis
ABSTRACT Spatial adaptation is the process of adjusting one’s mental representations for a task, so that spatial details necessary for performing the task are captured in the representations, whereas irrelevant details are ignored. We believe this process plays a critical role both in spatial ability tests and in STEM domains because it produces problem-tailored representations that can facilitate mental manipulation by representing only task-relevant details. Here, we present a computational model that illustrates the importance of spatial adaptation in a mental rotation task. The model automatically generates shape representations by segmenting objects into parts at concavities. It adjusts its representations in two ways: by varying the number of parts used to represent a shape, and by varying the types of information encoded for each part. Critically, the model can adapt to a mental rotation task by adjusting the degree of detail in its shape representations automatically, based on how much detail is needed to distinguish the shapes from distractors.
{"title":"Spatial adaptation: modeling a key spatial ability","authors":"A. Lovett, Holger Schultheis","doi":"10.1080/13875868.2020.1830994","DOIUrl":"https://doi.org/10.1080/13875868.2020.1830994","url":null,"abstract":"ABSTRACT Spatial adaptation is the process of adjusting one’s mental representations for a task, so that spatial details necessary for performing the task are captured in the representations, whereas irrelevant details are ignored. We believe this process plays a critical role both in spatial ability tests and in STEM domains because it produces problem-tailored representations that can facilitate mental manipulation by representing only task-relevant details. Here, we present a computational model that illustrates the importance of spatial adaptation in a mental rotation task. The model automatically generates shape representations by segmenting objects into parts at concavities. It adjusts its representations in two ways: by varying the number of parts used to represent a shape, and by varying the types of information encoded for each part. Critically, the model can adapt to a mental rotation task by adjusting the degree of detail in its shape representations automatically, based on how much detail is needed to distinguish the shapes from distractors.","PeriodicalId":46199,"journal":{"name":"Spatial Cognition and Computation","volume":"110 1","pages":"89 - 113"},"PeriodicalIF":1.9,"publicationDate":"2020-10-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"80532437","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The practice of judo: how does it relate to different spatial abilities?
Pub Date: 2020-10-12 | DOI: 10.1080/13875868.2020.1830995 | Pages: 67-88
C. Meneghetti, Tommaso Feraco, Paola Ispiro, Stefanie Pietsch, P. Jansen
ABSTRACT The study aimed to examine the relationship between the practice of judo and different spatial abilities. Several individual measures, including spatial tasks and questionnaires on wayfinding inclinations, were administered to 52 judo experts and 45 non-athlete controls. After learning a virtual environment by navigating it, participants were assessed on route-retracing and shortcut-finding performance. The results showed that judo practitioners had greater spatial abilities (especially in mental rotation) and a stronger sense of direction than controls, but were no better in wayfinding performance. A structural equation model showed that the practice of judo had an indirect effect on wayfinding (inclinations and performance), mediated by spatial abilities. These results are discussed in the theoretical frame of spatial cognition and sport.
{"title":"The practice of judo: how does it relate to different spatial abilities?","authors":"C. Meneghetti, Tommaso Feraco, Paola Ispiro, Stefanie Pietsch, P. Jansen","doi":"10.1080/13875868.2020.1830995","DOIUrl":"https://doi.org/10.1080/13875868.2020.1830995","url":null,"abstract":"ABSTRACT The study aimed to examine the relationship between the practice of judo and different spatial abilities. Several individual measures, including spatial tasks and questionnaires on wayfinding inclinations, were administered to 52 judo experts and 45 non-athlete controls. After learning by navigating in a virtual environment, participants were measured on their route retracing and shortcut finding performance. The results showed that judo practitioners had greater spatial abilities (especially in mental rotation) and a stronger sense of direction than controls, but were no better in wayfinding performance. A structural equation model showed that the practice of judo had an indirect effect on wayfinding (inclinations and performance), mediated by spatial abilities. These results are discussed in the theoretical frame of spatial cognition and sport.","PeriodicalId":46199,"journal":{"name":"Spatial Cognition and Computation","volume":"1 1","pages":"67 - 88"},"PeriodicalIF":1.9,"publicationDate":"2020-10-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"88326192","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Desktop versus immersive virtual environments: effects on spatial learning
Pub Date: 2020-09-13 | DOI: 10.1080/13875868.2020.1817925 | Pages: 328-363
Jiayan Zhao, Tesalee K. Sensibaugh, Bobby Bodenheimer, T. McNamara, Alina Nazareth, N. Newcombe, Meredith Minear, A. Klippel
ABSTRACT Although immersive virtual reality is attractive to users, we know relatively little about whether higher immersion levels increase or decrease spatial learning outcomes. In addition, questions remain about how different approaches to travel within a virtual environment affect spatial learning. In this paper, we investigated the role of immersion (desktop computer versus HTC Vive) and teleportation in spatial learning. Results showed few differences between conditions, favoring, if anything, the desktop environment. There seems to be no advantage of using continuous travel over teleportation, or using the Vive with teleportation compared to a desktop computer. Discussing the results, we look critically at the experimental design, identify potentially confounding variables, and suggest avenues for future research.
{"title":"Desktop versus immersive virtual environments: effects on spatial learning","authors":"Jiayan Zhao, Tesalee K. Sensibaugh, Bobby Bodenheimer, T. McNamara, Alina Nazareth, N. Newcombe, Meredith Minear, A. Klippel","doi":"10.1080/13875868.2020.1817925","DOIUrl":"https://doi.org/10.1080/13875868.2020.1817925","url":null,"abstract":"ABSTRACT Although immersive virtual reality is attractive to users, we know relatively little about whether higher immersion levels increase or decrease spatial learning outcomes. In addition, questions remain about how different approaches to travel within a virtual environment affect spatial learning. In this paper, we investigated the role of immersion (desktop computer versus HTC Vive) and teleportation in spatial learning. Results showed few differences between conditions, favoring, if anything, the desktop environment. There seems to be no advantage of using continuous travel over teleportation, or using the Vive with teleportation compared to a desktop computer. Discussing the results, we look critically at the experimental design, identify potentially confounding variables, and suggest avenues for future research.","PeriodicalId":46199,"journal":{"name":"Spatial Cognition and Computation","volume":"147 1","pages":"328 - 363"},"PeriodicalIF":1.9,"publicationDate":"2020-09-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"86103454","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The Influence of Position on Spatial Representation in Working Memory
Pub Date: 2020-08-26 | DOI: 10.1007/978-3-030-57983-8_4 | Pages: 50-58
Lilian Le Vinh, Annika Meert, H. Mallot
{"title":"The Influence of Position on Spatial Representation in Working Memory","authors":"Lilian Le Vinh, Annika Meert, H. Mallot","doi":"10.1007/978-3-030-57983-8_4","DOIUrl":"https://doi.org/10.1007/978-3-030-57983-8_4","url":null,"abstract":"","PeriodicalId":46199,"journal":{"name":"Spatial Cognition and Computation","volume":"62 1","pages":"50-58"},"PeriodicalIF":1.9,"publicationDate":"2020-08-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"83968019","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}