Situation-based ontologies for a computational framework for identity focusing on crime scenes
Marguerite McDaniel, Emma Sloan, Siobahn C. Day, James Mayes, A. Esterline, K. Roy, William Nick
2017 IEEE Conference on Cognitive and Computational Aspects of Situation Management (CogSIMA)
Pub Date: 2017-03-27 | DOI: 10.1109/COGSIMA.2017.7929579
We are interested in how the evidence in a case fits together to support a judgment about the identity of an agent. We present a computational framework that extends to the cyber world, although our current work focuses on physical evidence from a crime scene. We take Barwise's situation theory as a foundation: situations support items of information and, by virtue of constraints, some carry information about other situations. In particular, an utterance situation carries information about a described situation. We provide an account of the support for an identity judgment (made in an utterance situation called an id-situation) that builds a case (called an id-case), much like a legal case, since identity cases can involve multiple situations that affect the value of the evidence. We have developed a novel situation ontology, on which we built an id-situation ontology. To capture our current focus, we also developed a physical biometrics ontology, a law enforcement ontology, and several supporting stubs. We show how a case can be encoded in RDF in conformance with our ontologies. We complement the id-situation ontology with SWRL rules that infer the agent at a crime scene and classify situations and id-cases. Combining possibly conflicting evidence is handled with Dempster-Shafer theory, as reported elsewhere.
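The Dempster-Shafer combination the abstract refers to (reported elsewhere by the authors) can be sketched with Dempster's rule over mass functions. The suspects and mass values below are invented for illustration only:

```python
from itertools import product

def combine(m1, m2):
    """Dempster's rule of combination for two mass functions.
    Each mass function maps focal elements (frozensets of hypotheses)
    to masses that sum to 1."""
    combined = {}
    conflict = 0.0  # K: total mass assigned to contradictory pairs
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb
    if conflict >= 1.0:
        raise ValueError("total conflict: evidence cannot be combined")
    # Normalize by the non-conflicting mass (1 - K)
    return {s: w / (1.0 - conflict) for s, w in combined.items()}

# Two hypothetical items of crime-scene evidence about a suspect's identity
m_dna = {frozenset({"alice"}): 0.7, frozenset({"alice", "bob"}): 0.3}
m_print = {frozenset({"bob"}): 0.4, frozenset({"alice", "bob"}): 0.6}
m = combine(m_dna, m_print)
```

The normalization step is what lets partially conflicting evidence still yield a coherent belief assignment.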
A model based on rough sets for situation comprehension and projection
Giuseppe D'aniello, Angelo Gaeta, V. Loia, F. Orciuoli
Pub Date: 2017-03-27 | DOI: 10.1109/COGSIMA.2017.7929578
We present our results on the definition of a formal, interactive situation model that improves the comprehension of situations and supports reasoning about their projections. The model is based on rough set theory and allows the creation of lattices that fuse the elements of an environment according to the different perspectives and requirements of interest to a human operator. To support rapid decision making about dissimilarities between recognized and projected situations, we adopt measures defined on these lattices. In many scenarios, such as emergency response, this can support the generation of early warnings that help human operators identify future dangerous events. An early evaluation was conducted on an illustrative case study based on real vessel-traffic-management scenarios.
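The core rough-set operation underlying such a model is the lower/upper approximation of a target set with respect to an indiscernibility partition. A minimal sketch (the vessel scenario and all numbers are invented for illustration; the paper's lattice construction is not reproduced here):

```python
def approximations(partition, target):
    """Lower and upper approximation of a target set with respect to an
    indiscernibility partition of the universe."""
    lower, upper = set(), set()
    for block in partition:
        if block <= target:
            lower |= block  # block wholly inside: certainly in the target
        if block & target:
            upper |= block  # block overlaps: possibly in the target
    return lower, upper

# Vessels grouped into blocks that the sensors cannot tell apart;
# the target is the set of vessels actually in a dangerous state.
partition = [{1, 2}, {3, 4}, {5, 6}]
danger = {1, 2, 3}
lower, upper = approximations(partition, danger)
# lower: vessels certainly dangerous; upper: vessels possibly dangerous
```

The gap between the two approximations (the boundary region) is what an early-warning measure on the lattice would quantify.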
Electroencephalography (EEG) classification of cognitive tasks based on task engagement index
J. Nuamah, Younho Seong, Sun Yi
Pub Date: 2017-03-27 | DOI: 10.1109/COGSIMA.2017.7929581
The application of autonomous systems is increasing, and there is a need to optimize the fit between humans and these systems. While operators must be aware of an autonomous system's dynamic behaviors, the system must in turn base its operations, among other things, on an ongoing knowledge of the operator's cognitive state and the application domain. Psychophysiology allows physiological measurements to be used to understand an operator's behavior by noninvasively recording peripheral and central physiological changes while the operator behaves under controlled conditions. Electroencephalography (EEG) is a psychophysiological technique for studying brain activation. In the present study, the EEG task engagement index, defined as the ratio of beta power to (alpha + theta) power, is used as input to an artificial neural network (ANN) to identify and classify mental engagement. Six separate feedforward ANNs with a single hidden layer, trained by backpropagation, were designed to classify five mental tasks for each of six participants. The average classification accuracy across the six participants was 88.67%. The results show that differences in cognitive task demand do elicit different degrees of mental engagement, which can be measured with the task engagement index.
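The engagement index itself is a one-line computation on band powers. The band-power values below are invented for illustration (the paper's preprocessing and ANN are not reproduced):

```python
def engagement_index(beta, alpha, theta):
    """EEG task engagement index: ratio of beta band power to the sum of
    alpha and theta band power, as defined in the abstract above."""
    return beta / (alpha + theta)

# Hypothetical band powers for a low- and a high-demand task
low = engagement_index(beta=4.0, alpha=10.0, theta=6.0)   # 0.25
high = engagement_index(beta=9.0, alpha=5.0, theta=4.0)   # 1.0
```

Higher beta relative to alpha and theta yields a larger index, which is the feature the ANN classifier consumes.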
Using context and robot-human communication to resolve unexpected situational conflicts
Taylor J. Carpenter, W. Zachary
Pub Date: 2017-03-27 | DOI: 10.1109/COGSIMA.2017.7929596
While efforts to develop cognitive abilities for robots have made progress on goal-directed task performance, research has shown that additional cognitive capabilities are needed for robots to interact, cooperate, and act as teammates with humans. In particular, robots need additional teamwork and coordination knowledge, and the ability to apply this knowledge to a model of context that is at least homologous to the context models people use when reasoning about environmental interactions. The Context-Augmented Robotic Interface Layer (CARIL) provides a robot with a cognitively motivated computational capability for situation assessment and situational adaptation. CARIL is used to analyze and develop context-based reasoning strategies that allow a robot to coordinate its behavior and spatial movements with humans when working on shared tasks and/or in shared space. Both communication-free and communication-based approaches are addressed and tested in a simulated environment.
Evaluating path planning in human-robot teams: Quantifying path agreement and mental model congruency
B. Perelman, Shane T. Mueller, Kristin E. Schaefer
Pub Date: 2017-03-27 | DOI: 10.1109/COGSIMA.2017.7929595
The integration of robotic systems into daily life is increasing as technological advances facilitate independent and interdependent decision making by autonomous agents. Highly collaborative human-robot teams promise to maximize the capabilities of humans and machines. While a great deal of progress has been made toward efficient spatial path planning algorithms for robots, comparatively little attention has been paid to reliable means of assessing the similarities and differences between the path planning decisions, and associated behaviors, of the humans and robots in these teams. This paper discusses a tool, the Algorithm for finding the Least Cost Areal Mapping between Paths (ALCAMP), which can be used to compare paths planned by humans and algorithms, quantify the differences between them, and understand the mental models underlying those decisions. The paper also discusses prior and proposed research on human-robot collaborative teams. Prior studies using ALCAMP have measured path divergence to quantify error, infer decision-making processes, assess path memory, and assess team communication performance. Future research on human-robot teaming includes measuring formation and path adherence, testing the repeatability of navigation algorithms and the clarity of communicated navigation instructions, inferring shared mental models for navigation among members of a group, and detecting anomalous movement.
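ALCAMP itself is not specified in this abstract; as a crude stand-in, path divergence between two equally sampled paths can be sketched as a mean point-to-point distance. The waypoints below are invented for illustration and this is not the authors' areal-mapping algorithm:

```python
import math

def path_divergence(path_a, path_b):
    """Mean Euclidean distance between corresponding waypoints of two paths
    sampled at the same number of points -- a simple divergence measure,
    not ALCAMP's least-cost areal mapping."""
    assert len(path_a) == len(path_b), "resample paths to equal length first"
    dists = [math.dist(p, q) for p, q in zip(path_a, path_b)]
    return sum(dists) / len(dists)

# Hypothetical human-planned vs. robot-planned routes over the same start/goal
human = [(0, 0), (1, 0), (2, 1), (3, 1)]
robot = [(0, 0), (1, 1), (2, 1), (3, 1)]
d = path_divergence(human, robot)  # paths differ only at the second waypoint
```

A divergence near zero would suggest congruent route choices; large values flag disagreement worth inspecting for mental-model mismatch.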
Expert-based probabilistic modeling of workflows in context of surgical interventions
Patrick Philipp, J. Beyerer, Y. Fischer
Pub Date: 2017-03-27 | DOI: 10.1109/COGSIMA.2017.7929589
Medical workflows play an important role in providing assistance functions in the context of surgical interventions. Workflow models can be used to assess the progress of an ongoing surgery, enabling tailored (i.e., context-sensitive) support for the medical practitioner. This provides opportunities to prevent malpractice, improve patient outcomes, and preserve a high level of satisfaction. In this work, we propose a framework for formalizing medical workflows. It is driven by a dialog between medical and technical experts and is based on the Unified Modeling Language (UML). An easily comprehensible UML activity serves as the starting point for the automatic generation of more complex models that can be used for the actual estimation of the progress of a surgical intervention. We present translation rules that transfer a given UML activity into a Dynamic Bayesian Network (DBN). The methods are presented for the application example of a cholecystectomy (surgical removal of the gallbladder).
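Once a workflow has been translated into a DBN, estimating surgical progress amounts to filtering: predict the phase with the transition model, then reweight by the observation likelihood. The phases, observations, and probabilities below are invented for illustration and are not the paper's translation rules:

```python
def forward_step(belief, transition, emission, observation):
    """One DBN filtering step over workflow phases: predict with the
    transition model, then weight by the observation likelihood and renormalize."""
    n = len(belief)
    predicted = [sum(belief[i] * transition[i][j] for i in range(n)) for j in range(n)]
    weighted = [predicted[j] * emission[j][observation] for j in range(n)]
    z = sum(weighted)
    return [w / z for w in weighted]

# Three hypothetical coarse phases of a cholecystectomy:
# 0 = access, 1 = dissection, 2 = closure
T = [[0.8, 0.2, 0.0],   # phases only advance forward
     [0.0, 0.8, 0.2],
     [0.0, 0.0, 1.0]]
# Hypothetical instrument observations: 0 = trocar, 1 = clip applier, 2 = suture
E = [[0.7, 0.2, 0.1],
     [0.1, 0.8, 0.1],
     [0.1, 0.2, 0.7]]
belief = [1.0, 0.0, 0.0]  # surgery starts in the access phase
for obs in [0, 1, 1]:
    belief = forward_step(belief, T, E, obs)
# repeated clip-applier sightings shift belief toward the dissection phase
```

This is the estimation loop that context-sensitive assistance would run as new observations arrive from the operating room.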
Head gesture recognition via dynamic time warping and threshold optimization
Ubeyde Mavuş, Volkan Sezer
Pub Date: 2017-03-27 | DOI: 10.1109/COGSIMA.2017.7929592
Gesture recognition is an emerging field in industry and a hot research topic in academia. It is commonly used in smart devices to assist their owners in day-to-day life, but it is also important for facilitating processes of any kind that involve people. In our effort to improve quality of life for people with disabilities below the neck, an assistive autonomous powerchair has been developed. To ease interaction with the chair, we propose embedding a head gesture recognition system based on an IMU (Inertial Measurement Unit) sensor; this study explores the possibilities of such an implementation. Several approaches have been developed for gesture recognition, with accuracy, sensitivity, and rapid computation among the critical criteria they must balance. In this study, we use the Dynamic Time Warping (DTW) algorithm to calculate the similarity between two time sequences. After the DTW calculation, we propose a new approach that optimizes the decision-making problem and calculates the optimum threshold values. We propose and compare two simple geometrical shapes for threshold optimization. Even with these simple 3D objects, an 85.68% success rate is achieved, meaning that more than 8 out of 10 repetitions of a gesture are recognized successfully. The results are promising for future studies.
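The DTW similarity at the heart of this recognizer is the standard dynamic-programming recurrence. The gesture sequences below are invented for illustration (real inputs would be IMU angle/rate streams, and the thresholding step is the paper's contribution, not shown here):

```python
def dtw(seq_a, seq_b):
    """Dynamic Time Warping distance between two 1-D time sequences,
    using absolute difference as the local cost."""
    n, m = len(seq_a), len(seq_b)
    inf = float("inf")
    cost = [[inf] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(seq_a[i - 1] - seq_b[j - 1])
            cost[i][j] = d + min(cost[i - 1][j],      # stretch seq_b
                                 cost[i][j - 1],      # stretch seq_a
                                 cost[i - 1][j - 1])  # match
    return cost[n][m]

# A stored "nod" template vs. a slower recording of the same head gesture
template = [0.0, 1.0, 2.0, 1.0, 0.0]
recorded = [0.0, 0.5, 1.0, 2.0, 2.0, 1.0, 0.0]
opposite = [0.0, -1.0, -2.0, -1.0, 0.0]
# The warped distance to the slow nod is far smaller than to the opposite gesture
```

A gesture is then accepted when its DTW distance to a template falls below a threshold; choosing that threshold optimally is what the paper's geometric approach addresses.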
Automation bias with a conversational interface: User confirmation of misparsed information
Erin G. Zaroukian, J. Bakdash, A. Preece, William M. Webberley
Pub Date: 2017-03-27 | DOI: 10.1109/COGSIMA.2017.7929605
We investigate automation bias in the confirmation of erroneous information with a conversational interface. Participants in our studies used a conversational interface to report information in a simulated intelligence, surveillance, and reconnaissance (ISR) task. For flexibility and ease of use, participants reported information to the conversational agent in natural language. The agent then interpreted each report in a human- and machine-readable language, and participants could accept or reject the agent's interpretation. Misparses occur when the agent incorrectly interprets a report and the user erroneously accepts it. We hypothesize that misparses naturally occurred in the experiment due to automation bias and complacency, because the agent's interpretation was generally correct (92%). These errors indicate that some users were unable to maintain situation awareness using the conversational interface. Our results illustrate concerns for implementing a flexible conversational interface in safety-critical environments (e.g., military and emergency operations).
Rightward attentional bias in windshield displays: Implication towards external human machine interfaces for self-driving cars
Qiang Liu, Birte Emmermann, Oscar Suen, Bryan Grant, Jake Hercules, Erik Glaser, B. Lathrop
Pub Date: 2017-03-27 | DOI: 10.1109/COGSIMA.2017.7929590
This paper describes research examining the daytime lighting requirements for windshield-based human machine interface (HMI) components of self-driving cars, or highly autonomous vehicles (HAVs). The results show a significant rightward attentional bias in the detection of amber LEDs at low luminosity levels; the bias persists at different viewing angles but is absent for white LEDs. These results support the Saliency-Effort-Expectancy-Value (SEEV) model [12] of selective attention and highlight that priority should be given to the driver's side when placing critical external HMI components, especially those with lower perceptual saliency.
Designing a Pragmatic Graphical Grammar
Leonard Eusebi, S. Guarino
Pub Date: 2017-03-01 | DOI: 10.1109/COGSIMA.2017.7929599
Modern adversaries have become more proficient at conducting cyber-attacks against our military's command and control (C2) infrastructure. To maintain security against these threats, operators perform a range of high-fidelity security assessments of existing and evolving software systems. This is just one example of the many settings in which massive amounts of data (Big Data) can prove difficult to understand quickly enough to take action and respond to threats. To support such real-time analysis, situation awareness tools must reduce the cognitive load of monitoring multiple large, simultaneous data streams. This paper presents a Pragmatic Graphical Grammar (PGG) that evolves the concept of graphical grammars into an observability-focused method of data presentation.