Title: A database of elementary human movements collected with RGB-D type camera
Authors: Karolina Galinska, Piotr Luboch, Konrad Kluwak, M. Bieganski
Venue: 2015 6th IEEE International Conference on Cognitive Infocommunications (CogInfoCom)
Pub Date: 2015-10-01 | DOI: 10.1109/COGINFOCOM.2015.7390657
Abstract: This paper describes an implementation of gait recognition that consists of two parts: motion capture and data analysis. It introduces the process of building a database of elementary human movements and explains the choice of the BVH recording format. In addition to filtering, three methods of data analysis were applied: Fourier transform, harmonic analysis, and integral computation. Finally, it outlines a broad spectrum of practical applications for gait recognition.
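The abstract lists the Fourier transform among its analysis methods without implementation detail. As a purely illustrative sketch (the joint channel, frame rate, and synthetic signal are assumptions, not from the paper), the dominant frequency of one BVH-derived joint-angle channel, roughly the gait cadence, could be extracted like this:

```python
import numpy as np

def dominant_gait_frequency(joint_angle, fs):
    """Return the dominant frequency (Hz) of a joint-angle time series.

    joint_angle: 1-D array of angles sampled at fs Hz, e.g. a knee-flexion
    channel extracted from a BVH recording (channel choice is illustrative).
    """
    x = joint_angle - np.mean(joint_angle)        # remove the DC offset
    spectrum = np.abs(np.fft.rfft(x))             # magnitude spectrum
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)   # matching frequency axis
    return freqs[np.argmax(spectrum[1:]) + 1]     # skip the residual DC bin

# Synthetic example: a 1 Hz gait cycle captured at 60 frames/s for 10 s.
fs = 60.0
t = np.arange(0, 10, 1 / fs)
knee = 30 + 20 * np.sin(2 * np.pi * 1.0 * t)      # degrees
print(dominant_gait_frequency(knee, fs))          # prints 1.0
```

On a real recording the spectrum would carry harmonics of the stride frequency; those harmonic amplitudes are one plausible input to the paper's harmonic analysis step.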
Title: On multimodality in the perception of emotions from materials of the HuComTech corpus
Authors: L. Hunyadi
Pub Date: 2015-10-01 | DOI: 10.1109/COGINFOCOM.2015.7390642
Abstract: Emotions are important constituents of human behavior. The production and perception of emotional cues is a complex task involving both verbal and nonverbal aspects of behavior. This complexity is further increased by the fact that emotions are subject to interpretation: a resulting emotion cannot be compositionally derived from its constituent building blocks. Even though we commonly associate an emotion with a given modality (most often the visual one), it will be argued that emotions are essentially multimodal. Multimodality in turn involves both the temporal alignment and the sequential organization of cues across a number of modalities, with virtually no primary modality of expression. These assumptions are tested and elaborated on the extensively annotated multimodal HuComTech corpus by considering the frequency, alignment and sequence of the annotations of the basic emotions across three perception conditions: video only, audio only, and video+audio.
Title: Advanced information services for cognitive behaviour of travellers
Authors: C. Csiszár, David Foldes
Pub Date: 2015-10-01 | DOI: 10.1109/COGINFOCOM.2015.7390578
Abstract: Smart transportation is essentially shaped by human decision making, especially the behaviour of travellers. Traveller behaviour (movements; information management) and advanced information services are mutually entangled. Travellers and the integrated infocommunication systems of transportation (ICT) can be considered an inseparable whole with new cognitive capabilities. These capabilities are to be used for mobility-related decisions in order to improve the sustainability of transportation. To reveal how these capabilities co-evolve with smart transportation, comprehensive system- and process-oriented scientific research has been launched. Following the top-down approach of systems engineering, this paper presents the basic definitions, the architecture and operation of the integrated smart transportation system, and the model of the smart traveller.
Title: ProsoTool, a method for automatic annotation of fundamental frequency
Authors: István Szekrényes
Pub Date: 2015-10-01 | DOI: 10.1109/COGINFOCOM.2015.7390606
Abstract: ProsoTool is an algorithm, implemented as a Praat script, for the automatic annotation of certain prosodic features in recorded dialogs. The tool was developed within the framework of the HuComTech project. The current version aims to make the raw F0 data more expressive and processable by smoothing the pitch curve and segmenting it into larger tonal movements, adjusting the calculation parameters to the individual vocal range of each speaker. This paper contains a complete description of the modified annotation method and its first samples from the HuComTech corpus.
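ProsoTool's actual smoothing and segmentation rules are not spelled out in this abstract; the toy sketch below only illustrates the general idea of cleaning an F0 contour and labeling a tonal movement relative to a speaker's vocal range (the window size and thresholds are invented, and real F0 tracks are far messier):

```python
import numpy as np

def smooth_f0(f0, k=5):
    """Median-smooth an F0 contour (Hz); frames with value 0 are unvoiced."""
    voiced = f0 > 0
    out = f0.astype(float)                       # astype copies the input
    padded = np.pad(out, k // 2, mode='edge')
    for i in range(len(out)):
        win = padded[i:i + k]
        # Median over voiced neighbours suppresses isolated octave errors.
        out[i] = np.median(win[win > 0]) if np.any(win > 0) else 0.0
    return np.where(voiced, out, 0.0)

def classify_movement(segment, low, high):
    """Label one voiced segment relative to the speaker's range [low, high]."""
    span = high - low
    rise = segment[-1] - segment[0]
    if rise > 0.1 * span:                        # threshold invented
        return 'rise'
    if rise < -0.1 * span:
        return 'fall'
    return 'level'

spike = np.array([100.0, 100.0, 500.0, 100.0, 100.0])   # octave-error spike
print(smooth_f0(spike))                          # spike removed: all 100.0
print(classify_movement(np.linspace(110, 170, 20), 90, 250))  # rise
```

Normalizing the rise threshold by the speaker's own range is the point of the sketch: the same absolute F0 change can be a large movement for one voice and negligible for another.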
Title: Intelligent route planning system for car drivers in a city
Authors: Anikó Vágner
Pub Date: 2015-10-01 | DOI: 10.1109/COGINFOCOM.2015.7390653
Abstract: Nobody likes sitting in a car stuck in a traffic jam, and a driver caught in one immediately starts considering how to escape it as soon as possible. This paper introduces a conceptual system that supports the car drivers of a city. The intelligent route planner shows how drivers can more easily avoid the city's traffic jams. Moreover, it provides other current traffic information, plans routes based on historical and real-time data, takes the weather into account, and shows where free or cheap parking can be found near the destination. A mobile application gives access to this information and can be controlled by voice. Although the system is conceptual, the knowledge and information needed to build such an intelligent route planner are available, even for a city in Hungary; the system awaits implementation.
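The paper proposes the system conceptually and gives no algorithm. One plausible core for such a planner, shortest paths over travel times weighted by congestion estimates derived from historical and current data, could be sketched as follows (the network, congestion factors, and function names are all hypothetical):

```python
import heapq

def plan_route(graph, congestion, start, goal):
    """Dijkstra over travel times scaled by per-road congestion factors.

    graph:      {node: [(neighbor, base_minutes), ...]}
    congestion: {(u, v): factor >= 1.0} built from historical + live data.
    Assumes goal is reachable from start.
    """
    dist, prev = {start: 0.0}, {}
    pq = [(0.0, start)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == goal:
            break
        if d > dist.get(u, float('inf')):
            continue                              # stale queue entry
        for v, base in graph.get(u, []):
            nd = d + base * congestion.get((u, v), 1.0)
            if nd < dist.get(v, float('inf')):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    path = [goal]
    while path[-1] != start:
        path.append(prev[path[-1]])
    return path[::-1], dist[goal]

# Toy network (minutes); a jam triples the A->B road, so A->C->D wins.
graph = {'A': [('B', 5), ('C', 6)], 'B': [('D', 5)], 'C': [('D', 6)], 'D': []}
path, minutes = plan_route(graph, {('A', 'B'): 3.0}, 'A', 'D')
print(path, minutes)                              # ['A', 'C', 'D'] 12.0
```

Weather and parking availability, also mentioned in the abstract, would fold into the same framework as further multipliers or as extra destination nodes.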
Title: Comparison of skewness-based salient event detector algorithms in speech
Authors: A. Kovács, G. Kiss, K. Vicsi, I. Winkler, M. Coath
Pub Date: 2015-10-01 | DOI: 10.1109/COGINFOCOM.2015.7390605
Abstract: In this work, we compare two skewness-based salient event detector algorithms that can detect transients in human speech signals. Speech transients are characterized by rapid changes in signal energy. The purpose of this study was to compare the identification of transients by two different skewness-based methods, in order to develop a method for studying the processing of speech transients in the human brain. The first method, skewness in variable time (SKV), finds transients using a cochlear model; the skewness of the energy distribution over a variable time window is implemented with artificial neural networks. The second method, the automatic segmentation method for transient detection (RoT), is more speech-segmentation-based and was developed for detecting the transient-to-speech segment ratio in spoken recordings. The test corpus included Hungarian and English speech recorded from different speakers (2 male and 2 female for each language). Results were compared using the F-measure, the Jaccard similarity index, and the Hamming distance. The outputs of the two algorithms were also tested against a corpus hand-labeled by linguistic experts, for an absolute assessment of the performance of the two methods. Transient detection was evaluated once for onset events alone and, separately, for onset and offset events together. The results show that in most cases the RoT method performs better on the expert-labeled databases: using the F-measure with a ±25 ms window, it achieved 0.664 on English and 0.834 on Hungarian when all types of transient events were evaluated. In other respects, the two methods identify the same stimulus features as transients, which also coincide with those hand-labeled by the experts.
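Neither SKV nor RoT is described here in enough detail to reproduce. The toy sketch below only illustrates the shared underlying idea, skewness of short-time energy as a transient cue, together with the tolerance-window F-measure used for scoring (the frame sizes and threshold are invented):

```python
import numpy as np

def skewness(x):
    x = x - x.mean()
    s = x.std()
    return 0.0 if s == 0 else float(np.mean(x ** 3) / s ** 3)

def detect_transients(signal, fs, win=0.02, hop=0.01, thresh=1.0):
    """Return start times (s) of frames whose short-time energy is strongly
    right-skewed, i.e. dominated by a brief burst within the frame."""
    w, h = int(win * fs), int(hop * fs)
    return [start / fs
            for start in range(0, len(signal) - w, h)
            if skewness(signal[start:start + w] ** 2) > thresh]

def f_measure(detected, reference, tol=0.025):
    """F-measure with a +/-25 ms matching window, as in the paper's scoring."""
    tp_d = sum(any(abs(d - r) <= tol for r in reference) for d in detected)
    tp_r = sum(any(abs(d - r) <= tol for d in detected) for r in reference)
    prec = tp_d / len(detected) if detected else 0.0
    rec = tp_r / len(reference) if reference else 0.0
    return 0.0 if prec + rec == 0 else 2 * prec * rec / (prec + rec)

fs = 1000.0
x = np.zeros(1000)
x[500] = 1.0                                  # an impulse "click" at 0.5 s
onsets = detect_transients(x, fs)
print(onsets, f_measure(onsets, [0.5]))       # detections near 0.5 s, F = 1.0
```

The precision and recall terms are counted separately so that several detections matching one reference event cannot push recall above 1.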
Title: Adaptive user interface for assisting the drivers' decision making
Authors: Joni Jämsä, H. Kaartinen
Pub Date: 2015-10-01 | DOI: 10.1109/COGINFOCOM.2015.7390556
Abstract: In this demonstration paper, we present our adaptive user interface for replacing a vehicle's conventional dashboard. A programmable tablet computer running the Android operating system displays the vehicle's conventional instrumentation data and navigation guidance, as well as notifications from the vehicle's local safety sensors or from external infrastructure. The developed user interface reacts to the data received from these various sources and displays the most relevant information in each case. When an unsafe situation occurs, some of the regular data is hidden and replaced with the crucial information until the situation is over. The priority ranking of potential occurrences is programmed into interface routines that choose the correct layout to display.
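The abstract states that a priority ranking programmed into interface routines selects the layout. A minimal sketch of that selection logic, with hypothetical event names and ranks (the paper does not list its actual events or priorities), might look like:

```python
# Hypothetical priority table; event names and ranks are invented for
# illustration, not taken from the paper (higher rank = more urgent).
PRIORITY = {
    'collision_warning': 3,
    'icy_road_notice': 2,
    'navigation_turn': 1,
}

def choose_layout(active_events, default='dashboard'):
    """Show the layout for the most urgent active event, else the default."""
    if not active_events:
        return default
    return max(active_events, key=lambda event: PRIORITY.get(event, 0))

print(choose_layout(['navigation_turn', 'collision_warning']))  # collision_warning
print(choose_layout([]))                                        # dashboard
```

Keeping the ranking in a data table rather than in branching code makes it easy to re-tune which occurrences may pre-empt the regular instrumentation view.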
Title: Revisiting the concept of generation CE - Generation of cognitive entities
Authors: P. Baranyi, Á. Csapó
Pub Date: 2015-10-01 | DOI: 10.1109/COGINFOCOM.2015.7390659
Abstract: A central notion behind the field of cognitive infocommunications is that entangled sets of human and ICT capabilities, created through the co-evolution of humans together with ICT, can be conceptually represented and analyzed as new "cognitive entities". This term, along with the concept of generation CE (in analogy with generations X, Y and Z), has recently been introduced to characterize such entangled relationships in terms of functionally relevant units, and to describe the effects that these relationships are having on the personal and social development of the new generation growing up today. In this paper, we make the case that a better understanding of cognitive-social-technological phenomena can be obtained when viewed through the lens of such concepts.
Title: Symbolic cognitive abilities implementation on the NI-9631 pervasive mobile robot
Authors: Szász Csaba
Pub Date: 2015-10-01 | DOI: 10.1109/COGINFOCOM.2015.7390555
Abstract: As is well known, the ongoing trend of endowing robotic systems with more human-like intelligence represents a great scientific challenge and a major task of cognitive robotics. With the main goal of improving human-robot interaction, the great majority of human-robot communication behaviors today are implemented on mobile agents, so-called pervasive mobile robots, enhanced with complex interaction abilities. However, choosing adequate hardware and software technologies for rapid and efficient progress in implementing human-computer interaction is a difficult and burdensome task, even for specialists in the field. Starting from this observation, the main goal of this paper is to present a well-fitted example of combining latest-generation hardware resources and software toolkits to implement complex human-like abilities on robotic systems. Besides outlining the advantages and versatility of the chosen technologies, the paper also presents research efforts and experimental results on implementing symbolic cognitive abilities on a specially developed NI-9631 prototype robot. The hardware architecture of this mobile agent was built on a unique robotic configuration designed for implementing human gesture and speech recognition. Using the LabVIEW graphical programming technology, its modules integrate, through specific software drivers, various sensor modalities, including the analysis of vision and speech signals. Upgraded with these capabilities, the robot becomes a flexible and powerful development toolkit for research on complex human-robot interaction and the implementation of high-level cognitive abilities. In addition, the system provides a useful platform for the rapid testing and development of a wide range of voice-signal and image-processing algorithms, through which the robot displays intelligence and cooperativeness in its behavior.
Title: Evaluating application usability with portable biofeedback system for mobile and desktop
Authors: Zsolt Medgyesi, K. Pomázi, Luca Szegletes, B. Forstner
Pub Date: 2015-10-01 | DOI: 10.1109/COGINFOCOM.2015.7390604
Abstract: Evaluating mobile and desktop applications from a usability perspective is becoming increasingly common as the use of electronic devices grows every day. Both user loyalty and usage efficiency are key factors in developing successful applications, given the wide variety of the market. An affordable and portable monitoring environment for desktop and mobile platforms could fulfill the specific needs of user experience researchers. In this paper we present a system that monitors the physiological signals of the user while any application installed on the device is being used. As a result, we are able to track eye movement, ECG and EEG signals, and to analyze the combined data provided by these sensors. The results are visualized in a way that lets user experience experts effortlessly follow the application usage and the cognitive processes of the user, which makes it possible to further improve the user interface to boost productivity and user loyalty and to lower frustration.
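The paper does not describe its synchronization pipeline. As a generic sketch of one ingredient, merging independently timestamped eye-tracking, ECG and EEG samples into a single time-ordered stream for joint analysis, one might write (sensor names and sample values are illustrative):

```python
import heapq

def merge_streams(**streams):
    """Merge timestamped sensor samples into one time-ordered stream.

    streams: sensor_name -> list of (timestamp_s, value), each list already
    sorted by time. Returns a list of (timestamp_s, sensor_name, value).
    """
    tagged = [[(t, name, v) for t, v in samples]
              for name, samples in streams.items()]
    return list(heapq.merge(*tagged))             # k-way merge of sorted lists

# Toy session fragment: two gaze points with one ECG beat between them.
merged = merge_streams(gaze=[(0.00, (512, 384)), (0.12, (600, 390))],
                       ecg=[(0.06, 72)])
print([name for _, name, _ in merged])            # ['gaze', 'ecg', 'gaze']
```

A combined stream like this is what allows a visualization to line up a gaze fixation with the ECG and EEG activity recorded at the same moment of application use.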