A Database of Relations between Predicate Argument Structures for Recognizing Textual Entailment and Contradiction
Suguru Matsuyoshi, Koji Murakami, Yuji Matsumoto, Kentaro Inui
doi:10.1109/ISUC.2008.31
In this paper, we present a database of logical relations between predicate argument structures (PASs) in Japanese for recognizing relations between statements. We have defined nine logical relations between PASs and manually collected argument structures and logical relations for verbs from definition sentences in a machine-readable Japanese dictionary. In addition, we augmented the relations in our database with a thesaurus of verb argument structures, which identifies synonymy and antonymy between PASs. Our database consists of 29,555 entries and 45,905 relations between PASs. In a preliminary experiment with this database, we constructed a system that recognizes synonymy between PASs in Web documents with a precision of about 0.80.
A Study of Multi-modal Display System with Visual Feedback
T. Tanikawa, M. Hirose
doi:10.1109/ISUC.2008.89
In this research, we evaluate pseudo-haptic, olfactory, and gustatory stimulation with visual feedback in order to construct a multi-modal display system. To evaluate pseudo-haptics, we introduce EMG measurement and show the relationship between visual feedback and subjective evaluation. To evaluate pseudo-olfaction, we construct two olfactory display systems with visual feedback and demonstrate an olfactory illusion. To evaluate pseudo-gustation, we examine the relationship between drink color and taste using juice colored with added dye.
Detecting Driver Fatigue based on the Driver's Response Pattern and the Front View Environment of an Automobile
Youngjae Kim, Youmin Kim, Minsoo Hahn
doi:10.1109/ISUC.2008.58
In the field of automotive research, methods to monitor and detect a drowsy or drunken driver have been studied for many years. Previous research uses sensors such as an infrared camera for pupil detection, or the driver's voice, to detect fatigue. Although these approaches can detect driver fatigue, they neither adapt to the individual driver nor respond to the driving situation outside the car. Unlike previous approaches, we propose a driver fatigue detection system that uses the driver's pedal-control pattern with respect to the situation in the driver's front view. The system uses a distance sensor on the front end of the car so that it can capture outside events. All sensor data are processed using a combination of a decision-tree learning algorithm and a rule-based algorithm. The system runs its learning process at every startup of the car, so it can adapt to each driver's driving style and behavior. Accordingly, the driver's fatigue level can be obtained from these response patterns.
Prosody Modeling from Tone to Intonation in Chinese using a Functional F0 Model
Jinfu Ni, S. Sakai, Tohru Shimizu, Satoshi Nakamura
doi:10.1109/ISUC.2008.37
Chinese is a tonal language: it has both lexical tones and intonation, so the fundamental frequency (F0) contours consist of tone and intonation components. This paper presents an approach to modeling the two components separately and combining them to form the final F0 contours based on a functional F0 model. We analyze tonal patterns as sparse target points (tonal F0 peaks and valleys) and model them using classification and regression trees (CART) with contextual linguistic features. As a first step, we stylize expressive intonation using a few piecewise linear patterns specified by a small set of markup tags. Both tonal and intonational patterns are represented in a parametric form within the framework of this F0 model. Our experimental results indicated that very low F0 prediction errors were achieved by the CART-based modeling of the tonal patterns uttered by two speakers (one female and one male). In a listening test, native speakers could identify 90% of the synthesized stimuli with enhanced word emphasis. Also, the linguistic features related to the lexical tone context and the distinction between voiced and unvoiced initials played the most important role in characterizing the tonal patterns.
Perception of Depth, Motion, and Stability with Motion Parallax (Invited Paper)
H. Ono
doi:10.1109/ISUC.2008.81
Motion parallax was described as a cue to depth over 300 years ago and as producing apparent motion over 150 years ago. In recent years, experimental interest in motion parallax has increased, following the rediscovery of the idea of yoking stimulus motion to head movement. Contemporary research indicates how depth and motion perception depend on the conditions of stimulation. Based on what we know about motion parallax, we suggest an experimental 3-D display system.
Computer Simulation of HRTFs for Personalization of 3D Audio
P. Mokhtari, H. Takemoto, R. Nishimura, H. Kato
doi:10.1109/ISUC.2008.41
To give listeners a vivid sense of 3D spatial audio, virtual auditory display technology relies crucially on head related transfer functions (HRTFs). However, as each person has unique morphological characteristics of their head and ears, for a realistic auditory experience it is important to use personalized HRTFs. Our approach to HRTF personalization is first to measure a listener's head and ear morphology, currently by magnetic resonance imaging (MRI), and then to use the 3D morphological data in a computer simulation of sound wave propagation by the finite difference time domain (FDTD) method. This paper summarizes our methods and recent improvements, which have led to more faithful, personalized HRTFs obtained by FDTD simulation.
An Extended Non-Photorealistic Rendering Technique for Depicting Motions of Multiple 3D Objects
Tomoaki Moriya, Tokiichiro Takahashi
doi:10.1109/ISUC.2008.64
In order to enrich visual communication, we present a non-photorealistic rendering technique that depicts the motion of 3D objects in a still image. To realize this technique, we introduce the "speed line", one of the most familiar devices in Japanese comics ("manga"). Our technique first decomposes the combined motion of each 3D object into a translational motion of its center of gravity and a rotational motion. Then, to depict the motion of the 3D object, we render texture-mapped polygons generated from a series of geometric positions of the 3D object that represent its animation progress, together with the 3D objects themselves. The textures mapped to the polygons are changed automatically according to the speed of the 3D object. Experimental results verify that our technique is effective enough to automatically depict various motions of 3D objects in real time.
Understanding Events Relationally and Temporally Related: Context Assessment Strategies for a Smart Home (Invited Paper)
F. Mastrogiovanni, A. Sgorbissa, R. Zaccaria
doi:10.1109/ISUC.2008.27
This paper elaborates on context assessment strategies for smart homes and, in a broader perspective, for context-aware cognitive systems. The proposed framework, which is inspired by a cognitive theory called functionalism, is aimed at integrating ontology and logic approaches to context modeling. The model rests on two assumptions: (i) the availability of an ontology (i.e., a "context-role" representation of what exists in a given domain); and (ii) a simple inference schema (i.e., subsumption between concepts). The context model is formally defined using a structural approach, which describes contexts and situations as recursive structures grounded with respect to the ontology. Examples are presented to discuss the proposed model.
Inferring User Interests from Relevance Feedback with High Similarity Sequence Data-Driven Clustering
Roman Y. Shtykh, Qun Jin
doi:10.1109/ISUC.2008.39
Relevance feedback is an important source of information about a user and is often used for usage and user modeling to further personalize user-system interactions. In this paper we present a method to infer a user's interests from his or her relevance feedback using an online incremental clustering method. For the inference of a new interest (concept) and for concept updates, the method uses the similarity characteristics of uniform user relevance feedback. It is fast, easy to implement, and gives reasonable clustering results. We evaluate the method against two different data sets and demonstrate and discuss the outcomes.
Content Presence vs. System Presence in Audio Reproduction Systems
K. Ozawa, Yoshihiro Chujo
doi:10.1109/ISUC.2008.45
The auditory presence of a reproduced sound depends on its content and the characteristics of the system used. In this study, the former property is referred to as "content presence", while the latter is called "system presence". A psychoacoustical experiment was conducted to measure the presence of twenty-five stimuli, consisting of five reproduction systems crossed with five sounds. The five systems differed in their accuracy of sound localization and included a binaural reproduction system and a monaural system. The five sounds were chosen for their differing content presence, based on our previous experiment. The experiment was conducted using Scheffé's paired comparison method. The results showed that accurate sound localization is important for high presence. Moreover, the system presence was found to be comparable to the content presence in audio reproduction systems.