
Latest Publications: 2019 International Conference on Cyberworlds (CW)

Applying Firefly Algorithm to Data Fitting for the Van der Waals Equation of State with Bézier Curves
Pub Date : 2019-10-01 DOI: 10.1109/CW.2019.00042
Almudena Campuzano, A. Iglesias, A. Gálvez
The Van der Waals equation is an equation of state that generalizes the ideal gas law. It involves two characteristic curves, called the binodal and spinodal curves. They are usually reconstructed through standard polynomial fitting, but the resulting fitting models are strongly limited in several ways. In this paper, we address this issue through least-squares approximation of the set of 2D points using free-form Bézier curves. This requires performing data parameterization in addition to computing the poles of the curves, which is achieved by applying a powerful swarm intelligence method called the firefly algorithm. Our method is applied to real data of a gas. Our results show that the method can reconstruct the characteristic curves with good accuracy. Comparative work shows that our approach outperforms two state-of-the-art methods for this example.
Citations: 3
Electroencephalography Based Motor Imagery Classification Using Unsupervised Feature Selection
Pub Date : 2019-10-01 DOI: 10.1109/CW.2019.00047
Abdullah Al Shiam, M. Islam, Toshihisa Tanaka, M. I. Molla
The major challenge in Brain Computer Interface (BCI) is to obtain reliable classification accuracy for motor imagery (MI) tasks. This paper focuses on unsupervised feature selection for electroencephalography (EEG) classification leading to BCI implementation. The multichannel EEG signal is decomposed into a number of subband signals, and features are extracted from each subband by applying a spatial filtering technique. The features are combined into a common feature space for MI classification. This space may inevitably include some irrelevant features, increasing the dimensionality and misleading the classification system. Unsupervised discriminative feature selection (UDFS) is employed here to select a subset of the extracted features; it effectively selects the dominant features to improve the classification accuracy of motor imagery tasks acquired via EEG signals. The classification of MI tasks is performed by a support vector machine. The performance of the proposed method is evaluated using the publicly available dataset from BCI Competition III (IVa). The experimental results show that the performance of this method is better than that of recently developed algorithms.
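The abstract names the spatial filtering technique only generically; common spatial patterns (CSP) is the usual choice for subband MI features, so the sketch below assumes CSP (an assumption, not the paper's stated method). CSP can be computed by whitening the summed class covariances and eigendecomposing the whitened class-1 covariance:

```python
import numpy as np

def csp_filters(X1, X2, n_pairs=2):
    """Common spatial patterns via whitening + eigendecomposition.
    X1, X2: (trials, channels, samples) arrays for the two MI classes."""
    C1 = np.mean([np.cov(tr) for tr in X1], axis=0)
    C2 = np.mean([np.cov(tr) for tr in X2], axis=0)
    d, U = np.linalg.eigh(C1 + C2)
    P = U @ np.diag(d ** -0.5) @ U.T           # whitening transform
    lam, V = np.linalg.eigh(P @ C1 @ P)        # eigenvalues sorted ascending
    W = V.T @ P                                # rows are spatial filters
    return np.vstack([W[:n_pairs], W[-n_pairs:]])  # most discriminative pairs

def log_var_features(W, X):
    """Log of normalized variance of each filtered trial -> feature vectors."""
    Z = np.einsum('fc,tcs->tfs', W, X)
    v = Z.var(axis=2)
    return np.log(v / v.sum(axis=1, keepdims=True))

# Synthetic two-class data: class 1 strong in channel 0, class 2 in channel 1
rng = np.random.default_rng(0)
X1 = rng.standard_normal((20, 4, 100)); X1[:, 0, :] *= 5
X2 = rng.standard_normal((20, 4, 100)); X2[:, 1, :] *= 5
W = csp_filters(X1, X2, n_pairs=1)
F1, F2 = log_var_features(W, X1), log_var_features(W, X2)
```

The resulting feature vectors (one per trial) would then pass through feature selection such as UDFS and into the SVM classifier described in the abstract.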
Citations: 4
An Interactive System for Modeling Fish Shapes
Pub Date : 2019-10-01 DOI: 10.1109/CW.2019.00076
Masayuki Tamiya, Y. Dobashi
Recently, computer graphics has been widely used in movies, games, and related media, and modeling three-dimensional virtual objects is important for synthesizing realistic images. Since modeling realistic objects often requires special skills and takes a long time, many methods have been developed to help the user generate models such as plants and buildings. However, little attention has been paid to modeling fish shapes because of their complexity. We propose an interactive system for modeling a realistic fish shape from a single image. We also introduce a method called Direct Manipulation Blendshapes to improve the usability of our system.
Citations: 0
Person Identification from Visual Aesthetics Using Gene Expression Programming
Pub Date : 2019-10-01 DOI: 10.1109/CW.2019.00053
Brandon Sieu, M. Gavrilova
The last decade has witnessed an increase in online human interactions, covering all aspects of personal and professional activities. Identifying people by their behavior rather than their physical traits is a growing industry, spanning diverse spheres such as online education, e-commerce, and cyber security. One prominent behavior is the expression of opinions, commonly as a reaction to images posted online. Visual aesthetic is a soft, behavioral biometric that refers to a person's sense of fondness for a certain image. Identifying individuals using their visual aesthetics as discriminatory features is an emerging domain of research. This paper introduces a new method for aesthetic feature dimensionality reduction using gene expression programming. The advantage of this method is that the resulting system can use a tree-based genetic approach for feature recombination. Reducing feature dimensionality improves classifier accuracy, reduces computation runtime, and minimizes required storage. The results obtained on a dataset of 200 Flickr users evaluating 40000 images demonstrate a 94% accuracy of identity recognition based solely on users' aesthetic preferences. This outperforms the best-known method by 13.5%.
Citations: 2
Detection of Humanoid Robot Design Preferences Using EEG and Eye Tracker
Pub Date : 2019-10-01 DOI: 10.1109/CW.2019.00044
Yisi Liu, Fan Li, L. Tang, Zirui Lan, Jian Cui, O. Sourina, Chun-Hsien Chen
Currently, many modern humanoid robots have little appeal due to their simple designs and bland appearances. To provide recommendations for designers and improve the designs of humanoid robots, a study of human perception of humanoid robot designs was conducted using electroencephalogram (EEG) signals, eye tracking information, and questionnaires. We designed and carried out an experiment with 20 subjects, collecting EEG and eye tracking data to study their reactions to different robot designs and their corresponding preferences. This study can give us insights into how people react to the aesthetic designs of different humanoid robot models and into the important traits in a humanoid robot design, such as the perceived smartness and friendliness of the robots. Another point of interest is to investigate the most prominent features of the robot, such as the head, facial features, and chest. The results show that the head and facial features are the focus of attention. It is also discovered that more attention is paid to robots that appear more appealing. Lastly, the first impressions of the robots generally do not change over time, which may imply that a good humanoid robot design impresses observers at first sight.
Citations: 10
Human Movements Classification Using Multi-channel Surface EMG Signals and Deep Learning Technique
Pub Date : 2019-10-01 DOI: 10.1109/CW.2019.00051
Jianhua Zhang, C. Ling, Sunan Li
Electromyography (EMG) signals can be used for human movement classification. Nonetheless, due to their nonlinear and time-varying properties, it is difficult to classify EMG signals, and it is critical to use appropriate algorithms for EMG feature extraction and pattern classification. In the literature, various machine learning (ML) methods have been applied to this EMG signal classification problem. In this paper, we extract four time-domain features of the EMG signals and use a generative graphical model, the Deep Belief Network (DBN), to classify them. A DBN is trained by a fast, greedy deep learning algorithm that can rapidly find a set of optimal weights for a deep network with many hidden layers. To evaluate the DBN model, we acquired EMG signals, extracted their time-domain features, and then utilized the DBN model to classify human movements. Real data analysis results are presented to show the effectiveness of the proposed deep learning technique for both binary and 4-class recognition of human movements using the measured 8-channel EMG signals. The proposed DBN model may find applications in the design of EMG-based user interfaces.
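The abstract does not itemize its four time-domain features. A hedged guess at the classic set (mean absolute value, waveform length, zero crossings, slope sign changes) can be sketched as:

```python
import numpy as np

def emg_time_features(x, eps=1e-8):
    """Four classic time-domain features for one EMG channel window.
    These are assumed choices, not the paper's confirmed feature set."""
    x = np.asarray(x, dtype=float)
    dx = np.diff(x)
    mav = np.mean(np.abs(x))              # mean absolute value
    wl = np.sum(np.abs(dx))               # waveform length
    zc = np.sum((x[:-1] * x[1:]) < -eps)  # zero crossings (sign flips)
    ssc = np.sum((dx[:-1] * dx[1:]) < -eps)  # slope sign changes
    return np.array([mav, wl, zc, ssc])

# Example: one 200-sample window per channel, 8 channels concatenated
# into a single feature vector, as a DBN input might be assembled.
rng = np.random.default_rng(0)
window = rng.standard_normal((8, 200))
features = np.concatenate([emg_time_features(ch) for ch in window])  # shape (32,)
```

Each windowed, multi-channel recording thus yields one fixed-length vector that the DBN (or any classifier) can consume.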
Citations: 9
Stylized Line Drawing of 3D Models using CNN
Pub Date : 2019-10-01 DOI: 10.1109/CW.2019.00015
Mitsuhiro Uchida, S. Saito
Techniques to render 3D models in the style of hand-drawings are often required. In this paper, we propose an approach that generates line drawings with various styles by machine learning. We train two convolutional neural networks (CNNs): one is a line extractor operating on the depth and normal images of a 3D object, and the other is a line thickness applicator. A subsequent process interprets the thickness of the lines as intensity to control the properties of a line style. Using the obtained intensities, non-uniformly styled line drawings are generated. The results show the effectiveness of combining the machine learning method and the interpreter.
Citations: 1
Tsunami Evacuation Simulation System for Disaster Prevention Plan
Pub Date : 2019-10-01 DOI: 10.1109/CW.2019.00067
Yasuo Kawai, Yurie Kaizu
Hazard maps are currently developed using computationally expensive techniques and must be revised after a major disaster. Therefore, we developed a low-cost tsunami evacuation simulation system using a game engine and open data. A simulation evaluation was conducted for the target area of Kamakura City, Japan. We also developed an agent that performs evacuation actions and autonomously searches for evacuation destinations at specified speeds, in order to clarify current issues with the location of evacuation sites. The system provides three walking speeds, three disaster conditions, and two evacuation behaviors, and the number and ratio of agents can be changed freely. The simulation results revealed locations with high concentrations of tsunami victims, even in inland areas.
Citations: 1
Automatic Image Enhancement Taking into Account User Preference
Pub Date : 2019-10-01 DOI: 10.1109/CW.2019.00070
Yuri Murata, Y. Dobashi
Nowadays, we can take pictures anytime and anywhere with mobile devices such as smartphones. After taking a picture, we often modify it using image enhancement tools so that its appearance matches our own preference. However, since the enhancement functions have many parameters, it is not easy to find a parameter set that yields the desired result. Some tools can determine the parameters automatically, but they do not take the user's preference into account. In this paper, we present a system to address this problem. Our system first estimates the user's preference using RankNet. Next, the image enhancement parameters are optimized to maximize the estimated preference. We show experimental results that demonstrate the usefulness of our system.
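RankNet learns from pairwise preferences by applying a logistic loss to score differences. A minimal illustration (using a linear scorer for clarity, not the authors' network) of that core idea:

```python
import numpy as np

def ranknet_loss_and_grad(w, x_pref, x_other):
    """Pairwise RankNet loss for a linear scorer s(x) = w.x:
    P(pref > other) = sigmoid(s_pref - s_other); loss = -log P, averaged."""
    d = x_pref - x_other
    s = d @ w
    p = 1.0 / (1.0 + np.exp(-s))             # prob. the preferred item ranks higher
    loss = np.mean(np.log1p(np.exp(-s)))     # -log(sigmoid(s)), numerically stable
    grad = -(d * (1.0 - p)[:, None]).mean(axis=0)
    return loss, grad

# Toy preference pairs where feature 0 alone drives the preference
rng = np.random.default_rng(1)
x_other = rng.standard_normal((256, 4))
x_pref = x_other + np.array([1.0, 0.0, 0.0, 0.0])  # preferred images score higher on feature 0
w = np.zeros(4)
for _ in range(200):                               # plain gradient descent
    loss, g = ranknet_loss_and_grad(w, x_pref, x_other)
    w -= 0.5 * g
```

After training, the scorer assigns higher values to preferred inputs; in the paper's setting, such a learned preference score is what the enhancement parameters would be optimized against.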
Citations: 3
Colorblind-Shareable Videos
Pub Date : 2019-10-01 DOI: 10.1109/CW.2019.00065
Xinghong Hu, Xueting Liu, Xiangyu Mao, T. Wong
The two distinctive visual experiences of binocular display, with and without stereoscopic glasses, have recently been utilized for visual sharing between colorblind and normal-vision audiences. However, all existing methods only work for still images and lack the temporal consistency needed for video. In this paper, we propose the first synthesis method for colorblind-sharable videos that achieves temporal consistency for both the colorblind and normal-vision visual experiences, while retaining all other characteristics crucial for visual sharing with the colorblind. We formulate this challenging multi-constraint problem as a global optimization and minimize an objective function consisting of a temporal term, a color preservation term, a color distinguishability term, and a binocular fusibility term. Qualitative and quantitative experiments are conducted to evaluate the effectiveness of the proposed method against existing methods.
Citations: 1