
Latest publications: 2018 5th International Conference on Computational Science/ Intelligence and Applied Informatics (CSII)

Estimation of Facial Expression Intensity for Lifelog Videos Retrieval
Yamato Shinohara, Hiroki Nomiya, T. Hochin
Facial expression intensity has been proposed as a measure for retrieving impressive scenes from lifelog videos. However, estimating facial expression intensity involves manual work, and the intensity can only be evaluated relatively. We propose a new estimation method of facial expression intensity that reduces the manual work and enables absolute evaluation. We estimate the proposed expression intensity on the lifelog videos used in previous research and on the MMI dataset, compare it with the previous research, and evaluate the proposed method. The results show that it is possible to reduce the manual work while maintaining the estimation accuracy.
DOI: 10.1109/CSII.2018.00030 (published 2018-07-01)
Citations: 6
Publisher's Information
DOI: 10.1109/csii.2018.00038 (published 2018-07-01)
Citations: 0
Measurement of Line-of-Sight Detection Using Pixel Quantity Variation and Application for Autism
T. Niwa, Ippei Torii, N. Ishii
In this study, we develop a tool to support communication for physically disabled people and an assessment tool to measure the intelligence index of autistic children, both using eye movements with image processing. For the measurement of eye movements, we newly developed a pixel center-of-gravity method that detects the direction of eye movement from the point to which the weight of the black pixels has moved. This method differs from conventional black-eye detection or ellipse detection, and it enables accurate detection even when used by a physically handicapped person. The assessment tool that measures the intelligence index of autistic children uses dedicated goggles combining light-emitting diodes and near-infrared cameras. Applying the results obtained so far, we measure the response speed of left and right eye movements and explore the relationship with autism.
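The pixel center-of-gravity idea in the abstract can be sketched as follows: threshold the eye region, take the centroid of the dark (pupil) pixels, and read the gaze direction from how that centroid shifts relative to a calibrated neutral position. This is a minimal illustration, not the paper's implementation; the threshold, dead zone, and function names are assumptions.

```python
def dark_pixel_centroid(gray, threshold=60):
    """Centroid (x, y) of pixels darker than `threshold` in a 2-D
    grayscale image given as a list of rows of intensities.
    The threshold of 60 is an illustrative assumption."""
    xs, ys, n = 0.0, 0.0, 0
    for y, row in enumerate(gray):
        for x, v in enumerate(row):
            if v < threshold:
                xs += x
                ys += y
                n += 1
    if n == 0:
        return None  # no dark pixels found (e.g. eye closed)
    return xs / n, ys / n

def gaze_direction(centroid, neutral, dead_zone=1.5):
    """Map the centroid's displacement from a calibrated neutral
    position to one of left/right/up/down/center."""
    dx = centroid[0] - neutral[0]
    dy = centroid[1] - neutral[1]
    if max(abs(dx), abs(dy)) < dead_zone:
        return "center"
    if abs(dx) >= abs(dy):
        return "right" if dx > 0 else "left"
    return "down" if dy > 0 else "up"
```

In practice the neutral position would be calibrated per user, which is what lets the method work without precise pupil-shape fitting.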
DOI: 10.1109/CSII.2018.00020 (published 2018-07-01)
Citations: 1
Generation of Convex Cones Based on Nearest Neighbor Relations
N. Ishii, Ippei Torii, K. Iwata, Kazuya Ogagiri, Toyoshiro Nakashima
Dimension reduction of data is an important issue in data processing, and it is needed for the analysis of higher-dimensional data in application domains. Rough sets are fundamental and useful for reducing higher-dimensional data to lower dimensions for classification. We develop the generation of reducts based on the nearest neighbor relation for classification. In this paper, the nearest neighbor relation is shown to play a fundamental role in classification through the geometric reasoning of reducts by convex cones. It is then shown that reducts are generated based on the convex cone construction. Finally, using the nearest neighbor relation, algebraic operations are derived on the degenerate convex cones.
DOI: 10.1109/CSII.2018.00022 (published 2018-07-01)
Citations: 0
Effective Fusion of Disaster-Relief Agent in RoboCupRescue Simulation
Taishun Kusaka, Yukinobu Miyamoto, Akira Hasegawa, Shunki Takami, K. Iwata, N. Ito
The RoboCupRescue Simulation project is one response to recent large-scale natural disasters. In particular, the project provides a platform for studying disaster-relief agents and simulations. We designed and implemented an agent based on the results of a combinational experiment with various modules taken from teams that participated in RoboCup 2017, and we developed a new fused agent with the better modules in the Agent Development Framework. This paper presents the results of the combination experiment in detail. We confirm that the fused agent built on these experimental results obtained a better score than the champion agent at RoboCup 2017.
DOI: 10.1109/CSII.2018.00021 (published 2018-07-01)
Citations: 0
Detection of Dangerous Behavior by Estimation of Head Pose and Moving Direction
K. Miyoshi, Hiroki Nomiya, T. Hochin
We propose a detection system for hazardous behavior using depth information, focusing on head position and movement direction. The purpose of this system is to estimate the line-of-sight direction from the head pose and to detect as dangerous the behavior in which the movement direction differs greatly from the head direction. In the experiment, the risk of behavior was classified into three levels from the head direction and the movement direction, and the recognition accuracy was confirmed. Experimental results showed the validity of this system's accuracy in detecting dangerous behavior.
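The core rule the abstract describes, grading risk by how far the movement direction departs from the head direction, can be sketched in a few lines. The three-level boundaries (45 and 90 degrees) are illustrative assumptions; the paper does not state its exact thresholds here.

```python
def angle_diff_deg(a, b):
    """Smallest absolute difference between two angles, in degrees."""
    d = abs(a - b) % 360.0
    return min(d, 360.0 - d)

def risk_level(head_deg, move_deg, low=45.0, high=90.0):
    """Classify behavior risk into three levels from the angular gap
    between head (gaze) direction and movement direction.
    The 45/90-degree boundaries are assumed for illustration."""
    d = angle_diff_deg(head_deg, move_deg)
    if d < low:
        return "low"      # moving roughly where the person is looking
    if d < high:
        return "medium"   # noticeable mismatch
    return "high"         # moving away from the line of sight
```

Wrapping the difference through 360 degrees matters: a head at 350 degrees and movement at 10 degrees are only 20 degrees apart, not 340.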
DOI: 10.1109/CSII.2018.00028 (published 2018-07-01)
Citations: 3
Message from the CSII 2018 Program Chair
DOI: 10.1109/csii.2018.00006 (published 2018-07-01)
Citations: 0
Synchronizing Method of Music and Movie Clips Considering Temporal Harmony
Toshihiro Ozaki, T. Hochin, Hiroki Nomiya
A synchronization method is proposed to match music and movie clips. To this end, harmonic intervals of a movie are proposed; these correspond to the BPM of a music material. Harmonic intervals are obtained from the changes in the motion of images. In the video analysis, we propose a method for recognizing objects and a method for tracking moving objects even when the background moves. The proposed method is evaluated through a subjective evaluation experiment, and the result shows that it is effective.
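The BPM correspondence the abstract mentions amounts to working on a beat grid: a track at a given BPM has a beat every 60/BPM seconds, and video events can be aligned to that grid. The snapping rule below is an assumption for illustration, not the paper's method of deriving harmonic intervals from motion.

```python
def beat_times(bpm, duration):
    """Beat timestamps (seconds) for a track of `duration` seconds:
    one beat every 60/BPM seconds, starting at t = 0."""
    step = 60.0 / bpm
    times, t = [], 0.0
    while t < duration:
        times.append(round(t, 6))
        t += step
    return times

def snap_to_beats(cut_points, bpm):
    """Snap each proposed video cut time to the nearest beat, an
    illustrative way to make cuts land 'in time' with the music."""
    step = 60.0 / bpm
    return [round(round(t / step) * step, 6) for t in cut_points]
```

For example, at 120 BPM the beat spacing is 0.5 s, so a cut proposed at 0.6 s would be pulled back to the beat at 0.5 s.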
DOI: 10.1109/CSII.2018.00027 (published 2018-07-01)
Citations: 0
Personalized Impression-Based Music Information Retrieval Method
Yuta Uenoyama, A. Ogino
Along with the spread of music distribution services, there is growing interest in music information retrieval (MIR) systems. This research proposes a music search method that better matches the impression a listener seeks, by personalizing the sound model used in previous research. We use data collected from 15 subjects and select songs suited to each individual. The method estimates the impression of music using a personal sound model and a lyric model, following the rules of previous research. The impression to be estimated is one of three impressions that the subject clearly distinguished in a prior questionnaire. Fourteen subjects in their twenties evaluated the impressions of three pieces of music predicted by the proposed system. The results show that, for all but one impression, more than 75% of subjects were consistent with the impression suggested by the method.
DOI: 10.1109/CSII.2018.00032 (published 2018-07-01)
Citations: 2
Improvement of Emotional Video Scene Retrieval System for Lifelog Videos Based on Facial Expression Intensity
Kazuya Sugawara, Hiroki Nomiya, T. Hochin
Lifelog has been proposed, in which various data of daily life are acquired, accumulated, and utilized later. However, the necessary data cannot be retrieved immediately from the large amount of accumulated data, so lifelog data are not used effectively. This paper deals with lifelog videos. In order to make it easy to find the scene that the user wants to watch in lifelog videos, Morikuni tried to construct a system that retrieves the scenes considered important, based on changes in a person's facial expression, and presents them in an easy-to-understand manner. After that, "facial expression intensity," a numerical representation of facial expressions, was devised, and Maeda designed and constructed a video scene retrieval system for lifelog videos based on it. In this paper, we aim to improve the user interface of this retrieval system and to establish a method for estimating the threshold values of the facial expression intensity levels. We propose and implement a method that calculates the threshold values using k-means clustering, compare their performance with the threshold values of the previous method, and show that the performance is improved.
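Threshold estimation with k-means, as the abstract describes it, can be sketched for the one-dimensional case: cluster the observed intensity values into k groups and place each level boundary midway between adjacent cluster centers. This is a minimal illustration under assumed data; the paper's features and number of levels may differ.

```python
def kmeans_1d(values, k, iters=100):
    """Plain 1-D k-means; returns the sorted cluster centers.
    Initial centers are spread evenly over the sorted data."""
    vs = sorted(values)
    centers = [vs[i * (len(vs) - 1) // (k - 1)] for i in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for v in vs:
            # assign each value to its nearest center
            i = min(range(k), key=lambda j: abs(v - centers[j]))
            clusters[i].append(v)
        new = [sum(c) / len(c) if c else centers[i]
               for i, c in enumerate(clusters)]
        if new == centers:  # converged
            break
        centers = new
    return sorted(centers)

def intensity_thresholds(values, levels=3):
    """Level boundaries for expression intensity: midpoints between
    adjacent k-means cluster centers."""
    cs = kmeans_1d(values, levels)
    return [(a + b) / 2 for a, b in zip(cs, cs[1:])]
```

The appeal of this scheme, consistent with the abstract's goal, is that the thresholds adapt to the observed intensity distribution instead of being fixed by hand.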
DOI: 10.1109/CSII.2018.00026 (published 2018-07-01)
Citations: 2