
Proceedings of the 2018 International Conference on Intelligent Information Technology: Latest Publications

Human Emotion Classification from Brain EEG Signal Using Multimodal Approach of Classifier
N. Kimmatkar, V. Babu
Deeply understanding the brain's response under different emotional states can fundamentally advance computational models for emotion recognition. Various psychophysiology studies have demonstrated correlations between human emotions and EEG signals. With the rapid development of wearable devices and dry-electrode techniques, it is now possible to move EEG-based emotion recognition from the laboratory to real-world applications. In this paper we develop EEG-based emotion recognition models for three emotions: positive, neutral, and negative. Pre-extracted features are downloaded from the SEED database to test the classification method. The gamma band is selected because it relates to emotional states more closely than the other frequency bands. A linear dynamical system (LDS) is used to smooth the features before classification. Using the DE, ASM, DASM, and RASM features, the classification accuracy of the proposed system is 97.33, 89.33, and 98.37 for SVM (linear), SVM (RBF, sigma 6), and KNN (n = 3), respectively.
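As a minimal sketch of the KNN (n = 3) classification step described above, the following pure-Python example votes among the three nearest neighbors by Euclidean distance. The toy feature vectors and labels are illustrative, not the SEED features used in the paper.

```python
from collections import Counter
import math

def knn_predict(train, query, k=3):
    """Classify `query` by majority vote among the k nearest
    training vectors (Euclidean distance), as in KNN with n = 3."""
    dists = sorted((math.dist(x, query), label) for x, label in train)
    votes = Counter(label for _, label in dists[:k])
    return votes.most_common(1)[0][0]

# Toy stand-ins for smoothed gamma-band feature vectors.
train = [
    ((0.9, 0.8), "positive"), ((1.0, 0.9), "positive"),
    ((0.0, 0.1), "neutral"),  ((0.1, 0.0), "neutral"),
    ((-0.9, -0.8), "negative"), ((-1.0, -1.0), "negative"),
]
print(knn_predict(train, (0.95, 0.85)))  # → positive
```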
Citations: 15
Towards a Biographic Knowledge-based Story Ontology System
J. Yeh
In this article, we illustrate several semantic-web technologies and design a set of ontology knowledge structures based on biographical history, expressed in the OWL markup language, which we call BKOnto. BKOnto is a framework for processing biographical information on the semantic web, including biographical events, temporal and spatial relationships, related personal information, and more. We elaborate on this ontology architecture and explain how to use BKOnto as a basis for more domain-specific knowledge representation. In BKOnto, we use OWL to define the main components of the knowledge structure of a biography: the StoryLine and the biographical Event. A StoryLine organizes the superstructure of a biography and can describe the life of a particular person. A biographical Event is the basic unit, describing content drawn from historical data together with its related space-time factors. BKOnto's design is based on this StoryLine and Event infrastructure, on top of which we developed an ontology knowledge-building system called StoryTeller. StoryTeller can be used to construct knowledge about the people and things in a biography and to form a complete biographical story; it organizes a story line along a timeline, with events relating multiple people and things as its basic units.

An event unit not only describes the related human affairs but also records time and space factors, which position the unit within the story line. As a result, a story line with multiple event units can present rich information about people and things together with their associated spatiotemporal features. In addition, to support digital collection systems, we link individual event units to the digital collections of their information sources, so that more diverse digital collections can be presented in the future. An empirical study uses the Mackay Digital Archives Project (http://dlm.csie.au.edu.tw/) as a source of information to demonstrate the ontology knowledge-building process for Mackay's biographical stories, along with the related digital collections.
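The StoryLine/Event structure above is defined in OWL in the paper; the following Python sketch only mirrors its shape (events carrying time and space factors, ordered along a timeline). The sample events and field names are illustrative assumptions, not BKOnto's actual schema.

```python
from dataclasses import dataclass, field

@dataclass
class Event:
    """Basic unit: a biographical event with its space-time factors."""
    description: str
    year: int          # time factor
    place: str         # space factor

@dataclass
class StoryLine:
    """Superstructure organizing event units along a timeline."""
    person: str
    events: list = field(default_factory=list)

    def add(self, event):
        self.events.append(event)
        self.events.sort(key=lambda e: e.year)  # keep timeline order

story = StoryLine("George Leslie Mackay")
story.add(Event("Founds Oxford College", 1882, "Tamsui"))
story.add(Event("Arrives in Tamsui", 1872, "Tamsui"))
print([e.year for e in story.events])  # → [1872, 1882]
```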
Citations: 2
Fast Hemorrhage Detection in Brain CT Scan Slices Using Projection Profile Based Decision Tree
Sinachettra Thay, P. Aimmanee, Bunyarit Uyyanavara, Pataravit Rukskul
Detecting a hemorrhage in CT scan slices is one of the crucial steps for a neurosurgeon in diagnosing abnormalities in a patient's brain and assessing their severity. It is usually time-consuming, as a CT scanner can produce as many as 256 slices per patient. In this paper, we introduce automatic hemorrhage detection in brain CT slices using a feature-based approach. We employ a decision tree on 8 features to classify slices into two classes: with and without signs of hemorrhage. The proposed method is tested on 1,451 CT scan slices, achieves a classification accuracy of up to 99%, and takes 0.12 seconds to detect a slice.
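The projection profiles named in the title are row and column intensity sums of a slice; the paper's 8 decision-tree features are not listed in the abstract, so this sketch only shows the profile computation they would be derived from, on a toy slice.

```python
def projection_profiles(slice2d):
    """Row and column projection profiles of a 2D intensity slice:
    each profile entry is the sum of intensities along one axis."""
    horizontal = [sum(row) for row in slice2d]
    vertical = [sum(col) for col in zip(*slice2d)]
    return horizontal, vertical

# Toy 3x3 "slice" with a bright (hyperdense) region in the centre row.
toy = [
    [0, 1, 0],
    [5, 9, 5],
    [0, 1, 0],
]
h, v = projection_profiles(toy)
print(h)  # → [1, 19, 1]
print(v)  # → [5, 11, 5]
```

Peaks in such profiles (their height, width, and position) are the kind of scalar summaries a decision tree could split on.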
Citations: 4
OD Localization Using Rotational 2D Vessel Projection with Decision Tree Classification
Bodeetorn Sutcharit, P. Aimmanee, Pongsate Tangseng
Automatic optic disc (OD) localization is an important problem in ophthalmic image processing: knowing the OD's location helps doctors detect preventable eye diseases early. A fast and accurate OD localization algorithm based on the vessel projection technique tends to fail when the OD in the image is unusually pale. Inspired by it, we employ a decision tree with 5 features to improve the accuracy of the existing algorithm. To overcome poor accuracy on tilted images, we repeatedly run the improved algorithm on a series of images tilted at different angles from the original and take the voted OD location. The proposed method has been tested with starting angles between 0 and 180 degrees on the Structured Analysis of the Retina (STARE) and retinopathy of prematurity (ROP) datasets. We achieve an average accuracy of up to 86%, with an average computation time of only 13 seconds per image. Our approach outperforms two baseline approaches, Mahfouz and Rotational 2D Vessel Projection (RVP), by up to 34% and 12%, respectively.
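The abstract does not specify how votes from the rotated runs are aggregated; one plausible sketch, shown below under that assumption, snaps each per-rotation candidate to a coarse grid and returns the majority cell.

```python
from collections import Counter

def vote_location(candidates, grid=10):
    """Aggregate OD candidates detected at different rotation angles:
    snap each (x, y) to a coarse grid cell and return the cell with
    the most votes."""
    cells = Counter((round(x / grid), round(y / grid)) for x, y in candidates)
    (cx, cy), _ = cells.most_common(1)[0]
    return cx * grid, cy * grid

# Candidates from runs at several angles, most agreeing near (120, 80);
# the last one is an outlier from a failed rotation.
detections = [(118, 79), (122, 81), (119, 80), (240, 10)]
print(vote_location(detections))  # → (120, 80)
```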
Citations: 1
Spoken Term Detection of Zero-Resource Language using Machine Learning
A. Ito, Masatoshi Koizumi
In this paper, we propose a spoken term detection method for zero-resource languages. The proposed method uses a classifier (a speech comparator) trained by machine learning, combined with dynamic time warping (DTW). Its advantage is that the classifier can be trained on a large language resource different from the target language. We used a random forest as the classifier and carried out a spoken term detection experiment on Kaqchikel speech. The proposed method showed better detection performance than a method based on the Euclidean distance.
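The DTW component can be sketched directly. The version below uses a plain absolute-difference local cost on 1-D sequences; in the paper the local cost would instead come from the trained speech comparator applied to frame pairs.

```python
def dtw(a, b):
    """Dynamic time warping distance between two 1-D sequences,
    with |x - y| as the local frame cost."""
    inf = float("inf")
    n, m = len(a), len(b)
    d = [[inf] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            d[i][j] = cost + min(d[i - 1][j],      # insertion
                                 d[i][j - 1],      # deletion
                                 d[i - 1][j - 1])  # match
    return d[n][m]

print(dtw([1, 2, 3], [1, 2, 2, 3]))  # → 0.0 (warping absorbs the repeat)
print(dtw([1, 2, 3], [1, 3, 3]))     # → 1.0
```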
Citations: 1
A Robust Liver Segmentation in CT-images Using 3D Level-Set Developed with the Edge and the Region Information
Thanh-Sach Le, D. Tran
CT images are used widely in hospitals around the world. Segmenting the liver from CT images is important because it gives doctors a clear view of the liver with rendering tools, and the segmentation result is also useful for radiotherapy. However, liver segmentation is challenging because of the liver's geometric structure and position, and because the liver and its neighboring organs have similar voxel intensities. In this paper, we propose a method to segment the liver from CT images by modeling the segmentation with a proposed level-set method in 3D space. In combination with the proposed 3D level-set method, we combine edge information and region information in the level-set energy function. The experimental results are compared with manual segmentations performed by clinical experts and with recently developed liver segmentation methods. Our proposed method segments more accurately than the others, and it produces a smoother surface than the compared methods.
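A minimal sketch of the region part of such an energy, in the Chan-Vese style (squared deviation of each pixel from the mean of its own region), is below; the paper's full functional would additionally include an edge term (e.g. an edge-indicator-weighted contour length) and operate on a 3D level-set function, which this toy 2D example does not model.

```python
def region_energy(image, mask):
    """Chan-Vese style region term: squared deviation of each pixel
    from the mean intensity of its own region (inside vs outside)."""
    inside  = [v for row, mrow in zip(image, mask)
                 for v, m in zip(row, mrow) if m]
    outside = [v for row, mrow in zip(image, mask)
                 for v, m in zip(row, mrow) if not m]
    c_in  = sum(inside) / len(inside)
    c_out = sum(outside) / len(outside)
    e  = sum((v - c_in) ** 2 for v in inside)
    e += sum((v - c_out) ** 2 for v in outside)
    return e

img = [
    [9, 9, 1],
    [9, 9, 1],
    [1, 1, 1],
]
good = [[1, 1, 0], [1, 1, 0], [0, 0, 0]]  # covers the bright organ
bad  = [[1, 0, 0], [0, 0, 0], [0, 0, 0]]  # misses part of it
print(region_energy(img, good) < region_energy(img, bad))  # → True
```

The level-set evolution would move the contour to decrease this energy, so the well-fitting mask scoring lower is exactly the behavior the term encodes.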
Citations: 1
Simulation Wireless Sensor Networks in Castalia
K. Ngo, T. Huynh, D. Huynh
The wide application of wireless sensor networks (WSNs) has triggered many developments in this area. Properly selecting a network simulator plays an important role in developing WSN routing and MAC protocols, since simulators differ in their performance focus. This paper aims to provide adequate guidance for simulation in Castalia, which is suitable for the low-power sensor nodes deployed in large-scale wireless sensor networks. LEACH, a well-known routing protocol, is used for demonstration alongside the basic guidance. Lastly, manipulating the data extracted from Castalia's results is documented, to give a broad understanding of how Castalia aids WSN research.
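The LEACH protocol demonstrated here elects cluster heads probabilistically: in round r, a node not yet elected in the current epoch becomes head if a random draw falls below the threshold T(n) = p / (1 - p * (r mod 1/p)), where p is the desired head fraction. A sketch of that election rule (independent of Castalia, which is C++/OMNeT++):

```python
import random

def leach_threshold(p, r):
    """LEACH cluster-head threshold T(n) = p / (1 - p * (r mod 1/p))
    for a node that has not served as head in the current epoch."""
    return p / (1 - p * (r % (1 / p)))

def elect_heads(node_ids, p, r, rng):
    """A node becomes cluster head when its draw falls below T(n)."""
    t = leach_threshold(p, r)
    return [n for n in node_ids if rng.random() < t]

rng = random.Random(42)
heads = elect_heads(range(100), p=0.05, r=0, rng=rng)
print(len(heads))  # roughly p * 100 heads in round 0
```

Note that T(n) rises over the epoch and reaches 1 in its last round (r = 19 for p = 0.05), guaranteeing every remaining node serves as head once per epoch.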
Citations: 10
OpF-STM: Optimized Persistent Overhead Fail-safety Software Transactional Memory in Non-volatile Memory
Jihyun Kim, Y. Won
Non-volatile memories such as PCM and 3D XPoint have emerged, and with them software platforms for managing non-volatile memory areas. Recently these platforms have begun to support persistent transactional memory (PTM), which provides a transaction system and guarantees crash consistency of transactions at the main-memory level. To ensure crash consistency, a PTM system must use hardware instructions frequently, because the persistence boundary has moved from volatile memory/storage to volatile cache/non-volatile memory. This has a large adverse effect on PTM systems. In this paper, we propose three techniques. An append-only dynamic log supports a compact, dynamic log area. Lazy and bulk persistence aggressively delays the persistence phase until the commit phase. Non-temporal persistence provides an enhanced memory-copy function. These techniques aim to reduce persistence overhead as much as possible. Our results show that they improve transaction performance by 117% / 140% on average.
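The append-only log with lazy, bulk persistence can be sketched abstractly: buffer a transaction's writes and issue one bulk append plus one persist at commit, rather than a flush per store. This toy model is an illustration of the idea only; the real system issues hardware cache-line flush and fence instructions, which Python cannot express.

```python
class PTMLog:
    """Sketch of an append-only redo log with lazy, bulk persistence:
    a transaction's writes reach the log (and a persist barrier)
    only once, at commit."""
    def __init__(self):
        self.log = []        # stands in for the NVM log area
        self.memory = {}     # stands in for main-memory state
        self.flushes = 0     # counts persist-barrier-style operations

    def commit(self, writes):
        self.log.append(dict(writes))  # one bulk append per transaction
        self.flushes += 1              # single persist at commit
        self.memory.update(writes)

ptm = PTMLog()
ptm.commit({"a": 1, "b": 2, "c": 3})   # three writes, one flush
print(ptm.flushes)  # → 1
```

With eager persistence the same transaction would have cost three flushes, which is the overhead the lazy/bulk scheme removes.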
Citations: 0
An OSGi Monitoring System to Support Dynamicity and to Enhance Fault Tolerance of OSGi Systems
Yufang Dan, N. Stouls, S. Frénot
This work addresses the problem of monitoring stateful services on a dynamic service-oriented architecture (SOA) such as OSGi. In such an architecture, services may appear and disappear, and if a used service disappears the client does not receive any notification. Classical monitoring approaches that statically link monitors into services therefore cannot be used. In this paper, we propose an OSGi-based runtime monitoring system that enforces security and self-healability for dynamic services. To monitor coherent usage of stateful services, a transactional approach is defined to preserve the current run and the collected data. To validate this solution, we give implementation guidelines based on the OSGi platform.
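The transactional idea can be sketched language-neutrally (the actual system is OSGi/Java): a monitor records the calls of the current run so that, if the service vanishes mid-sequence, the collected state survives and can be replayed on a replacement service. All class and method names below are hypothetical illustrations, not the paper's API.

```python
class ServiceMonitor:
    """Sketch: buffer the calls of the current run on a stateful
    service; if the service disappears, the buffered run can be
    replayed to re-establish state on a replacement service."""
    def __init__(self):
        self.pending = []   # collected calls of the current run

    def record(self, call):
        self.pending.append(call)

    def replay(self, service):
        for name, arg in self.pending:
            getattr(service, name)(arg)
        self.pending.clear()

class CounterService:
    def __init__(self): self.total = 0
    def add(self, n): self.total += n

mon = ServiceMonitor()
mon.record(("add", 3))
mon.record(("add", 4))        # the original service vanishes here
replacement = CounterService()
mon.replay(replacement)       # state re-established transparently
print(replacement.total)  # → 7
```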
Citations: 0
An Overview of Techniques for Confirming Big Data Property Rights
Su Cheng, Haijun Zhao
The major premise of big data circulation is identifying the ownership of data resources. This paper summarizes feasible techniques and methods for confirming big data property rights: data citation, data provenance, reversible data hiding, computer forensics, and blockchain technology. By comprehensively applying these techniques and methods, based on the coupling interfaces between them, the ownership of information property of different sizes, formats, and storage conditions on distributed heterogeneous platforms can be confirmed in big data practice.
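Of the techniques listed, the blockchain one lends itself to a small sketch: each provenance record hashes over its predecessor, so tampering with an earlier ownership record is detectable. This is a generic hash chain, not any specific system from the survey.

```python
import hashlib, json

GENESIS = "0" * 64

def add_block(chain, record):
    """Append a provenance record whose hash covers the previous
    block, forming a tamper-evident (blockchain-style) chain."""
    prev = chain[-1]["hash"] if chain else GENESIS
    body = json.dumps({"record": record, "prev": prev}, sort_keys=True)
    chain.append({"record": record, "prev": prev,
                  "hash": hashlib.sha256(body.encode()).hexdigest()})

def verify(chain):
    """Recompute every hash and prev-link; False on any mismatch."""
    for i, block in enumerate(chain):
        prev = chain[i - 1]["hash"] if i else GENESIS
        body = json.dumps({"record": block["record"], "prev": prev},
                          sort_keys=True)
        if block["prev"] != prev:
            return False
        if block["hash"] != hashlib.sha256(body.encode()).hexdigest():
            return False
    return True

chain = []
add_block(chain, "dataset created by owner A")
add_block(chain, "dataset licensed to B")
print(verify(chain))  # → True
chain[0]["record"] = "dataset created by owner C"  # tampering
print(verify(chain))  # → False
```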
大数据流通的大前提是数据资源的归属。本文总结了一些可行的大数据属性确认技术和方法,分别是数据引用技术、数据溯源技术、数据可逆隐藏技术、计算机取证技术和区块链技术。在大数据实践中,基于这些技术和方法之间的耦合接口,综合运用这些技术和方法来确定分布式异构平台上不同大小、不同格式、不同存储条件的信息属性的归属。
{"title":"An Overview of Techniques for Confirming Big Data Property Rights","authors":"Su Cheng, Haijun Zhao","doi":"10.1145/3193063.3193069","DOIUrl":"https://doi.org/10.1145/3193063.3193069","url":null,"abstract":"The major premise of big data circulation is identifying the ownership of data resources. This paper surveys feasible techniques and methods for confirming big data property rights: data citation, data provenance, reversible data hiding, computer forensics, and blockchain technology. By applying these techniques together through the coupling interfaces between them, the ownership of information assets of differing sizes, formats, and storage conditions on distributed heterogeneous platforms can be confirmed in big data practice.","PeriodicalId":429317,"journal":{"name":"Proceedings of the 2018 International Conference on Intelligent Information Technology","volume":"123 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-02-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129166145","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
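Of the techniques the survey lists, blockchain-style hash chaining is the easiest to illustrate. The following is a minimal, hypothetical sketch (not the survey's concrete scheme — class and field names are invented for illustration): an append-only ledger in which each ownership claim commits to the previous entry's hash, so tampering with any earlier claim invalidates every later one:

```python
import hashlib
import json


def _digest(record):
    # Deterministic hash of a record's contents (sorted keys so the
    # serialization, and therefore the hash, is reproducible).
    return hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()


class ProvenanceChain:
    """Append-only, hash-chained ledger of ownership claims."""

    def __init__(self):
        self.entries = []

    def claim(self, owner, data):
        # Each claim records a fingerprint of the data and the hash of
        # the previous entry, then is sealed with its own hash.
        prev_hash = self.entries[-1]["hash"] if self.entries else "0" * 64
        body = {
            "owner": owner,
            "data_fingerprint": hashlib.sha256(data).hexdigest(),
            "prev_hash": prev_hash,
        }
        entry = dict(body, hash=_digest(body))
        self.entries.append(entry)
        return entry["hash"]

    def verify(self):
        # Re-derive every hash and check the chain links; any edit to an
        # earlier entry breaks all subsequent links.
        prev = "0" * 64
        for e in self.entries:
            body = {k: e[k] for k in ("owner", "data_fingerprint", "prev_hash")}
            if e["prev_hash"] != prev or e["hash"] != _digest(body):
                return False
            prev = e["hash"]
        return True


chain = ProvenanceChain()
chain.claim("alice", b"dataset-v1")
chain.claim("bob", b"dataset-v1-derived")
print(chain.verify())                  # True
chain.entries[0]["owner"] = "mallory"  # tamper with an earlier claim
print(chain.verify())                  # False
```

A production scheme would distribute this ledger across nodes and combine it with the survey's other techniques (e.g. reversible data hiding for in-band ownership marks), but the tamper-evidence property shown here is the core of the blockchain approach to confirming data property rights.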