Anomaly detection is vital for automated data analysis, with applications spanning almost every domain. In this paper, we propose a hybrid supervised learning method for anomaly detection that combines frequent itemset mining and random forests through an ensemble probabilistic voting scheme. The method outperforms alternative supervised learning approaches on the measures commonly used for anomaly detection: accuracy, true positive rate (i.e. recall) and false positive rate. To support this claim, we evaluate the proposed approach on a benchmark dataset; the results illustrate its benefits.
{"title":"Ensemble learning using frequent itemset mining for anomaly detection","authors":"Saeid Soheily-Khah, Yiming Wu","doi":"10.5121/csit.2019.90931","DOIUrl":"https://doi.org/10.5121/csit.2019.90931","url":null,"abstract":"Anomaly detection is vital for automated data analysis, with specific applications spanning almost every domain. In this paper, we propose a hybrid supervised learning of anomaly detection using frequent itemset mining and random forest with an ensemble probabilistic voting method, which outperforms the alternative supervised learning methods through the commonly used measures for anomaly detection: accuracy, true positive rate (i.e. recall) and false positive rate. To justify our claim, a benchmark dataset is used to evaluate the efficiency of the proposed approach, where the results illustrate its benefits.","PeriodicalId":248929,"journal":{"name":"9th International Conference on Computer Science, Engineering and Applications (CCSEA 2019)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-07-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114456839","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Many applications in mesh processing require the detection of feature lines, which convey the inherent features of a shape. Existing techniques for finding feature lines on discrete surfaces rely on user-specified thresholds and are inaccurate and time-consuming. We use an automatic approximation technique to estimate the optimal threshold for detecting feature lines. Several examples show that our method is effective and improves the visualization of feature lines.
{"title":"Automatic Extraction of Feature Lines on 3D Surface","authors":"Zhihong Mao, Ruichao Wang, Yu-lin Zhou","doi":"10.5121/CSIT.2019.90901","DOIUrl":"https://doi.org/10.5121/CSIT.2019.90901","url":null,"abstract":"Many applications in mesh processing require the detection of feature lines. Feature lines convey the inherent features of the shape. Existing techniques to find feature lines in discrete surfaces are relied on user-specified thresholds, inaccurate and time-consuming. We use an automatic approximation technique to estimate the optimal threshold for detecting feature lines. Some examples are presented to show our method is effective, which leads to improve the feature lines visualization.","PeriodicalId":248929,"journal":{"name":"9th International Conference on Computer Science, Engineering and Applications (CCSEA 2019)","volume":"157 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-07-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123472479","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cloud computing is one of the smartest technologies in the era of computing, owing to its capability to decrease the cost of data processing while increasing the flexibility and scalability of computer processes. Security is one of the core concerns about cloud computing, as it hinders organizations from adopting the technology. Infrastructure as a Service (IaaS) is one of the main services of cloud computing; it uses virtualization to supply virtualized computing resources to users over the internet. The Virtual Machine Image is a key component in the cloud, as it is used to run an instance. Being an essential component of cloud computing, the virtual machine image raises security issues that need to be analysed. Some studies have provided countermeasures for the identified security threats. However, no study has attempted to synthesize security threats and their corresponding vulnerabilities, and these studies neither modelled nor classified security threats to determine their effect on the Virtual Machine Image. This paper therefore provides a threat modelling approach to identify threats that affect the virtual machine image. Furthermore, each individual threat is classified to determine its effect on cloud computing. A potential attack is drawn to show how an adversary might exploit weaknesses in the system to attack the Virtual Machine Image.
{"title":"Threat Modelling for the Virtual Machine Image in Cloud Computing","authors":"R. K. Hussein, V. Sassone","doi":"10.5121/CSIT.2019.90911","DOIUrl":"https://doi.org/10.5121/CSIT.2019.90911","url":null,"abstract":"Cloud computing is one of the most smart technology in the era of computing as its capability todecrease the cost of data processing while increasing flexibility and scalability for computer processes. Security is one of the core concerns related to the cloud computing as it hinders the organizations to adopt this technology. Infrastructure as a service (IaaS) is one of the main services of cloud computing which uses virtualization to supply virtualized computing resources to its users through the internet. Virtual Machine Image is the key component in the cloud as it is used to run an instance. There are security issues related to the virtual machine image that need to be analysed as being an essential component related to the cloud computing. Some studies were conducted to provide countermeasure for the identify security threats. However, there is no study has attempted to synthesize security threats and corresponding vulnerabilities. In addition, these studies did not model and classified security threats to find their effect on the Virtual Machine Image. Therefore, this paper provides a threat modelling approach to identify threats that affect the virtual machine image. Furthermore, threat classification is carried out to each individual threat to find out their effects on the cloud computing. 
Potential attack was drawn to show how an adversary might exploit the weakness in the system to attack the Virtual Machine Image.","PeriodicalId":248929,"journal":{"name":"9th International Conference on Computer Science, Engineering and Applications (CCSEA 2019)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-07-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129158591","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In this paper, a new no-reference image quality assessment (NR-IQA) metric for grey images is proposed and evaluated on the LIVE II image database. The features used are extracted from three well-known NR-IQA objective metrics based on natural scene statistical attributes from three different domains. These metrics may contain redundant, noisy or less informative features that affect quality score prediction. To overcome this drawback, the first step of our work selects the most relevant image quality features using dominant eigenvectors obtained by Singular Value Decomposition (SVD). The second step employs a Relevance Vector Machine (RVM) to learn the mapping between the selected features and human opinion scores. Simulations demonstrate that the proposed metric performs very well in terms of correlation and monotonicity.
{"title":"Blind Image Quality Assessment Using Singular Value Decomposition Based Dominant Eigenvectors for Feature Selection","authors":"B. Sadou, A. Lahoulou, T. Bouden, Anderson R. Avila, T. Falk, Z. Akhtar","doi":"10.5121/CSIT.2019.90919","DOIUrl":"https://doi.org/10.5121/CSIT.2019.90919","url":null,"abstract":"In this paper, a new no-reference image quality assessment (NR-IQA) metric for grey images is proposed using LIVE II image database. The features used are extracted from three well-known NR-IQA objective metrics based on natural scene statistical attributes from three different domains. These metrics may contain redundant, noisy or less informative features which affect the quality score prediction. In order to overcome this drawback, the first step of our work consists in selecting the most relevant image quality features by using Singular Value Decomposition (SVD) based dominant eigenvectors. The second step is performed by employing Relevance Vector Machine (RVM) to learn the mapping between the previously selected features and human opinion scores. Simulations demonstrate that the proposed metric performs very well in terms of correlation and monotonicity.","PeriodicalId":248929,"journal":{"name":"9th International Conference on Computer Science, Engineering and Applications (CCSEA 2019)","volume":"108 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-07-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128938359","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This paper presents an approach to reconstructing 3D objects based on the generation of a dense depth map. From two 2D images (a stereo pair) of the same 3D object, taken from different points of view, a new grayscale image is estimated. It is an intermediate image between a purely 2D image and a 3D image, in which each pixel represents a z-height according to its grey-level value. Our objective is therefore to tune the precision of this map in order to demonstrate its interest and effectiveness for the quality of the reconstruction.
{"title":"Three-Dimensional Reconstruction Using the Depth Map","authors":"A. Abderrahmani, R. Lasri, K. Satori","doi":"10.5121/CSIT.2019.90922","DOIUrl":"https://doi.org/10.5121/CSIT.2019.90922","url":null,"abstract":"This paper presents an approach to reconstructing 3D objects based on the generation of dense depth map. From a two 2D images (a pair of images) of the same 3D object, taken from different points of view, a new grayscale image is estimated. It is an intermediate image between a purely 2D image and a 3D image where each pixel of this image represents a z-height according to its gray level value. Our objective therefore is to play on the precision of this map in order to prove the interest and effectiveness of this map on the quality of the reconstruction.","PeriodicalId":248929,"journal":{"name":"9th International Conference on Computer Science, Engineering and Applications (CCSEA 2019)","volume":"44 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-07-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133924132","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Ubiquitous computing and context-aware applications are currently undergoing very important development. This has led organizations to open up more of their information systems, making them available anywhere, at any time, and integrating the dimension of mobile users. This cannot be done without thoughtfully taking access security into account: a pervasive information system must henceforth be able to take contextual features into account to ensure robust access control. In this paper, access control and a few existing mechanisms are reviewed, with the aim of showing the importance of taking context into account during an access request. In this regard, our proposal incorporates the concept of trust to establish a trust relationship according to three contextual constraints (location, social situation and time) in order to decide whether to grant or deny a user's access request to a service.
{"title":"Context-Aware Trust-Based Access Control for Ubiquitous Systems","authors":"M. Yaici, Faiza Ainennas, Nassima Zidi","doi":"10.5121/CSIT.2019.90902","DOIUrl":"https://doi.org/10.5121/CSIT.2019.90902","url":null,"abstract":"The ubiquitous computing and context-aware applications experience at the present time a very important development. This has led organizations to open more of their information systems, making them available anywhere, at any time and integrating the dimension of mobile users. This cannot be done without taking into account thoughtfully the access security: a pervasive information system must henceforth be able to take into account the contextual features to ensure a robust access control. In this paper, access control and a few existing mechanisms have been exposed. It is intended to show the importance of taking into account context during a request for access. In this regard, our proposal incorporates the concept of trust to establish a trust relationship according to three contextual constraints (location, social situation and time) in order to decide to grant or deny the access request of a user to a service.","PeriodicalId":248929,"journal":{"name":"9th International Conference on Computer Science, Engineering and Applications (CCSEA 2019)","volume":"26 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-07-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114181233","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Background: Assessment of liver fibrosis is vital for therapeutic decisions and prognostic evaluation of chronic hepatitis. Liver biopsy is considered the definitive investigation for staging liver fibrosis, but it carries several limitations, and FIB-4 and APRI also have limited accuracy. The National Committee for Control of Viral Hepatitis (NCCVH) in Egypt has supplied a valuable pool of electronic patient data that data mining techniques can analyse to disclose hidden patterns and trends, leading to the development of predictive algorithms. Aim: To collaborate with physicians to develop a novel, reliable, easy-to-comprehend noninvasive model that predicts the stage of liver fibrosis from routine workup, without imposing extra costs for additional examinations, especially in areas with limited resources such as Egypt. Methods: This multi-centred retrospective study included baseline demographic, laboratory and histopathological data of 69106 patients with chronic hepatitis C. We started with data collection, preprocessing, cleansing and formatting for knowledge discovery from Electronic Health Records (EHRs). Data mining was used to build a decision tree (Reduced Error Pruning tree, REP tree) with 10-fold internal cross-validation. Histopathology results were used to assess accuracy for fibrosis stages. Machine learning feature selection and reduction (CfsSubsetEval / best first) reduced the initial number of input features (N=15) to the most relevant ones (N=6) for developing the prediction model. Results: In this study, 32419 patients had F(0-1), 25073 had F(2) and 11615 had F(3-4). Revalidation of FIB-4 and APRI in our study showed low accuracy and high discordance with biopsy results, with overall AUCs of 0.68 and 0.58 respectively. Out of 15 attributes, machine learning selected age, AFP, AST, glucose, albumin and platelets as the most relevant. The REP tree achieved an overall classification accuracy of up to 70% and a ROC area of 0.74, which was hardly affected by attribute reduction and pruning. Moreover, attribute reduction and tree pruning yielded a simpler model that is easier for physicians to understand and faster to execute. Conclusion: In this study we had the chance to examine a large cohort of 69106 chronic hepatitis patients with available liver biopsy results to revise and validate the accuracy of FIB-4 and APRI. This study represents a collaboration between computer scientists and hepatologists to provide clinicians with an accurate, novel, reliable, noninvasive model to predict the stage of liver fibrosis.
{"title":"Attribute Reduction and Decision Tree Pruning to Simplify Liver Fibrosis Prediction Algorithms A Cohort Study","authors":"M. Mabrouk, Abubakr Awad, H. Shousha, Wafaa Alake, A. Salama, T. Awad","doi":"10.5121/CSIT.2019.90927","DOIUrl":"https://doi.org/10.5121/CSIT.2019.90927","url":null,"abstract":"Background: Assessment of liver fibrosis is a vital need for enabling therapeutic decisions and prognostic evaluations of chronic hepatitis. Liver biopsy is considered the definitive investigation for assessing the stage of liver fibrosis but it carries several limitations. FIB-4 and APRI also have a limited accuracy. The National Committee for Control of Viral Hepatitis (NCCVH) in Egypt has supplied a valuable pool of electronic patients’ data that data mining techniques can analyze to disclose hidden patterns, trends leading to the evolution of predictive algorithms. Aim: to collaborate with physicians to develop a novel reliable, easy to comprehend noninvasive model to predict the stage of liver fibrosis utilizing routine workup, without imposing extra costs for additional examinations especially in areas with limited resources like Egypt. Methods: This multi-centered retrospective study included baseline demographic, laboratory, and histopathological data of 69106 patients with chronic hepatitis C. We started by data collection preprocessing, cleansing and formatting for knowledge discovery of useful information from Electronic Health Records EHRs. Data mining has been used to build a decision tree (Reduced Error Pruning tree (REP tree)) with 10-fold internal cross-validation. Histopathology results were used to assess accuracy for fibrosis stages. Machine learning feature selection and reduction (CfsSubseteval / best first) reduced the initial number of input features (N=15) to the most relevant ones (N=6) for developing the prediction model. Results: In this study, 32419 patients had F(0-1), 25073 had F(2) and 11615 had F(3-4). 
FIB-4 and APRI revalidation in our study showed low accuracy and high discordance with biopsy results, with overall AUC 0.68 and 0.58 respectively. Out of 15 attributes machine learning selected Age, AFP, AST, glucose, albumin, and platelet as the most relevant attributes. Results for REP tree indicated an overall classification accuracy up to 70% and ROC Area 0.74 which was not nearly affected by attribute reduction, and pruning . However attribute reduction, and tree pruning were associated with simpler model easy to understand by physician with less time for execution. Conclusion: This study we had the chance to study a large cohort of 69106 chronic hepatitis patients with available liver biopsy results to revise and validate the accuracy of FIB-4 and APRI. This study represents the collaboration between computer scientist and hepatologists to provide clinicians with an accurate novel and reliable, noninvasive model to predict the stage of liver fibrosis.","PeriodicalId":248929,"journal":{"name":"9th International Conference on Computer Science, Engineering and Applications (CCSEA 2019)","volume":"43 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-07-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114438190","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
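WEKA's REP tree (reduced-error pruning) has no direct scikit-learn equivalent, but the paper's central effect, that pruning shrinks the tree without destroying accuracy, can be illustrated with cost-complexity pruning on synthetic stand-in data. The six features below merely mimic the six selected attributes; nothing here reproduces the study's data or results:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for the six selected attributes (age, AFP, AST,
# glucose, albumin, platelets); labels are a noisy function of two of them.
rng = np.random.default_rng(0)
X = rng.normal(size=(600, 6))
y = (X[:, 0] + 0.5 * X[:, 2] + 0.3 * rng.normal(size=600) > 0).astype(int)

# An unpruned tree memorizes the noise; cost-complexity pruning
# (ccp_alpha > 0) trades a little training accuracy for a far smaller tree.
unpruned = DecisionTreeClassifier(random_state=0).fit(X, y)
pruned = DecisionTreeClassifier(random_state=0, ccp_alpha=0.01).fit(X, y)
print(unpruned.tree_.node_count, pruned.tree_.node_count)
```

The smaller pruned tree is the property the paper values clinically: fewer nodes mean a decision path a physician can read and apply quickly.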
Three major factors can affect the quality of Voice over Internet Protocol (VoIP) phone services: packet delay, packet loss, and jitter. This study focuses specifically on the VoIP phone services that cable companies offer to customers over broadband hybrid fiber-coaxial (HFC) networks. HFC networks typically carry three types of traffic: voice, data, and video. Unlike data and video, even small delays or packet losses can noticeably degrade a VoIP phone conversation. We examine various differentiated services code point (DSCP) markings, then analyze and assess their impact on VoIP's quality of service (QoS). The study mimics a production environment and examines the relationship between specific DSCP marking configurations and call quality. It avoids automated test calls, focusing instead on human-made test calls, and relies on users' experience and the captured data to support its findings.
{"title":"Optimizing DSCP Marking to Ensure VoIP’s QoS over HFC Network","authors":"Shaher Daoud, Yanzhen Qu","doi":"10.5121/CSIT.2019.90928","DOIUrl":"https://doi.org/10.5121/CSIT.2019.90928","url":null,"abstract":"Three major factors that can affect Voice over Internet Protocol (VoIP) phone services’ quality, these include packet delay, packet loss, and jitter. The focus of this study is specific to the VoIP phone services offered to customers by cable companies that utilize broadband hybrid fiber coaxial (HFC) networks. HFC networks typically carry three types of traffic that include voice, data, and video. Unlike data and video, some delays or packet loss can result in a noticeable degraded impact on a VoIP’s phone conversation. We will examine various differentiated services code point (DSCP) marking, then analyze and assess their impact on VoIP’s quality of service (QoS). This study mimics the production environment. It examines the relationship between specific DSCP marking’s configuration. This research avoids automated test calls and rather focuses on human made call testing. This study relies on users’ experience and the captured data to support this research’s findings.","PeriodicalId":248929,"journal":{"name":"9th International Conference on Computer Science, Engineering and Applications (CCSEA 2019)","volume":"131 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-07-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127026976","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Oral cancer is one of the most widespread tumors of the head and neck region. An earlier diagnosis can help dentists devise a better therapy plan and give patients better treatment, so reliable techniques for detecting oral cancer cells are urgently required. This study proposes an optical, automated method using reflection images obtained with a scanned laser pico-projection system and the Gray-Level Co-occurrence Matrix for sampling. Moreover, a machine learning technique, the Support Vector Machine, was used to classify the samples. Normal oral keratinocytes and dysplastic oral keratinocytes, simulating the evolution of cancer, were classified, and the accuracy in distinguishing the two cell types reached 85.22%. Compared to existing diagnostic methods, the proposed method possesses many advantages, including lower cost, a larger sample size, and instant, non-invasive, more reliable diagnostic performance. As a result, it provides a highly promising solution for the early diagnosis of oral squamous carcinoma.
{"title":"Construction Of an Oral Cancer Auto-Classify system Based On Machine-Learning for Artificial Intelligence","authors":"Meng-Jia Lian, C. Huang, T. Lee","doi":"10.5121/CSIT.2019.90903","DOIUrl":"https://doi.org/10.5121/CSIT.2019.90903","url":null,"abstract":"Oral cancer is one of the most widespread tumors of the head and neck region. An earlier diagnosis can help dentist getting a better therapy plan, giving patients a better treatment and the reliable techniques for detecting oral cancer cells are urgently required. This study proposes an optic and automation method using reflection images obtained with scanned laser pico-projection system, and Gray-Level Co-occurrence Matrix for sampling. Moreover, the artificial intelligence technology, Support Vector Machine, was used to classify samples. Normal Oral Keratinocyte and dysplastic oral keratinocyte were simulating the evolvement of cancer to be classified. The accuracy in distinguishing two cells has reached 85.22%. Compared to existing diagnosis methods, the proposed method possesses many advantages, including a lower cost, a larger sample size, an instant, a non-invasive, and a more reliable diagnostic performance. As a result, it provides a highly promising solution for the early diagnosis of oral squamous carcinoma.","PeriodicalId":248929,"journal":{"name":"9th International Conference on Computer Science, Engineering and Applications (CCSEA 2019)","volume":"12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-07-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128623902","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Colonoscopy examinations are widely used for detecting colon cancer and many other colon abnormalities. Unfortunately, the resulting colon videos often contain artifacts caused by camera motion and specular highlights caused by light reflecting off the wet colon surface. To address these problems, we have developed a method for motion-compensated colonoscopy image restoration. Our approach uses RANSAC-based image registration to align sequences of N consecutive images in the colonoscopy video and restores each frame of the video using information from these aligned images. We compare image alignment quality when N adjacent images are registered to each other versus registering images with larger step sizes between them. Three types of image pre-processing were evaluated in our work. We found that removing non-informative images prior to registration produced better alignment results and reduced processing time. We also evaluated the effects of image smoothing and resizing as pre-processing steps for registration.
{"title":"Motion Compensated Restoration of Colonoscopy Images","authors":"Nidhal Azawi, J. Gauch","doi":"10.5121/CSIT.2019.90920","DOIUrl":"https://doi.org/10.5121/CSIT.2019.90920","url":null,"abstract":"Colonoscopy examinations are widely used for detecting colon cancer and many other colon abnormalities. Unfortunately, the resulting colon videos often have artifacts caused by camera motion and specular highlights caused by light reflections from the wet colon surface. To address these problems, we have developed a method for motion compensated colonoscopy image restoration. Our approach utilizes RANSAC-based image registration to align sequences of N consecutive images in the colonoscopy video and restores each frame of the video using information from these aligned images. We compare image alignment quality when N adjacent images are registered to each other versus registering images with larger step sizes between them. Three types of image pre processing were evaluated in our work. We found that the removal of non-informative images prior to image registration produced better alignment results and reduced processing time. We also evaluated the effects of image smoothing and resizing as a pre processing step for image registration.","PeriodicalId":248929,"journal":{"name":"9th International Conference on Computer Science, Engineering and Applications (CCSEA 2019)","volume":"230 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-07-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128601556","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}