Cloud computing is one of the most significant technologies of the modern computing era, as it decreases the cost of data processing while increasing flexibility and scalability for computing processes. Security is one of the core concerns in cloud computing, as it hinders organizations from adopting the technology. Infrastructure as a Service (IaaS) is one of the main cloud service models; it uses virtualization to supply virtualized computing resources to users over the internet. The Virtual Machine Image is a key component in the cloud, as it is used to run an instance. As an essential component of cloud computing, the virtual machine image carries security issues that need to be analysed. Some studies have provided countermeasures for identified security threats. However, no study has attempted to synthesize security threats with their corresponding vulnerabilities, nor have existing studies modelled and classified security threats to determine their effect on the Virtual Machine Image. Therefore, this paper provides a threat modelling approach to identify threats that affect the virtual machine image. Furthermore, each individual threat is classified to determine its effect on cloud computing, and a potential attack is drawn to show how an adversary might exploit weaknesses in the system to attack the Virtual Machine Image.
{"title":"Threat Modelling for the Virtual Machine Image in Cloud Computing","authors":"R. K. Hussein, V. Sassone","doi":"10.5121/CSIT.2019.90911","DOIUrl":"https://doi.org/10.5121/CSIT.2019.90911","url":null,"abstract":"Cloud computing is one of the most smart technology in the era of computing as its capability todecrease the cost of data processing while increasing flexibility and scalability for computer processes. Security is one of the core concerns related to the cloud computing as it hinders the organizations to adopt this technology. Infrastructure as a service (IaaS) is one of the main services of cloud computing which uses virtualization to supply virtualized computing resources to its users through the internet. Virtual Machine Image is the key component in the cloud as it is used to run an instance. There are security issues related to the virtual machine image that need to be analysed as being an essential component related to the cloud computing. Some studies were conducted to provide countermeasure for the identify security threats. However, there is no study has attempted to synthesize security threats and corresponding vulnerabilities. In addition, these studies did not model and classified security threats to find their effect on the Virtual Machine Image. Therefore, this paper provides a threat modelling approach to identify threats that affect the virtual machine image. Furthermore, threat classification is carried out to each individual threat to find out their effects on the cloud computing. 
Potential attack was drawn to show how an adversary might exploit the weakness in the system to attack the Virtual Machine Image.","PeriodicalId":248929,"journal":{"name":"9th International Conference on Computer Science, Engineering and Applications (CCSEA 2019)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-07-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129158591","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
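The threat-classification step can be illustrated with a minimal sketch that maps each threat to a STRIDE category and the security property it affects. The threat names and mappings below are illustrative assumptions, not the paper's actual catalogue:

```python
# Hypothetical VM-image threats mapped to (STRIDE category, affected property).
STRIDE = {
    "malware_injected_image": ("Tampering", "integrity"),
    "data_leak_from_template": ("Information Disclosure", "confidentiality"),
    "image_sprawl_dos": ("Denial of Service", "availability"),
    "publisher_impersonation": ("Spoofing", "authenticity"),
}

def threats_affecting(prop):
    """Return, sorted, the threats that impact a given security property."""
    return sorted(t for t, (_, p) in STRIDE.items() if p == prop)

print(threats_affecting("integrity"))
```

Grouping threats by affected property in this way is what lets the model state, per threat, which part of the cloud's security posture is at risk.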
In recent years, many online services have proliferated, with different providers offering services of the same functionality but varying Quality of Service (QoS) properties. Service composition should therefore provide effective adaptation, especially in a dynamically changing composition environment. Meanwhile, a large number of component services poses scalability issues. As a result, monitoring and resolving performance problems in web-service-based systems is a challenging task, as these systems depend on component web services that are distributed in nature. In this paper, a distributed approach is used to identify performance-related problems in component web services. The service composition adaptation provides timely replacement of the performance bottleneck's source, which can prevent performance degradation for forthcoming requests. Experimental results demonstrate the efficiency of the proposed approach and show that the quality of the composed service's solution is maintained.
{"title":"Effective Service Composition Approach based on Pruning Performance Bottlenecks","authors":"Navinderjit Kaur Kahlon, K. Chahal","doi":"10.5121/CSIT.2019.90924","DOIUrl":"https://doi.org/10.5121/CSIT.2019.90924","url":null,"abstract":"In recent years, several online services have proliferated to provide similar services with same functionality by different service providers with varying Quality of Service (QoS) properties. So, service composition should provide effective adaptation especially in a dynamically changing composition environment. Meanwhile, a large number of component services pose scalability issues. As a result, monitoring and resolving performance problems in web services based systems is challenging task as these systems depend on component web services that are distributed in nature. In this paper, a distributed approach is used to identify performance related problems in component web services. The service composition adaptation provides timely replacement of the performance bottleneck source that can prohibit performance degradation for the forthcoming requests. Experimentation results demonstrate the efficiency of the proposed approach, and also the quality of solution of a service composition is maintained.","PeriodicalId":248929,"journal":{"name":"9th International Conference on Computer Science, Engineering and Applications (CCSEA 2019)","volume":"130 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-07-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133656972","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
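Replacing a bottleneck service of the kind described can be sketched as a weighted QoS ranking over functionally equivalent candidates. The service names, QoS attributes, and weights below are illustrative assumptions, not the paper's model:

```python
def pick_replacement(candidates, weights):
    """Score candidates by weighted QoS (higher reliability is better,
    higher latency is worse) and return the best one."""
    def score(c):
        return (weights["reliability"] * c["reliability"]
                - weights["latency"] * c["latency"])
    return max(candidates, key=score)

candidates = [
    {"name": "svcA", "latency": 120, "reliability": 0.95},
    {"name": "svcB", "latency": 80,  "reliability": 0.97},
    {"name": "svcC", "latency": 60,  "reliability": 0.80},
]
best = pick_replacement(candidates, {"latency": 0.01, "reliability": 2.0})
```

With these weights, svcB wins: svcC is faster but its lower reliability costs it more than its latency advantage gains.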
This paper studies the job-scheduling problem on m ≥ 2 parallel/identical machines. There are n jobs, denoted by Ji, 1 ≤ i ≤ n. Each job Ji has a due date di. A job has one or more tasks, each with a specific processing time. The tasks cannot be preempted, i.e., once scheduled, a task cannot be interrupted and resumed later. Different tasks of the same job can be scheduled concurrently on different machines. A job is on time if all of its tasks finish before its due date; otherwise, it is tardy. A schedule of the jobs specifies which task is scheduled on which machine at what time. The problem is to find a schedule of these jobs so that the number of on-time jobs is maximized or, equivalently, the number of tardy jobs is minimized. We consider two cases: the case where each job has only a single task and the case where a job can have one or more tasks. For the first case, if all jobs have a common due date, we design a simple algorithm and show that it generates a schedule whose number of on-time jobs is at most (m-1) less than that of the optimal schedule. We also show that the modified algorithm works for the second case with a common due date and has the same performance. Finally, we design an algorithm for the second case when jobs have different due dates. We conduct computational experiments and show that the algorithm performs very well.
{"title":"Maximizing the Total Number of on TIME Jobs on Identical Machines","authors":"Hairong Zhao","doi":"10.5121/CSIT.2019.90909","DOIUrl":"https://doi.org/10.5121/CSIT.2019.90909","url":null,"abstract":"This paper studies the job-scheduling problem on m ≥ 2 parallel/identical machines.There are n jobs, denoted by Ji,1 ≤ i ≤ n. Each job Ji, has a due date di. A job has one or more tasks, each with a specific processing time. The tasks can’t be preempted, i.e., once scheduled, a task cannot be interrupted and resumed later. Different tasks of the same job can be scheduled concurrently on different machines. A job is on time if all of its tasks finish before its due date; otherwise, it is tardy. A schedule of the jobs specifies which task is scheduled on which machine at what time. The problem is to find a schedule of these jobs so that the number of on time jobs is maximized; or equivalently, the number of tardy jobs is minimized. We consider two cases: the case when each job has only a single task and the case where a job can have one or more tasks. For the first case, if all jobs have common due date we design a simple algorithm and show that the algorithm can generate a schedule whose number of on time jobs is at most (m-1) less than that of the optimal schedule. We also show that the modified algorithm works for the second case with common due date and has same performance. Finally, we design an algorithm when jobs have different due dates for the second case. 
We conduct computation experiment and show that the algorithm has very good performance.","PeriodicalId":248929,"journal":{"name":"9th International Conference on Computer Science, Engineering and Applications (CCSEA 2019)","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-07-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127019055","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
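For the common-due-date, single-task case, a simple greedy of the flavour the abstract describes can be sketched as follows. This is an assumed shortest-processing-time-first heuristic on the least-loaded machine, not necessarily the authors' exact algorithm:

```python
import heapq

def on_time_count(proc_times, m, due):
    """Schedule the shortest jobs first, each on the currently least-loaded
    of m machines; count jobs that finish by the common due date `due`."""
    loads = [0.0] * m          # current finish time of each machine
    heapq.heapify(loads)
    on_time = 0
    for p in sorted(proc_times):
        start = heapq.heappop(loads)
        finish = start + p
        if finish <= due:
            on_time += 1
        heapq.heappush(loads, finish)
    return on_time

# Five single-task jobs on 2 machines with common due date 10:
count = on_time_count([2, 3, 4, 5, 9], m=2, due=10)  # 4 jobs finish on time
```

Scheduling short jobs first maximizes how many fit before the common deadline on each machine, which is the intuition behind the (m-1) additive guarantee stated above.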
B. Sadou, A. Lahoulou, T. Bouden, Anderson R. Avila, T. Falk, Z. Akhtar
In this paper, a new no-reference image quality assessment (NR-IQA) metric for grey-level images is proposed using the LIVE II image database. The features used are extracted from three well-known NR-IQA objective metrics based on natural-scene statistical attributes from three different domains. These metrics may contain redundant, noisy, or less informative features that affect quality-score prediction. To overcome this drawback, the first step of our work consists in selecting the most relevant image quality features using Singular Value Decomposition (SVD)-based dominant eigenvectors. In the second step, a Relevance Vector Machine (RVM) learns the mapping between the selected features and human opinion scores. Simulations demonstrate that the proposed metric performs very well in terms of correlation and monotonicity.
{"title":"Blind Image Quality Assessment Using Singular Value Decomposition Based Dominant Eigenvectors for Feature Selection","authors":"B. Sadou, A. Lahoulou, T. Bouden, Anderson R. Avila, T. Falk, Z. Akhtar","doi":"10.5121/CSIT.2019.90919","DOIUrl":"https://doi.org/10.5121/CSIT.2019.90919","url":null,"abstract":"In this paper, a new no-reference image quality assessment (NR-IQA) metric for grey images is proposed using LIVE II image database. The features used are extracted from three well-known NR-IQA objective metrics based on natural scene statistical attributes from three different domains. These metrics may contain redundant, noisy or less informative features which affect the quality score prediction. In order to overcome this drawback, the first step of our work consists in selecting the most relevant image quality features by using Singular Value Decomposition (SVD) based dominant eigenvectors. The second step is performed by employing Relevance Vector Machine (RVM) to learn the mapping between the previously selected features and human opinion scores. Simulations demonstrate that the proposed metric performs very well in terms of correlation and monotonicity.","PeriodicalId":248929,"journal":{"name":"9th International Conference on Computer Science, Engineering and Applications (CCSEA 2019)","volume":"108 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-07-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128938359","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
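The SVD-based selection step can be sketched as ranking features by their weight in the dominant right singular vectors of the feature matrix. This is a minimal interpretation of the approach, not the authors' exact procedure:

```python
import numpy as np

def select_features(X, k):
    """Rank features by their total weight in the top-k right singular
    vectors of the centered data matrix; return the k best column indices."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    weights = np.abs(Vt[:k]).sum(axis=0)   # per-feature contribution
    return np.argsort(weights)[::-1][:k]

# Feature 2 is given ten times the spread of the others, so it should
# dominate the first singular vector and be selected first.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 4))
X[:, 2] *= 10.0
selected = select_features(X, 1)
```

The selected columns would then be fed to the RVM regressor in place of the full, partly redundant feature set.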
This paper presents an approach to reconstructing 3D objects based on the generation of a dense depth map. From two 2D images (a stereo pair) of the same 3D object, taken from different points of view, a new grayscale image is estimated. It is an intermediate image between a purely 2D image and a 3D image, in which each pixel represents a z-height according to its grey-level value. Our objective is therefore to vary the precision of this map in order to demonstrate its interest and effectiveness for the quality of the reconstruction.
{"title":"Three-Dimensional Reconstruction Using the Depth Map","authors":"A. Abderrahmani, R. Lasri, K. Satori","doi":"10.5121/CSIT.2019.90922","DOIUrl":"https://doi.org/10.5121/CSIT.2019.90922","url":null,"abstract":"This paper presents an approach to reconstructing 3D objects based on the generation of dense depth map. From a two 2D images (a pair of images) of the same 3D object, taken from different points of view, a new grayscale image is estimated. It is an intermediate image between a purely 2D image and a 3D image where each pixel of this image represents a z-height according to its gray level value. Our objective therefore is to play on the precision of this map in order to prove the interest and effectiveness of this map on the quality of the reconstruction.","PeriodicalId":248929,"journal":{"name":"9th International Conference on Computer Science, Engineering and Applications (CCSEA 2019)","volume":"44 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-07-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133924132","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
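Since each grey level encodes a z-height, recovering it from a calibrated stereo pair reduces to the standard disparity-to-depth relation. The focal length and baseline below are illustrative values, not the paper's calibration:

```python
def depth_from_disparity(disparity, focal_px, baseline_m):
    """Standard pinhole stereo relation: z = f * B / d."""
    if disparity <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity

# 40 px disparity, 800 px focal length, 10 cm baseline -> 2.0 m depth
z = depth_from_disparity(disparity=40, focal_px=800, baseline_m=0.1)
```

The denser and more precise the per-pixel disparity, the finer the z-height quantization of the resulting depth map, which is exactly the precision knob the paper studies.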
Ubiquitous computing and context-aware applications are currently experiencing very significant development. This has led organizations to open up more of their information systems, making them available anywhere, at any time, and integrating the dimension of mobile users. This cannot be done without thoughtfully taking access security into account: a pervasive information system must henceforth be able to consider contextual features to ensure robust access control. In this paper, access control and a few existing mechanisms are reviewed, with the aim of showing the importance of taking context into account during an access request. In this regard, our proposal incorporates the concept of trust to establish a trust relationship according to three contextual constraints (location, social situation, and time) in order to decide whether to grant or deny a user's access request to a service.
{"title":"Context-Aware Trust-Based Access Control for Ubiquitous Systems","authors":"M. Yaici, Faiza Ainennas, Nassima Zidi","doi":"10.5121/CSIT.2019.90902","DOIUrl":"https://doi.org/10.5121/CSIT.2019.90902","url":null,"abstract":"The ubiquitous computing and context-aware applications experience at the present time a very important development. This has led organizations to open more of their information systems, making them available anywhere, at any time and integrating the dimension of mobile users. This cannot be done without taking into account thoughtfully the access security: a pervasive information system must henceforth be able to take into account the contextual features to ensure a robust access control. In this paper, access control and a few existing mechanisms have been exposed. It is intended to show the importance of taking into account context during a request for access. In this regard, our proposal incorporates the concept of trust to establish a trust relationship according to three contextual constraints (location, social situation and time) in order to decide to grant or deny the access request of a user to a service.","PeriodicalId":248929,"journal":{"name":"9th International Conference on Computer Science, Engineering and Applications (CCSEA 2019)","volume":"26 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-07-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114181233","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
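The grant/deny decision over the three contextual constraints can be sketched as a weighted trust score. The weights, threshold, and normalized constraint scores below are illustrative assumptions, not the paper's calibration:

```python
def trust_score(context, weights):
    """Weighted sum of the three contextual constraint scores, each in [0, 1]."""
    return sum(weights[k] * context[k] for k in ("location", "social", "time"))

def decide(context, weights, threshold=0.6):
    """Grant access only when the combined trust score meets the threshold."""
    return "grant" if trust_score(context, weights) >= threshold else "deny"

ctx = {"location": 1.0, "social": 0.5, "time": 0.8}   # trusted office, work hours
w = {"location": 0.5, "social": 0.2, "time": 0.3}
decision = decide(ctx, w)   # score 0.84 -> "grant"
```

The same user in an untrusted location at an odd hour would score well below the threshold and be denied, which is the context-sensitivity the paper argues for.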
M. Mabrouk, Abubakr Awad, H. Shousha, Wafaa Alake, A. Salama, T. Awad
Background: Assessment of liver fibrosis is a vital need for enabling therapeutic decisions and prognostic evaluations of chronic hepatitis. Liver biopsy is considered the definitive investigation for assessing the stage of liver fibrosis, but it carries several limitations. FIB-4 and APRI also have limited accuracy. The National Committee for Control of Viral Hepatitis (NCCVH) in Egypt has supplied a valuable pool of electronic patient data that data mining techniques can analyze to disclose hidden patterns and trends, leading to the evolution of predictive algorithms. Aim: To collaborate with physicians to develop a novel, reliable, easy-to-comprehend noninvasive model to predict the stage of liver fibrosis using routine workup, without imposing extra costs for additional examinations, especially in areas with limited resources such as Egypt. Methods: This multi-centered retrospective study included baseline demographic, laboratory, and histopathological data of 69106 patients with chronic hepatitis C. We started with data collection, preprocessing, cleansing, and formatting for knowledge discovery of useful information from Electronic Health Records (EHRs). Data mining was used to build a decision tree (Reduced Error Pruning tree, REP tree) with 10-fold internal cross-validation. Histopathology results were used to assess accuracy for fibrosis stages. Machine learning feature selection and reduction (CfsSubsetEval / best first) reduced the initial number of input features (N=15) to the most relevant ones (N=6) for developing the prediction model. Results: In this study, 32419 patients had F(0-1), 25073 had F(2), and 11615 had F(3-4). FIB-4 and APRI revalidation in our study showed low accuracy and high discordance with biopsy results, with overall AUCs of 0.68 and 0.58 respectively. Out of 15 attributes, machine learning selected age, AFP, AST, glucose, albumin, and platelet count as the most relevant. Results for the REP tree indicated an overall classification accuracy of up to 70% and a ROC area of 0.74, which was hardly affected by attribute reduction and pruning. However, attribute reduction and tree pruning yielded a simpler model that is easier for physicians to understand and faster to execute. Conclusion: In this study we had the chance to study a large cohort of 69106 chronic hepatitis patients with available liver biopsy results to revise and validate the accuracy of FIB-4 and APRI. This study represents a collaboration between computer scientists and hepatologists to provide clinicians with an accurate, novel, reliable, noninvasive model to predict the stage of liver fibrosis.
{"title":"Attribute Reduction and Decision Tree Pruning to Simplify Liver Fibrosis Prediction Algorithms A Cohort Study","authors":"M. Mabrouk, Abubakr Awad, H. Shousha, Wafaa Alake, A. Salama, T. Awad","doi":"10.5121/CSIT.2019.90927","DOIUrl":"https://doi.org/10.5121/CSIT.2019.90927","url":null,"abstract":"Background: Assessment of liver fibrosis is a vital need for enabling therapeutic decisions and prognostic evaluations of chronic hepatitis. Liver biopsy is considered the definitive investigation for assessing the stage of liver fibrosis but it carries several limitations. FIB-4 and APRI also have a limited accuracy. The National Committee for Control of Viral Hepatitis (NCCVH) in Egypt has supplied a valuable pool of electronic patients’ data that data mining techniques can analyze to disclose hidden patterns, trends leading to the evolution of predictive algorithms. Aim: to collaborate with physicians to develop a novel reliable, easy to comprehend noninvasive model to predict the stage of liver fibrosis utilizing routine workup, without imposing extra costs for additional examinations especially in areas with limited resources like Egypt. Methods: This multi-centered retrospective study included baseline demographic, laboratory, and histopathological data of 69106 patients with chronic hepatitis C. We started by data collection preprocessing, cleansing and formatting for knowledge discovery of useful information from Electronic Health Records EHRs. Data mining has been used to build a decision tree (Reduced Error Pruning tree (REP tree)) with 10-fold internal cross-validation. Histopathology results were used to assess accuracy for fibrosis stages. Machine learning feature selection and reduction (CfsSubseteval / best first) reduced the initial number of input features (N=15) to the most relevant ones (N=6) for developing the prediction model. Results: In this study, 32419 patients had F(0-1), 25073 had F(2) and 11615 had F(3-4). 
FIB-4 and APRI revalidation in our study showed low accuracy and high discordance with biopsy results, with overall AUC 0.68 and 0.58 respectively. Out of 15 attributes machine learning selected Age, AFP, AST, glucose, albumin, and platelet as the most relevant attributes. Results for REP tree indicated an overall classification accuracy up to 70% and ROC Area 0.74 which was not nearly affected by attribute reduction, and pruning . However attribute reduction, and tree pruning were associated with simpler model easy to understand by physician with less time for execution. Conclusion: This study we had the chance to study a large cohort of 69106 chronic hepatitis patients with available liver biopsy results to revise and validate the accuracy of FIB-4 and APRI. This study represents the collaboration between computer scientist and hepatologists to provide clinicians with an accurate novel and reliable, noninvasive model to predict the stage of liver fibrosis.","PeriodicalId":248929,"journal":{"name":"9th International Conference on Computer Science, Engineering and Applications (CCSEA 2019)","volume":"43 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-07-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114438190","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
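A pruned decision tree over the six selected attributes has the shape of a few nested threshold rules, which is why clinicians find it easy to read. The rules and thresholds below are invented for illustration only and are not the study's learned values:

```python
def predict_fibrosis(p):
    """Toy threshold rules in the shape of a pruned decision tree.
    Thresholds are illustrative, not the REP tree learned in the study."""
    if p["platelet"] < 150:                 # thrombocytopenia branch
        return "F3-4" if p["afp"] > 10 else "F2"
    return "F2" if p["ast"] > 60 else "F0-1"

stage = predict_fibrosis({"platelet": 120, "afp": 15, "ast": 80})
```

A tree this shallow can be executed mentally at the bedside, which is the practical payoff the paper attributes to pruning and attribute reduction.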
Three major factors can affect the quality of Voice over Internet Protocol (VoIP) phone services: packet delay, packet loss, and jitter. The focus of this study is specific to the VoIP phone services offered to customers by cable companies that utilize broadband hybrid fiber-coaxial (HFC) networks. HFC networks typically carry three types of traffic: voice, data, and video. Unlike data and video, delays or packet loss can have a noticeably degrading impact on a VoIP phone conversation. We examine various differentiated services code point (DSCP) markings, then analyze and assess their impact on VoIP quality of service (QoS). This study mimics the production environment and examines the effect of specific DSCP marking configurations. This research avoids automated test calls and instead focuses on human-made call testing, relying on users' experience and the captured data to support its findings.
{"title":"Optimizing DSCP Marking to Ensure VoIP’s QoS over HFC Network","authors":"Shaher Daoud, Yanzhen Qu","doi":"10.5121/CSIT.2019.90928","DOIUrl":"https://doi.org/10.5121/CSIT.2019.90928","url":null,"abstract":"Three major factors that can affect Voice over Internet Protocol (VoIP) phone services’ quality, these include packet delay, packet loss, and jitter. The focus of this study is specific to the VoIP phone services offered to customers by cable companies that utilize broadband hybrid fiber coaxial (HFC) networks. HFC networks typically carry three types of traffic that include voice, data, and video. Unlike data and video, some delays or packet loss can result in a noticeable degraded impact on a VoIP’s phone conversation. We will examine various differentiated services code point (DSCP) marking, then analyze and assess their impact on VoIP’s quality of service (QoS). This study mimics the production environment. It examines the relationship between specific DSCP marking’s configuration. This research avoids automated test calls and rather focuses on human made call testing. This study relies on users’ experience and the captured data to support this research’s findings.","PeriodicalId":248929,"journal":{"name":"9th International Conference on Computer Science, Engineering and Applications (CCSEA 2019)","volume":"131 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-07-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127026976","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
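On an endpoint, DSCP marking of outbound VoIP packets amounts to setting the upper six bits of the IP TOS byte on the socket. A minimal sketch using the EF (Expedited Forwarding) code point commonly used for voice:

```python
import socket

# DSCP occupies the upper 6 bits of the former TOS byte, so the EF
# (Expedited Forwarding) code point 46 becomes a TOS value of 46 << 2 = 184.
DSCP_EF = 46
tos = DSCP_EF << 2

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, tos)
sock.close()
```

Routers along the HFC path then read this field to place voice packets in a higher-priority queue than bulk data or video; whether they honor a given marking is exactly what studies like this one measure.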
Oral cancer is one of the most widespread tumors of the head and neck region. An earlier diagnosis can help dentists devise a better therapy plan and give patients better treatment, so reliable techniques for detecting oral cancer cells are urgently required. This study proposes an optical and automated method using reflection images obtained with a scanned laser pico-projection system and the Gray-Level Co-occurrence Matrix for sampling. Moreover, an artificial intelligence technique, the Support Vector Machine, was used to classify samples. Normal Oral Keratinocyte and Dysplastic Oral Keratinocyte cells, simulating the evolution of cancer, were classified. The accuracy in distinguishing the two cell types reached 85.22%. Compared to existing diagnosis methods, the proposed method possesses many advantages, including lower cost, a larger sample size, and instant, non-invasive, and more reliable diagnostic performance. As a result, it provides a highly promising solution for the early diagnosis of oral squamous carcinoma.
{"title":"Construction Of an Oral Cancer Auto-Classify system Based On Machine-Learning for Artificial Intelligence","authors":"Meng-Jia Lian, C. Huang, T. Lee","doi":"10.5121/CSIT.2019.90903","DOIUrl":"https://doi.org/10.5121/CSIT.2019.90903","url":null,"abstract":"Oral cancer is one of the most widespread tumors of the head and neck region. An earlier diagnosis can help dentist getting a better therapy plan, giving patients a better treatment and the reliable techniques for detecting oral cancer cells are urgently required. This study proposes an optic and automation method using reflection images obtained with scanned laser pico-projection system, and Gray-Level Co-occurrence Matrix for sampling. Moreover, the artificial intelligence technology, Support Vector Machine, was used to classify samples. Normal Oral Keratinocyte and dysplastic oral keratinocyte were simulating the evolvement of cancer to be classified. The accuracy in distinguishing two cells has reached 85.22%. Compared to existing diagnosis methods, the proposed method possesses many advantages, including a lower cost, a larger sample size, an instant, a non-invasive, and a more reliable diagnostic performance. As a result, it provides a highly promising solution for the early diagnosis of oral squamous carcinoma.","PeriodicalId":248929,"journal":{"name":"9th International Conference on Computer Science, Engineering and Applications (CCSEA 2019)","volume":"12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-07-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128623902","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
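The GLCM sampling step counts how often pairs of grey levels co-occur at a fixed pixel offset; texture features such as contrast are then derived from the normalized counts. A self-contained sketch on a tiny 4-level image (the image and offset are illustrative):

```python
def glcm(img, dx=1, dy=0, levels=4):
    """Grey-level co-occurrence counts for the pixel offset (dx, dy)."""
    h, w = len(img), len(img[0])
    M = [[0] * levels for _ in range(levels)]
    for y in range(h):
        for x in range(w):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w:
                M[img[y][x]][img[ny][nx]] += 1
    return M

def contrast(M):
    """GLCM contrast: sum of (i-j)^2 * p(i, j) over normalized counts."""
    total = sum(sum(row) for row in M)
    return sum((i - j) ** 2 * M[i][j] / total
               for i in range(len(M)) for j in range(len(M)))

img = [[0, 0, 1, 1],
       [0, 0, 1, 1],
       [2, 2, 3, 3],
       [2, 2, 3, 3]]
M = glcm(img)
c = contrast(M)   # 1/3 for this blocky, low-texture image
```

Feature vectors built from such statistics (contrast, energy, homogeneity, etc.) are what the SVM separates into normal versus dysplastic keratinocytes.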
IoT (Internet of Things) encompasses many kinds of devices in the field that are connected to data centers via various networks, submit data, and allow themselves to be controlled. Connected cameras, TVs, media players, access control systems, and wireless sensors are becoming pervasive. Their applications include retail solutions, home, transportation and automotive, and industrial and energy settings. This growth also represents a security threat, as several hacker attacks have been launched using these devices as agents. We explore the current environment and propose a quantitative and qualitative trust model based on the hardware and software stack, using a multi-dimensional exploration space. The model can be extended to any combination of IoT devices and dynamically updated as the type of application, deployment environment, or any other ingredient changes.
{"title":"Trust Modelling for Security of IoT Devices","authors":"Naresh Sehgal, S. Shankar, J. Acken","doi":"10.5121/CSIT.2019.90913","DOIUrl":"https://doi.org/10.5121/CSIT.2019.90913","url":null,"abstract":"IoT (Internet of Things), represents many kinds of devices in the field, connected to data-centers via various networks, submitting data, and allow themselves to be controlled. Connected cameras, TV, media players, access control systems, and wireless sensors are becoming pervasive. Their applications include Retail Solutions, Home, Transportation and Automotive, Industrial and Energy etc. This growth also represents security threat, as several hacker attacks been launched using these devices as agents. We explore the current environment and propose a quantitative and qualitative trust model, using a multi-dimensional exploration space, based on the hardware and software stack. This can be extended to any combination of IoT devices, and dynamically updated as the type of applications, deployment environment or any ingredients change.","PeriodicalId":248929,"journal":{"name":"9th International Conference on Computer Science, Engineering and Applications (CCSEA 2019)","volume":"6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-07-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123527889","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
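One way to make such a stack-based trust model concrete is to score each layer of the hardware/software stack and aggregate. The multiplicative aggregation, layer names, and scores below are assumptions for illustration, not the authors' formulation:

```python
def device_trust(layers):
    """Aggregate per-layer trust scores (each in [0, 1]) multiplicatively:
    a device is only as trustworthy as its weakest stack layer allows."""
    score = 1.0
    for _name, s in layers.items():
        score *= s
    return score

# Hypothetical connected camera, scored layer by layer.
camera = {"hardware": 0.9, "firmware": 0.8, "os": 0.95, "app": 0.7}
t = device_trust(camera)
```

A multiplicative rule captures the intuition that a single compromised layer (say, unpatched firmware) drags down the trust of the whole device, no matter how strong the other layers are; re-scoring a layer when its ingredient changes gives the dynamic updating the abstract describes.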