International Conference on Software and Information Engineering: Latest Publications

Positive and Negative Feature-Feature Correlation Measure: AddGain
Pub Date : 2018-05-02 DOI: 10.1145/3220267.3220270
M. Salama, Ghada Hassan
Feature selection techniques search for an optimal subset of the features required by machine learning algorithms. Statistical models have been applied to measure the correlation degree of each feature separately; however, the mutual correlation and effect between features is not taken into consideration. The proposed technique, AddGain, measures the constructive and destructive effect (gain) of adding a feature to a subset of features, studying feature-feature correlation in addition to feature-class label correlation. The optimality of the resulting feature subset is based on searching for a highly constructive subset with respect to the target class label. The technique is tested by measuring classification accuracy on a data set containing subsets of constructively correlated features. A comparative analysis shows that the proposed technique yields better classification accuracy and a smaller number of selected features than other feature selection techniques.
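The sketch below illustrates the general shape of a gain-based greedy selection of this kind: each candidate feature is scored by its correlation with the class label (constructive part) minus its average correlation with the already selected features (destructive part). The specific scoring rule is an assumption for illustration, not the paper's exact AddGain measure.

```python
# Greedy forward selection driven by a relevance-minus-redundancy "gain".
# The Pearson-correlation scoring below is a stand-in, not the AddGain formula.
import numpy as np

def add_gain_selection(X, y, k):
    """Greedily pick k features, adding the one whose gain is highest."""
    selected, remaining = [], list(range(X.shape[1]))
    while remaining and len(selected) < k:
        best_f, best_gain = None, -np.inf
        for f in remaining:
            # Constructive part: correlation between the feature and the class label.
            relevance = abs(np.corrcoef(X[:, f], y)[0, 1])
            # Destructive part: average correlation with the already selected features.
            redundancy = (np.mean([abs(np.corrcoef(X[:, f], X[:, s])[0, 1])
                                   for s in selected]) if selected else 0.0)
            gain = relevance - redundancy
            if gain > best_gain:
                best_f, best_gain = f, gain
        selected.append(best_f)
        remaining.remove(best_f)
    return selected

# Example on random data: two informative features and a near-duplicate of the first.
rng = np.random.default_rng(0)
x0, x1 = rng.normal(size=200), rng.normal(size=200)
X = np.column_stack([x0, x1, x0 + 0.01 * rng.normal(size=200)])
y = (x0 + x1 > 0).astype(int)
print(add_gain_selection(X, y, 2))  # picks one of the duplicated features plus x1, never both duplicates
```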
Citations: 0
Efficient Architecture for Controlled Accurate Computation using AVX
Pub Date : 2018-05-02 DOI: 10.1145/3220267.3220292
DiaaEldin M. Osman, M. Sobh, A. Eldin, A. M. Zaki
Several applications have problems with the floating-point representation of real numbers because of drawbacks such as the propagation and accumulation of errors. This fixed-length format provides a large dynamic range, but it truncates parts of numbers that would need a long stream of bits to be represented exactly. Researchers have suggested many solutions for these errors; one of them is the Multi-Number (MN) system. The MN system represents a real number as a vector of floating-point numbers with controlled accuracy, adjusting the length of the vector to accumulate the non-overlapping components of the real number. The main drawback of the MN system is that its computations are iterative and time consuming, making it unsuitable for real-time applications. In this work, the Single Instruction Multiple Data (SIMD) model supported by modern CPUs is exploited to accelerate MN computations. The basic arithmetic operation algorithms were adjusted to make use of the SIMD (AVX) architecture and to support both single- and double-precision operations. The new architecture maintains the same accuracy as the original one for both precisions. In addition, the standard Gauss-Jordan elimination algorithm is used to compute the inverse of the Hilbert matrix, an example of an ill-conditioned matrix, instead of iterative and time-consuming methods. The accuracy of the operations was verified by computing the inverse of the Hilbert matrix and checking that the product of the inverse and the original matrix is the identity matrix. Hilbert matrix inversion achieved a 3x speedup compared to the original MN operations. The accelerated MN system was also used to solve the polynomial regression problem.
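To make the multi-number idea concrete, the following Python sketch keeps a value as a vector of floats whose rounding errors are retained as extra components, using Knuth's error-free TwoSum as the building block. It is a conceptual illustration only, not the paper's AVX-accelerated implementation.

```python
# A value is stored as a list of floats; adding a new float keeps the exact
# rounding error as an extra component instead of discarding it.

def two_sum(a, b):
    """Return (s, e) with s = fl(a + b) and e the exact rounding error."""
    s = a + b
    bb = s - a
    err = (a - (s - bb)) + (b - bb)
    return s, err

def mn_add(components, x):
    """Add float x to an MN vector (list of floats), keeping all error terms."""
    result, carry = [], x
    for c in components:
        carry, err = two_sum(carry, c)
        if err != 0.0:
            result.append(err)
    result.append(carry)
    return result

# Accumulating 0.1 ten times: the plain float drifts, the MN vector keeps the error.
mn, plain = [0.0], 0.0
for _ in range(10):
    mn = mn_add(mn, 0.1)
    plain += 0.1
print(plain)  # 0.9999999999999999: the rounding error has been folded in and lost
print(mn)     # leading term plus tiny components that together preserve the exact sum
```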
Citations: 0
Directer: A Parallel and Directed Fuzzing based on Concolic Execution
Pub Date : 2018-05-02 DOI: 10.1145/3220267.3220272
Xiaobin Song, Zehui Wu, Yunchao Wang
Fuzzing is a widely used technique for finding vulnerabilities, but current tools are mostly coverage-based and there is relatively little research on directed fuzzing. This paper proposes a parallelized testing technique that combines directed fuzzing with concolic execution. It extracts the path space at the basic-block level along the function call chain through control-flow analysis and the function call relationship. Concolic execution is then used to guide execution along target-reachable paths so that the target is reached quickly. In the experiments, the developed Directer was tested on the LAVA dataset and showed better performance than existing fuzzers.
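One common ingredient of directed fuzzing is scoring inputs by how close the code they cover is to the target in the call graph; the sketch below shows that idea on a made-up graph. The hypothetical call graph, function names, and the averaged-distance score are assumptions for illustration, not Directer's actual analysis.

```python
# Seed prioritisation by call-graph distance to a target function.
import networkx as nx

# Hypothetical call graph: edges point from caller to callee.
call_graph = nx.DiGraph([
    ("main", "parse"), ("parse", "decode"), ("decode", "vulnerable_sink"),
    ("main", "log"), ("parse", "validate"),
])
target = "vulnerable_sink"

# Distance from each function to the target (number of calls on the shortest chain).
dist = nx.shortest_path_length(call_graph.reverse(copy=True), source=target)

def seed_distance(covered_functions):
    """Smaller is better: average distance of covered functions that can reach the target."""
    reachable = [dist[f] for f in covered_functions if f in dist]
    return sum(reachable) / len(reachable) if reachable else float("inf")

seed_a = {"main", "parse", "decode"}  # covers code close to the target
seed_b = {"main", "log"}              # covers code far from the target
print(seed_distance(seed_a), seed_distance(seed_b))  # seed_a gets the smaller (better) score
```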
Citations: 0
Image Denoising Technique for CT Dose Modulation
Pub Date : 2018-05-02 DOI: 10.1145/3220267.3220288
Haneen A. Elyamani, S. El-Seoud, E. Rashed
Low-dose computed tomography (LDCT) imaging is widely recommended for clinical CT scanning because of growing concern over excessive radiation exposure. Automatic exposure control (AEC) is one of the dose reduction techniques implemented clinically, significantly decreasing the dose over the scan range. However, because the radiation dose is modulated in the angular and slice directions, the reduced x-ray flux can severely degrade some images with noise and streak artifacts. The nonlocal means (NLM) algorithm, introduced in 2005, has shown high performance in denoising LDCT images. The proposed method incorporates prior knowledge obtained from previous high-quality CT slices to improve the low-quality slices during filtering, exploiting the anatomical similarity between adjacent image slices of the scan. The proposed method is evaluated on real data, and CT image quality is notably improved.
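The sketch below conveys the core idea of slice-guided non-local means: patch similarity weights are computed on a neighbouring high-quality slice and then used to average pixels of the noisy low-dose slice. The synthetic images, patch and search sizes, and filtering parameter are assumptions for the demo; this is not the paper's exact filter.

```python
# Slice-guided non-local means on tiny synthetic data.
import numpy as np

def guided_nlm(noisy, prior, patch=3, search=7, h=0.05):
    pad, half = search // 2, patch // 2
    noisy_p = np.pad(noisy, pad + half, mode="reflect")
    prior_p = np.pad(prior, pad + half, mode="reflect")
    out = np.zeros_like(noisy)
    for i in range(noisy.shape[0]):
        for j in range(noisy.shape[1]):
            ci, cj = i + pad + half, j + pad + half
            ref = prior_p[ci - half:ci + half + 1, cj - half:cj + half + 1]
            weights, values = [], []
            for di in range(-pad, pad + 1):
                for dj in range(-pad, pad + 1):
                    ni, nj = ci + di, cj + dj
                    cand = prior_p[ni - half:ni + half + 1, nj - half:nj + half + 1]
                    d2 = np.mean((ref - cand) ** 2)   # patch similarity measured on the prior slice
                    weights.append(np.exp(-d2 / (h * h)))
                    values.append(noisy_p[ni, nj])    # but the average is taken over the noisy slice
            out[i, j] = np.dot(weights, values) / np.sum(weights)
    return out

# A clean "prior" slice and a noisy low-dose slice of the same (synthetic) anatomy.
rng = np.random.default_rng(1)
prior = np.zeros((32, 32)); prior[8:24, 8:24] = 1.0
noisy = prior + 0.2 * rng.normal(size=prior.shape)
denoised = guided_nlm(noisy, prior)
print(np.abs(noisy - prior).mean(), np.abs(denoised - prior).mean())  # the error drops after filtering
```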
Citations: 0
Example-Based English to Arabic Machine Translation: Matching Stage Using Internal Medicine Publications
Pub Date : 2018-05-02 DOI: 10.1145/3220267.3220294
R. Ehab, Eslam Amer, M. Gadallah
Automatic machine translation has become an important source of translation. It is a software system that translates text from one natural language into one or more other natural languages. On the web there are many machine translation systems that give reasonable translations, although the systems are not very good. Medical records contain complex information that must be translated correctly according to its medical meaning, not only its literal English meaning, so the quality of machine translation in this domain is very important. In this paper, we present the matching stage of the Example-Based Machine Translation technique to translate medical text from English as the source language to Arabic as the target language. We used 259 medical sentences extracted from internal medicine publications. Experimental results on the BLEU metric showed a lower score of 0.486 compared to Google Translate, which scored about 0.536.
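A minimal sketch of what an EBMT matching stage can look like follows: the input sentence is compared against a source-language example base and the closest example supplies the candidate translation. The tiny example base and the word-level sequence-ratio similarity are assumptions for illustration, not necessarily the measure used in the paper.

```python
# Matching stage: retrieve the most similar stored example for an input sentence.
from difflib import SequenceMatcher

# Hypothetical example base of (English source, Arabic target) pairs.
example_base = [
    ("the patient has high blood pressure", "يعاني المريض من ارتفاع ضغط الدم"),
    ("the patient complains of chest pain", "يشكو المريض من ألم في الصدر"),
    ("blood sugar level is normal", "مستوى السكر في الدم طبيعي"),
]

def match(sentence):
    """Return the best-matching example pair and its word-level similarity score."""
    words = sentence.lower().split()
    best = max(example_base,
               key=lambda ex: SequenceMatcher(None, words, ex[0].split()).ratio())
    score = SequenceMatcher(None, words, best[0].split()).ratio()
    return best, score

example, score = match("the patient has very high blood pressure")
print(score, example[1])  # the highest-overlap example supplies the candidate translation
```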
Citations: 0
Extraction of Egyptian License Plate Numbers and Characters Using SURF and Cross Correlation
Pub Date : 2018-05-02 DOI: 10.1145/3220267.3220276
A. Nosseir, Ramy Roshdy
In Egypt, traffic police officers usually write down car license plate numbers and characters to enforce traffic rules, which is subject to errors in writing or reading them. The proposed work exploits the wide availability of mobile phones: officers can take pictures of license plates, and the system converts the pictured plate numbers and characters into digital digits and letters. Arabic characters are challenging because, unlike English characters, some are very similar to each other; for example, the differences between feh (ف) and qaf (ق), and between noon (ن) and ba (ب), are minor. The challenge of this work is to extract the Arabic characters and numbers with high accuracy from pictures of both new and old plate designs taken by ordinary people. The algorithm has five steps: image acquisition, pre-processing, segmentation, feature extraction, and character recognition. To improve processing time, the pre-processing step processes the cropped plate area, converts the picture to gray scale, inverts the colors, converts it into a binary image, and then applies the morphological dilation operation. To improve accuracy, the feature extraction and character recognition steps use SURF (Speeded Up Robust Features) and cross-correlation algorithms. The system is tested on 21 plate pictures with 95% accuracy; only one plate picture was missed.
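To illustrate the cross-correlation recognition step, the sketch below compares a segmented character image against stored templates with normalized cross-correlation and returns the best-scoring label. The tiny 5x5 digit "templates" are made up for the demo; a real system would use templates rendered from the Egyptian plate font, with SURF keypoints as an additional cue.

```python
# Template matching by normalized cross-correlation for character recognition.
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation between two same-sized grayscale patches."""
    a, b = a - a.mean(), b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom else 0.0

# Hypothetical templates for two digits (1 = ink, 0 = background).
templates = {
    "1": np.array([[0, 0, 1, 0, 0]] * 5, dtype=float),
    "7": np.array([[1, 1, 1, 1, 1],
                   [0, 0, 0, 1, 0],
                   [0, 0, 1, 0, 0],
                   [0, 1, 0, 0, 0],
                   [0, 1, 0, 0, 0]], dtype=float),
}

def recognize(char_img):
    """Return the template label with the highest correlation score."""
    scores = {label: ncc(char_img, t) for label, t in templates.items()}
    return max(scores, key=scores.get), scores

# A noisy observation of "1" still correlates best with the "1" template.
observed = templates["1"] + 0.1 * np.random.default_rng(2).normal(size=(5, 5))
print(recognize(observed)[0])  # '1'
```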
Citations: 3
An Empirical Study with Function Point Analysis for Software Development Phase Method
Pub Date : 2018-05-02 DOI: 10.1145/3220267.3220268
J. Shah, Nazri Kama, Saiful Adli Ismail
It is important to know the actual size and complexity of software before predicting the effort required to implement it. The two most common methods for software size estimation are (i) Source Lines of Code (SLOC) and (ii) Function Point Analysis (FPA). Estimating size with the SLOC method is only possible once coding is completed, whereas estimating size with FPA is possible in the early phases of the Software Development Life Cycle (SDLC). However, one main challenge during the software development phase is the presence of inconsistent states of software artifacts: some classes are completely developed, some are partially developed, and some are not developed yet. Therefore, this research applies the newly developed Function Point Analysis for Software Development Phase (FPA-SDP) model in an empirical study to overcome this challenge. The results of the FPA-SDP model can help software project managers to (i) know the inconsistent states of software artifacts and (ii) estimate the actual size and complexity level of a change request during the development phase.
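For context, here is a short sketch of a standard IFPUG-style function point count, the baseline that FPA-SDP builds on: component counts are weighted by complexity and then adjusted by a value adjustment factor. The partial-development weighting that FPA-SDP adds for incomplete artifacts is not shown, and the example counts are made up.

```python
# Classic function point computation: UFP from weighted counts, then VAF adjustment.
WEIGHTS = {  # component: (simple, average, complex)
    "EI":  (3, 4, 6),    # external inputs
    "EO":  (4, 5, 7),    # external outputs
    "EQ":  (3, 4, 6),    # external inquiries
    "ILF": (7, 10, 15),  # internal logical files
    "EIF": (5, 7, 10),   # external interface files
}

def unadjusted_fp(counts):
    """counts: {component: (n_simple, n_average, n_complex)}"""
    return sum(n * w for comp, ns in counts.items()
                     for n, w in zip(ns, WEIGHTS[comp]))

def adjusted_fp(ufp, gsc_ratings):
    """gsc_ratings: the 14 general system characteristics, each rated 0-5."""
    vaf = 0.65 + 0.01 * sum(gsc_ratings)
    return ufp * vaf

counts = {"EI": (3, 2, 1), "EO": (2, 1, 0), "EQ": (1, 1, 0), "ILF": (1, 1, 0), "EIF": (1, 0, 0)}
ufp = unadjusted_fp(counts)
print(ufp, adjusted_fp(ufp, [3] * 14))  # 65 UFP, adjusted by VAF = 1.07
```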
Citations: 5
A New Approach for Implementing 3D Video Call on Cloud Computing Infrastructure
Pub Date : 2018-05-02 DOI: 10.1145/3220267.3220274
Nada Radwan, M. B. Abdelhalim, Ashraf AbdelRaouf
3D video calling is a set of technologies that allow a caller to perceive depth in the other caller and provide a real-life feeling. It is a developing technology that can be delivered over a peer-to-peer architecture. Cloud-based technologies are driving positive changes in the way organizations communicate. In running a global business, travel and availability for meetings are a must, but travel is expensive, so an alternative solution is required. This paper presents a new approach that enhances current 2D video calls into 3D video calls by benefiting from the features of cloud computing. Three technologies were combined, OpenStack cloud, WebRTC calls, and the 3D anaglyph effect, to achieve the sense of 3D video.
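The snippet below sketches only the anaglyph step: two views of the same scene are merged into a single red-cyan frame (red channel from the left eye, green and blue from the right). The dummy frames are placeholders; the WebRTC signalling and OpenStack deployment described in the paper are outside the scope of this sketch.

```python
# Red-cyan anaglyph composition from left/right video frames.
import numpy as np

def anaglyph(left_rgb, right_rgb):
    """left_rgb, right_rgb: HxWx3 uint8 frames of the same scene from two viewpoints."""
    out = np.empty_like(left_rgb)
    out[..., 0] = left_rgb[..., 0]     # red channel comes from the left eye
    out[..., 1:] = right_rgb[..., 1:]  # green and blue come from the right eye
    return out

# Dummy frames standing in for the two camera views of a call participant.
left = np.full((480, 640, 3), 120, dtype=np.uint8)
right = np.full((480, 640, 3), 90, dtype=np.uint8)
frame = anaglyph(left, right)
print(frame.shape, frame[0, 0])  # (480, 640, 3) [120  90  90]
```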
Citations: 1
Computer Aided Diagnosis System for Liver Cirrhosis Based on Ultrasound Images
Pub Date : 2018-05-02 DOI: 10.1145/3220267.3220283
Reham Rabie, M. Eltoukhy, M. Al-Shatouri, E. Rashed
This work introduces a computer-aided diagnosis (CAD) system for diagnosing liver cirrhosis in ultrasound (US) images. The proposed system uses a set of features obtained from different feature extraction methods: first-order statistics (FOS), fractal dimension (FD), gray-level co-occurrence matrix (GLCM), Gabor filter (GF), wavelet (WT), and curvelet (CT) features. The measured features are fed into two different classifiers, support vector machine (SVM) and k-nearest neighbors (K-NN). The system is applied to a dataset consisting of 72 cirrhosis and 75 normal regions, each of 128x128 pixels. Classification accuracy rates are calculated using 10-fold cross validation, and correlation-based feature selection (CFS) is used to improve prediction accuracy. The results show that the SVM and K-NN classifiers achieve higher performance with the combination of the wavelet and curvelet feature vectors than with the other feature extraction methods.
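A compact sketch of this kind of pipeline follows: first-order statistics plus a few GLCM texture features per 128x128 region, then SVM and K-NN compared with 10-fold cross validation. The synthetic regions of interest are stand-ins for ultrasound data; the Gabor/wavelet/curvelet features and CFS selection from the paper are not reproduced here.

```python
# Texture features + SVM/K-NN with 10-fold cross validation on synthetic ROIs.
import numpy as np
from skimage.feature import graycomatrix, graycoprops  # spelled "greycomatrix" in skimage < 0.19
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def roi_features(roi):
    """First-order statistics plus a few GLCM properties for one 128x128 uint8 ROI."""
    glcm = graycomatrix(roi, distances=[1], angles=[0], levels=256, symmetric=True, normed=True)
    return [roi.mean(), roi.std(),
            graycoprops(glcm, "contrast")[0, 0],
            graycoprops(glcm, "homogeneity")[0, 0],
            graycoprops(glcm, "energy")[0, 0]]

# Synthetic stand-ins: "cirrhosis" ROIs (label 1) get a coarser texture than "normal" ROIs.
rng = np.random.default_rng(3)
X, y = [], []
for label, noise in [(0, 10), (1, 40)]:
    for _ in range(40):
        roi = np.clip(120 + noise * rng.normal(size=(128, 128)), 0, 255).astype(np.uint8)
        X.append(roi_features(roi))
        y.append(label)

for name, clf in [("SVM", SVC()), ("K-NN", KNeighborsClassifier(n_neighbors=5))]:
    scores = cross_val_score(make_pipeline(StandardScaler(), clf), np.array(X), np.array(y), cv=10)
    print(name, scores.mean())
```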
Citations: 1
Using SMOTE and Heterogeneous Stacking in Ensemble learning for Software Defect Prediction
Pub Date : 2018-05-02 DOI: 10.1145/3220267.3220286
S. El-Shorbagy, Wael El-Gammal, W. Abdelmoez
Nowadays, many classification models are used for prediction in the software engineering field, for tasks such as effort estimation and defect prediction. One of these is ensemble learning, which improves performance by combining multiple models in different ways to obtain a more powerful model. One of the problems facing prediction models is the misclassification of minority samples, which mainly appears in defect prediction: the defects we aim to classify are the minority class during the training phase. This can be improved by applying the Synthetic Minority Over-Sampling Technique (SMOTE) before building the ensemble model, which over-samples the minority class instances. In this paper, we propose a new ensemble model that combines the SMOTE technique with a heterogeneous stacking ensemble to obtain the best performance when training on a dataset whose focus is the minority subset, as in software defect prediction. The proposed model outperforms other techniques applied to the minority samples in defect prediction.
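The sketch below shows the SMOTE-then-stacking idea on a synthetic imbalanced dataset: the training split is over-sampled with SMOTE and then a heterogeneous stacking ensemble is fitted. The particular base learners, meta-learner, and data are assumptions for illustration; the paper's exact ensemble configuration and defect datasets are not reproduced.

```python
# SMOTE over-sampling followed by heterogeneous stacking, evaluated by minority-class F1.
from imblearn.over_sampling import SMOTE  # pip install imbalanced-learn
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier

# Imbalanced data standing in for software metrics: roughly 10% defective modules.
X, y = make_classification(n_samples=2000, n_features=20, weights=[0.9, 0.1], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)

# Over-sample only the training split so the test distribution stays realistic.
X_res, y_res = SMOTE(random_state=0).fit_resample(X_train, y_train)

# Heterogeneous stacking: different base learners, a logistic-regression meta-learner.
stack = StackingClassifier(
    estimators=[("rf", RandomForestClassifier(random_state=0)),
                ("nb", GaussianNB()),
                ("dt", DecisionTreeClassifier(random_state=0))],
    final_estimator=LogisticRegression(max_iter=1000),
)
stack.fit(X_res, y_res)
print("minority-class F1:", f1_score(y_test, stack.predict(X_test)))
```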
Citations: 16