
Latest publications from the 2014 14th International Conference on Hybrid Intelligent Systems

Analysis of modified Triple-A steganography technique using Fisher Yates algorithm
Pub Date : 2014-12-01 DOI: 10.1109/HIS.2014.7086199
S. Alam, S. Zakariya, N. Akhtar
Steganography is the science of embedding private, confidential, or sensitive data within a given cover medium without making any visible changes to it. In this paper, we present a modified Triple-A method for RGB image based steganography. The method introduces the concept of storing a variable number of bits in each channel (R, G, or B) of a pixel. We develop an extended randomized-pixel steganography algorithm without any limitation on the type of images being used. In this analysis, we focus on properties of the human visual system that help increase the amount of data that can be hidden in images in practice. In this work, the data are hidden in pixels selected at random using the Fisher-Yates algorithm. Security is enhanced by embedding the data carefully along with a random choice of pixel positions, offering very high security for messages hidden in images.
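To make the pixel-selection step concrete, here is a minimal Python sketch of Fisher-Yates-driven embedding: a seeded shuffle fixes a pseudo-random pixel order shared by sender and receiver, and message bits are written into the least significant bits in that order. The flat pixel list, single-bit embedding, and shared seed are simplifying assumptions; the paper's variable-bits-per-channel Triple-A scheme is not reproduced here.

```python
import random

def fisher_yates_order(n, seed):
    """Return a permutation of range(n) via the Fisher-Yates shuffle,
    driven by a seeded PRNG so sender and receiver derive the same order."""
    rng = random.Random(seed)
    order = list(range(n))
    for i in range(n - 1, 0, -1):
        j = rng.randint(0, i)          # 0 <= j <= i
        order[i], order[j] = order[j], order[i]
    return order

def embed_lsb(pixels, message_bits, seed):
    """Hide message_bits in the least significant bits of randomly ordered pixels.
    `pixels` is a flat list of 0-255 channel values (e.g. R,G,B,R,G,B,...)."""
    order = fisher_yates_order(len(pixels), seed)
    stego = list(pixels)
    for bit, idx in zip(message_bits, order):
        stego[idx] = (stego[idx] & ~1) | bit
    return stego

def extract_lsb(stego, n_bits, seed):
    """Recover the first n_bits using the same seeded pixel order."""
    order = fisher_yates_order(len(stego), seed)
    return [stego[idx] & 1 for idx in order[:n_bits]]

# Round-trip check on toy data
cover = [120, 33, 200, 18, 77, 254, 90, 61]
bits = [1, 0, 1, 1]
stego = embed_lsb(cover, bits, seed=42)
assert extract_lsb(stego, len(bits), seed=42) == bits
```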
Citations: 10
Drusen exudate lesion discrimination in colour fundus images
Pub Date : 2014-12-01 DOI: 10.1109/HIS.2014.7086193
Saima Waseem, M. Akram, Bilal Ashfaq Ahmed
Automatic screening and diagnosis of ocular disease from fundus images is in place and is being adopted worldwide. Age-related macular degeneration (AMD), one of the leading causes of sight loss, has many proposed automatic screening systems. These systems detect bright yellow lesions and grade the disease as advanced or early stage from the number and size of the lesions. It is difficult for these systems to differentiate drusen from exudates, another bright lesion associated with diabetic retinopathy, because the two lesions look similar on the retinal surface. Differentiating them can improve the performance of any automatic system. In this paper we propose a novel approach to discriminate these lesions. The approach is a two-stage procedure. The first stage, after pre-processing, detects all bright pixels in the image, and suspicious pixels are removed from the detected region. In the second stage, bright regions are classified as drusen or exudates by a Support Vector Machine (SVM). The proposed method was evaluated on the publicly available STARE dataset and achieves 92% accuracy.
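As an illustration of the second-stage classifier, the sketch below trains an RBF-kernel SVM on hypothetical per-region feature vectors using scikit-learn; the feature set, labels, and data are placeholders, not the features used in the paper.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Hypothetical feature matrix: one row per candidate bright region; columns
# could be intensity, area, border sharpness, etc. (random placeholders here).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))                  # stand-in region features
y = rng.integers(0, 2, size=200)               # 0 = drusen, 1 = exudate (synthetic labels)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

clf = SVC(kernel="rbf", C=1.0, gamma="scale")  # RBF-kernel SVM classifier
clf.fit(X_train, y_train)
print("accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```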
Citations: 3
ECG signals analysis for biometric recognition
Pub Date : 2014-12-01 DOI: 10.1109/HIS.2014.7086192
M. Tantawi, A. Salem, M. Tolba
The electrocardiogram (ECG), as a new biometric trait, has the advantage of being a liveness indicator that is difficult to spoof or falsify. According to the features utilized, existing ECG based biometric systems can be classified into fiducial and non-fiducial systems. The computation of fiducial features requires the accurate detection of 11 fiducial points, which is a very challenging task. On the other hand, non-fiducial approaches relax the detection process but usually result in a high-dimensional feature space. This paper presents a systematic study of ECG based individual identification. A fiducial approach that utilizes a feature set selected by the information gain (IG) criterion is introduced first. Furthermore, a non-fiducial wavelet based approach is proposed. To avoid the high dimensionality of the resulting wavelet coefficient structure, the structure is investigated and reduced, also using the IG criterion. The proposed feature sets were examined and compared using a radial basis function (RBF) neural network classifier. Experiments conducted on Physionet databases revealed the superiority of the suggested non-fiducial approach.
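Since the information gain (IG) criterion drives feature selection in both approaches, the following sketch shows one standard way to rank candidate features (e.g. wavelet coefficients) by IG after simple equal-width binning; the coefficient matrix, synthetic labels, and the choice of ten bins are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def entropy(labels):
    """Shannon entropy of a discrete label vector."""
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def information_gain(feature, labels, n_bins=10):
    """IG of a continuous feature w.r.t. class labels, after equal-width binning."""
    bins = np.digitize(feature, np.histogram_bin_edges(feature, bins=n_bins))
    h_y = entropy(labels)
    h_y_given_x = 0.0
    for b in np.unique(bins):
        mask = bins == b
        h_y_given_x += mask.mean() * entropy(labels[mask])
    return h_y - h_y_given_x

# Rank hypothetical wavelet coefficients (columns of X) by IG and keep the top k.
rng = np.random.default_rng(1)
X = rng.normal(size=(300, 64))                 # stand-in coefficient matrix
y = rng.integers(0, 5, size=300)               # subject identities (synthetic)
scores = np.array([information_gain(X[:, j], y) for j in range(X.shape[1])])
top_k = np.argsort(scores)[::-1][:16]          # indices of the 16 most informative coefficients
```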
Citations: 5
Effectiveness of modified iterative decoding algorithm for Cubic Product Codes
Pub Date : 2014-12-01 DOI: 10.1109/HIS.2014.7086209
Atta-ur-Rahman, I. Qureshi
In this paper, a modified iterative decoding algorithm (MIDA) is proposed for decoding Cubic Product Codes (CPC), also called three-dimensional product block codes. It is a hard-decision decoder that was initially proposed by the same authors for decoding simple product codes, where the decoding complexity of the basic iterative algorithm was significantly reduced with negligible performance degradation. Two versions of the proposed algorithm, with and without complexity reduction, are investigated, and the complexity versus performance trade-off is highlighted. Bit error rate (BER) performance of the proposed algorithm over a Rayleigh flat fading channel is demonstrated by simulations.
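For readers unfamiliar with the cubic product structure, the sketch below builds a toy three-dimensional product code from single-parity-check component codes and corrects a single bit error by intersecting the failed row and column parity checks. This only illustrates the code geometry; it is not the MIDA decoder, and the component codes are an assumption, not those used in the paper.

```python
import numpy as np

def add_parity(arr, axis):
    """Append an even-parity slice along the given axis."""
    p = arr.sum(axis=axis, keepdims=True) % 2
    return np.concatenate([arr, p], axis=axis)

def encode(data):
    """Encode a k x k x k bit array into a (k+1)^3 cubic product codeword
    built from single-parity-check component codes."""
    cube = data.copy()
    for axis in range(3):
        cube = add_parity(cube, axis)
    return cube

def decode_single_error(cube):
    """Hard-decision correction of one bit error: the error sits at the
    intersection of the failed parity checks along different axes."""
    c = cube.copy()
    s0 = c.sum(axis=0) % 2   # failed checks indexed by (j, k)
    s1 = c.sum(axis=1) % 2   # failed checks indexed by (i, k)
    if s0.any() and s1.any():
        j, k = np.argwhere(s0)[0]
        i, _ = np.argwhere(s1)[0]
        c[i, j, k] ^= 1
    return c

# Round trip with one injected error
rng = np.random.default_rng(2)
data = rng.integers(0, 2, size=(3, 3, 3))
codeword = encode(data)
received = codeword.copy()
received[1, 2, 0] ^= 1
assert np.array_equal(decode_single_error(received), codeword)
```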
Citations: 6
A fuzzy logic-based emotional intelligence framework for evaluating and orienting new students at HCT Dubai colleges
Pub Date : 2014-12-01 DOI: 10.1109/HIS.2014.7086177
F. Bouslama, Michelle Housley, Andrew Steele
Academic institutions across the world face the challenge of providing new intakes of students with appropriate orientation and counseling services to better help students cope with the changes and challenges of life at university or college. These sessions are often based on measurements of the students' technical skills or Intelligence Quotient (IQ) levels, such as mathematical computation and communication abilities. However, Emotional Intelligence (EI) tests, which have become an essential tool and an integral part of the recruiting, orientation, and counseling strategies of many individuals and organizations, are often not part of these evaluation schemes. Some academic institutions conduct a partial test of those skills, but this may not provide a holistic view of each individual's emotional intelligence. In this paper, a set of EI tests covering four general areas of EI is proposed to evaluate the emotional intelligence of the new intakes at the HCT Dubai Colleges. These tests will help identify students who lack experience with non-cognitive capabilities, including competencies and skills that may influence their ability to cope successfully with educational demands and pressures. A fuzzy logic-based emotional intelligence modeling and processing framework is also proposed to better model and capture uncertainties in surveys of new intakes and to deal with the complexities of the classification system. This new system is expected to help the HCT Dubai Colleges better design and prepare orientation and counseling interventions that will help students develop their abilities to perceive, access, and generate emotions to promote their emotional and intellectual growth.
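As a rough illustration of how a fuzzy framework can turn EI sub-scores into an orientation decision, the sketch below evaluates a tiny Mamdani-style rule base over two hypothetical sub-scores (self-awareness and stress tolerance) with triangular membership functions; the inputs, rules, and output scale are invented for illustration and do not come from the paper.

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def orientation_support_need(self_awareness, stress_tolerance):
    """Toy Mamdani-style inference: two hypothetical EI sub-scores (0-100) in,
    a crisp 'orientation support need' in [0, 1] out, via weighted rule firing."""
    low_sa, high_sa = tri(self_awareness, 0, 25, 50), tri(self_awareness, 50, 75, 100)
    low_st, high_st = tri(stress_tolerance, 0, 25, 50), tri(stress_tolerance, 50, 75, 100)

    # Hypothetical rule base: weaker EI sub-scores call for more counseling support.
    rules = [
        (min(low_sa, low_st),   0.9),  # both low  -> high need
        (min(low_sa, high_st),  0.6),
        (min(high_sa, low_st),  0.6),
        (min(high_sa, high_st), 0.2),  # both high -> low need
    ]
    weight_sum = sum(w for w, _ in rules)
    return sum(w * out for w, out in rules) / weight_sum if weight_sum else 0.0

print(orientation_support_need(30, 70))   # -> 0.6 for this mixed profile
```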
Citations: 2
Laser marks detection from fundus images
Pub Date : 2014-12-01 DOI: 10.1109/HIS.2014.7086188
Faraz Tahir, M. Akram, M. Abbass, Albab Ahmad Khan
Eye diseases such as diabetic retinopathy may cause blindness. At the advanced stages of diabetic retinopathy, further disease progression is stopped using laser treatment. Laser treatment leaves marks on the retinal surface that cause automated retinal diagnostic systems to misbehave. These laser marks hinder further analysis of the retinal images, so it is desirable to detect and remove them to avoid unnecessary processing. This paper presents a method to automatically detect laser marks in retinal images and reports results from a performance evaluation.
Citations: 8
Hybrid model for information filtering in location based social networks using text mining
Pub Date : 2014-12-01 DOI: 10.1109/HIS.2014.7086206
Rodrigo Miranda Feitosa, S. Labidi, André Luis Silva dos Santos
This research aims to create an application that uses machine learning techniques to extract and collate geolocated data collected from a social network, with the goal of providing social recommendations to users. Existing research in the field of social recommendation still shows deficiencies regarding the effectiveness of the filtered data. This paper presents a study and an implementation that use text mining techniques to address problems found in social recommendation and to produce more effective results.
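A minimal sketch of the text-mining filtering step might look like the following: posts collected from a social network are vectorized with TF-IDF and ranked by cosine similarity to a user interest profile. The example posts, the profile, and the use of scikit-learn are assumptions for illustration; the paper's actual pipeline is not described at this level of detail in the abstract.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical geolocated posts (text already extracted from a social network)
posts = [
    "great coffee shop near the old town square",
    "traffic jam on the main avenue this morning",
    "live jazz tonight at the riverside bar",
]
user_profile = "looking for coffee and quiet places to read"

vec = TfidfVectorizer(stop_words="english")
doc_matrix = vec.fit_transform(posts + [user_profile])

# Similarity of each post to the user's interest profile (the last row)
post_vecs, profile_vec = doc_matrix[:len(posts)], doc_matrix[len(posts):]
scores = cosine_similarity(post_vecs, profile_vec).ravel()

for score, text in sorted(zip(scores, posts), reverse=True):
    print(f"{score:.2f}  {text}")
```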
Citations: 2
Extraction of association rules used for assessing web sites' quality from a set of criteria
Pub Date : 2014-12-01 DOI: 10.1109/HIS.2014.7086164
Rim Rekik, I. Kallel, A. Alimi
The amount of data circulating on the internet has increased considerably during the last decades. Web sites are the main source for meeting users' needs; however, some existing web sites are not well regarded by users. Many studies have addressed the problem of assessing the quality of web sites in different categories, such as e-commerce, education, entertainment, and health. The problem involves multiple criteria decision making (MCDM) because the assessment criteria are numerous and conflicting. Existing methods are mainly based on building a hierarchy of high-level criteria, sub-criteria, and alternatives, and to date no standard defines the important evaluation criteria. This paper presents a process for collecting and extracting data from a list of studies following a Systematic Literature Review (SLR) method, since it is necessary to know which criteria are frequently used in the literature before establishing the assessment task. The paper also derives a set of association rules extracted from the set of criteria by applying the Apriori method.
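The rule-extraction step can be illustrated with a small, self-contained Apriori implementation: each "transaction" lists the quality criteria reported by one study, frequent criterion sets are mined level by level, and rules are kept when their confidence exceeds a threshold. The toy criteria, support, and confidence values below are assumptions, not the paper's data.

```python
from itertools import combinations

def apriori(transactions, min_support):
    """Return frequent itemsets (as frozensets) with their support,
    using the classic level-wise Apriori candidate generation."""
    n = len(transactions)
    level = {frozenset([item]) for t in transactions for item in t}
    frequent = {}
    while level:
        # count support of the current candidates
        counts = {c: sum(1 for t in transactions if c <= t) / n for c in level}
        survivors = {c: s for c, s in counts.items() if s >= min_support}
        frequent.update(survivors)
        # join step: build next-level candidates from surviving itemsets
        keys = list(survivors)
        size = len(keys[0]) + 1 if keys else 0
        level = {a | b for a, b in combinations(keys, 2) if len(a | b) == size}
    return frequent

def rules(frequent, min_conf):
    """Derive rules A -> B with confidence = support(A u B) / support(A)."""
    out = []
    for itemset, sup in frequent.items():
        if len(itemset) < 2:
            continue
        for r in range(1, len(itemset)):
            for antecedent in map(frozenset, combinations(itemset, r)):
                conf = sup / frequent[antecedent]
                if conf >= min_conf:
                    out.append((set(antecedent), set(itemset - antecedent), conf))
    return out

# Each "transaction" lists the quality criteria reported by one study (hypothetical data).
studies = [
    {"usability", "content", "design"},
    {"usability", "content"},
    {"usability", "security"},
    {"content", "design"},
]
for a, b, c in rules(apriori(studies, min_support=0.5), min_conf=0.7):
    print(a, "->", b, f"(conf={c:.2f})")
```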
Citations: 8
Nonorthogonal DCT implementation for JPEG forensics
Pub Date : 2014-12-01 DOI: 10.1109/HIS.2014.7086195
G. Fahmy
Detecting prior JPEG compression has become an essential task in image forgery forensics. In this paper we propose a novel DCT implementation technique that can be utilized to detect hacking or tampering of JPEG/DCT compressed images. The approach builds on recent ideas in the literature of recompressing JPEG image blocks and detecting whether a block has been compressed before, and how many times. We propose a DCT implementation that leaves a one-time signature on the processed coefficients or pixels and can be used to detect whether a block has previously been compressed with the proposed implementation; any further processing can be easily detected and identified. The proposed DCT transformation is nonorthogonal and introduces a minor amount of error due to this nonorthogonality; however, it maintains an excellent trade-off between compression performance and transform error. Illustrative examples on several processed images are presented together with a complexity analysis.
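The paper's nonorthogonal DCT is not reproduced here, but the underlying recompression idea can be sketched with the standard orthogonal DCT from SciPy: a block that has already passed through a given quantizer changes almost nothing when requantized, so a near-zero requantization error flags prior compression. The flat quantization step and the random 8x8 block are illustrative assumptions.

```python
import numpy as np
from scipy.fftpack import dct, idct

def dct2(block):
    return dct(dct(block, axis=0, norm="ortho"), axis=1, norm="ortho")

def idct2(coeffs):
    return idct(idct(coeffs, axis=0, norm="ortho"), axis=1, norm="ortho")

def jpeg_round_trip(block, q):
    """Quantize/dequantize the 8x8 DCT coefficients with a flat step q."""
    coeffs = dct2(block)
    return idct2(np.round(coeffs / q) * q)

def recompression_error(block, q):
    """If `block` already went through this quantizer, requantizing it
    changes almost nothing, so the error is near zero."""
    return np.abs(jpeg_round_trip(block, q) - block).mean()

rng = np.random.default_rng(3)
fresh = rng.uniform(0, 255, size=(8, 8))       # never-compressed block
once = jpeg_round_trip(fresh, q=16)            # block compressed once

print("fresh block error:", recompression_error(fresh, q=16))   # relatively large
print("precompressed error:", recompression_error(once, q=16))  # close to zero
```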
Citations: 0
Ranking model adaptation for domain specific mining using binary classifier for sponsored ads
Pub Date : 2014-12-01 DOI: 10.1109/HIS.2014.7086171
M. Krishnamurthy, N. Jaishree, A. S. Pillai, A. Kannan
Domain-specific search focuses on one area of knowledge, so applying broad-based ranking algorithms to vertical search domains is not desirable: a broad-based ranking model builds on data from multiple domains across the web. Vertical search engines use a focused crawler that indexes only web pages relevant to a predefined topic. With a ranking adaptation model, an existing ranking model can be adapted to a new domain. A binary classifier splits the members of a given set of objects into two groups based on whether or not they have some property; objects with the relevance property are returned for the search query of that particular domain vertical. Sponsored ads are then placed alongside the organic search results and ranked using bid, budget, and quality score. The ad with the highest bid is initially placed first in the ad listings; later, the ad with the maximum quality score, determined from click-through logs, is moved into the first position. Thus, both organic search results and sponsored ads are returned for the specific domain, making it easy for users to access real-time ads, connect directly with advertisers, and obtain information on the search query.
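A minimal sketch of the sponsored-ad ordering described above: ads are filtered by remaining budget and sorted by bid multiplied by quality score, a common convention assumed here since the abstract does not spell out the exact formula.

```python
from dataclasses import dataclass

@dataclass
class Ad:
    advertiser: str
    bid: float            # amount the advertiser is willing to pay per click
    quality_score: float  # e.g. estimated from click-through logs, on a 0-10 scale
    budget_left: float    # remaining daily budget

def rank_ads(ads):
    """Order eligible ads by bid * quality score (an assumed convention;
    the paper's exact formula is not given in the abstract)."""
    eligible = [a for a in ads if a.budget_left >= a.bid]
    return sorted(eligible, key=lambda a: a.bid * a.quality_score, reverse=True)

ads = [
    Ad("alpha", bid=2.00, quality_score=4.0, budget_left=50.0),
    Ad("beta",  bid=1.20, quality_score=9.0, budget_left=30.0),
    Ad("gamma", bid=3.50, quality_score=2.0, budget_left=2.0),   # budget exhausted, filtered out
]
for a in rank_ads(ads):
    print(a.advertiser, a.bid * a.quality_score)
# beta (10.8) outranks alpha (8.0) despite the lower bid, thanks to its quality score.
```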
Citations: 0