
Latest publications from the 2012 World Congress on Information and Communication Technologies

Achieving energy efficiency by self-adjusting sensing range and positioning in wireless sensor networks
Pub Date : 2012-10-01 DOI: 10.1109/WICT.2012.6409135
B. Misra, A. K. Nayak, S. Rai
Wireless sensor nodes are mostly battery operated, tiny in size, and deployed in some specified area. The limited capacity of the battery hinders the performance of the node as well as of the network. This requires efficient management of battery power for optimal utilization of the sensor network technology. In this work we propose a novel method to extend the lifetime of the sensor network by organizing the sensors into a maximal number of non-disjoint set covers with non-uniform sensing ranges. Sensors present in one set cover remain active at any instant of time while the other sensors are in sleep mode, so they consume negligible battery power. Each set cover is activated successively, thereby increasing the total lifetime of the network. Each sensor can move towards the center of the set of nodes it covers to further reduce its sensing range, which further diminishes the consumption of energy. Extensive simulation shows that the network lifetime can be increased by adjusting the sensing ranges, as compared to sensors with fixed sensing ranges. The method employs a genetic algorithm (GA) and a greedy heuristic to determine optimal sensing ranges for efficient energy management in the sensor network. Simulation results show its efficiency over fixed-range strategies.
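The greedy part of the approach can be sketched as follows; the toy instance, the single-cover greedy rule, and the range-shrinking step are illustrative assumptions, not the paper's exact GA formulation:

```python
import math

# Hypothetical toy instance: targets and sensor positions on a plane.
TARGETS = [(1.0, 1.0), (4.0, 1.0), (2.5, 3.5)]
SENSORS = [(0.5, 1.5), (3.5, 0.5), (2.0, 3.0), (3.0, 2.5)]
MAX_RANGE = 3.0  # maximum sensing range of every sensor

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def greedy_set_cover(sensors, targets, max_range):
    """Greedily pick sensors until every target is covered, then shrink
    each chosen sensor's range to the farthest target assigned to it."""
    uncovered = set(range(len(targets)))
    cover = {}  # sensor index -> adjusted (non-uniform) sensing range
    while uncovered:
        # Pick the sensor covering the most uncovered targets.
        best, best_hits = None, set()
        for i, s in enumerate(sensors):
            hits = {t for t in uncovered if dist(s, targets[t]) <= max_range}
            if len(hits) > len(best_hits):
                best, best_hits = i, hits
        if best is None:
            raise ValueError("some targets are out of range of all sensors")
        # Adjusted range: just far enough to reach its farthest assigned target.
        cover[best] = max(dist(sensors[best], targets[t]) for t in best_hits)
        uncovered -= best_hits
    return cover

cover = greedy_set_cover(SENSORS, TARGETS, MAX_RANGE)
```

Repeating this construction on the remaining sensors would yield the successive non-disjoint set covers that are activated one after another.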
Citations: 4
Development of adaptive channel equalization using DE
Pub Date : 2012-10-01 DOI: 10.1109/WICT.2012.6409114
P. Khuntia, B. Sahu, C. Mohanty
Digital channel equalizers are located at the front end of receivers to avoid the effect of inter-symbol interference (ISI). In this paper, the equalization problem is viewed as an optimization problem. In the past, the least mean square (LMS) algorithm, recursive least squares (RLS), artificial neural networks (ANN) and genetic algorithms (GA) have been successfully employed for nonlinear channel equalization. The LMS, RLS and ANN techniques are derivative based, and hence there is a chance that the parameters fall into local minima during training. Though GA is a derivative-free technique, it takes more time to converge. We propose a novel equalization technique based on Differential Evolution (DE). DE is an efficient and powerful population-based stochastic search technique for solving optimization problems over continuous spaces, and hence the channel equalization performance is expected to be superior.
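A minimal DE/rand/1/bin sketch for training linear equalizer weights; the 2-tap channel, BPSK symbols, and population settings below are toy assumptions, not the paper's setup:

```python
import random

random.seed(0)

# Hypothetical toy setup: a known 2-tap channel and a 3-tap linear
# equalizer whose weights DE must find.
CHANNEL = [1.0, 0.5]
N_TAPS = 3

def transmit(symbols):
    """Convolve symbols with the channel to produce received samples."""
    return [sum(CHANNEL[k] * symbols[n - k]
                for k in range(len(CHANNEL)) if n - k >= 0)
            for n in range(len(symbols))]

def mse(weights, received, desired):
    """Mean squared error of the equalizer output against desired symbols."""
    err = 0.0
    for n in range(len(desired)):
        est = sum(weights[k] * received[n - k]
                  for k in range(N_TAPS) if n - k >= 0)
        err += (desired[n] - est) ** 2
    return err / len(desired)

def de_optimize(cost, dim, pop_size=20, F=0.6, CR=0.9, gens=200):
    """Classic DE/rand/1/bin over a real-valued weight vector."""
    pop = [[random.uniform(-1, 1) for _ in range(dim)] for _ in range(pop_size)]
    fit = [cost(x) for x in pop]
    for _ in range(gens):
        for i in range(pop_size):
            a, b, c = random.sample([j for j in range(pop_size) if j != i], 3)
            mutant = [pop[a][k] + F * (pop[b][k] - pop[c][k]) for k in range(dim)]
            jrand = random.randrange(dim)
            trial = [mutant[k] if (random.random() < CR or k == jrand)
                     else pop[i][k] for k in range(dim)]
            f = cost(trial)
            if f <= fit[i]:  # greedy selection
                pop[i], fit[i] = trial, f
    best = min(range(pop_size), key=fit.__getitem__)
    return pop[best], fit[best]

symbols = [random.choice((-1.0, 1.0)) for _ in range(200)]
received = transmit(symbols)
weights, err = de_optimize(lambda w: mse(w, received, symbols), N_TAPS)
```

Because the selection step only ever replaces a vector with an equal-or-better trial, DE needs no gradient of the MSE surface, which is the derivative-free property the abstract contrasts with LMS/RLS/ANN.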
Citations: 6
Joint power and subcarrier allocation for layered multicast streaming with proportional rate ratio constraint in OFDMA wireless networks
Pub Date : 2012-10-01 DOI: 10.1109/WICT.2012.6409245
Ying Wang, Xianfeng Liu, B. Lin, Fuwen Pang
This paper addresses the optimal multicast resource allocation problem in an orthogonal frequency division multiple access (OFDMA) system, where fine granularity scalability (FGS) video coding is exploited to perform source coding. With FGS, the multicast information is coded into base layer data and enhancement layer data, and there is a predefined proportional rate ratio between the transmission rates of the base layer and the enhancement layer so that an approximately consistent playing progress can be maintained at the receiver. The multicast resource allocation problem is formulated as a mixed binary integer programming problem with the objective of maximizing the total network throughput while maintaining the required proportional rate ratio under a total transmit power constraint at the base station. The problem is solved using the Lagrangian dual decomposition method, resulting in an optimal joint subcarrier and power allocation algorithm whose computational complexity is linear in the number of subcarriers. Numerical simulations are conducted to evaluate the performance of the proposed algorithm in terms of convergence and throughput. Simulation results show that the proposed optimal algorithm converges very fast and significantly outperforms the available suboptimal schemes.
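The dual-decomposition idea can be sketched as follows: for a fixed dual price on power, each subcarrier is assigned greedily to the layer with the best rate-minus-cost utility and power is water-filled; the price is then bisected to meet the budget. The channel gains and noise level are toy assumptions, and the proportional-rate constraint is omitted for brevity:

```python
import math

# Hypothetical toy instance: per-subcarrier channel gains for the base and
# enhancement layers (worst-user gains for each multicast group).
GAINS = {"base": [0.9, 0.3, 0.6, 0.2], "enh": [0.4, 0.8, 0.1, 0.7]}
P_TOTAL = 4.0   # base-station transmit power budget
NOISE = 0.1

def allocate(lmbda):
    """For a given dual price lambda, assign each subcarrier to the layer
    maximizing rate minus lambda * power, with water-filling power."""
    total_p, rates = 0.0, {"base": 0.0, "enh": 0.0}
    for n in range(len(GAINS["base"])):
        best = None
        for layer in GAINS:
            g = GAINS[layer][n] / NOISE
            # Water-filling level from d/dp [log2(1+pg) - lambda*p] = 0.
            p = max(0.0, 1.0 / (lmbda * math.log(2)) - 1.0 / g)
            r = math.log2(1.0 + p * g)
            util = r - lmbda * p
            if best is None or util > best[0]:
                best = (util, layer, p, r)
        _, layer, p, r = best
        total_p += p
        rates[layer] += r
    return total_p, rates

def solve(p_total, lo=1e-3, hi=10.0, iters=60):
    """Bisect the dual price until the total power meets the budget."""
    for _ in range(iters):
        mid = (lo + hi) / 2
        used, _ = allocate(mid)
        if used > p_total:
            lo = mid   # price too low: allocation exceeds the budget
        else:
            hi = mid
    return allocate(hi)

used, rates = solve(P_TOTAL)
```

The per-subcarrier loop is what makes the overall complexity linear in the number of subcarriers for each dual update.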
Citations: 3
An artificial bee colony algorithm based efficient prediction model for stock market indices
Pub Date : 2012-10-01 DOI: 10.1109/WICT.2012.6409174
M. Rout, B. Majhi, U. M. Mohapatra, R. Mahapatra
The ABC algorithm is a new meta-heuristic approach with the advantages of memory, multi-character search, local search, and a solution improvement mechanism. It can identify a high-quality optimal solution and offers a balance between complexity and performance, thus optimizing forecasting effectiveness. This paper proposes an efficient prediction model for forecasting short- and long-range stock market prices of two well-known stock indices, S&P 500 and DJIA, using a simple adaptive linear combiner (ALC) whose weights are trained with the artificial bee colony (ABC) algorithm. The model is evaluated in terms of mean square error (MSE), and an extensive simulation study reveals that on the test input patterns the proposed model is more efficient and accurate than the PSO- and GA-trained models.
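A compact sketch of an ABC-trained adaptive linear combiner on a toy sine series; the colony settings and series are assumptions, and the employed/onlooker phases are collapsed into one greedy pass per food source for brevity:

```python
import math
import random

random.seed(1)

# Hypothetical toy series: predict x[t] from the previous WINDOW values with
# an adaptive linear combiner (ALC) whose weights an ABC colony searches.
WINDOW, N_FOOD, LIMIT, CYCLES = 3, 10, 15, 100
series = [math.sin(0.3 * t) for t in range(80)]

def cost(w):
    """MSE of the ALC one-step-ahead prediction over the series."""
    samples = range(WINDOW, len(series))
    err = 0.0
    for t in samples:
        pred = sum(w[k] * series[t - 1 - k] for k in range(WINDOW))
        err += (series[t] - pred) ** 2
    return err / len(samples)

def neighbor(w, others):
    """ABC neighbourhood move: perturb one coordinate toward/away from a
    randomly chosen partner food source."""
    k = random.randrange(WINDOW)
    partner = random.choice(others)
    out = list(w)
    out[k] = w[k] + random.uniform(-1, 1) * (w[k] - partner[k])
    return out

foods = [[random.uniform(-1, 1) for _ in range(WINDOW)] for _ in range(N_FOOD)]
fits = [cost(w) for w in foods]
trials = [0] * N_FOOD
for _ in range(CYCLES):
    for i in range(N_FOOD):
        cand = neighbor(foods[i], foods[:i] + foods[i + 1:])
        f = cost(cand)
        if f < fits[i]:
            foods[i], fits[i], trials[i] = cand, f, 0
        else:
            trials[i] += 1
        # Scout phase: abandon exhausted food sources.
        if trials[i] > LIMIT:
            foods[i] = [random.uniform(-1, 1) for _ in range(WINDOW)]
            fits[i], trials[i] = cost(foods[i]), 0
best = min(fits)
```

The trial counter and scout restart are what give ABC its balance between local refinement and global exploration.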
Citations: 4
A Goal based methodology for Web specific Requirements Engineering
Pub Date : 2012-10-01 DOI: 10.1109/WICT.2012.6409070
S. Chawla, S. Srivastava
The Web application domain requires a specialized focus in Requirements Engineering because of unique features such as its dynamic nature, presentation styles, large and heterogeneous audience, and navigational access to information. We propose the WebURN notation, an extension of the User Requirements Notation, which is a goal- and scenario-based notation for early requirements analysis. An extended meta-model for the Goal Requirements Language and Use Case Maps has been created for the Web-specific context. The paper also describes construction algorithms for creating the requirements analysis models from textual information entered by the requirements engineer. The goal-based approach works closely with the Web-specific functional and non-functional requirements, delivers models with fewer conflicts and better choices amongst alternatives, and handles crosscutting concerns corresponding to personalization and non-functional requirements.
Citations: 6
Character recognition of Kannada text in low resolution display board images using zone wise statistical features
Pub Date : 2012-10-01 DOI: 10.1109/WICT.2012.6409051
S. Angadi, M. Kodabagi, M. Jerabandi
Automated systems for understanding text in low resolution natural scene images of display boards facilitate several applications, such as blind assistants, traffic guidance systems, tour guide systems, and location-aware systems. Text recognition at the character level is one of the important processing steps in the development of such systems. In this work, a novel method for recognizing Kannada basic characters using zone wise statistical features is proposed. The method works in two phases. In the first phase, the zone wise statistical features are obtained from training samples and a knowledge base is constructed. During testing, the test image is processed to obtain zone wise statistical features, and the character is recognized using a nearest neighbor classifier. The method has been evaluated on 1043 samples and achieves an average recognition accuracy of 83.49%. The method is robust and insensitive to noise, blur, variations in font size and style, uneven thickness, and varying lighting conditions.
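The two-phase pipeline can be sketched with a minimal zone wise density feature and a nearest-neighbour rule; the 2x2 zone grid, the pixel-density feature, and the tiny 4x4 toy "characters" are illustrative assumptions, not the paper's exact features:

```python
def zone_features(img, zones=2):
    """Split a binary image (list of rows) into zones x zones blocks and
    return the foreground-pixel density of each block."""
    h, w = len(img), len(img[0])
    feats = []
    for zi in range(zones):
        for zj in range(zones):
            r0, r1 = zi * h // zones, (zi + 1) * h // zones
            c0, c1 = zj * w // zones, (zj + 1) * w // zones
            block = [img[r][c] for r in range(r0, r1) for c in range(c0, c1)]
            feats.append(sum(block) / len(block))
    return feats

def nearest(feats, knowledge_base):
    """Return the label of the closest training feature vector."""
    def d(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(knowledge_base, key=lambda item: d(item[1], feats))[0]

# Phase 1: build the knowledge base from training samples.
vbar = [[1, 1, 0, 0]] * 4                                  # left vertical bar
hbar = [[1, 1, 1, 1], [1, 1, 1, 1], [0, 0, 0, 0], [0, 0, 0, 0]]  # top bar
kb = [("vertical", zone_features(vbar)), ("horizontal", zone_features(hbar))]

# Phase 2: recognize a noisy test image.
test = [[1, 0, 0, 0], [1, 1, 0, 0], [1, 1, 0, 0], [1, 0, 0, 0]]
label = nearest(zone_features(test), kb)
```

Zone densities tolerate small pixel-level noise, which is one reason zone wise statistics are a common choice for low resolution text.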
Citations: 2
The motive for constructing a high level data model on cloud databases
Pub Date : 2012-10-01 DOI: 10.1109/WICT.2012.6409257
Jianying Zhang, Hongbo Liu, Hong Yu
When traditional database theories are applied to large scale computing environments like cloud computing, difficulties are encountered in many aspects, such as ACID transactions, the closed-world assumption, data consistency, security, and scalability. In this paper, on the basis of analyzing the limitations of the closed-world assumption for massive computing environments, and from the viewpoint of a complex giant system, a high level data model for relational databases outlining a solution for large scale computing environments is proposed, in which semantic clusters of data, called data multitrees, are taken as distribution units. We mainly investigate the motive for constructing such a data model, and briefly introduce the model to make the paper complete.
Citations: 1
HIXEn: An integration engine for multi-vocabulary health information using REST & semantic metadata mapping
Pub Date : 2012-10-01 DOI: 10.1109/WICT.2012.6409162
S. Purkayastha
Integration of Health Information Systems (HIS) has been a challenge because different semantics are used by health-care practices and different levels of health-care systems need different kinds of information. Looking at the case of two widely used open-source HIS (one patient-level and one aggregate country-level system), the paper analyses multiple approaches to integration. The paper develops a novel integration engine (HIXEn) that uses concepts of distributed hypermedia systems from RESTful architecture and Resource Descriptors from the semantic web, allowing health information exchanges to flexibly connect different HIS.
Citations: 2
Fusion of 4-slap fingerprint images with their qualities for human recognition
Pub Date : 2012-10-01 DOI: 10.1109/WICT.2012.6409207
N. Singh, K. Tiwari, A. Nigam, P. Gupta
This paper presents an efficient multimodal biometric system based on 4-slap fingerprint images. The system utilizes a 4-slap fingerprint scanner to simultaneously capture the fingerprints of multiple fingers of a hand in one image. The acquired multi-finger images are first segmented to get individual fingers. The quality of each individual finger is estimated and its minutiae points are extracted. The minutiae points of each individual finger extracted from the gallery 4-slap fingerprint image are compared with the corresponding finger of the query 4-slap fingerprint image to get a matching score for that finger. The matching score between two 4-slap fingerprint images is obtained by fusing the matching scores of the various fingers along with their respective image qualities and relative accuracies. The matching decision is taken based on the fused matching score. The system has been tested on two 4-slap fingerprint databases, viz. IITK-student and IITK-rural, containing 1007 and 991 subjects respectively. Both databases were acquired in two sessions. The correct recognition rate obtained is 91.00% for the IITK-rural database and 99.64% for the IITK-student database. The respective EER values are 5.64% and 0.94%.
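A minimal sketch of quality-weighted sum-rule score fusion; the weight form (quality times relative accuracy), the example scores, and the decision threshold are illustrative assumptions, not the paper's exact formulation:

```python
def fuse(scores, qualities, accuracies):
    """Weighted sum of per-finger match scores, each weight combining the
    finger's image quality with the finger type's relative accuracy."""
    weights = [q * a for q, a in zip(qualities, accuracies)]
    total = sum(weights)
    return sum(w * s for w, s in zip(weights, scores)) / total

# Four fingers of one hand: index, middle, ring, little (toy values).
scores = [0.92, 0.88, 0.75, 0.60]      # per-finger matcher outputs in [0, 1]
qualities = [0.9, 0.8, 0.7, 0.4]       # estimated image quality per finger
accuracies = [1.0, 0.95, 0.85, 0.7]    # assumed relative accuracy per finger type

fused = fuse(scores, qualities, accuracies)
decision = fused >= 0.8                # illustrative acceptance threshold
```

Down-weighting a low-quality little finger this way keeps one poor capture from dragging down an otherwise strong 4-slap match.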
Citations: 20
An improved unsupervised learning probabilistic model of word sense disambiguation
Pub Date : 2012-10-01 DOI: 10.1109/WICT.2012.6409234
Xu Li, Xiuyan Zhao, Fenglong Fan, Bai Liu
Unsupervised learning can address the general limitation of supervised learning that sense-tagged text is not available for most domains and is expensive to create. However, existing unsupervised learning probabilistic models are computationally expensive and converge slowly because of the large number of model parameters and their random initialization. This paper reduces noise jamming and the dimensionality of the models by using the proposed feature selection and initial parameter estimation. Experimental results show that the accuracy and efficiency of the proposed probabilistic model are obviously improved.
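A common instance of such a model is an EM-trained naive Bayes sense mixture over context words; the toy corpus, the choice of two senses, and the random initialization below are illustrative assumptions (the paper argues for informed initial estimates rather than random ones):

```python
import math
import random

random.seed(2)

# Toy contexts of an ambiguous word, e.g. "bank": river vs. finance senses.
corpus = [["river", "water"], ["river", "flow"], ["money", "deposit"],
          ["money", "interest"], ["water", "flow"], ["deposit", "interest"]]
vocab = sorted({w for doc in corpus for w in doc})
K = 2  # number of latent senses

# Random initialization of sense priors and per-sense word distributions.
prior = [1.0 / K] * K
word_p = [{w: 1.0 / len(vocab) + random.uniform(0, 0.01) for w in vocab}
          for _ in range(K)]

for _ in range(30):
    # E-step: posterior sense responsibilities for each context.
    resp = []
    for doc in corpus:
        scores = [math.log(prior[k]) + sum(math.log(word_p[k][w]) for w in doc)
                  for k in range(K)]
        m = max(scores)
        exps = [math.exp(s - m) for s in scores]
        z = sum(exps)
        resp.append([e / z for e in exps])
    # M-step: re-estimate priors and word probabilities (add-one smoothing).
    for k in range(K):
        prior[k] = sum(r[k] for r in resp) / len(corpus)
        counts = {w: 1.0 for w in vocab}
        for doc, r in zip(corpus, resp):
            for w in doc:
                counts[w] += r[k]
        total = sum(counts.values())
        word_p[k] = {w: c / total for w, c in counts.items()}
```

The parameter count here is K times the vocabulary size, which is exactly the quantity that feature selection shrinks to speed up convergence.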
Citations: 2