
Second International Conference on the Innovative Computing Technology (INTECH 2012): Latest Publications

Detection evaluation and testing region incoming people's in a simple camera view
Pub Date : 2012-09-01 DOI: 10.1109/INTECH.2012.6457804
Abderrahmane Ezzahout, Y. Hadi, R. Thami
Moving object detection is a crucial phase of automatic video surveillance systems; in particular, people detection is the first important step in any video analysis pipeline, which can be divided into several stages such as motion estimation and people tracking. Several methods have been developed for separating foreground and background pixels in video surveillance. This paper focuses on a computable evaluation of several people detection algorithms on four different video sequences. Our study is based on quantitative and qualitative results obtained by calculating the loss of foreground pixels. Three methods have been evaluated using two metrics: False Negative Error (FNE) and False Positive Error (FPE). From the results we select the algorithm that minimizes the error (%). In practice, the technique that dominates video surveillance applications is the statistical representation of foreground pixels known as the Gaussian Mixture Model (GMM). In the second part of this paper we monitor people entering a supervised region and trigger an alarm system to signal a person's presence.
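The kind of pixel-level evaluation described above can be sketched with OpenCV's Gaussian Mixture Model background subtractor; the FNE/FPE formulas below are assumed standard pixel-level definitions, not necessarily the exact ones used in the paper.

```python
import cv2
import numpy as np

# GMM background subtraction: the statistical foreground model the abstract refers to.
subtractor = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16, detectShadows=False)

def foreground_mask(frame):
    """Return a binary (0/1) foreground mask for one video frame."""
    mask = subtractor.apply(frame)
    return (mask > 0).astype(np.uint8)

def pixel_errors(detected, ground_truth):
    """False Negative Error and False Positive Error, in percent.

    FNE: fraction of true foreground pixels missed by the detector.
    FPE: fraction of true background pixels wrongly flagged as foreground.
    These are assumed definitions; the paper's exact formulas are not given here.
    """
    fg = ground_truth > 0
    bg = ~fg
    fne = 100.0 * np.logical_and(fg, detected == 0).sum() / max(fg.sum(), 1)
    fpe = 100.0 * np.logical_and(bg, detected == 1).sum() / max(bg.sum(), 1)
    return fne, fpe
```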
Citations: 1
Min-Sum Algorithm based efficient high level methodology for design, simulation and hardware implementation of LDPC decoders
Pub Date : 2012-09-01 DOI: 10.1109/INTECH.2012.6457751
A. Madi, A. Ahaitouf, A. Mansouri
A Variable Node Processing Unit (VNPU) and a Check Node Processing Unit (CNPU) are designed to be used in Low Density Parity Check (LDPC) decoding with the Min-Sum Algorithm (MSA). The designed blocks are fully parallel and flexible enough to be used for different block lengths when regular (3, 6) LDPC codes are required. The proposed VNPU and CNPU were first designed and implemented in software with the Simulink tool, following a modular design approach. In a second step, these blocks were described and simulated using the Very High Speed Integrated Circuits Hardware Description Language (VHDL). Comparison between these two implementations shows that the proposed high-level methodology is efficient for testing and validating digital circuits before they are implemented on the desired Field Programmable Gate Array (FPGA) device.
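For reference, a minimal software sketch of the min-sum updates that the CNPU and VNPU blocks implement in hardware (function and variable names here are illustrative, not taken from the paper):

```python
import numpy as np

def check_node_update(incoming):
    """Min-sum CNPU: for each edge, output the sign product and the minimum
    magnitude of all *other* incoming variable-to-check messages."""
    incoming = np.asarray(incoming, dtype=float)
    out = np.empty_like(incoming)
    for i in range(len(incoming)):
        others = np.delete(incoming, i)
        out[i] = np.prod(np.sign(others)) * np.min(np.abs(others))
    return out

def variable_node_update(channel_llr, incoming):
    """VNPU: each outgoing message is the channel LLR plus all *other*
    incoming check-to-variable messages (leave-one-out sums)."""
    incoming = np.asarray(incoming, dtype=float)
    total = channel_llr + incoming.sum()
    return total - incoming
```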
Citations: 0
LMSGENERATOR: The contribution of XML technology and model transformations
Pub Date : 2012-09-01 DOI: 10.1109/INTECH.2012.6457811
R. Dehbi, Mohamed Talea, A. Tragha
Generative engineering is revolutionizing software development just as automation and componentization revolutionized manufacturing. Key technologies for this approach are model transformations and XML technologies. In this paper we show the contribution of these two techniques to the implementation of a learning management system generator, named LMSGENERATOR, which is multi-target and adaptable to different runtime environments. Our contribution introduces the basics of a new programming approach for virtual learning environments. This approach is based on generative programming: it integrates user specifications (abstract models) and the desired technologies to produce software bricks, then assembles them into a solution adapted to the domain and to users' needs. We present our approach, its implementation in LMSGENERATOR, and its two generation phases based on transformation models and abstract XML descriptions, and we promote the idea that this approach, built on open, adaptable, and scalable models, yields open, adaptable, and scalable learning environments. In particular, thanks to generative programming, new technologies or new needs can be integrated at any time simply by changing the generators and their transformation rules.
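As an illustration of the model-to-text transformation idea (the paper's actual XML schema and generators are not reproduced here; the element and attribute names below are hypothetical), a toy transformation from an abstract XML description to target-specific artifacts might look like this:

```python
import xml.etree.ElementTree as ET

# Hypothetical abstract description of LMS bricks (not the paper's schema).
abstract_model = """
<lms>
  <module name="forum" target="php"/>
  <module name="quiz" target="php"/>
</lms>
"""

def generate_bricks(xml_text):
    """Toy model-to-text transformation: map each abstract module element
    to the name of a generated, target-specific artifact."""
    root = ET.fromstring(xml_text)
    return [f"{m.get('name')}_{m.get('target')}.generated"
            for m in root.findall("module")]

print(generate_bricks(abstract_model))  # ['forum_php.generated', 'quiz_php.generated']
```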
Citations: 2
Road-signs detection by using of colour image segmentation method based on the 3D content correlation
Pub Date : 2012-09-01 DOI: 10.1109/INTECH.2012.6457820
H. Bendahri, A. Ezzine
This paper presents a method for the segmentation of color images based on the statistical nature of the image. The method is designed to address the problem of detecting road signs in images taken by a sensor mounted on a vehicle under natural conditions.
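The abstract does not detail the statistical segmentation itself; as a generic stand-in (not the paper's method), the sketch below isolates red, sign-like regions by colour thresholding in HSV space with OpenCV, a common first step in road-sign detection pipelines.

```python
import cv2

def red_sign_candidates(bgr_image):
    """Binary mask of red-ish regions (generic colour thresholding,
    not the paper's statistical segmentation)."""
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    # Red wraps around the hue axis, so two ranges are combined.
    lower = cv2.inRange(hsv, (0, 70, 50), (10, 255, 255))
    upper = cv2.inRange(hsv, (170, 70, 50), (180, 255, 255))
    return cv2.bitwise_or(lower, upper)
```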
Citations: 0
Building semantic richness among natural language content
Pub Date : 2012-09-01 DOI: 10.1109/INTECH.2012.6457821
S. Al-reyaee, P. Vijayakumar
In this work we propose an inclusive vector to capture the key words available in a natural language database. The inclusive vectors are generated by extracting words from the source and cited items of records published in the ISI Thompson Citation Indexes. The proposed inclusive vector exhibits related words and the degree of their relationships. We present results on the implications of using these vectors for the automatic classification of natural language text. In this system, documents are first preprocessed, and extra words as well as word stems are identified. We use an enhanced algorithm to derive further semantic relations between the cited and source items in citation databases.
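The abstract gives no formula for the inclusive vector; purely as an illustration of relating keywords extracted from source and cited items, the sketch below counts co-occurrences of terms across hypothetical source/cited keyword pairs (data and names are invented, not the paper's).

```python
from collections import Counter
from itertools import product

# Hypothetical (source keywords, cited keywords) pairs from citation records.
records = [
    (["semantic", "classification"], ["vector", "classification"]),
    (["citation", "semantic"], ["semantic", "indexing"]),
]

def related_terms(records):
    """Count how often a source keyword co-occurs with a cited keyword;
    higher counts suggest a stronger relationship between the two terms."""
    counts = Counter()
    for source_terms, cited_terms in records:
        for pair in product(set(source_terms), set(cited_terms)):
            counts[pair] += 1
    return counts

print(related_terms(records).most_common(3))
```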
Citations: 0
Information security, new approach
Pub Date : 2012-09-01 DOI: 10.1109/INTECH.2012.6457815
H. Elachgar, B. Regragui
The present paper deals with the fourth wave of information security as a new approach to the governance of information security. After introducing the four waves of information security, the paper focuses on the last one, which will be part of our thesis. In addition, following the PDCA approach (Plan, Do, Check, Act), we establish an inventory of the information system with the SoM (Statement of Maturity), a risk assessment of assets, and a business continuity plan to ensure the resumption of IT.
Citations: 7
A Cloud computing approach based on mobile agents for Web services discovery
Pub Date : 2012-09-01 DOI: 10.1109/INTECH.2012.6457745
Saouli Hamza, Benharkat Aïcha-Nabila, K. Okba, A. Youssef
The increasing use of Web services, together with the growing interest in Cloud computing technology and its capacity for computation, storage, and Cloud services, means that the traditional discovery mechanisms must be re-examined. This specificity of discovery over the cloud led us to propose a new approach based on mobile agents dedicated to Web services discovery. More precisely, we propose a new algorithm for comparing the client's request with the descriptions of the Web services. Our system is composed of two Cloud areas: the first handles keyword-based search, and the second filters the Web services that were found. This discovery is performed by an algorithm based on the calculation of similarities between the request and the descriptions of the Web services.
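The abstract does not specify the similarity measure; a simple keyword-overlap (Jaccard) score is one common way such a request-to-description comparison is done, sketched below with hypothetical service names.

```python
def keyword_similarity(request_keywords, description_keywords):
    """Jaccard overlap between the client's request keywords and a Web
    service description; an assumed stand-in for the paper's comparison."""
    req = {k.lower() for k in request_keywords}
    desc = {k.lower() for k in description_keywords}
    if not req or not desc:
        return 0.0
    return len(req & desc) / len(req | desc)

# Rank candidate services (hypothetical) by similarity to the request.
candidates = {"weather_ws": ["weather", "forecast", "city"],
              "stock_ws": ["stock", "price", "ticker"]}
request = ["city", "weather"]
ranked = sorted(candidates,
                key=lambda s: keyword_similarity(request, candidates[s]),
                reverse=True)
print(ranked)  # ['weather_ws', 'stock_ws']
```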
Citations: 16
Towards effective processing of large text collections
Pub Date : 2012-09-01 DOI: 10.1109/INTECH.2012.6457784
J. Szymański, H. Krawczyk
In this article we describe an approach to the parallel implementation of elementary operations for textual data categorization. In the experiments we evaluate parallel computation of similarity matrices and of the k-means algorithm. The test datasets were prepared as graphs created from Wikipedia articles related by links. When we create the clustering data packages, we compute pairs of eigenvectors and eigenvalues for visualization of the datasets. We describe the method used to evaluate clustering quality. Finally we discuss the achieved results and point out some improvements and perspectives for future development.
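A serial analogue of the operations named in the abstract (similarity matrix, k-means, eigenpairs for visualization) can be sketched with scikit-learn and NumPy; this is illustrative only and not the authors' parallel implementation, and the sample documents are invented.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity
from sklearn.cluster import KMeans

documents = ["graph of wikipedia articles", "parallel similarity computation",
             "k-means clustering of text", "eigenvectors for visualization"]

# Vectorize the documents and build the pairwise similarity matrix.
X = TfidfVectorizer().fit_transform(documents)
similarity = cosine_similarity(X)

# Cluster the documents with k-means.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

# Eigenpairs of the similarity matrix; the top two eigenvectors can serve
# as 2-D plotting coordinates for visualizing the dataset.
eigenvalues, eigenvectors = np.linalg.eigh(similarity)
coords = eigenvectors[:, -2:]
```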
Citations: 0
Simulation based evaluation of MANET routing protocols for static WSN
Pub Date : 2012-09-01 DOI: 10.1109/INTECH.2012.6457750
Ali A. S. Ihbeel, H. Sigiuk, Abdelhadi A. Alhnsh
Wireless Sensor Networks (WSNs) can be regarded as distinguished Ad Hoc networks used for specific applications. Since a WSN consists of potentially hundreds of low-cost, small-size, battery-powered sensor nodes, it has more potential than other Ad Hoc networks to be deployed in many emerging applications. WSNs raise many new challenges, such as the design of embedded sensors and of wireless networking technologies, in other words the routing protocols. Many Ad Hoc routing protocols, such as Ad Hoc On-demand Distance Vector (AODV), Dynamic Source Routing (DSR), and Destination Sequenced Distance Vector (DSDV), developed particularly for wireless mobile Ad Hoc networks (MANETs), have been shown to perform well on MANETs. In this paper, a performance study is carried out, using several simulated network models, to investigate how well the AODV, DSR, and DSDV routing protocols work on WSNs in static environments, using the NS-2 simulator. The study focuses on the impact of network size, network density (up to 500 nodes), and the number of sources (data connections). The performance metrics used in this work are average end-to-end delay, packet delivery fraction, routing overhead, and average energy consumption per received packet.
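The metrics named in the abstract can be illustrated with simplified trace records; the record structure below is hypothetical and is not NS-2's actual trace format.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Packet:
    sent_at: float
    received_at: Optional[float]  # None if the packet was dropped

def performance_metrics(data_packets: List[Packet], routing_packets: int):
    """Packet delivery fraction (%), average end-to-end delay, and routing
    overhead, computed from simplified trace records (illustration only)."""
    delivered = [p for p in data_packets if p.received_at is not None]
    pdf = 100.0 * len(delivered) / len(data_packets)
    delay = (sum(p.received_at - p.sent_at for p in delivered) / len(delivered)
             if delivered else 0.0)
    overhead = routing_packets / max(len(delivered), 1)
    return pdf, delay, overhead
```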
Citations: 11
Optimization of smart choice of shares portfolio using artificial intelligence
Pub Date : 2012-09-01 DOI: 10.1109/INTECH.2012.6457769
M. Elhachloufi, Z. Guennoun, F. Hamza
In this paper, we present an approach for optimal portfolio choice. The approach is divided into two parts. The first part selects, from an initial portfolio, the relevant shares that have a positive influence on portfolio return and risk using regression neural networks, i.e. shares with low risk and high returns; these shares form a sub-portfolio. In the second part, we use genetic algorithms to find the proportions that optimize this sub-portfolio, with semi-variance as the risk measure. This approach achieves a financial gain in terms of cost and tax reduction, as well as a reduction in computational load during the optimization phase.
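The semi-variance risk measure mentioned above can be sketched as the usual downside variance of portfolio returns; this is an assumed textbook definition, since the abstract does not give the paper's exact formulation, and the sample data are invented.

```python
import numpy as np

def semivariance(returns, weights, threshold=0.0):
    """Downside (semi-)variance of a portfolio: variance computed only over
    the portfolio returns that fall below the threshold."""
    portfolio_returns = np.asarray(returns) @ np.asarray(weights)
    downside = np.minimum(portfolio_returns - threshold, 0.0)
    return np.mean(downside ** 2)

# Example: two assets, equal weights, three historical return observations.
history = np.array([[0.01, -0.02],
                    [0.005, 0.01],
                    [-0.01, 0.00]])
print(semivariance(history, [0.5, 0.5]))
```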
Citations: 0