
Latest publications — 2015 International Conference on Man and Machine Interfacing (MAMI)

Solution of multi objective linear fractional programming problem by Taylor series approach
Pub Date : 2015-12-01 DOI: 10.1109/MAMI.2015.7456582
P. K. De, M. Deb
This article proposes a method for handling multi-objective linear fractional programming (MOLFP) problems in a fuzzy environment. Using the generalized mean value theorem, a first-order Taylor series approach converts the multi-objective linear fractional programming problem into a multi-objective linear programming problem by introducing an imprecise aspiration level for each objective. An additive weighted method is then used to obtain its solution. It has been observed that optimality is reached for different weight values of the membership functions of the different objective functions. The method is presented as an algorithm, and a sensitivity analysis of the fuzzy multi-objective linear fractional programming (FMOLFP) problem with respect to the aspiration level and tolerance limit is also presented. The approach is demonstrated with one numerical example.
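As a sketch of the linearization step the abstract describes (the symbols below are illustrative, not the authors' exact notation): each fractional objective is replaced by its first-order Taylor expansion about an aspiration point,

```latex
Z_i(x) = \frac{c_i^{T}x + \alpha_i}{d_i^{T}x + \beta_i}
\;\approx\;
Z_i(x_i^{*}) + \nabla Z_i(x_i^{*})^{T}\,(x - x_i^{*}),
```

where $x_i^{*}$ is the point associated with the $i$-th objective's aspiration level. The right-hand side is linear in $x$, so the MOLFP reduces to a multi-objective linear program, which the additive weighted method can then solve.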
Citations: 7
Hyperspectral imaging data atmospheric correction challenges and solutions using QUAC and FLAASH algorithms
Pub Date : 2015-12-01 DOI: 10.1109/MAMI.2015.7456604
Amol D. Vibhute, K. Kale, Rajesh K. Dhumal, S. Mehrotra
Recently, hyperspectral remote sensing technology has proven to be a valuable tool for obtaining reliable, detailed information for identifying different objects on the earth's surface with high spectral resolution. Due to atmospheric effects, valuable information may be lost from hyperspectral data. Hence it is necessary to remove these effects from hyperspectral data for reliable identification of objects on the earth's surface. Atmospheric correction is a very critical task for hyperspectral images. The present paper highlights the advantages of hyperspectral data, the challenges of pre-processing it, and solutions through the QUAC and FLAASH algorithms. Hyperspectral data acquired for Aurangabad district were used to test these algorithms. The results indicate that the size of a hyperspectral image can be reduced. The ENVI 5.1 software with the IDL language is an efficient way to visualize and analyze hyperspectral images. Implementation of atmospheric correction algorithms such as QUAC and FLAASH was carried out successfully. The QUAC model gives accurate and reliable results without any ancillary information, requiring only wavelength and radiometric calibration, and takes less time than FLAASH.
Citations: 27
Test bench automation to overcome verification challenge of SOC Interconnect
Pub Date : 2015-12-01 DOI: 10.1109/MAMI.2015.7456600
S. Mohanty, Suchismita Sengupta, S. K. Mohapatra
With the increasing number of Intellectual Property (IP) cores in today's systems on chip (SOC), verification of the Interconnect bus matrix becomes a critical and time-consuming task. Developing a verification platform for a complex SOC Interconnect takes several weeks, since it must support different kinds of protocols and a large number of master and slave ports with multiple transaction types. To reduce the overall time-to-market for SOC delivery, it is crucial to verify the Interconnect in a very narrow time frame. In this research article, we present a Test Bench (TB) automation solution for verifying the completeness and correctness of data as it passes through the interconnect fabric. Automation reduces verification effort by automatically creating an authenticated infrastructure, stimulus vectors, and a coverage model to support all transactions exchanged between masters and slaves within an SOC. This approach enables a protocol-independent scoreboard to check data integrity and verify different data path transactions to and from each port of the bus fabric. We applied the proposed solution to various bus matrix tests, which led to a 40% saving in the verification cycle.
Citations: 9
Architecture of efficient word processing using Hadoop MapReduce for big data applications
Pub Date : 2015-12-01 DOI: 10.1109/MAMI.2015.7456612
Bichitra Mandal, Srinivas Sethi, R. Sahoo
Understanding the characteristics of MapReduce workloads in Hadoop is key to making optimal and efficient configuration decisions and improving system efficiency. MapReduce is a very popular parallel processing framework for large-scale data analytics and has become an effective method for processing massive data using clusters of computers. In the last decade, the number of customers and services and the amount of information have increased rapidly, yielding big data analysis problems for service systems. Keeping up with the increasing volume of datasets requires efficient analytical capability to process and analyze data in two phases: mapping and reducing. Between the mapping and reducing phases, MapReduce requires a shuffle to globally exchange the intermediate data generated by the mapping. In this paper, a novel shuffling strategy is proposed to enable efficient data movement and reduction in MapReduce shuffling, based on the number of consecutive words and their counts in the word processor. To improve the scalability and efficiency of word processing in a big data environment, counting of repeated consecutive words with shuffling is implemented on Hadoop. It can be implemented on a widely adopted distributed computing platform, and also on large single-word-processor documents, using the MapReduce parallel processing paradigm.
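To ground the map/shuffle/reduce phases the abstract walks through, here is a minimal single-process word-count sketch (plain Python, not the paper's Hadoop implementation; the function names are illustrative):

```python
from collections import defaultdict

def map_phase(document):
    # Map: emit an intermediate (word, 1) pair for every word
    return [(word.lower(), 1) for word in document.split()]

def shuffle_phase(mapped_pairs):
    # Shuffle: group all intermediate values by key (word)
    groups = defaultdict(list)
    for key, value in mapped_pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    # Reduce: sum the counts for each word
    return {word: sum(counts) for word, counts in groups.items()}

def word_count(documents):
    mapped = [pair for doc in documents for pair in map_phase(doc)]
    return reduce_phase(shuffle_phase(mapped))
```

In Hadoop the shuffle is carried out by the framework across the cluster; the paper's contribution is a shuffling strategy over consecutive-word counts, which this toy version does not model.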
Citations: 14
LBP and Weber law descriptor feature based CRF model for detection of man-made structures
Pub Date : 2015-12-01 DOI: 10.1109/MAMI.2015.7456581
S. Behera, P. Nanda
In this paper, we propose a combined Local Binary Pattern (LBP) and Weber Law Descriptor (WLD) feature-based Conditional Random Field (CRF) model for the detection of man-made structures, such as buildings, in natural scenes. In natural scenes, a structure may have textural attributes, or some portions of an object may appear as textures. CRF model learning is carried out in feature space. The spatial contextual dependencies of the structures are captured by the intrascale LBP features and interscale WLD features. The CRF model learning problem is formulated in a pseudolikelihood framework, while the inferred labels are obtained by maximizing the posterior distribution over the feature space. The iterated conditional modes (ICM) algorithm is used to obtain the labels. The proposed algorithm was successfully tested on many images and was found to be better than Kumar's algorithm in terms of detection accuracy.
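As a sketch of the LBP feature the model builds on (a standard 8-neighbour formulation, not necessarily the authors' exact variant):

```python
import numpy as np

def lbp_codes(img):
    """Basic 8-neighbour Local Binary Pattern codes for interior pixels.

    Each interior pixel is compared with its 8 neighbours (clockwise from
    the top-left); every neighbour >= centre contributes one bit to an
    8-bit code, so each pixel gets a texture code in 0..255.
    """
    img = np.asarray(img, dtype=float)
    center = img[1:-1, 1:-1]
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros(center.shape, dtype=int)
    for bit, (dy, dx) in enumerate(offsets):
        neighbour = img[1 + dy:img.shape[0] - 1 + dy,
                        1 + dx:img.shape[1] - 1 + dx]
        codes += (neighbour >= center).astype(int) << bit
    return codes
```

A histogram of these codes over an image patch is the usual LBP texture descriptor fed to a downstream model such as the CRF.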
Citations: 1
Building semantics of E-agriculture in India: Semantics in e-agriculture
Pub Date : 2015-12-01 DOI: 10.1109/MAMI.2015.7456602
Sasmita Pani, Jibitesh Mishra
There exist various web-based agriculture information systems. These systems provide farmers with required information about different crops, soils, different farming techniques, etc. These web-based agriculture information systems deal with numerous kinds of data, but they do not maintain consistency and semantics in the data. Hence ontologies are used on the web to provide meaningful annotations and a vocabulary of terms for a given domain. In this paper we build an ontology for an agriculture system in the Web Ontology Language (OWL). The paper shows various classes and subclasses using OWL DL in Protege 5.0 for an e-agriculture information system. It also presents various classes and subclasses, and the relationships among the classes, in a UML class diagram for a web-based agriculture information system, or e-agriculture.
Citations: 3
Identification of plant species using non-imaging hyperspectral data
Pub Date : 2015-12-01 DOI: 10.1109/MAMI.2015.7456613
Amarsinh Varpe, Yogesh D. Rajendra, Amol D. Vibhute, S. Gaikwad, K. Kale
Hyperspectral non-imaging data cover the spectral range from 400–2500 nm, which makes it possible to identify each unique material on a surface. Plant species identification is a difficult task, both manually and computationally. In the present paper, we propose a plant species identification system based on non-imaging hyperspectral data and design our own database for the experiment. We identify various plant species and apply a support vector machine (SVM) algorithm for recognition. An overall accuracy of 91% was achieved with the SVM.
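A minimal sketch of the classification step, assuming a scikit-learn SVM and synthetic reflectance spectra (the band count, class means, and dataset here are invented for illustration; the paper uses its own field-collected spectral database):

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_bands = 50  # hypothetical number of spectral bands per sample

# Two hypothetical species with distinct mean reflectance levels
species_a = rng.normal(0.3, 0.02, size=(20, n_bands))
species_b = rng.normal(0.6, 0.02, size=(20, n_bands))
X = np.vstack([species_a, species_b])   # one spectrum per row
y = np.array([0] * 20 + [1] * 20)       # species labels

clf = SVC(kernel="linear").fit(X, y)    # train the SVM on the spectra
train_acc = clf.score(X, y)
```

In practice the spectra would be split into training and test sets, and the reported accuracy measured on held-out samples.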
Citations: 6
Performance evaluation of wireless propagation models for long term evolution using NS-3
Pub Date : 2015-12-01 DOI: 10.1109/MAMI.2015.7456599
Sanatan Mohanty, S. Mishra
Long Term Evolution (LTE) is the 4G wireless broadband access technology aimed at providing multimedia services over IP networks. It has been designed to improve system capacity and coverage, improve user experience through higher data rates and reduced latency, reduce deployment and operating costs, and integrate seamlessly with existing communication systems. This paper concerns wireless propagation models, which play a very significant role in the planning of any wireless network. A comparison is presented among different propagation models, in terms of both path loss and computational complexity, using the NS-3 simulator.
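For illustration, one of the simplest propagation models such a comparison can cover is the log-distance path loss model (the default parameter values below are hypothetical, not the paper's):

```python
import math

def log_distance_path_loss_db(d_m, d0_m=1.0, pl0_db=30.0, n=3.0):
    """Log-distance path loss: PL(d) = PL(d0) + 10 n log10(d / d0), in dB.

    d_m: distance in metres; d0_m: reference distance;
    pl0_db: path loss at the reference distance; n: path loss exponent.
    """
    return pl0_db + 10.0 * n * math.log10(d_m / d0_m)
```

With these defaults, at d = 10 m the model gives 30 + 10·3·log10(10) = 60 dB. NS-3 ships such models (e.g. its log-distance propagation loss model) ready to attach to a simulated channel.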
Citations: 3
A comparison study among GPU and map reduce approach for searching operation on index file in database query processing
Pub Date : 2015-12-01 DOI: 10.1109/MAMI.2015.7456608
A. Sahoo, Sundar Sourav Sarangi, Rachita Misra
As the amount of data in different forms increases day by day, it becomes very difficult to process. Unstructured data cannot be easily retrieved through query processing; normally, SQL queries act on structured data. To convert unstructured data into structured data, Hadoop provides the map-reduce approach. Instead of using the map function, we can use a GPU approach to process the data in parallel and then apply the reduce function to the processed data. Here we compare two approaches, the map-reduce approach and the gpu-reduce approach, by measuring the performance of a search operation on an index file. As Hadoop is a framework purely based on Java, we use the JCUDA programming language to implement the gpu-reduce approach.
Citations: 0
Fault diagnosis based on intelligent particle filter
Pub Date : 2015-12-01 DOI: 10.1109/MAMI.2015.7456586
Wei Sun, Jian Hou
Practical production systems are usually complex, nonlinear, and non-Gaussian. Unlike some other fault diagnosis methods, the particle filter can be applied effectively to nonlinear and non-Gaussian systems. The particle impoverishment problem exists in the traditional particle filter algorithm and influences the results of state estimation. In this paper, by analyzing the particle filter algorithm, we conclude that the general particle impoverishment problem stems from the impoverishment of particle diversity. We then design an intelligent particle filter (IPF) that uses a genetic strategy to deal with particle impoverishment; in fact, the general PF is a special case of the IPF with particular parameter settings. Experiments on a 160 MW unit fuel model show that the intelligent particle filter can increase particle diversity and improve state estimation results.
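For context, a minimal bootstrap particle filter on a toy 1-D random-walk model (illustrative only — the paper's IPF adds a genetic strategy on top of this basic predict/update/resample loop, which is not shown here):

```python
import numpy as np

def bootstrap_particle_filter(observations, n_particles=500, seed=0):
    """Bootstrap particle filter for a 1-D random-walk state observed
    in Gaussian noise (toy model, not the paper's 160 MW fuel model)."""
    rng = np.random.default_rng(seed)
    particles = rng.normal(0.0, 1.0, n_particles)  # initial particle cloud
    estimates = []
    for z in observations:
        # Predict: propagate particles through the random-walk dynamics
        particles = particles + rng.normal(0.0, 0.1, n_particles)
        # Update: weight particles by the Gaussian observation likelihood
        weights = np.exp(-0.5 * ((z - particles) / 0.5) ** 2)
        weights /= weights.sum()
        estimates.append(float(np.sum(weights * particles)))
        # Resample: multinomial resampling combats weight degeneracy,
        # but repeated resampling is what impoverishes particle diversity
        particles = rng.choice(particles, size=n_particles, p=weights)
    return estimates
```

The resampling step is exactly where impoverishment arises: a few high-weight particles are duplicated many times, collapsing diversity, which is the problem the IPF's genetic strategy targets.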
Citations: 0