
2017 International Conference on Computing, Communication and Automation (ICCCA): latest publications

Design of low power and area efficient half adder using pass transistor and comparison of various performance parameters
Pub Date : 2017-05-01 DOI: 10.1109/CCAA.2017.8230033
Prashant Kumar, N. Bhandari, Lokesh Bhargav, Rashmi Rathi, S. C. Yadav
The main objective of this paper is to design a combinational circuit with low power consumption and small area. We design a half adder circuit using three different logic styles: CMOS NAND gate logic, CMOS transmission gate logic, and NMOS pass transistor logic. All circuits are simulated and compared using Cadence Virtuoso IC 6.1.5 in 180 nm CMOS technology with a supply voltage of 5 V. We compare the performance parameters of these three logic styles, such as power consumption, number of transistors, propagation delay, rise time, and fall time.
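Not from the paper itself, but for reference: all three transistor-level implementations above realize the same half-adder behavior, which can be modeled with two Boolean operations per bit.

```python
# Behavioral model of a half adder: sum = A XOR B, carry = A AND B.
# The CMOS NAND, transmission gate and NMOS pass transistor designs in the
# paper are three circuit-level realizations of this same function.
def half_adder(a: int, b: int) -> tuple:
    """Return (sum, carry) for one-bit inputs a and b."""
    return a ^ b, a & b

# Print the full truth table.
for a in (0, 1):
    for b in (0, 1):
        s, c = half_adder(a, b)
        print(f"A={a} B={b} -> Sum={s} Carry={c}")
```

The performance comparison in the paper is about how each circuit style pays for this function in transistors, power and delay, not about the logic itself.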
Citations: 4
Towards location error resilient geographic routing for VANETs
Pub Date : 2017-05-01 DOI: 10.1109/CCAA.2017.8229890
Reena Kasana, Sushil Kumar, Omprakash Kaiwartya
Geographic routing has received a lot of attention from researchers all over the world due to the availability of low-cost Global Positioning System (GPS) devices. It is considered an efficient routing approach for large-scale networks and offers encouraging solutions for information dissemination in Vehicular Ad hoc Networks (VANETs). The efficacy and scalability of all geographic routing depend on the accuracy of the location information obtained from positioning systems. The related literature implicitly assumes perfect location information; however, such an assumption is unrealistic in the real world. Measured location information is inherently inaccurate, which degrades the performance of geographic routing. In this paper, a novel location error tolerant geographic routing (LETGR) scheme for vehicular environments is proposed that reduces the impact of location inaccuracy caused by instrument imprecision and obstacles in realistic, highly mobile scenarios. LETGR takes the statistical error characteristics into account in its next-forwarding-vehicle selection logic to maximize the probability of message delivery. To alleviate the effect of mobility, LETGR exploits future locations of vehicles instead of current locations. An Extended Kalman filter is used in the proposed algorithm to predict and correct the future locations of vehicles. The performance of the LETGR algorithm is evaluated via simulation, and the results are encouraging when the objective is to maximize the reception of data packets at the destination vehicle.
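The predict/correct cycle behind the filter can be illustrated with a minimal 1-D constant-velocity sketch (hypothetical noise parameters; the paper uses a full Extended Kalman filter over 2-D vehicle positions):

```python
# Minimal 1-D Kalman predict/correct sketch for a vehicle moving at a known
# velocity, with noisy GPS position measurements. All parameters (q, r, v)
# are illustrative, not the paper's values.
def kf_step(x, v, p, z, dt=1.0, q=0.01, r=4.0):
    # Predict: advance position by velocity; inflate uncertainty by process noise q.
    x_pred = x + v * dt
    p_pred = p + q
    # Correct: blend the prediction with the noisy measurement z.
    k = p_pred / (p_pred + r)          # Kalman gain (measurement noise r)
    x_new = x_pred + k * (z - x_pred)  # corrected position estimate
    p_new = (1 - k) * p_pred           # reduced uncertainty after the update
    return x_new, p_new

x, p = 0.0, 1.0  # initial position estimate and variance
for z in [10.2, 19.7, 30.5]:  # noisy GPS fixes of a vehicle moving ~10 m/s
    x, p = kf_step(x, 10.0, p, z)
```

The filtered position converges near the true trajectory while its variance shrinks, which is what lets LETGR rank candidate forwarders by predicted, rather than stale measured, locations.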
Citations: 0
An empirical comparison of models for dropout prophecy in MOOCs
Pub Date : 2017-05-01 DOI: 10.1109/CCAA.2017.8229935
Nidhi Periwal, Keyur Rana
MOOCs (Massive Open Online Courses) are offered on the web and have become a focal point for students who prefer e-learning. Despite the enormous enrollment in MOOCs, the number of students who drop out of these courses is very high. For MOOCs to succeed, their dropout rates must decrease. As the proportion of continuing and dropout students in MOOCs varies considerably, the class imbalance problem is observed in nearly all MOOC datasets. Researchers have developed models to predict dropout students in MOOCs using different techniques. The features that drive these models can be obtained during registration and from students' interaction with the MOOC portal. Using the results of these models, appropriate actions can be taken to retain students. In this paper, we create four models using various machine learning techniques on a publicly available dataset. After empirical analysis and evaluation of these models, we found that the model created with the Naïve Bayes technique performed well on the imbalanced class data of MOOCs.
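As a toy illustration of the Naïve Bayes idea (invented binary features and labels, not the paper's dataset): each class stores a prior and smoothed per-feature probabilities, and prediction picks the class with the highest log-posterior.

```python
import math

# Toy Bernoulli Naive Bayes for dropout prediction. Features per student:
# [watched_video, submitted_quiz]; label 1 = dropout, 0 = continuing.
X = [[1, 1], [1, 0], [0, 0], [0, 0], [1, 1], [0, 1]]
y = [0, 1, 1, 1, 0, 0]

def train(X, y):
    priors, likelihoods = {}, {}
    for c in set(y):
        rows = [x for x, label in zip(X, y) if label == c]
        priors[c] = len(rows) / len(X)
        # P(feature = 1 | class), with Laplace smoothing to avoid zero counts.
        likelihoods[c] = [(sum(r[j] for r in rows) + 1) / (len(rows) + 2)
                          for j in range(len(X[0]))]
    return priors, likelihoods

def predict(x, priors, likelihoods):
    def logpost(c):
        lp = math.log(priors[c])
        for j, f in enumerate(x):
            p1 = likelihoods[c][j]
            lp += math.log(p1 if f else 1 - p1)
        return lp
    return max(priors, key=logpost)

priors, likelihoods = train(X, y)
print(predict([0, 0], priors, likelihoods))  # inactive student -> predicts 1 (dropout)
```

The class priors are what make the technique sensitive to the imbalance the paper highlights: with very few continuing students, the prior alone pulls predictions toward the dropout class.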
Citations: 4
A combination of Internet of Things (IoT) and graph database for future battlefield systems
Pub Date : 2017-05-01 DOI: 10.1109/CCAA.2017.8230010
Gaurav Tripathi, Bhawna Sharma, S. Rajvanshi
The Internet of Things (IoT) has provided a technological platform for purposeful connectivity. IoT allows smart devices and sensors to sense, connect and control devices, even remotely. Development in the field of the Internet of Things has been enormous, and these solutions scale to very large deployments. The reach of the Internet of Things is growing fast and is predicted to extend to every sector of the computing world. We are already converging towards smart homes, smart highways, and smart cities. The defense sector of every nation is also affected by these developments. Defense solutions are primarily based on sensors and their deployments. The primary aim of sensory data is to derive information suitable for tactical decision-making and analysis in the future battlefield environment. From a soldier's vital health parameters to weapons, ammunition and location status, every data item has a purposeful meaning and is of particular importance to the tactical commander in the command center. We propose a novel mechanism that combines the Internet of Things with an emerging graph database for a better decision support system, creating situational awareness of every parameter of the soldiers in the battlefield. We present a simulated use-case scenario of the future battlefield in which the graph database is queried for situational-awareness patterns that give a tactical advantage over opponents.
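The combination can be sketched as a tiny in-memory property graph (node and edge names are invented for illustration; a real deployment would use a graph database such as Neo4j): IoT sensors become nodes, and a situational-awareness query is just a traversal over the edges.

```python
# Hypothetical in-memory property graph: soldier and sensor nodes, "wears"
# edges. A query walks edges to collect a soldier's latest sensor readings.
nodes = {
    "soldier:1":   {"type": "soldier", "unit": "alpha"},
    "sensor:hr1":  {"type": "heart_rate", "value": 148},
    "sensor:gps1": {"type": "gps", "value": (28.6, 77.2)},
}
edges = [
    ("soldier:1", "wears", "sensor:hr1"),
    ("soldier:1", "wears", "sensor:gps1"),
]

def readings(soldier_id):
    """Return {sensor_type: value} for every sensor the soldier wears."""
    return {nodes[dst]["type"]: nodes[dst]["value"]
            for src, rel, dst in edges
            if src == soldier_id and rel == "wears"}

print(readings("soldier:1"))
```

In a graph database the same traversal is a one-line pattern match, which is why the model suits "every parameter of every soldier" style queries from a command center.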
Citations: 7
Source-to-source translation: Impact on the performance of high level synthesis
Pub Date : 2017-05-01 DOI: 10.1109/CCAA.2017.8229944
Meena Belwal, Sudarshan TSB
Recent advances in the software industry, such as Microsoft using FPGAs (Field Programmable Gate Arrays) to accelerate its search engine Bing, and Intel's initiative to put its CPU alongside an Altera FPGA on the same chip, indicate FPGAs' potential as well as growing demand in the field of high-performance computing. FPGAs provide accelerated computation due to their flexible architecture. However, they create challenges for the system designer, as efficient design in terms of latency, power and energy demands hardware programming expertise. Hardware coding is a time-consuming as well as error-prone task. High Level Synthesis (HLS) addresses these challenges by enabling programmers to code in high-level languages (HLLs) such as C, C++, SystemC and CUDA, and translating this code to a hardware description language such as Verilog or VHDL. Even though HLS tools provide several optimizations, their performance is limited by implementation constraints. Some software constructs widely used in high-level languages, such as dynamic memory allocation, pointer-based data structures and recursion, are very hard to implement well in hardware, thereby restricting the performance of HLS. Source-to-source translation is a mechanism to optimize code in an HLL so that the compiler can perform better code optimization. This article investigates whether source-to-source translation, widely used for HLLs, can also benefit high level synthesis. For this study, the Bones source-to-source compiler is selected to translate C code to optimized C (Optimized-C) and OpenMP code. These three types of code (C, Optimized-C and OpenMP) were synthesized in LegUp HLS for three benchmarks; performance statistics were measured for all nine cases, and the analysis covered speedup, area reduction, power and energy consumption.
OpenMP code performed better than the original C code in terms of execution time (speedup range 1.86–3.49), area (gain range 1–6.55) and energy (gain range 1.86–3.55). However, Optimized-C code did not always perform better than the original C code in terms of execution time (speedup range 0.27–3.08), area (gain range 0.83–5.7) and energy (gain range 0.27–3.13). The power statistics observed were almost the same for all three input versions of the code.
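For clarity on how the reported ranges read (illustrative timings, not the paper's measurements): "speedup" and "gain" are baseline-over-variant ratios, so values below 1 mean the translated code was worse than the original C.

```python
# Speedup / gain as used in the results above: ratio of the baseline (original
# C) metric to the translated variant's metric. Values < 1 indicate regression.
def gain(baseline, variant):
    return baseline / variant

# Hypothetical benchmark timings in milliseconds.
t_c, t_openmp, t_optc = 3.49, 1.00, 12.9
print(round(gain(t_c, t_openmp), 2))  # > 1: OpenMP variant faster
print(round(gain(t_c, t_optc), 2))    # < 1: Optimized-C variant slower
```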
Citations: 1
Touch-n-play: An intelligent home automation system
Pub Date : 2017-05-01 DOI: 10.1109/CCAA.2017.8229942
Bababe B. Adam, A. Jha, Rajiv Kumar
The comfort of a home and of society is aided by the "things" that surround them. These things are connected to each other, directly or indirectly, via the Internet of Things. Having full remote control of these devices, with reasonable precision within the network whenever required, is a key element of home automation. Many aspects of home automation need further development. This research provides a solution for precise, direct control and automatic detection of the current state of devices using a microcontroller driven by an Android application. It also gives a practical implementation of home automation using Wi-Fi, compared with other technologies.
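A minimal sketch of the control idea, with invented device names and routes (the paper does not specify its protocol): the microcontroller side exposes simple request paths that an app can call over Wi-Fi to read or toggle a device's state.

```python
# Hypothetical device-side request handler: the Android app would send paths
# like "/toggle/lamp" or "/status/lamp" over Wi-Fi; the controller keeps the
# authoritative device state so the app can detect it automatically.
state = {"lamp": "off"}

def handle(path):
    """Dispatch a request path and return the device's resulting state."""
    action, device = path.strip("/").split("/")
    if device not in state:
        return "unknown device"
    if action == "toggle":
        state[device] = "on" if state[device] == "off" else "off"
    return state[device]

print(handle("/toggle/lamp"))  # prints "on"
```

Keeping the state on the controller, not in the app, is what allows "automatic detection of the current state": any client that queries the status route sees the same truth.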
Citations: 2
Study of various data compression techniques used in lossless compression of ECG signals
Pub Date : 2017-05-01 DOI: 10.1109/CCAA.2017.8229958
R. P. Tripathi, G. Mishra
Developments in technology are introducing various methods for tele-cardiology. Tele-cardiology spans many applications and is one of the fields of telemedicine that has seen excellent growth. Tele-cardiology procedures record an extremely large amount of real-time ECG data, so an efficient, lossless technique for compressing recorded ECG signals is required. In this paper we study and analyze various lossless data compression techniques used for ECG signals. We present an analysis of two of the most widely used time-domain techniques, AZTEC (Amplitude Zone Time Epoch Coding) and the Turning Point (TP) technique, and, among transformation-based compression techniques, a study of the Discrete Cosine Transform (DCT) combined with Huffman coding and of Empirical Mode Decomposition (EMD). The overall performance of these techniques is studied and analyzed on the basis of two main parameters: the compression ratio (CR) and the percent root-mean-square difference (PRD). We used the database of the physionet.org website for the calculation of CR and PRD, and we calculated and compared the CR and PRD values using all the techniques discussed above on 28 sets of recorded data.
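The two evaluation metrics can be computed as follows, on toy numbers (one common form of PRD, without mean subtraction; the paper does not spell out which variant it uses):

```python
import math

# CR = (bits before compression) / (bits after compression).
def compression_ratio(original_bits, compressed_bits):
    return original_bits / compressed_bits

# PRD = 100 * sqrt( sum((x - x_rec)^2) / sum(x^2) ), comparing the original
# signal x with the reconstructed signal x_rec.
def prd(x, x_rec):
    num = sum((a - b) ** 2 for a, b in zip(x, x_rec))
    den = sum(a ** 2 for a in x)
    return 100 * math.sqrt(num / den)

x = [1.0, 2.0, 3.0, 4.0]        # toy "original" ECG samples
x_rec = [1.1, 1.9, 3.0, 4.2]    # toy reconstruction after compression
print(compression_ratio(4 * 11, 4 * 5))  # e.g. 11-bit samples coded in 5 bits -> 2.2
print(round(prd(x, x_rec), 2))           # 4.47
```

For a truly lossless technique x_rec equals x and PRD is 0; PRD matters when comparing near-lossless or transform-based schemes.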
Citations: 6
A hybrid approach for image retrieval using visual descriptors
Pub Date : 2017-05-01 DOI: 10.1109/CCAA.2017.8229965
Ruchi Jayaswal, Jaimala Jha
Image retrieval using visual features is an active research field in image processing, with applications in business, medical imaging, geographical imaging, and more. In this work, we propose and implement a fused image retrieval technique that combines an image's HSV histogram color feature with LBP and SFTA texture features. Standardized Euclidean distance is used as the similarity check. The Wang image repository, containing 1000 images categorized into 10 classes, is used for experimental evaluation. Experimental outcomes show that the proposed system yields better precision than the conventional methods also covered in this paper.
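The similarity check can be illustrated with toy feature vectors (real ones would concatenate the HSV histogram, LBP and SFTA features): standardized Euclidean distance divides each squared difference by that dimension's variance over the database, so no single feature scale dominates the fused descriptor.

```python
import math

# Standardized Euclidean distance between two feature vectors u and v,
# given the per-dimension variances computed over the whole image database.
def std_euclidean(u, v, variances):
    return math.sqrt(sum((a - b) ** 2 / s
                         for a, b, s in zip(u, v, variances)))

query     = [0.2, 0.5, 0.1]        # toy fused feature vector of the query
candidate = [0.4, 0.5, 0.3]        # toy fused feature vector of a DB image
variances = [0.04, 0.01, 0.04]     # per-dimension variance across the DB
print(round(std_euclidean(query, candidate, variances), 3))  # 1.414
```

Retrieval then ranks all database images by this distance to the query and returns the nearest ones.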
Citations: 11
A novel clustering framework using farthest neighbour approach
Pub Date : 2017-05-01 DOI: 10.1109/CCAA.2017.8229793
Suvendu Kanungo, A. Shukla
In this digital world we face a flood of data while remaining starved of knowledge. Mining is needed to extract hidden patterns from the vast amount of widely available data. Clustering is one such useful mining tool for handling this situation, carried out through the crucial steps of cluster analysis: the process of grouping patterns into clusters based on similarity. Partition-based clustering algorithms are widely used in diverse applications such as pattern analysis, image segmentation and identification systems. Among the variations of partition-based clustering, the K-means algorithm has attracted much attention across fields of research due to its simplicity and ease of implementation. A severe problem with the algorithm is that selecting the initial centroids is difficult, and it may converge to a local optimum of the criterion function if the initial centroids are not chosen well. Additionally, it requires prior knowledge of the number of clusters to be formed, and the computation of K-means is expensive. The K-means algorithm is a two-step process comprising an initialization step and an assignment step. This paper works on the initialization step and proposes an efficient enhanced K-means clustering algorithm that eliminates the deficiency of the existing one. A new initialization approach is introduced to draw initial cluster centers for the K-means algorithm, and the proposed technique is compared with the standard K-means technique.
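The general farthest-neighbour (farthest-first) idea behind such initialization can be sketched as follows (a generic illustration, not the paper's exact algorithm): start from one point, then repeatedly pick the point farthest from all centres chosen so far, so the initial centres are well spread out.

```python
# Farthest-first centre initialization for K-means on 2-D points.
# The first centre is taken deterministically for illustration; each further
# centre is the point maximizing its distance to the nearest existing centre.
def farthest_first_centres(points, k):
    centres = [points[0]]
    while len(centres) < k:
        def dist_to_nearest_centre(p):
            return min((p[0] - c[0]) ** 2 + (p[1] - c[1]) ** 2
                       for c in centres)
        centres.append(max(points, key=dist_to_nearest_centre))
    return centres

pts = [(0, 0), (0, 1), (10, 0), (10, 1), (5, 10)]
print(farthest_first_centres(pts, 3))  # [(0, 0), (5, 10), (10, 1)]
```

Because the centres start far apart, standard K-means iterations that follow are less likely to collapse two clusters into one and converge to a poor local optimum.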
Citations: 2
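One plausible reading of the farthest-neighbour seeding named in the title above can be sketched as follows: start from one random point, then repeatedly add the point farthest from all centroids chosen so far. This is an assumption based on the title and abstract; the paper's exact initialization procedure may differ.

```python
import numpy as np

def farthest_neighbour_init(X, k, rng=None):
    """Pick k initial centroids for K-means: start from a random point,
    then repeatedly add the point whose distance to its nearest already
    chosen centroid is largest."""
    rng = np.random.default_rng(rng)
    centroids = [X[rng.integers(len(X))]]
    for _ in range(k - 1):
        # distance of every point to its nearest chosen centroid
        d = np.min([np.linalg.norm(X - c, axis=1) for c in centroids], axis=0)
        centroids.append(X[np.argmax(d)])
    return np.array(centroids)

# toy data: three well-separated blobs of 20 points each
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(loc, 0.1, size=(20, 2))
               for loc in ((0, 0), (5, 5), (0, 5))])
seeds = farthest_neighbour_init(X, 3, rng=1)
# each seed lands in a different blob, avoiding the poor random starts
# that make plain K-means converge to a local optimum
```

The seeds would then be passed to the usual K-means assignment and update loop in place of random initial centroids.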
Risk for big data in the cloud
Pub Date : 2017-05-01 DOI: 10.1109/CCAA.2017.8229815
Nayan Chitransh, C. Mehrotra, A. Singh
With the rapid technological advancements of recent years, data generation has increased enormously, and the resulting volume of data is a major issue for organizations; social media, for example, floods in unmanageable amounts of new data every day. In this paper we discuss the various risks and issues associated with big data. Big Data, as the term describes, refers to large volumes of data, either structured or unstructured. Nowadays data comes from so many sources that conventional database systems cannot handle it, which is why Big Data systems are needed; however, Big Data requires a huge commitment of hardware and processing resources, which makes it costly. To provide Big Data services to every user, we turn to cloud computing, which makes Big Data implementations cheaper. Cloud computing is a technology that offers shared computing resources such as servers and devices instead of dedicated personal ones; services over cloud computing are delivered via the Internet.
Citations: 1