This paper investigates the performance of hard-decision and soft-data fusion schemes for cooperative spectrum sensing (CSS) in a noisy Rayleigh-faded channel. Hard-decision fusion operates on the local binary decisions, while soft-data fusion operates on the energy values obtained from the different cognitive radio (CR) users; both are performed at the fusion center (FC), where a final decision on the status of the primary user (PU) is made. More precisely, this work analyzes the performance of CSS with several hard-decision fusion schemes (OR-rule, AND-rule, and majority-rule) and soft-data fusion schemes (square-law selection (SLS), maximal ratio combining (MRC), square-law combining (SLC), and selection combining (SC)). Toward that end, novel closed-form analytic expressions are derived for the probability of detection under all soft schemes in the Rayleigh fading channel. The hard-decision and soft-data fusion schemes are compared for different network parameters: time-bandwidth product, average sensing-channel signal-to-noise ratio (SNR), and detection threshold. The optimal detection thresholds that minimize the total error rate are also indicated for both soft and hard schemes.
{"title":"Analysis of Hard-Decision and Soft-Data Fusion Schemes for Cooperative Spectrum Sensing in Rayleigh Fading Channel","authors":"S. Nallagonda, Y. Kumar, P. Shilpa","doi":"10.1109/IACC.2017.0057","DOIUrl":"https://doi.org/10.1109/IACC.2017.0057","url":null,"abstract":"This paper investigates the performance of hard-decision and soft-data fusion schemes for a cooperative spectrum sensing (CSS) in noisy-Rayleigh faded channel. Hard-decision fusion operations on the local binary decisions and soft-data fusion operations on the energy values obtained from the different cognitive radio (CR) users are performed at fusion center (FC)and a final decision on the status of a primary user (PU) is made. More precisely, the performance of CSS with various hard-decision fusion schemes (OR-rule, AND-rule, and majority-rule) and soft-data fusion schemes (square law selection (SLS), maximal ratio combining (MRC), square law combining (SLC), and selection combining (SC)) is analyzed in this work. Towardsthat, novel and closed-form analytic expressions are derived for probability of detection under all soft schemes in Rayleigh fading channel. A comparative performance between hard-decision and soft-data fusion schemes has been illustrated for different network parameters: time-band width product, average sensingchannel signal-to-noise ratio (SNR), and detection threshold. The optimal detection thresholds for which minimum total error rate is obtained for both soft and hard schemes are also indicated.","PeriodicalId":248433,"journal":{"name":"2017 IEEE 7th International Advance Computing Conference (IACC)","volume":"2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130995488","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The deployment of biosensors is increasing with the advancement of bio-electronics. Owing to the challenging operating conditions of biosensors, present applications can capture only certain types of signals. Detection also depends on the type of bio-transducer used for signal generation, so the signals generated by biosensors cannot be considered error free. This paper reviews existing research contributions toward biosensor validation and finds that no computational framework performs validation efficiently, as most techniques use either a clinical or an experimental approach, which limits the validation of bio-signal performance. Therefore, this paper presents a novel computational framework that uses an enhanced version of an auto-associative neural network and significantly improves the validation performance of biosensors compared to conventional optimization techniques.
{"title":"Optimized Performance Validation of Biosensors with High Fault Tolerance","authors":"Subhas A. Meti, V. Sangam","doi":"10.1109/IACC.2017.0076","DOIUrl":"https://doi.org/10.1109/IACC.2017.0076","url":null,"abstract":"The deployment of biosensors is increasing with advancement of bio-electronics. Owing to challenging state of working of biosensors, the present applications of biosensors are capable of capturing only certain types of signals till date. This detection also depends on the type of the bio transducers used for signal generation and therefore, the signals generated from biosensors cannot be considered to be error free. This paper has reviewed some of the existing research contributions towards biosensor validation to find that there are no computational framework that efficiently performs validation as majority of the technique uses either clinical approach or experimental approach, which limits the validation of bio signal performance. Therefore, this paper presents a novel computational framework that uses enhanced version of auto-associative neural network and significantly optimizes the validation performance of biosensors as compared to other conventional optimization techniques.","PeriodicalId":248433,"journal":{"name":"2017 IEEE 7th International Advance Computing Conference (IACC)","volume":"20 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128378555","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The heterogeneity and complexity of distributed computing are increasing rapidly as high-speed processors become widely available. In modern computing environments, resources are dynamic, heterogeneous, geographically spread over different computational domains, and connected through high-speed communication links of differing capacity. In a large distributed environment, a modular program can be considered a set of loosely coupled interacting modules/tasks (all modules/tasks are assumed to be simultaneously and independently executable) and represented by the task interaction graph (TIG) model. Parallel execution of these interacting modules/tasks is highly preferred to reduce the overall completion time of a program. During parallel execution, however, the communication overhead due to message passing may increase the cost of parallel execution. Parallel execution is therefore chosen if and only if the parallel execution cost together with the communication overhead is less than the serial execution cost, so resources must be allocated such that the advantage of parallel execution is maintained. In this paper, for any task and resource graph, we propose a heuristic-based approach to find an optimal number of tasks that can be executed in parallel on a set of resources where they can be executed.
{"title":"A Heuristic-Based Resource Allocation Approach for Parallel Execution of Interacting Tasks","authors":"Uddalok Sen, M. Sarkar, N. Mukherjee","doi":"10.1109/IACC.2017.0158","DOIUrl":"https://doi.org/10.1109/IACC.2017.0158","url":null,"abstract":"Heterogeneity and complexity of distributed computing increases rapidly as high speed processors are widely available. In modern computing environment, resources are dynamic, heterogeneous, geographically spread over different computational domains and connected through different capacity of high speed communication links. In a large distributed environment a modular program can be considered as a set of loosely coupled interacting modules/tasks (since all the modules/tasks are considered as simultaneously and independently executable) and represented by task interaction graph (TIG) model. Parallel execution of these interacting modules/tasks is highly preferred to reduce the overall completion time of a program. During parallel execution of tasks, the communication overhead due to message passing may increase the cost of parallel execution. Parallel execution of tasks is chosen if and only if parallel execution cost together with communication overhead is less than serial execution cost. So, resources are to be allocated such that advantage of parallel execution is maintained. In this paper, for any task and resource graph, we propose a heuristics based approach to find out an optimal number of tasks that can be executed in parallel on a set of resources where they can be executed.","PeriodicalId":248433,"journal":{"name":"2017 IEEE 7th International Advance Computing Conference (IACC)","volume":"99 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128383711","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
To enhance the quality of vehicular communication, vehicular ad hoc networks (VANETs) have to be improved to handle traffic-related issues and maintain privacy. To fulfill these needs, many schemes have been proposed in the last decade. The Identity-based Batch Verification (IBV) scheme is one such scheme, which makes VANET more secure and efficient. The main objectives of this scheme are maintaining privacy through anonymity and reducing the verification time of messages by verifying them in a batch. This paper highlights the security issues of the current IBV scheme and introduces the concept of randomly changing the anonymous identity with time as well as location, to prevent security attacks and maintain privacy. The performance of the scheme is evaluated in terms of delay and transmission overhead.
{"title":"Enhancing Identity Based Batch Verification Scheme for Security and Privacy in VANET","authors":"P. Mahapatra, A. Naveena","doi":"10.1109/IACC.2017.0088","DOIUrl":"https://doi.org/10.1109/IACC.2017.0088","url":null,"abstract":"To enhance the quality of vehicular communication, vehicular ad hoc network has to be improved to handle the traffic related issues and maintain privacy. In order to fulfill the same, many schemes have been proposed in last decade. The Identity based Batch verification(IBV) scheme is one such scheme, which makes VANET more secure and efficient. Maintaining privacy through anonymity and reduction of verification time of messages by verifying them in Batch, are the main objectives of this scheme. This paper highlights the security issues of the current IBV scheme and introduces the concept of the random change of Anonymous Identity with time as well as location, to prevent the security attack and to maintain the privacy. In this scheme, performances are evaluated in terms of delay and transmission overhead.","PeriodicalId":248433,"journal":{"name":"2017 IEEE 7th International Advance Computing Conference (IACC)","volume":"108 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133475597","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Over the past few decades, designers have concentrated on different techniques for designing low-power chips. Power consumption can be reduced by minimizing the leakage power and leakage current in a given design. Power consumption is a main criterion in digital memory circuits, and many techniques are available to reduce it. Stacked Keeper Body Bias (SKBB) is one such technique, applied here to the conditional circuitry of the memory block; the modifications and replacements were made in that conditional circuitry. The bit line, write-line decoder, and priority encoder were used to design an efficient 2-byte CAM. The results show that it dissipates 50% less power than the conventional CAM design.
{"title":"Stacked Keeper with Body Bias Approach to Reduce Leakage Power for 2-Byte CAM Using 180NM CMOS Technology","authors":"K. Naresh, V. Madhavarao, M. Sravanthi, M. Ratnam","doi":"10.1109/IACC.2017.0113","DOIUrl":"https://doi.org/10.1109/IACC.2017.0113","url":null,"abstract":"Over the past few decades, the designers concentrating on different techniques to design low power chips. The power consumption can be reduced by minimizing the leakage power and leakage current in that specified design. Power consumption is main criteria in digital memory circuits, to reduce and to recover the power, we have many techniques are available. A Stacked Keeper Body Bias (SKBB) is one of the technique is applied to the conditional circuitry of the memory block. The modification and replacements were done in the conditional circuitry. The Bit Line, Write Line decoder, Priority Encoder was used to design efficient 2Byte CAM. The result shows that it is dissipating 50% less power than the conventional CAM Design.","PeriodicalId":248433,"journal":{"name":"2017 IEEE 7th International Advance Computing Conference (IACC)","volume":"0905 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125635742","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Nowadays, speech recognition has become a prominent and challenging research domain because of its wide usage. The factors affecting speech recognition include vocalization, pitch, tone, noise, pronunciation, frequency, finding where a phoneme starts and stops, loudness, speed, and accent. Research is ongoing to enhance the efficacy of speech recognition. Speech recognition requires efficient models, algorithms, and programming frameworks to analyze large amounts of real-time data, and these algorithms and programming paradigms have to learn on their own to fit the model to massively evolving real-time data. The developments in parallel computing platforms open four major possibilities for speech recognition systems: improving recognition accuracy, increasing recognition throughput, reducing recognition latency, and reducing the recognition training period.
{"title":"Continuous Automatic Speech Recognition System Using MapReduce Framework","authors":"M. Vikram, N. Reddy, K. Madhavi","doi":"10.1109/IACC.2017.0031","DOIUrl":"https://doi.org/10.1109/IACC.2017.0031","url":null,"abstract":"Now-a-days, Speech Recognition had become a prominent and challenging research domain because of its vast usage. The factors affecting Speech Recognition are Vocalization, Pitch, Tone, Noise, Pronunciation, Frequency, finding where the phoneme starts and stops, Loudness, Speed, Accent and so on. Research is going on to enhance the efficacy of Speech Recognition. Speech Recognition requires efficient models, algorithms and programming frameworks to analyze large amount of real-time data. These algorithms and programming paradigms have to learn knowledge on their own to fit in to the model for massively evolving data in real-time. The developments in parallel computing platforms opens four major possibilities for Speech Recognition systems: improving recognition accuracy, increasing recognition throughput, reducing recognition latency and reducing the recognition training period.","PeriodicalId":248433,"journal":{"name":"2017 IEEE 7th International Advance Computing Conference (IACC)","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123035698","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In comparative genomics, the study of genome rearrangement evolution is an important effort, and genome conversion via different sorting operations is the central problem in this field. Transforming one sequence into another by an optimal sequence of operations is a useful tool for analyzing real evolutionary scenarios, but it is much better to find all possible optimal solutions. To obtain more accurate results, several solutions should be taken into consideration, since there is a large number of different optimal sorting sequences. Reversal and translocation are the two common genome sorting operations observed in the evolution of mammalian species. The problem of genome sorting using reversal and translocation is to find the shortest sequence of operations that transforms a source genome A into a target genome B. Currently the question is addressed by reducing it to sorting by reversal and sorting by translocation separately, but here we apply both sorting operations together at the same time. In this paper we present an algorithm that explicitly treats the two sorting operations as distinct, and that finds multiple solutions, which is better both theoretically and in practice than finding a single one. With only a single solution we cannot decide whether it is the best one, but with several solutions we can indeed find the best among them. We also present an example showing that this approach is preferable to the previous one.
{"title":"Stack Solution for Finding Optimal One","authors":"P. Kumar, G. Sahoo","doi":"10.1109/IACC.2017.0159","DOIUrl":"https://doi.org/10.1109/IACC.2017.0159","url":null,"abstract":"In comparative genomics, genome rearrangement evolution is an important effort. Genome conversion is the major problem in this field using different sorting process. Transforming one sequence into another and finding an optimal solution is a useful tool for analyzing real evolutionary scenario but it will be much better if we find all possible solution for that. In order to obtain more accurate result, some solution should be taken into consideration as there is large number of different optimal sorting sequence. Reversal and translocation are the two common genome sorting process used in development of mammalian species. The problem of genome sorting using reversal and translocation is to find the shortest sequence that transforms any source genome A into some target genome B. Currently the question is resolved by lessening of sorting by reversal and sorting by translocation problem separately, but here we are applying both the sorting process together at the same time. By this paper we present an algorithm for the two sorting process that explicitly treats them as two distinct operations, along with that finding the various solutions which is a better hypothetical and real-world solution than just finding a solo one. If we have single solution for any problem then we cannot decide whether this solution is the perfect one or not but if we have more solution indeed we can find the best one among them and say this is the perfect solution. We also present an example which proves that this solution is more prominent than previous one.","PeriodicalId":248433,"journal":{"name":"2017 IEEE 7th International Advance Computing Conference (IACC)","volume":"18 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129132479","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Emerging sensor-based electronic gadgets seek high levels of energy conservation by adopting extreme low-power techniques in combination with traditional techniques. In this study, the authors examine memory units with data-retention capability in the energy-delay space for an emerging application, namely transiently powered systems, at three levels of power and performance optimization. The study presents a novel Dual Edge Triggered Flip-Flop (DETRFF) with a retention latch that is suitable for ultra-low-power applications with dynamic voltage switching between super- and sub-threshold levels. The DETRFF designs are simulated in 45 nm NCSU CMOS technology using Cadence. The proposed design excels in the EDP and leakage-energy metrics compared to existing DETFF designs.
{"title":"Novel Ultra Low Power Dual Edge Triggered Retention Flip-Flop for Transiently Powered Systems","authors":"Madhavi Dasari, R. Nikhil, A. Chavan","doi":"10.1109/IACC.2017.0109","DOIUrl":"https://doi.org/10.1109/IACC.2017.0109","url":null,"abstract":"Emerging sensor based electronic gadgets desire to seek high levels of energy conservation by adopting extreme low power techniques in combination with traditional techniques. In this study the authors examine memory units with data retention capability in the Energy-Delay space for an emerging application namely Transiently Powered System for three levels of power and performance optimization. The study presents a novel Dual Edge Triggered Flip-Flop (DETRFF) with retention latch that is suitable for ultra low power application with dynamic voltage switch between super and sub threshold levels. The DETRFF designs are simulated in 45nm NCSU CMOS technology using Cadence. The proposed design excels in the EDP and Leakage Energy metrics as compared to the existing DETFF designs.","PeriodicalId":248433,"journal":{"name":"2017 IEEE 7th International Advance Computing Conference (IACC)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115932470","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
ISRO Satellite Centre of the Indian Space Research Organization develops satellites for a variety of scientific applications such as communication, navigation, earth observation, and many more. These satellites consist of very complex, software-intensive systems that carry out advanced mission functions, so software is a critical constituent of mission success. For some geostationary missions the onboard software is already finalized, and changes are minimal for a new spacecraft. This model-based system for software change analysis for embedded systems deals with managing changes to existing software items and reconfiguring them in any part of the development life cycle. The model helps in handling change management, including maintenance of a component library, predicting the impacts of changes in reused modules, and analyzing the behavior of combinations of reused modules. The use of reusable software modules has augmented the development of embedded software for the GSAT series of satellites. The model mainly reduces the time taken during each phase of the software life cycle when changes must be implemented within a short span of time. This paper describes how complete traceability can be established between specifications and design requirements, model elements, and their realization.
{"title":"Model Based System for Software Change Analysis for Embedded Systems on Spacecraft","authors":"A. Savitha, Rajiv R. Chetwani, Y. R. Bhanumathy, M. Ravindra","doi":"10.1109/IACC.2017.0093","DOIUrl":"https://doi.org/10.1109/IACC.2017.0093","url":null,"abstract":"ISRO Satellite Centre of the Indian Space Research Organization develops satellites for variety of scientific applications like communication, navigation, earth observation and many more. These satellites consist of very complex intensive systems which carry out advanced mission functions. Hence software plays a critical constituent for mission success. Some of the geostationary missions onboard software is finalized, changes are minimal for the new spacecraft. This model based system for software change analysis for embedded systems deals with managing changes to existing software items and re configuring in any part of development life cycle. This model helps in handing change management, such as maintenance of a component library, predicting the impacts of changes in reused modules, analyzing the behavior of the combination of reused modules. The use of reusable software modules has augmented the development of embedded software for GSAT series of satellites. This model mainly reduces the time during each phase of software life cycle when the changes are to be implemented within short span of time. This paper describes how complete traceability can be established between specifications and design requirements, model elements and their realization.","PeriodicalId":248433,"journal":{"name":"2017 IEEE 7th International Advance Computing Conference (IACC)","volume":"73 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116323627","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Video quality assessment aims to compute a formal measure of perceived video degradation when video is passed through a video transmission/processing system. Most existing video quality measures extend image quality measures by applying them to each frame and later combining the per-frame quality values to get the quality of the entire video. When combining the quality values of frames, a simple average or, in very few metrics, a weighted average has traditionally been used. In this work, the saliency of a frame is used to compute the weight assigned to each frame when obtaining the quality value of the video. The goal of every objective quality metric is to correlate as closely as possible with perceived quality, and the objective of saliency is parallel to this, since saliency values should match human perception. Hence we have experimented with using saliency to obtain the final video quality. The idea is demonstrated using a number of state-of-the-art quality metrics on several benchmark datasets.
{"title":"Saliency Based Assessment of Videos from Frame-Wise Quality Measures","authors":"B. Roja, B. Sandhya","doi":"10.1109/IACC.2017.0135","DOIUrl":"https://doi.org/10.1109/IACC.2017.0135","url":null,"abstract":"Video quality assessment aims to compute the formalmeasure of perceived video degradation when video is passedthrough a video transmission/processing system. Most of theexisting video quality measures extend Image Quality Measuresby applying them on each frame and later combining the qualityvalues of each frame to get the quality of the entire video. Whencombining the quality values of frames, a simple average or invery few metrics, weighted average has been traditionally used. In this work, saliency of a frame has been used to compute theweight required for each frame to obtain the quality value ofvideo. The goal of every objective quality metric is to correlateas closely as possible to the perceived quality, and the objectiveof saliency is parallel to this as the saliency values should matchthe human perception. Hence we have experimented by usingsaliency to get the final video quality. The idea is demonstratedby using a number of state of art quality metrics on some of thebenchmark datasets.","PeriodicalId":248433,"journal":{"name":"2017 IEEE 7th International Advance Computing Conference (IACC)","volume":"86 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126137078","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}