Gesture control robot using accelerometer
Pub Date: 2017-09-01 | DOI: 10.1109/ISPCC.2017.8269666
Rashmi Vashisth, Akshit Sharma, S. Malhotra, Saurabh Deswal, Aman Budhraja
In this paper, we introduce a gesture-controlled robot built around a 3-axis accelerometer (ADXL335) and an ATmega16 microcontroller. Gesture recognition is a topic that falls under the purview of computer science, electronics and communication, and language technologies, and is concerned with interpreting human gestures by means of mathematical algorithms. Gestures can be interpreted from any kind of physical movement or state, but usually originate from a person. Gesture recognition can be described as a method by which a computer understands the language of the human body, thereby creating a richer communication bridge between humans and machines than text-based or terminal user interfaces, or even graphical user interfaces (GUIs), which still restrict most input to the mouse and keyboard.
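As a sketch of the control logic only (the paper's firmware is not reproduced here), the fragment below thresholds the X/Y tilt readings, assumed already scaled to g units from the ADC, into drive commands; the 0.3 g dead zone and the command names are illustrative assumptions.

```python
# Illustrative sketch: map 3-axis accelerometer tilt to drive commands.
# The dead-zone value and command names are assumptions, not the
# authors' firmware.

def classify_gesture(x: float, y: float, dead_zone: float = 0.3) -> str:
    """Map hand tilt on the X/Y axes (in g) to a robot drive command."""
    if abs(x) < dead_zone and abs(y) < dead_zone:
        return "STOP"                        # hand held roughly level
    if abs(y) >= abs(x):                     # forward/backward tilt dominates
        return "FORWARD" if y > 0 else "BACKWARD"
    return "RIGHT" if x > 0 else "LEFT"      # sideways tilt dominates
```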
Lane line detection in real time based on morphological operations for driver assistance system
Pub Date: 2017-09-01 | DOI: 10.1109/ISPCC.2017.8269696
M. Kodeeswari, Philemon Daniel
The objective of this paper is to use image processing techniques to identify lane lines on hilly roads based on the Hough transform. A vision-based approach is used because it performs well in a wide variety of situations, extracting a richer set of information than other sensors. The proposed method processes the live video stream from a monocular camera in MATLAB, extracts the position of the lane markings, and applies an algorithm to find the lane lines present on the road.
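As a rough illustration of such a pipeline, the sketch below uses Python/OpenCV rather than the paper's MATLAB implementation; the Canny and Hough parameter values are assumptions, not the authors' settings.

```python
# Hypothetical Python/OpenCV analogue of the lane-detection pipeline
# described above (the paper uses MATLAB); parameter values are assumed.
import cv2
import numpy as np

def detect_lane_lines(frame: np.ndarray) -> np.ndarray:
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)
    edges = cv2.Canny(blurred, 50, 150)            # edge map for Hough voting
    lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=40,
                            minLineLength=30, maxLineGap=20)
    if lines is not None:
        for x1, y1, x2, y2 in lines[:, 0]:         # overlay detected segments
            cv2.line(frame, (x1, y1), (x2, y2), (0, 255, 0), 2)
    return frame
```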
FPGA implementation of unsigned multiplier circuit based on quaternary signed digit number system
Pub Date: 2017-09-01 | DOI: 10.1109/ISPCC.2017.8269755
Radhika Thakur, Shruti Jain, M. Sood
Digital systems are mainly used in data processing, control systems, and computation. They have a number of advantages over analog systems, one of which is fast arithmetic operation. There are different techniques for performing arithmetic operations, such as Binary Signed Digit (BSD), Wallace, and Booth multiplication. Using the binary number system for arithmetic generates carries, which introduce delay and reduce the speed of operation. To overcome this problem we use a higher-radix number system, namely Quaternary Signed Digit (QSD). The QSD number system is a base-4 number system whose digits are represented by the decimal numbers 0, 1, 2, and 3, and it enables carry-free arithmetic operations. In this paper we propose a high-speed, low-power QSD multiplier capable of carry-free operation. The circuit can multiply both signed and unsigned numbers without any extra delay; it also increases the speed of operation and is less complex. The circuit is simulated on a Xilinx Spartan-3E (100 or 250) field-programmable gate array (FPGA) board using Verilog HDL.
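To make the carry-free property concrete, here is a minimal sketch of QSD addition, the primitive a QSD multiplier is built from, in Python. Digit lists are least-significant-digit first, digits range over -3..3, and the two-step decomposition rule is the standard textbook one, not necessarily the authors' circuit.

```python
# Minimal QSD addition sketch (digits -3..3, least significant first).
# Step 1 rewrites each digit sum as 4*carry + interim with the interim
# confined to -2..2; step 2 then absorbs the incoming carry without
# producing a new one, so carries never propagate more than one place.

def qsd_add(a: list[int], b: list[int]) -> list[int]:
    n = max(len(a), len(b))
    a = a + [0] * (n - len(a))
    b = b + [0] * (n - len(b))
    carries, interims = [0], []          # carries[i] flows into digit i
    for ai, bi in zip(a, b):
        s = ai + bi                      # ranges over -6..6
        if s >= 3:
            c, d = 1, s - 4
        elif s <= -3:
            c, d = -1, s + 4
        else:
            c, d = 0, s
        carries.append(c)
        interims.append(d)
    result = [d + c for d, c in zip(interims, carries[:n])]
    if carries[n]:
        result.append(carries[n])
    return result
```

For example, qsd_add([3, 2], [3, 1]) adds 11 and 7 (base 4, LSB first) and returns [2, 0, 1], i.e. 2 + 0*4 + 1*16 = 18, with no carry chain longer than one digit position.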
Polynomial based fractal image compression using DWT screening
Pub Date: 2017-09-01 | DOI: 10.1109/ISPCC.2017.8269740
P. Chauhan, Bhumika Gupta, Upendra Ballabh
Image compression is a fundamental technology in the multimedia and digital communication fields. Fractal image compression is a promising image compression scheme because of its potentially high compression ratio, fast decompression, and multi-resolution properties. Fractal image compression exploits the self-similarity present in images. It is an asymmetric method that takes more time to compress than to decompress an image; the idea is to do most of the work during compression. However, the high computational complexity of fractal image encoding greatly limits its applications. Several techniques and enhancements have been proposed to accelerate fractal image compression based on polynomial interpolation. This paper presents a review of methods such as DWT and CLAHE published for faster fractal image compression using polynomial interpolation with pre-packing. Preliminary results show a clear improvement in compression ratio, mean square error, and peak signal-to-noise ratio (PSNR).
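As a sketch of what a DWT screening step can look like, the fragment below computes a single-level 2-D Haar DWT with the pywt package and flags low-detail blocks that a fractal encoder could treat cheaply; the energy threshold is an illustrative assumption, not the paper's criterion.

```python
# Illustrative DWT screening sketch: flag half-resolution blocks whose
# detail coefficients are negligible. The threshold is an assumption.
import numpy as np
import pywt

def low_detail_mask(image: np.ndarray, threshold: float = 1.0) -> np.ndarray:
    """Boolean mask of blocks with little high-frequency detail."""
    _, (ch, cv, cd) = pywt.dwt2(image.astype(float), "haar")
    detail_energy = ch ** 2 + cv ** 2 + cd ** 2   # combine the 3 detail bands
    return detail_energy < threshold
```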
Comparative analysis between SVM & KNN classifier for EMG signal classification on elementary time domain features
Pub Date: 2017-09-01 | DOI: 10.1109/ISPCC.2017.8269670
Yogesh Paul, Vibha Goyal, R. Jaswal
Feature extraction is an important method for extracting the useful information hidden in acquired signals of different types. These signals may be speech, EEG, EMG, ECG, EOG, etc. In this paper we work with the EMG signal and present a comparative analysis between a linear SVM and a KNN classifier using time-domain features. Successful classification of an EMG signal requires careful feature selection. Seven elementary time-domain features are used here, as they are frequently employed for this purpose.
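The abstract does not name its seven features, so the set below (mean absolute value, RMS, variance, waveform length, zero crossings, slope sign changes, Willison amplitude) is an assumed but typical choice; the comparison itself mirrors the linear-SVM-versus-KNN setup using scikit-learn.

```python
# Hedged sketch: seven common EMG time-domain features (an assumed set)
# and a linear-SVM vs. KNN comparison with scikit-learn.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

def time_domain_features(window: np.ndarray, eps: float = 0.01) -> np.ndarray:
    diff = np.diff(window)
    sign = np.signbit(window).astype(int)
    dsign = np.signbit(diff).astype(int)
    return np.array([
        np.mean(np.abs(window)),               # mean absolute value (MAV)
        np.sqrt(np.mean(window ** 2)),         # root mean square (RMS)
        np.var(window),                        # variance
        np.sum(np.abs(diff)),                  # waveform length
        np.count_nonzero(np.diff(sign)),       # zero crossings
        np.count_nonzero(np.diff(dsign)),      # slope sign changes
        np.count_nonzero(np.abs(diff) > eps),  # Willison amplitude
    ])

def compare_classifiers(X: np.ndarray, y: np.ndarray) -> dict:
    """Mean 5-fold accuracy of each classifier on feature matrix X."""
    return {
        "linear_svm": cross_val_score(SVC(kernel="linear"), X, y, cv=5).mean(),
        "knn": cross_val_score(KNeighborsClassifier(5), X, y, cv=5).mean(),
    }
```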
A survey on cost aware task allocation algorithm for cloud environment
Pub Date: 2017-09-01 | DOI: 10.1109/ISPCC.2017.8269756
Manish Gupta, Anurag Jain
Cloud computing is a reliable computing platform for large computation-intensive or data-intensive tasks. It has been accepted by many giants of the software industry for their software solutions; companies like Microsoft, Accenture, and Ericsson have adopted cloud computing as their first choice for cheap and reliable computing. But with the increase in the number of clients adopting it, there is a need for much more cost-efficient and high-performance computing, so as to build trust and reliability between the client and the service provider and to guarantee cheap and more efficient solutions. Tasks in the cloud therefore need to be allocated in an efficient manner that provides high resource utilization and the least execution time for high performance, while at the same time incurring the least computational cost, since the cloud follows a pay-per-use model. Many resource-allocation algorithms have been proposed to improve performance, but they are not cost-efficient at the same time. Algorithms such as the genetic, particle swarm, and ant colony algorithms are efficient solutions but not cost-efficient. This paper therefore presents a study of various existing algorithms.
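For flavor only, the sketch below shows a naive greedy heuristic for the cost/makespan trade-off described above; it is none of the surveyed algorithms, and the VM model and weighting are assumptions.

```python
# Illustrative greedy allocator balancing finish time against cost.
# The alpha weight and the VM dict shape are assumptions for this sketch.

def allocate(tasks: list[float], vms: list[dict], alpha: float = 0.5) -> list[int]:
    """tasks: task lengths (e.g. MI); vms: dicts with 'mips' (speed) and
    'price' (cost per second). Returns a VM index per task, longest first."""
    finish = [0.0] * len(vms)                    # current finish time per VM
    plan = []
    for length in sorted(tasks, reverse=True):   # place longest tasks first
        def score(i: int) -> float:
            runtime = length / vms[i]["mips"]
            cost = runtime * vms[i]["price"]
            return alpha * (finish[i] + runtime) + (1 - alpha) * cost
        best = min(range(len(vms)), key=score)
        finish[best] += length / vms[best]["mips"]
        plan.append(best)
    return plan
```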
Area and power efficient register allocation technique for the implementation of PCA
Pub Date: 2017-09-01 | DOI: 10.1109/ISPCC.2017.8269684
Sukhmani K. Thethi, Ravi Kumar
This paper presents a novel register allocation technique, alongside the conventional technique, for the implementation of Principal Component Analysis (PCA) incorporating variable reuse. PCA deals with large-dimensional data and is a computationally intensive technique. The purpose of this paper is to avoid register switching and hence reduce dynamic power consumption as well as area during the implementation of PCA. The Verilog code written for both techniques was synthesized in the RC (Cadence) tool. For generic synthesis, a substantial decrease of 56.867% in power and 56.66% in area was observed; for mapped synthesis, a significant reduction of 86.145% in power and 74.79% in area was observed for the proposed technique in contrast to the conventional one.
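The abstract does not spell out the allocator itself, so the sketch below illustrates the general variable-reuse idea behind such techniques (variables with non-overlapping live ranges share one register) using a simple linear-scan pass; the input format is an assumption.

```python
# Generic variable-reuse sketch, not the paper's allocator: a linear
# scan over live ranges that recycles registers once their variable dies.

def assign_registers(live_ranges: dict) -> dict:
    """live_ranges: {var: (first_write, last_read)}. Returns {var: reg id}."""
    assignment, active, free = {}, [], []    # active holds (end, reg) pairs
    next_reg = 0
    for var, (start, end) in sorted(live_ranges.items(),
                                    key=lambda kv: kv[1][0]):
        for pair in [p for p in active if p[0] < start]:
            active.remove(pair)              # variable expired:
            free.append(pair[1])             # its register can be reused
        if free:
            reg = free.pop()
        else:
            reg = next_reg                   # nothing reusable: new register
            next_reg += 1
        active.append((end, reg))
        assignment[var] = reg
    return assignment
```

For instance, assign_registers({"a": (0, 3), "b": (1, 2), "c": (4, 6)}) places c in a recycled register, so only two registers are needed for three variables.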
Towards an analysis for quality assessment of semantic web based applications and SaaS
Pub Date: 2017-09-01 | DOI: 10.1109/ISPCC.2017.8269697
Naveen Malik, Vinisha Malik, Sandip Goel
Semantic Web based applications have become very popular these days owing to their wide use in social networks, e-learning, multimedia processing, and the health care industry, besides their role in information retrieval. Semantic Web based applications are characterized by machine comprehensibility of content, sharing and reuse among heterogeneous applications, the modular structure of their domain vocabulary, and their availability as a service. Their benefits and vast usage motivate assessing their quality with respect to divergent aspects such as ontology and a few other quality attributes. This paper undertakes a rigorous review of the state of the art in this direction. Quality assessment of Semantic Web based applications is examined with a focus on the process, contributions, and limitations of each work, besides the research gaps in this direction.
Identifying big data dimensions and structure
Pub Date: 2017-09-01 | DOI: 10.1109/ISPCC.2017.8269669
Meenu Dave, Jahangir Kamal
As Big Data gains recognition, not everything that is stored electronically in bulk can be termed Big Data. Nowadays efforts are being made to extract the maximum useful information from analyzing Big Data, as it holds growing value for the organization, and actionable relationships are found far more abundantly in Big Data stores than in small ones. Big Data from various organizations and industries is recognized on the basis of certain characteristics (dimensions) and structure. The characteristics of Big Data started with the 3Vs (Volume, Velocity, and Variety), but new dimensions are evolving day by day, thereby broadening the dimensions and the definition of Big Data. In this paper, the growing characteristics and structure of Big Data, along with new definitions from academia and the corporate world, are elaborated.
Combinational feature approach: Performance improvement for image processing based leaf disease classification
Pub Date: 2017-09-01 | DOI: 10.1109/ISPCC.2017.8269743
M. Goswami, S. Maheshwari, Amarjeet Poonia
Plant disease is a main reason for losses in agricultural crop production. Leaf disease in plants occurs due to fungi, viruses, and bacteria. An image contains various important features that can be used in classification. In this paper the authors first detect disease and then classify it using extracted features. Five diseased-leaf classes (black rot, black measles, leaf blight, Septoria leaf spot, bacterial spot) and healthy leaf images are taken; each leaf is first identified as diseased or healthy, and if diseased, the type of disease is classified. The color features mean, standard deviation, skewness, and kurtosis are computed, and then a region-based shape feature is calculated to identify the size of the spots. Texture features are calculated using the gray-level co-occurrence matrix (GLCM), which characterizes image texture using distance and 45° angle variation in the GLCM. The extracted features are fed to a trained feed-forward neural network, and disease is classified with the color, shape, and texture features individually and with all features combined; the combination of color, shape, and texture features is observed to improve classification accuracy.
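A hedged sketch of the color and texture feature steps follows, using scikit-image for the GLCM at distance 1 and a 45° offset; the specific GLCM properties extracted are assumptions, since the abstract does not list them.

```python
# Illustrative feature extraction: four color statistics per channel and
# GLCM texture properties at distance 1, angle 45 degrees. The property
# list is an assumed choice, not taken from the paper.
import numpy as np
from scipy.stats import kurtosis, skew
from skimage.feature import graycomatrix, graycoprops

def color_stats(channel: np.ndarray) -> list[float]:
    flat = channel.ravel().astype(float)
    return [flat.mean(), flat.std(), skew(flat), kurtosis(flat)]

def glcm_features(gray: np.ndarray) -> list[float]:
    """gray: 2-D uint8 image. Texture properties from a 45-degree GLCM."""
    glcm = graycomatrix(gray, distances=[1], angles=[np.pi / 4],
                        levels=256, symmetric=True, normed=True)
    return [float(graycoprops(glcm, prop)[0, 0])
            for prop in ("contrast", "homogeneity", "energy", "correlation")]
```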