Acoustic Scene Classification in Hearing aid using Deep Learning
Pub Date: 2020-07-01 | DOI: 10.1109/ICCSP48568.2020.9182160
VS Vivek, S. Vidhya, P. MadhanMohan
Different audio environments require different hearing aid settings to deliver high-quality speech, and manual tuning of these settings can be irritating for the user. Hearing aids can therefore be provided with options and settings that are switched automatically according to the audio environment. In this paper we present a simple sound classification system that can be used to switch automatically between hearing aid algorithms based on the acoustic scene. Features such as MFCC, Mel-spectrogram, chroma, spectral contrast and Tonnetz are extracted from several hours of audio covering five classes, namely “music,” “noise,” “speech with noise,” “silence,” and “clean speech,” for training and testing the network. These features are then classified by a convolutional neural network. We show that the system achieves high precision with only three to five seconds of audio per scene. The algorithm is efficient, has a small memory footprint, and can be implemented in a digital hearing aid.
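As a rough illustration of the feature front end listed in the abstract, the Python sketch below (using librosa) extracts MFCC, Mel-spectrogram, chroma, spectral contrast and Tonnetz statistics from a short clip. The sample rate, clip length and the pooling into one vector are assumptions, not the authors' settings, and the CNN itself is omitted.

```python
# Minimal sketch of a librosa-based feature front end for 3-5 s acoustic scenes.
# Frame sizes, pooling and feature dimensions are assumptions, not the paper's values.
import numpy as np
import librosa

CLASSES = ["music", "noise", "speech with noise", "silence", "clean speech"]

def extract_features(path, sr=16000, duration=4.0):
    """Return one feature vector per clip: time-averaged MFCC, mel-spectrogram,
    chroma, spectral contrast and tonnetz (40+128+12+7+6 = 193 values)."""
    y, sr = librosa.load(path, sr=sr, duration=duration)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=40)
    mel = librosa.feature.melspectrogram(y=y, sr=sr)
    chroma = librosa.feature.chroma_stft(y=y, sr=sr)
    contrast = librosa.feature.spectral_contrast(y=y, sr=sr)
    tonnetz = librosa.feature.tonnetz(y=librosa.effects.harmonic(y), sr=sr)
    return np.concatenate([f.mean(axis=1) for f in (mfcc, mel, chroma, contrast, tonnetz)])
```

A vector produced this way (or the unpooled time-frequency maps) would then be fed to a small CNN with a five-way softmax over CLASSES.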
{"title":"Acoustic Scene Classification in Hearing aid using Deep Learning","authors":"VS Vivek, S. Vidhya, P. MadhanMohan","doi":"10.1109/ICCSP48568.2020.9182160","DOIUrl":"https://doi.org/10.1109/ICCSP48568.2020.9182160","url":null,"abstract":"Different audio environments require different settings in hearing aid to acquire high-quality speech. Manual tuning of hearing aid settings can be irritating. Thus, hearing aids can be provided with options and settings that can be tuned based on the audio environment. In this paper we provide a simple sound classification system that could be used to automatically switch between various hearing aid algorithms based on the auditory related scene. Features like MFCC, Mel-spectrogram, Chroma, Spectral contrast and Tonnetz are extracted from several hours of audio from five classes like “music,” “noise,” “speech with noise,” “silence,” and “clean speech” for training and testing the network. Using these features audio is processed by the convolution neural network. We show that our system accomplishes high precision with just three to five second duration per scene. The algorithm is efficient and consumes less memory footprint. It is possible to implement the system in digital hearing aid.","PeriodicalId":321133,"journal":{"name":"2020 International Conference on Communication and Signal Processing (ICCSP)","volume":"98 ","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"113993792","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
An Efficient De-blocking Filter for Quality Improvement of Medical Image Analysis
Pub Date: 2020-07-01 | DOI: 10.1109/ICCSP48568.2020.9182252
K. Sankaran, P. Srikanth, S. Basha, Y. Subbarayudu
High-Efficiency Video Coding (HEVC) is a video coding standard that achieves better compression of video frames than earlier standards such as H.264/AVC and is used mainly to reduce the bit rate. The proposed system works on medical images: in the image processing stage, the de-blocking filter is one of the filters used to reduce noise when decoding the compressed video, while the Sample Adaptive Offset (SAO) filter reduces the distortion of the samples in the image. By using these two filters, the video is obtained with better quality.
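The abstract gives no implementation detail, so the sketch below only illustrates the general idea of de-blocking, smoothing pixels on either side of 8x8 block boundaries of a grayscale frame. The real HEVC in-loop de-blocking and SAO filters follow the standard's boundary-strength and offset rules, which are not reproduced here.

```python
# Illustrative-only de-blocking: blend pixels across 8x8 block edges.
# Not the HEVC in-loop filter; boundary-strength decisions and SAO are omitted.
import numpy as np

def simple_deblock(frame, block=8, strength=0.5):
    """frame: 2-D uint8 grayscale image; returns a lightly smoothed copy."""
    out = frame.astype(np.float32).copy()
    h, w = out.shape
    for x in range(block, w, block):              # vertical block edges
        left, right = out[:, x - 1].copy(), out[:, x].copy()
        avg = 0.5 * (left + right)
        out[:, x - 1] = (1 - strength) * left + strength * avg
        out[:, x] = (1 - strength) * right + strength * avg
    for y in range(block, h, block):              # horizontal block edges
        top, bot = out[y - 1, :].copy(), out[y, :].copy()
        avg = 0.5 * (top + bot)
        out[y - 1, :] = (1 - strength) * top + strength * avg
        out[y, :] = (1 - strength) * bot + strength * avg
    return np.clip(out, 0, 255).astype(np.uint8)
```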
{"title":"An Efficient De-blocking Filter for Quality Improvement of Medical Image Analysis","authors":"K. Sankaran, P. Srikanth, S. Basha, Y. Subbarayudu","doi":"10.1109/ICCSP48568.2020.9182252","DOIUrl":"https://doi.org/10.1109/ICCSP48568.2020.9182252","url":null,"abstract":"High-Efficiency Video Coding (HEVC) is a video coding method which is used for the better accuracy among the other standards based on the compression of the video frames such as H.264/AVC. The HEVC is mainly used for the reduction of the bit rate. The proposed system works under the medical images, in image processing stages the de-blocking filter, is one of the filter used to reduce the noise by decoding the compressed video and Sample Adaptive Offset (SAO) filter decreases the distortion of the sample in the image. By using these two methods the video is obtained in better quality.","PeriodicalId":321133,"journal":{"name":"2020 International Conference on Communication and Signal Processing (ICCSP)","volume":"126 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122489747","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
EXTRA: An Extended Radial Mean Response Pattern for Hand Gesture Recognition
Pub Date: 2020-07-01 | DOI: 10.1109/ICCSP48568.2020.9182207
Gopa Bhaumik, Monu Verma, M. C. Govil, S. Vipparthi
Hand gesture recognition (HGR) has gained significant attention in recent years due to its wide applicability and its ability to support efficient interaction with machines. Hand gestures also provide a means of communication for hearing-impaired persons. HGR is a challenging task because its performance is influenced by factors such as illumination variations, cluttered backgrounds, spontaneous capture and multiple viewpoints. To address these issues, in this paper we propose an extended radial mean response (EXTRA) pattern for hand gesture recognition. The EXTRA pattern encodes intensity variations by establishing a reconciled relationship between local neighboring pixels located on two radii r1 and r2. The gradient information between the radii preserves transitional texture, which improves robustness to illumination changes. Moreover, because the EXTRA pattern captures extensive radial information, it preserves both high-level and micro-level edge variations, separating the hand posture texture from the cluttered background. Furthermore, the mean-response relationship between adjacent radial pixels improves robustness to noise. The proposed technique is evaluated on three standard datasets, namely the NUS hand posture dataset-I, MUGD and the Finger Spelling dataset. The experimental results and visual representations show that the proposed technique outperforms existing algorithms for the intended purpose.
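The exact EXTRA encoding is the authors' own; the sketch below only illustrates the underlying idea of comparing mean responses sampled on two radii r1 and r2 against the center pixel and against each other. The neighborhood size and the 3-bit code are assumptions made for illustration.

```python
# Illustrative two-radius mean-response code in the spirit of the abstract.
# Not the authors' EXTRA definition; sampling and bit layout are assumptions.
import numpy as np

def radial_mean(img, y, x, r, n=8):
    """Mean intensity of n samples on a circle of radius r around (y, x)."""
    angles = 2 * np.pi * np.arange(n) / n
    ys = np.clip(np.round(y + r * np.sin(angles)).astype(int), 0, img.shape[0] - 1)
    xs = np.clip(np.round(x + r * np.cos(angles)).astype(int), 0, img.shape[1] - 1)
    return img[ys, xs].mean()

def extra_like_code(img, y, x, r1=1, r2=2):
    """3-bit code from the signs of (inner mean - center), (outer mean - center)
    and the inter-radial gradient (outer mean - inner mean)."""
    c = float(img[y, x])
    m1 = radial_mean(img, y, x, r1)
    m2 = radial_mean(img, y, x, r2)
    bits = [int(m1 >= c), int(m2 >= c), int(m2 >= m1)]
    return sum(b << i for i, b in enumerate(bits))
```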
{"title":"EXTRA: An Extended Radial Mean Response Pattern for Hand Gesture Recognition","authors":"Gopa Bhaumik, Monu Verma, M. C. Govil, S. Vipparthi","doi":"10.1109/ICCSP48568.2020.9182207","DOIUrl":"https://doi.org/10.1109/ICCSP48568.2020.9182207","url":null,"abstract":"Hand gesture recognition (HGR) has gained significant attention in recent year due to its varied applicability and ability to interact with machines efficiently. Hand gestures provide a way of communication for hearing-impaired persons. The HGR is a quite challenging task as its performance is influenced by various aspects such as illumination variations, cluttered backgrounds, spontaneous capture, multi-view etc. Thus, to resolve these issues in this paper, we propose an extended radial mean response (EXTRA) pattern for hand gesture recognition. The EXTRA pattern encodes the intensity variations by establishing a reconciled relationship between local neighboring pixels located at two radials r1 and r2. The gradient information between radials preserves the transitional texture that enhances the robustness to deal with illuminations changes. Moreover, the EXTRA pattern holds extensive radial information, thus it can conserve both high level and micro level edge variations that filter hand posture texture from the cluttered background. Furthermore, the mean responsive relationship between adjacency radial pixels improves robustness to noise conditions. The proposed technique is evaluated on three standard datasets viz NUS hand posture dataset-I, MUGD and Finger Spelling dataset. The experimental results and visual representations show that the proposed technique performs better than the existing algorithms for the purpose intended.","PeriodicalId":321133,"journal":{"name":"2020 International Conference on Communication and Signal Processing (ICCSP)","volume":"19 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122515454","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Experimental Study of Fog Effect on Wireless Optical Communication Channel for Visible Wavelengths
Pub Date: 2020-07-01 | DOI: 10.1109/ICCSP48568.2020.9182431
Deepika Verma, S. Prince
Over the last two decades, a great deal of research has addressed the selection of the right source wavelength, from the visible or near-infrared region of the spectrum, for optical wireless communication under different channel states. In our work, light sources at wavelengths of 450 nm, 532 nm and 635 nm from the visible spectrum are chosen and investigated. The effect of fog on these wavelengths is analyzed using an experimental setup in terms of visibility range and attenuation coefficient. The obtained results show that 532 nm provides a lower attenuation coefficient and greater visibility than the other two wavelengths. The calculated values are validated against the Ijaz fog model.
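As background to the attenuation-versus-visibility analysis, the sketch below computes fog attenuation from visibility with the widely used Kim model, beta = (3.91/V)(lambda/550 nm)^(-q). The Ijaz model referenced in the abstract refines the exponent q for laboratory fog and smoke; its exact coefficients are not reproduced here, and the 0.8 km visibility is purely illustrative.

```python
# Hedged sketch: specific attenuation vs. visibility using the Kim model.
# The paper validates against the Ijaz model, whose q-law differs; not shown here.
def kim_q(visibility_km):
    """Kim model size-distribution exponent q as a function of visibility."""
    v = visibility_km
    if v > 50:
        return 1.6
    if v > 6:
        return 1.3
    if v > 1:
        return 0.16 * v + 0.34
    if v > 0.5:
        return v - 0.5
    return 0.0

def attenuation_db_per_km(wavelength_nm, visibility_km):
    """Commonly quoted FSO form: beta = (3.91 / V) * (lambda / 550)^(-q)."""
    q = kim_q(visibility_km)
    return (3.91 / visibility_km) * (wavelength_nm / 550.0) ** (-q)

for lam in (450, 532, 635):   # wavelengths studied in the paper
    print(lam, "nm:", round(attenuation_db_per_km(lam, 0.8), 2), "dB/km")
```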
{"title":"Experimental Study of Fog Effect on Wireless Optical Communication Channel for Visible Wavelengths","authors":"Deepika Verma, S. Prince","doi":"10.1109/ICCSP48568.2020.9182431","DOIUrl":"https://doi.org/10.1109/ICCSP48568.2020.9182431","url":null,"abstract":"Over the last two decades lot of research is going on in context of selection of right wavelength light source from visible or near infrared region of the spectrum for optical wireless communication under different channel state. In our work, the light sources of wavelength 450nm, 532nm and 635nm from visible spectrum are chosen and investigated. The effect of fog on them using experimental setup in terms of visibility range and attenuation coefficient are analyzed. Obtained results show that 532nm provides the less attenuation coefficient value and more visibility as compared to other two wavelengths. The calculated values are validated with Ijaz fog model","PeriodicalId":321133,"journal":{"name":"2020 International Conference on Communication and Signal Processing (ICCSP)","volume":"6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127956767","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Multi-Model Fake News Detection based on Concatenation of Visual Latent Features
Pub Date: 2020-07-01 | DOI: 10.1109/ICCSP48568.2020.9182398
Vidhu Tanwar, K. Sharma
Online social media as a channel for news consumption is a double-edged sword. On the positive side, it offers easy access, negligible cost, smart categorization and the ability to reach individual consumers within seconds. On the other side, a series of issues arises that needs immediate attention, the most important being the spread of fake news. This has become a serious threat to governments seeking to preserve social harmony, maintain public faith in democracy and justice, and sustain public trust. Fake news detection, especially on social media platforms, has therefore become an emerging research topic attracting tremendous attention. Current detection algorithms, in particular, fail to learn a shared representation of combined textual and visual (multimodal) information. We therefore present a variational autoencoder based framework consisting of three major components: an encoder, a decoder and a fake news detector. It concatenates visual latent features from three popular CNN architectures (VGG19, ResNet50, InceptionV3) with textual information and detects fake news with a binary classifier. We conducted experiments on a publicly available Twitter dataset. The experimental results show that our model improves on the state-of-the-art method by a margin of approximately 2% in accuracy and approximately 3% in F1-score.
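A minimal PyTorch sketch of the fusion step described above follows: precomputed penultimate-layer features from the three backbones are concatenated with a text embedding and passed to a binary classifier. The feature dimensions, hidden size and text encoder are assumptions, and the paper's variational encoder/decoder components are omitted.

```python
# Hedged sketch of multimodal feature fusion for fake-news classification.
# Feature sizes (4096/2048/2048/768) and the MLP head are assumptions; the
# paper's VAE encoder/decoder and training objective are not reproduced.
import torch
import torch.nn as nn

class FusionClassifier(nn.Module):
    def __init__(self, vgg_dim=4096, res_dim=2048, inc_dim=2048, text_dim=768, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(vgg_dim + res_dim + inc_dim + text_dim, hidden),
            nn.ReLU(),
            nn.Dropout(0.3),
            nn.Linear(hidden, 1),   # single logit: fake vs. real
        )

    def forward(self, vgg_feat, res_feat, inc_feat, text_feat):
        x = torch.cat([vgg_feat, res_feat, inc_feat, text_feat], dim=1)
        return self.net(x)

# Usage sketch:
#   logits = model(vgg_f, res_f, inc_f, txt_f).squeeze(1)
#   loss = nn.BCEWithLogitsLoss()(logits, labels.float())
```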
{"title":"Multi-Model Fake News Detection based on Concatenation of Visual Latent Features","authors":"Vidhu Tanwar, K. Sharma","doi":"10.1109/ICCSP48568.2020.9182398","DOIUrl":"https://doi.org/10.1109/ICCSP48568.2020.9182398","url":null,"abstract":"Online Social media for news consumption is a double-edged sword. If we ponder on the positives outcomes for this, it includes easy access, negligible cost, smart categorization and out reach to the very customer in seconds. But, as every coin has two sides and when we flip side of this, a series of issues come up which need immediate attention and most important among them is spreading of fake news. This has become a serious threat for the governments of countries to keep their harmony intact, keep faith of public in democracy and justice and sustenance of public trust. Therefore fake news detection, especially in social media platform has become an emerging research topic that is attracting tremendous attention. Current set of detection algorithms are specially showing their inability to learn the shared representation of texts and visuals combined (popularly known as multimodal) information. Therefore, we present a variational auto encoder based framework, which consists of three major components encoder, decoder and fake news detector. It utilize the concatenation of visual latent features from three popular CNN architecture (VGG19, ResNet50, InceptionV3) combined with textual information to detect fake news with the help of binary classifier. We conducted the experiment on publically available Twitter dataset. The experimental result shows that out model improves state of the art method by the margin of $sim$2% in accuracy and $sim$3% in F1-score.","PeriodicalId":321133,"journal":{"name":"2020 International Conference on Communication and Signal Processing (ICCSP)","volume":"74 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128582107","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Enhanced Storage Management Optimization in IaaS Cloud Environment
Pub Date: 2020-07-01 | DOI: 10.1109/ICCSP48568.2020.9182114
A. Devarajan, T. Sudalaimuthu, K. Sankaran
Cloud computing is an unavoidable and significant development, and Infrastructure as a Service (IaaS) is one of its progressive offerings. Storage demand grows day by day because of the increasing volume of data distributed and stored through IaaS services. Given the many benefits of the cloud, such as scalability, accessibility and cost savings, almost every industry is interested in moving its data to cloud storage. With IaaS services, the biggest challenges relate to data storage management and to distribution across numerous customers; these also affect performance, user experience and bandwidth utilization. In this paper, the proposed Storage Management Optimization (SMO) eliminates duplicate data to save storage space and improve bandwidth utilization with respect to the storage speed of the network. Well-structured metadata is used to identify duplication among the corresponding data elements. Evaluation of a metadata prototype helps to analyze users' file access patterns and to predict future accesses through a frequent-accessibility ranking system. The SMO system generates a dashboard containing details of application data files and their access history. Implementation of the proposed SMO system on a simulation platform shows space optimization of up to 11.85% over the baseline system, while bandwidth utilization with respect to accessibility improves by almost 84%.
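A minimal sketch of the de-duplication idea behind SMO is given below: stored chunks are indexed by content hash in a metadata table so identical data is written only once, and an access counter supports the frequent-accessibility ranking. The block size, hash choice and metadata layout are assumptions, not the paper's design.

```python
# Hedged sketch of hash-based de-duplicated storage with simple access metadata.
import hashlib
from collections import defaultdict

class DedupStore:
    def __init__(self, block_size=4096):
        self.block_size = block_size
        self.blocks = {}                      # sha256 -> chunk bytes (stored once)
        self.file_index = {}                  # file name -> ordered list of hashes
        self.access_count = defaultdict(int)  # metadata for access-pattern ranking

    def put(self, name, data):
        hashes = []
        for i in range(0, len(data), self.block_size):
            chunk = data[i:i + self.block_size]
            h = hashlib.sha256(chunk).hexdigest()
            self.blocks.setdefault(h, chunk)  # duplicate chunks are not re-stored
            hashes.append(h)
        self.file_index[name] = hashes

    def get(self, name):
        self.access_count[name] += 1          # feeds the accessibility ranking
        return b"".join(self.blocks[h] for h in self.file_index[name])
```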
{"title":"Enhanced Storage Management Optimization in IaaS Cloud Environment","authors":"A. Devarajan, T. Sudalaimuthu, K. Sankaran","doi":"10.1109/ICCSP48568.2020.9182114","DOIUrl":"https://doi.org/10.1109/ICCSP48568.2020.9182114","url":null,"abstract":"Cloud computing is unavoidable significant development that utilizes progressive related to IaaS. The storage is increasing day by day due to upgrades in data distribution and data storing in IaaS services. Having lot of benefit of cloud such as scalability, accessibility, cost saving, almost all industry is interested in shifting their data to cloud storage. With this IaaS services, it is essential to know the biggest challenge related to the data storage management capabilities and also distribution across numerous customer. This also has impact on performance and user experience related to the bandwidth utilization. In this paper the proposed Storage Management Optimization (SMO) eliminates duplicate data to save storage space and increase bandwidth utilization with respect to storage speed of network. The well-structured metadata is used to identify duplication on the corresponding data elements. Evaluation of a metadata prototype helps to analyze the file access patterns of user and to determine the future access prediction in terms of frequent accessibility ranking system. The SMO system generates a dashboard having details related to application data files and access details. Implementation using the proposed system SMO in simulation platform can show space optimization upto 11.85% than the normal system and bandwidth increases with respect to accessibility at the rate almost 84%.","PeriodicalId":321133,"journal":{"name":"2020 International Conference on Communication and Signal Processing (ICCSP)","volume":"34 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115430566","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Development of Signal Processing Algorithm for Optical Coherence Tomography
Pub Date: 2020-07-01 | DOI: 10.1109/ICCSP48568.2020.9182121
Kranti Patil, Anurag Mahajan, S. Balamurugan, P. Arulmozhivarman, R. Makkar
Optical Coherence Tomography (OCT) is a growing non-invasive imaging technology capable of generating high-resolution cross-sectional images at high processing speed. It is used extensively as an imaging device for the diagnosis of retinal diseases in ophthalmology, for the estimation of blood flow, and in the fields of oncology, cardiology and dermatology. Spectral-domain OCT (SD-OCT) uses low-coherence interferometry to obtain depth-resolved information about the sample, with resolution in the micrometer range and imaging depth in the millimeter range. The complexity of the OCT algorithm demands high processing speed from the underlying platform. The aim of this work is to develop a signal processing algorithm that achieves improved imaging depth. Methods such as background removal, re-sampling and FFT are used to obtain the desired depth profile of the sample, and the response of the actual hardware model is predicted from the outputs. This depth profile provides depth information about the sample, and the maximum depth depends on the number of pixels obtained from the spectrometer.
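The processing chain named in the abstract (background removal, re-sampling, FFT) can be sketched as follows for a single A-scan. The wavelength-to-wavenumber resampling and the spectrometer parameters describe a typical SD-OCT pipeline and are assumptions, not the authors' exact algorithm.

```python
# Hedged sketch of a basic SD-OCT A-scan: background subtraction, resampling to
# a uniform wavenumber grid, then FFT to get the depth-resolved profile.
import numpy as np

def a_scan(spectrum, wavelengths_nm, background):
    """spectrum, background: 1-D arrays over the spectrometer pixels;
    wavelengths_nm: the (increasing) wavelength of each pixel."""
    # 1. Remove the reference-arm background spectrum.
    s = spectrum - background
    # 2. Resample onto a uniform wavenumber grid (k = 2*pi/lambda decreases with lambda).
    k = 2 * np.pi / (wavelengths_nm * 1e-9)
    k_uniform = np.linspace(k.min(), k.max(), k.size)
    s_k = np.interp(k_uniform, k[::-1], s[::-1])
    # 3. FFT of the interferogram gives the depth-resolved reflectivity; keep half.
    depth_profile = np.abs(np.fft.fft(s_k))[: k.size // 2]
    return depth_profile
```

The number of spectrometer pixels fixes the length of `depth_profile`, which is why the maximum imaging depth depends on the pixel count, as noted above.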
{"title":"Development of Signal Processing Algorithm for Optical Coherence Tomography","authors":"Kranti Patil, Anurag Mahajan, S. Balamurugan, P. Arulmozhivarman, R. Makkar","doi":"10.1109/ICCSP48568.2020.9182121","DOIUrl":"https://doi.org/10.1109/ICCSP48568.2020.9182121","url":null,"abstract":"Optical Coherence Tomography (OCT) is a growing non-invasive imaging technology that is capable of generating high-resolution cross-sectional images and high processing speed. It is extensively used for the diagnosis of retinal diseases in ophthalmology, estimation of blood flow and in the field of oncology, cardiology, and dermatology as an imaging device. The spectral-domain OCT (SD-OCT) uses low coherence interferometry to get depth-resolved information of the sample with resolution in the micrometer range and imaging depth in the millimeter range. The complexity of the OCT algorithm demands high processing speed from the underlying platform. The aim is to develop the signal processing algorithm to achieve improved imaging depth. The methods such as background removal, re-sampling, FFT are used to get the desired depth profile of the sample. The response of the actual hardware model is predicted from the outputs. This depth profile gives the information of depth about the sample and the maximum depth depends on the number of pixel information obtained from the spectrometer.","PeriodicalId":321133,"journal":{"name":"2020 International Conference on Communication and Signal Processing (ICCSP)","volume":"390 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115586231","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Electricity Pilferage, Fault Detection and their Isolation for Power Quality enhancement in Electrical Distribution System by espouse SDS with Smart Switching Control based on μPMU, IoT-LoRa technology
Pub Date: 2020-07-01 | DOI: 10.1109/ICCSP48568.2020.9182348
Sharad Chandra Rajpoot, Prashant Singh Rajpoot, M. R. Khan
In a power system, the distribution section is the most affected part. It is usually intricate and unbalanced, which degrades power quality and hence the efficiency of the power system. The most common causes of degraded distribution system performance are power pilferage, unauthorized load connections, asynchronous communication and inadequate monitoring of the system. A better-quality power system requires real-time monitoring and control, synchronous communication, strong cyber security and effective fault management. In this paper we introduce an integrated system built on the Micro Phasor Measurement Unit (μPMU) and Internet of Things (IoT) based wireless communication using LoRa. The integrated system provides real-time monitoring and synchronous communication among the different elements of the system, together with encrypted cyber security, and it assists in load shedding, load management and load forecasting.
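One function such an integrated system could support is illustrated below: flagging suspected pilferage or an unauthorized load when a feeder's synchronously measured energy exceeds the sum of its metered consumers by more than a tolerance. The threshold and data format are illustrative assumptions, not part of the paper.

```python
# Hedged sketch: feeder-level energy balance check over synchronised meter readings.
def check_feeder(feeder_kwh, consumer_kwh_readings, tolerance=0.05):
    """Flag the feeder when unaccounted energy exceeds the tolerance fraction."""
    metered = sum(consumer_kwh_readings)
    loss = feeder_kwh - metered
    if feeder_kwh > 0 and loss / feeder_kwh > tolerance:
        return {"status": "suspected pilferage", "unaccounted_kwh": round(loss, 2)}
    return {"status": "normal", "unaccounted_kwh": round(loss, 2)}

print(check_feeder(1050.0, [300.0, 410.0, 250.0]))  # ~8.6% unaccounted -> flagged
```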
{"title":"Electricity Pilferage, Fault Detection and their Isolation for Power Quality enhancement in Electrical Distribution System by espouse SDS with Smart Switching Control based on μ PMU, IoT-LoRa technology","authors":"Sharad Chandra Rajpoot, Prashant Singh Rajpoot, M. R. Khan","doi":"10.1109/ICCSP48568.2020.9182348","DOIUrl":"https://doi.org/10.1109/ICCSP48568.2020.9182348","url":null,"abstract":"In power system the distribution section is mostly affected system. This system is usual intricate and unbalance which affect the efficiency of power system by degrading the power quality. The most common causes by which performance of distribution system is degraded are power pilferage, unauthorized load connection, asynchronous communication and unscrupulous monitoring of system. For better quality of power system it should have the real time monitoring & controlling, synchronous communication and superior cyber security and fault management. In this paper we are introducing the integrated system which will acquire the concept of Micro Phasor Measurement Unit ($mu$ PMU), Internet Of Things (IoT) based wireless communication LoRa. This integrated system will provide the real time monitoring, synchronous communication among the different elements of the system along with the encrypted form of the cyber security. It assist in the load shedding, load management and their forecasting.","PeriodicalId":321133,"journal":{"name":"2020 International Conference on Communication and Signal Processing (ICCSP)","volume":"33 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114458578","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Task identification in Massive MIMO Technology for Its Effective Implementation in 5G and Satellite Communication
Pub Date: 2020-07-01 | DOI: 10.1109/ICCSP48568.2020.9182428
J. Chattopadhyay, Spv . Subba Rao
Due to the large number of users, wireless communication faces restrictions in providing adequate bandwidth and Quality of Service (QoS). It is expected that this demand will be met by 5G technology. There is also a large data rate requirement for satellite communication. Today's cellular technology fails to deliver this data rate because of its line-of-sight (LOS) requirement and the increased number of cells. A solution is to use millimeter-wave communication together with massive Multiple Input Multiple Output (MIMO). MIMO, with multiple transmit and receive antennas, can ensure spectral efficiency (SE) and data reliability through spatial multiplexing and diversity; with the help of beam-forming antennas and channel state information (CSI), it can also provide energy efficiency (EE). Similar concepts can be extended to satellite communication. This paper identifies the tasks related to the implementation of massive MIMO and suggests a prototype setup for their evaluation.
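To make the spectral-efficiency argument concrete, the sketch below estimates the ergodic capacity of an Nt x Nr Rayleigh-fading MIMO channel with equal power allocation, C = E[log2 det(I + (SNR/Nt) H H^H)] bit/s/Hz. The antenna counts and SNR are illustrative values, not figures from the paper.

```python
# Hedged sketch: Monte-Carlo ergodic capacity of an i.i.d. Rayleigh MIMO channel.
import numpy as np

def ergodic_capacity(nt, nr, snr_db, trials=2000, seed=0):
    rng = np.random.default_rng(seed)
    snr = 10 ** (snr_db / 10)
    caps = []
    for _ in range(trials):
        h = (rng.standard_normal((nr, nt)) + 1j * rng.standard_normal((nr, nt))) / np.sqrt(2)
        m = np.eye(nr) + (snr / nt) * h @ h.conj().T
        caps.append(np.log2(np.linalg.det(m).real))
    return float(np.mean(caps))

for nt in (1, 4, 64):   # SISO vs. small array vs. massive array, 4 receive antennas
    print(f"{nt}x4 @ 10 dB SNR: {ergodic_capacity(nt, 4, 10):.1f} bit/s/Hz")
```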
{"title":"Task identification in Massive MIMO Technology for Its Effective Implementation in 5G and Satellite Communication","authors":"J. Chattopadhyay, Spv . Subba Rao","doi":"10.1109/ICCSP48568.2020.9182428","DOIUrl":"https://doi.org/10.1109/ICCSP48568.2020.9182428","url":null,"abstract":"Due to the large number of users, wireless communication is finding restriction to provide adequate bandwidth and Quality of Service (QoS). It is expected that the demand will be fulfilled by 5G technology. There is also a large data rate requirement for satellite communication. Today cellular technology fails to deliver this data rate due to their LOS requirement and also increased number of cells. The solution to this can be to use Milli-metric wave communication and Massive Multiple Input Multiple Output (MIMO). MIMO with multiple transmit and receive antenna can ensure spectrum efficiency (SE) and data reliability by using space multiplexing and spectral diversity. MIMO with the help of beam-forming antenna and channel state information (CSI) can provide energy efficiency (EE) also. Similar concepts can be extended to satellite communication. This paper has identified the tasks related to the implementation of MIMO and also suggested prototype set up for their evaluation.","PeriodicalId":321133,"journal":{"name":"2020 International Conference on Communication and Signal Processing (ICCSP)","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115337952","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Piecewise-Polynomial Function Evaluation in 3-D Graphics - Artificial Intelligence based New Digital Multiplier
Pub Date: 2020-07-01 | DOI: 10.1109/ICCSP48568.2020.9182208
M. Renuka, G. Valantina
An Artificial Intelligence based novel dual-channel multiplier (AINDCM) for area- and power-efficient second-order piecewise-polynomial function evaluation in three-dimensional graphics applications is presented in this paper. In any multiplier, the performance of the estimation method depends strongly on the type of adder structure, so different hardware adder structures and their implementations are presented. The proposed multipliers overcome the drawbacks of the conventional DCM multiplier by using parallel prefix adders, which reduce the hardware complexity. The proposed scheme performs complex computations in a power-efficient and area-efficient manner. The prefix adders reduce the hardware computational effort in piecewise-polynomial approximation with uniform or non-uniform segmentation, and these units achieve lower power consumption than a CPA for large input word sizes. The area, delay and power parameters are analyzed and compared.
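As context for the operation the multiplier accelerates, the sketch below evaluates a function by second-order piecewise-polynomial approximation with uniform segmentation, using the Horner form that needs two multiplications per evaluation. The segment count and the target function (1/sqrt(x)) are illustrative choices, not taken from the paper.

```python
# Hedged sketch: second-order piecewise-polynomial evaluation with uniform segments.
import numpy as np

def fit_segments(f, lo, hi, n_seg=16):
    """Fit c2*x^2 + c1*x + c0 on each uniform segment of [lo, hi)."""
    edges = np.linspace(lo, hi, n_seg + 1)
    coeffs = []
    for a, b in zip(edges[:-1], edges[1:]):
        xs = np.linspace(a, b, 32)
        coeffs.append(np.polyfit(xs, f(xs), 2))     # returns [c2, c1, c0]
    return edges, coeffs

def evaluate(x, edges, coeffs):
    """Look up the segment, then two multiplies in Horner form: (c2*x + c1)*x + c0."""
    i = min(np.searchsorted(edges, x, side="right") - 1, len(coeffs) - 1)
    c2, c1, c0 = coeffs[i]
    return (c2 * x + c1) * x + c0

edges, coeffs = fit_segments(lambda v: 1.0 / np.sqrt(v), 1.0, 2.0)
print(evaluate(1.37, edges, coeffs), 1.0 / np.sqrt(1.37))  # approximation vs. exact
```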
{"title":"Piecewise-Polynomial Function Evaluation in 3-D Graphics- Artificial Intelligence based New Digital Multiplier","authors":"M. Renuka, G. Valantina","doi":"10.1109/ICCSP48568.2020.9182208","DOIUrl":"https://doi.org/10.1109/ICCSP48568.2020.9182208","url":null,"abstract":"An Artificial Intelligence based Novel dual-channel multiplier (AINDCM) for the area and power-efficient second-order piecewise- polynomial function evaluation for three-dimensional graphics applications is presented in this paper. In any multiplier, the working of the estimation method is highly dependent on the type of adder structure. Different hardware structures of adders and their implementations are presented. The proposed multipliers overcome the drawbacks of conventional DCM multiplier using Parallel Prefix adders which decrease the hardware difficulty. The proposed scheme performs complex methods with a power- efficient and area-efficient approach. The prefix adders reduce the hardware computational effort in the piecewise polynomial approximation with uniform or non-uniform segmentation. These units accomplish the low power consumption compared to CPA with large input word size. The parameters area, delay, and power will be analyzed and compared.","PeriodicalId":321133,"journal":{"name":"2020 International Conference on Communication and Signal Processing (ICCSP)","volume":"57 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114682219","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}