Pub Date: 2020-09-28 | DOI: 10.1109/ICCCSP49186.2020.9315255
R. Katarya, Mahboob Massoudi
In recent years, social media has become a pivotal open communication model, connecting people across several platforms. Social networks have become extremely popular, and people use them to keep in touch with relatives and friends. In the past, people shared their ideas, opinions, feelings, and emotions with others through verbal and non-verbal channels; today they do so through social media networks. Access to news has also become effortless: where people once relied on newspapers and magazines to stay aware of world events, they can now read the latest news on social media within minutes of it breaking. People have grown accustomed to reading news this way because it is the easiest option, but one issue that sometimes undermines the credibility of social media is fake news. The main goal of this work is to identify the best approaches for finding and detecting fake and misleading news on social media networks. In addition, various research articles have informed the research questions presented in section three.
Title: Recognizing Fake News in Social Media with Deep Learning: A Systematic Review
Published in: 2020 4th International Conference on Computer, Communication and Signal Processing (ICCCSP)
Pub Date: 2020-09-28 | DOI: 10.1109/icccsp49186.2020.9315249
Title: [ICCCSP 2020 Front cover]
Pub Date: 2020-09-28 | DOI: 10.1109/ICCCSP49186.2020.9315256
J. Singh, V. Sharmila
Machine learning and artificial intelligence techniques are now in widespread use, enabling an online model-sharing market in which sharing and adapting pre-trained models has become popular. This also gives attackers many new opportunities, with deep neural networks being the dominant approach. In this paper we present a proof-of-concept method to detect Trojan attacks on deep neural networks. Deploying trojaned models can be dangerous in everyday life (for example, in automated vehicles). The attack first inverts the neural network to generate a general trojan trigger, then retrains the model on external datasets to inject the trigger into it; the malicious behavior is activated only by inputs containing the trigger. The attack does not require the original training data, which in practice is often not shared due to privacy or copyright concerns. We demonstrate the attack on five different applications and analyze the factors that affect it. The trojan behavior can be triggered without affecting the model's test accuracy on normal inputs. After generating the trojan trigger and performing the attack, we apply SHAP as a defense against such attacks, as SHAP is known for its explanations of model predictions.
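The trigger-injection step described above can be illustrated with a minimal data-poisoning sketch: stamp a fixed trigger patch onto a fraction of the training images and relabel them to the attacker's target class, so that retraining associates the patch with that class. The function names, patch placement, and poisoning rate here are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def stamp_trigger(images, trigger, top=0, left=0):
    """Stamp a fixed trigger patch onto a batch of grayscale images (N x H x W)."""
    poisoned = images.copy()
    h, w = trigger.shape
    poisoned[:, top:top + h, left:left + w] = trigger
    return poisoned

def poison_dataset(images, labels, trigger, target_label, rate=0.1, seed=0):
    """Poison a fraction of the dataset: stamp the trigger and relabel those
    samples to the attacker's target class. Retraining on this mix injects
    the trojan while clean samples keep the model's normal accuracy."""
    rng = np.random.default_rng(seed)
    n = len(images)
    idx = rng.choice(n, size=max(1, int(rate * n)), replace=False)
    images, labels = images.copy(), labels.copy()
    images[idx] = stamp_trigger(images[idx], trigger)
    labels[idx] = target_label
    return images, labels, idx
```

Only the poisoned samples carry the trigger pattern; the rest of the data is untouched, which is why the attack leaves normal-input accuracy intact.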
Title: Detecting Trojan Attacks on Deep Neural Networks
Pub Date: 2020-09-28 | DOI: 10.1109/ICCCSP49186.2020.9315214
S. Revathy, S. Priya
Agriculture is considered the largest trade, as it feeds the most people. Although it plays a vital role in economic development, the farmers who produce the crops do not reap proportionate benefits. Precision agriculture systems equipped with advanced technologies let farmers increase their productivity in an ecologically friendly manner, yet their profits remain minimal. The traditional producer-consumer model forces farmers to depend on intermediaries (middlemen) to sell their goods, as farmers face restrictions on selling directly to consumers. This paper proposes a Blockchain-based Producer-Consumer Model (BPCM) that uses smart contracts to let farmers sell their goods directly to consumers, preventing intermediaries from profiting at farmers' expense. It also resists Sybil attacks by blocking intermediaries who use multiple counterfeit identities to trade with consumers, thereby increasing farmers' profits.
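The direct producer-to-consumer flow with one-identity-per-producer registration can be sketched as a toy ledger. The class, method names, and identity fields below are illustrative assumptions, not the BPCM smart contracts themselves; real Sybil resistance on a blockchain would tie registration to a verified on-chain credential.

```python
class ProducerConsumerLedger:
    """Toy sketch of the BPCM idea: producers register a verified identity
    once, list goods directly, and consumers buy with no intermediary."""

    def __init__(self):
        self.producers = {}   # government_id -> producer address
        self.listings = {}    # listing_id -> (producer address, item, price)
        self._next_id = 0

    def register_producer(self, government_id, address):
        # Simplified Sybil resistance: one ledger identity per real-world ID,
        # so an intermediary cannot register multiple counterfeit identities.
        if government_id in self.producers:
            raise ValueError("identity already registered")
        self.producers[government_id] = address

    def list_goods(self, address, item, price):
        if address not in self.producers.values():
            raise PermissionError("only registered producers may list goods")
        self._next_id += 1
        self.listings[self._next_id] = (address, item, price)
        return self._next_id

    def buy(self, listing_id):
        # Payment goes straight to the producer; no middleman takes a cut.
        producer, item, price = self.listings.pop(listing_id)
        return producer, item, price
```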
Title: Blockchain based Producer-Consumer Model for Farmers
Pub Date: 2020-09-28 | DOI: 10.1109/ICCCSP49186.2020.9315217
N. Sripriya, S. Poornima, S. Mohanavalli, R. Bhaiya, V. Nikita
Visually impaired people often feel handicapped and find it difficult to explore the world around them freely without assistance. This lack of independence limits their self-confidence. Furthermore, the virtual travel assistants currently on the market are chatbots with textual interaction. While this is often useful, in certain situations users may find textual interaction uncomfortable and prefer a conversational speech interface. Another concern in the tourism industry is the money tourists spend on travel guides; a travel assistant capable of acting as a virtual guide can reduce this expense by a large factor. Hence, we propose a speech-based travel bot that acts as a virtual tour guide. The bot suggests places and gives information about them, such as opening hours, rating, and address, interacting with the user and providing relevant information to help the user learn more about each place. The proposed system is also useful in situations where texting is impractical, such as while driving. The bot is implemented using speech recognition, speech synthesis, and natural language processing techniques to gather the user's preferences and provide the intended output. The developed bot proved efficient at searching for different kinds of places, interacts well with the user, and helps find further details about a specific place in response to the user's queries. The intents of most queries are correctly recognized, which supports efficient dialogue management.
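The intent-recognition step that drives the bot's dialogue management can be sketched with a simple keyword-overlap rule. The intent names and keyword lists below are illustrative assumptions, not the authors' actual grammar; a production system would use a trained NLU model instead.

```python
# Minimal sketch of rule-based intent recognition for a travel bot.
INTENT_KEYWORDS = {
    "find_place":    {"find", "suggest", "nearby", "visit"},
    "opening_hours": {"open", "hours", "closing", "timing"},
    "get_rating":    {"rating", "rated", "review"},
    "get_address":   {"address", "located", "where"},
}

def recognize_intent(utterance):
    """Return the intent whose keyword set overlaps most with the utterance."""
    tokens = set(utterance.lower().split())
    best_intent, best_score = "unknown", 0
    for intent, keywords in INTENT_KEYWORDS.items():
        score = len(tokens & keywords)
        if score > best_score:
            best_intent, best_score = intent, score
    return best_intent
```

With speech recognition producing the transcript and speech synthesis reading back the answer, this lookup is the piece that decides which query (hours, rating, address) the bot should answer.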
Title: Speech-Based Virtual Travel Assistant For Visually Impaired
Pub Date: 2020-09-28 | DOI: 10.1109/ICCCSP49186.2020.9315270
M. Suganthi, J. Sathiaseelan
Image mining is an extension of data mining concerned with extracting useful information from image data. Images are classified based on texture, size, color, and morphology. Neural network architectures such as VGG16 and AlexNet, trained on ImageNet, are renowned image recognition techniques used to identify agricultural, medical, aerial, and other images. The convolutional neural network (CNN) is a machine learning method for image classification, popularly known for robust feature extraction and information mining. A comparative study of seven CNN-based hybrid image classification techniques, namely CNN-ELM, CNN-KNN, CNN-GA, MLP-CNN, CNN-SVM, CNN-RNN, and CNN-LSTM, has been carried out to determine their accuracy.
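The hybrid pattern these techniques share is a CNN used as a feature extractor followed by a classical classifier on the extracted features. A minimal sketch of the CNN-KNN variant is below; the random projection standing in for a trained CNN and the parameter choices are assumptions for illustration only.

```python
import numpy as np

def cnn_features(images, rng):
    """Stand-in for a CNN feature extractor: a fixed random projection plus
    ReLU. In a real CNN-KNN hybrid, the trained CNN's penultimate-layer
    activations would be used instead."""
    w = rng.standard_normal((images.shape[1], 16))
    return np.maximum(images @ w, 0.0)

def knn_predict(train_x, train_y, query_x, k=3):
    """Classify each query by majority vote among its k nearest neighbors
    in feature space (the 'KNN' half of CNN-KNN)."""
    preds = []
    for q in query_x:
        dists = np.linalg.norm(train_x - q, axis=1)
        nearest = train_y[np.argsort(dists)[:k]]
        preds.append(int(np.argmax(np.bincount(nearest))))
    return np.array(preds)
```

The other hybrids in the study swap the KNN stage for an ELM, GA-tuned classifier, SVM, or a recurrent head (RNN/LSTM) while keeping the CNN front end.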
Title: An Exploratory of Hybrid Techniques on Deep Learning for Image Classification
Pub Date: 2020-09-28 | DOI: 10.1109/ICCCSP49186.2020.9315248
Mohammad Sheraj, Ashish Chopra
We aim to achieve the highest data compression ratio in a lossy scenario while preserving the original image or audio file's characteristics and resolution/bitrate. To do this, we run feature extraction on chunks of the data and store them in a database keyed by a specific hash. The hash is stored in the compressed file, and the full data is later reconstructed from the database. The database is built by training on a wide range of data and storing only the most common chunks, indexed by hash. The compression ratio achieved for images is 0.01 relative to the standard raw input data.
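The chunk-hash-database scheme described above can be sketched in a few lines: split the data into chunks, key each chunk by its hash, emit only the hashes, and reconstruct by lookup. The chunk size, the use of SHA-256, and keeping every chunk (rather than only the most common ones) are simplifying assumptions for illustration.

```python
import hashlib

CHUNK = 64  # bytes per chunk; an illustrative choice, not the paper's value

def build_database(training_blobs):
    """Map each chunk's hash to the chunk itself. The paper keeps only the
    most common chunks seen in training; here every chunk is kept."""
    db = {}
    for blob in training_blobs:
        for i in range(0, len(blob), CHUNK):
            chunk = blob[i:i + CHUNK]
            db[hashlib.sha256(chunk).digest()] = chunk
    return db

def compress(blob):
    """Replace each chunk with its 32-byte SHA-256 digest."""
    return [hashlib.sha256(blob[i:i + CHUNK]).digest()
            for i in range(0, len(blob), CHUNK)]

def decompress(hashes, db):
    """Reconstruct the data by looking each hash up in the shared database."""
    return b"".join(db[h] for h in hashes)
```

Note the scheme is lossy in general: a chunk absent from the shared database cannot be reconstructed exactly, which is why the database must be trained on a wide range of representative data.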
Title: Data Compression Algorithm for Audio and Image using Feature Extraction
Pub Date: 2020-09-28 | DOI: 10.1109/ICCCSP49186.2020.9315273
H. Gokul, S. V. Kanna, H. Akshay Kumar, Vignesh Ravikumar
This research article focuses on enabling real-time Human-Robot Interaction (HRI) with two input control modalities designed to control a robotic arm. The arm has 3 Degrees of Freedom (DOF) and is controlled using two strategies: an inertial sensor-glove and image-based visual servoing. The mathematical model of the 3-DOF robotic arm was derived, and the two proposed methods were designed explicitly for it. The sensor-glove, worn by the user, embeds inertial sensors whose data track the motion of the user's arm. The image-based visual servoing modality is a mono-vision approach that uses a camera to track a target object held by the user; it operates like a mouse pointer, but in three dimensions. Fitts's targeting tasks were performed to analyze the performance of these interactive input modalities.
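Fitts's targeting tasks score an input modality by how quickly the user can acquire targets of a given size at a given distance. The standard Shannon formulation of the index of difficulty, and throughput as a comparison metric, can be computed as below; using throughput specifically is a common convention and an assumption here, not a detail taken from the paper.

```python
import math

def fitts_index_of_difficulty(distance, width):
    """Shannon formulation of Fitts's index of difficulty, in bits:
    ID = log2(D / W + 1), where D is the distance to the target and
    W is the target width."""
    return math.log2(distance / width + 1)

def fitts_throughput(distance, width, movement_time):
    """Throughput (bits/s) = ID / MT, a standard way to compare pointing
    modalities such as the sensor-glove vs. visual servoing."""
    return fitts_index_of_difficulty(distance, width) / movement_time
```

A task with D = 3 units and W = 1 unit has ID = log2(4) = 2 bits; completing it in 2 s yields a throughput of 1 bit/s.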
Title: Design of Imitative Control Modalities for a 3 Degree of Freedom Robotic Arm
Pub Date: 2020-09-28 | DOI: 10.1109/ICCCSP49186.2020.9315272
R. Kayalvizhi, S. Malarvizhi, S. Choudhury, A. Topkar, P. Vijayakumar
Deep learning algorithms can learn complex functions and provide state-of-the-art results for computer vision problems. In recent times they have far exceeded existing computer-vision-based techniques for object detection in X-ray imaging systems. So far, the literature on the SIXray10 database has considered only a single object class, namely guns and their parts. We propose a deep learning-based solution for detecting sharp objects, namely knives, scissors, wrenches, and pliers, in the SIXray10 database. We propose two models, Model A and Model B, built on a common object detection algorithm, YOLOv3 (You Only Look Once), with InceptionV3 and ResNet-50 respectively. YOLO is a deep-neural-network-based object detection algorithm that performs detection in one shot, allowing real-time inference on video at 15-30 fps. The model is an FCN (Fully Convolutional Network) and can perform both regression and classification by sharing weights between the two tasks. The network predicts a rectangular bounding box around each detected object of interest along with its class. We analyze the performance of both models in terms of mAP, achieving a mean accuracy of 59.95% for Model A and 63.35% for Model B. The most daunting part of the project is the low ratio of harmful to non-harmful items. Through rigorous experiments we arrived at the best possible results, using various pretrained neural networks for feature extraction in tandem with the YOLO model for object detection. We aim to improve on these results so that such systems can be deployed in airports to minimize human error and improve security.
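The mAP metric used to compare Model A and Model B rests on the intersection-over-union (IoU) overlap between a predicted bounding box and the ground truth: a detection counts as a true positive only if its IoU clears a threshold. A minimal sketch of IoU for axis-aligned boxes:

```python
def iou(box_a, box_b):
    """Intersection over Union of two boxes given as (x1, y1, x2, y2).
    IoU is the overlap criterion underlying mAP."""
    ix1 = max(box_a[0], box_b[0])
    iy1 = max(box_a[1], box_b[1])
    ix2 = min(box_a[2], box_b[2])
    iy2 = min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)  # overlap area, 0 if disjoint
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0
```

mAP then averages, over classes, the area under each class's precision-recall curve computed with this matching rule.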
Title: Detection of sharp objects using deep neural network based object detection algorithm
Pub Date: 2020-09-28 | DOI: 10.1109/ICCCSP49186.2020.9315243
K. Vidya, Lavanya Amalbabu, K.S Sowndharya, S. balaji
Today, complaint systems are operated manually, and no automated system exists in which complaints are identified and routed to the appropriate authority on their own. A complaint about day-to-day issues can be identified from images and produced as a report, which saves considerable manual time and may speed up the remedial process. In the proposed system, users upload an image, which is analyzed and classified according to the remedial department responsible for it. The manual work of both the user and the organization is thus minimized by an automated system that generates a report about the problem without human intervention. Based on institutional requirements, complaints are classified as website-based or object-based using an image classification system: web-related complaints are handled with optical character recognition, while object-based complaints are handled with object detection and data mining techniques. The images are trained and tested through various classification systems, and their performances are compared. Users may also attach a location to the complaint image, simplifying the task of finding the fault site; a report is then generated and forwarded to the corresponding department. This paper thus proposes an automated complaint generation and reporting system for institutions that classifies user input images using image processing and data mining techniques.
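The routing logic described above, classify the image, then use OCR output for website complaints or detected objects for physical ones, can be sketched as a dispatch function. The department names, keywords, and the shape of the classifier outputs are illustrative assumptions, not the paper's actual taxonomy.

```python
# Minimal sketch of the complaint-routing step after image classification.
def classify_complaint(image_kind, extracted_text="", detected_objects=()):
    """Route a complaint to a department based on upstream classifier output:
    OCR text for website-based complaints, detected objects otherwise."""
    if image_kind == "website":
        # Web-related complaints: OCR text drives the routing.
        if "error" in extracted_text.lower() or "404" in extracted_text:
            return "web-maintenance"
        return "it-support"
    # Object-based complaints: detected objects drive the routing.
    objects = set(detected_objects)
    if objects & {"pipe", "tap", "leak"}:
        return "plumbing"
    if objects & {"light", "fan", "switch"}:
        return "electrical"
    return "general-maintenance"
```

The generated report would bundle this department label with the uploaded image and the optional location before forwarding.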
Title: Secured Automated Complaint Generation System for Organizations