Pub Date: 2020-01-01 | DOI: 10.1109/Confluence47617.2020.9057928
Y. Joshi, Udit Chawla, Shipra Shukla
The volume of big data has opened up great opportunities for the prediction and analysis of different aspects of weather. Data visualisation is common in day-to-day life: various charts and graphs illustrate a practical approach to the classification of rainfall. Since analysing such large datasets was previously impractical, data visualisation techniques have made it easier to plot graphs for a better understanding of the weather. Using patterns such as the highest, lowest and average rainfall in the States/Union Territories, the weather of India has been visualised. In this paper, the rainfall pattern across the States/Union Territories of India was successfully visualised. The patterns identify drought-prone regions in India, a decrease in annual rainfall over the century, and heavy rainfall in the coastal regions of India.
Title: Rainfall Prediction Using Data Visualisation Techniques
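The state-wise highest/lowest/average patterns described above can be sketched with pandas. The column names and rainfall figures below are illustrative stand-ins, not the paper's actual dataset.

```python
import pandas as pd

# Hypothetical annual rainfall records (state, mm); values are illustrative.
records = pd.DataFrame({
    "state": ["Kerala", "Kerala", "Rajasthan", "Rajasthan", "Meghalaya", "Meghalaya"],
    "annual_rainfall_mm": [3055.0, 2876.0, 313.0, 402.0, 11871.0, 10934.0],
})

# Highest, lowest and average annual rainfall per state -- the three
# patterns the paper visualises for India's States/Union Territories.
summary = records.groupby("state")["annual_rainfall_mm"].agg(["max", "min", "mean"])

# A bar chart of the averages is then one line away, e.g.:
# summary["mean"].plot(kind="bar", ylabel="mean annual rainfall (mm)")
print(summary)
```

The same groupby-then-plot step scales to a full century of records, which is how the drought-prone and coastal patterns would surface visually.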
Pub Date: 2020-01-01 | DOI: 10.1109/Confluence47617.2020.9058044
Rohit Kumar Kaliyar
Language modeling is the task of assigning a probability distribution over sequences of words that matches the distribution of a language. A language model is required to represent text in a form understandable from the machine's point of view, and it can predict the probability of a word occurring in context-related text. Although it sounds formidable, most language models in existing research are based on unidirectional training. In this paper, we have investigated a bidirectional training model, BERT (Bidirectional Encoder Representations from Transformers). BERT builds on the bidirectional idea, in contrast to other word-embedding models (such as ELMo), and uses the comparatively new transformer encoder-based architecture to compute word embeddings. This paper describes how the model produces state-of-the-art results on various NLP tasks. BERT can train bidirectionally over a large corpus, whereas the existing methods are based on unidirectional training (either left-to-right or right-to-left). This bidirectionality helps to obtain better results in context-related classification tasks in which words are used as input vectors. Additionally, BERT is designed for multi-task learning using context-related datasets, and it can perform different NLP tasks simultaneously. This survey focuses on a detailed presentation of the BERT-based technique for word embedding, its architecture, and the importance of this model for pre-training over a large corpus.
Title: A Multi-layer Bidirectional Transformer Encoder for Pre-trained Word Embedding: A Survey of BERT
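The bidirectional-versus-unidirectional distinction the abstract draws can be illustrated with a toy attention mask in NumPy. This is a sketch of the masking idea only, not BERT's actual implementation: with a causal mask a token sees only its left context, without one it sees both sides.

```python
import numpy as np

def attention_weights(T, causal):
    # Toy self-attention with uniform scores, so the mask alone decides
    # which positions a token can see. causal=True mimics a left-to-right
    # language model; causal=False mimics BERT-style bidirectional access.
    scores = np.zeros((T, T))
    if causal:
        scores[np.triu_indices(T, k=1)] = -np.inf  # hide right context
    e = np.exp(scores - scores.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# Token 0 in a 4-token sequence:
bi = attention_weights(4, causal=False)   # nonzero weight on all 4 positions
uni = attention_weights(4, causal=True)   # weight only on position 0 itself
```

In the bidirectional case every row spreads weight over the whole sequence, which is what lets context-related classification use both neighbours of a word.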
Pub Date: 2020-01-01 | DOI: 10.1109/Confluence47617.2020.9057936
K. Kaul, Smriti Sehgal
The need for single image dehazing arises from hazy input images captured during foggy or hazy weather. Dust particles and smog scatter light easily, especially during morning haze, fireworks, or at dawn, so a haze layer becomes superimposed on the original image, and recovering the original image from the hazy input is a challenging task. Single image dehazing generally requires a massive dataset of hazy input images, because deep learning is the backbone of this approach; deep neural networks place multiple hidden layers between the hazy input image and the output layer. Single image dehazing methods such as polarization, extra-information, prior-based, and learning-based approaches have shown high accuracy in recovering a clear image. Among the existing methods, polarization and contrast-based methods are not applicable in real-time scenarios. Although the Dark Channel Prior was one of the most successful prior-based strategies, its drawback is that it overestimates the thickness of the haze. In this paper, the main focus is on comparing different deep learning methods, with emphasis on various convolutional neural networks, thereby giving a deep insight into CNN strategies for recovering the dehazed image.
Title: Single Image Dehazing Using Neural Network
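Of the surveyed approaches, the Dark Channel Prior is concrete enough to sketch. Below is a minimal NumPy computation of the dark channel statistic itself (per-pixel minimum over colour channels, then a local minimum filter), assuming an H×W×3 image scaled to [0, 1]; the haze-thickness estimate that the abstract says this prior overestimates is derived from this quantity.

```python
import numpy as np

def dark_channel(image, patch=3):
    # image: H x W x 3 float array in [0, 1].
    # Haze-free regions tend to have a dark channel near zero;
    # haze lifts it, which is what the prior exploits.
    min_rgb = image.min(axis=2)          # minimum over colour channels
    H, W = min_rgb.shape
    pad = patch // 2
    padded = np.pad(min_rgb, pad, mode="edge")
    out = np.empty_like(min_rgb)
    for i in range(H):                   # local minimum filter
        for j in range(W):
            out[i, j] = padded[i:i + patch, j:j + patch].min()
    return out
```

A clear pixel with any near-zero channel yields a near-zero dark channel, while a uniformly bright (hazy) region stays high.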
Pub Date: 2020-01-01 | DOI: 10.1109/Confluence47617.2020.9058275
Manoj Wairiya, Anjali C. Shah, G.P. Sahu
Regardless of occupation, training and education play a vital role in acquiring knowledge, skills and competences. Mobile learning is the concept of using mobile devices such as mobile phones and smartphones for learning. This paper takes up mobile learning for educational purposes, presents the prospects and opportunities of M-learning, and explores the implications and challenges faced in its implementation. A survey was conducted in government and private institutes in India to determine instructors' and students' awareness and perception of M-learning, to assess its productiveness, and to analyse the social and cultural challenges that affect its adoption in India. A questionnaire was distributed to 390 students and 57 instructors from educational institutions in India. The results show that instructors and students have a positive perception of M-learning and accept that it improves the learning and teaching process. However, a few challenges remain as obstacles to M-learning implementation.
Title: Mobile Learning Adoption: An Empirical Study
Pub Date: 2020-01-01 | DOI: 10.1109/Confluence47617.2020.9058174
M. Reynoso, M. Diván
Nowadays, data has a dynamism never seen before: it grows continuously, fed both by data entered by users and by new data derived from systems. In this context, data visualization plays a highlighted role because it synthetically communicates high data volumes, making them understandable for the end user. This constitutes a key asset throughout the decision-making process in any organization, because it incorporates dynamism and fosters different kinds of analysis. For that reason, guidelines can tutor a process, in particular one related to data visualization. In this work, the general visualization process is described in order to schematize the way in which user requirements and sketches converge. Next, the visualization design describes how the sketches could be implemented in software. As a contribution, an application case using the forest-fires dataset of Argentina between 2011 and 2017 is shown, to serve as a reference for using the guidelines. The case was implemented in Qlik Sense Cloud, incorporating a set of dynamic behaviors included in the platform, such as zooming on maps or sharing selections between visual components. The data employed are freely available on datos.gob.ar, the open-data platform of Argentina's government. The case exemplifies the use of the guidelines, their applicability, and the chosen data visualization strategy in a consistent way.
Title: Applying Data Visualization Guideline on Forest Fires in Argentina
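One sketch-to-implementation step the guidelines describe (aggregate, then chart) can be hinted at with pandas. The province names, years and hectare figures below are hypothetical placeholders, not the actual datos.gob.ar records, and the paper's own implementation is in Qlik Sense Cloud rather than Python.

```python
import pandas as pd

# Hypothetical slice of a forest-fires dataset; columns are illustrative.
fires = pd.DataFrame({
    "year": [2011, 2011, 2012, 2017],
    "province": ["La Pampa", "Cordoba", "La Pampa", "Cordoba"],
    "hectares_burned": [1200.0, 800.0, 450.0, 2300.0],
})

# Aggregate per year, as a requirements sketch might specify...
per_year = fires.groupby("year")["hectares_burned"].sum()
# ...then chart it:
# per_year.plot(kind="bar", ylabel="hectares burned")
print(per_year)
```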
Pub Date: 2020-01-01 | DOI: 10.1109/Confluence47617.2020.9057960
Tushar Tyagi, Parth Gupta, Prabhishek Singh
This paper presents a new hybrid, parallel-processing image fusion technique for multi-focus images. Two different methods, Stationary Wavelet Transform (SWT) and Principal Component Analysis (PCA), are applied to the same input images in parallel. The fused images obtained from SWT and PCA are then fused again using PCA. Although this method is computationally somewhat slower than the compared methods, it shows better results. The proposed method is compared with traditional and conventional methods such as DWT, SWT and PCA, and is observed to outperform them. The results are analysed qualitatively (visual appearance) and quantitatively using CC (Correlation Coefficient), UIQI (Universal Image Quality Index), and PSNR (Peak Signal-to-Noise Ratio). The proposed technique can be implemented in real-time applications of Visual Sensor Networks (VSNs).
Title: A Hybrid Multi-focus Image Fusion Technique using SWT and PCA
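The PCA fusion stage can be sketched as follows: fusion weights for the two source images are taken from the leading eigenvector of their 2×2 pixel covariance matrix. This is a generic PCA-fusion sketch, not the authors' code, and the SWT stage (e.g. via pywt.swt2) is omitted for brevity.

```python
import numpy as np

def pca_fuse(a, b):
    # a, b: two source images of identical shape (e.g. the SWT-fused and
    # PCA-fused intermediates described in the paper).
    data = np.stack([a.ravel(), b.ravel()])
    cov = np.cov(data)                       # 2x2 covariance of the sources
    vals, vecs = np.linalg.eigh(cov)
    v = np.abs(vecs[:, np.argmax(vals)])     # leading eigenvector
    w = v / v.sum()                          # normalise to fusion weights
    return w[0] * a + w[1] * b
```

Because the weights sum to one, fusing an image with itself returns the image unchanged, a quick sanity check on any weighted-fusion rule.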
Pub Date: 2020-01-01 | DOI: 10.1109/Confluence47617.2020.9058134
Protyush De, Aratrika Das, P. Dutta, Semanti Chakraborty, Debangana Brahma, Sayanti Banerjee
The food industry has been progressing rapidly and is one of the fastest growing industries in the world; for everyone, from the poorest to the elite, food is a major component of life. Keeping this in mind, we were motivated to take up the challenge of constructing a device that detects the taste of food in a unique and automated way. The proposed electronic tongue consists of an IR sensor that emits infrared rays to detect the taste of food and identifies the five tastes (bitter, salty, sour, sweet, and umami). It not only detects the particular taste but also shows the percentage presence of the detected taste (sweetness, saltiness, etc.). The whole process takes place during the preparation of the food if the device is kept at the production centre. The sensor architecture has been designed and the sensor incorporated on an Arduino board fitted with LEDs of different colours, which glow depending on the taste detected. The sensor circuit was designed in OrCAD PSpice, and its simulation with differently coloured LEDs segregates the five tastes: violet for sour, blue for bitter, green for salty, orange for sweet, and red for savoury (umami). In addition, applying concepts from biotechnology, each chemical producing a particular taste has been identified, and IR rays are passed through it to generate signals in a microcontroller. The amplitude of the generated distortion indicates the percentage presence of the taste in the food. This will not only enhance the quality of food but also help indicate any rotten particles or microorganisms present. The device will have wide application in dry-food manufacturing industries (e.g. Lay's, Kurkure, Nestlé, Metro), food courts, restaurants, and medicine manufacturing companies.
Title: An Automated Electronic tounge with instantaneous taste detectors using IR sensor and Arduino
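The taste-to-LED mapping and the percentage readout described above can be sketched in a few lines. The function name and the intensity units are hypothetical stand-ins for the microcontroller logic; only the colour scheme comes from the paper.

```python
# Taste-to-LED colour scheme as described in the paper.
LED_COLOURS = {
    "sour": "violet",
    "bitter": "blue",
    "salty": "green",
    "sweet": "orange",
    "umami": "red",
}

def taste_readout(intensities):
    # intensities: dict of taste -> sensor amplitude (arbitrary units,
    # standing in for the IR distortion amplitude the paper measures).
    # Returns the dominant taste, its LED colour, and its share of the
    # total signal -- the "percentage presence" the device reports.
    total = sum(intensities.values())
    taste = max(intensities, key=intensities.get)
    return taste, LED_COLOURS[taste], 100.0 * intensities[taste] / total
```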
Pub Date: 2020-01-01 | DOI: 10.1109/Confluence47617.2020.9058190
Rishi Sharma, Shilpi Sharma
This work studies the behaviour of a variety of malware, especially ransomware variants, whose philosophy is to kidnap the data residing on the disk. The Internet has become an essential part of daily life as more and more people use the services offered on it, and this explosive growth facilitates cyberattacks. Future wars will be cyber wars, and the attacks will be a sturdy amalgamation of cryptography and malware aimed at distorting information systems and their security. Malware plays an indispensable role in launching malicious activities for monetary gain. By studying malware behaviour, especially ransomware and crypto-mining, this work creates an approach for putting a proactive detection mechanism in place.
Title: A novel Approach to counter Ransomwares
Pub Date: 2020-01-01 | DOI: 10.1109/Confluence47617.2020.9057811
B. Saxena, V. Saxena
Influence maximization (IM) in online social networks (OSNs) has been extensively studied in the past few years, owing to its potential impact on online marketing. IM aims at selecting a small set of influential nodes who can lead to maximum influence spread across a social network. An integral part of IM is the modelling of the underlying diffusion process, which has a substantial impact on the spread achieved by any seed set. In this paper, a Hurst-based diffusion model for IM is proposed, under which a node's activation depends upon the self-similarity exhibited by its past activity pattern. The self-similarity trend of a node's activity pattern is assessed using the Hurst exponent (H). On the basis of the results achieved, the proposed model performs significantly better than two widely popular diffusion models, Independent Cascade and Linear Threshold, which are often used for IM in OSNs.
Title: Influence Maximization in Social Networks using Hurst exponent based Diffusion Model
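A simple Hurst estimate can be sketched from the scaling of lagged differences, std(x[t+lag] - x[t]) ~ lag**H. This toy estimator illustrates what H measures (H near 0.5 for a random walk, higher for persistent, trending activity) and is not necessarily the computation the paper uses.

```python
import numpy as np

def hurst_exponent(series, max_lag=20):
    # Estimate H from the power-law growth of the standard deviation of
    # lagged differences: a log-log linear fit recovers the exponent.
    lags = np.arange(2, max_lag)
    tau = [np.std(series[lag:] - series[:-lag]) for lag in lags]
    H, _ = np.polyfit(np.log(lags), np.log(tau), 1)
    return H
```

Applied to a node's activity time series, H > 0.5 would signal the persistent, self-similar behaviour the proposed diffusion model conditions activation on.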
Pub Date : 2020-01-01DOI: 10.1109/Confluence47617.2020.9058060
Ramesh Chandra Sahoo, S. Pradhan, Poonam Tanwar
Deep neural networks, in particular convolutional neural networks, have been among the most popular and widely applied techniques for image classification over the last few years. The overhead of an explicit feature-extraction step is avoided thanks to the implicit feature extraction performed by a convolutional neural network (CNN), and these extracted features contain substantial information that can be sufficient for an image classification problem. Fully connected (FC) layers in a CNN take the output of the last convolution and/or pooling layer and use it to recognize or classify images into labels. In this paper, we present an associative-memory-based model, the Hopfield network, as a fully connected layer that stores patterns for classification in a LeNet-5-like CNN architecture. The main purpose of using a Hopfield network, a fully connected recurrent network, is to avoid backpropagation in this layer, and the results we have obtained are comparable with state-of-the-art models. To measure the performance of the new architecture, we used the NIT Rourkela Odia characters dataset and compared the architecture with other classification models.
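The associative-memory mechanism the abstract relies on can be shown in isolation: a Hopfield network stores ±1 patterns with the Hebbian rule (no backpropagation) and retrieves a stored pattern from a corrupted probe. This is a minimal sketch of that behavior under assumed toy patterns, not the paper's LeNet-5-based architecture.

```python
import numpy as np

def train_hopfield(patterns):
    """Store +/-1 patterns with the Hebbian rule: W = sum_p x_p x_p^T, zero diagonal."""
    patterns = np.asarray(patterns, dtype=float)
    w = patterns.T @ patterns
    np.fill_diagonal(w, 0)          # no self-connections
    return w

def recall(w, probe, steps=10):
    """Synchronous updates until a fixed point; returns the retrieved pattern."""
    state = np.asarray(probe, dtype=float).copy()
    for _ in range(steps):
        new_state = np.where(w @ state >= 0, 1.0, -1.0)
        if np.array_equal(new_state, state):
            break
        state = new_state
    return state

# Two orthogonal 8-bit patterns standing in for class prototypes.
p1 = np.array([1, -1, 1, -1, 1, -1, 1, -1])
p2 = np.array([1, 1, -1, -1, 1, 1, -1, -1])
w = train_hopfield([p1, p2])

noisy = p1.copy()
noisy[0] = -noisy[0]                # flip one bit to simulate a corrupted feature vector
retrieved = recall(w, noisy)        # converges back to the stored prototype p1
```

In an FC-layer role, the probe would be the feature vector from the last pooling layer and the stored patterns would be per-class prototypes; training the memory is a single Hebbian pass rather than gradient descent.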
{"title":"HopNet based Associative Memory as FC layer in CNN for Odia Character Classification","authors":"Ramesh Chandra Sahoo, S. Pradhan, Poonam Tanwar","doi":"10.1109/Confluence47617.2020.9058060","DOIUrl":"https://doi.org/10.1109/Confluence47617.2020.9058060","url":null,"abstract":"A deep neural network such as convolutional neural network is a popular and most commonly applied technique in image processing for classification for the last few years. The overhead of the feature extraction step will be avoided due to the implicit feature extraction nature of convolutional neural network (CNN) and these extracted features contain substantial information that could be sufficient for an image classification problem. Fully connected (FC) layers in CNN take the results of the last convolution and/or pooling layer and then use them to recognize or classifying images into labels. In this paper, we present an associative memory-based model named Hopfield network as a fully connected layer to store patterns for classification in CNN architecture like LeNet-5. The main purpose of using Hopfield network is to avoid backpropagation as it is a fully connected recurrent network as the state-of-art results which we have obtained are comparable with other models. To measure the performance of the new architecture, we used NIT, Rourkela, Odia characters dataset and compared it with other models for classification.","PeriodicalId":180005,"journal":{"name":"2020 10th International Conference on Cloud Computing, Data Science & Engineering (Confluence)","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2020-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129243594","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}