Fake profile detection in multimedia big data on online social networks
Somya Ranjan Sahoo, B. Gupta
Int. J. Inf. Comput. Secur. | Pub Date: 2020-02-07 | DOI: 10.1504/ijics.2020.10026785
Online social networks such as Facebook and Twitter have become a regular means of communication and interaction. Their popularity attracts attackers, who exhibit suspicious behaviour in the form of fake profiles, and various approaches have been proposed in recent years to stop them. Recent work has focused on machine learning techniques that detect fake profiles on the Facebook platform by analysing both public and private features. In this paper, a machine learning-based approach is proposed for detecting suspicious profiles that tap and taint multimedia big data on Facebook. Multimedia big data refers to datasets that are heterogeneous, human-centric and rich in media content such as text, audio and video, generated in huge volumes across different online social networks. Experimental results using content-based and profile-based features show that our approach delivers first-rate performance compared with other approaches.
Blind noise estimation-based CT image denoising in tetrolet domain
M. Diwakar, Pardeep Kumar
Int. J. Inf. Comput. Secur. | Pub Date: 2020-02-07 | DOI: 10.1504/ijics.2020.10026779
In medical imaging, various cases of cancer have recently been linked to the high radiation doses used in computed tomography (CT) examinations. These high doses are given to patients to obtain good-quality CT images. Rather than increasing the radiation dose, an alternative method is needed to obtain high-quality images for diagnosis. In this paper, we propose a method in which the noise of CT images is estimated using patch-based gradient approximation. The estimated noise is then used to denoise the CT images in the tetrolet domain. In the proposed scheme, locally adaptive thresholding in the tetrolet domain and non-local means filtering are performed to suppress noise in CT images. The noise estimated by the proposed method was compared with the noise added to the CT images, and it was observed that the noise is estimated almost correctly. To verify the strength of noise suppression, the proposed scheme was compared with other recent methods. The PSNR and visual quality of the experimental results indicate that the proposed scheme gives excellent outcomes compared with existing schemes.
Nested context-aware sanitisation and feature injection in clustered templates of JavaScript worms on the cloud-based OSN
Shashank Gupta, B. Gupta, Pooja Chaudhary
Int. J. Inf. Comput. Secur. | Pub Date: 2020-02-07 | DOI: 10.1504/ijics.2020.10026776
This article presents an enhanced JavaScript feature-injection-based framework that blocks the execution of cross-site scripting (XSS) worms in the virtual machines of a cloud-based online social network (OSN). It computes the features of clustered, sanitised, compressed templates of JavaScript attack vectors embedded in HTTP response messages. Any variation observed in this JavaScript feature set indicates the injection of XSS worms on the cloud-based OSN server. Injected worms then undergo nested context-aware sanitisation so that they are interpreted safely in the web browser. A prototype of the framework was developed in Java and installed in the virtual machines of a cloud environment. The framework was evaluated experimentally on OSN-based web applications deployed on the cloud platform. The performance analysis revealed that the framework detects the injection of malicious JavaScript code with a low false negative rate and acceptable performance overhead.
Eight neighbour bits swap encryption-based image steganography using arithmetic progression technique
Srilekha Mukherjee, G. Sanyal
Int. J. Inf. Comput. Secur. | Pub Date: 2020-02-07 | DOI: 10.1504/ijics.2020.10026792
This paper presents a steganographic approach for concealing secret data to facilitate secure communication. In the first stage, eight neighbour bits swap (ENBS) encryption is applied to the chosen cover image. This scrambles the data bits, disrupting the normal pixel orientation. Data bits from the secret image are then embedded within the scrambled cover using an arithmetic progression technique. Finally, the inverse ENBS transformation is applied to the resulting image. This descrambles the data, restoring the normal orientation, and the stego image is generated. Several quantitative and qualitative benchmark analyses of the approach are presented. All results show that imperceptibility is well maintained, and the payload is high with negligible distortion in the image.
Unconstrained face recognition using deep convolution neural network
A. K. Agrawal, Y. Singh
Int. J. Inf. Comput. Secur. | Pub Date: 2020-02-07 | DOI: 10.1504/ijics.2020.10026788
Many face recognition methods have been proposed over the past decades, differing essentially in how they determine discriminant facial features for better recognition. Recently, very deep neural networks have achieved great success in general object recognition because of their learning capability. This paper presents a convolution neural network (CNN)-based architecture for face recognition in unconstrained environments. The proposed architecture is based on a standard residual network. The recognition results show that the proposed CNN framework achieves state-of-the-art performance on the publicly available challenging datasets LFW, face94, face95, face96 and Grimace.
A coupled map lattice-based image encryption approach using DNA and bi-objective genetic algorithm
Shelza Suri, R. Vijay
Int. J. Inf. Comput. Secur. | Pub Date: 2020-02-07 | DOI: 10.1504/ijics.2020.10026778
This paper presents a coupled map lattice (CML) and deoxyribonucleic acid (DNA)-based image encryption algorithm that uses a genetic algorithm (GA) to optimise the results. In its first stage, the algorithm uses the chaotic CML method and DNA encoding to create an initial population of DNA masks. In the second stage, the GA is applied to obtain the best mask for encrypting the given plain image. The paper also discusses the use of two further chaotic functions, the logistic map (LM) and the transformed logistic map (TLM), in the DNA-GA-based hybrid combination, and evaluates and compares the performance of the proposed CML-DNA-GA algorithm with the LM-DNA-GA and TLM-DNA-GA hybrid approaches. The results show that the proposed approach performs better than the other two. The paper further discusses the impact of bi-objective GA optimisation for image encryption and applies it to all three techniques; the results show that bi-objective optimisation of the proposed algorithm gives balanced results with respect to the selected fitness functions.
A hybrid generative-discriminative model for abnormal event detection in surveillance video scenes
P. S. A. Kumar, D. Kavitha, S. A. Kumar
Int. J. Inf. Comput. Secur. | Pub Date: 2020-02-07 | DOI: 10.1504/ijics.2020.10026782
Detecting anomalous events in densely crowded pedestrian traffic video scenes remains a challenging task because of object-tracking difficulties and noise in the scene. In this paper, a novel hybrid generative-discriminative framework is proposed for detecting and localising the anomalous events of illegal vehicles present in the scene. The novelty lies in the hybrid use of latent Dirichlet allocation (LDA) and support vector machines (SVMs) over dynamic textures at the sub-region level. The proposed HLDA-SVM model consists of three main steps: first, the local binary patterns from twelve orthogonal planes (LBP-TwP) technique is applied to each spatio-temporal video patch to extract dynamic texture; then LDA is applied to the extracted dynamic textures to find the latent topic distribution; finally, a multi-way SVM classifier is trained on the topic distribution vector of each video sequence. The proposed HLDA-SVM model is validated on the UCSD dataset and compared with the mixture of dynamic textures and motion context techniques. Experimental results show that the HLDA-SVM approach performs on par with current anomaly detection algorithms.
Reversible data hiding methods in integer wavelet transform
Amishi Mahesh Kapadia, N. Pandian
Int. J. Inf. Comput. Secur. | Pub Date: 2019-12-04 | DOI: 10.1504/IJICS.2019.10014436
Reversible data hiding is the art of concealing secret information such that both the cover media and the secret information can be recovered without any loss. In this paper, the high-frequency sub-bands of the integer wavelet transform are used for data embedding. All coefficients are used for embedding, and to improve security the embedding is carried out in the frequency domain using spiral, sequential and random embedding methods. The main objective of this research is to hide the maximum amount of data with minimal distortion and to achieve reversibility for both the cover and the secret image. The experimental results show the improved capacity, imperceptibility and complete reversibility attained on standard and medical images. Robustness has not been widely studied for reversible data hiding; an attempt is made here to evaluate it against basic attacks, and the results show that the method can withstand geometrical attacks.
VIKAS: a new virtual keyboard-based simple and efficient text CAPTCHA verification scheme
Ankit Thakkar, Kajol Patel
Int. J. Inf. Comput. Secur. | Pub Date: 2019-12-04 | DOI: 10.1504/IJICS.2019.10018472
Online transactions are now ubiquitous and must be protected from bots using techniques such as CAPTCHA. Among the different types of CAPTCHA, text CAPTCHAs are preferred for their simplicity. A text CAPTCHA can be strengthened by adding distortion to prevent bot attacks, but this causes usability issues for humans: a user may need multiple attempts to gain access to the required service, which is frustrating. Hence, there is a need to design a CAPTCHA that is easy for humans to recognise but difficult for bots. This paper proposes a virtual keyboard-based simple and efficient text-CAPTCHA verification scheme (VIKAS) that makes CAPTCHA verification easy for humans but difficult for bots. VIKAS uses a simple text CAPTCHA and verifies it using the positions of the keys pressed by the user on an image-based virtual keyboard. VIKAS is resilient against segmentation schemes, replay attacks and possible keylogger attacks.
Study on data fuzzy breakpoint detection in massive dynamic data flow
Ma Yingying, Yuan Hao
Int. J. Inf. Comput. Secur. | Pub Date: 2019-10-09 | DOI: 10.1504/ijics.2019.10024487
Current methods obtain the frequency of abnormal data detected in adjacent regions by reading data between a sensor and its neighbours, use this frequency of abnormal data to describe the spatial correlation, and apply Bayesian analysis to the sensor readings to determine whether a sensor is abnormal. However, this approach suffers from low detection accuracy. This paper therefore proposes a method for detecting fuzzy breakpoints in massive dynamic data flows. The method first uses the amplitude difference to determine the abnormal data amplitude and the discrete-point difference of a fuzzy breakpoint, and then uses the wavelet transform to extract the inflection-point features of the fuzzy breakpoint. Combining these extracted features, a support vector machine classifier is used to detect fuzzy breakpoints in the massive dynamic data flow. Experimental results show that the proposed method effectively improves the accuracy of fuzzy breakpoint detection.