Pub Date : 2021-12-21 DOI: 10.1108/ijpcc-08-2021-0200
A novel approach for detection and classification of re-entrant crack using modified CNNetwork
Shadrack Fred Mahenge, Ala Alsanabani
Purpose Cracks in buildings are common and are usually identified through visual human inspection, which works only within the visible range; cracks that lie beyond the reach of the human eye in the same building can nevertheless be captured with a camera. Large cracks are readily visible, but fine cracks arising from flaws in wall construction require authentic information and confirmation so that they can be treated successfully, because unattended wall cracks can eventually lead to structural collapse. Design/methodology/approach In the modern era, digital image processing has gained importance across every domain of engineering. This research study therefore addresses the wall cracks found during the building inspection process, using a unique U-Net architecture combined with a convolutional neural network (CNN) method. Findings For modelling the proposed system, an image database from the Mendeley portal was used for the analysis. Experimental analysis showed that the proposed system detects wall cracks, correctly reports flat surfaces as containing no cracks, and successfully handles the two phases of operation, namely classification and segmentation, with the deep learning technique. In contrast to other conventional methodologies, the proposed methodology produces excellent performance results. Originality/value The originality of the paper lies in locating the crack regions on walls using a deep learning architecture.
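The abstract does not include an implementation, so the following is a minimal sketch of a U-Net-style segmentation network in Keras; the input size (128 x 128 grayscale), the filter counts and the training settings are assumptions for illustration, not the authors' published configuration.

```python
# Minimal U-Net-style crack segmentation sketch (hypothetical configuration;
# the paper's exact architecture and hyperparameters are not given).
import tensorflow as tf
from tensorflow.keras import layers, Model

def conv_block(x, filters):
    # Two 3x3 convolutions, as in standard U-Net encoder/decoder blocks.
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    return x

def build_unet(input_shape=(128, 128, 1)):
    inputs = layers.Input(input_shape)
    # Encoder: downsample while doubling filters.
    c1 = conv_block(inputs, 16); p1 = layers.MaxPooling2D()(c1)
    c2 = conv_block(p1, 32);     p2 = layers.MaxPooling2D()(c2)
    # Bottleneck.
    b = conv_block(p2, 64)
    # Decoder: upsample and concatenate the skip connections.
    u2 = layers.Conv2DTranspose(32, 2, strides=2, padding="same")(b)
    c3 = conv_block(layers.concatenate([u2, c2]), 32)
    u1 = layers.Conv2DTranspose(16, 2, strides=2, padding="same")(c3)
    c4 = conv_block(layers.concatenate([u1, c1]), 16)
    # Per-pixel crack probability map (segmentation phase).
    outputs = layers.Conv2D(1, 1, activation="sigmoid")(c4)
    return Model(inputs, outputs)

model = build_unet()
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
```

Thresholding the sigmoid output gives a crack/no-crack mask, so a flat surface would yield an all-zero map, matching the "no cracks found" behaviour described above.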
{"title":"A novel approach for detection and classification of re-entrant crack using modified CNNetwork","authors":"Shadrack Fred Mahenge, Ala Alsanabani","doi":"10.1108/ijpcc-08-2021-0200","DOIUrl":"https://doi.org/10.1108/ijpcc-08-2021-0200","url":null,"abstract":"\u0000Purpose\u0000In the purpose of the section, the cracks that are in the construction domain may be common and usually fixed with the human inspection which is at the visible range, but for the cracks which may exist at the distant place for the human eye in the same building but can be captured with the camera. If the crack size is quite big can be visible but few cracks will be present due to the flaws in the construction of walls which needs authentic information and confirmation about it for the successful completion of the wall cracks, as these cracks in the wall will result in the structure collapse.\u0000\u0000\u0000Design/methodology/approach\u0000In the modern era of digital image processing, it has captured the importance in all the domain of engineering and all the fields irrespective of the division of the engineering, hence, in this research study an attempt is made to deal with the wall cracks which are found or searched during the building inspection process, in the present context in association with the unique U-net architecture is used with convolutional neural network method.\u0000\u0000\u0000Findings\u0000In the construction domain, the cracks may be common and usually fixed with the human inspection which is at the visible range, but for the cracks which may exist at the distant place for the human eye in the same building but can be captured with the camera. If the crack size is quite big can be visible but few cracks will be present due to the flaws in the construction of walls which needs authentic information and confirmation about it for the successful completion of the wall cracks, as these cracks in the wall will result in the structure collapse. Hence, for the modeling of the proposed system, it is considered with the image database from the Mendeley portal for the analysis. With the experimental analysis, it is noted and observed that the proposed system was able to detect the wall cracks, search the flat surface by the result of no cracks found and it is successful in dealing with the two phases of operation, namely, classification and segmentation with the deep learning technique. In contrast to other conventional methodologies, the proposed methodology produces excellent performance results.\u0000\u0000\u0000Originality/value\u0000The originality of the paper is to find the portion of the cracks on the walls using deep learning architecture.\u0000","PeriodicalId":43952,"journal":{"name":"International Journal of Pervasive Computing and Communications","volume":null,"pages":null},"PeriodicalIF":2.6,"publicationDate":"2021-12-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"44546165","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2021-12-07 DOI: 10.1108/ijpcc-06-2021-0137
A pervasive health care device computing application for brain tumors with machine and deep learning techniques
S. D., Syed Inthiyaz
Purpose Pervasive health-care computing applications in the medical field support better diagnosis of various organs such as the brain, spinal cord, heart and lungs. The purpose of this study is to diagnose brain tumors using machine learning (ML) and deep learning (DL) techniques. Brain diagnosis is an important task in medical research and the most prominent step in providing treatment to a patient; a high diagnosis accuracy rate is therefore essential so that patients can readily receive treatment from medical consultants. Many earlier investigations have addressed the diagnosis of brain diseases; nevertheless, it remains necessary to improve the performance measures using deep and ML approaches. Design/methodology/approach In this paper, various brain-disorder diagnosis applications are differentiated through the implemented techniques, which segment and classify brain magnetic resonance imaging (MRI) or computerized tomography (CT) images. The adaptive median filter, convolutional neural network (CNN), gradient boosting machine learning (GBML) and improved support vector machine (SVM) are the methods used to extract hidden features and provide the medical information needed for diagnosis. The proposed design is implemented in Python 3.7.8 for simulation analysis. Findings This research offers help to investigators, diagnosis centres and doctors. For each model, performance measures are taken to estimate application performance. Measures such as accuracy, sensitivity, recall, F1 score, peak signal-to-noise ratio and correlation coefficient have been estimated using the proposed methodology, and these metrics show substantial improvement over earlier models. Originality/value The implemented deep and ML designs outperform earlier methodologies and achieve good application scores.
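As a rough illustration of the denoise-then-classify pipeline described above, the sketch below uses a plain median filter as a stand-in for the adaptive median filter, flattened pixels as stand-in features, and scikit-learn's gradient boosting and SVM classifiers on placeholder data; it is not the authors' implementation.

```python
# Hypothetical denoise-then-classify pipeline; the abstract names adaptive
# median filtering, a CNN, GBML and an improved SVM but gives no code, so a
# plain median filter and synthetic "scans" stand in for the real components.
import numpy as np
from scipy.ndimage import median_filter
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, recall_score, f1_score

def preprocess(images):
    # Denoise each MRI/CT slice, then flatten it into a feature vector.
    return np.array([median_filter(img, size=3).ravel() for img in images])

# Placeholder data: 200 synthetic 32x32 "scans" with binary tumor labels.
rng = np.random.default_rng(0)
X = preprocess(rng.random((200, 32, 32)))
y = rng.integers(0, 2, 200)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

for name, clf in [("GBML", GradientBoostingClassifier()), ("SVM", SVC())]:
    clf.fit(X_tr, y_tr)
    pred = clf.predict(X_te)
    # Accuracy, sensitivity (recall) and F1 score, as listed in the Findings.
    print(name, accuracy_score(y_te, pred),
          recall_score(y_te, pred), f1_score(y_te, pred))
```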
{"title":"A pervasive health care device computing application for brain tumors with machine and deep learning techniques","authors":"S. D., Syed Inthiyaz","doi":"10.1108/ijpcc-06-2021-0137","DOIUrl":"https://doi.org/10.1108/ijpcc-06-2021-0137","url":null,"abstract":"\u0000Purpose\u0000Pervasive health-care computing applications in medical field provide better diagnosis of various organs such as brain, spinal card, heart, lungs and so on. The purpose of this study is to find brain tumor diagnosis using Machine learning (ML) and Deep Learning(DL) techniques. The brain diagnosis process is an important task to medical research which is the most prominent step for providing the treatment to patient. Therefore, it is important to have high accuracy of diagnosis rate so that patients easily get treatment from medical consult. There are many earlier investigations on this research work to diagnose brain diseases. Moreover, it is necessary to improve the performance measures using deep and ML approaches.\u0000\u0000\u0000Design/methodology/approach\u0000In this paper, various brain disorders diagnosis applications are differentiated through following implemented techniques. These techniques are computed through segment and classify the brain magnetic resonance imaging or computerized tomography images clearly. The adaptive median, convolution neural network, gradient boosting machine learning (GBML) and improved support vector machine health-care applications are the advance methods used to extract the hidden features and providing the medical information for diagnosis. The proposed design is implemented on Python 3.7.8 software for simulation analysis.\u0000\u0000\u0000Findings\u0000This research is getting more help for investigators, diagnosis centers and doctors. In each and every model, performance measures are to be taken for estimating the application performance. The measures such as accuracy, sensitivity, recall, F1 score, peak-to-signal noise ratio and correlation coefficient have been estimated using proposed methodology. moreover these metrics are providing high improvement compared to earlier models.\u0000\u0000\u0000Originality/value\u0000The implemented deep and ML designs get outperformance the methodologies and proving good application successive score.\u0000","PeriodicalId":43952,"journal":{"name":"International Journal of Pervasive Computing and Communications","volume":null,"pages":null},"PeriodicalIF":2.6,"publicationDate":"2021-12-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49567783","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2021-10-25 DOI: 10.1108/ijpcc-07-2021-0153
RNN-based multispectral satellite image processing for remote sensing applications
Venkata Dasu Marri, Veera Narayana Reddy P., Chandra Mohan Reddy S.
Purpose Image classification is a fundamental form of digital image processing in which pixels are labelled into one of the object classes present in the image. Multispectral image classification is a challenging task owing to the complexities associated with images captured by satellites, yet accurate image classification is highly essential in remote sensing applications. Existing machine learning and deep learning-based classification methods have not provided the desired accuracy; the purpose of this paper is to classify the objects in satellite images with greater accuracy. Design/methodology/approach This paper proposes a deep learning-based automated method for classifying multispectral images. The central idea of this work is that data sets collected from public databases are first divided into a number of patches and their features are extracted; the features extracted from the patches are then concatenated before a classification method is used to classify the objects in the image. Findings The performance of the proposed modified velocity-based colliding bodies optimization method is compared with existing methods in terms of type-1 measures such as sensitivity, specificity, accuracy, negative predictive value, F1 score and Matthews correlation coefficient, and type-2 measures such as false discovery rate and false positive rate. The statistical results obtained from the proposed method show better performance than existing methods. Originality/value In this work, multispectral image classification accuracy is improved with an optimization algorithm called modified velocity-based colliding bodies optimization.
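The patch-then-concatenate step described above can be sketched as follows; the patch size (32 x 32), the 4-band placeholder image and the simple per-band statistics used as features are assumptions for illustration, not the paper's feature extractor.

```python
# Sketch of dividing a multispectral image into patches, extracting per-patch
# features and concatenating them (hypothetical patch size and features).
import numpy as np

def extract_patches(image, patch=32):
    """Split a multispectral image of shape (H, W, bands) into
    non-overlapping square patches."""
    h, w, _ = image.shape
    return [image[i:i + patch, j:j + patch, :]
            for i in range(0, h - patch + 1, patch)
            for j in range(0, w - patch + 1, patch)]

def patch_features(p):
    # Simple per-band mean and standard deviation as stand-in features.
    return np.concatenate([p.mean(axis=(0, 1)), p.std(axis=(0, 1))])

image = np.random.rand(128, 128, 4)     # placeholder 4-band satellite image
feats = np.concatenate([patch_features(p) for p in extract_patches(image)])
print(feats.shape)   # one concatenated feature vector for the classifier
```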
{"title":"RNN-based multispectral satellite image processing for remote sensing applications","authors":"Venkata Dasu Marri, Veera Narayana Reddy P., Chandra Mohan Reddy S.","doi":"10.1108/ijpcc-07-2021-0153","DOIUrl":"https://doi.org/10.1108/ijpcc-07-2021-0153","url":null,"abstract":"\u0000Purpose\u0000Image classification is a fundamental form of digital image processing in which pixels are labeled into one of the object classes present in the image. Multispectral image classification is a challenging task due to complexities associated with the images captured by satellites. Accurate image classification is highly essential in remote sensing applications. However, existing machine learning and deep learning–based classification methods could not provide desired accuracy. The purpose of this paper is to classify the objects in the satellite image with greater accuracy.\u0000\u0000\u0000Design/methodology/approach\u0000This paper proposes a deep learning-based automated method for classifying multispectral images. The central issue of this work is that data sets collected from public databases are first divided into a number of patches and their features are extracted. The features extracted from patches are then concatenated before a classification method is used to classify the objects in the image.\u0000\u0000\u0000Findings\u0000The performance of proposed modified velocity-based colliding bodies optimization method is compared with existing methods in terms of type-1 measures such as sensitivity, specificity, accuracy, net present value, F1 Score and Matthews correlation coefficient and type 2 measures such as false discovery rate and false positive rate. The statistical results obtained from the proposed method show better performance than existing methods.\u0000\u0000\u0000Originality/value\u0000In this work, multispectral image classification accuracy is improved with an optimization algorithm called modified velocity-based colliding bodies optimization.\u0000","PeriodicalId":43952,"journal":{"name":"International Journal of Pervasive Computing and Communications","volume":null,"pages":null},"PeriodicalIF":2.6,"publicationDate":"2021-10-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"62801928","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2021-10-18 DOI: 10.1108/ijpcc-06-2021-0143
Sentiment analysis in aspect term extraction for mobile phone tweets using machine learning techniques
Venkatesh Naramula, A. Kalaivania
Purpose This paper aims to extract aspect terms from mobile phone (iPhone and Samsung) tweets using NLTK techniques, with multi-aspect extraction being one of the key challenges. Machine learning techniques trained with supervised strategies are then used to predict and classify the sentiment present in the mobile phone tweets. The paper also presents the proposed architecture for extracting aspect terms and sentiment polarity from customer tweets. Design/methodology/approach In aspect-based sentiment analysis, aspect term extraction is one of the key challenges, where different aspects are extracted from online user-generated content. This study focuses on customer tweets/reviews on different mobile products, an important form of opinionated content, by looking at different aspects. Different deep learning techniques are used to extract all aspects from the customer tweets, which are collected using the Twitter API. Findings The results are compared with traditional machine learning methods such as the random forest algorithm, K-nearest neighbour and support vector machine on two data sets, iPhone tweets and Samsung tweets, to establish the better accuracy. Originality/value The originality lies in combining NLTK-based multi-aspect term extraction with supervised machine learning for sentiment classification of mobile phone tweets, together with the proposed architecture for extracting aspect terms and sentiment polarity from customer tweets.
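A common heuristic for NLTK-based aspect term extraction is to treat frequently occurring nouns as candidate aspects; the sketch below follows that heuristic on two toy tweets and is not necessarily the authors' exact extraction rule.

```python
# Illustrative NLTK aspect-term extraction via noun frequency (a common
# heuristic, assumed here). Requires one-time downloads:
#   nltk.download("punkt"); nltk.download("averaged_perceptron_tagger")
import nltk
from collections import Counter

tweets = [
    "The iPhone battery life is great but the camera is average",
    "Samsung screen quality is amazing, battery drains fast",
]

candidates = Counter()
for t in tweets:
    tokens = nltk.word_tokenize(t.lower())
    for word, tag in nltk.pos_tag(tokens):
        if tag.startswith("NN"):      # nouns become candidate aspect terms
            candidates[word] += 1

# Frequent nouns such as "battery", "camera" and "screen" surface as aspects;
# a supervised classifier would then assign polarity per aspect.
print(candidates.most_common(5))
```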
{"title":"Sentiment analysis in aspect term extraction for mobile phone tweets using machine learning techniques","authors":"Venkatesh Naramula, A. Kalaivania","doi":"10.1108/ijpcc-06-2021-0143","DOIUrl":"https://doi.org/10.1108/ijpcc-06-2021-0143","url":null,"abstract":"\u0000Purpose\u0000This paper aims to focus on extracting aspect terms on mobile phone (iPhone and Samsung) tweets using NLTK techniques on multiple aspect extraction is one of the challenges. Then, also machine learning techniques are used that can be trained on supervised strategies to predict and classify sentiment present in mobile phone tweets. This paper also presents the proposed architecture for the extraction of aspect terms and sentiment polarity from customer tweets.\u0000\u0000\u0000Design/methodology/approach\u0000In the aspect-based sentiment analysis aspect, term extraction is one of the key challenges where different aspects are extracted from online user-generated content. This study focuses on customer tweets/reviews on different mobile products which is an important form of opinionated content by looking at different aspects. Different deep learning techniques are used to extract all aspects from customer tweets which are extracted using Twitter API.\u0000\u0000\u0000Findings\u0000The comparison of the results with traditional machine learning methods such as random forest algorithm, K-nearest neighbour and support vector machine using two data sets iPhone tweets and Samsung tweets have been presented for better accuracy.\u0000\u0000\u0000Originality/value\u0000In this paper, the authors have focused on extracting aspect terms on mobile phone (iPhone and Samsung) tweets using NLTK techniques on multi-aspect extraction is one of the challenges. Then, also machine learning techniques are used that can be trained on supervised strategies to predict and classify sentiment present in mobile phone tweets. This paper also presents the proposed architecture for the extraction of aspect terms and sentiment polarity from customer tweets.\u0000","PeriodicalId":43952,"journal":{"name":"International Journal of Pervasive Computing and Communications","volume":null,"pages":null},"PeriodicalIF":2.6,"publicationDate":"2021-10-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"43892028","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2021-09-29 DOI: 10.1108/ijpcc-07-2021-0167
Wearable IoT based diagnosis of prostate cancer using GLCM-multiclass SVM and SIFT-multiclass SVM feature extraction strategies
Swetha Parvatha Reddy Chandrasekhara, M. Kabadi, Srivinay Srivinay
Purpose This study mainly aims to compare and contrast two completely different image processing algorithms that are highly adaptive for detecting prostate cancer using wearable Internet of Things (IoT) devices. Cancer in these modern times is still considered one of the most dreaded diseases and has been afflicting mankind over the past few decades. According to the Indian Council of Medical Research, India alone registers about 11.5 lakh cancer-related cases every year, and close to 8 lakh people die of cancer-related issues each year. Earlier, the incidence of prostate cancer was commonly seen in men aged above 60 years, but a recent study has revealed that this type of cancer has been on the rise even in men between the ages of 35 and 60 years. These findings make it even more necessary to prioritize research on diagnosing prostate cancer at an early stage, so that patients can be cured and lead a normal life. Design/methodology/approach The research focuses on two feature extraction algorithms, namely scale-invariant feature transform (SIFT) and the gray level co-occurrence matrix (GLCM), which are commonly used in medical image processing, in an attempt to discover and close the gap in the detection of prostate cancer in medical IoT. The results obtained by these two strategies are then classified separately using a machine learning-based classification model called the multi-class support vector machine (SVM). Owing to their better tissue discrimination and contrast resolution, magnetic resonance imaging (MRI) images have been considered for this study. The classification results obtained for the SIFT and GLCM methods are then compared to determine which feature extraction strategy provides the most accurate results for diagnosing prostate cancer. Findings The potential of both models has been evaluated in terms of three aspects, namely accuracy, sensitivity and specificity. Each model's results were checked against diversified ranges of training and test data. The SIFT-multiclass SVM model achieved the highest performance, with 99.9451% accuracy, 100% sensitivity and 99% specificity at a 40:60 ratio of training to testing data. Originality/value The SIFT-multiclass SVM versus GLCM-multiclass SVM comparison has been introduced for the first time to identify the best model for the accurate diagnosis of prostate cancer. The classification performance of each feature extraction strategy is enumerated in terms of accuracy, sensitivity and specificity.
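The GLCM branch of the comparison can be sketched with scikit-image and a multiclass SVM as below; the distances, angles and synthetic data are assumptions, and the SIFT branch, the MRI loading and the reported 99.9451% result are not reproduced. Note that scikit-image 0.19+ spells the functions graycomatrix/graycoprops.

```python
# Hypothetical GLCM-feature + multiclass SVM sketch on synthetic grayscale
# tiles; the authors' preprocessing and data are not available here.
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.svm import SVC

def glcm_features(img_uint8):
    # Co-occurrence matrix at distance 1 in four directions, then classic
    # Haralick-style properties as the feature vector.
    glcm = graycomatrix(img_uint8, distances=[1],
                        angles=[0, np.pi/4, np.pi/2, 3*np.pi/4],
                        levels=256, symmetric=True, normed=True)
    props = ["contrast", "homogeneity", "energy", "correlation"]
    return np.concatenate([graycoprops(glcm, p).ravel() for p in props])

rng = np.random.default_rng(1)
X = np.array([glcm_features(rng.integers(0, 256, (64, 64), dtype=np.uint8))
              for _ in range(60)])
y = rng.integers(0, 3, 60)      # placeholder multi-class labels
clf = SVC(kernel="rbf", decision_function_shape="ovr").fit(X, y)
print(clf.predict(X[:5]))
```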
{"title":"Wearable IoT based diagnosis of prostate cancer using GLCM-multiclass SVM and SIFT-multiclass SVM feature extraction strategies","authors":"Swetha Parvatha Reddy Chandrasekhara, M. Kabadi, Srivinay Srivinay","doi":"10.1108/ijpcc-07-2021-0167","DOIUrl":"https://doi.org/10.1108/ijpcc-07-2021-0167","url":null,"abstract":"\u0000Purpose\u0000This study has mainly aimed to compare and contrast two completely different image processing algorithms that are very adaptive for detecting prostate cancer using wearable Internet of Things (IoT) devices. Cancer in these modern times is still considered as one of the most dreaded disease, which is continuously pestering the mankind over a past few decades. According to Indian Council of Medical Research, India alone registers about 11.5 lakh cancer related cases every year and closely up to 8 lakh people die with cancer related issues each year. Earlier the incidence of prostate cancer was commonly seen in men aged above 60 years, but a recent study has revealed that this type of cancer has been on rise even in men between the age groups of 35 and 60 years as well. These findings make it even more necessary to prioritize the research on diagnosing the prostate cancer at an early stage, so that the patients can be cured and can lead a normal life.\u0000\u0000\u0000Design/methodology/approach\u0000The research focuses on two types of feature extraction algorithms, namely, scale invariant feature transform (SIFT) and gray level co-occurrence matrix (GLCM) that are commonly used in medical image processing, in an attempt to discover and improve the gap present in the potential detection of prostate cancer in medical IoT. Later the results obtained by these two strategies are classified separately using a machine learning based classification model called multi-class support vector machine (SVM). Owing to the advantage of better tissue discrimination and contrast resolution, magnetic resonance imaging images have been considered for this study. The classification results obtained for both the SIFT as well as GLCM methods are then compared to check, which feature extraction strategy provides the most accurate results for diagnosing the prostate cancer.\u0000\u0000\u0000Findings\u0000The potential of both the models has been evaluated in terms of three aspects, namely, accuracy, sensitivity and specificity. Each model’s result was checked against diversified ranges of training and test data set. It was found that the SIFT-multiclass SVM model achieved a highest performance rate of 99.9451% accuracy, 100% sensitivity and 99% specificity at 40:60 ratio of the training and testing data set.\u0000\u0000\u0000Originality/value\u0000The SIFT-multi SVM versus GLCM-multi SVM based comparison has been introduced for the first time to perceive the best model to be used for the accurate diagnosis of prostate cancer. 
The performance of the classification for each of the feature extraction strategies is enumerated in terms of accuracy, sensitivity and specificity.\u0000","PeriodicalId":43952,"journal":{"name":"International Journal of Pervasive Computing and Communications","volume":null,"pages":null},"PeriodicalIF":2.6,"publicationDate":"2021-09-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"43934843","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2021-08-10 DOI: 10.1108/ijpcc-02-2021-0037
Intelligent ubiquitous computing model for energy optimization of cloud IOTs in sensor networks
Deepa S.N.
Purpose Models developed in previous studies suffered from entrapment in local minima; this study therefore developed a new intelligent ubiquitous computational model that learns with the gradient descent learning rule and operates with auto-encoders and decoders to attain better energy optimization. The ubiquitous machine learning computational model performs training better than regular supervised or unsupervised learning computational models with deep learning techniques, resulting in better learning and optimization for the considered problem domain of cloud-based internet-of-things (IOTs). This study aims to improve network quality and the data accuracy rate during the network transmission process using the developed ubiquitous deep learning computational model. Design/methodology/approach In this research study, a novel intelligent ubiquitous machine learning computational model is designed and modelled to maintain the optimal energy level of cloud IOTs in sensor network domains. A new unified deterministic sine-cosine algorithm has been developed in this study for parameter optimization of the weight factors in the ubiquitous machine learning model. Findings The newly developed ubiquitous model is used to estimate network energy and optimize it in the considered sensor network model. During progressive simulation, residual energy, network overhead, end-to-end delay, network lifetime and the number of live nodes are evaluated. The results show that the ubiquitous deep learning model yields better metrics, owing to its appropriate cluster-selection and minimized route-selection mechanisms. Research limitations/implications In this research study, a novel ubiquitous computing model derived from a new optimization algorithm, called the unified deterministic sine-cosine algorithm, and a deep learning technique was derived and applied to maintain the optimal energy level of cloud IOTs in sensor networks. The deterministic Levy flight concept is applied in developing the new optimization technique, which determines the parametric weight values for the deep learning model. The ubiquitous deep learning model is designed with auto-encoders and decoders, and the weights of their layers are set to optimal values by the optimization algorithm. The modelled ubiquitous deep learning approach was applied to determine the network energy consumption rate and thereby optimize the energy level by increasing the lifetime of the considered sensor network model. For all the considered network metrics, the ubiquitous computing model has proved to be more effective and versatile than approaches from earlier research studies.
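The unified deterministic sine-cosine algorithm itself is not specified in the abstract; the sketch below implements the standard (stochastic) sine-cosine algorithm on a toy "energy" objective, to illustrate the kind of optimization step that would set the auto-encoder weight factors.

```python
# Generic sine-cosine algorithm (Mirjalili-style), shown on a toy objective;
# the paper's unified *deterministic* variant and its Levy flight coupling
# are not specified, so this is a sketch of the baseline method only.
import numpy as np

def sca(fitness, dim=10, agents=20, iters=100, lb=-1.0, ub=1.0, a=2.0, seed=0):
    rng = np.random.default_rng(seed)
    X = rng.uniform(lb, ub, (agents, dim))
    best = min(X, key=fitness).copy()
    for t in range(iters):
        r1 = a - t * a / iters        # shrinks: exploration -> exploitation
        for i in range(agents):
            r2 = rng.uniform(0, 2 * np.pi, dim)
            r3 = rng.uniform(0, 2, dim)
            r4 = rng.random()
            step = np.abs(r3 * best - X[i])
            # Sine or cosine move toward (or around) the best solution.
            X[i] += r1 * (np.sin(r2) if r4 < 0.5 else np.cos(r2)) * step
            X[i] = np.clip(X[i], lb, ub)
            if fitness(X[i]) < fitness(best):
                best = X[i].copy()
    return best

# Toy stand-in for "network energy": the sphere function over weight factors.
best = sca(lambda w: float(np.sum(w ** 2)))
print(np.round(best, 3))
```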
{"title":"Intelligent ubiquitous computing model for energy optimization of cloud IOTs in sensor networks","authors":"Deepa S.N.","doi":"10.1108/ijpcc-02-2021-0037","DOIUrl":"https://doi.org/10.1108/ijpcc-02-2021-0037","url":null,"abstract":"\u0000Purpose\u0000Limitations encountered with the models developed in the previous studies had occurrences of global minima; due to which this study developed a new intelligent ubiquitous computational model that learns with gradient descent learning rule and operates with auto-encoders and decoders to attain better energy optimization. Ubiquitous machine learning computational model process performs training in a better way than regular supervised learning or unsupervised learning computational models with deep learning techniques, resulting in better learning and optimization for the considered problem domain of cloud-based internet-of-things (IOTs). This study aims to improve the network quality and improve the data accuracy rate during the network transmission process using the developed ubiquitous deep learning computational model.\u0000\u0000\u0000Design/methodology/approach\u0000In this research study, a novel intelligent ubiquitous machine learning computational model is designed and modelled to maintain the optimal energy level of cloud IOTs in sensor network domains. A new intelligent ubiquitous computational model that learns with gradient descent learning rule and operates with auto-encoders and decoders to attain better energy optimization is developed. A new unified deterministic sine-cosine algorithm has been developed in this study for parameter optimization of weight factors in the ubiquitous machine learning model.\u0000\u0000\u0000Findings\u0000The newly developed ubiquitous model is used for finding network energy and performing its optimization in the considered sensor network model. At the time of progressive simulation, residual energy, network overhead, end-to-end delay, network lifetime and a number of live nodes are evaluated. It is elucidated from the results attained, that the ubiquitous deep learning model resulted in better metrics based on its appropriate cluster selection and minimized route selection mechanism.\u0000\u0000\u0000Research limitations/implications\u0000In this research study, a novel ubiquitous computing model derived from a new optimization algorithm called a unified deterministic sine-cosine algorithm and deep learning technique was derived and applied for maintaining the optimal energy level of cloud IOTs in sensor networks. The deterministic levy flight concept is applied for developing the new optimization technique and this tends to determine the parametric weight values for the deep learning model. The ubiquitous deep learning model is designed with auto-encoders and decoders and their corresponding layers weights are determined for optimal values with the optimization algorithm. The modelled ubiquitous deep learning approach was applied in this study to determine the network energy consumption rate and thereby optimize the energy level by increasing the lifetime of the sensor network model considered. 
For all the considered network metrics, the ubiquitous computing model has proved to be effective and versatile than previous approaches from early research stu","PeriodicalId":43952,"journal":{"name":"International Journal of Pervasive Computing and Communications","volume":null,"pages":null},"PeriodicalIF":2.6,"publicationDate":"2021-08-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"76097932","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2021-07-29 DOI: 10.1108/ijpcc-03-2021-0080
Machine learning based pervasive analytics for ECG signal analysis
Aarathi S., Vasundra S.
Purpose Pervasive analytics play a prominent role in the computer-aided prediction of non-communicable diseases. Early-stage arrhythmia diagnosis helps prevent sudden death owing to heart failure or stroke. The scope of an arrhythmia can be identified from an electrocardiogram (ECG) report. Design/methodology/approach The ECG report has been used extensively by several clinical experts; however, diagnosis accuracy has depended on clinical experience. For computer-aided heart disease prediction methods, both accuracy and sensitivity metrics play a remarkable part. Hence, existing research contributions have optimized machine-learning approaches, which have great significance in computer-aided methods that perform predictive analysis for arrhythmia detection. Findings This paper derives a regression heuristic based on tridimensional optimum features of ECG reports to perform pervasive analytics for computer-aided arrhythmia prediction. From the empirical outcomes, the projected model of this contribution is shown to be more optimal and advantageous when compared with existing or contemporary approaches. Originality/value The originality lies in the regression heuristic built on tridimensional optimum features of ECG reports for computer-aided arrhythmia prediction.
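The abstract does not define the "tridimensional optimum features"; the sketch below assumes three hypothetical per-beat features (RR interval, QRS duration, R-peak amplitude) and fits a logistic regression on synthetic data as a stand-in for the regression heuristic.

```python
# Sketch of a regression-based arrhythmia predictor on three assumed per-beat
# features; the authors' actual features, data and heuristic are not given.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, recall_score

rng = np.random.default_rng(2)
# Columns: [rr_interval, qrs_duration, r_amplitude] (placeholder values).
X = rng.normal(size=(500, 3))
# Synthetic labels loosely tied to the first two features.
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=500) > 0).astype(int)

clf = LogisticRegression().fit(X[:400], y[:400])
pred = clf.predict(X[400:])
# Accuracy and sensitivity, the two metrics the abstract emphasizes.
print("accuracy", accuracy_score(y[400:], pred),
      "sensitivity", recall_score(y[400:], pred))
```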
{"title":"Machine learning based pervasive analytics for ECG signal analysis","authors":"Aarathi S., Vasundra S.","doi":"10.1108/ijpcc-03-2021-0080","DOIUrl":"https://doi.org/10.1108/ijpcc-03-2021-0080","url":null,"abstract":"\u0000Purpose\u0000Pervasive analytics act as a prominent role in computer-aided prediction of non-communicating diseases. In the early stage, arrhythmia diagnosis detection helps prevent the cause of death suddenly owing to heart failure or heart stroke. The arrhythmia scope can be identified by electrocardiogram (ECG) report.\u0000\u0000\u0000Design/methodology/approach\u0000The ECG report has been used extensively by several clinical experts. However, diagnosis accuracy has been dependent on clinical experience. For the prediction methods of computer-aided heart disease, both accuracy and sensitivity metrics play a remarkable part. Hence, the existing research contributions have optimized the machine-learning approaches to have a great significance in computer-aided methods, which perform predictive analysis of arrhythmia detection.\u0000\u0000\u0000Findings\u0000In reference to this, this paper determined a regression heuristics by tridimensional optimum features of ECG reports to perform pervasive analytics for computer-aided arrhythmia prediction. The intent of these reports is arrhythmia detection. From an empirical outcome, it has been envisioned that the project model of this contribution is more optimal and added a more advantage when compared to existing or contemporary approaches.\u0000\u0000\u0000Originality/value\u0000In reference to this, this paper determined a regression heuristics by tridimensional optimum features of ECG reports to perform pervasive analytics for computer-aided arrhythmia prediction. The intent of these reports is arrhythmia detection. From an empirical outcome, it has been envisioned that the project model of this contribution is more optimal and added a more advantage when compared to existing or contemporary approaches.\u0000","PeriodicalId":43952,"journal":{"name":"International Journal of Pervasive Computing and Communications","volume":null,"pages":null},"PeriodicalIF":2.6,"publicationDate":"2021-07-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"48235937","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2021-06-17 DOI: 10.1108/IJPCC-11-2020-0193
Privacy preserving model-based authentication and data security in cloud computing
A. Pawar, S. Ghumbre, R. Jogdand
Purpose Cloud computing plays a significant role in initializing secure communication between users. The advanced technology offers several services, such as platforms, resources and network access; furthermore, cloud computing is a broader technology of communication convergence. In cloud computing architecture, data security and authentication are the main concerns. Design/methodology/approach The purpose of this study is to design and develop an authentication and data security model in cloud computing. The method includes six units: cloud server, data owner, cloud user, inspection authority, attribute authority and central certified authority. The developed privacy preservation method includes several stages, namely the setup, key generation, authentication and data sharing phases. Initially, the setup phase is performed by the owner, with security attributes as input, whereas the system master key and the public parameter are produced in the key generation stage. After that, the authentication process is performed to identify the security controls of the information system. Finally, the data are decrypted in the data sharing phase to share data and achieve privacy for confidential data. Additionally, dynamic splicing is utilized, and security functions such as hashing, elliptic curve cryptography (ECC), Triple Data Encryption Standard (3DES), interpolation, the polynomial kernel and XOR are employed to secure sensitive data. Findings The effectiveness of the developed privacy preservation method was estimated against other approaches and displayed efficient outcomes, with a better privacy factor and detection rate of 0.83 and 0.65, respectively, and the time is greatly reduced (2,815 ms) using the Cleveland dataset. Originality/value This paper presents a privacy preservation technique for initiating authenticated encrypted access in clouds, designed for mutual authentication of the requester and data owner in the system.
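Two of the security functions named above, ECC key agreement and hashing, together with the XOR step, can be sketched with the `cryptography` package as follows; the full scheme (3DES, interpolation, polynomial kernel, dynamic splicing) is not reproduced, and the raw XOR step is illustrative only, not a secure cipher mode.

```python
# Minimal sketch of ECC key agreement + hashing + XOR between a data owner
# and a cloud user (hypothetical roles; not the paper's full protocol).
# Requires the `cryptography` package (>= 3.1).
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.primitives import hashes

# Data owner and cloud user each generate an ECC key pair.
owner_priv = ec.generate_private_key(ec.SECP256R1())
user_priv = ec.generate_private_key(ec.SECP256R1())

# ECDH gives both parties the same shared secret (mutual authentication basis).
shared = owner_priv.exchange(ec.ECDH(), user_priv.public_key())
assert shared == user_priv.exchange(ec.ECDH(), owner_priv.public_key())

# Hash the shared secret into a fixed-length symmetric key.
h = hashes.Hash(hashes.SHA256())
h.update(shared)
key = h.finalize()

# XOR the key stream over the plaintext and back (illustrative only).
plaintext = b"confidential patient record"
cipher = bytes(p ^ key[i % len(key)] for i, p in enumerate(plaintext))
restored = bytes(c ^ key[i % len(key)] for i, c in enumerate(cipher))
assert restored == plaintext
```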
{"title":"Privacy preserving model-based authentication and data security in cloud computing","authors":"A. Pawar, S. Ghumbre, R. Jogdand","doi":"10.1108/IJPCC-11-2020-0193","DOIUrl":"https://doi.org/10.1108/IJPCC-11-2020-0193","url":null,"abstract":"\u0000Purpose\u0000Cloud computing plays a significant role in the initialization of secure communication between users. The advanced technology directs to offer several services, such as platform, resources, and accessing the network. Furthermore, cloud computing is a broader technology of communication convergence. In cloud computing architecture, data security and authentication are the main significant concerns.\u0000\u0000\u0000Design/methodology/approach\u0000The purpose of this study is to design and develop authentication and data security model in cloud computing. This method includes six various units, such as cloud server, data owner, cloud user, inspection authority, attribute authority, and central certified authority. The developed privacy preservation method includes several stages, namely setup phase, key generation phase, authentication phase and data sharing phase. Initially, the setup phase is performed through the owner, where the input is security attributes, whereas the system master key and the public parameter are produced in the key generation stage. After that, the authentication process is performed to identify the security controls of the information system. Finally, the data is decrypted in the data sharing phase for sharing data and for achieving data privacy for confidential data. Additionally, dynamic splicing is utilized, and the security functions, such as hashing, Elliptic Curve Cryptography (ECC), Data Encryption Standard-3 (3DES), interpolation, polynomial kernel, and XOR are employed for providing security to sensitive data.\u0000\u0000\u0000Findings\u0000The effectiveness of the developed privacy preservation method is estimated based on other approaches and displayed efficient outcomes with better privacy factor and detection rate of 0.83 and 0.65, and time is highly reduced by 2815ms using the Cleveland dataset.\u0000\u0000\u0000Originality/value\u0000This paper presents the privacy preservation technique for initiating authenticated encrypted access in clouds, which is designed for mutual authentication of requester and data owner in the system.\u0000","PeriodicalId":43952,"journal":{"name":"International Journal of Pervasive Computing and Communications","volume":null,"pages":null},"PeriodicalIF":2.6,"publicationDate":"2021-06-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"45917165","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2021-01-01 DOI: 10.1108/IJPCC-10-2020-0170
Heal nodes specification improvement using modified CHEF method for group based detection point network
A. R. Suhas, M. ManojPriyatham
{"title":"Heal nodes specification improvement using modified CHEF method for group based detection point network","authors":"A. R. Suhas, M. ManojPriyatham","doi":"10.1108/IJPCC-10-2020-0170","DOIUrl":"https://doi.org/10.1108/IJPCC-10-2020-0170","url":null,"abstract":"","PeriodicalId":43952,"journal":{"name":"International Journal of Pervasive Computing and Communications","volume":null,"pages":null},"PeriodicalIF":2.6,"publicationDate":"2021-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"62801906","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2020-11-02 DOI: 10.1108/ijpcc-05-2020-0033
Optimal path planning for intelligent automated wheelchair using DDSRPSO
K. Thirugnanasambandam, Raghav R.S., Jayakumar Loganathan, A. Dumka, Dhilipkumar V.
Purpose This paper aims to find the optimal path using directionally driven self-regulating particle swarm optimization (DDSRPSO) with high accuracy and minimal response time. Design/methodology/approach This paper addresses optimal path planning for an automated wheelchair design using the swarm intelligence algorithm DDSRPSO. Swarm intelligence is incorporated in the optimization owing to its cooperative behaviour. Findings The proposed work has been evaluated in three different regions and compared with particle swarm optimization and self-regulating particle swarm optimization, showing that the proposed algorithm produces the most robust optimal path. Originality/value The performance metrics used for evaluation include computational time, success rate and distance travelled.
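DDSRPSO's directional and self-regulating mechanisms are not detailed in the abstract, so the sketch below implements the baseline particle swarm optimizer it is compared against, applied to a toy waypoint-planning objective (minimizing total path length from start to goal).

```python
# Baseline PSO for a toy path-planning objective; DDSRPSO adds directionally
# driven, self-regulating updates that are not specified here, so only the
# standard algorithm is sketched.
import numpy as np

def path_length(waypoints, start=(0, 0), goal=(9, 9)):
    # Waypoints are a flat vector of (x, y) pairs between start and goal.
    pts = np.vstack([start, waypoints.reshape(-1, 2), goal])
    return float(np.sum(np.linalg.norm(np.diff(pts, axis=0), axis=1)))

def pso(fitness, dim, particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = np.random.default_rng(seed)
    X = rng.uniform(0, 10, (particles, dim))
    V = np.zeros_like(X)
    pbest, pbest_f = X.copy(), np.array([fitness(x) for x in X])
    g = pbest[pbest_f.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, particles, dim))
        # Velocity update: inertia + cognitive pull + social pull.
        V = w * V + c1 * r1 * (pbest - X) + c2 * r2 * (g - X)
        X = X + V
        f = np.array([fitness(x) for x in X])
        better = f < pbest_f
        pbest[better], pbest_f[better] = X[better], f[better]
        g = pbest[pbest_f.argmin()].copy()
    return g, fitness(g)

best, length = pso(path_length, dim=6)   # three intermediate waypoints
print(length)   # approaches the straight-line start-to-goal distance
```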
{"title":"Optimal path planning for intelligent automated wheelchair using DDSRPSO","authors":"K. Thirugnanasambandam, Raghav R.S., Jayakumar Loganathan, A. Dumka, Dhilipkumar V.","doi":"10.1108/ijpcc-05-2020-0033","DOIUrl":"https://doi.org/10.1108/ijpcc-05-2020-0033","url":null,"abstract":"\u0000Purpose\u0000This paper aims to find the optimal path using directionally driven self-regulating particle swarm optimization (DDSRPSO) with high accuracy and minimal response time.\u0000\u0000\u0000Design/methodology/approach\u0000This paper encompasses optimal path planning for automated wheelchair design using swarm intelligence algorithm DDSRPSO. Swarm intelligence is incorporated in optimization due to the cooperative behavior in it.\u0000\u0000\u0000Findings\u0000The proposed work has been evaluated in three different regions and the comparison has been made with particle swarm optimization and self-regulating particle swarm optimization and proved that the optimal path with robustness is from the proposed algorithm.\u0000\u0000\u0000Originality/value\u0000The performance metrics used for evaluation includes computational time, success rate and distance traveled.\u0000","PeriodicalId":43952,"journal":{"name":"International Journal of Pervasive Computing and Communications","volume":null,"pages":null},"PeriodicalIF":2.6,"publicationDate":"2020-11-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"86929651","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}