Revocable Certificateless Public Key Encryption with Equality Test
Pub Date: 2022-12-12 | DOI: 10.5755/j01.itc.51.4.30691
Tung-Tso Tsai, Han-Yu Lin, Han-Ching Tsai
Traditional public key cryptography requires certificates to link each user's identity to her/his public key. Typically, public key infrastructures (PKI) are used to manage and maintain certificates; however, building a PKI, with its many roles and complex policies, consumes considerable resources. The concept of certificateless public key encryption (CL-PKC) was introduced to eliminate the need for certificates. Based on this concept, a mechanism called certificateless public key encryption with equality test (CL-PKEET) was proposed to ensure the confidentiality of private data while supporting equality tests on ciphertexts. The mechanism suits cloud applications in which users not only protect personal private data but also enjoy cloud services that test the equality of different ciphertexts: any two ciphertexts can be tested to determine whether they encrypt the same plaintext. Any practical system must also provide a way to revoke compromised users. However, existing CL-PKEET schemes do not address the revocation problem, and related research is scant. Therefore, this article proposes the first revocable CL-PKEET scheme, called RCL-PKEET, which can effectively remove illegal users from the system while matching the efficiency of existing CL-PKEET schemes in encryption, decryption, and equality testing. Additionally, we formally prove the security of the proposed scheme under the bilinear Diffie-Hellman assumption.
{"title":"Revocable Certificateless Public Key Encryption with Equality Test","authors":"Tung-Tso Tsai, Han-Yu Lin, Han-Ching Tsai","doi":"10.5755/j01.itc.51.4.30691","DOIUrl":"https://doi.org/10.5755/j01.itc.51.4.30691","url":null,"abstract":"Traditional public key cryptography requires certificates as a link between each user’s identity and her/his public key. Typically, public key infrastructures (PKI) are used to manage and maintain certificates. However, it takes a lot of resources to build PKI which includes many roles and complex policies. The concept of certificateless public key encryption (CL-PKC) was introduced to eliminate the need for certificates. Based on this concept, a mechanism called certificateless public key encryption with equality test (CL-PKEET) was proposed to ensure the confidentiality of private data and provide an equality test of different ciphertexts. The mechanism is suitable for cloud applications where users cannot only protect personal private data but also enjoy cloud services which test the equality of different ciphertexts. More specifically, any two ciphertexts can be tested to determine whether they are encrypted from the same plaintext. Indeed, any practical system needs to provide a solution to revoke compromised users. However, these existing CL-PKEET schemes do not address the revocation problem, and the related research is scant. Therefore, the aim of this article is to propose the first revocable CL-PKEET scheme called RCL-PKEET which can effectively remove illegal users from the system while maintaining the effectiveness of existing CL-PKEET schemes in encryption, decryption, and equality testing processes. Additionally, we formally demonstrate the security of the proposed scheme under the bilinear Diffie-Hellman assumption.","PeriodicalId":54982,"journal":{"name":"Information Technology and Control","volume":"8 1","pages":"638-660"},"PeriodicalIF":1.1,"publicationDate":"2022-12-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"73673320","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Performance Analysis of a 2-bit Dual-Mode Uniform Scalar Quantizer for Laplacian Source
Pub Date: 2022-12-12 | DOI: 10.5755/j01.itc.51.4.30473
Z. Perić, B. Denic, A. Jovanovic, S. Milosavljevic, Milan S. Savic
The main issue with non-adaptive scalar quantizers is their sensitivity to variance mismatch, the effect that occurs when the data variance differs from the variance assumed in the quantizer design. In this paper, we consider the influence of this effect in low-rate (2-bit) uniform scalar quantization (USQ) of a Laplacian source and propose an adequate measure to suppress it. The proposed approach is an upgraded version of previous approaches used to improve the performance of a single quantizer. It is based on dual-mode quantization, which combines two 2-bit USQs (with adequately chosen parameters) and selects between them by a special rule applied to the input data. Theoretical analysis shows that the proposed approach is less sensitive to variance mismatch, making the dual-mode USQ more robust than the single USQ. A gain is also achieved compared to other 2-bit quantizer solutions. Experimental results are provided for quantizing the weights of a multi-layer perceptron (MLP) neural network, where good agreement with the theoretical results is observed. Given these achievements, we believe the proposed solution is a good choice for compressing non-stationary data modeled by the Laplacian distribution, such as neural network parameters.
{"title":"Performance Analysis of a 2-bit Dual-Mode Uniform Scalar Quantizer for Laplacian Source","authors":"Z. Perić, B. Denic, A. Jovanovic, S. Milosavljevic, Milan S. Savic","doi":"10.5755/j01.itc.51.4.30473","DOIUrl":"https://doi.org/10.5755/j01.itc.51.4.30473","url":null,"abstract":"The main issue when dealing with the non-adaptive scalar quantizers is their sensitivity to variance-mismatch, the effect that occurs when the data variance differs from the one used for the quantizer design. In this paper, we consider the influence of that effect in low-rate (2-bit) uniform scalar quantization (USQ) of Laplacian source and also we propose adequate measure to suppress it. Particularly, the approach we propose represents the upgraded version of the previous approaches used to improve performance of the single quantizer. It is based on dual-mode quantization that combines two 2-bit USQs (with adequately chosen parameters) to process input data, selected by applying the special rule. Analysis conducted in theoretical domain has shown that the proposed approach is less sensitive to variance-mismatch, making the dual-mode USQ more efficient in terms of robustness than the single USQ. Also, a gain is achieved compared to other 2-bit quantizer solutions. Experimental results are also provided for quantization of weights of the multi-layer perceptron (MLP) neural network, where good matching with the theoretical results is observed. Due to these achievements, we believe that the solution we propose can be a good choice for compression of non-stationary data modeled by Laplacian distribution, such as neural network parameters.","PeriodicalId":54982,"journal":{"name":"Information Technology and Control","volume":"40 1","pages":"625-637"},"PeriodicalIF":1.1,"publicationDate":"2022-12-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"85209963","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A New Range-based Breast Cancer Prediction Model Using the Bayes' Theorem and Ensemble Learning
Pub Date: 2022-12-12 | DOI: 10.5755/j01.itc.51.4.31347
Sam Khozama, Ali Mahmoud Mayya
Breast cancer prediction is essential for preventing and treating cancer. In this research, a novel breast cancer prediction model is introduced that provides a range-based cancer score instead of a binary classification result (yes or no). The Breast Cancer Surveillance Consortium (BCSC) dataset is used and modified by applying a proposed probabilistic model to obtain the range-based cancer score. The model analyses a subset of the BCSC dataset comprising 67,632 records and 13 risk factors. Three types of statistics are acquired: general cancer and non-cancer probabilities, previous medical knowledge, and the likelihood of each risk factor given each prediction class. The model also uses a weighting methodology to achieve the best fusion of the BCSC's risk factors. The final prediction score is computed from the posterior probability of the weighted combination of risk factors together with the three statistics acquired from the probabilistic model. This final prediction is added to the BCSC dataset, and the new version of the dataset is used to train an ensemble model consisting of 30 learners. Experiments are conducted on both the subset and the whole dataset (317,880 medical records). The results indicate that the new range-based model is accurate and robust, with an accuracy of 91.33%, a false rejection rate of 1.12%, and an AUC of 0.9795. The new version of the BCSC dataset can be used for further research and analysis.
{"title":"A New Range-based Breast Cancer Prediction Model Using the Bayes' Theorem and Ensemble Learning","authors":"Sam Khozama, Ali Mahmoud Mayya","doi":"10.5755/j01.itc.51.4.31347","DOIUrl":"https://doi.org/10.5755/j01.itc.51.4.31347","url":null,"abstract":"Breast cancer prediction is essential for preventing and treating cancer. In this research, a novel breast cancer prediction model is introduced. In addition, this research aims to provide a range-based cancer score instead of binary classification results (yes or no). The Breast Cancer Surveillance Consortium dataset (BCSC) dataset is used and modified by applying a proposed probabilistic model to achieve the range-based cancer score. The suggested model analyses a sub dataset of the whole BCSC dataset, including 67632 records and 13 risk factors. Three types of statistics are acquired (general cancer and non-cancer probabilities, previous medical knowledge, and the likelihood of each risk factor given all prediction classes). The model also uses the weighting methodology to achieve the best fusion of the BCSC's risk factors. The computation of the final prediction score is done using the post probability of the weighted combination of risk factors and the three statistics acquired from the probabilistic model. This final prediction is added to the BCSC dataset, and the new version of the BCSC dataset is used to train an ensemble model consisting of 30 learners. The experiments are applied using the sub and the whole datasets (including 317880 medical records). The results indicate that the new range-based model is accurate and robust with an accuracy of 91.33%, a false rejection rate of 1.12%, and an AUC of 0.9795. The new version of the BCSC dataset can be used for further research and analysis.","PeriodicalId":54982,"journal":{"name":"Information Technology and Control","volume":"6 1","pages":"757-770"},"PeriodicalIF":1.1,"publicationDate":"2022-12-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"87456506","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
UPSNet: Universal Point Cloud Sampling Network Without Knowing Downstream Tasks
Pub Date: 2022-12-12 | DOI: 10.5755/j01.itc.51.4.29894
Fujing Tian, Yang Song, Zhidi Jiang, Wenxu Tao, G. Jiang
With the development of three-dimensional sensing technology, the data volume of point clouds grows rapidly. Point clouds are therefore usually down-sampled in advance to save memory and reduce the computational complexity of downstream tasks such as classification, segmentation, and reconstruction in learning-based point cloud processing. The sampled point clouds should be representative and maintain the geometric structure of the original point clouds so that downstream tasks achieve satisfactory performance on the sampled data. Traditional sampling methods such as farthest point sampling and random sampling heuristically select a subset of the original point cloud; they do not exploit high-level semantic representations of point clouds and are sensitive to outliers. Other sampling methods are task-oriented. In this paper, a Universal Point cloud Sampling Network that operates without knowing the downstream tasks (UPSNet) is proposed. It consists of three modules. The importance learning module learns the mutual information between the points of the input point cloud and computes a group of variational importance probabilities representing the importance of each point; based on these, a mask discards the points with lower importance so that the number of remaining points is controlled. The regional learning module learns a high-dimensional embedding of each region, and a global feature for each region is obtained by weighting the embedding with the variational importance probabilities. Finally, the coordinate regression module cascades the global feature with each region's embedding to learn the sampled point cloud. In a series of experiments, point cloud classification, segmentation, reconstruction, and retrieval are performed on point clouds sampled with different methods. The results show that UPSNet provides more reasonable samplings of the input point cloud for these downstream tasks and is superior to existing sampling methods that do not know the downstream tasks. Because UPSNet is not oriented to specific downstream tasks, it has wide applicability.
{"title":"UPSNet: Universal Point Cloud Sampling Network Without Knowing Downstream Tasks","authors":"Fujing Tian, Yang Song, Zhidi Jiang, Wenxu Tao, G. Jiang","doi":"10.5755/j01.itc.51.4.29894","DOIUrl":"https://doi.org/10.5755/j01.itc.51.4.29894","url":null,"abstract":"With the development of three-dimensional sensing technology, the data volume of point cloud grows rapidly. Therefore, point cloud is usually down-sampled in advance so as to save memory space and reduce the computational complexity for its downstream processing tasks such as classification, segmentation, reconstruction in learning based point cloud processing. Obviously, the sampled point clouds should be well representative and maintain the geometric structure of the original point clouds so that the downstream tasks can achieve satisfied performance based on the point clouds sampled from the original ones. Traditional point cloud sampling methods such as farthest point sampling and random sampling mainly heuristically select a subset of the original point cloud. However, they do not make full use of high-level semantic representation of point clouds, are sensitive to outliers. Some of other sampling methods are task oriented. In this paper, a Universal Point cloud Sampling Network without knowing downstream tasks (denoted as UPSNet) is proposed. It consists of three modules. The importance learning module is responsible for learning the mutual information between the points of input point cloud and calculating a group of variational importance probabilities to represent the importance of each point in the input point cloud, based on which a mask is designed to discard the points with lower importance so that the number of remaining points is controlled. Then, the regional learning module learns from the input point cloud to get the high dimensional space embedding of each region, and the global feature of each region are obtained by weighting the high dimensional space embedding with the variational importance probability. Finally, through the coordinate regression module, the global feature and the high dimensional space embedding of each region are cascaded for learning to obtain the sampled point cloud. A series of experiments are implemented in which the point cloud classification, segmentation, reconstruction and retrieval are performed on the reconstructed point clouds sampled with different point cloud sampling methods. The experimental results show that the proposed UPSNet can provide more reasonable sampling result of the input point cloud for the downstream tasks of classification, segmentation, reconstruction and retrieval, and is superior to the existing sampling methods without knowing the downstream tasks. The proposed UPSNet is not oriented to specific downstream tasks, so it has wide applicability.","PeriodicalId":54982,"journal":{"name":"Information Technology and Control","volume":"295 1","pages":"723-737"},"PeriodicalIF":1.1,"publicationDate":"2022-12-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"86392699","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Alzheimer's Disease Segmentation and Classification on MRI Brain Images Using Enhanced Expectation Maximization Adaptive Histogram (EEM-AH) and Machine Learning
Pub Date: 2022-12-12 | DOI: 10.5755/j01.itc.51.4.28052
J. Ramya, B. Maheswari, M. Rajakumar, R. Sonia
Alzheimer’s disease (AD) is an irreversible ailment that causes rapid loss of memory and behavioral changes, and it is now common among the elderly. Although there is no specific treatment for this disorder, early diagnosis helps delay the spread of the disease. Therefore, automatic recognition of AD using image processing techniques has attracted much attention in the past few years. In this research, we propose a novel framework for AD classification using magnetic resonance imaging (MRI) data. Initially, the image is filtered using a 2D Adaptive Bilateral Filter (2D-ABF). The denoised image is then enhanced using an Entropy-based Contrast Limited Adaptive Histogram Equalization (ECLAHE) algorithm. From the enhanced data, the region of interest (ROI) is segmented using clustering and thresholding: clustering is performed with Enhanced Expectation Maximization (EEM), and thresholding with an Adaptive Histogram (AH) thresholding algorithm. From the ROI, Gray Level Co-Occurrence Matrix (GLCM) features are generated; the GLCM counts the occurrences of pixel pairs at specific spatial offsets in an image. The dimension of these features is reduced using Principal Component Analysis (PCA), and the resulting features are classified using Logistic Regression (LR). Classification achieved an accuracy of 96.92%, computed from the confusion matrix, in identifying Alzheimer’s disease. The framework was further evaluated using metrics derived from the confusion matrix: accuracy, sensitivity, F-score, precision, and specificity. Our study demonstrates that the proposed AD detection model outperforms other models proposed in the literature.
{"title":"Alzheimer's Disease Segmentation and Classification on MRI Brain Images Using Enhanced Expectation Maximization Adaptive Histogram (EEM-AH) and Machine Learning","authors":"J. Ramya, B. Maheswari, M. Rajakumar, R. Sonia","doi":"10.5755/j01.itc.51.4.28052","DOIUrl":"https://doi.org/10.5755/j01.itc.51.4.28052","url":null,"abstract":"Alzheimer’s disease (AD) is an irreversible ailment. This ailment causes rapid loss of memory and behavioral changes. Recently, this disorder is very common among the elderly. Although there is no specific treatment for this disorder, its diagnosis aids in delaying the spread of the disease. Therefore, in the past few years, automatic recognition of AD using image processing techniques has achieved much attraction. In this research, we propose a novel framework for the classification of AD using magnetic resonance imaging (MRI) data. Initially, the image is filtered using 2D Adaptive Bilateral Filter (2D-ABF). The denoised image is then enhanced using Entropy-based Contrast Limited Adaptive Histogram Equalization (ECLAHE) algorithm. From enhanced data, the region of interest (ROI) is segmented using clustering and thresholding techniques. Clustering is performed using Enhanced Expectation Maximization (EEM) and thresholding is performed using Adaptive Histogram (AH) thresholding algorithm. From the ROI, Gray Level Co-Occurrence Matrix (GLCM) features are generated. GLCM is a feature that computes the occurrence of pixel pairs in specific spatial coordinates of an image. The dimension of these features is reduced using Principle Component Analysis (PCA). Finally, the obtained features are classified using classifiers. In this work, we have employed Logistic Regression (LR) for classification. The classification results were achieved with the accuracy of 96.92% from the confusion matrix to identify the Alzheimer’s Disease. The proposed framework was then evaluated using performance evaluation metrics like accuracy, sensitivity, F-score, precision and specificity that were arrived from the confusion matrix. Our study demonstrates that the proposed Alzheimer’s disease detection model outperforms other models proposed in the literature.","PeriodicalId":54982,"journal":{"name":"Information Technology and Control","volume":"32 1","pages":"786-800"},"PeriodicalIF":1.1,"publicationDate":"2022-12-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"87052413","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Lion Based Butterfly Optimization with Improved YOLO-v4 for Heart Disease Prediction Using IoMT
Pub Date: 2022-12-12 | DOI: 10.5755/j01.itc.51.4.31323
V. Alamelu, S. Thilagamani
The Internet of Medical Things (IoMT) has been used in healthcare services to gather sensor data for the prediction and diagnosis of cardiac disease, and recent image processing techniques require a clear, focused solution for disease prediction. The primary goal of the proposed method is to use health information and medical images to classify data and forecast cardiac disease. It consists of two phases, one for categorizing the data and one for prediction: if the first phase already detects a heart problem, the second phase is not needed. The first phase categorizes data collected from healthcare sensors attached to the patient's body; the second evaluates echocardiography images to predict heart disease. A hybrid Lion-based Butterfly Optimization Algorithm (L-BOA) is used for classifying the sensor data. An existing method uses a hybrid Faster R-CNN with SE-ResNet-101 for classification; Faster R-CNN uses region proposals to locate objects in the image. The proposed method instead uses an improved YOLO-v4, which increases the semantic information available for small objects. The improved YOLO-v4 with CSPDarkNet53 is used for feature extraction and classification of the echocardiogram images. Both categorization approaches were applied, and their results were integrated and validated for forecasting heart disease. The LBO-YOLO-v4 process detected regular sensor data with 97.25% accuracy and irregular sensor data with 98.87% accuracy, and the proposed improved YOLO-v4 with CSPDarkNet53 gives better classification of echocardiogram images.
{"title":"Lion Based Butterfly Optimization with Improved YOLO-v4 for Heart Disease Prediction Using IoMT","authors":"V. Alamelu, S. Thilagamani","doi":"10.5755/j01.itc.51.4.31323","DOIUrl":"https://doi.org/10.5755/j01.itc.51.4.31323","url":null,"abstract":"The Internet of Medical Things (IoMT) has subsequently been used in healthcare services to gather sensor data for the prediction and diagnosis of cardiac disease. Recently image processing techniques require a clear focused solution to predict diseases. The primary goal of the proposed method is to use health information and medical pictures for classifying the data and forecasting cardiac disease. It consists of two phases for categorizing the data and prediction. If the previous phase's results are practical heart problems, then there is no need for phase 2 to predict. The first phase categorized data collected from healthcare sensors attached to the patient's body. The second stage evaluated the echocardiography images for the prediction of heart disease. A Hybrid Lion-based Butterfly Optimization Algorithm (L-BOA) is used for classifying the sensor data. In the existing method, Hybrid Faster R-CNN with SE-Rest-Net-101 is used for classification. Faster R-CNN uses areas to locate the item in the picture. The proposed method uses Improved YOLO-v4. It increases the semantic knowledge of little things. An Improved YOLO-v4 with CSPDarkNet53 is used for feature extraction and classifying the echo-cardiogram pictures. Both categorization approaches were used, and the results were integrated and confirmed in the ability to forecast heart disease. The LBO-YOLO-v4 process detected regular sensor data with 97.25% accuracy and irregular sensor data with 98.87% accuracy. The proposed improved YOLO-v4 with the CSPDarkNet53 method gives better classification among echo-cardiogram pictures.","PeriodicalId":54982,"journal":{"name":"Information Technology and Control","volume":"37 6 1","pages":"692-703"},"PeriodicalIF":1.1,"publicationDate":"2022-12-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"79285793","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Browser Selection for Android Smartphones Using Novel Fuzzy Hybrid Multi Criteria Decision Making Technique
Pub Date: 2022-09-23 | DOI: 10.5755/j01.itc.51.3.30525
Ramathilagam Arunagiri, P. Pandian, Valarmathi Krishnasamy, R. Sivaprakasam
The IT and telecommunication sector has grown massively over the past few decades. Mobile phones, initially developed for making calls, have become essential items that are no longer restricted to calling: they have subsumed most gadgets such as computers and cameras, and users regularly encounter a large number of enhanced, better-quality built-in features. A variety of mobile phones with different shapes and sizes are manufactured across a wide range of budgets. This is the key motivation behind the exponential growth in the number of users and the arrival of new manufacturers in the field. Alongside this growth, mobile application software providers have also multiplied rapidly. Apart from calling, many consumers use smartphones for browsing the internet, which leaves users with a dilemma: selecting the browser that best fulfills their requirements. With this aim, this paper attempts the evaluation and selection of a better browser. To achieve this, a hybrid Multi Criteria Decision Making (MCDM) approach is proposed that combines the COPRAS (Complex Proportional Assessment of alternatives) technique with the Fuzzy Analytical Hierarchy Process (FAHP).
{"title":"Browser Selection for Android Smartphones Using Novel Fuzzy Hybrid Multi Criteria Decision Making Technique","authors":"Ramathilagam Arunagiri, P. Pandian, Valarmathi Krishnasamy, R. Sivaprakasam","doi":"10.5755/j01.itc.51.3.30525","DOIUrl":"https://doi.org/10.5755/j01.itc.51.3.30525","url":null,"abstract":"IT and Telecommunication sector has grown massively over the past few decades. Mobile phones that were initially developed for making calls and now become an essential item and are just not restricted to calling. They have dominated most of the gadgets like computers, cameras etc. Regularly people come across an extensive number of enhanced and better-quality features being inbuilt with them. A variety of mobile phones with different shapes and sizes are manufactured within a wide range of budgets. This is the key motivation behind an exponential growth in the number of users and the arrival of new manufacturers in the field. Along with this growth, there is a fast growth of mobile application software providers also. Apart from calling, many consumers use smartphones for browsing the internet. This puts users into a dilemma to select a better browser for their smartphone to fulfill their requirements. With this aim, an attempt is made in this paper for the evaluation and selection of a better browser. To achieve this, a hybrid Multi Criteria Decision Making (MCDM) approach is proposed by combining COPRAS (Complex Proportional Assessment of alternatives) technique and Fuzzy Analytical Hierarchy Process (FAHP).","PeriodicalId":54982,"journal":{"name":"Information Technology and Control","volume":"22 1","pages":"467-484"},"PeriodicalIF":1.1,"publicationDate":"2022-09-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"87097241","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Design and Implementation of a Self-Learner Smart Home System Using Machine Learning Algorithms
Pub Date: 2022-09-23 | DOI: 10.5755/j01.itc.51.3.31273
C. Güven, M. Aci
Smart home systems integrate technology and services through the network for a better quality of life; they perform daily housework and activities more easily without user intervention or with remote control by the user. In this study, a machine learning-based smart home system has been developed. The aim is to design a system that continuously improves itself and learns, instead of an ordinary smart home system that can only be remotely controlled. The developed machine learning model predicts the users' routine activities in the home and performs some operations autonomously on their behalf. The dataset used in the study consists of real data received from the sensors during daily use. Naive Bayes (NB) algorithms (Gaussian NB, Bernoulli NB, Multinomial NB, and Complement NB), ensemble methods (Random Forest, Gradient Tree Boosting, and eXtreme Gradient Boosting), linear models (Logistic Regression, Stochastic Gradient Descent, and Passive-Aggressive Classification), and other machine learning algorithms (Decision Tree, Support Vector Machine, K-Nearest Neighbors, Gaussian Process Classifier (GPC), and Multilayer Perceptron) were utilized. The performance of the proposed smart home system was evaluated using several metrics; the best results were obtained with the GPC algorithm (precision: 0.97, recall: 0.98, F1-score: 0.97, accuracy: 0.97).
{"title":"Design and Implementation of a Self-Learner Smart Home System Using Machine Learning Algorithms","authors":"C. Güven, M. Aci","doi":"10.5755/j01.itc.51.3.31273","DOIUrl":"https://doi.org/10.5755/j01.itc.51.3.31273","url":null,"abstract":"Smart home systems are the integration of technology and services through the network for a better quality of life. Smart homes perform daily housework and activities more easily without user intervention or with remote control of the user. In this study, a machine learning-based smart home system has been developed. The aim of the study is to design a system that can continuously improve itself and learn instead of an ordinary smart home system that can be remotely controlled. The developed machine learning model predicts the routine activities of the users in the home and performs some operations for the user autonomously. The dataset used in the study consists of real data received from the sensors as a result of the daily use. Naive Bayes (NB) (i.e. Gaussian NB, Bernoulli NB, Multinomial NB, and Complement NB), ensemble (i.e. Random Forest, Gradient Tree Boosting and eXtreme Gradient Boosting), linear (i.e. Logistic Regression, Stochastic Gradient Descent, and Passive-Aggressive Classification), and other (i.e. Decision Tree, Support Vector Machine, K Nearest Neighbor, Gaussian Process Classifier (GPC), Multilayer Perceptron) machine learning-based algorithms were utilized. The performance of the proposed smart home system was evaluated using several performance metrics: The best results were obtained from the GPC algorithm (i.e. Precision: 0.97, Recall: 0.98, F1-score: 0.97, Accuracy: 0.97).","PeriodicalId":54982,"journal":{"name":"Information Technology and Control","volume":"24 1","pages":"545-562"},"PeriodicalIF":1.1,"publicationDate":"2022-09-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"85718173","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
C-DRM: Coalesced P-TOPSIS Entropy Technique addressing Uncertainty in Cloud Service Selection
Pub Date: 2022-09-23 | DOI: 10.5755/j01.itc.51.3.30881
K. Nivitha, Pabitha Parameshwaran
Cloud computing has diversified its services exponentially and lured a large number of consumers towards the technology. Satisfying user requirements has therefore become a highly challenging problem: most existing systems either search a large space or return inappropriate services. Hence, there is a need for reliable and space-efficient service selection and ranking in the cloud environment. The proposed work introduces a novel pruning method and a Dual Ranking Method (DRM) to rank services from among n services, conserving space while reliably satisfying user requirements. DRM focuses on the uncertainty of user preferences along with their priorities, converting them to weights with the Jensen-Shannon (JS) entropy function. Services are ranked through the Priority-Technique for Order of Preference by Similarity to Ideal Solution (P-TOPSIS), and space complexity is reduced by the novel utility pruning method. The performance of the proposed Clustering - Dual Ranking Method (C-DRM) is estimated in terms of accuracy, Closeness Index (CI), and space complexity, and is validated through a case study in which it outperforms existing approaches.
{"title":"C-DRM: Coalesced P-TOPSIS Entropy Technique addressing Uncertainty in Cloud Service Selection","authors":"K. Nivitha, Pabitha Parameshwaran","doi":"10.5755/j01.itc.51.3.30881","DOIUrl":"https://doi.org/10.5755/j01.itc.51.3.30881","url":null,"abstract":"Cloud Computing is diversified with its services exponentially and lured large number of consumers towards the technology indefinitely. It has become a highly challenging problem to satiate the user requirements. Most of the existing system ingest large search space or provide inappropriate service; hence, there is a need for the reliable and space competent service selection/ranking in the cloud environment. The proposed work introduces a novel pruning method and Dual Ranking Method (DRM) to rank the services from n services in terms of space conserving and providing reliable service quenching the user requirements as well. Dual Ranking Method (DRM) is proposed focusing on the uncertainty of user preferences along with their priorities; converting it to weights with the use of Jensen-Shannon (JS) Entropy Function. The ranking of service is employed through Priority-Technique for Order of Preference by Similarity to Ideal Solution (P-TOPSIS) and space complexity is reduced by novel Utility Pruning method. The performance of the proposed work Clustering – Dual Ranking Method (C-DRM) is estimated in terms of accuracy, Closeness Index (CI) and space complexity have been validated through case study where results outperforms the existing approaches","PeriodicalId":54982,"journal":{"name":"Information Technology and Control","volume":"38 1","pages":"592-605"},"PeriodicalIF":1.1,"publicationDate":"2022-09-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"87345675","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Variational Mode Decomposition-based Synchronous Multi-Frequency Electrical Impedance Tomography
Pub Date: 2022-09-23 | DOI: 10.5755/j01.itc.51.3.30014
Qing-Xin Pan, Yang Li, Nan Wang, Peng-fei Zhao, Lan Huang, Zhongyi Wang
Electrical Impedance Tomography (EIT) enables non-invasive, low-cost, safe, and fast functional imaging with a simple system structure, and can map the distribution and changes of the root zone. Multi-frequency EIT addresses the limitation that single-frequency EIT carries impedance information only at a single excitation frequency. However, acquiring multi-frequency electrical impedance tomograms simultaneously remains challenging. To address this problem, a mixed signal superimposing multiple frequencies is injected into the object; the measured mixed voltage signals must then be separated so that electrical impedance information at the different frequencies can be obtained quickly and simultaneously. Since the measurement signal is a multi-frequency signal, the quality of its decomposition directly affects imaging accuracy. To obtain more accurate data, this article uses the variational mode decomposition (VMD) method to decompose the measured multi-frequency signal. Accurate amplitude and phase information can thus be obtained simultaneously under multi-frequency excitation, and these data can be used to reconstruct the electrical impedance distribution. The results show that the proposed method achieves the expected imaging effect. It is concluded that processing multi-frequency signal data with the variational mode decomposition method is more accurate, yields a better imaging effect, and can be applied to multi-frequency electrical impedance imaging in practice.
{"title":"Variational Mode Decomposition-based Synchronous Multi-Frequency Electrical Impedance Tomography","authors":"Qing-Xin Pan, Yang Li, Nan Wang, Peng-fei Zhao, Lan Huang, Zhongyi Wang","doi":"10.5755/j01.itc.51.3.30014","DOIUrl":"https://doi.org/10.5755/j01.itc.51.3.30014","url":null,"abstract":"Electrical Impedance Tomography (EIT) can perform non-invasive, low-cost, safe, fast, and simple system structure and functional imaging to map the distribution and changes of root zone. Multi frequency EIT solves the problem that single-frequency EIT can only carry more impedance information than a given single excitation frequency. It still remains challenges to simultaneously obtain multi-frequency electrical impedance tomography. To address the problem, a mixed signal superimposed by multiple frequencies is injected to the object. Essentially, separating the measured mixed voltage signals, which can be used to obtain electrical impedance information at different frequencies at the same time quickly. Since the measurement signal is a multi-frequency signal, the effect of decomposing the multi-frequency signal directly affects the accuracy of imaging. In order to obtain more accurate data, this article used the variational mode decomposition (VMD) method to decompose the measured multi-frequency signal. Accurate amplitude and phase information could be obtained simultaneously at the same time in multi-frequency excitation, and these data could be used to reconstruct electrical impedance distribution The results showed that the proposed method can achieve the expected imaging effect. It was concluded that using the variational modal decomposition method to process the data of multi-frequency signals is more accurate and the imaging effect is better, and it can be applied to multi-frequency electrical impedance imaging in practice.","PeriodicalId":54982,"journal":{"name":"Information Technology and Control","volume":"42 1","pages":"446-466"},"PeriodicalIF":1.1,"publicationDate":"2022-09-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"77512573","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}