Pub Date: 2024-07-01 | DOI: 10.15837/ijccc.2024.4.6632
Title: IoT Embedded Smart Monitoring System with Edge Machine Learning for Beehive Management
Mihai Doinea, Ioana Trandafir, Cristian Toma, Marius Popa, Alin Zamfiroiu
The need for an automated support system that helps beekeepers maintain and improve beehive populations has always been a pressing aspect of their work, given the importance of a healthy bee population. This paper presents a proof of concept (PoC) solution based on Internet of Things technology: a smart monitoring system that uses machine learning and combines the power of edge computing with communication and control. Beehive maintenance is improved and an optimal state of health is maintained through Deep Learning inference, triggered at the edge on devices that process the hive's sounds. All this is achieved by using IoT sensors to collect data, extracting the important features, and running a TinyML network for decision support. Performing machine learning inference on low-power microcontroller devices leads to significant improvements in the autonomy of beekeeping solutions.
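The abstract above describes a pipeline of hive-audio feature extraction followed by TinyML inference on a microcontroller, but does not list the exact features or network layout. The sketch below is a minimal, hypothetical stand-in for that kind of edge audio pipeline: log band energies from one audio frame fed to a tiny dense network with randomly initialized (untrained) weights.

```python
import numpy as np

def band_energies(audio_frame, n_bands=16):
    """Split the magnitude spectrum of one audio frame into n_bands
    equal-width bands and return their log energies (hypothetical
    stand-in for the paper's feature extraction)."""
    spectrum = np.abs(np.fft.rfft(audio_frame * np.hanning(len(audio_frame))))
    bands = np.array_split(spectrum, n_bands)
    return np.log1p(np.array([np.sum(b ** 2) for b in bands]))

def tiny_mlp_infer(features, w1, b1, w2, b2):
    """Two-layer dense network of a size that fits a microcontroller;
    returns class probabilities (e.g. 'normal hive' vs 'anomalous noise')."""
    hidden = np.maximum(0.0, features @ w1 + b1)          # ReLU
    logits = hidden @ w2 + b2
    exp = np.exp(logits - logits.max())
    return exp / exp.sum()

# Demo with synthetic audio and untrained (random) weights.
rng = np.random.default_rng(0)
frame = rng.normal(size=1024)                 # one 64 ms frame at 16 kHz
feats = band_energies(frame)
w1, b1 = rng.normal(scale=0.1, size=(16, 8)), np.zeros(8)
w2, b2 = rng.normal(scale=0.1, size=(8, 2)), np.zeros(2)
print(tiny_mlp_infer(feats, w1, b1, w2, b2))  # two class probabilities
```

In a real deployment the weights would come from a model trained offline and exported to the microcontroller; the numpy code only illustrates how little arithmetic the on-device inference step needs.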
Pub Date: 2024-07-01 | DOI: 10.15837/ijccc.2024.4.6526
Title: DICOMIST: An methodology for Performing Distributed Computing in Heterogeneous ad hoc Networks
Alejandro Velazquez-Mena, Hector Benitez-Perez, Rita C. Rodríguez-Martínez, Ricardo F. Villarreal-Martínez
The Internet of Things (IoT) has emerged as a cornerstone technology, transforming how we interact with our surroundings. Despite their widespread adoption, IoT devices encounter challenges related to processing capabilities and connectivity, frequently necessitating the delegation of tasks to remote cloud servers. This offloading, essential for enhancing user experience, poses challenges, particularly for latency-sensitive applications. Edge-centric paradigms like fog and mist computing have emerged to address these challenges, bringing computational resources closer to end-users. However, efficiently managing task offloading in dynamic IoT environments remains a complex issue. This paper introduces DIstributed COmputing for MIST (dicomist), a methodology designed to facilitate task offloading in IoT settings. Dicomist utilizes wireless mesh networks to organize mobile nodes, employing clustering and classification techniques. Tasks are treated as consensus problems, enabling distributed computation among selected nodes. Real-world experiments demonstrate dicomist’s effectiveness, underscoring its potential to enhance task offloading in IoT environments.
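Dicomist treats offloaded tasks as consensus problems among clustered mesh nodes; the paper's exact formulation is not reproduced here. As a minimal sketch of the building block such schemes typically rely on, the code below runs the classic distributed average-consensus iteration over a hypothetical mesh adjacency matrix (both the topology and the step size are assumptions).

```python
import numpy as np

def average_consensus(values, adjacency, step=0.2, iters=50):
    """Each node repeatedly nudges its local value toward its mesh
    neighbours' values; all nodes converge to the network average."""
    x = np.asarray(values, dtype=float).copy()
    for _ in range(iters):
        # consensus update: x_i <- x_i + step * sum_j a_ij * (x_j - x_i)
        x = x + step * (adjacency @ x - adjacency.sum(axis=1) * x)
    return x

# Hypothetical 4-node wireless mesh (ring topology) and local measurements.
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
local = [2.0, 8.0, 4.0, 6.0]
print(average_consensus(local, A))   # all entries approach 5.0
```

The appeal for task offloading is that each node only exchanges values with its direct mesh neighbours, yet the cluster as a whole agrees on a common result without any central coordinator.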
Pub Date: 2024-07-01 | DOI: 10.15837/ijccc.2024.4.6633
Title: A Multi-attribute Decision-making Method for Interval Rough Number Information System Considering Distribution Types
Hongmei Liu, Shizhou Weng
This paper proposes a novel multi-attribute decision-making (MADM) method for interval rough numbers (IRNs) considering different distribution types, namely uniform, exponential, and normal distributions. Upper and lower approximate interval dominance degrees are defined and aggregated using dynamic weights to obtain pairwise comparisons of IRNs. The properties of dominance are verified, and an attribute weight determination method based on the dominance balance degree is introduced. The proposed MADM method is data-driven and does not rely on the subjective preferences of decision-makers. Case analysis demonstrates the effectiveness and rationality of the proposed method, revealing that the distribution type of IRNs significantly impacts decision results, potentially leading to reversed ranking outcomes. The proposed method offers a comprehensive framework for handling MADM problems with IRNs under different distributions.
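The paper defines its own upper and lower approximate interval dominance degrees and aggregates them with dynamic weights. To illustrate only the general idea of comparing intervals probabilistically, the sketch below uses a standard possibility-degree formula for intervals under a uniform-distribution assumption and a fixed mixing weight; neither the formula, the weight, nor the example IRNs are taken from the paper.

```python
def possibility_degree(a, b):
    """P(A >= B) for intervals A=[aL,aU], B=[bL,bU], assuming values are
    uniformly distributed inside each interval (point intervals are
    handled as crisp comparisons)."""
    (a_lo, a_hi), (b_lo, b_hi) = a, b
    span = (a_hi - a_lo) + (b_hi - b_lo)
    if span == 0:                       # two crisp numbers
        return 1.0 if a_lo >= b_lo else 0.0
    return min(max((a_hi - b_lo) / span, 0.0), 1.0)

# An interval rough number is written here as a pair of intervals
# (lower approximation, upper approximation); a combined dominance is
# then a weighted mix of the two pairwise comparisons.
def irn_dominance(irn_a, irn_b, w_lower=0.5):
    p_lower = possibility_degree(irn_a[0], irn_b[0])
    p_upper = possibility_degree(irn_a[1], irn_b[1])
    return w_lower * p_lower + (1.0 - w_lower) * p_upper

A = ((2.0, 4.0), (1.0, 6.0))   # hypothetical IRN: lower approx, upper approx
B = ((3.0, 5.0), (2.0, 7.0))
print(irn_dominance(A, B))      # 0.325
```

The paper's contribution is precisely that this comparison changes when the values are assumed exponential or normal rather than uniform, which is why the distribution type can reverse rankings.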
Pub Date: 2024-07-01 | DOI: 10.15837/ijccc.2024.4.6608
Title: Fault Diagnosis and Localization of Transmission Lines Based on R-Net Algorithm Optimized by Feature Pyramid Network
Chunmei Zhang, Xingque Xu, Silin Liu, Yongjian Li, Jiefeng Jiang
Timely fault diagnosis and localization of transmission lines is crucial for ensuring the reliable operation of increasingly complex power systems. This study proposes an optimized R-Net algorithm based on a feature pyramid network (FPN) and densely connected convolutional network (D-Net) for transmission line fault diagnosis and localization. The R-Net network is enhanced by reshaping the anchor points using an improved K-means algorithm and incorporating an FPN for multi-scale feature extraction. The backbone network is further optimized using D-Net to strengthen inter-layer connections and improve feature reuse. Experimental results demonstrate that the optimized R-Net achieves an overall average accuracy of 0.64, outperforming the original network by 1.30%. The accuracy improvement is particularly significant for ground wire defects (2.40%). The D-Net-based R-Net, despite having fewer parameters, maintains high accuracy (0.6502). Compared to other object detection algorithms, such as YOLO-v3 and Faster R-CNN, the optimized R-Net exhibits superior performance in terms of mean average precision (15.58% and 2.45% higher, respectively) and parameter efficiency (17M vs. 38M and 81M). Considering both performance and speed, the optimized R-Net achieves a processing rate of 10.5 frames per second. This study provides an efficient and accurate tool for transmission line fault diagnosis and localization, with significant practical implications for power system operation and maintenance.
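The abstract mentions reshaping anchor boxes with an improved K-means algorithm. The sketch below shows the standard IoU-distance K-means anchor clustering commonly used for this step in detectors such as YOLO, not the paper's specific improvement; the ground-truth box sizes are synthetic.

```python
import numpy as np

def iou_wh(boxes, anchors):
    """IoU between boxes and anchors compared by width/height only
    (both are assumed to share the same top-left corner)."""
    inter = (np.minimum(boxes[:, None, 0], anchors[None, :, 0])
             * np.minimum(boxes[:, None, 1], anchors[None, :, 1]))
    area_b = boxes[:, 0] * boxes[:, 1]
    area_a = anchors[:, 0] * anchors[:, 1]
    return inter / (area_b[:, None] + area_a[None, :] - inter)

def kmeans_anchors(boxes, k=5, iters=100, seed=0):
    """Cluster (width, height) pairs using the distance d = 1 - IoU."""
    rng = np.random.default_rng(seed)
    anchors = boxes[rng.choice(len(boxes), k, replace=False)]
    for _ in range(iters):
        assign = np.argmax(iou_wh(boxes, anchors), axis=1)   # nearest anchor
        new = np.array([boxes[assign == i].mean(axis=0) if np.any(assign == i)
                        else anchors[i] for i in range(k)])
        if np.allclose(new, anchors):
            break
        anchors = new
    return anchors

# Synthetic ground-truth box sizes (width, height) in pixels.
rng = np.random.default_rng(1)
boxes = np.abs(rng.normal(loc=[80, 40], scale=[30, 15], size=(500, 2))) + 1
print(kmeans_anchors(boxes, k=3))
```

Anchors clustered this way match the size statistics of the defect boxes in the training data, which is what makes the detector's regression targets easier to learn.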
Pub Date: 2024-07-01 | DOI: 10.15837/ijccc.2024.4.6611
Title: Holiday Peak Load Forecasting Using Grammatical Evolution-Based Fuzzy Regression Approach
Guo Li, Xiang Hu, Shuyi Chen, Kaixuan Chang, Peiqi Li, Yujue Wang
Peak load forecasting plays an important role in electric utilities. However, the daily peak load forecasting problem, especially for holidays, is fuzzy and highly nonlinear. In order to address the nonlinearity and fuzziness of holiday load behaviors, a grammatical evolution-based fuzzy regression approach is proposed in this paper. The proposed hybrid approach is based on the theorem that fuzzy polynomial regression can model all fuzzy functions. It employs the rules of grammatical evolution to generate fuzzy nonlinear structures in polynomial form. Then, a two-stage fuzzy regression approach is used to determine the coefficients and calculate the fitness of the fuzzy functions. An artificial bee colony algorithm is used as the evolution system to update the elements of the grammatical evolution system. The process is repeated until a fuzzy model that best fits the load data is found. After that, the developed fuzzy nonlinear model is applied to forecast holiday peak load. Considering that different holidays possess different load patterns, a separate forecasting model is built for each holiday. Test results on real load data show that an average absolute percentage error of less than 2% can be achieved, significantly outperforming the existing methods included in the comparison.
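Grammatical evolution maps a genome of integer codons, via a BNF grammar, onto an expression; in this paper that expression is a fuzzy polynomial whose coefficients a two-stage fuzzy regression then fits. The sketch below shows only the generic codon-to-expression mapping step over a toy one-variable polynomial grammar; the grammar and genome are illustrative, not the paper's.

```python
# Toy BNF grammar for polynomial structures in one variable x.
GRAMMAR = {
    "<expr>": [["<expr>", "+", "<term>"], ["<term>"]],
    "<term>": [["x"], ["x", "*", "x"], ["1"]],
}

def ge_map(genome, start="<expr>", max_wraps=2):
    """Standard grammatical-evolution mapping: each codon, taken modulo the
    number of productions of the leftmost non-terminal, picks the rule used
    to expand it. Returns the derived expression as a string."""
    symbols, i, wraps = [start], 0, 0
    while any(s in GRAMMAR for s in symbols):
        if i >= len(genome):                  # wrap the genome if we run out
            i, wraps = 0, wraps + 1
            if wraps > max_wraps:
                raise ValueError("mapping did not terminate")
        idx = next(k for k, s in enumerate(symbols) if s in GRAMMAR)
        rules = GRAMMAR[symbols[idx]]
        choice = rules[genome[i] % len(rules)]
        symbols[idx:idx + 1] = choice         # expand the non-terminal in place
        i += 1
    return " ".join(symbols)

print(ge_map([0, 1, 1, 0, 0, 2]))   # -> "x * x + x"
```

In the paper's setup an artificial bee colony algorithm evolves the genomes, and each mapped polynomial structure is scored by how well the two-stage fuzzy regression fits the holiday load data.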
Pub Date: 2024-07-01 | DOI: 10.15837/ijccc.2024.4.6607
Title: Deep Learning-based Intelligent Fault Diagnosis for Power Distribution Networks
Jingzhi Liu, Quanlei Qu, Hongyi Yang, Jianming Zhang, Zhidong Liu
Power distribution networks with distributed generation (DG) face challenges in fault diagnosis due to the high uncertainty, randomness, and complexity introduced by DG integration. This study proposes a two-stage approach for fault location and identification in distribution networks with DG. First, an improved bald eagle search algorithm combined with the Dijkstra algorithm (D-IBES) is developed for fault location. Second, a fusion deep residual shrinkage network (FDRSN) is integrated with IBES and support vector machine (SVM) to form the FDRSN-IBS-SVM model for fault identification. Experimental results showed that the D-IBES algorithm achieved a CPU loss rate of 0.54% and an average time consumption of 1.70 seconds in complex scenarios, outperforming the original IBES algorithm. The FDRSN-IBS-SVM model attained high fault identification accuracy (99.05% and 98.54%) under different DG output power levels and maintained robustness (97.89% accuracy and 97.54% recall) under 5% Gaussian white noise. The proposed approach demonstrates superior performance compared to existing methods and provides a promising solution for intelligent fault diagnosis in modern distribution networks.
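D-IBES pairs an improved bald eagle search with the Dijkstra algorithm for fault location; only the Dijkstra building block is standard enough to sketch without the paper. Below is a plain shortest-path routine over a hypothetical feeder graph whose edge weights could stand for line section lengths; the topology and numbers are made up.

```python
import heapq

def dijkstra(graph, source):
    """Shortest distances from `source` in a weighted graph given as
    {node: [(neighbour, weight), ...]}."""
    dist = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue                      # stale queue entry
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

# Hypothetical radial feeder: nodes are buses, weights are section lengths (km).
feeder = {
    "S":  [("B1", 1.2)],
    "B1": [("S", 1.2), ("B2", 0.8), ("B3", 1.5)],
    "B2": [("B1", 0.8), ("B4", 0.6)],
    "B3": [("B1", 1.5)],
    "B4": [("B2", 0.6)],
}
print(dijkstra(feeder, "S"))
# {'S': 0.0, 'B1': 1.2, 'B2': 2.0, 'B3': 2.7, 'B4': 2.6}
```

In a two-stage scheme like the one described, a search metaheuristic proposes candidate fault sections while graph distances of this kind constrain and score the candidates.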
Pub Date: 2024-07-01 | DOI: 10.15837/ijccc.2024.4.6606
Title: Resource manager for heterogeneous processors
Jose Alberto Aparicio, Héctor Benítez-Pérez, Luis Agustin Alvarez-Icaza Longoria, Luis Mendoza Rodríguez
A resource manager (RM) that distributes task processing time among heterogeneous processors is presented. Its design is based on an analysis of the possible ways of attending to pending tasks and on prior knowledge about them. A model based on difference equations quantifies the resources allocated to each task and chooses the best processor on which the task can be executed. The scheme adapts to dynamic changes in the requirements or in the number of tasks.
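The abstract states the model only at the level of "difference equations that quantify the resource allocation of each task". The update rule below is therefore an invented illustration of that style of model, not the authors' equations: each task's allocation follows a first-order difference equation toward its demand, and each task is placed on the processor whose normalized load is currently lowest. All task and processor parameters are hypothetical.

```python
def assign_and_update(tasks, processors, alpha=0.5, steps=10):
    """Toy difference-equation resource manager.

    tasks:       {task: demand}        (processing-time units per period)
    processors:  {processor: speed}    (relative speed factors)
    Allocation follows x[k+1] = x[k] + alpha * (demand - x[k]); each task is
    placed on the processor whose accumulated load, scaled by its speed,
    is currently lowest.
    """
    alloc = {t: 0.0 for t in tasks}                  # x_i[0] = 0
    load = {p: 0.0 for p in processors}
    placement = {}
    for t, demand in tasks.items():
        # choose the processor with the smallest normalized load
        best = min(processors, key=lambda p: load[p] / processors[p])
        placement[t] = best
        load[best] += demand / processors[best]
    for _ in range(steps):                           # difference-equation update
        for t, demand in tasks.items():
            alloc[t] += alpha * (demand - alloc[t])
    return placement, alloc

tasks = {"t1": 4.0, "t2": 2.0, "t3": 6.0}            # hypothetical demands
procs = {"cpu_fast": 2.0, "cpu_slow": 1.0}           # hypothetical speeds
print(assign_and_update(tasks, procs))
```

The point of the difference-equation form is that the allocation state can be recomputed every period, so the scheme naturally tracks changes in the set of tasks or in their requirements.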
Pub Date: 2024-07-01 | DOI: 10.15837/ijccc.2024.4.6686
Title: A Collaborative Control Protocol with Artificial Intelligence for Medical Student Work Scheduling
Puwadol Oak Dusadeerungsikul, S. Nof
Effective work scheduling for clinical training is essential for medical education, yet it remains challenging. Creating a clinical training schedule is a difficult task, due to the complexity of curriculum requirements, hospital demands, and student well-being. This study proposes the Collaborative Control Protocol with Artificial Intelligence for Medical Student Work Scheduling (CCP-AI-MWS) to optimize clinical training schedules. The CCP-AI-MWS integrates the Collaborative Requirement Planning principle with Artificial Intelligence (AI). Two experiments have been conducted comparing CCP-AI-MWS with current practice. Results show that the newly developed protocol outperforms the current method. CCP-AI-MWS achieves a more equitable distribution of assignments, better accommodates student preferences, and reduces unnecessary workload, thus mitigating student burnout and improving satisfaction. Moreover, the CCP-AI-MWS exhibits adaptability to unexpected situations and minimizes disruptions to the current schedule. The findings present the potential of CCP-AI-MWS to transform scheduling practices in medical education, offering an efficient solution that could benefit medical schools worldwide.
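CCP-AI-MWS itself is not specified in the abstract, so it cannot be reproduced here. Purely as a point of reference, the sketch below is the kind of naive greedy baseline such a protocol would be compared against: assign each student to their highest-ranked rotation with remaining capacity, falling back to the least-loaded rotation. Student names, rotations, and capacities are made up.

```python
def greedy_schedule(students, rotations, capacity):
    """Assign each student to their highest-ranked rotation that still has
    room; if every preferred rotation is full, fall back to the rotation
    with the most remaining capacity (a crude balancing baseline,
    not the CCP-AI-MWS protocol itself)."""
    remaining = dict(capacity)
    assignment = {}
    for student, prefs in students.items():
        open_prefs = [r for r in prefs if remaining[r] > 0]
        choice = open_prefs[0] if open_prefs else max(rotations, key=lambda r: remaining[r])
        assignment[student] = choice
        remaining[choice] -= 1
    return assignment

# Hypothetical students with ranked rotation preferences.
students = {
    "alice": ["surgery", "pediatrics", "internal"],
    "bob":   ["surgery", "internal", "pediatrics"],
    "carol": ["pediatrics", "surgery", "internal"],
}
rotations = ["surgery", "pediatrics", "internal"]
capacity = {"surgery": 1, "pediatrics": 1, "internal": 2}
print(greedy_schedule(students, rotations, capacity))
# {'alice': 'surgery', 'bob': 'internal', 'carol': 'pediatrics'}
```

A protocol like the one proposed improves on such a baseline by reasoning jointly about curriculum requirements, hospital demand, and workload equity rather than processing students one at a time.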
Pub Date: 2024-07-01 | DOI: 10.15837/ijccc.2024.4.5840
Title: Design and Development of an Efficient Demographic-based Movie Recommender System using Hybrid Machine Learning Techniques
Vishal Paranjape, Neelu Nihalani, Nishchol Mishra
Movie recommender systems are frequently used in academia and industry to provide users with relevant, engaging material based on their rating history. However, most traditional methods suffer from the cold-start problem, i.e. the initial lack of item ratings, and from data sparsity. A hybrid machine learning (ML) technique is proposed for a movie recommendation system. Demographic data is collected from the MovieLens dataset, and attributes are evaluated using the Attribute Analysis module. The Aquila Optimization Algorithm is used to select the best attributes, while a Random Forest classifier is used for classification. Data is clustered using the Fuzzy Probabilistic C-means Clustering Algorithm (FPCCA), and the Correspondence Index Assessment Phase (CIAP) uses the Bhattacharyya Coefficient in Collaborative Filtering (BCCF) for the similarity index calculation. The proposed method achieves lower error than the baseline methods, with an MAE of 0.44 and an RMSE of 0.63.
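The Bhattacharyya coefficient used for the similarity index compares two discrete distributions; applied to collaborative filtering, the two distributions can be the normalized rating histograms of two users (or items). The sketch below computes it for hypothetical 5-star rating histograms; the histogram construction is an assumption, not the paper's exact CIAP procedure.

```python
import numpy as np

def bhattacharyya_coefficient(p, q):
    """BC(p, q) = sum_i sqrt(p_i * q_i) over two discrete distributions;
    equals 1 for identical distributions and 0 for disjoint support."""
    p = np.asarray(p, dtype=float); p = p / p.sum()
    q = np.asarray(q, dtype=float); q = q / q.sum()
    return float(np.sum(np.sqrt(p * q)))

# Hypothetical rating histograms: counts of 1..5 star ratings given by two users.
user_a = [1, 2, 5, 10, 7]
user_b = [0, 1, 6, 12, 6]
print(round(bhattacharyya_coefficient(user_a, user_b), 3))   # ~0.97
```

Because it works on rating distributions rather than on co-rated items alone, this similarity remains usable when two users share only a few items, which is one way to soften the sparsity problem the abstract mentions.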
Pub Date: 2024-07-01 | DOI: 10.15837/ijccc.2024.4.6498
Title: A Data-Driven Assessment Model for Metaverse Maturity
Mincong Tang, Jie Cao, Dalin Zhang, Ionut Pandelica
The rapid development of the metaverse has sparked extensive discussion on how to estimate its development maturity using quantifiable indicators, which can offer an assessment framework for governing the metaverse. Currently, the measurable methods for assessing the maturity of the metaverse are still in the early stages. Data-driven approaches, which depend on the collection, analysis, and interpretation of large volumes of data to guide decisions and actions, are becoming more important. This paper proposes a data-driven approach to assess the maturity of the metaverse based on K-means-AdaBoost. This method automatically updates the indicator weights based on the knowledge acquired from the model, thereby significantly enhancing the accuracy of model predictions. Our approach assesses the maturity of metaverse systems through a thorough analysis of metaverse data and provides strategic guidance for their development.
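The abstract names a K-means-AdaBoost combination without detailing how the two are wired together. One common pattern, shown below on synthetic data, is to append K-means cluster distances to the raw indicators before training an AdaBoost classifier; the wiring, the indicators, and the labels are assumptions, not the paper's model.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-in for metaverse maturity indicators and maturity labels.
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 6))                       # 6 hypothetical indicators
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=0.3, size=300) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Step 1: K-means on the indicators; distances to the cluster centres become
# extra features describing which "maturity profile" a system resembles.
km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(X_tr)
X_tr_aug = np.hstack([X_tr, km.transform(X_tr)])
X_te_aug = np.hstack([X_te, km.transform(X_te)])

# Step 2: AdaBoost on the augmented features predicts the maturity class.
clf = AdaBoostClassifier(n_estimators=100, random_state=0).fit(X_tr_aug, y_tr)
print("held-out accuracy:", round(clf.score(X_te_aug, y_te), 3))
```

In an approach of this kind, the boosting stage effectively re-weights the indicators as it learns, which matches the abstract's claim that indicator weights are updated automatically from the model rather than set by hand.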