Pub Date : 2025-01-04 DOI: 10.1016/j.jii.2025.100776
Ruiling Gao , Wenzhong Zhang , Wenyi Mao , Jinjing Tan , Jin Zhang , Haiyun Huang , Wen'an Tan , Feiyue Huang
With the extensive adoption of cloud and edge computing in intelligent manufacturing systems driven by the Industrial Internet of Things (IIoT) and Artificial Intelligence, enhancing the efficiency of cloud-edge collaboration under constrained communication and computational resources has emerged as a prominent research focus. To solve the joint communication and computing resource allocation problem in intelligent manufacturing systems, we develop GRALB, a model based on Role-Based Collaboration (RBC) in cooperative services that comprehensively manages the offloading of terminal users' tasks between edge nodes and the cloud. First, we jointly model end-to-end latency and energy consumption based on the physical scenario of cloud-edge collaboration. Then, we extend the E-CARGO-based GRA model and propose the GRALB model with load balancing, which formally recasts the original joint communication and computing resource allocation problem as an equivalent cooperative service model, and we prove the convergence of the algorithm. Finally, we design an x-ILP solution to support the verification and integrated application of the proposed model. Simulation results confirm our theoretical analysis and show that the proposed collaborative cloud and edge computing solution significantly improves overall system performance.
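The joint allocation problem described in this abstract is solved in the paper via an x-ILP formulation; as an illustration only, the toy sketch below brute-forces the edge-versus-cloud assignment that minimizes a weighted latency-plus-energy cost under an edge capacity limit. All task parameters, the capacity constraint, and the weighting are hypothetical, and this is not the GRALB algorithm itself.

```python
from itertools import product

def plan_offloading(tasks, edge_capacity, w_latency=0.5, w_energy=0.5):
    """Brute-force joint offloading: assign each task to 'edge' or 'cloud',
    minimizing a weighted sum of latency and energy, subject to the total
    edge CPU demand not exceeding edge_capacity. (Toy model, not GRALB.)"""
    best_cost, best_plan = float("inf"), None
    for plan in product(("edge", "cloud"), repeat=len(tasks)):
        edge_load = sum(t["cpu"] for t, p in zip(tasks, plan) if p == "edge")
        if edge_load > edge_capacity:
            continue  # infeasible: edge node overloaded
        cost = sum(w_latency * t["latency"][p] + w_energy * t["energy"][p]
                   for t, p in zip(tasks, plan))
        if cost < best_cost:
            best_cost, best_plan = cost, plan
    return best_plan, best_cost
```

An exhaustive search is exponential in the number of tasks; the point of the ILP (and of GRALB's cooperative-service reformulation) is to make this decision tractable at scale.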
Title: Method towards collaborative cloud and edge computing via RBC for joint communication and computation resource allocation. Journal of Industrial Information Integration, vol. 44, Article 100776.
Pub Date : 2025-01-04 DOI: 10.1016/j.jii.2025.100780
Wei Fang , Lixi Chen , Lei Han , Ji Ding
Since the inception of Augmented Reality (AR) nearly three decades ago, numerous studies have demonstrated its potential to provide intuitive instructions for manual tasks in the manufacturing sector, including operations such as manual assembly, which can then be executed with high efficiency and a focus on error avoidance. Despite this, to the best of our knowledge, there has been no comprehensive review of cognitive AR assembly methods from a holistic perspective, covering aspects such as context awareness, visual instructions, environment interaction, human-factor considerations, and the impact on real-world AR assembly deployment. These factors are particularly relevant to current human-centric manufacturing practices in Industry 5.0. Since 2012, the release of Google Glass and advancements in artificial intelligence (AI) have significantly expanded actual AR deployments; consequently, this review takes 2012 as the starting point for the literature collection. The objective of this article is to provide an overview of the context-aware cognitive AR (CA-CAR) assembly works published between 2012 and 2023. We aim to identify and classify the necessary context modules for CA-CAR assembly and analyze potential technical barriers to their shop-floor adaptation. This work offers both a historical perspective and a comprehensive map of the current research landscape surrounding the development of CA-CAR assembly applications. Furthermore, we discuss recent research trends and open problems in the field of CA-CAR assembly, along with potential future research directions.
Title: Context-aware cognitive augmented reality assembly: Past, present, and future. Journal of Industrial Information Integration, vol. 44, Article 100780.
Pub Date : 2025-01-01 DOI: 10.1016/j.jii.2024.100747
Fouad Khalifa , Mohamed Marzouk
The building sector remains a major contributor to increasing energy consumption and emissions. Meanwhile, the energy system is becoming more complex due to the transition to clean energy sources. Current tools and policies struggle to manage this complexity, as the existing infrastructure was not designed for such large, dynamic, distributed energy resources. This creates an urgent need to adopt emerging technologies for enhancing building energy management systems. The objective of this research is to develop a framework that integrates Blockchain and Digital Twin technologies to provide an efficient and trusted energy management platform that supports smart city communities and contributes to the progress of the UN Sustainable Development Goals (SDGs), specifically SDG11 and SDG13. The proposed framework comprises four main elements: a Blockchain platform, a Digital Twin platform, Application Programming Interfaces (APIs), and a Building Energy Model. The Blockchain platform automates energy billing by utilizing digital currency and smart contracts with pre-set pricing tiers and feed-in tariffs. The Digital Twin platform provides interactive communication and visualization with physical assets. The APIs enable seamless interconnectivity between the two platforms. The Building Energy Model acts as a prediction tool, and its simulation results are fed to the Digital Twin platform to alert system participants when actual consumption deviates from optimum values. The viability of the proposed framework is demonstrated using a case study of a residential apartment.
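As an illustration of the tiered billing logic such a Blockchain platform would automate, the sketch below computes a bill from pre-set pricing tiers plus a feed-in tariff credit. The tier boundaries and rates are invented, and real smart-contract code would run on-chain rather than in Python.

```python
def energy_bill(kwh_consumed, kwh_exported, tiers, feed_in_rate):
    """Compute a net bill from pricing tiers plus a feed-in tariff credit,
    mimicking smart-contract billing logic. `tiers` is a list of
    (block_kwh, price_per_kwh); give the last block float('inf') so it
    absorbs the remainder. (Illustrative only; values are hypothetical.)"""
    charge, remaining = 0.0, kwh_consumed
    for block_kwh, price in tiers:
        used = min(remaining, block_kwh)   # kWh billed at this tier's rate
        charge += used * price
        remaining -= used
        if remaining <= 0:
            break
    credit = kwh_exported * feed_in_rate   # export credit at feed-in tariff
    return round(charge - credit, 2)
```

For example, with tiers of 100 kWh at 0.10, 200 kWh at 0.15, and the remainder at 0.25, consuming 350 kWh and exporting 40 kWh at a 0.08 feed-in rate yields a net bill of 49.30.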
Title: Integrated blockchain and Digital Twin framework for sustainable building energy management. Journal of Industrial Information Integration, vol. 43, Article 100747.
Pub Date : 2025-01-01 DOI: 10.1016/j.jii.2024.100745
Y.P. Tsang , C.H. Wu , W.H. Ip , K.L. Yung
Recent advances in Industry 4.0 technologies drive robotic objects' decentralisation and autonomous intelligence, raising emerging space security concerns, specifically invasion detection. Existing physical detection methods, such as vision-based and radar-based techniques, are ineffective in detecting small-scale objects moving at low speeds. It is therefore worth leveraging the power of artificial intelligence to discover invasion patterns through space data analytics. Additionally, fuzzy modelling is needed for invasion detection to enhance the capability of handling data uncertainty and adaptability to evolving invasion patterns. This study proposes a Blockchain-Enabled Federated Fuzzy Invasion Detection System (BFFIDS) to address these challenges and establish real-time invasion detection capabilities for edge devices in low Earth orbit. The entire model training process is performed over a blockchain and a horizontal federated learning scheme, securely reaching consensus on model updates. The system's effectiveness is examined through case analyses on a publicly available dataset. The results indicate that the proposed system can effectively maintain the desired invasion detection performance, with an average Area Under the Curve (AUC) value of 0.99 across experimental runs. Utilising the blockchain-based federated learning process, the total size of transmitted data is reduced by 89.5 %, supporting the development of lightweight invasion detection applications. A closed-loop mechanism for continuously updating the space invasion detection model is established to achieve high space security.
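The aggregation step of horizontal federated learning can be sketched as a sample-size-weighted average of per-client model weights (FedAvg-style). This toy version shows only that aggregation step and omits the fuzzy modelling and the blockchain consensus layer the paper builds on top of it.

```python
def federated_average(client_weights, client_sizes):
    """Aggregate per-client model weight vectors into a global model via
    the sample-size-weighted mean used in horizontal federated learning.
    `client_weights` is a list of equal-length weight lists; `client_sizes`
    gives each client's local sample count. (Sketch of FedAvg only.)"""
    total = sum(client_sizes)
    n_params = len(client_weights[0])
    return [
        sum(w[i] * s for w, s in zip(client_weights, client_sizes)) / total
        for i in range(n_params)
    ]
```

Clients with more local data pull the global model further toward their update; in the paper's setting the resulting update would be committed to the blockchain once consensus is reached.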
Title: A blockchain-enabled horizontal federated learning system for fuzzy invasion detection in maintaining space security. Journal of Industrial Information Integration, vol. 43, Article 100745.
Pub Date : 2025-01-01 DOI: 10.1016/j.jii.2024.100746
Jagmeet Singh , Amandeep Singh , Harwinder Singh , Philippe Doyon-Poulin
Production planning and control (PPC) is essential in industrial manufacturing, ensuring efficient resource allocation and process management. Industry 4.0 introduces advanced technologies such as cyber-physical systems (CPS), artificial intelligence (AI), and the Internet of Things (IoT) to effectively manage and monitor manufacturing operations. However, integrating these technologies into existing machinery, particularly for small and medium-sized enterprises (SMEs), poses challenges due to complexity and cost. The present study addresses this gap by designing and implementing a Smart Machine Monitoring System (SMMS) compatible with existing machinery such as computer numerical control and special-purpose machines. The SMMS integrates IoT-based systems with AI algorithms to enhance machine tool utilization through effective planning, scheduling, and real-time monitoring. Through a nine-month case study in a shackle bolt manufacturing section, it was tested and compared against an Enterprise Resource Planning (ERP)-based system to assess its performance. Results showed significant improvements in production output, machine utilization rates, labor efficiency, and overall manufacturing costs. In conclusion, this study contributes to the body of knowledge on practical Industry 4.0 implementations for SMEs, offering insights into cost-effective solutions for enhancing operational efficiency and resource utilization in manufacturing environments.
Title: Implementation and evaluation of a smart machine monitoring system under industry 4.0 concept. Journal of Industrial Information Integration, vol. 43, Article 100746.
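As a sketch of the kind of utilization metric such a machine monitoring system reports, the function below derives machine utilization from a chronological list of IoT status events. The event format is a hypothetical illustration, not the SMMS data schema.

```python
def utilization(events, shift_minutes):
    """Machine utilization from status events: fraction of the shift spent
    running. `events` is a chronological, non-empty list of
    (minute_within_shift, status) pairs, status 'RUN' or 'IDLE'.
    (Hypothetical event format for illustration.)"""
    run = 0
    for (t0, status), (t1, _) in zip(events, events[1:]):
        if status == "RUN":
            run += t1 - t0
    # account for the final interval, from the last event to shift end
    last_t, last_status = events[-1]
    if last_status == "RUN":
        run += shift_minutes - last_t
    return run / shift_minutes
```

For instance, a machine that runs from minute 0 to 30, idles until 45, then runs to the end of a 60-minute shift has 75 % utilization.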
Pub Date : 2025-01-01 DOI: 10.1016/j.jii.2024.100750
Akbayan Bekarystankyzy , Abdul Razaque , Orken Mamyrbayev
Millions of individuals across the world use automatic speech recognition (ASR) systems every day to dictate messages, operate gadgets, begin searches, and enter data on tiny devices. Engagement in these circumstances is determined by the accuracy of the voice transcriptions and the system's responsiveness. A further barrier to natural engagement for multilingual users is the monolingual nature of many ASR systems, which limits users to a single predefined language. A substantial amount of transcribed audio data is required to train a trustworthy and accurate ASR model, and the absence of such data affects a large number of languages, particularly agglutinative ones. Much research has been conducted using various strategies to improve models for low-resource languages. This study presents an integrated end-to-end multilingual ASR (EMASR) architecture that allows users to choose from a variety of spoken language combinations. The proposed EMASR offers an integrated design to support low-resource agglutinative languages by fusing the features of a multi-identifier module, a voice fusion module, and a recurrent neural network module. The proposed EMASR identifies Turkic agglutinative languages (Kazakh, Bashkir, Kyrgyz, Saha, and Tatar) and enables multilingual training through the use of Connectionist Temporal Classification (CTC) and an attention mechanism that includes a language model (LM). These languages share cognate words, sentence-construction principles, and the Cyrillic alphabet. We use recent advancements in language identification to obtain recognition accuracy and latency characteristics. Experimental results reveal that multilingual training produces superior results to monolingual training in all languages tested. The Kazakh language achieved a spectacular result: the word error rate (WER) was halved and the character error rate (CER) was reduced to one-third, demonstrating that this strategy may benefit critically low-resource languages.
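The word error rate reported above is the standard ASR metric: the Levenshtein edit distance between reference and hypothesis word sequences, divided by the reference length. A minimal implementation:

```python
def word_error_rate(reference, hypothesis):
    """WER = (substitutions + deletions + insertions) / reference length,
    computed via dynamic-programming Levenshtein distance over word tokens."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,        # deletion
                           dp[i][j - 1] + 1,        # insertion
                           dp[i - 1][j - 1] + cost) # substitution or match
    return dp[len(ref)][len(hyp)] / len(ref)
```

The character error rate (CER) is computed the same way over characters instead of word tokens, which matters for agglutinative languages where a single misrecognized suffix counts as a whole-word error under WER.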
Title: Integrated end-to-end multilingual method for low-resource agglutinative languages using Cyrillic scripts. Journal of Industrial Information Integration, vol. 43, Article 100750.
Pub Date : 2025-01-01 DOI: 10.1016/j.jii.2024.100758
Chunji Xie , Li Yang , Xiantao He , Tao Cui , Dongxing Zhang , Hongsheng Li , Tianpu Xiao , Haoyu Wang
Seeding plays a crucial role in agricultural production. Traditional mechanized seeding suffers from inefficiency, low precision, and lack of control, which makes it inadequate for the demands of modern precision agriculture for high speed, high precision, and real-time control. Therefore, this study proposes a precision seeding scheme based on multi-sensor information fusion. The system uses a Controller Area Network bus to collect and analyze data from multiple sensors, accurately controlling the seeding and fertilization mechanisms and monitoring operational conditions in real time. In addition, the structural design, functional development, and field testing of the proposed seeding scheme are analyzed. A dual-speed measurement method, which employs an encoder and a Global Navigation Satellite System receiver, is used to develop the motor drive model. The test results show that the maximum average error in motor speed does not exceed 1.5 %. The system accurately alarms on seeding and fertilization faults with a 100 % success rate and no missed or false alarms. The incorporated novel features include a field headland switch and a one-click pre-seeding function. During the lifting and lowering of the seeder, the motor stop and start success rates also reach 100 %, with a system response time <0.7 s. The pre-seeding time can be set arbitrarily, which avoids the issue of no seeds falling when the seeder starts. Moreover, the system's wind pressure measurement has an average relative error of 0.83 %. Long-term operation tests show no faults, and all functions remain normal. Furthermore, the field test results show an average qualified seeding rate of 94.81 % and an average seed spacing variation coefficient of 14.1 %, demonstrating the high accuracy and stability of the system.
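The seed spacing variation coefficient reported in the field test is, in standard precision-seeding practice, the coefficient of variation of the gaps between consecutive seeds; a minimal sketch (the example seed positions are invented):

```python
from statistics import mean, stdev

def spacing_cv(seed_positions):
    """Coefficient of variation of seed spacing: sample stdev divided by
    mean of the gaps between consecutive seed positions along the row.
    Lower is more uniform. Requires at least three positions."""
    gaps = [b - a for a, b in zip(seed_positions, seed_positions[1:])]
    return stdev(gaps) / mean(gaps)
```

Perfectly uniform spacing gives a coefficient of 0; the 14.1 % figure reported above corresponds to a value of about 0.141 under this definition.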
Title: Maize precision seeding scheme based on multi-sensor information fusion. Journal of Industrial Information Integration, vol. 43, Article 100758.
Pub Date : 2025-01-01 DOI: 10.1016/j.jii.2024.100742
Tao Gu , Yajuan Zhang , Limin Wang , Yufei Zhang , Muhammet Deveci , Xin Wen
Optimizing industrial information integration is fundamental to harnessing the potential of Industry 4.0, driving data-informed decisions that enhance operational efficiency, reduce costs, and improve competitiveness in modern industrial environments. Effective unmanned aerial vehicle (UAV) path planning is crucial within this optimization framework, supporting timely and reliable data collection and transmission for smarter decision-making. This study proposes an enhanced RIME (IRIME) algorithm for three-dimensional UAV path planning in complex urban environments, formulated as a multiconstraint optimization problem aimed at discovering optimal flight paths in intricate configuration spaces. IRIME integrates three strategic innovations into the RIME algorithm: a frost crystal diffusion mechanism for improved initial population diversity, a high-altitude condensation strategy to enhance global exploration, and a lattice weaving strategy to avoid premature convergence. Evaluated on the CEC2017 test set and six realistic urban scenarios, IRIME achieves an 86.21 % win rate across 100 functions. In scenarios 4–6, IRIME uniquely identifies the globally optimal paths, outperforming other algorithms that are limited to locally optimal solutions. We believe these findings demonstrate IRIME's capacity to address complex path-planning challenges, laying a robust foundation for its future application to broader industrial optimization tasks.
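A multiconstraint path-planning objective of the kind a RIME-style optimizer would minimize can be sketched as total path length plus a large penalty for constraint violations; the toy objective below uses spherical no-fly zones and is illustrative only, not the paper's exact formulation.

```python
import math

def path_cost(waypoints, obstacles, penalty=1e6):
    """Evaluate a candidate 3-D flight path given as a list of (x, y, z)
    waypoints: total Euclidean length, plus `penalty` for every waypoint
    inside a spherical no-fly zone (cx, cy, cz, r). A metaheuristic such
    as IRIME would search for waypoint sets minimizing this cost.
    (Toy objective; obstacle model and penalty are hypothetical.)"""
    length = sum(math.dist(a, b) for a, b in zip(waypoints, waypoints[1:]))
    violations = sum(
        1 for p in waypoints
        for (cx, cy, cz, r) in obstacles
        if math.dist(p, (cx, cy, cz)) < r
    )
    return length + penalty * violations
```

Because the penalty dwarfs any realistic path length, feasible paths always beat infeasible ones, which is the standard way to fold hard constraints into a single scalar objective for population-based optimizers.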
{"title":"A comprehensive analysis of multi-strategic RIME algorithm for UAV path planning in varied terrains","authors":"Tao Gu , Yajuan Zhang , Limin Wang , Yufei Zhang , Muhammet Deveci , Xin Wen","doi":"10.1016/j.jii.2024.100742","DOIUrl":"10.1016/j.jii.2024.100742","url":null,"abstract":"<div><div>Optimizing industrial information integration is fundamental to harnessing the potential of Industry 4.0, driving data-informed decisions that enhance operational efficiency, reduce costs, and improve competitiveness in modern industrial environments. Effective unmanned aerial vehicle (UAV) path planning is crucial within this optimization framework, supporting timely and reliable data collection and transmission for smarter decision-making. This study proposes an enhanced RIME (IRIME) algorithm for three-dimensional UAV path planning in complex urban environments, formulated as a multiconstraint optimization problem aimed at discovering optimal flight paths in intricate configuration spaces. IRIME integrates three strategic innovations into the RIME algorithm: a frost crystal diffusion mechanism for improved initial population diversity, a high-altitude condensation strategy to enhance global exploration, and a lattice weaving strategy to avoid premature convergence. Evaluated on the CEC2017 test set and six realistic urban scenarios, IRIME achieves an 86.21 % win rate across 100 functions. In scenarios 4–6, IRIME uniquely identifies the globally optimal paths, outperforming other algorithms that are limited to locally optimal solutions. 
We believe these findings demonstrate IRIME's capacity to address complex path-planning challenges, laying a robust foundation for its future application to broader industrial optimization tasks.</div></div>","PeriodicalId":55975,"journal":{"name":"Journal of Industrial Information Integration","volume":"43 ","pages":"Article 100742"},"PeriodicalIF":10.4,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142873928","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2025-01-01 DOI: 10.1016/j.jii.2024.100741
Hulin Jin , Yong-Guk Kim , Zhiran Jin , Chunyang Fan
Deep eutectic solvents (DESs) have recently been proposed as green materials for removing nitric oxide (NO) from streams released into the atmosphere. The mathematical side of this process has attracted less attention than it deserves. A straightforward approach in this field will help engineer DES chemistry and optimize the equilibrium conditions to maximize the amount of NO removed. This study covers this gap by constructing a reliable artificial neural network (ANN) that correlates the NO removal capacity of a DES with equilibrium pressure/temperature and solvent chemistry. First, physically meaningful features are selected to quantify the DES chemistry. Density was found to be the best representative of the hydrogen-bond acceptor and hydrogen-bond donor, and the density and viscosity of the DESs exhibit the highest correlation with NO solubility. Then, the hyperparameters of three well-known ANN types (feedforward, recurrent, and cascade) are determined by combining trial-and-error and sensitivity analyses. Finally, a ranking test identifies the ANN type with the lowest uncertainty in estimating NO dissolution in DESs. The cascade neural network (CNN), with twelve neurons in the hidden layer and one in the output layer, equipped with tangent hyperbolic and radial basis transfer functions, is identified as the best ANN type for this purpose. This model predicts 292 DES-NO equilibrium records collected from the literature with a mean absolute error of 0.033, a relative absolute error of 1.49 %, a mean squared error of 0.002, and a coefficient of determination of 0.9998. The present study also clarifies the role of DES chemistry and operating conditions in the amount of NO removable by DESs. 1,3-dimethylthioureaP4444Cl (3:1) is recognized as the best DES for separating NO molecules from gaseous streams. The simulation results show that a unit mass of the best DES can absorb up to ∼27 mol of NO.
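The error statistics reported above can be computed from model predictions with standard definitions. The abstract does not spell out its formulas, so the conventions below (in particular, relative absolute error taken against a mean-predictor baseline) are common choices, not necessarily the authors':

```python
def regression_metrics(y_true, y_pred):
    """MAE, RAE (%), MSE, and R^2 for a set of predictions,
    using textbook definitions."""
    n = len(y_true)
    mean_y = sum(y_true) / n
    abs_err = [abs(t - p) for t, p in zip(y_true, y_pred)]
    sq_err = [(t - p) ** 2 for t, p in zip(y_true, y_pred)]
    mae = sum(abs_err) / n
    mse = sum(sq_err) / n
    # RAE: total absolute error relative to that of always predicting the mean
    rae = 100.0 * sum(abs_err) / sum(abs(t - mean_y) for t in y_true)
    ss_res = sum(sq_err)
    ss_tot = sum((t - mean_y) ** 2 for t in y_true)
    r2 = 1.0 - ss_res / ss_tot
    return {"MAE": mae, "RAE%": rae, "MSE": mse, "R2": r2}
```

Under these definitions, the combination quoted above (MAE 0.033, RAE 1.49 %, MSE 0.002, R² 0.9998) corresponds to a model whose residuals are roughly two orders of magnitude smaller than the spread of the data.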
{"title":"Machine learning assisted prediction of the nitric oxide (NO) solubility in various deep eutectic solvents","authors":"Hulin Jin , Yong-Guk Kim , Zhiran Jin , Chunyang Fan","doi":"10.1016/j.jii.2024.100741","DOIUrl":"10.1016/j.jii.2024.100741","url":null,"abstract":"<div><div>Deep eutectic solvents (DESs) are recently proposed as green materials to remove nitric oxide (NO) from released streams into the atmosphere. The mathematical aspect of this process attracted less attention than it deserved. A straightforward approach in this field will help engineer DES chemistry and optimize the equilibrium conditions to maximize the amount of removed NO. This study covers this gap by constructing a reliable artificial neural network (ANN) to correlate the NO removal capacity of DES with equilibrium pressure/temperature and solvent chemistry. So, firstly, the physical meaningful features are selected to make the DES chemistry quantitative. It was found that the density is the best representative for the hydrogen-bound acceptor and hydrogen-bound donor. Also, the density and viscosity of the DESs exhibit the highest correlation with the NO solubility. Then, the hyperparameters of three famous ANN types (feedforward, recurrent, and cascade) are determined by combining trial-and-error and sensitivity analyzes. Finally, the ranking test distinguishes the ANN type with the lowest uncertainty toward estimating NO dissolution in DESs. The cascade neural network (CNN) with twelve and one neurons in the hidden and output layers equipped with the tangent hyperbolic and radial basis transfer functions is identified as the best ANN type for the given purpose. This model predicts 292 DES-NO equilibrium records collected from the literature with mean absolute errors = 0.033, relative absolute errors = 1.49 %, mean squared errors = 0.002, and coefficient of determination = 0.9998. 
Also, the present study helps understand the role of DES chemistry and operating conditions on the amount of removable NO by DESs. 1,3-dimethylthioureaP4444Cl (3:1) is recognized as the best DES to separate NO molecules from gaseous streams, respectively. The simulation results show that the unit mass of the best DES is capable of absorbing up to ∼27 mol of NO.</div></div>","PeriodicalId":55975,"journal":{"name":"Journal of Industrial Information Integration","volume":"43 ","pages":"Article 100741"},"PeriodicalIF":10.4,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142790053","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2025-01-01 DOI: 10.1016/j.jii.2024.100744
Nisar Hakam , Khaled Benfriha
Advanced simulation tools allow processes to be optimized prior to production implementation. Our study aims to integrate industrial information and data into an artificial intelligence (AI) based digital model that simulates acoustic and vibration behavior during the production preparation phase. The model combines real manufacturing conditions with the vibrations and acoustic waves they generate, yielding a comprehensive simulation tool for this phase. By harnessing Internet of Things (IoT) sensors, Big Data, and Cyber-Physical Systems (CPS), our approach achieves a unified system that consolidates data from diverse sources, facilitating a seamless information flow within an Industry 4.0 framework. Small signal variations have made manufacturing operations difficult to model with AI tools, as recent studies show. The proposed approach overcomes these challenges and has been successfully applied to a numerically controlled lathe using sensors and advanced analytical tools, paving the way for a robust industrial information integration system that optimizes and predicts operational outcomes.
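As one concrete example of the signal processing such a pipeline rests on, a dominant-frequency feature can be extracted from a vibration window with a plain DFT before feeding an AI model. This is a self-contained sketch; the paper's actual feature set and preprocessing are not specified in the abstract:

```python
import cmath
import math

def dft_magnitudes(signal):
    """Magnitude spectrum via a direct DFT (fine for short sensor windows)."""
    n = len(signal)
    return [abs(sum(signal[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n))) / n
            for k in range(n // 2)]

def dominant_frequency(signal, sample_rate):
    """Frequency (Hz) of the largest spectral peak, ignoring the DC term."""
    mags = dft_magnitudes(signal)
    k = max(range(1, len(mags)), key=lambda i: mags[i])
    return k * sample_rate / len(signal)

# synthetic vibration window: a 50 Hz tone sampled at 1 kHz
fs = 1000
sig = [math.sin(2 * math.pi * 50 * t / fs) for t in range(200)]
```

In practice an FFT library call replaces the quadratic direct DFT, but the feature itself, the spectral peak of a short vibration window, is unchanged; small shifts in that peak are exactly the kind of subtle signal variation the text notes is hard to model.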
{"title":"Enhancement of industrial information systems through AI models to simulate the vibrational and acoustic behavior of machining operations","authors":"Nisar Hakam , Khaled Benfriha","doi":"10.1016/j.jii.2024.100744","DOIUrl":"10.1016/j.jii.2024.100744","url":null,"abstract":"<div><div>Advanced simulation tools allow the optimization of processes prior to production implementation. Our study aims to integrate industrial information and data into a digital model based on artificial intelligence (AI) to simulate acoustic and vibration behavior during the production preparation phase. This model integrates real manufacturing conditions with generated vibrations and acoustic waves, creating a comprehensive simulation tool for acoustic and vibration behavior during the production preparation phase. By harnessing Internet of Things (IoT) sensors, Big Data, and Cyber-Physical Systems (CPS), our approach achieves a unified system that consolidates data from diverse sources, facilitating a seamless information flow within an Industry 4.0 framework. Small signal variations made it complex to model manufacturing operations using AI tools, as seen in recent studies. 
However, the proposed approach overcomes these challenges and has been successfully applied to a numerical lathe using sensors and advanced analytical tools, paving the way for a robust industrial information integration system to optimize and predict operational outcomes.</div></div>","PeriodicalId":55975,"journal":{"name":"Journal of Industrial Information Integration","volume":"43 ","pages":"Article 100744"},"PeriodicalIF":10.4,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142790141","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}