
Latest publications in 物联网技术 (Internet of Things Technology)

Hierarchical Fuzzy Methodologies for Energy Efficient Routing Protocol for Wireless Sensor Networks
Pub Date: 2023-01-05 DOI: 10.1109/IDCIoT56793.2023.10053474
M. Prabha, M. Anbarasan, S. Sunithamani, Mrs. K. Saranya
In recent years, wireless sensor networks have been widely used in numerous real-time applications such as WBAN monitoring and tracking. Recent developments in wireless networks have given rise to new and reliable methods for enhancing network lifetime, energy efficiency, and scalability. The clustering techniques commonly used to manage sensor networks tie the power consumption of the entire wireless sensor network closely to the energy level of each individual sensor node. In order to optimize data transmission, the fuzzy C-means algorithm is employed in this article to analyze cluster head selection thoroughly, considering the energy available at each node and its distance from the base station. This study demonstrates how carefully choosing cluster heads and clustering nodes, which divides large networks into smaller clusters, can extend the lifespan of a network. The proposed network uses a multi-hop routing approach, in which each sensor node can independently collect and send data, in order to address the data-rate issue. The suggested cluster routing protocol was tested over 1000 data transmission rounds to assess its strengths and weaknesses in terms of network lifetime and energy efficiency. The choice of the cluster head node, the distance between nodes, and the amount of energy needed for subsequent data transmission are all treated as random in each round. The simulation results show that the suggested methodology outperforms state-of-the-art routing techniques and achieves promising network performance. Furthermore, the effects of hierarchical cluster head selection point to the potential of our method for future use in WSNs. The tests were performed using computer simulation, including comparing the effect on network lifetime of the increased number of rounds before and after applying the energy-efficient routing protocol, and examining performance metrics for network lifetime.
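For illustration, the following is a minimal sketch of the kind of pipeline the abstract describes: a plain NumPy fuzzy C-means pass over node positions, followed by a cluster-head score that trades residual energy against distance to the base station. The network data, score weights, and cluster count are all assumptions for this sketch, not values from the paper.

```python
import numpy as np

def fuzzy_c_means(points, n_clusters, m=2.0, n_iter=100, seed=0):
    """Standard fuzzy C-means: returns cluster centres and membership matrix."""
    rng = np.random.default_rng(seed)
    u = rng.random((len(points), n_clusters))
    u /= u.sum(axis=1, keepdims=True)            # fuzzy memberships sum to 1
    for _ in range(n_iter):
        um = u ** m
        centers = (um.T @ points) / um.sum(axis=0)[:, None]
        d = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2) + 1e-9
        u = 1.0 / (d ** (2 / (m - 1)))           # standard membership update
        u /= u.sum(axis=1, keepdims=True)
    return centers, u

# Illustrative network: 100 nodes with random positions and residual energy.
rng = np.random.default_rng(1)
pos = rng.uniform(0, 100, (100, 2))
energy = rng.uniform(0.2, 1.0, 100)
base_station = np.array([50.0, 120.0])

centers, u = fuzzy_c_means(pos, n_clusters=5)
labels = u.argmax(axis=1)

# Pick as cluster head the member that balances high residual energy
# against distance to the base station (the 0.005 weight is an assumption).
for c in range(5):
    members = np.where(labels == c)[0]
    if len(members) == 0:
        continue                                  # skip empty clusters
    score = energy[members] - 0.005 * np.linalg.norm(pos[members] - base_station, axis=1)
    print(f"cluster {c}: head = node {members[score.argmax()]}")
```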
Citations: 0
Remora Optimization with Machine Learning Driven Churn Prediction for Business Improvement
Pub Date: 2023-01-05 DOI: 10.1109/IDCIoT56793.2023.10053554
Sumita Kumar, P. Baruah, S. Kirubakaran, A. S. Kumar, Kamlesh Singh, M. V. J. Reddy
Customer Relationship Management (CRM) is a comprehensive approach to constructing, handling, and maintaining loyal and long-lasting customer relationships. It is widely acknowledged and deployed across distinct domains, e.g., telecom, the retail market, banking and insurance, and so on. A major objective is customer retention. Churn methods aim to recognize early churn signals and identify customers with a heightened likelihood of leaving voluntarily. Machine learning (ML) techniques are presented for tackling the difficult churn prediction problem. This paper presents a Remora Optimization with Machine Learning Driven Churn Prediction for Business Improvement (ROML-CPBI) technique. The aim of the ROML-CPBI technique is to forecast the likelihood of customer churn in the business sector. The working of the ROML-CPBI technique encompasses two major processes, namely prediction and parameter tuning. At the initial stage, the ROML-CPBI technique utilizes a multi-kernel extreme learning machine (MKELM) for churn prediction. Secondly, the RO algorithm is applied to adjust the parameters of the MKELM model, thereby yielding enhanced predictive outcomes. To validate the superior performance of the ROML-CPBI technique, an extensive range of experiments was performed. The experimental values signified the improved outcomes of the ROML-CPBI technique over the others.
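As a rough illustration of the prediction stage only, the sketch below implements a kernel extreme learning machine with a weighted mix of an RBF and a linear kernel, using the closed-form solution alpha = (K + I/C)^(-1) y. The mixing weight, kernel parameter, and regularization constant, which the RO algorithm would tune in the paper, are fixed here as assumptions, and the churn data is synthetic.

```python
import numpy as np

def rbf_kernel(X, Y, gamma):
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-gamma * d2)

def multi_kernel(X, Y, w, gamma):
    # Weighted mix of an RBF and a linear kernel; in the paper the mixing
    # weight and kernel parameters would be tuned by the RO algorithm.
    return w * rbf_kernel(X, Y, gamma) + (1 - w) * (X @ Y.T)

def kelm_fit(X, y, C=10.0, w=0.7, gamma=0.1):
    K = multi_kernel(X, X, w, gamma)
    return np.linalg.solve(K + np.eye(len(X)) / C, y)   # closed-form KELM solution

def kelm_predict(X_train, alpha, X_test, w=0.7, gamma=0.1):
    return multi_kernel(X_test, X_train, w, gamma) @ alpha

# Toy churn data: 200 customers, 5 features, binary churn label.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float)

alpha = kelm_fit(X[:150], y[:150])
pred = kelm_predict(X[:150], alpha, X[150:]) > 0.5
print("accuracy:", (pred == y[150:].astype(bool)).mean())
```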
Citations: 0
Artificial Neural Network (ANN) Enabled Weather Monitoring and Prediction System using IoT
Pub Date: 2023-01-05 DOI: 10.1109/IDCIoT56793.2023.10053534
P. Krishna, Kongara Chandra Bhanu, Shaik Akram Ahamed, Myneni Umesh Chandra, Neelapu Prudhvi, Nandigama Apoorva
This article proposes a weather monitoring and prediction system using an ANN with the Internet of Things (IoT) for diverse applications. Neural networks provide the ability to perform computation and learning, and can solve difficult problems that appear computationally intractable. The output neurons produce a function of the inputs from the input neurons and their previous outputs. The proposed system is developed on an IoT basis using an ESP32 microcontroller, interfacing different sensors to capture the input parameters. All acquired sensor information can be visualized in the ThingSpeak cloud as well as in the mobile application. Once the data is acquired from the sensors, the system processes it against the available dataset and delivers the output, which the end user can see on the cloud, by text message, and by mail. As a proof of concept, it is tested in different weather conditions.
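A minimal sketch of the idea, not the authors' implementation: a small scikit-learn MLP stands in for the ANN forecaster, and the latest reading plus the forecast are pushed to ThingSpeak through its public update endpoint. The synthetic training data, field mapping, and "YOUR_WRITE_API_KEY" placeholder are all assumptions.

```python
import numpy as np
import requests
from sklearn.neural_network import MLPRegressor

# Toy training data standing in for logged sensor readings:
# [temperature, humidity, pressure] -> temperature one hour ahead.
rng = np.random.default_rng(0)
X = rng.uniform([15, 30, 990], [35, 90, 1030], size=(500, 3))
y = X[:, 0] + 0.02 * (X[:, 1] - 60) + rng.normal(0, 0.3, 500)

model = MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=2000, random_state=0)
model.fit(X, y)

reading = [28.4, 65.0, 1008.2]                   # latest sensor sample
forecast = model.predict([reading])[0]

# Push the reading and the forecast to a ThingSpeak channel.
# "YOUR_WRITE_API_KEY" is a placeholder for the channel's write key.
requests.get("https://api.thingspeak.com/update",
             params={"api_key": "YOUR_WRITE_API_KEY",
                     "field1": reading[0], "field2": forecast},
             timeout=10)
```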
Citations: 0
Enhanced Edge Computing Model by using Data Combs for Big Data in Metaverse
Pub Date: 2023-01-05 DOI: 10.1109/IDCIoT56793.2023.10053519
Lakshmikanth Rajath Mohan T, N. Jayapandian
The Metaverse is a huge project undertaken by Facebook to bring the world closer together and help people live out their dreams. Even people with disabilities could travel across the world; people could visit any place while remaining safe in the comfort of their homes. Meta (previously Facebook) plans to realize this using a combination of AR and VR (Augmented Reality and Virtual Reality) and aims to bring the technology to the public soon. However, a major factor that needs to be accounted for is the amount of data generation that will take place. Many computer science professors and scientists believe that the amount of data Meta will generate in one day would almost equal the amount of data Instagram/Facebook have generated in their entire lifetime. This would push total data generation up by at least 30%, if not more. Traditional methods such as cloud computing may prove inadequate in the near future, because the servers might not be able to handle such large amounts of data. The solution should be a system designed specifically for handling extremely large data: one that is not only secure, resilient, and robust, but can also handle multiple requests and connections at once without slowing down as the number of requests grows over time. In this model, a solution called the DHA (Data Hive Architecture) is provided. Each DHA is made up of multiple subunits called Data Combs, which are further broken down into data cells: small units of memory that can process big data extremely fast. When information is requested by a client (for example, a data warehouse) and is stored across multiple edges around the world, the Data Combs rearrange the data cells within them on the basis of the requested criteria. This article explains this concept of Data Combs and its usage in the Metaverse.
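The sketch below is a purely hypothetical rendering of the Data Comb idea in Python: the class and method names mirror the article's terminology, but the rearrangement policy (matching cells first, then by access count) is an assumption made only for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class DataCell:
    """Smallest storage unit inside a Data Comb (hypothetical)."""
    key: str
    payload: dict
    access_count: int = 0

@dataclass
class DataComb:
    """A subunit of the Data Hive that reorders its cells per request."""
    cells: list = field(default_factory=list)

    def query(self, criterion):
        # Rearrange cells so those matching the requested criterion come
        # first, mimicking the "rearrange on request" behaviour described
        # above; the secondary sort key (access count) is an assumption.
        self.cells.sort(key=lambda c: (not criterion(c), -c.access_count))
        hits = [c for c in self.cells if criterion(c)]
        for c in hits:
            c.access_count += 1
        return hits

comb = DataComb([DataCell("u1", {"region": "EU"}),
                 DataCell("u2", {"region": "US"}),
                 DataCell("u3", {"region": "EU"})])
print([c.key for c in comb.query(lambda c: c.payload["region"] == "EU")])
```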
Citations: 0
A Review on Autopilot using Neuro Evaluation of Augmenting Topologies
Pub Date: 2023-01-05 DOI: 10.1109/IDCIoT56793.2023.10053395
K. C. Reddy, V. K, Faraz Ahmed Mulla, G. N. H. Kumar, J. Prajwal, M. Gopal
Autonomous lateral movement is a significant challenge for self-driving cars; therefore, the major target of this paper is to replicate driving in simulation in order to improve the performance of self-driving cars. This work focuses on using multilayer neural networks and deep learning techniques to enable self-driving cars to operate under simulated conditions. Within the simulator, the driver's vision and reactions are mimicked by preprocessing the images obtained from a camera-mounted vehicle. A neural network trains the deep learning model on the images captured by the camera in manual mode. The trained multi-layer neural network then handles the various conditions of driving the car in self-driving mode. The driver-imitation algorithms created and characterized in this work are built entirely on deep learning techniques.
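For a concrete picture of the imitation pipeline, here is a minimal behavioural-cloning sketch in Keras: camera frames in, steering angle out, trained on frames recorded in manual mode. The layer sizes loosely follow common end-to-end driving networks and are assumptions, not the architecture from the paper.

```python
import numpy as np
import tensorflow as tf

# Minimal behavioural-cloning model: camera frames in, steering angle out.
model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 255, input_shape=(66, 200, 3)),
    tf.keras.layers.Conv2D(24, 5, strides=2, activation="relu"),
    tf.keras.layers.Conv2D(36, 5, strides=2, activation="relu"),
    tf.keras.layers.Conv2D(48, 5, strides=2, activation="relu"),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1),                    # steering angle
])
model.compile(optimizer="adam", loss="mse")

# Stand-in data: frames recorded in manual mode with the human's steering.
frames = (np.random.rand(32, 66, 200, 3) * 255).astype("float32")
angles = np.random.uniform(-1, 1, 32).astype("float32")
model.fit(frames, angles, epochs=1, verbose=0)

print(model.predict(frames[:1], verbose=0))      # predicted steering command
```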
Citations: 0
Maximizing the Net Present Value of Resource-Constrained Project Scheduling Problems using Recurrent Neural Network with Genetic Algorithm
Pub Date: 2023-01-05 DOI: 10.1109/IDCIoT56793.2023.10053390
Tshewang Phuntsho, T. Gonsalves
Scheduling long-duration, financially interdependent projects under resource constraints is of the utmost significance to project and finance managers. A new technique based on a modified Recurrent Neural Network (RNN) employing the Parallel Schedule Generation Scheme (PSGS) is proposed as a heuristic method to solve the resource-constrained project scheduling problem with discounted cash flows (RCPSPDC). To resolve the exploding/vanishing gradient problem of the RNN, a Genetic Algorithm (GA) is employed to optimize its weight matrices. Our GA uses p-point crossover and m-point mutation operators, besides utilizing elitism and tournament strategies, to diversify and evolve the population. The proposed RNN architecture, implemented in the Julia language, is evaluated on projects sampled from the well-known dataset of 17,280 project instances. This article establishes the superior performance of our proposed architecture compared to existing state-of-the-art standalone meta-heuristic techniques, besides offering transfer learning capabilities. The technique can easily be hybridized with existing architectures to achieve remarkable performance.
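The weight-evolution loop can be sketched as below (in Python rather than the paper's Julia, purely for illustration): a GA with tournament selection, elitism, 2-point crossover, and m-point mutation evolves a flat weight vector. The fitness function is a placeholder; a real implementation would decode the vector into RNN weights, run the parallel SGS, and return the discounted NPV of the resulting schedule.

```python
import numpy as np

rng = np.random.default_rng(0)
N_W = 40                      # flattened RNN weight-vector length (assumed)

def fitness(w):
    # Placeholder objective standing in for the project NPV obtained by
    # decoding w into RNN weights and simulating the schedule.
    return -np.sum((w - 0.5) ** 2)

def tournament(pop, fits, k=3):
    idx = rng.choice(len(pop), k, replace=False)
    return pop[idx[np.argmax(fits[idx])]]

pop = rng.uniform(-1, 1, (50, N_W))
for gen in range(100):
    fits = np.array([fitness(w) for w in pop])
    elite = pop[fits.argmax()].copy()            # elitism: keep the best
    children = []
    while len(children) < len(pop) - 1:
        p1, p2 = tournament(pop, fits), tournament(pop, fits)
        cut = sorted(rng.choice(N_W, 2, replace=False))   # 2-point crossover
        child = np.concatenate([p1[:cut[0]], p2[cut[0]:cut[1]], p1[cut[1]:]])
        for j in rng.choice(N_W, 3, replace=False):       # m-point mutation
            child[j] += rng.normal(0, 0.1)
        children.append(child)
    pop = np.vstack([elite] + children)

print("best fitness:", max(fitness(w) for w in pop))
```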
Citations: 0
Detection of Crime Scene Objects using Deep Learning Techniques
Pub Date: 2023-01-05 DOI: 10.1109/IDCIoT56793.2023.10053440
Nandhini T J, K. Thinakaran
Research on the detection of objects at crime scenes has flourished in the last two decades. Researchers have concentrated on color pictures, where lighting is a crucial component, since this is one of the most pressing issues in computer vision, with applications spanning surveillance, security, medicine, and more. However, night-time monitoring is crucial, since most security problems cannot be seen with the naked eye. That is why it is essential to record a dark scene and identify the objects at a crime scene; even in the dark, infrared cameras are indispensable. Both military and civilian sectors will benefit from the use of such methods for night-time navigation. On the other hand, IR photographs suffer from poor resolution, lighting effects, and similar problems. Surveillance cameras with infrared (IR) imaging capabilities have been the focus of much study and development in recent years. This research work offers a model for object recognition that applies deep learning to IR images obtained from crime scenes. The model is tested in several environments, including a central processing unit (CPU), Google Colab, and a graphics processing unit (GPU), and its performance is tabulated.
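As a hedged example of the inference step only, the snippet below runs a COCO-pretrained Faster R-CNN from torchvision over an image. It stands in for the paper's model, which would be fine-tuned on annotated IR crime-scene images; "ir_scene.png" is a placeholder path and the 0.5 threshold is an assumption.

```python
import torch
from torchvision.io import read_image, ImageReadMode
from torchvision.models.detection import fasterrcnn_resnet50_fpn
from torchvision.transforms.functional import convert_image_dtype

# A COCO-pretrained detector stands in here for the paper's model.
model = fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()

img = read_image("ir_scene.png", mode=ImageReadMode.RGB)  # placeholder path
img = convert_image_dtype(img, torch.float)

with torch.no_grad():
    out = model([img])[0]                        # dict: boxes, labels, scores

keep = out["scores"] > 0.5                       # confidence threshold
for box, label in zip(out["boxes"][keep], out["labels"][keep]):
    print(int(label), [round(v, 1) for v in box.tolist()])
```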
Citations: 8
RPL Protocol Enhancement using Artificial Neural Network (ANN) for IoT Applications
Pub Date: 2023-01-05 DOI: 10.1109/IDCIoT56793.2023.10053540
S. Kuwelkar, H. G. Virani
In the near future, IoT will revolutionize the human lifestyle. IoT is categorized as a low-power and lossy network (LLN), since it employs devices with constrained power, memory, and processing capability that are interconnected over lossy links. The efficiency of such networks largely depends on the design of the routing protocol. To cater to the specific routing needs of such networks, the IETF has proposed the IPv6 Routing Protocol for LLNs (RPL) as the de facto routing standard. In RPL, the routing decision is based on a single parameter, which leads to the selection of inefficient paths and affects network lifetime. This work primarily focuses on improving the RPL protocol by overcoming the single-metric limitation. A novel version of RPL is proposed that uses a multilayer feed-forward neural network to make the routing decision based on multiple metrics. Four routing parameters, namely the hop count, delay, residual energy, and link quality of candidate neighbors, are fed as input to the ANN to compute the fitness of each candidate, and the one with the highest value is designated as the most suitable parent to route packets towards the sink node. Compared to the standard RPL implementation, this technique lowers energy consumption by 15%, improves the packet delivery ratio by 3%, lowers delay by 17%, and reduces the control overhead by 48%.
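A toy version of the parent-selection step might look as follows: a small feed-forward network maps the four normalized metrics to a fitness score, and the highest-scoring candidate becomes the preferred parent. The weights here are random stand-ins (they would come from offline training in a real deployment), and the normalization is an assumption; in practice, lower-is-better metrics such as hop count and delay would be inverted first.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy feed-forward net: 4 routing metrics in, one fitness score out.
# Weights are random here; in the paper they would come from training.
W1, b1 = rng.normal(size=(4, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)

def fitness(m):
    h = np.tanh(m @ W1 + b1)
    return (h @ W2 + b2).item()

# Candidate parents: [hop count, delay (ms), residual energy, link quality].
candidates = np.array([[2, 40, 0.9, 0.8],
                       [1, 65, 0.4, 0.9],
                       [3, 30, 0.7, 0.6]], dtype=float)
norm = candidates / candidates.max(axis=0)       # crude column-wise scaling
best = int(np.argmax([fitness(m) for m in norm]))
print("preferred parent:", best)
```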
Citations: 1
A Predictive Analysis on CO2 Emissions in Automobiles using Machine Learning Techniques
Pub Date: 2023-01-05 DOI: 10.1109/IDCIoT56793.2023.10053539
M. Manvitha, M. Vani Pujitha, N. Prasad, B. Yashitha Anju
On average, citizens in India emit 1.80 metric tonnes of CO2 each, which is highly detrimental to all living beings. Climate change and glacier melting are results of CO2 emissions, and sea levels are rising as a result of global warming, which is mostly caused by CO2. In the past, such prediction has been accomplished using statistical approaches including the t-test, ANOVA, ARIMA, and SARIMAX. Random Forest, Decision Tree, and regression models are increasingly used to forecast CO2 emissions. When several vehicle feature inputs are used, multivariate polynomial regression and multiple linear regression can reliably forecast the emissions; for inputs with a single feature, simple linear regression is used. CO2 emissions are predicted from factors including engine size, fuel type, cylinder count, vehicle class, and model. Python's Scikit-Learn and the Matplotlib package are used to analyze the CO2 emissions. The efficiency of the implemented models is assessed using performance metrics: the accuracy of each model is measured with the Regression Score (R2-Score), MAE (Mean Absolute Error), and MSE (Mean Squared Error).
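Since the abstract names Scikit-Learn and the three metrics explicitly, a minimal sketch of that evaluation loop follows. The vehicle data here is synthetic and the feature set is an assumed subset (engine size, cylinders, fuel consumption), not the paper's dataset.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import PolynomialFeatures
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score, mean_absolute_error, mean_squared_error

# Synthetic stand-in for the vehicle dataset: engine size (L), cylinders,
# fuel consumption (L/100 km) -> CO2 emissions (g/km).
rng = np.random.default_rng(0)
X = np.column_stack([rng.uniform(1.0, 6.0, 400),
                     rng.integers(3, 12, 400),
                     rng.uniform(4.0, 15.0, 400)])
y = 20 * X[:, 2] + 5 * X[:, 0] + rng.normal(0, 5, 400)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

for name, model in [("multiple linear", LinearRegression()),
                    ("polynomial deg 2", make_pipeline(PolynomialFeatures(2),
                                                       LinearRegression()))]:
    pred = model.fit(X_tr, y_tr).predict(X_te)
    print(name,
          "R2=%.3f" % r2_score(y_te, pred),
          "MAE=%.2f" % mean_absolute_error(y_te, pred),
          "MSE=%.2f" % mean_squared_error(y_te, pred))
```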
Citations: 0
A Comparative Survey on K-Means and Hierarchical Clustering in E-Commerce Systems
Pub Date: 2023-01-05 DOI: 10.1109/IDCIoT56793.2023.10053472
Chinnam Sasidhar Reddy, N. S. K. Deepak Rao, Atkuri Sisir, Vysyaraju Shanmukha Srinivasa Raju, S. S. Aravinth
E-commerce systems have grown in popularity and are now used in almost every business. An e-commerce system is a platform for online product marketing and customer promotion. Customer clustering is the process of categorizing consumers into segments that share similar characteristics; its goal is to help decide how to engage the clients in each category so as to maximize each customer's profit to the business. By segmenting their customer base, businesses can identify their most profitable customers and serve customer needs through improved products and optimized services. As a result, customer clustering assists e-commerce systems in promoting the appropriate product to the appropriate customer to increase profits. Customer clustering factors include geographic, psychological, behavioral, and demographic factors; this research highlights the consumer's behavioral factor. Accordingly, to discover consumption behavior in the e-shopping system, customers are analyzed using several clustering algorithms. Clustering seeks to maximize similarity within a cluster while minimizing dissimilarity between clusters; in this study, customers' age, gender, income, expenditure rate, etc. are correlated. To assist vendors in identifying and concentrating on the most profitable segments of the market, as opposed to the least profitable ones, this study compares several clustering techniques to find which clusters customer behavior more accurately. This kind of analysis plays a significant role in business improvement: to keep customers for a long time and boost profits, businesses group their customers based on similar behavioral traits, which also enables targeted disclosure of online offers to attract the attention of potential customers. K-Means and hierarchical clustering, both unsupervised algorithms, are applied to a customer dataset to compare which strategy gives the most accurate clustering.
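A compact sketch of the comparison described above, assuming a synthetic three-segment customer table and using silhouette score as the accuracy proxy (the paper's exact evaluation criterion is not stated here):

```python
import numpy as np
from sklearn.cluster import KMeans, AgglomerativeClustering
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import silhouette_score

# Synthetic customer table: [age, annual income, spending score], stand-ins
# for the behavioural features discussed above.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal([25, 30, 80], [4, 6, 8], (60, 3)),
               rng.normal([45, 90, 20], [6, 10, 8], (60, 3)),
               rng.normal([35, 60, 50], [5, 8, 10], (60, 3))])
X = StandardScaler().fit_transform(X)

for name, algo in [("k-means", KMeans(n_clusters=3, n_init=10, random_state=0)),
                   ("hierarchical", AgglomerativeClustering(n_clusters=3))]:
    labels = algo.fit_predict(X)
    print(name, "silhouette = %.3f" % silhouette_score(X, labels))
```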
Citations: 0