
Latest Articles in 物联网技术 (IoT Technology)

Hierarchical Fuzzy Methodologies for Energy Efficient Routing Protocol for Wireless Sensor Networks
Pub Date : 2023-01-05 DOI: 10.1109/IDCIoT56793.2023.10053474
M. Prabha, M. Anbarasan, S. Sunithamani, Mrs. K. Saranya
In recent years, wireless sensor networks have been widely used in numerous real-time applications such as WBAN monitoring and tracking. Recent developments in wireless networks have given rise to new and reliable methods for enhancing network lifetime, energy efficiency, and scalability. Clustering, the technique commonly used to manage sensor networks, closely couples the power consumption of the entire network to the energy level of each individual sensor node. To optimize data transmission, this article employs the fuzzy C-means algorithm to select cluster heads, jointly considering the energy available in each node and its distance from the base station. The study demonstrates that careful cluster-head selection and node clustering, which divide a large network into smaller clusters, can extend the lifespan of a network. The proposed network uses a multi-hop routing approach in which each sensor node can independently collect and send data, addressing the data-rate issue. The proposed cluster routing protocol was tested over 1000 data transmission rounds to assess its strengths and weaknesses in terms of network lifetime and energy efficiency. The choice of cluster head node, the distances between nodes, and the energy needed for subsequent data transmission are randomized in each round. Simulation results show that the proposed methodology outperforms state-of-the-art routing techniques and achieves promising network performance. Furthermore, the effect of hierarchical cluster-head selection points to the potential of this method for future WSN use. Computer simulations compare the network lifetime, measured in completed rounds, before and after applying the energy-efficient routing protocol and examine the associated performance metrics.
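The cluster-head selection step lends itself to a short illustration. The sketch below is a minimal, assumption-laden rendering of the idea: plain fuzzy C-means groups node positions, and each cluster's head is then scored on residual energy and distance to the base station. The field size, node count, energy values, and the 0.7/0.3 score weighting are illustrative stand-ins, not values from the paper.

```python
# Minimal fuzzy C-means cluster-head selection sketch for a WSN (all values illustrative).
import numpy as np

def fuzzy_c_means(X, c, m=2.0, iters=100, tol=1e-5, seed=0):
    """Plain fuzzy C-means: returns cluster centers and the membership matrix U."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)            # memberships of each node sum to 1
    for _ in range(iters):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        U_new = 1.0 / (d ** (2.0 / (m - 1.0)))   # standard FCM membership update
        U_new /= U_new.sum(axis=1, keepdims=True)
        if np.abs(U_new - U).max() < tol:
            return centers, U_new
        U = U_new
    return centers, U

rng = np.random.default_rng(1)
positions = rng.uniform(0, 100, size=(50, 2))    # 50 nodes on a 100 m x 100 m field
energy = rng.uniform(0.5, 2.0, size=50)          # residual energy (J), illustrative
base_station = np.array([50.0, 120.0])

centers, U = fuzzy_c_means(positions, c=5)
labels = U.argmax(axis=1)
for k in range(5):
    members = np.where(labels == k)[0]
    if len(members) == 0:
        continue
    # Score favors high residual energy and a short distance to the base station;
    # the 0.7/0.3 weighting is an assumption, not taken from the paper.
    dist_bs = np.linalg.norm(positions[members] - base_station, axis=1)
    score = 0.7 * (energy[members] / energy[members].max()) - 0.3 * (dist_bs / dist_bs.max())
    print(f"cluster {k}: head = node {members[score.argmax()]}")
```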
Citations: 0
Remora Optimization with Machine Learning Driven Churn Prediction for Business Improvement
Pub Date : 2023-01-05 DOI: 10.1109/IDCIoT56793.2023.10053554
Sumita Kumar, P. Baruah, S. Kirubakaran, A. S. Kumar, Kamlesh Singh, M. V. J. Reddy
Customer Relationship Management (CRM) is a comprehensive approach to constructing, handling, and establishing loyal and long-lasting customer relationships. It is widely acknowledged and executed in distinct domains, e.g., telecom, the retail market, banking, and insurance. A major objective is customer retention. Churn methods aim to recognize early churn signals and identify customers with a heightened likelihood of leaving voluntarily. Machine learning (ML) techniques are presented for tackling the difficulty of churn prediction. This paper presents a Remora Optimization with Machine Learning Driven Churn Prediction for Business Improvement (ROML-CPBI) technique, whose aim is to forecast the possibility of customer churn in the business sector. The ROML-CPBI technique encompasses two major processes, namely prediction and parameter tuning. In the initial stage, it utilizes a multi-kernel extreme learning machine (MKELM) for churn prediction. Secondly, the Remora Optimization (RO) algorithm adjusts the parameters of the MKELM model, thereby enhancing predictive outcomes. An extensive range of experiments was performed to validate the technique's performance, and the experimental values signified improved outcomes for the ROML-CPBI technique over the comparison methods.
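As a rough illustration of the prediction stage, the sketch below pairs a kernel extreme learning machine, using an assumed convex mix of RBF and polynomial kernels, with plain random search standing in for the Remora Optimization step, since neither the paper's exact kernel set nor the RO update rules are given here. The data, kernel choices, and search ranges are all fabricated for the demonstration.

```python
# Multi-kernel ELM sketch for churn prediction; random search stands in for RO.
import numpy as np

def mixed_kernel(A, B, gamma, w):
    """Convex mix of an RBF kernel and a degree-2 polynomial kernel (an assumption)."""
    sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    rbf = np.exp(-gamma * sq)
    poly = (A @ B.T + 1.0) ** 2
    return w * rbf + (1.0 - w) * poly

def kelm_fit_predict(Xtr, ytr, Xte, C, gamma, w):
    K = mixed_kernel(Xtr, Xtr, gamma, w)
    beta = np.linalg.solve(K + np.eye(len(Xtr)) / C, ytr)   # ridge-regularized output weights
    return mixed_kernel(Xte, Xtr, gamma, w) @ beta

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 5))                     # 5 synthetic customer features
y = (X[:, 0] - 0.5 * X[:, 2] + rng.normal(0, 0.3, 300) > 0).astype(float)
Xtr, Xte, ytr, yte = X[:200], X[200:], y[:200], y[200:]

best = (0.0, None)
for _ in range(50):                               # random search in place of Remora Optimization
    C, gamma, w = 10 ** rng.uniform(-1, 3), 10 ** rng.uniform(-2, 1), rng.uniform(0, 1)
    acc = ((kelm_fit_predict(Xtr, ytr, Xte, C, gamma, w) > 0.5) == yte).mean()
    if acc > best[0]:
        best = (acc, (C, gamma, w))
print("best accuracy %.3f with (C, gamma, w) =" % best[0], best[1])
```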
Citations: 0
Artificial Neural Network (ANN) Enabled Weather Monitoring and Prediction System using IoT
Pub Date : 2023-01-05 DOI: 10.1109/IDCIoT56793.2023.10053534
P. Krishna, Kongara Chandra Bhanu, Shaik Akram Ahamed, Myneni Umesh Chandra, Neelapu Prudhvi, Nandigama Apoorva
This article proposes a weather monitoring and prediction system using an artificial neural network (ANN) with the Internet of Things (IoT) for diverse applications. Neural networks provide the ability to perform computation and learning, and can solve difficult problems that appear computationally intractable. The output neurons produce a function of the inputs from the input neurons and of their own previous outputs. The proposed system is developed on an IoT platform using an ESP32 microcontroller, with different sensors interfaced to capture the input parameters. All the acquired sensor information can be visualized in the ThingSpeak cloud as well as in a mobile application. Once data is acquired from the sensors, the system processes it against the available data set and produces an output that reaches the end user via the cloud, text message, and email. As a proof of concept, the system is tested in different weather conditions.
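The upload path from microcontroller to cloud can be shown in a few lines. The sketch below, written in ordinary Python rather than the ESP32's firmware, posts readings to ThingSpeak's public REST update endpoint; the write API key, the field mapping, and the hard-coded sample readings are placeholders.

```python
# Minimal sensor-to-ThingSpeak upload sketch. THINGSPEAK_KEY is a placeholder,
# and read_sensors() stands in for the paper's ESP32 sensor interface.
import time
import requests

THINGSPEAK_KEY = "YOUR_WRITE_API_KEY"

def read_sensors():
    # Stand-in values; on the real device these come from the interfaced sensors.
    return 24.1, 63.0, 1012.5            # temperature (C), humidity (%), pressure (hPa)

def push_reading(temperature_c, humidity_pct, pressure_hpa):
    # ThingSpeak's REST update endpoint maps each value to a channel field.
    resp = requests.get(
        "https://api.thingspeak.com/update",
        params={"api_key": THINGSPEAK_KEY, "field1": temperature_c,
                "field2": humidity_pct, "field3": pressure_hpa},
        timeout=10,
    )
    return resp.text != "0"              # ThingSpeak returns "0" when the update fails

for _ in range(3):
    t, h, p = read_sensors()
    print("uploaded:", push_reading(t, h, p))
    time.sleep(20)                       # free tier allows roughly one update per 15 s
```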
Citations: 0
Enhanced Edge Computing Model by using Data Combs for Big Data in Metaverse
Pub Date : 2023-01-05 DOI: 10.1109/IDCIoT56793.2023.10053519
Lakshmikanth Rajath Mohan T, N. Jayapandian
The Metaverse is a huge project undertaken by Facebook to bring the world closer together and help people live out their dreams. Even people with disabilities could travel across the world; anyone could visit any place while remaining safe in the comfort of their home. Meta (previously Facebook) plans to execute this using a combination of augmented reality (AR) and virtual reality (VR), and aims to bring the technology to people soon. However, a major factor that must be accounted for is the amount of data generation that will take place. Many computer science professors and scientists believe that the amount of data Meta will generate in one day would almost equal the amount of data Instagram and Facebook have generated in their entire lifetimes, pushing overall data generation up by at least 30%, if not more. Traditional methods such as cloud computing may fall short in the near future, because the servers might not be able to handle such large amounts of data. The solution should be a system designed specifically for handling extremely large data: secure, resilient, and robust, able to handle multiple requests and connections at once, and able to avoid slowing down as the number of requests gradually increases over time. This model provides a solution called the Data Hive Architecture (DHA). DHAs are made up of multiple subunits called Data Combs, which are further broken down into data cells: small units of memory that can process big data extremely fast. When information stored across multiple edges worldwide is requested by a client (for example, a data warehouse), the Data Combs rearrange the data cells within them on the basis of the requested criteria. This article explains the concept of data combs and its usage in the Metaverse.
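Since the paper describes DHAs, combs, and cells only conceptually, the toy sketch below is one possible reading of the rearrangement idea: cells that match requests more often migrate to the front of the comb. Every class, field, and the hit-count reorder policy is an assumption made for illustration, not the paper's design.

```python
# Toy Data Comb sketch: query-driven rearrangement of data cells (all assumptions).
from dataclasses import dataclass, field

@dataclass
class DataCell:
    key: str
    payload: bytes
    hits: int = 0                          # how often this cell matched a request

@dataclass
class DataComb:
    cells: list = field(default_factory=list)

    def query(self, predicate):
        """Return matching cells, then rearrange so hot cells sit at the front."""
        matches = [c for c in self.cells if predicate(c)]
        for c in matches:
            c.hits += 1
        self.cells.sort(key=lambda c: c.hits, reverse=True)   # rearrangement step
        return matches

comb = DataComb([DataCell("eu/sensor/42", b"..."), DataCell("us/cam/7", b"...")])
comb.query(lambda c: c.key.startswith("eu/"))
print([c.key for c in comb.cells])         # frequently requested cells now lead the comb
```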
Citations: 0
A Review on Autopilot using Neuro Evaluation of Augmenting Topologies
Pub Date : 2023-01-05 DOI: 10.1109/IDCIoT56793.2023.10053395
K. C. Reddy, V. K, Faraz Ahmed Mulla, G. N. H. Kumar, J. Prajwal, M. Gopal
Autonomous lateral movement is a significant challenge for self-driving cars; therefore, the major target of this paper is to replicate propulsion in order to improve self-driving performance. This work focuses on using multilayer neural networks and deep learning techniques to operate self-driving cars under simulated conditions. Within the simulator, the driver's vision and reactions are mimicked by preprocessing the images obtained from a camera-mounted vehicle. A neural network trains the deep learning model on the images captured by the camera in manual mode, and the trained multi-layer network then handles various conditions when driving the car in self-driving mode. The driver-imitation algorithms created and characterized in this work are built on deep learning techniques.
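The image-preprocessing step that mimics the driver's vision typically crops, recolors, resizes, and normalizes each simulator frame before it reaches the network. The sketch below shows one common such pipeline; the crop rows, YUV color space, 200x66 target size, and normalization are conventional choices assumed here, not taken from the paper.

```python
# One common camera-frame preprocessing pipeline for behavior-cloning networks.
import cv2
import numpy as np

def preprocess(frame_bgr):
    roi = frame_bgr[60:135, :, :]                  # drop sky and hood, keep the road
    yuv = cv2.cvtColor(roi, cv2.COLOR_BGR2YUV)     # YUV is a common choice for steering nets
    small = cv2.resize(yuv, (200, 66))             # compact input for the network
    return small.astype(np.float32) / 255.0 - 0.5  # zero-centered pixels

frame = np.zeros((160, 320, 3), dtype=np.uint8)    # stand-in for a simulator frame
x = preprocess(frame)
print(x.shape)                                     # (66, 200, 3), ready for training
```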
Citations: 0
Maximizing the Net Present Value of Resource-Constrained Project Scheduling Problems using Recurrent Neural Network with Genetic Algorithm
Pub Date : 2023-01-05 DOI: 10.1109/IDCIoT56793.2023.10053390
Tshewang Phuntsho, T. Gonsalves
Scheduling long-term, financially interdependent projects constrained by resources is of the utmost significance to project and finance managers. A new heuristic based on a modified recurrent neural network (RNN) employing a parallel schedule generation scheme (PSGS) is proposed to solve the resource-constrained project scheduling problem with discounted cash flows (RCPSPDC). To resolve the exploding/vanishing gradient problem of the RNN, a genetic algorithm (GA) is employed to optimize its weight matrices. The GA takes advantage of p-point crossover and m-point mutation operators, besides utilizing elitism and tournament strategies, to diversify and evolve the population. The proposed RNN architecture, implemented in the Julia language, is evaluated on projects sampled from the well-known dataset of 17,280 project instances. This article establishes the superior performance of the proposed architecture compared to existing state-of-the-art standalone meta-heuristic techniques, besides offering transfer learning capabilities. The technique can easily be hybridized with existing architectures to achieve remarkable performance.
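To make the evolutionary machinery concrete, the Python sketch below (the paper itself uses Julia) evolves a flat weight vector with tournament selection, p-point crossover, m-point mutation, and elitism. The fitness function is a placeholder; in the paper, weights would be scored by the net present value of the schedule the RNN and PSGS produce. All sizes and rates are illustrative.

```python
# GA over a flat weight vector: tournament selection, p-point crossover,
# m-point mutation, elitism. fitness() is a stand-in for NPV of the RNN+PSGS schedule.
import numpy as np

rng = np.random.default_rng(0)
DIM, POP, GENS, P_POINTS, M_POINTS = 40, 30, 100, 2, 3

def fitness(w):
    return -np.sum((w - 0.5) ** 2)           # placeholder objective (maximize)

def tournament(pop, scores, k=3):
    idx = rng.choice(len(pop), size=k, replace=False)
    return pop[idx[np.argmax(scores[idx])]]

def p_point_crossover(a, b):
    cuts = np.sort(rng.choice(np.arange(1, DIM), size=P_POINTS, replace=False))
    child, take_a, prev = a.copy(), True, 0
    for cut in list(cuts) + [DIM]:           # alternate segments from each parent
        if not take_a:
            child[prev:cut] = b[prev:cut]
        take_a, prev = not take_a, cut
    return child

def m_point_mutation(w, scale=0.1):
    pts = rng.choice(DIM, size=M_POINTS, replace=False)
    w[pts] += rng.normal(0, scale, size=M_POINTS)
    return w

pop = rng.normal(size=(POP, DIM))
for _ in range(GENS):
    scores = np.array([fitness(w) for w in pop])
    elite = pop[np.argmax(scores)].copy()    # elitism keeps the best individual
    children = [m_point_mutation(p_point_crossover(tournament(pop, scores),
                                                   tournament(pop, scores)))
                for _ in range(POP - 1)]
    pop = np.vstack([elite] + children)
print("best fitness:", max(fitness(w) for w in pop))
```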
Citations: 0
Detection of Crime Scene Objects using Deep Learning Techniques
Pub Date : 2023-01-05 DOI: 10.1109/IDCIoT56793.2023.10053440
Nandhini T J, K. Thinakaran
Research on the detection of objects at crime scenes has flourished in the last two decades. Researchers have concentrated on color pictures, where lighting is a crucial component, since this is one of the most pressing issues in computer vision, with applications spanning surveillance, security, medicine, and more. However, night-time monitoring is crucial, since many security problems cannot be seen by the naked eye; that is why recording a dark scene and identifying the objects at a crime scene matters. Even in darkness, infrared cameras are indispensable, and both military and civilian sectors benefit from such methods for night-time navigation. On the other hand, IR photographs suffer from poor resolution, lighting effects, and similar issues. Surveillance cameras with infrared (IR) imaging capabilities have therefore been the focus of much study and development in recent years. This research work offers a deep learning model for object recognition in IR images obtained from crime scenes. The model is tested on several platforms, including a central processing unit (CPU), Google Colab, and a graphics processing unit (GPU), and its performance is tabulated.
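The paper trains its own detector, but the inference pattern on an IR frame can be illustrated with a stock model: replicate the single IR channel to three channels and run a pretrained detector. The sketch below uses torchvision's COCO-trained Faster R-CNN purely as a stand-in for the paper's model; the random frame, its size, and the score threshold are assumptions.

```python
# Running a pretrained detector on an IR-like frame (stock model as a stand-in).
import torch
import torchvision

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

ir = torch.rand(1, 480, 640)              # stand-in for a normalized IR frame in [0, 1]
rgb_like = ir.repeat(3, 1, 1)             # the RGB-trained detector expects 3 channels

with torch.no_grad():
    out = model([rgb_like])[0]            # boxes, labels, scores for one image
keep = out["scores"] > 0.5                # keep confident detections only
print(out["boxes"][keep], out["labels"][keep])
```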
Citations: 8
Framework for Implementation of Personality Inventory Model on Natural Language Processing with Personality Traits Analysis
Pub Date : 2023-01-05 DOI: 10.1109/IDCIoT56793.2023.10053501
P. William, Y. N, V. M. Tidake, Snehal Sumit Gondkar, Chetana. R, K. Vengatesan
The phrase "personality" refers to an individual's distinct mode of thought, action, and behaviour Personality is a collection of feelings, thoughts, and aspirations that may be seen in the way people interact with one another. Behavioural features that separate one person from another and may be clearly seen when interacting with individuals in one's immediate surroundings and social group are included in this category of traits. To improve good healthy discourse, a variety of ways for evaluating candidate personalities based on the meaning of their textual message have been developed. According to the research, the textual content of interview responses to conventional interview questions is an effective measure for predicting a person's personality attribute. Nowadays, personality prediction has garnered considerable interest. It analyses user activity and displays their ideas, feelings, and so on. Historically, defining a personality trait was a laborious process. Thus, automated prediction is required for a big number of users. Different algorithms, data sources, and feature sets are used in various techniques. As a way to gauge someone's personality, personality prediction has evolved into an important topic of research in both psychology and computer science. Candidate personality traits may be classified using a word embedding model, which is the subject of this article.
Citations: 16
Implementation of Motorist Weariness Detection System using a Conventional Object Recognition Technique
Pub Date : 2023-01-05 DOI: 10.1109/IDCIoT56793.2023.10052783
Khushi Gupta, Siddhartha Choubey, Y. N, P. William, V. N., Chaitanya P. Kale
Detecting driver drowsiness is a crucial problem in the sector of accident-avoidance technologies, motivating the development of an innovative intelligent system. The system also prioritizes safety concerns such as alerting the affected driver and detecting yawning. The technique behind this system is a sophisticated machine learning algorithm that can identify the driver's facial expressions and quantify the rate of driver sleepiness; an alarm is activated to make the driver alert when he or she becomes fatigued. The eye aspect ratio (EAR) is used to recognize the system's drowsiness rate: facial landmarks are localized, and the EAR extracted from them gives the drowsiness rate. Current approaches, however, have significant shortcomings due to the considerable unpredictability of surrounding conditions. Poor lighting may impair the camera's ability to precisely measure the driver's face and eyes, which affects the image processing analysis, leads to late detection or no detection, and undermines the technique's accuracy and efficiency. Numerous strategies were investigated and analyzed to determine the optimal technique with the maximum accuracy for detecting driver tiredness. This paper proposes the implementation of a real-time system that uses a camera to automatically trace and process the driver's eyes with the Dlib Python library and OpenCV. The driver's eye area is continually monitored and evaluated for drowsiness before an output alarm is generated to notify the driver.
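Since the abstract names the ingredients, dlib landmarks, OpenCV capture, and the EAR, the sketch below wires them together in the standard way. The 0.25 threshold and the 48-frame (roughly 2 s) persistence window are common defaults rather than the paper's values, and the 68-point shape predictor file must be downloaded separately.

```python
# Standard EAR drowsiness check with dlib's 68-point landmarks and OpenCV capture.
import cv2
import dlib
from scipy.spatial import distance as dist

def eye_aspect_ratio(eye):
    # EAR = (|p2-p6| + |p3-p5|) / (2 |p1-p4|): falls toward 0 as the eye closes
    a = dist.euclidean(eye[1], eye[5])
    b = dist.euclidean(eye[2], eye[4])
    c = dist.euclidean(eye[0], eye[3])
    return (a + b) / (2.0 * c)

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")  # download separately
EAR_THRESHOLD, closed_frames = 0.25, 0

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for face in detector(gray):
        pts = predictor(gray, face)
        left = [(pts.part(i).x, pts.part(i).y) for i in range(36, 42)]   # left-eye landmarks
        right = [(pts.part(i).x, pts.part(i).y) for i in range(42, 48)]  # right-eye landmarks
        ear = (eye_aspect_ratio(left) + eye_aspect_ratio(right)) / 2.0
        closed_frames = closed_frames + 1 if ear < EAR_THRESHOLD else 0
        if closed_frames >= 48:            # ~2 s of closed eyes at 24 fps -> alarm
            print("DROWSINESS ALERT")
    if cv2.waitKey(1) == 27:               # Esc quits
        break
cap.release()
```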
Citations: 23
Gaussian Approximation based WCDMA and OFDMA System Performance Investigation for Various Fading Channels
Pub Date : 2023-01-05 DOI: 10.1109/IDCIoT56793.2023.10053401
Parveen Singla, Vikas Gupta, Rinkesh Mittal, Ramanpreet Kaur, Jaskirat Kaur
Wideband Code Division Multiple Access (WCDMA) and Orthogonal Frequency Division Multiple Access (OFDMA) are the foundations of modern wireless systems aimed at providing enriched services. Channel impairments, however, always put a limit on modern systems, including AC-MIMO radio, 802.11ac, and LTE/VoLTE. Here, the behavior of WCDMA- and OFDMA-based systems is analyzed by means of the widely recognized standard Gaussian approximation (GA), in which the interference and noise affecting the system are generated from mean and variance approximations of the noise power. To generate the faded transmitted signal, Weibull, Rayleigh, Rician, and Nakagami distributions are applied to the systems. OFDMA and WCDMA system performance in the different fading environments is observed through error-rate graphs. It is validated that including fading in the system increases the error rate and that the OFDMA system performs much better than the WCDMA system.
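The core of such an error-rate study, faded symbols plus Gaussian noise, detection, and error counting, fits in a short Monte Carlo loop. The sketch below simulates plain BPSK rather than the full WCDMA/OFDMA chains, so it only illustrates how the four fading envelopes raise the bit error rate relative to AWGN; the SNR point, Rician K factor, Nakagami m, and Weibull shape are arbitrary choices.

```python
# BPSK-over-fading BER Monte Carlo: unit-power envelopes, coherent detection.
import numpy as np

rng = np.random.default_rng(0)
N = 200_000
bits = rng.integers(0, 2, N)
sym = 2 * bits - 1                                   # BPSK mapping to +/- 1

def envelope(kind):
    if kind == "awgn":
        return np.ones(N)                            # no fading
    if kind == "rayleigh":
        return rng.rayleigh(scale=1 / np.sqrt(2), size=N)     # E[h^2] = 1
    if kind == "rician":                             # K = 3 line-of-sight factor
        k = 3.0
        los, s = np.sqrt(k / (k + 1)), np.sqrt(1 / (2 * (k + 1)))
        return np.abs(los + s * (rng.normal(size=N) + 1j * rng.normal(size=N)))
    if kind == "nakagami":                           # m = 2, unit power
        return np.sqrt(rng.gamma(shape=2.0, scale=0.5, size=N))
    if kind == "weibull":
        return rng.weibull(2.0, size=N)              # shape 2, unit power

snr_db = 10
noise_std = np.sqrt(1 / (2 * 10 ** (snr_db / 10)))   # Eb/N0 -> per-sample sigma
for kind in ["awgn", "rayleigh", "rician", "nakagami", "weibull"]:
    r = envelope(kind) * sym + noise_std * rng.normal(size=N)  # faded signal + AWGN
    ber = np.mean((r > 0).astype(int) != bits)       # coherent detection (envelope > 0)
    print(f"{kind:9s} BER at {snr_db} dB: {ber:.4f}")
```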
Citations: 0