Pub Date: 2023-01-05 | DOI: 10.1109/IDCIoT56793.2023.10053474
M. Prabha, M. Anbarasan, S. Sunithamani, Mrs. K. Saranya
In recent years, wireless sensor networks (WSNs) have been widely used in numerous real-time applications such as WBAN monitoring and tracking. Recent developments in wireless networks have given rise to new and reliable methods for enhancing network lifetime, energy efficiency, and scalability. Clustering, the most commonly used technique for managing sensor networks, closely ties the power consumption of the entire network to the energy level of each individual sensor node. To optimize data transmission, this article employs the fuzzy C-means algorithm to select cluster heads, considering the energy available at each node and its distance from the base station. This study demonstrates how careful cluster head selection and node clustering, which divides a large network into smaller clusters, can extend network lifetime. The proposed network uses a multi-hop routing approach, in which each sensor node can independently collect and forward data, to address the data rate issue. The proposed cluster routing protocol was tested over 1000 data transmission rounds to expose its strengths and weaknesses in terms of network lifetime and energy efficiency. The cluster head choice, the distances between nodes, and the energy required for subsequent data transmission are randomized in each round. The simulation results show that the proposed methodology outperforms state-of-the-art routing techniques and achieves promising network performance. Furthermore, the effects of hierarchical cluster head selection point to the potential of this method for future WSN deployments. The simulation experiments include comparing network lifetime over an increasing number of rounds before and after applying the energy-efficient routing protocol, and examining performance metrics for network lifetime.
{"title":"Hierarchical Fuzzy Methodologies for Energy Efficient Routing Protocol for Wireless Sensor Networks","authors":"M. Prabha, M. Anbarasan, S. Sunithamani, Mrs. K. Saranya","doi":"10.1109/IDCIoT56793.2023.10053474","DOIUrl":"https://doi.org/10.1109/IDCIoT56793.2023.10053474","url":null,"abstract":"In recent years, the wireless sensor networks are widely used in numerous real-time applications such as WBAN monitoring and tracking. Recent developments in wireless networks have given rise to new and reliable methods for enhancing network lifetime, energy efficiency, and scalability. The power consumption of the entire wireless sensor network and the energy level of each individual sensor node are closely related by the commonly used clustering technique to manage the sensor networks. In order to optimize data transmission, the fuzzy C means algorithm is employed in this article to thoroughly analyze the cluster head while considering the energy that is available in each node and distance metrics from the base station. This study demonstrates how carefully choosing cluster heads and node clustering, which divides large networks into smaller clusters, can enhance the lifespan of a network. The proposed network uses a multi-hop routing approach, where each sensor node can independently collect and send data in order to address the data rate issue. The suggested cluster routing protocol was put to the test with 1000 data transmission rounds to demonstrate its strengths and weaknesses in terms of network lifetime and energy efficiency. The choice of the cluster head node, the distance between the nodes, and the amount of energy needed for subsequent data transmission are all considered to be random for each round. The simulation results show that the suggested methodology beats cutting-edge routing techniques and achieves a promising network performance. Furthermore, the effects of hierarchical cluster head selection point to the potential of our method for use in WSN in the future. The following tests were performed using computer simulation, including comparing the effect of network life on the increased number of rounds before and after the influence of an energy-efficient routing protocol, and examining performance metrics for network lifetime.","PeriodicalId":60583,"journal":{"name":"物联网技术","volume":"25 1","pages":"989-992"},"PeriodicalIF":0.0,"publicationDate":"2023-01-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"81477151","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-01-05 | DOI: 10.1109/IDCIoT56793.2023.10053554
Sumita Kumar, P. Baruah, S. Kirubakaran, A. S. Kumar, Kamlesh Singh, M. V. J. Reddy
Customer Relationship Management (CRM) is a comprehensive approach to building, handling, and maintaining loyal and long-lasting customer relationships. It is widely acknowledged and executed across distinct domains, e.g., telecom, the retail market, banking, and insurance. A major objective is customer retention. Churn methods aim to recognize early churn signals and identify customers with a heightened likelihood of leaving voluntarily. Machine learning (ML) techniques are presented for tackling the churn prediction problem. This paper presents a Remora Optimization with Machine Learning Driven Churn Prediction for Business Improvement (ROML-CPBI) technique. The aim of the ROML-CPBI technique is to forecast the likelihood of customer churn in the business sector. The ROML-CPBI technique encompasses two major processes, namely prediction and parameter tuning. In the initial stage, the ROML-CPBI technique utilizes a multi-kernel extreme learning machine (MKELM) for churn prediction. Secondly, the RO algorithm is applied to adjust the parameters of the MKELM model, thereby yielding enhanced predictive outcomes. To validate the superior performance of the ROML-CPBI technique, an extensive range of experiments was performed. The experimental values signified the improved outcomes of the ROML-CPBI technique over the others.
{"title":"Remora Optimization with Machine Learning Driven Churn Prediction for Business Improvement","authors":"Sumita Kumar, P. Baruah, S. Kirubakaran, A. S. Kumar, Kamlesh Singh, M. V. J. Reddy","doi":"10.1109/IDCIoT56793.2023.10053554","DOIUrl":"https://doi.org/10.1109/IDCIoT56793.2023.10053554","url":null,"abstract":"Customer Relationship Management (CRM) is a complete approach to constructing, handling, and establishing loyal and long-lasting customer relationships. It is mostly acknowledged and widely executed for distinct domains, e.g., telecom, retail market, banking and insurance, and so on. A major objective is customer retention. The churn methods drive to recognize early churn signals and identify customers with an enhanced possibility to leave voluntarily. Machine learning (ML) techniques are presented for tackling the churning prediction difficult. This paper presents a Remora Optimization with Machine Learning Driven Churn Prediction for Business Improvement (ROML-CPBI) technique. The aim of the ROML-CPBI technique is to forecast the possibility of customer churns in the business sector. The working of the ROML-CPBI technique encompasses two major processes namely prediction and parameter tuning. At the initial stage, the ROML-CPBI technique utilizes multi-kernel extreme learning machine (MKELM) technique for churn prediction purposes. Secondly, the RO algorithm is applied for adjusting the parameters related to the MKELM model and thereby results in enhanced predictive outcomes. For validating the greater performance of the ROML-CPBI technique, an extensive range of experiments were performed. The experimental values signified the improved outcomes of the ROML-CPBI technique over other ones.","PeriodicalId":60583,"journal":{"name":"物联网技术","volume":"31 1","pages":"416-420"},"PeriodicalIF":0.0,"publicationDate":"2023-01-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"82905086","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-01-05 | DOI: 10.1109/IDCIoT56793.2023.10053534
P. Krishna, Kongara Chandra Bhanu, Shaik Akram Ahamed, Myneni Umesh Chandra, Neelapu Prudhvi, Nandigama Apoorva
This article proposes a weather monitoring and prediction system using an ANN with the Internet of Things (IoT) for diverse applications. Neural networks provide the ability to perform computation and learning, and can solve difficult problems that appear to be computationally intractable. The output neurons produce a function of the inputs from the input neurons and their previous outputs. The proposed system is developed on an IoT basis using an ESP32 microcontroller, with different sensors interfaced to acquire the input parameters. All the acquired sensor information can be visualized in the ThingSpeak cloud as well as in a mobile application. Once the data is acquired from the sensors, it is processed by the system against the available data set, and the output can be delivered on the cloud and by text message and email to the end user. As a proof of concept, the system has been tested in different weather conditions.
{"title":"Artificial Neural Network (ANN) Enabled Weather Monitoring and Prediction System using IoT","authors":"P. Krishna, Kongara Chandra Bhanu, Shaik Akram Ahamed, Myneni Umesh Chandra, Neelapu Prudhvi, Nandigama Apoorva","doi":"10.1109/IDCIoT56793.2023.10053534","DOIUrl":"https://doi.org/10.1109/IDCIoT56793.2023.10053534","url":null,"abstract":"This article proposes a weather monitoring and prediction system using ANN with Internet of things (IoT) for the diverse applications. Neural networks provide an ability to perform computation and learning. Neural networks can solve difficult problems that appear to be computationally interact. The output neurons are responsible for producing a function of the inputs from the input neurons and their previous outputs. The proposed system is developed based on IoT using ES P32 microcontroller by interfacing different sensors to take the input parameters. All the acquired sensor information can be visualized in the ThingSpeak cloud as well as in the mobile application. Once the data is acquired from the sensors it is processed by the system with the available data set and gives the output, that can be seen on the cloud, text message, and mail to the end user. As the proof of concepts, it is tested in different weather conditions.","PeriodicalId":60583,"journal":{"name":"物联网技术","volume":"25 1","pages":"46-51"},"PeriodicalIF":0.0,"publicationDate":"2023-01-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"89867961","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-01-05 | DOI: 10.1109/IDCIoT56793.2023.10053519
Lakshmikanth Rajath Mohan T, N. Jayapandian
The Metaverse is a huge project undertaken by Facebook to bring the world closer together and help people live out their dreams. Even people with disabilities can travel across the world: anyone can visit any place while remaining safe in the comfort of their own home. Meta (previously Facebook) plans to achieve this by using a combination of AR and VR (Augmented Reality and Virtual Reality), and aims to bring the technology to people soon. However, a significant factor that must be accounted for is the amount of data generation that will take place. Many computer science professors and scientists believe that the amount of data Meta will generate in one day would almost equal the amount of data Instagram and Facebook have generated in their entire lifetime, pushing total data generation up by at least 30%, if not more. Traditional methods such as cloud computing may prove inadequate in the near future, because the servers might not be able to handle such large amounts of data. The solution should be a system designed specifically for handling extremely large data: one that is not only secure, resilient, and robust, but can also handle many requests and connections at once without slowing down as the number of requests grows over time. In this model, a solution called the DHA (Data Hive Architecture) is provided. A DHA is made up of multiple subunits called Data Combs, which are further broken down into data cells: small units of memory that can process big data extremely fast. When a client (for example, a data warehouse) requests information that is stored across multiple edges around the world, the Data Combs rearrange their data cells according to the requested criteria. This article explains this concept of data combs and its usage in the Metaverse.
{"title":"Enhanced Edge Computing Model by using Data Combs for Big Data in Metaverse","authors":"Lakshmikanth Rajath Mohan T, N. Jayapandian","doi":"10.1109/IDCIoT56793.2023.10053519","DOIUrl":"https://doi.org/10.1109/IDCIoT56793.2023.10053519","url":null,"abstract":"The Metaverse is a huge project undertaken by Facebook in order to bring the world closer together and help people live out their dreams. Even handicapped can travel across the world. People can visit any place and would be safe in the comfort of their homes. Meta (Previously Facebook) plans to execute this by using a combination of AR and VR (Augmented Reality and Virtual Reality). Facebook aims to bring this technology to the people soon. However, a big factor in this idea that needs to be accounted for is the amount of data generation that will take place. Many Computer Science professors and scientists believe that the amount of data Meta is going to generate in one day would almost be equal to the amount of data Instagram/Facebook would have generated in their entire lifetime. This will push the entire data generation by at least 30%, if not more. Using traditional methods such as cloud computing might seem to become a shortcoming in the near future. This is because the servers might not be able to handle such large amounts of data. The solution to this problem should be a system that is designed specifically for handling data that is extremely large. A system that is not only secure, resilient and robust but also must be able to handle multiple requests and connections at once and yet not slow down when the number of requests increases gradually over time. In this model, a solution called the DHA (Data Hive Architecture) is provided. These DHAs are made up of multiple subunits called Data Combs and those are further broken down into data cells. These are small units of memory which can process big data extremely fast. When information is requested from a client (Example: A Data Warehouse) that is stored in multiple edges across the world, then these Data Combs rearrange the data cells within them on the basis of the requested criteria. This article aims to explain this concept of data combs and its usage in the Metaverse.","PeriodicalId":60583,"journal":{"name":"物联网技术","volume":"72 1","pages":"249-255"},"PeriodicalIF":0.0,"publicationDate":"2023-01-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"85838463","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-01-05 | DOI: 10.1109/IDCIoT56793.2023.10053395
K. C. Reddy, V. K, Faraz Ahmed Mulla, G. N. H. Kumar, J. Prajwal, M. Gopal
Autonomous lateral movement is a significant challenge for self-driving cars; therefore, the main goal of this paper is to replicate propulsion in order to improve the performance of self-driving cars. This work focuses on using multilayer neural networks and deep learning techniques to operate self-driving cars under simulated conditions. Within the simulator, the driver's vision and reactions are mimicked by preprocessing the images obtained from a camera-mounted vehicle. The neural network trains the deep learning model on the images captured by the camera in manual mode, and the trained multi-layer neural network then handles various conditions for driving the car in self-driving mode. The driver imitation algorithms created and characterized in this work are based entirely on deep learning techniques.
{"title":"A Review on Autopilot using Neuro Evaluation of Augmenting Topologies","authors":"K. C. Reddy, V. K, Faraz Ahmed Mulla, G. N. H. Kumar, J. Prajwal, M. Gopal","doi":"10.1109/IDCIoT56793.2023.10053395","DOIUrl":"https://doi.org/10.1109/IDCIoT56793.2023.10053395","url":null,"abstract":"Autonomous lateral movement is the significant challenge for self-driving cars; therefore, the major target of this paper is to replicate propulsion in order to improve the performance of self-driving cars. This work focuses on using multilayer neural networks and deep learning techniques to enable operating self-driving cars under stimulus conditions. Within the simulator, the driver`s vision and reactions are mimicked by preprocessing the images obtained from a cameramounted vehicle. Neural network trains the deep learning model based on the images captured by camera in manual mode. The trained multi-layer neural network creates various conditions for driving a car in self-driving mode. The driver imitation algorithms created and characterized in this work are all about profound learning techniques.","PeriodicalId":60583,"journal":{"name":"物联网技术","volume":"17 1","pages":"573-577"},"PeriodicalIF":0.0,"publicationDate":"2023-01-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"87406798","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-01-05 | DOI: 10.1109/IDCIoT56793.2023.10053390
Tshewang Phuntsho, T. Gonsalves
Scheduling long-term and financially dependent projects constrained by resources is of the utmost significance to project and finance managers. A new technique based on a modified Recurrent Neural Network (RNN) employing a Parallel Schedule Generation Scheme (PSGS) is proposed as a heuristic method to solve the resource-constrained project scheduling problem with discounted cash flows (RCPSPDC). To resolve the exploding/vanishing gradient problem of the RNN, a Genetic Algorithm (GA) is employed to optimize its weight matrices. The GA takes advantage of p-point crossover and m-point mutation operators, besides utilizing elitism and tournament strategies to diversify and evolve the population. The proposed RNN architecture, implemented in the Julia language, is evaluated on projects sampled from the well-known 17,280-instance project dataset. This article establishes the superior performance of the proposed architecture compared to existing state-of-the-art standalone meta-heuristic techniques, besides having transfer learning capabilities. The technique can easily be hybridized with existing architectures to achieve remarkable performance.
{"title":"Maximizing the Net Present Value of Resource-Constrained Project Scheduling Problems using Recurrent Neural Network with Genetic Algorithm","authors":"Tshewang Phuntsho, T. Gonsalves","doi":"10.1109/IDCIoT56793.2023.10053390","DOIUrl":"https://doi.org/10.1109/IDCIoT56793.2023.10053390","url":null,"abstract":"Scheduling long-term and financially dependent projects constrained by resources are of the utmost significance to project and finance managers. A new technique based on a modified Recurrent Neural Network (RNN) employing Parallel Schedule Generation Scheme (PSGS) is proposed as heuristics method to solve this discounted cash flows for resource-constrained project scheduling (RCPSPDC). To resolve the gradient exploding/vanishing problem of RNN, a Genetic Algorithm (GA) is employed to optimize its weight matrices. Our GA takes advantage of p-point crossover and m-point mutation operators besides utilizing elitism and tournament strategies to diversify and evolve the population. The proposed RNN architecture implemented in Julia language is evaluated on sampled projects from well-known 17,280 project instances dataset. This article, establishes the superior performance of our proposed architecture when compared to existing state-of-the-art standalone meta-heuristic techniques, besides having transfer learning capabilities. This technique can easily be hybridized with existing architectures to achieve remarkable performance.","PeriodicalId":60583,"journal":{"name":"物联网技术","volume":"81 1","pages":"524-530"},"PeriodicalIF":0.0,"publicationDate":"2023-01-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"72689492","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-01-05 | DOI: 10.1109/IDCIoT56793.2023.10053440
Nandhini T J, K. Thinakaran
Research on the detection of objects at crime scenes has flourished in the last two decades. Researchers have concentrated on color pictures, where lighting is a crucial component, since this is one of the most pressing issues in computer vision, with applications spanning surveillance, security, medicine, and more. However, night-time monitoring is crucial, since most security problems cannot be seen by the naked eye; it is therefore essential to record a dark scene and identify the objects at a crime scene. Even when it is dark out, infrared cameras are indispensable, and both military and civilian sectors will benefit from such methods for night-time navigation. On the other hand, IR photographs suffer from poor resolution, lighting effects, and other similar issues. Surveillance cameras with infrared (IR) imaging capabilities have been the focus of much research and development in recent years. This research work offers a model for object recognition using deep learning on IR images obtained from crime scenes. The model is tested in several environments, including a central processing unit (CPU), Google Colab, and a graphics processing unit (GPU), and its performance is tabulated.
{"title":"Detection of Crime Scene Objects using Deep Learning Techniques","authors":"Nandhini T J, K. Thinakaran","doi":"10.1109/IDCIoT56793.2023.10053440","DOIUrl":"https://doi.org/10.1109/IDCIoT56793.2023.10053440","url":null,"abstract":"Research on the detection of objects at crime scenes has flourished in the last two decades. Researchers have been concentrating on color pictures, where lighting is a crucial component, since this is one of the most pressing issues in computer vision, with applications spanning surveillance, security, medicine, and more. However, night time monitoring is crucial since most security problems cannot be seen by the naked eye. That's why it's crucial to record a dark scene and identify the things at a crime scene. Even when its dark out, infrared cameras are indispensable. Both military and civilian sectors will benefit from the use of such methods for night time navigation. On the other hand, IR photographs have issues with poor resolution, lighting effects, and other similar issues. Surveillance cameras with infrared (IR) imaging capabilities have been the focus of much study and development in recent years. This research work has attempted to offer a good model for object recognition by using IR images obtained from crime scenes using Deep Learning. The model is tested in many scenarios including a central processing unit (CPU), Google COLAB, and graphics processing unit (GPU), and its performance is also tabulated.","PeriodicalId":60583,"journal":{"name":"物联网技术","volume":"15 1","pages":"357-361"},"PeriodicalIF":0.0,"publicationDate":"2023-01-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"85692738","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-01-05 | DOI: 10.1109/IDCIoT56793.2023.10053540
S. Kuwelkar, H. G. Virani
In the near future, IoT will revolutionize human lifestyles. IoT is categorized as a low-power and lossy network (LLN), since it employs devices with constrained power, memory, and processing capability that are interconnected over lossy links. The efficiency of such networks largely depends on the design of the routing protocol. To cater to the specific routing needs of such networks, the IETF has proposed the IPv6 Routing Protocol for LLNs (RPL) as the de facto routing standard. In RPL, the routing decision is based on a single parameter, which leads to the selection of inefficient paths and affects network lifetime. This work primarily focuses on improving the RPL protocol by overcoming the single-metric limitation. A novel version of RPL is proposed that uses a multilayer feed-forward neural network to make the routing decision based on multiple metrics. Four routing parameters of the candidate neighbors, namely hop count, delay, residual energy, and link quality, are fed as input to the ANN to compute the fitness of each candidate, and the one with the highest value is designated as the most suitable parent for routing packets towards the sink node. Compared to the standard RPL implementation, this technique lowers energy consumption by 15%, improves the packet delivery ratio by 3%, lowers delay by 17%, and reduces the control overhead by 48%.
{"title":"RPL Protocol Enhancement using Artificial Neural Network (ANN) for IoT Applications","authors":"S. Kuwelkar, H. G. Virani","doi":"10.1109/IDCIoT56793.2023.10053540","DOIUrl":"https://doi.org/10.1109/IDCIoT56793.2023.10053540","url":null,"abstract":"In near future, IoT will revolutionize human lifestyle. IoT is categorized as low power lossy network since it employs devices with constrained power, memory and processing capability which are interconnected over lossy links. The efficiency of such networks largely depends on the design of the routing protocol. To cater specific routing needs of such networks, the IETF has proposed IPv6 routing protocol for LLNs (RPL) as a de facto routing standard. In RPL, routing decision is based on a single parameter which leads to the selection of inefficient paths and affects network lifetime. This work primarily focuses on improving the RPL protocol by overcoming the single metric limitation. In this work, a novel version of RPL is proposed which uses a Multilayer Feed Forward Neural Network to make the routing decision based on multiple metrics. Four routing parameters namely, hop count, delay, residual energy and link quality of candidate neighbors are fed as input to ANN in order to compute the fitness of each candidate and the one with highest value is designated as the most suitable parent to route packets towards sink node. This technique lowers energy consumption by 15%, improves Packet Delivery Ratio by 3%, lowers delay by 17% and reduces the control overhead by 48% as compared to standard RPL implementation.","PeriodicalId":60583,"journal":{"name":"物联网技术","volume":"21 1","pages":"52-58"},"PeriodicalIF":0.0,"publicationDate":"2023-01-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"89071402","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-01-05 | DOI: 10.1109/IDCIoT56793.2023.10053539
M. Manvitha, M. Vani Pujitha, N. Prasad, B. Yashitha Anju
Citizens in India emit 1.80 metric tonnes of CO2 per capita, which is highly detrimental to all living beings. Climate change and glacier melting are results of CO2 emissions, and sea levels are rising as a result of global warming, which is mostly caused by CO2. In the past, prediction has been accomplished using statistical approaches including the t-test, the ANOVA test, ARIMA, and SARIMAX. Random Forest, Decision Tree, and regression models are increasingly used to forecast CO2 emissions. When several vehicle feature inputs are used, multivariate polynomial regression and multiple linear regression can reliably forecast the emissions; for inputs with a single feature, simple linear regression is used for the prediction. CO2 emissions are predicted based on factors including engine size, fuel type, cylinder count, vehicle class, and model. Python's Scikit-Learn and Matplotlib packages are used to analyze CO2 emissions. The efficiency of the implemented models is assessed using performance metrics: the accuracy of each model is evaluated with the regression score (R2-score), MAE (Mean Absolute Error), and MSE (Mean Squared Error).
{"title":"A Predictive Analysis on CO2 Emissions in Automobiles using Machine Learning Techniques","authors":"M. Manvitha, M. Vani Pujitha, N. Prasad, B. Yashitha Anju","doi":"10.1109/IDCIoT56793.2023.10053539","DOIUrl":"https://doi.org/10.1109/IDCIoT56793.2023.10053539","url":null,"abstract":"1.80 metric tonnes of CO2 are emitted by citizens in India, which is highly detrimental to all living beings. Climate change and glacier melting are the results of CO2 emissions. Sea levels are rising as a result of global warming, which is mostly caused by CO2. In the past, the prediction has been accomplished using statistical approaches including the t-test, ANOVA test, ARIMA, and SARIMAX. The Random Forest, Decision Tree, and Regression Models are increasingly used to forecast CO2 emissions. When several vehicle feature inputs are used, multivariate polynomial regression and multiple linear regression may reliably forecast the emissions. For inputs with a single feature, single linear regression is used for the prediction. Based on factors including engine size, fuel type, cylinder count, vehicle class, and model, CO2 emissions are anticipated. Python Scikit-Learn and the Matplotlib package are used to analyze CO2 emissions. The efficiency of the implemented models is assessed by using performance metrics. The accuracy of each model is predicted by using the Regression Score (R2-Score), MAE (Mean Absolute Error), and MSE (Mean Squared Error).","PeriodicalId":60583,"journal":{"name":"物联网技术","volume":"31 1","pages":"394-401"},"PeriodicalIF":0.0,"publicationDate":"2023-01-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"86413921","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-01-05 | DOI: 10.1109/IDCIoT56793.2023.10053472
Chinnam Sasidhar Reddy, N. S. K. Deepak Rao, Atkuri Sisir, Vysyaraju Shanmukha Srinivasa Raju, S. S. Aravinth
E-commerce systems have grown in popularity and are now used in almost every business. An e-commerce system is a platform for online product marketing and customer promotion. Customer clustering is the process of categorizing consumers into segments that share similar characteristics. Its goal is to help decide how to engage the clients in each category so as to maximize each customer's value to the business. By segmenting their customer base, businesses can identify their most profitable customers and serve customer needs by improving products and optimizing services. As a result, customer clustering assists e-commerce systems in promoting the appropriate product to the appropriate customer to increase profits. Customer clustering factors include geographic, psychological, behavioral, and demographic factors; this research highlights the consumer's behavioral factor. Accordingly, to discover the consumption behavior of an e-shopping system, customers are analyzed using several clustering algorithms. Clustering seeks to maximize similarity within a cluster while minimizing dissimilarity between clusters. Customers' age, gender, income, expenditure rate, etc. are correlated in this study. To assist vendors in identifying and concentrating on the most profitable segments of the market, as opposed to the least profitable ones, this study compares several clustering techniques to find which is more accurate for clustering customer behavior. This kind of analysis plays a significant role in business improvement: to keep customers for a long time and boost profits, businesses group their customers based on similar behavioral traits, and it also enables the widest disclosure of online offers to attract the attention of potential customers. The K-Means learning algorithm and unsupervised hierarchical clustering are applied to a customer dataset to compare which strategy gives the most accurate clustering.
{"title":"A Comparative Survey on K-Means and Hierarchical Clustering in E-Commerce Systems","authors":"Chinnam Sasidhar Reddy, N. S. K. Deepak Rao, Atkuri Sisir, Vysyaraju Shanmukha Srinivasa Raju, S. S. Aravinth","doi":"10.1109/IDCIoT56793.2023.10053472","DOIUrl":"https://doi.org/10.1109/IDCIoT56793.2023.10053472","url":null,"abstract":"E-commerce systems have grown in popularity and are now used in almost every business. A platform for online product marketing and customer promotion is an e-commerce system. Customer clustering is defined as the process of categorizing consumers into sections that share resembling characteristics. To maximize each customer's profit to the business, customer clustering’s goal is to help decide how to engage clients in each category. To facilitate customer needs by improvising products and optimizing services, businesses can identify their most profitable customers by segmenting their customer base. As a result, customer clustering assists E-commerce systems in promoting the appropriate product to the appropriate customer to increase profits. Customer clustering factors include geographic, psychological, behavioral, and demographic factors. The consumer’s behavioral factor has been highlighted in this research. As a result, to discover the consumption behavior of the E-shopping system, customers will be analyzed using several clustering algorithms. Clustering seeks to maximize experimental similarity within a cluster while minimizing dissimilarity between clusters. Customers’ age, gender, income, expenditure rate, etc. are correlated in this study. To assist vendors in identifying and concentrating on the most profitable segments of the market as opposed to the least profitable segments, this study compared several clustering techniques to find which technique is more accurate to cluster customer behavior. A significant role for this kind of analysis in business improvement to keep customers for a long time and boost business profits, businesses group their customers based on similar behavioral traits. It also enables the maximum disclosure of online offers to attract the attention of potential customers. A learning algorithm called K-Means and an unsupervised algorithm hierarchical clustering is applied to a customer dataset to compare which strategy gives most accurate clustering.","PeriodicalId":60583,"journal":{"name":"物联网技术","volume":"36 1","pages":"805-811"},"PeriodicalIF":0.0,"publicationDate":"2023-01-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"86683493","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}