Pub Date: 2023-01-05 | DOI: 10.1109/IDCIoT56793.2023.10053474
M. Prabha, M. Anbarasan, S. Sunithamani, Mrs. K. Saranya
In recent years, wireless sensor networks (WSNs) have been widely used in numerous real-time applications such as WBAN monitoring and tracking. Recent developments in wireless networks have given rise to new and reliable methods for enhancing network lifetime, energy efficiency, and scalability. The power consumption of the entire network and the energy level of each individual sensor node are closely linked through the clustering techniques commonly used to manage sensor networks. To optimize data transmission, this article employs the fuzzy C-means algorithm to select cluster heads while considering the energy available at each node and its distance from the base station. The study demonstrates how careful cluster head selection and node clustering, which divides a large network into smaller clusters, can extend network lifetime. The proposed network uses a multi-hop routing approach, in which each sensor node can independently collect and forward data, to address the data-rate issue. The proposed cluster routing protocol was tested over 1000 data transmission rounds to evaluate its strengths and weaknesses in terms of network lifetime and energy efficiency. The choice of the cluster head node, the distances between nodes, and the energy required for subsequent data transmission are treated as random in each round. The simulation results show that the proposed methodology outperforms state-of-the-art routing techniques and achieves promising network performance. Furthermore, the effects of hierarchical cluster head selection point to the potential of the method for future use in WSNs. The simulation experiments include comparing network lifetime over an increasing number of rounds before and after applying the energy-efficient routing protocol, and examining performance metrics for network lifetime.
{"title":"Hierarchical Fuzzy Methodologies for Energy Efficient Routing Protocol for Wireless Sensor Networks","authors":"M. Prabha, M. Anbarasan, S. Sunithamani, Mrs. K. Saranya","doi":"10.1109/IDCIoT56793.2023.10053474","DOIUrl":"https://doi.org/10.1109/IDCIoT56793.2023.10053474","url":null,"abstract":"In recent years, the wireless sensor networks are widely used in numerous real-time applications such as WBAN monitoring and tracking. Recent developments in wireless networks have given rise to new and reliable methods for enhancing network lifetime, energy efficiency, and scalability. The power consumption of the entire wireless sensor network and the energy level of each individual sensor node are closely related by the commonly used clustering technique to manage the sensor networks. In order to optimize data transmission, the fuzzy C means algorithm is employed in this article to thoroughly analyze the cluster head while considering the energy that is available in each node and distance metrics from the base station. This study demonstrates how carefully choosing cluster heads and node clustering, which divides large networks into smaller clusters, can enhance the lifespan of a network. The proposed network uses a multi-hop routing approach, where each sensor node can independently collect and send data in order to address the data rate issue. The suggested cluster routing protocol was put to the test with 1000 data transmission rounds to demonstrate its strengths and weaknesses in terms of network lifetime and energy efficiency. The choice of the cluster head node, the distance between the nodes, and the amount of energy needed for subsequent data transmission are all considered to be random for each round. The simulation results show that the suggested methodology beats cutting-edge routing techniques and achieves a promising network performance. Furthermore, the effects of hierarchical cluster head selection point to the potential of our method for use in WSN in the future. The following tests were performed using computer simulation, including comparing the effect of network life on the increased number of rounds before and after the influence of an energy-efficient routing protocol, and examining performance metrics for network lifetime.","PeriodicalId":60583,"journal":{"name":"物联网技术","volume":"25 1","pages":"989-992"},"PeriodicalIF":0.0,"publicationDate":"2023-01-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"81477151","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-01-05 | DOI: 10.1109/IDCIoT56793.2023.10053554
Sumita Kumar, P. Baruah, S. Kirubakaran, A. S. Kumar, Kamlesh Singh, M. V. J. Reddy
Customer Relationship Management (CRM) is a comprehensive approach to building, handling, and maintaining loyal and long-lasting customer relationships. It is widely acknowledged and implemented across distinct domains, e.g., telecom, retail, banking and insurance, and so on. A major objective is customer retention. Churn methods aim to recognize early churn signals and identify customers with an elevated likelihood of leaving voluntarily. Machine learning (ML) techniques are presented for tackling the churn prediction problem. This paper presents a Remora Optimization with Machine Learning Driven Churn Prediction for Business Improvement (ROML-CPBI) technique. The aim of the ROML-CPBI technique is to forecast the possibility of customer churn in the business sector. The working of the ROML-CPBI technique encompasses two major processes, namely prediction and parameter tuning. In the first stage, the ROML-CPBI technique utilizes a multi-kernel extreme learning machine (MKELM) for churn prediction. Secondly, the RO algorithm is applied to adjust the parameters of the MKELM model, thereby yielding enhanced predictive outcomes. To validate the superior performance of the ROML-CPBI technique, an extensive range of experiments was performed. The experimental values signified the improved outcomes of the ROML-CPBI technique over the others.
{"title":"Remora Optimization with Machine Learning Driven Churn Prediction for Business Improvement","authors":"Sumita Kumar, P. Baruah, S. Kirubakaran, A. S. Kumar, Kamlesh Singh, M. V. J. Reddy","doi":"10.1109/IDCIoT56793.2023.10053554","DOIUrl":"https://doi.org/10.1109/IDCIoT56793.2023.10053554","url":null,"abstract":"Customer Relationship Management (CRM) is a complete approach to constructing, handling, and establishing loyal and long-lasting customer relationships. It is mostly acknowledged and widely executed for distinct domains, e.g., telecom, retail market, banking and insurance, and so on. A major objective is customer retention. The churn methods drive to recognize early churn signals and identify customers with an enhanced possibility to leave voluntarily. Machine learning (ML) techniques are presented for tackling the churning prediction difficult. This paper presents a Remora Optimization with Machine Learning Driven Churn Prediction for Business Improvement (ROML-CPBI) technique. The aim of the ROML-CPBI technique is to forecast the possibility of customer churns in the business sector. The working of the ROML-CPBI technique encompasses two major processes namely prediction and parameter tuning. At the initial stage, the ROML-CPBI technique utilizes multi-kernel extreme learning machine (MKELM) technique for churn prediction purposes. Secondly, the RO algorithm is applied for adjusting the parameters related to the MKELM model and thereby results in enhanced predictive outcomes. For validating the greater performance of the ROML-CPBI technique, an extensive range of experiments were performed. The experimental values signified the improved outcomes of the ROML-CPBI technique over other ones.","PeriodicalId":60583,"journal":{"name":"物联网技术","volume":"31 1","pages":"416-420"},"PeriodicalIF":0.0,"publicationDate":"2023-01-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"82905086","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This article proposes a weather monitoring and prediction system using an ANN with the Internet of Things (IoT) for diverse applications. Neural networks provide the ability to perform computation and learning, and can solve difficult problems that appear to be computationally intractable. The output neurons produce a function of the inputs from the input neurons and of their previous outputs. The proposed system is developed on an IoT platform using an ESP32 microcontroller, which interfaces different sensors to acquire the input parameters. All the acquired sensor information can be visualized in the ThingSpeak cloud as well as in a mobile application. Once the data is acquired from the sensors, it is processed by the system against the available data set, and the output can be seen on the cloud and delivered by text message and email to the end user. As a proof of concept, the system is tested under different weather conditions.
{"title":"Artificial Neural Network (ANN) Enabled Weather Monitoring and Prediction System using IoT","authors":"P. Krishna, Kongara Chandra Bhanu, Shaik Akram Ahamed, Myneni Umesh Chandra, Neelapu Prudhvi, Nandigama Apoorva","doi":"10.1109/IDCIoT56793.2023.10053534","DOIUrl":"https://doi.org/10.1109/IDCIoT56793.2023.10053534","url":null,"abstract":"This article proposes a weather monitoring and prediction system using ANN with Internet of things (IoT) for the diverse applications. Neural networks provide an ability to perform computation and learning. Neural networks can solve difficult problems that appear to be computationally interact. The output neurons are responsible for producing a function of the inputs from the input neurons and their previous outputs. The proposed system is developed based on IoT using ES P32 microcontroller by interfacing different sensors to take the input parameters. All the acquired sensor information can be visualized in the ThingSpeak cloud as well as in the mobile application. Once the data is acquired from the sensors it is processed by the system with the available data set and gives the output, that can be seen on the cloud, text message, and mail to the end user. As the proof of concepts, it is tested in different weather conditions.","PeriodicalId":60583,"journal":{"name":"物联网技术","volume":"25 1","pages":"46-51"},"PeriodicalIF":0.0,"publicationDate":"2023-01-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"89867961","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-01-05 | DOI: 10.1109/IDCIoT56793.2023.10053519
Lakshmikanth Rajath Mohan T, N. Jayapandian
The Metaverse is a huge project undertaken by Facebook to bring the world closer together and help people live out their dreams. Even people with disabilities could travel across the world; anyone could visit any place while remaining safe in the comfort of their home. Meta (previously Facebook) plans to achieve this through a combination of AR and VR (Augmented Reality and Virtual Reality) and aims to bring this technology to the public soon. However, a major factor that must be accounted for is the amount of data generation that will take place. Many computer science researchers believe that the data Meta will generate in a single day would be roughly equal to the data Instagram and Facebook have generated in their entire lifetimes, pushing overall data generation up by at least 30%, if not more. Traditional methods such as cloud computing may fall short in the near future, because the servers might not be able to handle such large amounts of data. The solution should be a system designed specifically for handling extremely large data: one that is not only secure, resilient, and robust, but can also handle many requests and connections at once without slowing down as the number of requests gradually increases over time. In this model, a solution called the DHA (Data Hive Architecture) is provided. A DHA is made up of multiple subunits called Data Combs, which are further broken down into data cells; these are small units of memory that can process big data extremely fast. When information stored across multiple edge locations around the world is requested by a client (for example, a data warehouse), the Data Combs rearrange the data cells within them on the basis of the requested criteria. This article explains this concept of data combs and its usage in the Metaverse.
{"title":"Enhanced Edge Computing Model by using Data Combs for Big Data in Metaverse","authors":"Lakshmikanth Rajath Mohan T, N. Jayapandian","doi":"10.1109/IDCIoT56793.2023.10053519","DOIUrl":"https://doi.org/10.1109/IDCIoT56793.2023.10053519","url":null,"abstract":"The Metaverse is a huge project undertaken by Facebook in order to bring the world closer together and help people live out their dreams. Even handicapped can travel across the world. People can visit any place and would be safe in the comfort of their homes. Meta (Previously Facebook) plans to execute this by using a combination of AR and VR (Augmented Reality and Virtual Reality). Facebook aims to bring this technology to the people soon. However, a big factor in this idea that needs to be accounted for is the amount of data generation that will take place. Many Computer Science professors and scientists believe that the amount of data Meta is going to generate in one day would almost be equal to the amount of data Instagram/Facebook would have generated in their entire lifetime. This will push the entire data generation by at least 30%, if not more. Using traditional methods such as cloud computing might seem to become a shortcoming in the near future. This is because the servers might not be able to handle such large amounts of data. The solution to this problem should be a system that is designed specifically for handling data that is extremely large. A system that is not only secure, resilient and robust but also must be able to handle multiple requests and connections at once and yet not slow down when the number of requests increases gradually over time. In this model, a solution called the DHA (Data Hive Architecture) is provided. These DHAs are made up of multiple subunits called Data Combs and those are further broken down into data cells. These are small units of memory which can process big data extremely fast. When information is requested from a client (Example: A Data Warehouse) that is stored in multiple edges across the world, then these Data Combs rearrange the data cells within them on the basis of the requested criteria. This article aims to explain this concept of data combs and its usage in the Metaverse.","PeriodicalId":60583,"journal":{"name":"物联网技术","volume":"72 1","pages":"249-255"},"PeriodicalIF":0.0,"publicationDate":"2023-01-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"85838463","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-01-05 | DOI: 10.1109/IDCIoT56793.2023.10053395
K. C. Reddy, V. K, Faraz Ahmed Mulla, G. N. H. Kumar, J. Prajwal, M. Gopal
Autonomous lateral movement is a significant challenge for self-driving cars; therefore, the major target of this paper is to replicate propulsion in order to improve the performance of self-driving cars. This work focuses on using multilayer neural networks and deep learning techniques to operate self-driving cars under simulated conditions. Within the simulator, the driver's vision and reactions are mimicked by preprocessing the images obtained from a camera-mounted vehicle. A neural network trains the deep learning model on the images captured by the camera in manual mode, and the trained multi-layer neural network then drives the car in self-driving mode under various conditions. The driver-imitation algorithms created and characterized in this work are all based on deep learning techniques.
{"title":"A Review on Autopilot using Neuro Evaluation of Augmenting Topologies","authors":"K. C. Reddy, V. K, Faraz Ahmed Mulla, G. N. H. Kumar, J. Prajwal, M. Gopal","doi":"10.1109/IDCIoT56793.2023.10053395","DOIUrl":"https://doi.org/10.1109/IDCIoT56793.2023.10053395","url":null,"abstract":"Autonomous lateral movement is the significant challenge for self-driving cars; therefore, the major target of this paper is to replicate propulsion in order to improve the performance of self-driving cars. This work focuses on using multilayer neural networks and deep learning techniques to enable operating self-driving cars under stimulus conditions. Within the simulator, the driver`s vision and reactions are mimicked by preprocessing the images obtained from a cameramounted vehicle. Neural network trains the deep learning model based on the images captured by camera in manual mode. The trained multi-layer neural network creates various conditions for driving a car in self-driving mode. The driver imitation algorithms created and characterized in this work are all about profound learning techniques.","PeriodicalId":60583,"journal":{"name":"物联网技术","volume":"17 1","pages":"573-577"},"PeriodicalIF":0.0,"publicationDate":"2023-01-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"87406798","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-01-05 | DOI: 10.1109/IDCIoT56793.2023.10053390
Tshewang Phuntsho, T. Gonsalves
Scheduling long-term, financially dependent projects under resource constraints is of the utmost significance to project and finance managers. A new technique based on a modified Recurrent Neural Network (RNN) employing a Parallel Schedule Generation Scheme (PSGS) is proposed as a heuristic for the resource-constrained project scheduling problem with discounted cash flows (RCPSPDC). To resolve the exploding/vanishing gradient problem of the RNN, a Genetic Algorithm (GA) is employed to optimize its weight matrices. The GA takes advantage of p-point crossover and m-point mutation operators, and utilizes elitism and tournament strategies to diversify and evolve the population. The proposed RNN architecture, implemented in the Julia language, is evaluated on projects sampled from the well-known dataset of 17,280 project instances. This article establishes the superior performance of the proposed architecture compared to existing state-of-the-art standalone meta-heuristic techniques, besides having transfer learning capabilities. The technique can easily be hybridized with existing architectures to achieve remarkable performance.
{"title":"Maximizing the Net Present Value of Resource-Constrained Project Scheduling Problems using Recurrent Neural Network with Genetic Algorithm","authors":"Tshewang Phuntsho, T. Gonsalves","doi":"10.1109/IDCIoT56793.2023.10053390","DOIUrl":"https://doi.org/10.1109/IDCIoT56793.2023.10053390","url":null,"abstract":"Scheduling long-term and financially dependent projects constrained by resources are of the utmost significance to project and finance managers. A new technique based on a modified Recurrent Neural Network (RNN) employing Parallel Schedule Generation Scheme (PSGS) is proposed as heuristics method to solve this discounted cash flows for resource-constrained project scheduling (RCPSPDC). To resolve the gradient exploding/vanishing problem of RNN, a Genetic Algorithm (GA) is employed to optimize its weight matrices. Our GA takes advantage of p-point crossover and m-point mutation operators besides utilizing elitism and tournament strategies to diversify and evolve the population. The proposed RNN architecture implemented in Julia language is evaluated on sampled projects from well-known 17,280 project instances dataset. This article, establishes the superior performance of our proposed architecture when compared to existing state-of-the-art standalone meta-heuristic techniques, besides having transfer learning capabilities. This technique can easily be hybridized with existing architectures to achieve remarkable performance.","PeriodicalId":60583,"journal":{"name":"物联网技术","volume":"81 1","pages":"524-530"},"PeriodicalIF":0.0,"publicationDate":"2023-01-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"72689492","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-01-05 | DOI: 10.1109/IDCIoT56793.2023.10053440
Nandhini T J, K. Thinakaran
Research on the detection of objects at crime scenes has flourished in the last two decades. Researchers have concentrated on color images, where lighting is a crucial component, since this is one of the most pressing issues in computer vision, with applications spanning surveillance, security, medicine, and more. However, night-time monitoring is crucial, since most security problems cannot be seen by the naked eye; it is therefore essential to record a dark scene and identify the objects at a crime scene. Even when it is dark, infrared cameras are indispensable, and both the military and civilian sectors will benefit from such methods for night-time navigation. On the other hand, IR photographs suffer from poor resolution, lighting effects, and similar issues. Surveillance cameras with infrared (IR) imaging capabilities have been the focus of much study and development in recent years. This research work offers a model for object recognition in IR images obtained from crime scenes using deep learning. The model is tested in several environments, including a central processing unit (CPU), Google Colab, and a graphics processing unit (GPU), and its performance is tabulated.
{"title":"Detection of Crime Scene Objects using Deep Learning Techniques","authors":"Nandhini T J, K. Thinakaran","doi":"10.1109/IDCIoT56793.2023.10053440","DOIUrl":"https://doi.org/10.1109/IDCIoT56793.2023.10053440","url":null,"abstract":"Research on the detection of objects at crime scenes has flourished in the last two decades. Researchers have been concentrating on color pictures, where lighting is a crucial component, since this is one of the most pressing issues in computer vision, with applications spanning surveillance, security, medicine, and more. However, night time monitoring is crucial since most security problems cannot be seen by the naked eye. That's why it's crucial to record a dark scene and identify the things at a crime scene. Even when its dark out, infrared cameras are indispensable. Both military and civilian sectors will benefit from the use of such methods for night time navigation. On the other hand, IR photographs have issues with poor resolution, lighting effects, and other similar issues. Surveillance cameras with infrared (IR) imaging capabilities have been the focus of much study and development in recent years. This research work has attempted to offer a good model for object recognition by using IR images obtained from crime scenes using Deep Learning. The model is tested in many scenarios including a central processing unit (CPU), Google COLAB, and graphics processing unit (GPU), and its performance is also tabulated.","PeriodicalId":60583,"journal":{"name":"物联网技术","volume":"15 1","pages":"357-361"},"PeriodicalIF":0.0,"publicationDate":"2023-01-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"85692738","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-01-05 | DOI: 10.1109/IDCIoT56793.2023.10053501
P. William, Y. N, V. M. Tidake, Snehal Sumit Gondkar, Chetana. R, K. Vengatesan
The phrase "personality" refers to an individual's distinct mode of thought, action, and behaviour Personality is a collection of feelings, thoughts, and aspirations that may be seen in the way people interact with one another. Behavioural features that separate one person from another and may be clearly seen when interacting with individuals in one's immediate surroundings and social group are included in this category of traits. To improve good healthy discourse, a variety of ways for evaluating candidate personalities based on the meaning of their textual message have been developed. According to the research, the textual content of interview responses to conventional interview questions is an effective measure for predicting a person's personality attribute. Nowadays, personality prediction has garnered considerable interest. It analyses user activity and displays their ideas, feelings, and so on. Historically, defining a personality trait was a laborious process. Thus, automated prediction is required for a big number of users. Different algorithms, data sources, and feature sets are used in various techniques. As a way to gauge someone's personality, personality prediction has evolved into an important topic of research in both psychology and computer science. Candidate personality traits may be classified using a word embedding model, which is the subject of this article.
{"title":"Framework for Implementation of Personality Inventory Model on Natural Language Processing with Personality Traits Analysis","authors":"P. William, Y. N, V. M. Tidake, Snehal Sumit Gondkar, Chetana. R, K. Vengatesan","doi":"10.1109/IDCIoT56793.2023.10053501","DOIUrl":"https://doi.org/10.1109/IDCIoT56793.2023.10053501","url":null,"abstract":"The phrase \"personality\" refers to an individual's distinct mode of thought, action, and behaviour Personality is a collection of feelings, thoughts, and aspirations that may be seen in the way people interact with one another. Behavioural features that separate one person from another and may be clearly seen when interacting with individuals in one's immediate surroundings and social group are included in this category of traits. To improve good healthy discourse, a variety of ways for evaluating candidate personalities based on the meaning of their textual message have been developed. According to the research, the textual content of interview responses to conventional interview questions is an effective measure for predicting a person's personality attribute. Nowadays, personality prediction has garnered considerable interest. It analyses user activity and displays their ideas, feelings, and so on. Historically, defining a personality trait was a laborious process. Thus, automated prediction is required for a big number of users. Different algorithms, data sources, and feature sets are used in various techniques. As a way to gauge someone's personality, personality prediction has evolved into an important topic of research in both psychology and computer science. Candidate personality traits may be classified using a word embedding model, which is the subject of this article.","PeriodicalId":60583,"journal":{"name":"物联网技术","volume":"101 1","pages":"625-628"},"PeriodicalIF":0.0,"publicationDate":"2023-01-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"72897136","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-01-05 | DOI: 10.1109/IDCIoT56793.2023.10052783
Khushi Gupta, Siddhartha Choubey, Y. N, P. William, V. N., Chaitanya P. Kale
Detecting driver drowsiness is a crucial problem in the sector of accident-avoidance technologies, which motivated the development of an innovative intelligent system. The system also prioritizes safety concerns such as alerting the driver and detecting yawning. The technique behind this system is a machine-learning-based algorithm that can identify the driver's facial expressions and quantify the level of driver sleepiness. Accidents may be avoided by activating an alarm that alerts the driver when he or she becomes fatigued. The Eye Aspect Ratio (EAR), computed from localized facial landmarks, is used to estimate the drowsiness level. Current approaches, however, have significant shortcomings due to the considerable unpredictability of surrounding conditions. Poor lighting may impair the camera's ability to precisely capture the driver's face and eyes, which affects the image-processing analysis, leads to late detection or no detection, and reduces the technique's accuracy and efficiency. Numerous strategies were investigated and analyzed to determine the optimal technique, with the maximum accuracy, for detecting driver tiredness. In this paper, the implementation of a real-time system is proposed that uses a camera to automatically trace and process the driver's eyes using the Dlib Python library and OpenCV. The driver's eye region is continually monitored and measured to assess drowsiness before generating an output alarm to notify the driver.
{"title":"Implementation of Motorist Weariness Detection System using a Conventional Object Recognition Technique","authors":"Khushi Gupta, Siddhartha Choubey, Y. N, P. William, V. N., Chaitanya P. Kale","doi":"10.1109/IDCIoT56793.2023.10052783","DOIUrl":"https://doi.org/10.1109/IDCIoT56793.2023.10052783","url":null,"abstract":"Detecting driver drowsiness is a huge crucial problem in the sector of accident-avoidance technologies, so the development of an innovative intelligent system came into the picture. The system also prioritized safety concerns such as informing the victim and avoiding yawning. The technique for this system is a machine learning-based sophisticated algorithm that can identify the driver's facial expressions and quantify the rate of driver sleepiness. This may be avoided by activating an alarm that causes the driver to become alert when he or she becomes fatigued. The Eye Aspects Ratio (EAR) is used to recognize the system’s drowsiness rate by calculating the facial plot localization which extracts and gives the drowsiness rate.Current approaches, however, have significant shortcomings due to the considerable unpredictability of surrounding conditions. Poor lighting may impair the camera's ability to precisely measure the driver's face and eye. This will affect image processing analysis which corresponds to late detection or no detection, tendering the technique in accuracy and efficiency. Numerous strategies were investigated and analyzed to determine the optimal technique with the maximum accuracy for detecting driver tiredness. In this paper, the implementation of a real-time system is proposed that requires a camera to automatically trace and process the victim’s eye using Dlib Python, and OpenCV. The driver's eye area is continually monitored and computed to assess drowsiness before generating an output alarm to notify the driver.","PeriodicalId":60583,"journal":{"name":"物联网技术","volume":"67 1","pages":"640-646"},"PeriodicalIF":0.0,"publicationDate":"2023-01-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"74807212","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Wideband Code Division Multiple Access (WCDMA) and Orthogonal Frequency Division Multiple Access (OFDMA) form the basis of modern wireless systems aimed at providing enriched services, but channel impairments always place a limit on modern systems, including AC-MIMO radio, 802.11ac, and LTE/VoLTE. Here, the behaviour of WCDMA- and OFDMA-based systems is analyzed by means of the well-known standard Gaussian Approximation (GA), in which the interference and noise in the system are modeled through mean and variance approximations of the noise power. To generate the faded transmitted signal, Weibull, Rayleigh, Rician, and Nakagami distributions are applied to the systems. The performance of the OFDMA and WCDMA systems under the different fading environments is observed through error-rate graphs. It is validated that including fading in the system increases the error rate and that the OFDMA system performs much better than the WCDMA system.
{"title":"Gaussian Approximation based WCDMA and OFDMA System Performance Investigation for Various Fading Channels","authors":"Parveen Singla, Vikas Gupta, Rinkesh Mittal, Ramanpreet Kaur, Jaskirat Kaur","doi":"10.1109/IDCIoT56793.2023.10053401","DOIUrl":"https://doi.org/10.1109/IDCIoT56793.2023.10053401","url":null,"abstract":"Wideband Code Division Multiple Access (WCDMA) systems and Orthogonal Frequency Division Multiple Access (OFDMA) technique were the basic of modern wireless systems aimed to provide enriched services. But the channel impairments always put a limit on modern systems that also includes AC-MIMO Radio, 802.11ac and LTE/VoLTE. Here, the conduct of WCDMA and OFDMA primarily based totally structures is analyzed via way of means of widely recognized primary Gaussian Approximation (GA) in which interference and noise to the gadget is generated via way of means of suggest and variance approximations of noise power. In order to generate the faded transmitted signal Weibull, Rayleigh, Rician and Nakagami distributions have been applied to systems. OFDMA and WCDMA system performances for different fading environments have been observed by error rate graphs. It is validated that inclusion of fading in the system increases error rate and the performance of OFDMA system is much better than WCDMA system.","PeriodicalId":60583,"journal":{"name":"物联网技术","volume":"275 1","pages":"735-739"},"PeriodicalIF":0.0,"publicationDate":"2023-01-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"76976332","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}