Pub Date: 2023-02-20, DOI: 10.1109/ICAIIC57133.2023.10067029
Jeonghun Park, Heetae Jin, Jaehan Joo, Geonho Choi, Suk Chan Kim
In the fifth-generation (5G) network, mmWave has been utilized to cope with the demand for extremely high data rates. However, the harsh propagation characteristics of mmWave signals limit network coverage, thus requiring network densification. Under these circumstances, 3GPP has introduced the Integrated Access and Backhaul (IAB) architecture for cost-effective network deployment and operation. In contrast to traditional network architectures using wired backhaul links, IAB uses wireless backhaul links to forward data traffic. This feature improves spectrum utilization and cost efficiency. However, due to the dynamic, time-varying environment of the IAB network, finding a proper resource allocation strategy is a challenging issue. In this paper, we formulate the backhaul spectrum allocation problem to maximize user sum capacity, and then propose a double deep Q-learning-based backhaul spectrum allocation strategy. The simulation results show that the proposed reinforcement learning-based spectrum allocation can achieve 20% higher user sum capacity than static rule-based spectrum allocation.
Title: Double Deep Q-Learning based Backhaul Spectrum Allocation in Integrated Access and Backhaul Network
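The core of the double deep Q-learning approach named in the abstract is the way the learning target is computed: the online network selects the best next action and a separate target network evaluates it, which reduces the Q-value overestimation of plain deep Q-learning. A minimal sketch of that target computation, with toy reward and Q-values standing in for the paper's networks (all numbers illustrative, not from the paper):

```python
# Double DQN target: the online net picks the next action,
# the target net evaluates it (reduces overestimation bias).
GAMMA = 0.9  # illustrative discount factor

def double_q_target(reward, next_q_online, next_q_target, gamma=GAMMA):
    """Compute r + gamma * Q_target(s', argmax_a Q_online(s', a))."""
    best_action = max(range(len(next_q_online)), key=lambda a: next_q_online[a])
    return reward + gamma * next_q_target[best_action]

# Online net prefers action 2; target net scores that action as 1.0
target = double_q_target(reward=0.5,
                         next_q_online=[0.1, 0.4, 0.9],
                         next_q_target=[2.0, 0.3, 1.0])
print(target)  # 0.5 + 0.9 * 1.0 = 1.4
```

In the paper's setting, the actions would be backhaul spectrum allocation choices and the reward would reflect user sum capacity; this sketch only shows the update rule itself.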
Pub Date: 2023-02-20, DOI: 10.1109/ICAIIC57133.2023.10067062
Van Toan Quyen, Jong Hyuk Lee, Min Young Kim
Semantic segmentation is a complicated task when strict object-boundary accuracy is required. Autonomous driving applications face a wide range of object sizes in street scenes, so a single field of view is not suitable for extracting input features. The feature pyramid network (FPN) is an effective method for computer vision tasks such as object detection and semantic segmentation. Its architecture is composed of a bottom-up pathway and a top-down pathway. Based on this structure, we can obtain rich spatial information from the largest layer and rich segmentation information from lower-scale features. The traditional FPN efficiently captures different object sizes by using multiple receptive fields and then predicts the outputs from the concatenated features. This final feature combination is not optimal, as it burdens the hardware with heavy computation and reduces the semantic information. In this paper, we propose multiple predictions for semantic segmentation. Instead of combining four feature scales together, the proposed method processes the three lower scales separately as contextual contributors and the largest features as the coarse-information branch. Each contextual feature is concatenated with the coarse branch to generate an individual prediction. With this architecture, each single prediction effectively segments specific object sizes. Finally, the score maps are fused together to gather the prominent weights from the different predictions. A series of experiments validates the efficiency on various open datasets. We achieve good results: 76.4% mIoU at 52 FPS on Cityscapes and 43.6% mIoU on Mapillary Vistas.
Title: Enhanced-feature pyramid network for semantic segmentation
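The final fusion step the abstract describes, combining per-scale score maps into one prediction, can be sketched as a simple elementwise aggregation. This is a minimal illustration of the idea, not the authors' implementation; map sizes and values are made up, and a real model would fuse logit tensors per class, not tiny 2x2 grids:

```python
def fuse_score_maps(score_maps):
    """Elementwise average of several H x W score maps (lists of lists).
    Stands in for fusing the per-scale predictions into a final map."""
    n = len(score_maps)
    h, w = len(score_maps[0]), len(score_maps[0][0])
    return [[sum(m[i][j] for m in score_maps) / n for j in range(w)]
            for i in range(h)]

# Two 2x2 score maps from hypothetical small-object and large-object branches
fused = fuse_score_maps([[[0.9, 0.1], [0.2, 0.8]],
                         [[0.7, 0.3], [0.4, 0.6]]])
```

Because each branch specializes in certain object sizes, the fusion lets the strongest (most confident) branch dominate each region of the final map.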
Pub Date: 2023-02-20, DOI: 10.1109/ICAIIC57133.2023.10067132
Ryoto Koizumi, Xiaoyan Wang, M. Umehira, S. Takeda, Ran Sun
In recent years, high-resolution 77 GHz onboard automotive radar has been extensively investigated for automated driving due to its high performance and low cost. As the deployment density of onboard CS (chirp sequence) radars increases, inter-radar interference occurs, which significantly increases target miss-detection and false-detection probabilities. To address this critical and challenging problem, a wideband interference suppression method using deep learning was proposed, whose feasibility for performance improvement was validated through simulations. In this study, we perform both simulation and experimental evaluations of an RNN (recurrent neural network) based interference suppression method, in order to address the tradeoff between model training time and interference suppression performance and to validate its real-world applicability.
Title: RNN-based Interference Suppression Method for CS radar: Simulation and Experimental Evaluations
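An RNN-based suppressor like the one evaluated above processes the radar's sampled beat signal one time step at a time, carrying a hidden state forward. A toy single-unit Elman-style recurrence illustrates the mechanics only; the weights, the single hidden unit, and the "interference spike" input are all illustrative assumptions, not the authors' trained model:

```python
import math

def rnn_step(x, h, w_x=0.5, w_h=0.3, b=0.0):
    """One Elman RNN step: h' = tanh(w_x * x + w_h * h + b).
    Weights here are fixed toy values; a real model learns them."""
    return math.tanh(w_x * x + w_h * h + b)

def run_rnn(samples):
    """Process a sample sequence, returning the hidden state at each step."""
    h = 0.0
    states = []
    for x in samples:
        h = rnn_step(x, h)
        states.append(h)
    return states

# The 2.0 mimics a brief inter-radar interference spike in the beat signal
states = run_rnn([0.1, 2.0, 0.1])
```

The tradeoff the study examines, training time versus suppression quality, comes from scaling this recurrence up to many hidden units and long chirp sequences.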
Pub Date: 2023-02-20, DOI: 10.1109/ICAIIC57133.2023.10067126
Yafeng Deng, Young-June Choi
Many efforts have been made to improve the performance of vehicle-to-vehicle (V2V) services, such as the basic safety message (BSM) and collision avoidance warning. However, high dynamics, such as topology and channel conditions, still pose big challenges for resource allocation in vehicular networks. A previous work, relative distance based MAC [1], was proposed to address merging collisions, but the dynamics cannot be fully addressed because thresholds are used. Therefore, we adapt a dueling deep Q-network [2] to tune the threshold on top of the aforementioned work and further address merging collisions. The simulation results demonstrate the improvement of the proposed algorithm.
Title: A Reinforcement Learning Assisted Relative Distance based MAC in Vehicular Networks
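The distinguishing feature of the dueling deep Q-network cited above is its output head: it splits the Q-function into a state value V(s) and per-action advantages A(s, a), then recombines them with the mean advantage subtracted for identifiability. A minimal sketch of that aggregation (the value and advantage numbers are illustrative, not outputs of the paper's network):

```python
def dueling_q(value, advantages):
    """Dueling DQN aggregation: Q(s, a) = V(s) + A(s, a) - mean_a' A(s, a').
    Subtracting the mean advantage makes the V/A decomposition unique."""
    mean_adv = sum(advantages) / len(advantages)
    return [value + a - mean_adv for a in advantages]

# Three hypothetical threshold-adjustment actions (e.g. lower / keep / raise)
q_values = dueling_q(value=1.0, advantages=[0.0, 0.3, 0.6])
```

In this MAC setting, each action would correspond to a candidate relative-distance threshold adjustment, and the agent picks the action with the largest Q-value.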
Pub Date: 2023-02-20, DOI: 10.1109/ICAIIC57133.2023.10067064
Dongkyu Kim, Seokjun Lee, Nak-Myoung Sung, Chungjae Choe
This paper presents a domain-based transfer learning method for deep learning-based object detection models that enables real-time computation on resource-constrained edge devices. Object detection is an essential task for intelligent platforms (e.g., drones, robots, and autonomous vehicles). However, edge devices cannot afford to run huge object detection models due to insufficient resources. Although a compressed deep learning model increases inference speed, its accuracy can deteriorate significantly. In this paper, we propose an accurate object detection method that achieves real-time computation on edge devices. Our method aims to reduce marginal detection outputs of models according to application domains (e.g., city, park, factory). We classify the crucial objects (e.g., pedestrian, car, bench) for a specific domain and adopt transfer learning in which the learning targets only the selected objects. This approach improves detection accuracy even for a compressed deep learning model such as the tiny versions of the YOLO (you only look once) framework. From the experiments, we validate that the method enables YOLOv7-tiny to provide detection accuracy comparable to a YOLOv7 model despite having 83% fewer parameters than the original model. Besides, we confirm that our method achieves 389% faster inference than YOLOv7 on resource-constrained edge devices (i.e., NVIDIA Jetsons).
Title: Real-time object detection using a domain-based transfer learning method for resource-constrained edge devices
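The domain-selection idea above, restricting the model to the classes that matter in a given deployment, can be sketched as a simple class filter over detector outputs. The domain names, class lists, and detection dicts below are illustrative assumptions, not the authors' configuration; in the paper this selection drives which classes are kept during transfer learning, not just post-filtering:

```python
# Hypothetical mapping from deployment domain to its crucial object classes
DOMAIN_CLASSES = {
    "city": {"pedestrian", "car", "traffic light"},
    "park": {"pedestrian", "bench", "dog"},
}

def filter_detections(detections, domain):
    """Keep only detections whose class is crucial for the given domain."""
    crucial = DOMAIN_CLASSES[domain]
    return [d for d in detections if d["cls"] in crucial]

dets = [{"cls": "car", "conf": 0.9}, {"cls": "bench", "conf": 0.8}]
city_dets = filter_detections(dets, "city")  # only the car detection survives
```

Training a compressed model on this reduced label set is what lets YOLOv7-tiny concentrate its limited capacity on the classes that actually occur in the target domain.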
Pub Date: 2023-02-20, DOI: 10.1109/ICAIIC57133.2023.10067105
Chang Woo Choi, Hyo-eun Kang, Yoonyoung Hong, Yong Su Kim, Guem Bo Kim, Aji Teguh Prihatno, Jang Hyun Ji, Seungdo Hong, Ho Won Kim
It is essential to perform flow analysis in all spaces where people live: for example, designing the shape of an airplane wing by analyzing the flow over it, or finding an appropriate air conditioner installation location by analyzing the flow according to the air conditioner's position in an indoor space. In this study, we propose a deep learning model that performs real-time flow analysis, assuming an indoor space that is relatively smaller than an outdoor space. Computational fluid dynamics (CFD), the traditional method used for flow analysis, is not suitable for this task because it takes a long time to derive simulation results. Thus, the application of deep learning to flow analysis is considered in the present study, because deep learning technology for physics, i.e., fluid mechanics and thermodynamics, can be applied to real spaces. We constructed a deep learning model based on the TransUnet model, which can learn data relationships and capture spatial information. Unlike the existing TransUnet model, our model contains a dense layer to reflect operating and spatial information. Train and test data were collected using the ANSYS FLUENT commercial program. On 11 test cases, the average R2 score between the actual and predicted values was 0.884, and the RMSE was 0.047, which are significant results. We used images of the entire space as well as a cross-section to see how similar the predicted values were to the actual ones. Although a slight error occurred inside the space, it was confirmed that the flow tendency was accurately learned under the given operating conditions. Flow analysis through simulation based on existing numerical analysis methods requires a minimum of 8 hours of processing, whereas our proposed deep learning model requires less than 3 seconds, significantly reducing the time cost of flow analysis.
Title: Indoor Space Flow Analysis Based on Deep Learning
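The two evaluation metrics reported above, the R2 score and RMSE between CFD ground truth and model predictions, have standard definitions that a short sketch makes concrete (the sample values below are made up, not the paper's data):

```python
import math

def rmse(actual, pred):
    """Root mean squared error between actual and predicted values."""
    n = len(actual)
    return math.sqrt(sum((a - p) ** 2 for a, p in zip(actual, pred)) / n)

def r2_score(actual, pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    mean_a = sum(actual) / len(actual)
    ss_res = sum((a - p) ** 2 for a, p in zip(actual, pred))
    ss_tot = sum((a - mean_a) ** 2 for a in actual)
    return 1.0 - ss_res / ss_tot

# Toy flow-field values at a few grid points (illustrative only)
actual = [0.10, 0.25, 0.40, 0.30]
pred = [0.12, 0.24, 0.38, 0.33]
print(r2_score(actual, pred), rmse(actual, pred))
```

An R2 of 0.884 means the model explains about 88% of the variance in the CFD fields; the RMSE of 0.047 is in the same (normalized) units as the flow variable itself.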
Pub Date: 2023-02-20, DOI: 10.1109/ICAIIC57133.2023.10067008
Sabrina Adinda Sari, Wikky Fawwaz Al Maki
The face is one of the biometrics used to learn information about a person, such as gender. Gender classification research is expanding daily as a result of its importance and its use in many sectors, such as forensics, security, and business. However, to protect themselves and stop the spread of Covid-19 during the epidemic, everyone must wear a face mask. Because many crucial facial features that help determine a person's gender are obscured by masks, wearing one creates a problem for gender classification systems. Suitable hyperparameters are also required to obtain optimal performance. Therefore, the objective of this study is to develop a gender classification system for mask-covered faces using a novel technique that combines several features of the Gray Level Co-occurrence Matrix (GLCM), which are then fed into a Bagging classifier. A Hybrid Bat Algorithm (HBA) is used to optimize the Bagging hyperparameters. With 97% accuracy, precision, recall, and F1-score, the proposed model is demonstrated to perform better than before the hyperparameters were tuned using HBA.
Title: Masked Face Images Based Gender Classification using Hybrid Bat Algorithm Optimized Bagging
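The GLCM feature extraction at the heart of the pipeline above counts how often pairs of gray levels co-occur at a fixed pixel offset, then summarizes the matrix with scalar texture features. A minimal sketch for a horizontal offset and the contrast feature (the tiny image and single feature are illustrative; the paper combines several GLCM features):

```python
def glcm(image, levels, dx=1):
    """Gray Level Co-occurrence Matrix: counts of gray-level pairs (a, b)
    occurring dx pixels apart horizontally, for an image quantized to
    `levels` gray levels (values 0..levels-1)."""
    m = [[0] * levels for _ in range(levels)]
    for row in image:
        for a, b in zip(row, row[dx:]):
            m[a][b] += 1
    return m

def contrast(m):
    """GLCM contrast feature: sum over (i - j)^2 weighted by co-occurrence."""
    return sum((i - j) ** 2 * m[i][j]
               for i in range(len(m)) for j in range(len(m)))

img = [[0, 1, 1],
       [1, 2, 0]]
print(contrast(glcm(img, levels=3)))  # pairs (0,1),(1,1),(1,2),(2,0) -> 1+0+1+4 = 6
```

Scalar features like this, computed on the visible (unmasked) face region, form the vector fed to the Bagging classifier whose hyperparameters HBA then tunes.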
Pub Date: 2023-02-20, DOI: 10.1109/ICAIIC57133.2023.10067013
Mrinmoy Sarker Turja, Tae-Ho Kwon, Hyoungkeun Kim, Ki-Doo Kim
Diabetes has recently become a more serious disease; almost every family has at least one diabetic. Patients have to monitor their blood glucose levels regularly, and using an invasive device can be painful and less reliable, because blood glucose levels fluctuate strongly with food intake. In contrast, the HbA1c level does not fluctuate as much as blood glucose. Therefore, in this study, an XGBoost calibration considering only the important features for Monte-Carlo-simulation-based noninvasive HbA1c estimation with PPG signals is proposed. After selecting the 13 most important of the 45 features, the model achieved a Pearson's r value of 98.90%.
Title: XGBoost Calibration Considering Feature Importance for Noninvasive HbA1c Estimation Using PPG Signals
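The two reported steps, keeping only the top-ranked features by importance and evaluating agreement with Pearson's r, can be sketched directly. The feature names and importance scores below are hypothetical stand-ins, not the paper's PPG features:

```python
import math

def select_top_features(importances, k):
    """Return the k feature names with the highest importance scores
    (as produced by, e.g., a trained gradient-boosting model)."""
    return sorted(importances, key=importances.get, reverse=True)[:k]

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical PPG-derived feature importances; keep the top 2 of 3
top = select_top_features({"ac_amp": 0.42, "dc_level": 0.05, "rise_time": 0.31}, k=2)
print(top)  # ['ac_amp', 'rise_time']
```

In the study, the same selection idea reduces 45 candidate features to 13 before retraining, and Pearson's r between estimated and reference HbA1c measures the calibration quality.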
Pub Date: 2023-02-20, DOI: 10.1109/ICAIIC57133.2023.10067066
Yeoun Chan Kim, Pankaj Agarwal
Knowledge tracing and learning path optimization are active research fields in AI-assisted education. The purpose of knowledge tracing is to model a student's knowledge state of a concept and to predict the probability of correctly answering the next question. Using this modeling of a student's knowledge state, learning path optimization technologies recommend a personalized learning path for efficient learning. These two research fields are typically implemented in learning management systems for individual learning. In this paper, a method for using knowledge tracing and learning path optimization in a group learning environment is suggested. The group score prediction model predicts the number of students who will answer their next question correctly by utilizing a one-dimensional convolutional neural network and fully connected layers. The model is adopted in a group score prediction system in which instructors utilize the model's output to create a question set corresponding to their strategy, and students' responses are used to re-train and evaluate the model.
Title: AI in Classroom: Group Score Prediction System
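The one-dimensional convolution at the core of the prediction model slides a learned kernel over a sequence, for example a student's history of per-question correctness, to extract local response patterns. A minimal valid-mode sketch (the sequence, kernel, and its values are illustrative; a trained network would learn many such kernels and stack fully connected layers on top):

```python
def conv1d(seq, kernel):
    """Valid-mode 1-D convolution (cross-correlation, as in most deep
    learning frameworks) of a sequence with a kernel."""
    k = len(kernel)
    return [sum(seq[i + j] * kernel[j] for j in range(k))
            for i in range(len(seq) - k + 1)]

# A student's correctness history (1 = correct); a 2-tap averaging kernel
features = conv1d([1, 0, 1, 1], [0.5, 0.5])
print(features)  # [0.5, 0.5, 1.0]
```

Aggregating such per-student features across the class is what lets the model output a single group-level count of students expected to answer the next question correctly.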
Pub Date: 2023-02-20, DOI: 10.1109/ICAIIC57133.2023.10066981
Alexander Nurenie, Y. Heryadi, Lukas, W. Suparta, Yulyani Arifin
Surveillance server technology has grown with new, effective, extra features and has become more human friendly, yet humans still have to deal with large amounts of data: they cannot view and collect the data in a short time, and analyzing and playing back video or pictures to determine machine, human, vehicle, or environment issues and performance takes time. Surveillance server systems now have the ability to perform face recognition, face detection, human detection, motion detection, and license plate recognition. The authors perform this study, which has not been done before, to determine the efficacy of the LSTM (Long Short Term Memory) in predicting human behavior from face detection on a server surveillance system. Log view data with a total of 91501 face detection records, downloaded from 10/18/2022 to 11/9/2022, are processed with Python and used for training, so that LSTM time-series prediction can forecast varying future human activities, including the number of daily activities, the days with the highest and lowest numbers, and the maximum and minimum daily counts. From the results of this study, it was found that the method helps identify the days with the lowest and the highest numbers of human activities, so that the owner can predict from the data sequence when service should be provided because human activity is high in a certain area or on a certain day; it can also find the maximum or minimum human count day by day and compare different dates and locations. The authors will continue more in-depth research on other data related to prediction with deep learning surveillance server systems and their interaction with human and vehicle behavior in future studies.
Title: Predicting Human Activity with LSTM Face Detection on Server Surveillance System
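Before any LSTM forecasting, the study's log data must be aggregated into a daily time series: count face detections per calendar day and identify the busiest and quietest days. A minimal sketch of that preprocessing step; the timestamp format and log values are illustrative assumptions, not the actual surveillance log schema:

```python
from collections import Counter
from datetime import datetime

def daily_counts(timestamps):
    """Count face-detection log entries per calendar day.
    Assumes (hypothetically) MM/DD/YYYY HH:MM timestamp strings."""
    return Counter(datetime.strptime(t, "%m/%d/%Y %H:%M").date()
                   for t in timestamps)

logs = ["10/18/2022 09:01", "10/18/2022 09:05", "10/19/2022 14:30"]
counts = daily_counts(logs)
busiest = max(counts, key=counts.get)   # day with the most detections
quietest = min(counts, key=counts.get)  # day with the fewest detections
```

The resulting per-day count sequence is exactly the kind of univariate time series an LSTM can be trained on to forecast future activity levels.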