
Latest publications: 2023 International Conference on Artificial Intelligence in Information and Communication (ICAIIC)

Identification of Misogyny on Social Media in Indonesian Using Bidirectional Encoder Representations From Transformers (BERT)
Bagas Tri Wibowo, Dade Nurjanah, Hani Nurrahmi
Misogyny is behavior that expresses hatred or dislike of women. Text classification can be used to identify misogynistic text. One text classification method that is currently popular and has proven good performance is Bidirectional Encoder Representations from Transformers (BERT). Fine-tuning is a method that transfers knowledge from a trained model to a new model to complete a new task. This study focuses on building a misogyny identification model on the IndoBERT pre-trained model provided by IndoNLU. The best misogyny identification model achieved an accuracy of 83.74%, and under K-fold cross-validation the average validation accuracy was 77.86%.
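The K-fold protocol mentioned above can be sketched in plain Python; the fold-splitting scheme and the per-fold accuracies below are illustrative assumptions, not the paper's actual data:

```python
def kfold_indices(n_samples, k):
    """Split range(n_samples) into k contiguous folds of near-equal size."""
    fold_sizes = [n_samples // k + (1 if i < n_samples % k else 0) for i in range(k)]
    folds, start = [], 0
    for size in fold_sizes:
        folds.append(list(range(start, start + size)))
        start += size
    return folds

def cross_validate(accuracy_per_fold):
    """Average the per-fold validation accuracies into a single score."""
    return sum(accuracy_per_fold) / len(accuracy_per_fold)

# Hypothetical: 3 folds over 10 samples, then average of made-up fold accuracies.
folds = kfold_indices(10, 3)
avg = cross_validate([0.80, 0.76, 0.78])
```

In practice each fold would be held out in turn while the model is fine-tuned on the remaining folds; only the averaging step is shown here.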
DOI: 10.1109/ICAIIC57133.2023.10067106 (published 2023-02-20)
Citations: 0
Indoor Space Flow Analysis Based on Deep Learning
Chang Woo Choi, Hyo-eun Kang, Yoonyoung Hong, Yong Su Kim, Guem Bo Kim, Aji Teguh Prihatno, Jang Hyun Ji, Seungdo Hong, Ho Won Kim
It is essential to perform flow analysis in all spaces where people live: for example, designing the shape of an airplane wing by analyzing the flow over it, or finding an appropriate air-conditioner installation location by analyzing the flow as a function of the unit's position in an indoor space. In this study, we propose a deep learning model that performs real-time flow analysis for an indoor space, which is relatively smaller than an outdoor space. Computational Fluid Dynamics (CFD), the traditional method used for flow analysis, is not suitable for this task because it takes a long time to derive simulation results. Thus, the application of deep learning to flow analysis is considered in the present study, because deep learning techniques for physics, i.e., fluid mechanics and thermodynamics, can be applied to real spaces. We constructed a deep learning model based on the TransUnet model, which can learn data relationships and capture spatial information. Unlike the existing TransUnet model, our model contains a dense layer to reflect operating and spatial information. Train and test data were collected using the ANSYS FLUENT commercial program. On 11 test cases, the average R2 score between the actual and predicted values was 0.884 and the RMSE was 0.047, which are significant results. We used the image of the entire space as well as a cross-section to see how similar the predicted values were to the actual ones; although a slight error occurred inside the space, it was confirmed that the flow tendency was accurately learned under the given operating conditions. Flow analysis through simulation based on existing numerical analysis methods requires a minimum of 8 hours of processing, whereas our proposed deep learning model requires less than 3 seconds, significantly reducing the time cost of flow analysis.
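The two metrics reported above, R2 and RMSE, have standard definitions that can be computed as follows; this is a generic sketch, not the authors' evaluation code:

```python
import math

def rmse(actual, predicted):
    """Root-mean-square error between actual and predicted values."""
    n = len(actual)
    return math.sqrt(sum((a - p) ** 2 for a, p in zip(actual, predicted)) / n)

def r2_score(actual, predicted):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    mean_a = sum(actual) / len(actual)
    ss_res = sum((a - p) ** 2 for a, p in zip(actual, predicted))
    ss_tot = sum((a - mean_a) ** 2 for a in actual)
    return 1 - ss_res / ss_tot
```

An R2 of 1.0 means a perfect fit; the paper's 0.884 average over 11 cases indicates the predictions track the CFD results closely.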
DOI: 10.1109/ICAIIC57133.2023.10067105 (published 2023-02-20)
Citations: 0
Proposal of Docker and Kubernetes Direction through the Event Timeline of Kubernetes
Seungchan Woo, Jong-Hyouk Lee
Modern developers typically run their workloads in cloud-native environments such as Docker and Kubernetes. Docker is a platform that runs and manages containers; with its birth, interest in container technology has grown. As one of the container orchestration tools that control and manage containers running on multiple hosts, Kubernetes holds a very large share and is used by many cloud companies, making it the de facto standard among practical container orchestration tools. Therefore, in this paper, by analyzing the Kubernetes event timeline, we present the future direction of Kubernetes and Docker, the key tools of the cloud-native environment.
DOI: 10.1109/ICAIIC57133.2023.10066988 (published 2023-02-20)
Citations: 0
GNN Link Prediction for Time-Triggered Systems
Carlos Lua, Ye Zhang, Omar Hekal, Daniel Onwuchekwa, R. Obermaisser
Research on graph neural networks (GNNs) has gained increasing popularity recently. The GNN is considered a powerful tool for solving machine learning tasks that require dealing with irregular topologies such as graph data. Meanwhile, solving the scheduling problems of time-triggered systems has been debated for a long time. Although several algorithms have been proposed to solve this problem, none has considered exploiting GNNs, partially or wholly, to solve time-triggered scheduling. In this work, we propose an approach for dynamic adaptation in time-triggered systems using GNNs: we solve scheduling problems for time-triggered systems by transforming job allocation problems into link prediction tasks. The preliminary results show that GNNs have promising potential for job allocation problems in time-triggered systems.
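One minimal way to read "job allocation as link prediction" is to score every job-node pair from learned embeddings and assign each job along its highest-scoring link; the dot-product scorer and the embeddings below are illustrative assumptions, not the paper's GNN:

```python
def dot(u, v):
    """Inner product of two equal-length embedding vectors."""
    return sum(a * b for a, b in zip(u, v))

def predict_links(job_embeddings, node_embeddings):
    """Score each (job, node) pair; assign every job to its best-scoring node."""
    assignment = {}
    for job, ju in job_embeddings.items():
        scores = {node: dot(ju, nv) for node, nv in node_embeddings.items()}
        assignment[job] = max(scores, key=scores.get)
    return assignment
```

In a real GNN the embeddings would come from message passing over the system graph; here they are fixed toy vectors.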
DOI: 10.1109/ICAIIC57133.2023.10066960 (published 2023-02-20)
Citations: 0
Double Deep Q-Learning based Backhaul Spectrum Allocation in Integrated Access and Backhaul Network
Jeonghun Park, Heetae Jin, Jaehan Joo, Geonho Choi, Suk Chan Kim
In the fifth-generation (5G) network, mmWave has been utilized to cope with the demand for extremely high data rates. However, the harsh propagation characteristics of the mmWave signal limit network coverage, thus requiring network densification. Under this circumstance, 3GPP has introduced the Integrated Access and Backhaul (IAB) architecture for cost-effective network deployment and operation. Contrary to traditional network architectures using wired backhaul links, IAB uses wireless backhaul links to forward data traffic. This feature improves spectrum utilization and cost efficiency. However, due to the dynamic, time-varying environment of the IAB network, finding a proper resource allocation strategy is a challenging issue. In this paper, we formulate the backhaul spectrum allocation problem to maximize user sum capacity, then propose a double deep Q-learning-based backhaul spectrum allocation strategy. The simulation results show that the proposed reinforcement learning-based spectrum allocation achieves 20% higher user sum capacity than static rule-based spectrum allocation.
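The double deep Q-learning update that underlies such a strategy decouples action selection (online network) from action evaluation (target network), which reduces the overestimation bias of vanilla DQN. A minimal sketch of the target computation, with all Q-values as plain lists:

```python
def double_dqn_target(reward, next_q_online, next_q_target, gamma, done):
    """Double DQN target: the online net picks the next action,
    the target net evaluates it. Terminal states bootstrap nothing."""
    if done:
        return reward
    best_action = max(range(len(next_q_online)), key=next_q_online.__getitem__)
    return reward + gamma * next_q_target[best_action]
```

The training loop (not shown) would regress the online network's Q(s, a) toward this target and periodically copy online weights into the target network.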
DOI: 10.1109/ICAIIC57133.2023.10067029 (published 2023-02-20)
Citations: 1
Enhanced-feature pyramid network for semantic segmentation
Van Toan Quyen, Jong Hyuk Lee, Min Young Kim
Semantic segmentation is a complicated task when strict object boundary accuracy is required. Autonomous driving applications have to handle a wide range of object sizes in street scenes, so a single field of view is not suitable for extracting input features. The feature pyramid network (FPN) is an effective method for computer vision tasks such as object detection and semantic segmentation. Its architecture consists of a bottom-up pathway and a top-down pathway; based on this structure, we can obtain rich spatial information from the largest layer and extract rich segmentation information from lower-scale features. The traditional FPN efficiently captures different object sizes by using multiple receptive fields and then predicts the output from the concatenated features. This final feature combination is not ideal, as it burdens the hardware with heavy computation and reduces the semantic information. In this paper, we propose multiple predictions for semantic segmentation. Instead of combining four feature scales together, the proposed method processes the three lower scales separately as contextual contributors and the largest features as the coarser-information branch. Each contextual feature is concatenated with the coarse branch to generate an individual prediction. By deploying this architecture, a single prediction effectively segments objects of a specific size range. Finally, the score maps are fused together in order to gather the prominent weights from the different predictions. A series of experiments validates the efficiency on various open data sets: we achieved good results of 76.4% mIoU at 52 FPS on Cityscapes and 43.6% mIoU on Mapillary Vistas.
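The final fusion step, gathering prominent weights from the per-branch predictions, can be illustrated as an element-wise combination of score maps; plain averaging is an assumption here, not necessarily the paper's exact fusion rule:

```python
def fuse_score_maps(score_maps):
    """Element-wise average of per-branch score maps of identical spatial size.
    Each map is a list of rows; the fused map has the same shape."""
    n = len(score_maps)
    h, w = len(score_maps[0]), len(score_maps[0][0])
    return [[sum(m[i][j] for m in score_maps) / n for j in range(w)]
            for i in range(h)]
```

In a real network each map would be a per-class logit tensor and the fused map would be followed by an argmax over classes.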
DOI: 10.1109/ICAIIC57133.2023.10067062 (published 2023-02-20)
Citations: 0
A Reinforcement Learning Assisted Relative Distance based MAC in Vehicular Networks
Yafeng Deng, Young-June Choi
Many efforts have been made to improve the performance of vehicle-to-vehicle (V2V) services, such as basic safety messages (BSM) and collision avoidance warnings. However, high dynamics, such as topology and channel conditions, still pose big challenges to resource allocation tasks in vehicular networks. A previous work, relative distance based MAC [1], was proposed to address merging collisions, but the dynamics cannot be fully addressed because thresholds are used. Therefore, we adapt a dueling deep Q-network [2] to tune the threshold, building on the aforementioned work, to further address merging collisions. The simulation results demonstrate the improvement of the proposed algorithm.
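A dueling deep Q-network splits the Q-function into a state value and per-action advantages before recombining them; a minimal sketch of the standard aggregation step (generic, not the paper's network):

```python
def dueling_q_values(value, advantages):
    """Dueling aggregation: Q(s, a) = V(s) + A(s, a) - mean_a A(s, a).
    Subtracting the mean advantage keeps V and A identifiable."""
    mean_adv = sum(advantages) / len(advantages)
    return [value + a - mean_adv for a in advantages]
```

In the full architecture, `value` and `advantages` are the outputs of two separate heads sharing a common feature extractor.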
DOI: 10.1109/ICAIIC57133.2023.10067126 (published 2023-02-20)
Citations: 0
XGBoost Calibration Considering Feature Importance for Noninvasive HbA1c Estimation Using PPG Signals
Mrinmoy Sarker Turja, Tae-Ho Kwon, Hyoungkeun Kim, Ki-Doo Kim
Diabetes has recently become a more serious disease; almost every family has at least one diabetic. Patients have to monitor their blood glucose levels regularly, and using an invasive device can be painful and less reliable, because blood glucose levels fluctuate strongly with food intake. In contrast, the HbA1c level does not fluctuate as much as blood glucose. Therefore, in this study, an XGBoost calibration considering only the important features was proposed for Monte-Carlo-simulation-based noninvasive HbA1c estimation from PPG signals. After considering the important 13 of the 45 features, the model achieved a Pearson's r value of 98.90%.
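Keeping only "the important 13 of the 45 features" amounts to ranking features by importance score and truncating the ranking; a generic sketch (the scores below are made up, and in practice XGBoost's `feature_importances_` would supply them):

```python
def select_top_features(importances, k):
    """Keep the k feature names with the highest importance scores."""
    ranked = sorted(importances, key=importances.get, reverse=True)
    return ranked[:k]

# Hypothetical importance scores for three features; the paper would
# have 45 such entries and k = 13.
top = select_top_features({"a": 0.1, "b": 0.5, "c": 0.3}, 2)
```

The model is then retrained on the selected columns only, which is what the calibration step above evaluates.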
DOI: 10.1109/ICAIIC57133.2023.10067013 (published 2023-02-20)
Citations: 0
AI in Classroom: Group Score Prediction System
Yeoun Chan Kim, Pankaj Agarwal
Knowledge tracing and learning path optimization are active research fields in AI-assisted education. The purpose of knowledge tracing is to model a student's knowledge state of a concept and to predict the probability of correctly answering the next question. Building on this modeling of a student's knowledge state, learning path optimization recommends a personalized learning path for efficient learning. Both research fields are implemented in learning management systems for individual learning. In this paper, a method for applying knowledge tracing and learning path optimization in a group learning environment is suggested. The group score prediction model predicts the number of students who will answer their next question correctly by utilizing a one-dimensional convolutional neural network and fully connected layers. The model is adopted in a group score prediction system in which instructors utilize the model's output to create a question set corresponding to their strategy, and students' responses are used to re-train and evaluate the model.
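The one-dimensional convolution at the core of such a model slides a kernel over the input sequence; a minimal valid-mode sketch in plain Python (a framework layer would add channels, padding, and learned weights):

```python
def conv1d(signal, kernel):
    """Valid-mode 1-D convolution (cross-correlation, as deep learning
    frameworks implement it): slide the kernel over the signal."""
    k = len(kernel)
    return [sum(signal[i + j] * kernel[j] for j in range(k))
            for i in range(len(signal) - k + 1)]
```

Applied to a sequence of per-question response features, each output position summarizes a local window of the group's answer history before the fully connected layers produce the score prediction.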
DOI: 10.1109/ICAIIC57133.2023.10067066 (published 2023-02-20)
Citations: 0
Predicting Human Activity with LSTM Face Detection on Server Surveillance System
Alexander Nurenie, Y. Heryadi, Lukas, W. Suparta, Yulyani Arifin
Surveillance server technology has grown with new capabilities: it is effective, feature-rich, and human-friendly, yet humans dealing with large amounts of data cannot view and collect it in a short time, and analyzing and playing back video or pictures to diagnose machine, human, vehicle, or environmental issues and performance takes time. Surveillance server systems now offer face recognition, face detection, human detection, motion detection, and license plate recognition. The authors perform this study, which has not been done before, to determine the efficacy of LSTM (Long Short-Term Memory) face detection on a server surveillance system in predicting human behavior. Log view data comprising 91,501 face detection records, downloaded from 10/18/2022 to 11/9/2022, are processed and trained using Python so that they can be used to predict future human activity via LSTM time-series prediction, including the number of daily activities, the days with the highest and lowest counts, and the maximum and minimum daily counts.
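LSTM time-series prediction of daily activity counts starts by windowing the series into supervised (input, target) pairs; a minimal sketch of that preprocessing step (the window length is an arbitrary choice here, not the paper's setting):

```python
def make_windows(series, window):
    """Turn a 1-D series into (input_window, next_value) supervised pairs
    for sequence-model training."""
    return [(series[i:i + window], series[i + window])
            for i in range(len(series) - window)]

# Hypothetical daily face-detection counts windowed with length 2.
pairs = make_windows([120, 95, 130, 110], 2)
```

Each pair feeds the LSTM one window of past daily counts and asks it to predict the following day's count.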
{"title":"Predicting Human Activity with LSTM Face Detection on Server Surveillance System","authors":"Alexander Nurenie, Y. Heryadi, Lukas, W. Suparta, Yulyani Arifin","doi":"10.1109/ICAIIC57133.2023.10066981","DOIUrl":"https://doi.org/10.1109/ICAIIC57133.2023.10066981","url":null,"abstract":"Surveillance server technology has grown with new technology: it is effective, offers extra new features, and is human friendly, yet operators deal with large amounts of data that cannot be viewed and collected in a short time, and it takes time to analyze and play back video or pictures to determine machine, human, vehicle, or environment issues and performance. Surveillance server systems now have the ability to perform face recognition, face detection, human detection, motion detection, and license plate recognition. The authors perform this study, which has never been done before, to determine the efficacy of LSTM (Long Short-Term Memory) in predicting human activity from face detection on a server surveillance system. Log-view data totaling 91,501 face-detection records, downloaded from 10/18/2022 to 11/9/2022, are processed and trained using Python so the model can predict future human activity with time-series LSTM prediction, including the number of daily activities and the days with the highest and lowest activity counts. The results of this study help identify the days with the lowest and highest numbers of human activities, so that the owner can predict, from the sequence of the data, when service should be provided because human activity is high in a certain area or on a certain day; the system can also find the maximum or minimum human count day by day and compare different dates and locations. In future studies the authors will continue more in-depth research on other prediction-related data, with deep learning surveillance server systems interacting with human and vehicle behavior.","PeriodicalId":105769,"journal":{"name":"2023 International Conference on Artificial Intelligence in Information and Communication (ICAIIC)","volume":"11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-02-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128621247","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
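The LSTM paper's pipeline, as the abstract describes it, turns daily face-detection counts into time-series training samples and then reports the busiest and quietest days. The sketch below shows those two preparation steps under assumed data: the daily counts and window size are invented for illustration, only the 91,501-detection total and the 10/18/2022–11/9/2022 range come from the abstract, and the actual predictor in the paper is an LSTM rather than the plain window builder shown here.

```python
# Sketch: sliding-window dataset preparation for a time-series model,
# plus the highest/lowest-activity-day query the abstract describes.
# The daily counts below are hypothetical, not the paper's data.

def make_windows(counts, window):
    """Split a count series into (input window, next value) training pairs."""
    return [(counts[i:i + window], counts[i + window])
            for i in range(len(counts) - window)]

def activity_extremes(daily):
    """Return the (day, count) pairs with the most and fewest detections."""
    busiest = max(daily.items(), key=lambda kv: kv[1])
    quietest = min(daily.items(), key=lambda kv: kv[1])
    return busiest, quietest

# Hypothetical daily detection counts for the first days of the log range.
daily = {"2022-10-18": 410, "2022-10-19": 388, "2022-10-20": 512,
         "2022-10-21": 95,  "2022-10-22": 467}

pairs = make_windows(list(daily.values()), window=3)
print(pairs[0])                 # prints ([410, 388, 512], 95)
print(activity_extremes(daily)) # busiest 2022-10-20, quietest 2022-10-21
```

Each `(window, next value)` pair would be fed to the LSTM as one training sample; the extremes query is what lets the owner schedule service for high-activity days.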
Journal
2023 International Conference on Artificial Intelligence in Information and Communication (ICAIIC)