Permana Langgeng Wicaksono Ellwid Putra, Muhammad Naufal, Erwin Yudi Hidayat
{"title":"用于人群预测的 MobileNet 架构优化器比较研究","authors":"Permana langgeng wicaksono ellwid Putra, Muhammad Naufal, Erwin Yudi Hidayat","doi":"10.30591/jpit.v8i3.5703","DOIUrl":null,"url":null,"abstract":"Artificial intelligence technology has grown quickly in recent years. Convolutional neural network (CNN) technology has also been developed as a result of these developments. However, because convolutional neural networks entail several calculations and the optimization of numerous matrices, their application necessitates the utilization of appropriate technology, such as GPUs or other accelerators. Applying transfer learning techniques is one way to get around this resource barrier. MobileNetV2 is an example of a lightweight convolutional neural network architecture that is appropriate for transfer learning. The objective of the research is to compare the performance of SGD and Adam using the MobileNetv2 convolutional neural network architecture. Model training uses a learning rate of 0.0001, batch size of 32, and binary cross-entropy as the loss function. The training process is carried out for 100 epochs with the application of early stop and patience for 10 epochs. Result of this research is both models using Adam's optimizer and SGD show good capability in crowd classification. However, the model with the SGD optimizer has a slightly superior performance even with less accuracy than model with Adam optimizer. Which is model with Adam has accuracy 96%, while the model with SGD has 95% accuracy. This is because in the graphical results model with the SGD optimizer shows better stability than the model with the Adam optimizer. The loss graph and accuracy graph of the SGD model are more consistent and tend to experience lower fluctuations than the Adam model.","PeriodicalId":503683,"journal":{"name":"Jurnal Informatika: Jurnal Pengembangan IT","volume":"27 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2023-09-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"A Comparative Study of MobileNet Architecture Optimizer for Crowd Prediction\",\"authors\":\"Permana langgeng wicaksono ellwid Putra, Muhammad Naufal, Erwin Yudi Hidayat\",\"doi\":\"10.30591/jpit.v8i3.5703\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Artificial intelligence technology has grown quickly in recent years. Convolutional neural network (CNN) technology has also been developed as a result of these developments. However, because convolutional neural networks entail several calculations and the optimization of numerous matrices, their application necessitates the utilization of appropriate technology, such as GPUs or other accelerators. Applying transfer learning techniques is one way to get around this resource barrier. MobileNetV2 is an example of a lightweight convolutional neural network architecture that is appropriate for transfer learning. The objective of the research is to compare the performance of SGD and Adam using the MobileNetv2 convolutional neural network architecture. Model training uses a learning rate of 0.0001, batch size of 32, and binary cross-entropy as the loss function. The training process is carried out for 100 epochs with the application of early stop and patience for 10 epochs. Result of this research is both models using Adam's optimizer and SGD show good capability in crowd classification. 
However, the model with the SGD optimizer has a slightly superior performance even with less accuracy than model with Adam optimizer. Which is model with Adam has accuracy 96%, while the model with SGD has 95% accuracy. This is because in the graphical results model with the SGD optimizer shows better stability than the model with the Adam optimizer. The loss graph and accuracy graph of the SGD model are more consistent and tend to experience lower fluctuations than the Adam model.\",\"PeriodicalId\":503683,\"journal\":{\"name\":\"Jurnal Informatika: Jurnal Pengembangan IT\",\"volume\":\"27 1\",\"pages\":\"\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2023-09-17\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Jurnal Informatika: Jurnal Pengembangan IT\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.30591/jpit.v8i3.5703\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Jurnal Informatika: Jurnal Pengembangan IT","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.30591/jpit.v8i3.5703","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0
摘要
近年来,人工智能技术发展迅速。卷积神经网络(CNN)技术也因这些发展而得到了发展。然而,由于卷积神经网络需要进行多次计算并对大量矩阵进行优化,因此其应用需要利用适当的技术,如 GPU 或其他加速器。应用迁移学习技术是绕过这一资源障碍的方法之一。MobileNetV2 就是适合迁移学习的轻量级卷积神经网络架构的一个例子。本研究的目的是比较使用 MobileNetv2 卷积神经网络架构的 SGD 和 Adam 的性能。模型训练使用的学习率为 0.0001,批量大小为 32,损失函数为二元交叉熵。训练过程进行了 100 个历元,并提前停止和耐心等待 10 个历元。研究结果表明,使用亚当优化器和 SGD 的模型在人群分类中都表现出了良好的能力。不过,与使用亚当优化器的模型相比,使用 SGD 优化器的模型即使准确率较低,性能也略胜一筹。其中,使用 Adam 优化器的模型准确率为 96%,而使用 SGD 优化器的模型准确率为 95%。这是因为在图形结果中,使用 SGD 优化器的模型比使用 Adam 优化器的模型显示出更好的稳定性。与 Adam 模型相比,SGD 模型的损失图和精确度图更加一致,波动也更小。
A Comparative Study of MobileNet Architecture Optimizer for Crowd Prediction
Artificial intelligence technology has grown rapidly in recent years, and convolutional neural network (CNN) technology has advanced along with it. However, because convolutional neural networks involve many computations and the optimization of numerous matrices, applying them requires appropriate hardware, such as GPUs or other accelerators. Applying transfer learning techniques is one way to work around this resource barrier. MobileNetV2 is an example of a lightweight convolutional neural network architecture that is well suited to transfer learning. The objective of this research is to compare the performance of the SGD and Adam optimizers using the MobileNetV2 convolutional neural network architecture. Model training uses a learning rate of 0.0001, a batch size of 32, and binary cross-entropy as the loss function. Training is carried out for 100 epochs with early stopping and a patience of 10 epochs. The results show that the models trained with the Adam and SGD optimizers both classify crowds well. However, the model with the SGD optimizer performs slightly better overall despite a lower accuracy than the model with the Adam optimizer: the Adam model reaches 96% accuracy, while the SGD model reaches 95%. This is because the training curves of the SGD model are more stable than those of the Adam model; its loss and accuracy curves are more consistent and fluctuate less.
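For illustration, the training setup described in the abstract (MobileNetV2 transfer learning, learning rate 0.0001, batch size 32, binary cross-entropy, 100 epochs with early stopping and patience 10, Adam vs. SGD) could be sketched as follows. This is a minimal, hypothetical sketch assuming a TensorFlow/Keras implementation and a binary crowd/non-crowd image dataset; the framework, input size, dataset layout, and helper names are assumptions, not details taken from the paper.

```python
# Hypothetical sketch, assuming TensorFlow/Keras and a binary
# "crowd" vs. "not crowd" image dataset (not confirmed by the paper).
import tensorflow as tf

IMG_SIZE = (224, 224)   # standard MobileNetV2 input size (assumption)
BATCH_SIZE = 32         # batch size stated in the abstract


def build_model():
    # MobileNetV2 backbone pretrained on ImageNet, frozen for transfer learning.
    base = tf.keras.applications.MobileNetV2(
        input_shape=IMG_SIZE + (3,), include_top=False, weights="imagenet")
    base.trainable = False

    inputs = tf.keras.Input(shape=IMG_SIZE + (3,))
    x = tf.keras.applications.mobilenet_v2.preprocess_input(inputs)
    x = base(x, training=False)
    x = tf.keras.layers.GlobalAveragePooling2D()(x)
    outputs = tf.keras.layers.Dense(1, activation="sigmoid")(x)
    return tf.keras.Model(inputs, outputs)


def train(optimizer_name, train_ds, val_ds):
    model = build_model()
    # Learning rate 0.0001 and binary cross-entropy, as stated in the abstract.
    if optimizer_name == "adam":
        optimizer = tf.keras.optimizers.Adam(learning_rate=1e-4)
    else:
        optimizer = tf.keras.optimizers.SGD(learning_rate=1e-4)
    model.compile(optimizer=optimizer,
                  loss="binary_crossentropy",
                  metrics=["accuracy"])
    # 100 epochs with early stopping and a patience of 10 epochs.
    early_stop = tf.keras.callbacks.EarlyStopping(
        monitor="val_loss", patience=10, restore_best_weights=True)
    history = model.fit(train_ds, validation_data=val_ds,
                        epochs=100, callbacks=[early_stop])
    return model, history


# Example usage (dataset paths are placeholders):
# train_ds = tf.keras.utils.image_dataset_from_directory(
#     "data/train", image_size=IMG_SIZE, batch_size=BATCH_SIZE, label_mode="binary")
# val_ds = tf.keras.utils.image_dataset_from_directory(
#     "data/val", image_size=IMG_SIZE, batch_size=BATCH_SIZE, label_mode="binary")
# for name in ("adam", "sgd"):
#     train(name, train_ds, val_ds)
```

Running the same training loop twice, changing only the optimizer, isolates the optimizer as the single varying factor, which matches the comparison the abstract describes; the accuracy and stability differences would then be read off the two models' loss and accuracy curves.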