Permana Langgeng Wicaksono Ellwid Putra, Muhammad Naufal, Erwin Yudi Hidayat
Artificial intelligence technology has grown quickly in recent years, and convolutional neural network (CNN) technology has developed along with it. However, because convolutional neural networks involve many calculations and the optimization of numerous matrices, applying them requires appropriate hardware such as GPUs or other accelerators. Applying transfer learning techniques is one way to get around this resource barrier, and MobileNetV2 is an example of a lightweight convolutional neural network architecture suitable for transfer learning. The objective of this research is to compare the performance of the SGD and Adam optimizers using the MobileNetV2 convolutional neural network architecture. Model training uses a learning rate of 0.0001, a batch size of 32, and binary cross-entropy as the loss function. Training runs for up to 100 epochs with early stopping and a patience of 10 epochs. The results show that both models, one trained with the Adam optimizer and one with SGD, classify crowds well. However, the model with the SGD optimizer performs slightly better overall despite its lower accuracy: the Adam model reaches 96% accuracy, while the SGD model reaches 95%. This is because the training curves of the SGD model show better stability than those of the Adam model; its loss and accuracy curves are more consistent and fluctuate less.
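As a rough illustration of the setup the abstract describes, the sketch below builds a frozen MobileNetV2 base in Keras and trains it once with Adam and once with SGD at a learning rate of 0.0001, batch size 32, binary cross-entropy, and early stopping with a patience of 10 epochs. The dataset directories and the 224x224 input size are assumptions for illustration, not details taken from the paper.

```python
import tensorflow as tf

def build_model(optimizer):
    base = tf.keras.applications.MobileNetV2(
        input_shape=(224, 224, 3), include_top=False, weights="imagenet")
    base.trainable = False  # transfer learning: freeze the convolutional base
    model = tf.keras.Sequential([
        tf.keras.layers.Rescaling(1.0 / 127.5, offset=-1),  # MobileNetV2 expects inputs in [-1, 1]
        base,
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(1, activation="sigmoid"),  # binary output: crowded vs. not crowded
    ])
    model.compile(optimizer=optimizer,
                  loss="binary_crossentropy", metrics=["accuracy"])
    return model

# Hypothetical directory layout: crowd_dataset/{train,val}/{crowded,not_crowded}/
train_ds = tf.keras.utils.image_dataset_from_directory(
    "crowd_dataset/train", image_size=(224, 224), batch_size=32)
val_ds = tf.keras.utils.image_dataset_from_directory(
    "crowd_dataset/val", image_size=(224, 224), batch_size=32)

early_stop = tf.keras.callbacks.EarlyStopping(patience=10, restore_best_weights=True)

for name, opt in [("adam", tf.keras.optimizers.Adam(learning_rate=1e-4)),
                  ("sgd", tf.keras.optimizers.SGD(learning_rate=1e-4))]:
    history = build_model(opt).fit(
        train_ds, validation_data=val_ds, epochs=100, callbacks=[early_stop])
```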
{"title":"A Comparative Study of MobileNet Architecture Optimizer for Crowd Prediction","authors":"Permana langgeng wicaksono ellwid Putra, Muhammad Naufal, Erwin Yudi Hidayat","doi":"10.30591/jpit.v8i3.5703","DOIUrl":"https://doi.org/10.30591/jpit.v8i3.5703","url":null,"abstract":"Artificial intelligence technology has grown quickly in recent years. Convolutional neural network (CNN) technology has also been developed as a result of these developments. However, because convolutional neural networks entail several calculations and the optimization of numerous matrices, their application necessitates the utilization of appropriate technology, such as GPUs or other accelerators. Applying transfer learning techniques is one way to get around this resource barrier. MobileNetV2 is an example of a lightweight convolutional neural network architecture that is appropriate for transfer learning. The objective of the research is to compare the performance of SGD and Adam using the MobileNetv2 convolutional neural network architecture. Model training uses a learning rate of 0.0001, batch size of 32, and binary cross-entropy as the loss function. The training process is carried out for 100 epochs with the application of early stop and patience for 10 epochs. Result of this research is both models using Adam's optimizer and SGD show good capability in crowd classification. However, the model with the SGD optimizer has a slightly superior performance even with less accuracy than model with Adam optimizer. Which is model with Adam has accuracy 96%, while the model with SGD has 95% accuracy. This is because in the graphical results model with the SGD optimizer shows better stability than the model with the Adam optimizer. The loss graph and accuracy graph of the SGD model are more consistent and tend to experience lower fluctuations than the Adam model.","PeriodicalId":503683,"journal":{"name":"Jurnal Informatika: Jurnal Pengembangan IT","volume":"27 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-09-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139339363","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Yani Parti Astuti, Egia Rosi Subhiyakto, Indah Wardatunizza, Etika Kartikadarma
Bawen District is one of the sub-districts in Semarang Regency, Central Java. Around 63.29% of its land area is used for agriculture, and the population still uses soil as a planting medium. Soil is a planting medium that plays an important role in plant survival. Because the many soil types have different properties and characteristics, they also require different treatment, so soil classification is needed to manage the soil properly. To facilitate the classification of soil types, Deep Learning technology can be used with images as input, which are then processed using the Convolutional Neural Network (CNN) algorithm. To obtain a model with a high level of accuracy, experiments were carried out on several influential parameters, and the model was evaluated using a confusion matrix, which yields values such as accuracy, precision, recall, and f1-score. Testing shows that the resulting model achieves a training accuracy of 97% with a loss of 0.0880 and a testing accuracy of 95% with a loss of 0.1513.
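A minimal scikit-learn sketch of the confusion-matrix evaluation mentioned above is given below; the soil-type labels are invented placeholders, and in practice y_true and y_pred would come from the trained CNN's test predictions.

```python
from sklearn.metrics import confusion_matrix, classification_report

# Placeholder soil-type labels (assumed classes, not the study's actual ones)
y_true = ["alluvial", "humus", "laterite", "alluvial", "humus", "laterite"]
y_pred = ["alluvial", "humus", "alluvial", "alluvial", "humus", "laterite"]

print(confusion_matrix(y_true, y_pred, labels=["alluvial", "humus", "laterite"]))
# classification_report reports accuracy plus per-class precision, recall, and f1-score
print(classification_report(y_true, y_pred, digits=3))
```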
{"title":"Implementasi Algoritma Convolutional Neural Network (CNN) Untuk Klasifikasi Jenis Tanah Berbasis Android","authors":"Yani Parti Astuti, Egia Rosi Subhiyakto, Indah Wardatunizza, Etika Kartikadarma","doi":"10.30591/jpit.v8i3.5026","DOIUrl":"https://doi.org/10.30591/jpit.v8i3.5026","url":null,"abstract":"Bawen District is one of the sub-districts in Semarang Regency, Central Java. This region has an area of land used for agriculture around 63.29%. In this area the population still uses soil as a planting medium. Soil is one of the planting media which plays an important role for the survival of plants. With so many types of soil that have different properties and characteristics, the treatment of these soils is also different. So it is necessary to have a soil classification to know how to manage the soil properly. To facilitate the classification of soil types, Deep Learning technology can be utilized with images as input which are then processed using the Convolutional Neural Network (CNN) algorithm. In order to get a model that has a high level of accuracy, an experiment was carried out on several influential parameters and an evaluation of the model was carried out using a confusion matrix. The confusion matrix has several values such as accuracy, precision, recall, and f1-score. Tests have been carried out and the results of this study are models that have a training accuracy of 97% with a loss value of 0.0880 and a testing accuracy of 95% with a loss value of 0.1513.","PeriodicalId":503683,"journal":{"name":"Jurnal Informatika: Jurnal Pengembangan IT","volume":"127 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-09-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139339378","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
M. Julkarnain, Erwin Mardinata
A dictionary is one solution for learning vocabulary and translating it. Dictionaries in book form are less effective and efficient, so it is necessary to develop electronic dictionaries in the form of dictionary applications available on smartphones. This study aims to develop an Android-based three-language dictionary application for Bima, Indonesian, and English that includes a speech-to-text feature. The software development method used is Rapid Application Development (RAD), which has four main stages: requirements planning, design, construction, and cutover. Data collection methods in this research include observation, interviews, documentation, and literature study, and the data analysis technique used is qualitative analysis of the data resulting from observations, interviews, and literature studies. The application was built using the Flutter and CodeIgniter frameworks. In the final stage of development, the application's functionality was tested using the black-box method. The results show that the application runs very well; all buttons and features work as they should after fixing bugs and problems found in the final test before launch.
{"title":"Pengembangan Aplikasi Kamus Bahasa Bima-Inggris-Indonesia Menggunakan Rapid Application Development","authors":"M. Julkarnain, Pengembangan Aplikasi, Kamus Bahasa, Erwin Mardinata","doi":"10.30591/jpit.v8i3.5692","DOIUrl":"https://doi.org/10.30591/jpit.v8i3.5692","url":null,"abstract":"The dictionary is one of the solutions for learning about vocabulary and translating it. The use of dictionaries in the form of books is less effective and efficient, so it is necessary to develop electronic dictionaries in the form of dictionary applications available on smartphones. This study aims to develop an Android-based three-language dictionary application, namely Bima, Indonesian, and English, which includes a speech to text feature. The software development method uses the Rapid Application Development (RAD) method. This method has four main stages: requirements planning, design, construction, and cutover. Data collection methods in this research include observation, interviews, documentation, and literature study. The data analysis technique used is qualitative data analysis of data resulting from observations, interviews, and literature studies. The application was built using the Flutter and Codeigniter frameworks. In the final stage of dictionary application development, testing was carried out on the application's functionality using the black box method. The results of the test show that the application runs very well; all buttons and features work as they should after fixing bugs and problems found in the final test before launch.","PeriodicalId":503683,"journal":{"name":"Jurnal Informatika: Jurnal Pengembangan IT","volume":"17 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-09-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139339371","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Termination of employment (PHK) on a large scale has a very significant impact on society and the economy. Mass layoffs have led to an increase in the number of unemployed people, and many who have lost their jobs without a stable source of income struggle to find new ones. This exacerbates the situation in the labor market and increases unemployment. Mass layoffs can also reduce economic activity and consumption. The sentiment analysis carried out here aims to determine public sentiment regarding the phenomenon of mass layoffs currently happening in Indonesia, based on positive and negative categories. In this study, the classification method used is SVM, one of the supervised learning methods in machine learning, with Naïve Bayes used as a comparison method. After classification, the next stage is testing using the k-fold cross-validation method. From the sentiments obtained from Twitter data, there are around 108 positive sentiments and 333 negative sentiments related to mass layoffs, while the test results show an accuracy of up to 84% with the SVM method and up to 74.1% with the Naïve Bayes method.
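The comparison described above could be sketched roughly as follows with scikit-learn, pairing an SVM and a Naïve Bayes classifier with k-fold cross-validation; the sample tweets, the TF-IDF features, and the 3-fold setting are illustrative assumptions rather than the authors' exact pipeline.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

# Placeholder tweets and labels; the real study uses labelled Twitter data on mass layoffs.
texts = ["phk massal bikin susah cari kerja", "kehilangan pekerjaan tanpa pesangon",
         "ekonomi makin berat setelah phk", "semoga cepat dapat pekerjaan baru",
         "ada program pelatihan untuk korban phk", "peluang usaha baru setelah phk"]
labels = ["negative", "negative", "negative", "positive", "positive", "positive"]

for name, clf in [("SVM", SVC(kernel="linear")), ("Naive Bayes", MultinomialNB())]:
    pipe = make_pipeline(TfidfVectorizer(), clf)        # vectorize text, then classify
    scores = cross_val_score(pipe, texts, labels, cv=3)  # k-fold cross-validation (k=3 here)
    print(name, "mean accuracy:", scores.mean())
```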
{"title":"Analisis Sentimen Fenomena PHK Massal Menggunakan Naive Bayes dan Support Vector Machine","authors":"Mohd Amiruddin Saddam, Erno Kurniawan D, I. Indra","doi":"10.30591/jpit.v8i3.4884","DOIUrl":"https://doi.org/10.30591/jpit.v8i3.4884","url":null,"abstract":"Termination of employment (PHK) on a large scale has a very significant impact on society and the economy. Mass layoffs have led to an increase in the number of unemployed people. Many people who have lost their jobs without a stable source of income struggle to find new jobs. This exacerbated the situation on the labor market and increased the number of unemployed people. Mass layoffs can also reduce economic activity and consumption. The sentiment analysis carried out aims to determine public sentiment regarding the phenomenon of mass layoffs that are currently happening in Indonesia based on positive and negative categories. In this study, the classification method used is the SVM method, which is one of the supervised learning methods in machine learning and also uses Nave Bayes as a comparison method. After classification, the next stage is the testing process using the K-fold cross-validation method. From the various sentiments obtained from Twitter data, it can be concluded that there are around 108 positive sentiments and 333 negative sentiments related to mass layoffs, while the results obtained from the test results using the SVM method show an accuracy of up to 84% while using the Nave Bayes method shows an accuracy of up to 74.1 percent","PeriodicalId":503683,"journal":{"name":"Jurnal Informatika: Jurnal Pengembangan IT","volume":"25 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-09-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139339415","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
M. S. Rohman, Galuh Wilujeng Saraswati, N. Winarsih
Location Based Service (LBS) is a service on smartphones that functions as a navigation aid based on the user's position to determine where the user is. LBS uses GPS to obtain geolocation information and sometimes uses Google Maps to display a complete map of the location. However, previous studies have found that Google Maps does not always give the shortest and most accessible routes. To improve the LBS, the Floyd–Warshall algorithm is used because it applies the principle of optimality when computing the total cost of all routes. According to data recorded by the Ministry of Religion of the Republic of Indonesia, there are 1,304 mosques in the City of Semarang, and with this much data it should be easier to find places of worship for Muslims. The mosques most often visited are those on main roads because they are more visible, even though many other mosques are accessible. White-box and black-box testing show that the shortest path to places of worship in the city of Semarang can be found accurately; the Floyd–Warshall algorithm provides shortest-path routes that are more accessible than Google Maps navigation.
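For reference, a minimal Floyd–Warshall implementation is sketched below; the distance matrix is a made-up four-node example, not the Semarang mosque road network.

```python
INF = float("inf")

def floyd_warshall(dist):
    """All-pairs shortest paths over an adjacency matrix of edge weights."""
    n = len(dist)
    d = [row[:] for row in dist]          # copy so the input matrix is untouched
    for k in range(n):                    # k: intermediate node allowed so far
        for i in range(n):
            for j in range(n):
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
    return d

# Example graph: 4 points, INF where there is no direct road
graph = [
    [0,   4, INF, 7],
    [4,   0,   2, INF],
    [INF, 2,   0, 3],
    [7, INF,   3, 0],
]
print(floyd_warshall(graph))  # shortest distance between every pair of points
```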
{"title":"Implementasi Algoritma Floyd Warshall Pada Aplikasi Dewan Masjid Indonesia (Dmi) Kota Semarang Untuk Menentukan Masjid Terdekat","authors":"M. S. Rohman, Galuh Wilujeng Saraswati, N. Winarsih","doi":"10.30591/jpit.v8i3.4895","DOIUrl":"https://doi.org/10.30591/jpit.v8i3.4895","url":null,"abstract":"Location Based Service (LBS) is a service on smartphones that functions as a navigation device based on the user's position to determine the location where the user is. LBS utilizes GPS capabilities in finding geolocation information and sometimes using Google maps to display a complete map of the location. But the results of previous research studies Google Map does not give shortest and accessible routes. Furthermore, to improve work of LBS, Floyd Warshall algorithm is used because the algorithm has the principle of optimality in calculating the total of all routes optimally. According to data recorded by the Ministry of Religion of the Republic of Indonesia there have been 1,304 Mosques in the City of Semarang, but with this much data it should be easier to find places of worship for Muslims. Most mosques that are visited are mosques on the highway because it is more visible even though there are many other mosques that can be accessed. By using the White Box and Black Box tests, finding shortest path to find places of worship in the city of Semarang can be given accurately. The result was the Floyd Warshall algorithm could provide shortest path route and it was more accessible better than Google Map navigation.","PeriodicalId":503683,"journal":{"name":"Jurnal Informatika: Jurnal Pengembangan IT","volume":"52 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-09-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139339364","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Ensuring and fulfilling the needs of the community is a form of government responsibility to reduce existing social inequalities. One of the efforts the government has made is to provide social assistance through the Non-Cash Food Assistance program. However, the process of selecting recipients of social assistance is often not on target. For this reason, it is necessary to build a system that can support decisions in selecting families to receive social assistance. To help the selection process, an appropriate method must be used so that the selection produces recipients who really deserve the assistance. The selection process in this study uses two decision support methods, Fuzzy Logic and Simple Additive Weighting (SAW), and accuracy tests were conducted on both methods against the suitability of recipient eligibility data to see which method has the highest accuracy in selecting social assistance recipients. The results of the accuracy test show that both methods produce the same high level of accuracy in the suitability of prospective recipient eligibility results, namely 100%, which means both methods can be used to determine recipients of social assistance.
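A short sketch of the Simple Additive Weighting (SAW) step is shown below: each criterion is normalized (benefit criteria against the column maximum, cost criteria against the column minimum) and alternatives are ranked by the weighted sum. The criteria, weights, and scores are invented for illustration and are not the study's eligibility data.

```python
def saw_rank(alternatives, weights, benefit):
    """alternatives: {name: [score per criterion]}; benefit[j] True if higher is better."""
    cols = list(zip(*alternatives.values()))   # scores grouped per criterion
    ranked = {}
    for name, scores in alternatives.items():
        ranked[name] = sum(
            w * (s / max(col) if is_benefit else min(col) / s)
            for s, col, w, is_benefit in zip(scores, cols, weights, benefit)
        )
    return sorted(ranked.items(), key=lambda kv: kv[1], reverse=True)

# Hypothetical criteria: monthly income (cost), dependents (benefit), house condition score (benefit)
families = {"A": [2.1, 3, 4], "B": [1.5, 5, 2], "C": [3.0, 2, 5]}
print(saw_rank(families, weights=[0.5, 0.3, 0.2], benefit=[False, True, True]))
```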
{"title":"Analisis Perbandingan Metode Fuzzy Logic Dan Metode SAW Dalam Pemilihan Keluarga Penerima Bantuan Sosial","authors":"Siti Ma'rifatul Latifah, Dwi Agus Diartono","doi":"10.30591/jpit.v8i3.5374","DOIUrl":"https://doi.org/10.30591/jpit.v8i3.5374","url":null,"abstract":"Ensuring and fulfilling the needs of the community isa form of government responsibility to reduce existing socialinequalities. One of the efforts that the government has made isto provide social assistance through the Non-Cash FoodAssistance program. However, the process of selecting recipientsof social assistance is often not on target. For this reason, it isnecessary to build a system that is able to support in determiningdecisions for the selection of families receiving social assistance.To help the selection process of social assistance recipients, ofcourse, it must use the right and appropriate method so that theselection process produces social assistance recipients who reallydeserve assistance. The selection process in this study uses twodecision support methods, namely Fuzzy Logic and SimpleAdditive Weighting (SAW) and has conducted accuracy tests onboth methods against the suitability of recipient eligibility data,so that it can be seen which method has the highest level ofaccuracy in the selection of social assistance recipients. Theresults of the accuracy test carried out in this study are that bothmethods produce the same high level of accuracy in thesuitability of prospective recipient eligibility results, namely100%, this means that both methods can be used in determiningrecipients of social assistanc.","PeriodicalId":503683,"journal":{"name":"Jurnal Informatika: Jurnal Pengembangan IT","volume":"34 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-09-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139339393","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The science of artificial intelligence and computer vision is beneficial in facilitating disease detection in the medical field. Computer-based disease detection can save time. However, identifying and detecting tumors in MRI images requires care and is time-consuming. Because structures vary in size, shape, and image intensity, accuracy is needed to distinguish the original organ structure from the diseased one. Previous studies have proposed methods for identifying brain tumors with good precision, and neural-network-based methods in particular have shown good accuracy. We present five Convolutional Neural Network (CNN) architectures for identifying brain tumors (glioma, meningioma, no tumor, and pituitary) in MRI images. This study aims to develop an optimal CNN architecture for identifying tumors. We use a dataset from Kaggle with 5,712 training images and 1,311 testing images. Of the five proposed CNN architectures, architecture c has the highest accuracy, 82.2%, with 29,605,060 parameters. A good CNN architecture has many convolution layers. We also compare the proposed architecture with CNN transfer learning (Inception, ResNet-50, and VGG16); the transfer-learning architectures achieve higher accuracy than our proposed architecture.
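A minimal Keras sketch of a multi-layer CNN for the four MRI classes is shown below; the layer sizes and the 150x150 grayscale input are assumptions and do not reproduce the paper's five architectures, but model.summary() illustrates how the parameter counts being compared are obtained.

```python
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(150, 150, 1)),          # assumed grayscale MRI input size
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(128, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(4, activation="softmax"),       # glioma, meningioma, no tumor, pituitary
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.summary()  # prints the total parameter count used to compare architectures
```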
{"title":"Identifikasi Tumor Otak Citra MRI dengan Convolutional Neural Network","authors":"Nur Nafiiyah","doi":"10.30591/jpit.v8i3.4985","DOIUrl":"https://doi.org/10.30591/jpit.v8i3.4985","url":null,"abstract":"The science of artificial intelligence and computer vision is beneficial in facilitating the detection of diseases in the medical field. Computer-based disease detection can save time. However, identifying and detecting tumors on MRI images require seriousness and is time-consuming. Due to the diversity of structures in size, shape, and intensity of the image, accuracy is needed in identifying the original organ structure and the diseased one. Previous studies have proposed a method for identifying brain tumors to produce the correct precision. In previous studies, neural network-based methods have good accuracy. We present five Convolutional Neural Network (CNN) architectures for identifying brain tumors (glioma, meningioma, no tumor, and pituitary) on MRI images. This study aims to develop an optimal CNN architecture for identifying tumors. We use the dataset from Kaggle with a total training data of 5712 and testing of 1311. Of the five proposed CNN architectures, architecture c has the highest accuracy of 82.2% with an unlimited number of parameters of 29605060. A good CNN architecture has many convolution layers. We also compare the proposed architecture with CNN transfer learning (Inception, ResNet-50, and VGG16), and with CNN transfer learning architecture, the accuracy is higher than our proposed architecture.","PeriodicalId":503683,"journal":{"name":"Jurnal Informatika: Jurnal Pengembangan IT","volume":"12 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-09-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139339348","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Administrative services in villages often still use manual methods, requiring residents to come directly to the village office. In addition, manual services make it difficult for village office staff to determine the processing order of letters that must be verified first; as a result, many letters that should already be finished are still being processed. Therefore, this research focuses on designing an information system that helps residents submit letter requests and helps village office staff determine which letters should be verified first by implementing a priority scheduling algorithm. The methods used include designing the priority scheduling algorithm implemented in the system and designing the software with the waterfall SDLC method. The priority scheduling design consists of determining the priority order and writing the pseudocode of the algorithm. The result of this research is a population administration service information system that implements the priority scheduling algorithm in the letter-ordering process. Based on black-box testing, the application runs without errors, with a 0% failure rate, and works according to its functions, while SUS testing shows that the application is at the good level with a score of 75.25.
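A minimal sketch of non-preemptive priority scheduling for such a letter queue is shown below, using a heap so that requests with a smaller priority number are verified first and ties are broken by arrival order; the letter types and priority values are illustrative assumptions, not the system's actual rules.

```python
import heapq
import itertools

counter = itertools.count()   # tie-breaker that preserves arrival order
queue = []

def submit(letter_type, priority):
    """Add a letter request; smaller priority numbers are handled first."""
    heapq.heappush(queue, (priority, next(counter), letter_type))

def next_to_verify():
    """Pop the highest-priority (then earliest-arriving) request."""
    priority, _, letter_type = heapq.heappop(queue)
    return letter_type

# Hypothetical letter types and priorities for illustration only
submit("surat keterangan usaha", priority=2)
submit("surat keterangan kematian", priority=1)
submit("surat pengantar KTP", priority=3)

while queue:
    print(next_to_verify())
```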
{"title":"Implementasi Algoritma Priority Scheduling Sistem Informasi Pelayanan Administrasi Kependudukan Desa","authors":"Alifah Alfiatur Rohmah, Dedi Gunawan","doi":"10.30591/jpit.v8i3.4891","DOIUrl":"https://doi.org/10.30591/jpit.v8i3.4891","url":null,"abstract":"Pelayanan administrasi di desa seringkali masih menerapkan metode manual dalam pelaksanaanya yang mengharuskan masyarakat datang secara langsung ke kantor desa. Selain itu, pelayanan yang masih manual juga menyulitkan petugas kantor desa dalam menentukan urutan pemrosesan surat yang harus diverifikasi terlebih dahulu, akibatnya banyak surat yang seharusnya sudah diselesaikan namun prosesnya masih berjalan. Oleh sebab itu, tujuan dilakukan penelitian ini berfokus pada perancangan sistem informasi yang dapat membantu masyarakat dalam pengajuan pembuatan surat serta sistem yang dapat membantu petugas kantor desa dalam menentukan prioritas surat yang harus diverifikasi terlebih dahulu dengan mengimplementasikan algoritma priority scheduling. Dalam penelitian ini, metode yang digunakan melingkupi perancangan algoritma priority scheduling yang diimplementasikan ke dalam sistem serta perancangan perangkat lunak menggunakan metode SDLC waterfall. Perancangan algoritma priority scheduling berupa penentuan urutan prioritas serta pembuatan pseudocode dari algoritma. Hasil dari penelitian ini adalah sebuah sistem informasi pelayanan administrasi kependudukan yang mengimplementasikan algoritma priority scheduling dalam proses pengurutan surat. Berdasarkan hasil pengujian menggunakan metode blackbox aplikasi berjalan tanpa ada error, 0% kegagalan dan berjalan sesuai fungsinya. Sedangkan pengujian SUS menunjukkan bahwa aplikasi berada pada level good dengan skor 75,25.","PeriodicalId":503683,"journal":{"name":"Jurnal Informatika: Jurnal Pengembangan IT","volume":"34 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-09-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139339320","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Anisa Nur Syafia, M. Hidayattullah, Wirmanto Suteddy
Sentiment analysis of YouTube comments about the boy group BTS uses an NLP approach to detect emotional patterns based on two category labels, positive and negative. With NLP, the positive or negative polarity of an entity can be assigned, and the high or low performance of various sentiment classifiers can be predicted. The machine learning algorithms used to measure the accuracy of the developed sentiment analysis are the Support Vector Machine and Random Forest algorithms. The steps start with data collection from the BTS YouTube comment dataset, followed by data preprocessing, feature extraction that converts text into numerical vectors with Bag of Words (BOW), classification using the machine learning algorithms, and finally evaluation. Comparing the evaluated algorithms, the SVM algorithm reaches 96% accuracy on training data and 85% on testing data, while the Random Forest algorithm reaches 82% on training data and 80% on testing data. This shows that the SVM algorithm produces higher accuracy than Random Forest for sentiment analysis of YouTube comments about BTS.
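The Bag-of-Words comparison described above could be sketched as follows with scikit-learn; the example comments, labels, and split ratio are placeholders, not the actual YouTube dataset.

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Placeholder comments and labels standing in for the BTS YouTube comment dataset
comments = ["love this performance", "great vocals", "so proud of them",
            "this song is boring", "worst stage ever", "not my style"]
labels = ["positive", "positive", "positive", "negative", "negative", "negative"]

X = CountVectorizer().fit_transform(comments)   # Bag-of-Words vectors
X_train, X_test, y_train, y_test = train_test_split(
    X, labels, test_size=0.33, random_state=0, stratify=labels)

for name, clf in [("SVM", SVC(kernel="linear")),
                  ("Random Forest", RandomForestClassifier(random_state=0))]:
    clf.fit(X_train, y_train)
    print(name, "train:", clf.score(X_train, y_train), "test:", clf.score(X_test, y_test))
```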
{"title":"Studi Komparasi Algoritma SVM Dan Random Forest Pada Analisis Sentimen Komentar Youtube BTS","authors":"Anisa Nur Syafia, M. Hidayattullah, Wirmanto Suteddy","doi":"10.30591/jpit.v8i3.5064","DOIUrl":"https://doi.org/10.30591/jpit.v8i3.5064","url":null,"abstract":"Sentiment analysis of YouTube boy group BTS comments uses the NLP approach to detect emotional patterns based on two category labels, namely positive and negative. With NLP, positive or negative polarity in an entity can be allocated as well as predicted high and low performance from various classification sentiments. The machine learning algorithms used to measure the accuracy of sentiment analysis developed are the Support Vector Machine and Random Forest algorithms. The steps taken start from the data collection obtained from the BTS YouTube Comment dataset and then go through the data preprocessing stage. Then proceed to the feature extraction stage by converting text into digital vectors or Bag of Words (BOW) and classified using machine learning algorithms until the evaluation stage. From the results comparison of the evaluated algorithms, the accuracy value between the two algorithms is 96% for training data and 85% for data testing using the SVM algorithm, while for the Random Forest algorithm it is 82% for training data and 80% for data testing. This shows that the SVM algorithm produces a higher accuracy value than the Random Forest for sentiment analysis of YouTube boy group BTS comments.","PeriodicalId":503683,"journal":{"name":"Jurnal Informatika: Jurnal Pengembangan IT","volume":"24 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-09-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139339346","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Low contrast can reduce image quality and make proper image analysis difficult. One technique to improve image quality is to increase the lighting contrast. A method that is often used is histogram normalization, which can increase image contrast by balancing the distribution of pixels across a range of pixel values. The purpose of this research is to apply the histogram normalization method to images and compare the results before and after the normalization process. The images used in this study are self-made images and images from public databases. The results show that histogram normalization can increase image contrast and improve low image quality caused by inadequate lighting. Thus, histogram normalization can be used as a technique to improve image quality in various applications, including medical image processing, satellite image processing, and security surveillance.
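A small NumPy sketch of min-max histogram normalization (contrast stretching) is given below: pixel values are remapped so they span the full 0-255 range. The input file name is an assumption; any low-contrast grayscale CCTV frame would serve.

```python
import numpy as np
from PIL import Image

# Load a (hypothetical) low-contrast CCTV frame as grayscale
img = np.asarray(Image.open("cctv_frame.png").convert("L"), dtype=np.float32)

# Stretch the observed intensity range [lo, hi] to the full [0, 255] range
lo, hi = img.min(), img.max()
normalized = (img - lo) / max(hi - lo, 1e-6) * 255.0

Image.fromarray(normalized.astype(np.uint8)).save("cctv_frame_normalized.png")
```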
{"title":"Penerapan Normalisasi Histogram untuk Peningkatan Kontras Pencahayaan pada Pengamatan Visual CCTV","authors":"Saluky Saluky, Yoni Marine, Nurul Bahiyah","doi":"10.30591/jpit.v8i3.4929","DOIUrl":"https://doi.org/10.30591/jpit.v8i3.4929","url":null,"abstract":"Low Contrast can cause low image quality and make it difficult for proper image analysis. One technique to improve image quality is to increase the lighting contrast. One method that is often used is histogram normalization, which can increase image contrast by balancing the distribution of pixels across a range of pixel values. The purpose of this research is to apply the histogram normalization method to images and compare the results before and after the normalization process. The images used in this study are self-made images and images from public databases. The results of the study show that normalized histograms can increase image contrast and improve low image quality due to inadequate lighting. Thus, histogram normalization can be used as a technique to improve image quality in various applications, including medical image processing, satellite image processing, and security surveillance.","PeriodicalId":503683,"journal":{"name":"Jurnal Informatika: Jurnal Pengembangan IT","volume":"35 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-09-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139339401","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}