Analisis Sentimen Tweet Tentang UU Cipta Kerja Menggunakan Algoritma SVM Berbasis PSO
Pub Date: 2022-01-25 | DOI: 10.14421/jiska.2022.7.1.10-19
Trifebi Shina Sabrila, Yufis Azhar, C. Aditya
Support Vector Machine (SVM) is one of the most widely used classification algorithms for sentiment analysis and has been shown to provide satisfactory performance. However, despite its advantages, the SVM algorithm still has a weakness in selecting the right parameters to optimize its performance. In this study, sentiment analysis was carried out on tweets about Undang-Undang Cipta Kerja (the Job Creation Law), which has drawn many pros and cons from the Indonesian public, especially laborers. The classification method used is the Support Vector Machine algorithm optimized with the Particle Swarm Optimization (PSO) method for SVM parameter selection, in the hope of improving the performance of the SVM algorithm in sentiment analysis. Using 10-fold cross-validation, the SVM algorithm alone achieved an accuracy of 92.99%, a precision of 93.24%, and a recall of 93%. Meanwhile, the combined SVM and PSO algorithms produced an accuracy of 95%, a precision of 95.08%, and a recall of 94.97%. The results show that the Particle Swarm Optimization method can overcome the weakness of the Support Vector Machine algorithm in parameter selection and succeeded in improving performance, with SVM-PSO outperforming SVM without optimization in sentiment analysis.
{"title":"Analisis Sentimen Tweet Tentang UU Cipta Kerja Menggunakan Algoritma SVM Berbasis PSO","authors":"Trifebi Shina Sabrila, Yufis Azhar, C. Aditya","doi":"10.14421/jiska.2022.7.1.10-19","DOIUrl":"https://doi.org/10.14421/jiska.2022.7.1.10-19","url":null,"abstract":"Support Vector Machine (SVM) is one of the most widely used classification algorithms for sentiment analysis and has been shown to provide satisfactory performance. However, despite its advantages, the SVM algorithm still has weaknesses in selecting the right SVM parameters to optimize the performance. In this study, sentiment analysis was done with the use of data called tweets about Undang-Undang Cipta Kerja which reap many pros and cons by the people in Indonesia, especially the laborers. The classification method used in this study is the Support Vector Machine algorithm which is optimized using the Particle Swarm Optimization method for the SVM parameters selection in the hope of optimizing the performance generated by the SVM algorithm in sentiment analysis. The results of the study using 10 k-fold cross-validations using the SVM algorithm resulted in an accuracy of 92,99%, a precision of 93,24%, and a recall of 93%. Meanwhile, the SVM and PSO algorithms produce an accuracy of 95%, precision of 95,08%, and recall of 94,97%. The results show that the Particle Swarm Optimization method can overcome the weaknesses of the Support Vector Machine algorithm in the problem of parameter selection and has succeeded in improving the resulting performance where the SVM-PSO is more superior to SVM without optimization in sentiment analysis.","PeriodicalId":34216,"journal":{"name":"JISKA Jurnal Informatika Sunan Kalijaga","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2022-01-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46437623","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Analisis Perbandingan Kinerja TCP Vegas dan TCP New Reno Menggunakan Antrian Drop Tail
Pub Date: 2022-01-25 | DOI: 10.14421/jiska.2022.7.1.20-32
Dony Fahrudy, Bambang Sugiantoro
TCP was developed to deal with problems that often occur in networks, such as congestion. Congestion can occur when the number of packets transmitted approaches the network capacity, which can cause network problems. This can be overcome by implementing TCP congestion control and queue management. In this research, we test the performance of TCP New Reno and TCP Vegas using NS-2 with a Drop Tail queue. The performance parameters used are throughput, packet drop, and congestion window, under increasing buffer capacity. For the congestion window and packet drop parameters, TCP Vegas performs better as the buffer gets bigger: when congestion occurs its congestion window is smaller than TCP New Reno's, and its average packet drop is 18.33 packets compared to an average of 41.67 packets for TCP New Reno. For the throughput parameter, TCP New Reno performs better, with an average of 6.77253 Mbps against 4.29693 Mbps for TCP Vegas. Testing and analysis show that TCP Vegas has better performance than TCP New Reno when using Drop Tail queues.
{"title":"Analisis Perbandingan Kinerja TCP Vegas dan TCP New Reno Menggunakan Antrian Drop Tail","authors":"Dony Fahrudy, Bambang Sugiantoro","doi":"10.14421/jiska.2022.7.1.20-32","DOIUrl":"https://doi.org/10.14421/jiska.2022.7.1.20-32","url":null,"abstract":"TCP was developed to deal with problems that often occur in the network, such as congestion problems. Congestion can occur when the number of packets transmitted in the network approaches the network capacity which can cause network problems. This can be overcome by implementing TCP and queue management. In this research, we will test the performance of TCP Newreno and TCP Vegas using NS-2 in the Drop Tail queue. The performance parameters used are throughput, packet drop, and congestion window with additional buffer capacity. The test results for the congestion window and packet drop parameters, TCP Vegas has better performance when the buffer gets bigger when congestion occurs with the congestion window smaller than TCP New Reno and the average packet drop is 18.33 packets compared to TCP New Reno with an average of 18.33 packets. average 41.67 packets. For throughput parameters, TCP New Reno has better performance with an average of 6.77253 Mbps than TCP Vegas with an average of 4.29693 Mbps. From testing and analysis that TCP Vegas has better performance than TCP New Reno when using Drop Tail queues.","PeriodicalId":34216,"journal":{"name":"JISKA Jurnal Informatika Sunan Kalijaga","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2022-01-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"44361267","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Peramalan Pelayanan Service Mobil (After-Sale) Menggunakan Backpropagation Neural Network (BPNN)
Pub Date: 2021-09-22 | DOI: 10.14421/jiska.2021.6.3.149-160
Novianti Puspitasari, Haviluddin, Arinda Mulawardani Kustiawan, H. Setyadi, Gubtha Mahendra Putra
The automotive industry in Indonesia, primarily cars, is becoming more and more varied. Along with the increasing number of vehicles, Brand Holder Sole Agents (ATPM) compete to provide after-sales (mobile) services. However, the company has difficulty tracking the growth in the number of mobile services handled, causing losses that affect its sources of income. Therefore, a sound method is needed for forecasting the number of car services in the following year. This study implements the Backpropagation Neural Network (BPNN) method to forecast after-sales car services and uses the Mean Square Error (MSE) to test the accuracy of the resulting forecasts. The data used are after-sales car service records for the last five years. The results show that the best architecture for forecasting after-sales services using BPNN is the 5-10-5-1 model with a learning rate of 0.2, the trainlm learning function, and an MSE of 0.00045581. This proves that the BPNN method can predict mobile (after-sales) services with good forecasting accuracy.
{"title":"Peramalan Pelayanan Service Mobil (After-Sale) Menggunakan Backpropagation Neural Network (BPNN)","authors":"Novianti Puspitasari, Haviluddin, Arinda Mulawardani Kustiawan, H. Setyadi, Gubtha Mahendra Putra","doi":"10.14421/jiska.2021.6.3.149-160","DOIUrl":"https://doi.org/10.14421/jiska.2021.6.3.149-160","url":null,"abstract":"The automotive industry in Indonesia, primarily cars, is getting more and more varied. Along with increasing the number of vehicles, Brand Holder Sole Agents (ATPM) compete to provide after-sale services (mobile service). However, the company has difficulty knowing the rate of growth in the number of mobile services handled, thus causing losses that impact sources of income. Therefore, we need a standard method in determining the forecasting of the number of car services in the following year. This study implements the Backpropagation Neural Network (BPNN) method in forecasting car service services (after-sale) and Mean Square Error (MSE) for the process of testing the accuracy of the forecasting results formed. The data used in this study is car service data (after-sale) for the last five years. The results show that the best architecture for forecasting after-sales services using BPNN is the 5-10-5-1 architectural model with a learning rate of 0.2 and the learning function of trainlm and MSE of 0.00045581. This proves that the BPNN method can predict mobile service (after-sale) services with good forecasting accuracy values.","PeriodicalId":34216,"journal":{"name":"JISKA Jurnal Informatika Sunan Kalijaga","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2021-09-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"47941559","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Optimasi Keamanan Web Server terhadap Serangan Broken Authentication Menggunakan Teknologi Blockchain
Pub Date: 2021-09-22 | DOI: 10.14421/jiska.2021.6.3.139-148
Imam Riadi, Herman, Aulyah Zakilah Ifani
The aspect of the internet whose security especially needs to be considered is the login system. A login system usually uses a username and password as the authentication method because it is easy to implement. However, data in the form of usernames and passwords are very vulnerable to theft, so the security of the login system needs to be increased. The purpose of this research is to investigate the security of the system: whether the system is good at protecting user data, while minimizing execution errors and risk errors in the system so that the login system can be used safely. This research tests system security with Burp Suite on a login system that has been built. Testing by experimenting with POST data secured using blockchain technology makes the data, sent in the form of hash blocks, safer and more confidential, so that the system is safer than before. Blockchain technology successfully secured usernames and passwords against broken authentication attacks. Using Burp Suite made the security testing of the login system more targeted.
{"title":"Optimasi Keamanan Web Server terhadap Serangan Broken Authentication Menggunakan Teknologi Blockchain","authors":"Imam Riadi, Herman, Aulyah Zakilah Ifani","doi":"10.14421/jiska.2021.6.3.139-148","DOIUrl":"https://doi.org/10.14421/jiska.2021.6.3.139-148","url":null,"abstract":"The aspect of the internet that needs to be considered a security is the login system. The login system usually uses a username and password as an authentication method because it is easy to implement. However, data in the form of usernames and passwords are very vulnerable to theft, so it is necessary to increase the security of the login system. The purpose of this research is to investigate the security of the system. Whether the system is good at protecting user data or not, minimizing execution errors from the system and minimizing risk errors on the system so that the login system can be used safely. This research is conducted to test the system security with Burp Suite on the login system that has been built. Testing the security of this system by experimenting with POST data which is secured using blockchain technology makes the data sent in the form of hash blocks safer and more confidential so that the system is safer than before. Blockchain technology has successfully secured usernames and passwords from broken authentication attacks. By using the Burp Suite testing system, login is more specific in conducting security testing.","PeriodicalId":34216,"journal":{"name":"JISKA Jurnal Informatika Sunan Kalijaga","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2021-09-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49042751","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Analisis Perbandingan Algoritma Decision Tree, kNN, dan Naive Bayes untuk Prediksi Kesuksesan Start-up
Pub Date: 2021-09-22 | DOI: 10.14421/jiska.2021.6.3.178-188
Adhitya Prayoga Permana, Kurniyatul Ainiyah, Khadijah Fahmi Hayati Holle
Start-ups have a very important role in economic growth, since a start-up can open up many new jobs. However, not all developing start-ups become successful. Start-ups have a high failure rate: data show that 75% of start-ups fail during their development. It is therefore important to classify successful and failed start-ups, so that the classification can later be used to identify the factors that most influence start-up success and to predict the success of a start-up. Among the many classification methods in data mining, the authors chose the Decision Tree, kNN, and Naïve Bayes algorithms to classify the 923 start-up records obtained beforehand. The test results using cross-validation and a T-test show that the Decision Tree algorithm is the most appropriate for classification in this case study. This is evidenced by its accuracy of 79.29%, greater than that of the other algorithms: the kNN algorithm reached an accuracy of 66.69% and Naive Bayes 64.21%.
{"title":"Analisis Perbandingan Algoritma Decision Tree, kNN, dan Naive Bayes untuk Prediksi Kesuksesan Start-up","authors":"Adhitya Prayoga Permana, Kurniyatul Ainiyah, Khadijah Fahmi Hayati Holle","doi":"10.14421/jiska.2021.6.3.178-188","DOIUrl":"https://doi.org/10.14421/jiska.2021.6.3.178-188","url":null,"abstract":"Start-ups have a very important role in economic growth, the existence of a start-up can open up many new jobs. However, not all start-ups that are developing can become successful start-ups. This is because start-ups have a high failure rate, data shows that 75% of start-ups fail in their development. Therefore, it is important to classify the successful and failed start-ups, so that later it can be used to see the factors that most influence start-up success, and can also predict the success of a start-up. Among the many classifications in data mining, the Decision Tree, kNN, and Naïve Bayes algorithms are the algorithms that the authors chose to classify the 923 start-up data records that were previously obtained. The test results using cross-validation and T-test show that the Decision Tree Algorithm is the most appropriate algorithm for classifying in this case study. This is evidenced by the accuracy value obtained from the Decision Tree algorithm, which is greater than other algorithms, which is 79.29%, while the kNN algorithm has an accuracy value of 66.69%, and Naive Bayes is 64.21%.","PeriodicalId":34216,"journal":{"name":"JISKA Jurnal Informatika Sunan Kalijaga","volume":"1 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2021-09-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41690162","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Implementasi Deep Learning untuk Entity Matching pada Dataset Obat (Studi Kasus K24 dan Farmaku)
Pub Date: 2021-09-22 | DOI: 10.14421/jiska.2021.6.3.130-138
Rivanda Putra Pratama, Rahmat Hidayat, Nisrina Fadhilah Fano, Adam Akbar, Nur Aini Rakhmawati
Data processing speed is important for companies to speed up their analyses. Entity matching is a computational process that companies can perform in data processing: it determines whether two different records refer to the same entity. Entity matching problems arise when the datasets being compared are large. The deep learning concept is one solution to entity matching problems, and DeepMatcher is a Python package based on a deep learning model architecture that can solve them. The purpose of this study was to match records between two datasets by applying DeepMatcher to drug data from farmaku.com and k24klik.com. The comparison model used is the Hybrid model. Based on the test results, the Hybrid model produces accurate figures, so the entity matching used in this study runs well. The best result was obtained in the 10th training run, with an F1 value of 30.30, a precision of 17.86, and a recall of 100.
{"title":"Implementasi Deep Learning untuk Entity Matching pada Dataset Obat (Studi Kasus K24 dan Farmaku)","authors":"Rivanda Putra Pratama, Rahmat Hidayat, Nisrina Fadhilah Fano, Adam Akbar, Nur Aini Rakhmawati","doi":"10.14421/jiska.2021.6.3.130-138","DOIUrl":"https://doi.org/10.14421/jiska.2021.6.3.130-138","url":null,"abstract":"Data processing speed in companies is important to speed up their analysis. Entity matching is a computational process that companies can perform in data processing. In conducting data processing, entity matching plays a role in determining two different data but referring to the same entity. Entity matching problems arise when the dataset used in the comparison is large. The deep learning concept is one of the solutions in dealing with entity matching problems. DeepMatcher is a python package based on a deep learning model architecture that can solve entity matching problems. The purpose of this study was to determine the matching between the two datasets with the application of DeepMatcher in entity matching using drug data from farmaku.com and k24klik.com. The comparison model used is the Hybrid model. Based on the test results, the Hybrid model produces accurate numbers, so that the entity matching used in this study runs well. The best accuracy value of the 10th training with an F1 value of 30.30, a precision value of 17.86, and a recall value of 100.","PeriodicalId":34216,"journal":{"name":"JISKA Jurnal Informatika Sunan Kalijaga","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2021-09-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"48156669","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Perbandingan Faktor-Faktor Yang Mempengaruhi Penggunaan Electronic-Know Your Customer (e-Kyc)
Pub Date: 2021-09-22 | DOI: 10.14421/jiska.2021.6.3.189-200
Mr. Fitree Tahe, Maria Ulfah Siregar
There have been many technological developments in banks, one of which is online transactions. To make these transactions, an account must be opened using the electronic know-your-customer (e-KYC) verification system at banks. This research examines the differences in the factors that influence behavioral intentions to use e-KYC to open a bank account at SCB (The Siam Commercial Bank) in Thailand and at Bank Mandiri in Indonesia. This is quantitative research using a survey: we prepared a questionnaire for 160 respondents, 80 for Bank Mandiri and 80 for SCB. The results indicate that the willingness to use electronic identity verification services is influenced by the availability of technology, external network effects, safety awareness, perception of trust, and perception of security. The perception of security affects the perception of trust, while technical protection and the transaction procedure do not affect the perception of trust.
{"title":"Perbandingan Faktor-Faktor Yang Mempengaruhi Penggunaan Electronic-Know Your Customer (e-Kyc)","authors":"Mr. Fitree Tahe, Maria Ulfah Siregar","doi":"10.14421/jiska.2021.6.3.189-200","DOIUrl":"https://doi.org/10.14421/jiska.2021.6.3.189-200","url":null,"abstract":"There are many technological developments in banks, one of which is online transactions. To get these transactions, an account should be opened using the electronic know you customer (e-KYC) verification system at banks. This research wants to know the differences in the factors that influence behavioral intentions to use e-KYC to open a bank account for SCB (The Siam Commercial Bank) Thailand and Bank Mandiri Indonesia. This is quantitative research using a survey. We have prepared a questionnaire of 160 respondents: 80 for Bank Mandiri and 80 for SCB. The results indicate that the willingness to use electronic identity verification services influence by the availability of technology, the external impact of the network, safety awareness, perception of trust, and perception of security. The perception of security affects the perception of trust, and technical protection, also the transaction procedure does not affect the perception of trust.","PeriodicalId":34216,"journal":{"name":"JISKA Jurnal Informatika Sunan Kalijaga","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2021-09-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"48353202","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Implementasi Algoritma RC4 pada Sistem Pengamanan Dokumen Digital Soal Ujian
Pub Date: 2021-09-22 | DOI: 10.14421/jiska.2021.6.3.171-177
Fauziyah Suwarsita Febriyani, Arief Arfriandi
The development of science and technology has turned documents used in everyday life into digital data. However, this can cause problems with data security and confidentiality. Security and confidentiality can be increased with the RC4 cryptographic algorithm. The research method used is the Waterfall method. The result of this research is a website that can secure document files with the *.doc extension using the RC4 algorithm. Testing was carried out using black-box testing and the CrackStation test for the encryption. The test results show that the website runs well and successfully implements the RC4 algorithm.
{"title":"Implementasi Algoritma RC4 pada Sistem Pengamanan Dokumen Digital Soal Ujian","authors":"Fauziyah Suwarsita Febriyani, Arief Arfriandi","doi":"10.14421/jiska.2021.6.3.171-177","DOIUrl":"https://doi.org/10.14421/jiska.2021.6.3.171-177","url":null,"abstract":"The development of science and technology has led to changes in the use of documents in life to become digital data. However, this can cause problems, namely regarding data security and confidentiality. To increase security and confidentiality can be done with cryptographic algorithm RC4. The research method uses the Waterfall method. The result of this research is a website that can secure document files with * doc extension using the RC4 algorithm. The test was carried out using the blackbox test and the CrackStation test for encryption testing. The results of the test show that the website can run well and successfully implements the RC4 algorithm.","PeriodicalId":34216,"journal":{"name":"JISKA Jurnal Informatika Sunan Kalijaga","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2021-09-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"47292288","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Klasifikasi Tingkat Kepuasan Mahasiswa Terhadap Pembelajaran Secara Daring Menggunakan Algoritma Naïve Bayes
Pub Date: 2021-09-22 | DOI: 10.14421/jiska.2021.6.3.161-170
Ami Natuzzuhriyyah, Nisa’atun Nafisah, R. Mayasari
Since the spread of Covid-19 in Indonesia in early March 2020, the activities of educational institutions have been disrupted and can no longer take place as conventional learning. Learning at Singaperbangsa University moved online following regulation from the Ministry of Education and Culture of the Republic of Indonesia. Online learning is affected by factors such as signal quality, the learning atmosphere, and teaching methods, and these factors affect the level of student satisfaction with learning. This study aims to determine the level of student satisfaction with online learning using the Naïve Bayes algorithm with RapidMiner tools. The results show an accuracy rate of 76.92%, a class precision of 100.00%, a class recall of 57.14%, and an AUC value of 0.881, which is close to 1, so the resulting model is good. In other words, the results obtained using the Naïve Bayes algorithm can be used as material for making decisions about the level of online learning satisfaction.
{"title":"Klasifikasi Tingkat Kepuasan Mahasiswa Terhadap Pembelajaran Secara Daring Menggunakan Algoritma Naïve Bayes","authors":"Ami Natuzzuhriyyah, Nisa’atun Nafisah, R. Mayasari","doi":"10.14421/jiska.2021.6.3.161-170","DOIUrl":"https://doi.org/10.14421/jiska.2021.6.3.161-170","url":null,"abstract":"Since the spread of Covid-19 in Indonesia, in early March 2020, the activities of Educational Institutions have not been disrupted. As conventional learning. Learning at Singaperbangsa University began with regulation from the Ministry of Education and Culture of the Republic of Indonesia, from learning that boldly affects concentration, influences concentration, such as signals, learning atmosphere, and teaching methods, so that factors affect the level of student satisfaction in learning. This study aims to determine the level of student satisfaction with learning who dares to use the Bayes naive algorithm using RapidMiner tools with results obtained with an accuracy rate of 76.92%, class precision of 100.00%, class recall 57.14%, and an AUC value of 0.881 or close to, so the resulting model is good. In other words, the results obtained using the Naïve Bayes algorithm can be used as material for making decisions about the level of online learning satisfaction.","PeriodicalId":34216,"journal":{"name":"JISKA Jurnal Informatika Sunan Kalijaga","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2021-09-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"42355622","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Penerapan Algoritma Hill Cipher Dan Least Significant Bit (LSB) Untuk Pengamanan Pesan Pada Citra Digital
Pub Date: 2020-02-21
Desimeri Laoli, Bosker Sinaga, Anita Sindar Sinaga
Nowadays, people exchange information through digital media such as text, audio, video, and images. The development of information and communication technology makes the delivery of information and data more efficient, and these significant developments affect how people exchange information and communicate. Confidential hidden data can take the form of an image, audio, text, or video. The Hill Cipher algorithm uses an m x m matrix as the key for encryption and decryption. One way to recover the original text is to guess the decryption key, so guessing the key, that is, breaking the ciphertext back into plaintext without knowing the key, must be made difficult. In the LSB technique, the least significant bit of each pixel is replaced with a bit of the message to be inserted; after embedding the secret message, each pixel is rebuilt into a whole image that resembles the original image. The Hill Cipher algorithm is used to encrypt the plaintext into a randomized ciphertext. Testing of text messages using the Hill Cipher algorithm was carried out successfully in accordance with the defined steps, producing a ciphertext consisting of randomized letters of the alphabet.
{"title":"Penerapan Algoritma Hill Cipher Dan Least Significant Bit (LSB) Untuk Pengamanan Pesan Pada Citra Digital","authors":"Desimeri Laoli, Bosker Sinaga, Anita Sindar Sinaga","doi":"10.14421/JISKA.2020.%X","DOIUrl":"https://doi.org/10.14421/JISKA.2020.%X","url":null,"abstract":"Nowadays people exchange information in digital media such as text, audio, video and imagery. The development of Information and Communication makes the delivery of information and data more efficient. Current developments in technology which are very significant have an impact on the community in exchanging information and communicating. Confidential hidden data can also be in the form of image, audio, text, or video. The Hill Chiper algorithm uses a matrix of size m x m as a key for encryption and decryption. One way to recover the original text is of course to guess the decryption key, so the process of guessing the decryption key must be difficult. break ciphertext into palintext without knowing which key to use. The LSB part that is converted to the value of the message to be inserted. After affixing a secret message, each pixel is rebuilt into a whole image that resembles the original image media. The Hill Cipher algorithm is used to determine the position of the plaintext encryption into a random ciphertext. 2. Testing text messages using the hill cipher algorithm successfully carried out in accordance with the flow or the steps so as to produce a ciphertext in the form of randomization of the letters of the alphabet. ","PeriodicalId":34216,"journal":{"name":"JISKA Jurnal Informatika Sunan Kalijaga","volume":"1 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2020-02-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"43534925","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}