Pub Date: 2021-10-20 | DOI: 10.1109/ICTS52701.2021.9608010
Tanzina Akter Tani, Tabassum Islam, Sayed Atique Newaz, N. Sultana
In today's internet-based world, social media is one of the most popular platforms through which users can express feelings such as frustration, anger, and happiness, often without regard for moral and social values. Abusive or offensive texts cause social disturbances, crimes, and other unethical deeds, so there is a strong need to identify such texts/posts and remove them from social media. Different researchers have proposed different text detection processes in related work. In our proposed work, three classifiers have been used to detect hateful text: Naïve Bayes (NB), Random Forest (RF), and Support Vector Machine (SVM). Bag of Words (BoW) and TF-IDF feature extraction methods have been used to compare these three classifiers for both unigrams and bigrams. To balance hateful and clean content, the Twitter dataset has been under-sampled. Text preprocessing, which is essential in NLP for producing better and more accurate results, has also been carried out in this work. In our results, Naïve Bayes provided the highest accuracy (89%) using the TF-IDF feature extraction model, whereas Random Forest provided the highest accuracy (88%) using Bag of Words (BoW) in the unigram case. Overall, we obtained much better performance using unigrams than bigrams. Finally, we made a number of principal contributions.
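As a rough illustration of the comparison described in this abstract, the sketch below trains Multinomial Naïve Bayes on TF-IDF features for unigrams and then bigrams. It is a minimal example, not the authors' code: the toy texts, labels, and scikit-learn split are assumptions.

```python
# Minimal sketch of the unigram-vs-bigram TF-IDF comparison with Naive Bayes.
# Toy data and the train/test split are illustrative assumptions.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import MultinomialNB
from sklearn.metrics import accuracy_score

texts = ["you are awful", "have a great day", "I hate you", "lovely weather",
         "stupid idiot", "thanks for the help", "worst person ever", "well done"]
labels = [1, 0, 1, 0, 1, 0, 1, 0]  # 1 = hateful, 0 = clean (balanced, as after under-sampling)

for ngrams in [(1, 1), (2, 2)]:                      # unigrams, then bigrams
    X = TfidfVectorizer(ngram_range=ngrams).fit_transform(texts)
    X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.25, random_state=0)
    clf = MultinomialNB().fit(X_tr, y_tr)
    print(ngrams, accuracy_score(y_te, clf.predict(X_te)))
```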
{"title":"Systematic Analysis of Hateful Text Detection Using Machine Learning Classifiers","authors":"Tanzina Akter Tani, Tabassum Islam, Sayed Atique Newaz, N. Sultana","doi":"10.1109/ICTS52701.2021.9608010","DOIUrl":"https://doi.org/10.1109/ICTS52701.2021.9608010","url":null,"abstract":"In today's internet-based world, social media is one of the most popular platforms through which users can outburst their different types of feelings, emotions, frustration, anger, happiness etc. without having concern about distinguishes between moral and social values. These kinds of abusive or offensive texts cause social disturbances, crimes, and many unethical deeds. So, there is a huge necessity to distinguish these kinds of abusive texts/posts and remove them from social media. Different researchers have distinguished different text detection processes in their related work. In our proposed work, three classifiers have been used: Naïve Bayes (NB), Random Forest (RF), and Support Vector Machine (SVM) for detecting hateful text. Bag of Words (BoW) and TF-IDF feature extraction methods have been used to compare these three classifiers for both unigram and bigrams words. To balance hateful and clean content, the Twitter dataset has been under-sampled. Text preprocessing is essential for NLP to produce better and more accurate results which have been carried out in this work. In our result, Naive Bayes has provided the highest accuracy (89%) using the TF-IDF feature extraction model, whereas Random Forest has provided the most accuracy (88%) using Bag of words (BoW) in the case of unigram word. Overall, we got much better performance using unigram than using bigrams word. Finally, we made a number of principle contributions.","PeriodicalId":6738,"journal":{"name":"2021 13th International Conference on Information & Communication Technology and System (ICTS)","volume":"339 1","pages":"330-335"},"PeriodicalIF":0.0,"publicationDate":"2021-10-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"80733948","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2021-10-20 | DOI: 10.1109/ICTS52701.2021.9608565
Dwi Sunaryono, Annas Nuril Iman, D. Purwitasari, A. B. Raharjo
Before the zoning policy, students or their parents tended to choose a recognized school with high educational quality regardless of its distance. New Student Admissions, or Penerimaan Peserta Didik Baru (PPDB), is a school zoning enrollment system that aims to reduce student travel distance. The online PPDB system requires home location input supplemented with legal documents as a validation mechanism. However, falsified home residences, i.e., enrollment fraud, cannot be identified by the PPDB system. This study examines possible fraud cases from the PPDB enrollment rank data. The rank data forms a graph relationship between registrants and schools. Every record contains a longitude-latitude point, which is the main factor for acceptance under the PPDB policy. The process analyzes the connection between the distance gap distribution derived from the rank data and concurrent fraud cases. Because the distance gap distribution has missing values at several gap points, Kernel Density Estimation (KDE) is used to estimate those unknown values, yielding an estimated distance gap distribution. The distance gap distribution is affected by the residence locations plotted on a geographic map: an uncommon registrant location creates a fluctuation in the distribution. When the observed gap distribution value exceeds the estimated distance gap distribution, the case is flagged as enrollment fraud. Fraud detection is handled with a graph algorithm that traverses the graph data and retrieves the ranked registrants of a school. The data are grouped every two meters, and each group's count is checked against the estimated distance gap distribution. The graph algorithm was built on top of the PPDB system and tested with several manipulated residence locations. It detected the manipulated data and ran fast, taking less than one second.
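The KDE step the abstract describes can be sketched as follows: fit a density over observed registrant distances, group into 2 m bins, and flag bins whose counts exceed the estimate. The toy distances and the flagging threshold are assumptions, not the paper's exact procedure.

```python
# Sketch of the KDE step over the distance gap distribution.
import numpy as np
from scipy.stats import gaussian_kde

distances = np.array([120.0, 125.0, 340.0, 360.0, 900.0, 910.0, 915.0, 5000.0])
kde = gaussian_kde(distances)                        # smooths over missing gap points

bins = np.arange(0.0, distances.max() + 2.0, 2.0)    # group registrants every 2 m
counts, edges = np.histogram(distances, bins=bins)
expected = kde(edges[:-1] + 1.0) * len(distances) * 2.0  # density * n * bin width
flagged = edges[:-1][counts > expected + 1.0]            # candidate fraud groups (assumed margin)
print(flagged)
```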
{"title":"Graph Algorithm for Anomaly Prediction in East Java Student Admission System","authors":"Dwi Sunaryono, Annas Nuril Iman, D. Purwitasari, A. B. Raharjo","doi":"10.1109/ICTS52701.2021.9608565","DOIUrl":"https://doi.org/10.1109/ICTS52701.2021.9608565","url":null,"abstract":"Before the zoning policy, students or their parents tend to choose a recognized school with high educational quality despite its distance. New Student Admissions or Penerimaan Peserta Didik Baru (PPDB) is a school zoning enrollment system that aims to reduce the student travel distance. The online-based PPDB system requires home location input supplemented with legal documents as validation mechanism. However, falsifying home residence or enrollment fraud could not be identified by the PPDB system. This study examines the possible fraud cases from the PPDB enrollment ranks data. The ranks data forms a graph relationship between the registrant and the school. Every data contains a longitude-latitude point, and it is the main factor for accepting based on PPDB policy. The process is trying to analyze the connection between distance gap distribution derived from the ranks data, with the concurrent fraud cases. Because the distance gap distribution still has a missing value on several gap points, it is useful to use KDE (Kernel Density Estimation) to estimate those unknown values. KDE will result in estimated distance gap distribution. The distance gap distribution is affected by the residence location that is plotted on a geo map. When there's an uncommon location of some registrant it will create fluctuation on the distance gap distribution. The gap distribution value exceeds the estimated distance gap distribution from this situation and will be detected as an enrollment fraud. The process to detect enrollment fraud is handled with a graph algorithm. The graph algorithm traverses the graph data and gets ranked registrant from a school. The data are grouped every two meters and check whether its count does not exceed the estimated distance gap distribution. The graph algorithm builds over the PPDB system and tests several manipulated residence locations. It could detect those manipulated data and has a fast process since it only took less than one second.","PeriodicalId":6738,"journal":{"name":"2021 13th International Conference on Information & Communication Technology and System (ICTS)","volume":"104 1","pages":"252-257"},"PeriodicalIF":0.0,"publicationDate":"2021-10-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"72817666","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2021-10-20 | DOI: 10.1109/ICTS52701.2021.9608370
M. Sahal, Zaidan Adenin Said, Rusdhianto Effendi Abdul Kadir, Z. Hidayat, Y. Bilfaqih, Abdullah Alkaff
Radar has various functions, one of which is tracking the position of airborne targets. Target position tracking faces an obstacle, namely the uncertainty of data association. To overcome this uncertainty, the concept of data association can be applied; one algorithm built on this concept is the Probabilistic Data Association Filter (PDAF). A single-target airborne position tracking system is tested on radar using the PDAF together with our proposed track maintenance algorithm. The test data comes from simulating two motions on the SPx-Radar-Simulator. False alarms originating from interference (clutter) are generated in the environment around the simulated target. The test results show that the designed PDAF-based target tracking system can track targets well, maintain tracking in a cluttered environment, and maintain track in multitarget environments. The error between the original data and the prediction on the validated target is relatively small, although there is a relatively significant error in the altitude data when the motion has varying altitude.
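The core of the PDAF is the association step: each validated return is weighted against the hypothesis that none of the returns originated from the target. Below is a minimal sketch of that step in its standard simplified parametric form; P_D, P_G, the clutter density, and the toy measurements are illustrative assumptions, not values from the paper.

```python
# Simplified parametric PDAF association weights.
import numpy as np

def pdaf_weights(z_pred, S, measurements, P_D=0.9, P_G=0.99, clutter=1e-4):
    """Return (beta_0, beta_i): probability that no return is target-originated,
    and the association probability of each validated return."""
    S_inv = np.linalg.inv(S)
    norm = 1.0 / np.sqrt((2 * np.pi) ** len(z_pred) * np.linalg.det(S))
    lik = np.array([norm * np.exp(-0.5 * (z - z_pred) @ S_inv @ (z - z_pred))
                    for z in measurements])       # Gaussian innovation likelihoods
    L = P_D * lik / clutter                       # likelihood ratios vs. clutter
    denom = (1.0 - P_D * P_G) + L.sum()
    return (1.0 - P_D * P_G) / denom, L / denom

z_pred = np.array([100.0, 200.0])                 # predicted measurement
S = np.diag([25.0, 25.0])                         # innovation covariance
zs = [np.array([103.0, 198.0]), np.array([90.0, 210.0])]  # two validated returns
beta0, betas = pdaf_weights(z_pred, S, zs)
print(beta0, betas)                               # weights sum to 1 across hypotheses
```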
{"title":"Tracking Position of Airborne Target on SPx-Radar-Simulator Using Probabilistic Data Association Filter","authors":"M. Sahal, Zaidan Adenin Said, Rusdhianto Effendi Abdul Kadir, Z. Hidayat, Y. Bilfaqih, Abdullah Alkaff","doi":"10.1109/ICTS52701.2021.9608370","DOIUrl":"https://doi.org/10.1109/ICTS52701.2021.9608370","url":null,"abstract":"Radar has various functions, one of which is tracking the position of targets in the air. In tracking the target position, there are obstacles, namely the uncertainty of data associations. To overcome the uncertainty of data associations, the concept of data association can be used where one of the algorithms that use this concept is the Probabilistic Data Association Filter (PDAF). A single target position tracking system in the air will be tested on radar using the PDAF with our proposed maintenance track algorithm. The data used for testing comes from simulating two motions on the SPx-Radar-Simulator. False alarms originating from interference (clutter) will be generated in the environment around the simulated target. The test results of the target tracking system using the PDAF algorithm that has been designed can track targets well, maintain tracking conditions in an environment that has clutter, and maintain track on multitarget environments. The error between the original data and the prediction on the validated target has a relatively small value, although there is a relatively significant difference in the error of the altitude data when the motion has varying altitude conditions.","PeriodicalId":6738,"journal":{"name":"2021 13th International Conference on Information & Communication Technology and System (ICTS)","volume":"56 1","pages":"258-263"},"PeriodicalIF":0.0,"publicationDate":"2021-10-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"76622298","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2021-10-20 | DOI: 10.1109/ICTS52701.2021.9609028
I. B. Prayogi, T. Ahmad, Ntivuguruzwa Jean de La Croix, Pascal Maniriho
Today's information technology tools allow people to share personal data and information through online media. Data security techniques have therefore been developed to protect confidential messages intended only for certain parties; one such technique is embedding the data in a specific medium. The hiding method proposed in this study uses a simple partition to split the 16-bit audio cover file into two groups of the same length. The simple partition technique is applied to improve the quality of the resulting stego file. Furthermore, an adaptive modulus operation is developed; applying it to the difference between samples increases the embedding capacity for secret messages. The final results show that all secret messages at a payload size of 700 kb can be successfully embedded, and the PSNR can be maintained up to 120.55 dB.
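The paper's exact adaptive modulus scheme is not reproduced here, but the following sketch shows the general idea of modulus-based embedding on the difference between paired samples: force (s2 - s1) mod 2^k to equal the payload value using the smallest possible adjustment. The pair layout, k, and the minimal-change correction are assumptions.

```python
# Generic modulus-embedding sketch on sample differences (not the paper's algorithm).
import numpy as np

def embed(samples, bits, k=2):
    """Embed k bits per sample pair so that (s2 - s1) mod 2**k equals the payload."""
    out = samples.astype(np.int32).copy()
    m = 2 ** k
    for i in range(len(bits) // k):
        value = int("".join(map(str, bits[i * k:(i + 1) * k])), 2)
        s1, s2 = out[2 * i], out[2 * i + 1]
        delta = (value - int(s2 - s1)) % m
        if delta > m // 2:                 # choose the smaller adjustment
            delta -= m
        out[2 * i + 1] = s2 + delta        # distortion is at most m // 2
    return out

cover = np.array([1000, 1003, -200, -198, 57, 60, 9, 9], dtype=np.int16)
stego = embed(cover, [1, 0, 1, 1, 0, 0, 1, 1])
print((stego[1::2] - stego[0::2]) % 4)     # extracts 2, 3, 0, 3 -> the payload bits
```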
{"title":"Hiding Messages in Audio using Modulus Operation and Simple Partition","authors":"I. B. Prayogi, T. Ahmad, Ntivuguruzwa Jean de La Croix, Pascal Maniriho","doi":"10.1109/ICTS52701.2021.9609028","DOIUrl":"https://doi.org/10.1109/ICTS52701.2021.9609028","url":null,"abstract":"Today's information technology tools allow people to share personal data and information through online media. Data security techniques are also developed to protect confidential messages that are only intended for certain parties, one of which is by embedding those data in a specific medium. The hiding method proposed in this study uses a simple partition to split the 16-bit audio cover file into two groups of the same length. The simple partition technique is applied to improve the quality of the resulting stego file. Furthermore, it is also developing an adaptive modulus operation calculation method, whose implementation on the difference of sample can increase the embedding capacity of the secret messages. The final result of the proposed method shows that all secret messages at the 700 kb payload file size can be successfully embedded, and the PSNR can be maintained up to 120.55 dB.","PeriodicalId":6738,"journal":{"name":"2021 13th International Conference on Information & Communication Technology and System (ICTS)","volume":"72 1","pages":"51-55"},"PeriodicalIF":0.0,"publicationDate":"2021-10-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"73342980","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2021-10-20 | DOI: 10.1109/ICTS52701.2021.9608037
Nur Muhammad Husnul Habib Yahya, Hadziq Fabroyir, D. Herumurti, Imam Kuswardayan, S. Arifiani
Procedural Content Generation (PCG) allows game developers to create game worlds more easily than building them manually. One of many PCG algorithms is Cellular Automata (CA). In this research, CA was used together with Poisson Disk Sampling (PDS) to generate two-dimensional dungeons in a roguelike game. PDS was modified to fit a grid-based environment to increase computational efficiency. Each generated room was connected to the others; together, the rooms formed a dungeon. The connections between rooms were built using the Depth First Search (DFS) algorithm, as commonly used in maze generation. The created dungeons became the game world: players could explore the dungeons to conquer them and collect resources, then spend the resources to upgrade their character. According to the evaluation, CA alone was not sufficient to generate dungeons with specific designs; certain pipelines were required to adjust and control the end results. Despite all the pipeline additions, the algorithm still ran adequately fast, at around 73 milliseconds.
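For context, the CA core of such a pipeline is typically the classic cave-smoothing rule: seed a grid with noise, then repeatedly turn a cell into a wall when enough of its neighbours are walls. The sketch below shows only that core (the PDS room placement and DFS connections are omitted); the fill ratio, step count, and "5 of 8 neighbours" rule are common defaults, not the paper's exact parameters.

```python
# Cellular-automata room smoothing (noise + majority-of-neighbours rule).
import random

def ca_room(w, h, fill=0.45, steps=4, seed=1):
    """Generate a cave-like room; out-of-bounds neighbours count as walls."""
    rng = random.Random(seed)
    grid = [[rng.random() < fill for _ in range(w)] for _ in range(h)]
    for _ in range(steps):
        nxt = [[False] * w for _ in range(h)]
        for y in range(h):
            for x in range(w):
                walls = 0
                for dy in (-1, 0, 1):
                    for dx in (-1, 0, 1):
                        if dy == dx == 0:
                            continue
                        ny, nx = y + dy, x + dx
                        if not (0 <= ny < h and 0 <= nx < w) or grid[ny][nx]:
                            walls += 1
                nxt[y][x] = walls >= 5          # become/stay wall on 5+ wall neighbours
        grid = nxt
    return grid

for row in ca_room(40, 12):
    print("".join("#" if cell else "." for cell in row))
```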
{"title":"Dungeon's Room Generation Using Cellular Automata and Poisson Disk Sampling in Roguelike Game","authors":"Nur Muhammad Husnul Habib Yahya, Hadziq Fabroyir, D. Herumurti, Imam Kuswardayan, S. Arifiani","doi":"10.1109/ICTS52701.2021.9608037","DOIUrl":"https://doi.org/10.1109/ICTS52701.2021.9608037","url":null,"abstract":"Procedural Content Generation (PCG) allows game developers to create worlds in games easier compared with creating them manually. One of many PCG algorithms is Cellular Automata (CA). CA was used in this research together with Poisson Disk Sampling (PDS) to generate two-dimensional dungeons in a roguelike game. PDS was modified to adapt to a grid-based environment to increase computation's efficiency. Each room generated in the game was connected to each other. Together, all the rooms created a dungeon. The connections between the rooms were architected using Depth First Search (DFS) algorithm as commonly used in maze generation. The created dungeons became the world in the game. Players could explore the dungeons to conquer them and collect resources. Then, players could spend the resources to upgrade their character. According to the research evaluation, using only CA was not sufficient to generate dungeons with specific designs. Certain pipelines were demanded to adjust and to control the end results. The algorithm still run adequately fast, which was around 73 milliseconds, despite all the pipeline additions.","PeriodicalId":6738,"journal":{"name":"2021 13th International Conference on Information & Communication Technology and System (ICTS)","volume":"25 1","pages":"29-34"},"PeriodicalIF":0.0,"publicationDate":"2021-10-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"74043900","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2021-10-20 | DOI: 10.1109/ICTS52701.2021.9607916
Umi Sa’adah, Maulidan Bagus Afridian Rasyid, S. Rochimah, Umi Laili Yuhana
In the CHAOS report 2020 released by the Standish Group, “a good team” is an essential factor in determining the success of a project. Team quality assessment therefore becomes significant for team management. Furthermore, it is believed that the quality of the developed product represents the quality of the development team. For this reason, it is necessary to measure team quality at any point so that mitigation steps can be taken to improve it. The study has two aims: (1) to build a team quality measurement model to prove that team quality positively correlates with product quality, and (2) to produce a team quality formula to predict product quality. We conducted the study on ten teams comprising 57 students who attended the Software Development Workshop class. The students applied self-assessment to assess team quality, while software development practitioners carried out the product quality assessments. Pearson correlation was used to obtain the relationship between the two, and the team quality measurement formula was generated using a brute-force method. This study shows that development team quality has a positive correlation of 0.87 with the quality of the developed product. Meanwhile, the team quality formula can be used as a reference to predict the quality of the developed product.
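The correlation step reduces to a standard Pearson computation over paired team/product scores. A minimal sketch follows; the ten score pairs are made up for illustration, not the study's data.

```python
# Pearson correlation between team self-assessment and practitioner-rated quality.
from scipy.stats import pearsonr

team_quality    = [3.8, 4.1, 2.9, 3.5, 4.4, 3.0, 3.9, 4.2, 2.7, 3.6]  # toy scores
product_quality = [3.6, 4.0, 3.1, 3.4, 4.5, 2.8, 3.8, 4.3, 2.9, 3.5]

r, p = pearsonr(team_quality, product_quality)
print(f"r = {r:.2f}, p = {p:.4f}")   # r near +1 indicates a strong positive correlation
```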
{"title":"Generating Team Quality Formula to Predict Product Quality in Software Engineering Project of College Students","authors":"Umi Sa’adah, Maulidan Bagus Afridian Rasyid, S. Rochimah, Umi Laili Yuhana","doi":"10.1109/ICTS52701.2021.9607916","DOIUrl":"https://doi.org/10.1109/ICTS52701.2021.9607916","url":null,"abstract":"In the CHAOS report 2020 released by the Standish Group, “a good team” is an essential factor in determining the success of a project. Therefore, team quality assessment becomes significant for team management. Furthermore, it has believed that the quality of the product developed represents the quality of the development team. For this reason, it is necessary to measure the quality of the team at any point, so we can take mitigation steps to improve the quality. The study has two aims: (1) to build a team quality measurement model to prove that team quality positively correlates with product quality and (2) to produce a team quality formula to predict product quality. We conducted the study on ten teams with 57 students who attended the Software Development Workshop class. The students applied self-assessment to assess the quality of the team. In addition, software development practitioners carried out product quality assessments. The method used to obtain a relationship between the two is the Pearson correlation. We generated a team quality measurement formula using the Brute Force method. This study proves that the quality of the development team has a positive correlation with the quality of the product developed by 0.87. Meanwhile, the team quality formula can be used as a reference to predict the quality of the product developed","PeriodicalId":6738,"journal":{"name":"2021 13th International Conference on Information & Communication Technology and System (ICTS)","volume":"108 1","pages":"106-111"},"PeriodicalIF":0.0,"publicationDate":"2021-10-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"79406066","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2021-10-20 | DOI: 10.1109/ICTS52701.2021.9608634
Shintya Rezky Rahmayanti, C. Fatichah, N. Suciati
Technology in robotics and machine learning has been applied in numerous fields, including the arts. Paul The Robot is able to draw sketches of human faces using a conventional convolution filter method. The Generative Adversarial Network (GAN) has been successful in generating synthetic images. Research in sketch generation has been conducted either with Recurrent Neural Networks (RNN) or with Deep Reinforcement Learning, drawing strokes step by step. This research proposes a system that generates sketches from real object images using a GAN and Deep Reinforcement Learning. The training framework is based on Doodle-SDQ (Doodle with Stroke Demonstration and Deep Q-Network), which combines supervised learning and reinforcement learning. Real object images are converted into contour images by the GAN; these serve as reference images for the reinforcement learning agent that generates the sketch. The experiment modifies the pooling layers during the supervised learning stage and the rare-exploration scenarios during the reinforcement learning stage. The result is a model that reaches an average total reward of 2558.98 with an average pixel error of 0.0489, using 200 as the maximum step count, in an average sketch generation time of 3.29 seconds.
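One plausible per-stroke reward for this kind of agent is the decrease in mean pixel error between the agent's canvas and the GAN-produced contour reference. The shaping below is an assumption in the spirit of Doodle-SDQ-style training, not the paper's exact reward function.

```python
# Pixel-error-based reward shaping for a stroke-drawing agent (illustrative).
import numpy as np

def step_reward(prev_canvas, canvas, reference):
    prev_err = np.mean(np.abs(prev_canvas - reference))
    err = np.mean(np.abs(canvas - reference))
    return prev_err - err                 # positive when the new stroke reduced the error

ref = np.zeros((84, 84)); ref[40, 10:70] = 1.0   # toy contour reference
before = np.zeros((84, 84))
after = before.copy(); after[40, 10:40] = 1.0    # stroke covering part of the contour
print(step_reward(before, after, ref))           # > 0: the stroke helped
```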
{"title":"Sketch Generation From Real Object Images Using Generative Adversarial Network and Deep Reinforcement Learning","authors":"Shintya Rezky Rahmayanti, C. Fatichah, N. Suciati","doi":"10.1109/ICTS52701.2021.9608634","DOIUrl":"https://doi.org/10.1109/ICTS52701.2021.9608634","url":null,"abstract":"Technology in Robotics and machine learning have been applied in numerous fields including the arts. Paul The Robot is able to draw sketches from human faces using the conventional convolution filter method. Generative Adversarial Network (GAN) has been successful in generating synthetic images. Researches in sketch generation have been conducted either by using Recurrent Neural Network (RNN) or by using Deep Reinforcement Learning, with step-by-step stroke drawing. This research proposes a system to generate sketches from real object images using GAN dan Deep Reinforcement Learning. The training framework used is based on Doodle-SDQ (Doodle with Stroke Demonstration and Deep Q-Network) that combines supervised learning and reinforcement learning. Real object images are converted into contour images by GAN to be the reference images by the reinforcement learning agent to generate the sketch. The experiment is done by modifying pooling layers during the supervised learning stage and rare exploration scenarios during the reinforcement learning stage. The result of this research is a model that can reach an average total reward of 2558.98 with an average pixel error of 0.0489 using 200 as the maximum step in an average time of 3.29 seconds for the sketch generation.","PeriodicalId":6738,"journal":{"name":"2021 13th International Conference on Information & Communication Technology and System (ICTS)","volume":"114 1","pages":"134-139"},"PeriodicalIF":0.0,"publicationDate":"2021-10-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"80463960","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2021-10-20 | DOI: 10.1109/ICTS52701.2021.9608229
K. R. Sungkono, Erina Oktavia Putri, Habibatul Azkiyah, R. Sarno
Companies around the world consider fraud a crucial problem. Fraud can be caused by many things, including data manipulation and anomalies in business processes or standard operating procedures (SOP). Anomalies come in several types; two of them are the wrong decision and the wrong pattern. A wrong decision, which does not conform to the standard list in the SOP, occurs as a result of making incorrect choices, whereas a wrong sequence of activities is called a wrong pattern. To detect these two anomalies, this paper proposes a graph-based method. The method creates rules for detecting a wrong pattern by measuring the similarity between SOP traces and process traces, and it checks the attributes of activities to detect a wrong decision. The evaluation uses an event log of the credit application process in a bank. Based on the evaluation, the proposed graph-based method achieves 100% accuracy in checking wrong patterns and wrong decisions.
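A toy conformance check in the spirit of this method is sketched below: a wrong pattern is a control-flow sequence absent from the SOP, and a wrong decision is an activity attribute outside the SOP's allowed values. The data structures (tuples of activity and decision value) and the exact-match pattern test are hypothetical simplifications of the paper's similarity-based rules.

```python
# Toy wrong-pattern / wrong-decision check against an SOP (illustrative structures).
def check_trace(trace, sop_sequences, decision_rules):
    activities = tuple(act for act, _ in trace)
    wrong_pattern = activities not in sop_sequences
    wrong_decision = any(dec is not None and dec not in decision_rules.get(act, {dec})
                         for act, dec in trace)
    return wrong_pattern, wrong_decision

sop = {("register", "check_credit", "decide")}          # allowed activity sequences
rules = {"decide": {"approve", "reject"}}               # allowed decision attributes
trace = [("register", None), ("check_credit", None), ("decide", "approve_vip")]
print(check_trace(trace, sop, rules))                   # (False, True): wrong decision only
```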
{"title":"Checking Wrong Decision and Wrong Pattern by Using A Graph-based Method","authors":"K. R. Sungkono, Erina Oktavia Putri, Habibatul Azkiyah, R. Sarno","doi":"10.1109/ICTS52701.2021.9608229","DOIUrl":"https://doi.org/10.1109/ICTS52701.2021.9608229","url":null,"abstract":"Companies in the world have been considered fraud as a crucial problem. The fraud can be caused by many things, including data manipulation and anomalies in business processes or standard operational procedures (SOP). Anomalies have several types; two of them are a wrong decision and a wrong pattern. A wrong decision, which is not in accordance with the standard list in SOP, occurs as a result of making wrong decisions. On the other hand, the wrong sequence of activities is called a wrong pattern. For detecting those two anomalies, this paper proposed a graph-based method. The graph-based method creates rules for detecting a wrong pattern by measuring the similarity between traces of SOP and the process and checks the attributes of activities to detect a wrong decision. The evaluation uses an event log of the credit application process in a bank. Based on the evaluation, the proposed graph-based method gains 100% for the accuracy value in checking wrong pattern and wrong decision.","PeriodicalId":6738,"journal":{"name":"2021 13th International Conference on Information & Communication Technology and System (ICTS)","volume":"27 1","pages":"184-189"},"PeriodicalIF":0.0,"publicationDate":"2021-10-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"77045177","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2021-10-20 | DOI: 10.1109/ICTS52701.2021.9609062
Aulia Teaku Nururrahmah, T. Ahmad
In this cyber era, protecting data is a must; one way to do so is to implement data hiding methods, or steganography. Steganography is a technique for hiding data or messages in a cover medium, which can be an audio, image, or video file. This paper examines two transform-based steganography methods, implementing the DCT (Discrete Cosine Transform) and DWT (Discrete Wavelet Transform) on two types of cover images, general and medical. The purpose of this research is to analyze the performance of embedding in the low-, middle-, and low-middle-frequency components of the DCT coefficients, and of embedding the payload bits into the Low-Low (LL), Low-High (LH), High-Low (HL), and High-High (HH) DWT bands. The experimental results show that, in general, the DCT method has better PSNR (Peak Signal to Noise Ratio) and payload capacity than DWT. The combined low and middle frequencies in the DCT method yield a greater PSNR than the other components of this transform, at 51.6751, whereas in the DWT technique the LH and HL frequency components show better PSNR values than the other band components, at 43.6939 and 43.5576, respectively.
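To make the DWT-band embedding and the PSNR metric concrete, here is a short sketch: decompose a cover image with a single-level 2D DWT, write payload bits into the LH band, reconstruct, and score with PSNR. LSB substitution on rounded coefficients is a crude stand-in for the paper's scheme; PyWavelets (pywt) and the Haar wavelet are assumptions.

```python
# DWT-band embedding sketch plus the PSNR metric used in the comparison.
import numpy as np
import pywt

def psnr(original, stego, peak=255.0):
    mse = np.mean((original.astype(float) - stego.astype(float)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

img = np.random.randint(0, 256, (64, 64)).astype(float)   # toy cover image
LL, (LH, HL, HH) = pywt.dwt2(img, "haar")                  # single-level 2D DWT
bits = np.random.randint(0, 2, LH.size)
coeffs = np.round(LH).astype(int).ravel()
coeffs = (coeffs & ~1) | bits                              # write payload bits into LSBs
stego = pywt.idwt2((LL, (coeffs.reshape(LH.shape).astype(float), HL, HH)), "haar")
print("PSNR:", psnr(img, stego))
```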
{"title":"Analysis of Image Steganography using Wavelet and Cosine Transforms","authors":"Aulia Teaku Nururrahmah, T. Ahmad","doi":"10.1109/ICTS52701.2021.9609062","DOIUrl":"https://doi.org/10.1109/ICTS52701.2021.9609062","url":null,"abstract":"In this cyber era, protecting data has been a must, one of which is performed by implementing data hiding methods or steganography. It is a technique of hiding data or messages into a media cover, which can be an audio, image, or video file. This paper examines two transformed-based steganography, implementing DCT (Discrete Cosine Transform) and DWT (Discrete Wavelet Transform) to two types of cover images, general and medical. The purpose of this research is to analyze the performance of the embedding process using the low, middle, and low-middle frequency components of the DC coefficient and embed the payload bits into Low-Low (LL), Low-High (LH), High-Low (HL), High-High (HH) of the DWT bands. The experimental results show that, in general, the DCT method has better PSNR (Peak Signal to Noise Ratio) and payload capacity than DWT. Meanwhile, the combined low and middle frequency in the DCT method has a greater PSNR than other components in this transformation, which is 51.6751. Whereas in the DWT technique, LH and HL frequency components show better PSNR values than other DWT band components, which are 43.6939 and 43.5576, respectively.","PeriodicalId":6738,"journal":{"name":"2021 13th International Conference on Information & Communication Technology and System (ICTS)","volume":"2016 1","pages":"40-45"},"PeriodicalIF":0.0,"publicationDate":"2021-10-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"86652396","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2021-10-20 | DOI: 10.1109/ICTS52701.2021.9608258
S. D. S. de S Sirisuriya, L. Ranathunge, S. Karunanayake, N. A. Abdullah
e-Learning has been revolutionizing the education system based on the concept of learning occurring at any time and in any place. Today the whole world has adapted to the concept called the “new normal” due to the COVID-19 pandemic, and when the new-normal idea is applied to the education system, online education is more important than ever. At present, online learning is used not only for academic work but also for learning extracurricular activities and for conducting training sessions for employees. With no face-to-face instruction, e-Learning provides a safe and effective alternative to classroom learning. Carefully designed and attractive content is therefore key to the success of an e-Learning program, keeping the audience focused and interested; hence, evaluating web-based e-Learning content is necessary. The evaluation process usually consists of pedagogical evaluation and content evaluation. This research study focuses mainly on automating the pedagogical evaluation component. In automating pedagogical evaluation, identifying inconsistencies is the biggest challenge faced by pedagogical experts in the current manual review process, because different institutions use different checklists to evaluate their e-Learning content pedagogically. The solution is to develop a calibrated checklist for use in the pedagogical evaluation process. This calibrated checklist was devised by studying existing checklists, then creating a questionnaire and conducting a survey of pedagogical experts to identify the essential review factors considered in pedagogical evaluation. The analyzed survey results identified the “learner interface of the course” as the most important review factor: a simple and user-friendly interface is a key component in enhancing the quality of online courses. This paper focuses on giving a comparative clarification of the factors considered under ‘Learner Interface’ in web-based e-Learning content.
{"title":"Pedagogical Importance of Learner Interface in Web-Based e-Learning Content","authors":"S. D. S. de S Sirisuriya, L. Ranathunge, S. Karunanayake, N. A. Abdullah","doi":"10.1109/ICTS52701.2021.9608258","DOIUrl":"https://doi.org/10.1109/ICTS52701.2021.9608258","url":null,"abstract":"e-Learning has been revolutionizing education system based on the concept of learning occurring at any time and any place. Today the whole world adopts their life to the concept called “new normal” due to COVID-19 pandemic. When applying the new normal idea to the education system, online education is more important than ever. At present, online learning is applicable not only to learn academic work but also it helps to learn extracurricular activities for students as well as conducting training sessions for employees. With no face-to-face instructions, e-Learning provides a safe and effective alternative to classroom learning. Therefore, carefully designed and attractive content is the success of an e-Learning program that can keep the audience focused and interested. Hence, evaluating the web-based e-Learning content is necessary. The evaluation process usually consists of pedagogical evaluation and content evaluation. This research study is mainly focused on automating the pedagogical evaluation component. In automating the pedagogical evaluation, identifying inconsistencies is the biggest challenge faced by pedagogical experts in the current manual reviewing process, because different institutions use different checklists to pedagogically evaluate their e-Learning content. Developing a calibrated checklist to be used in the pedagogical evaluation process is the solution. This calibrated checklist was devised based on studying existing checklists, followed by creating a questionnaire, and a survey conducted with pedagogical experts to identify the essential review factors considered in the pedagogical evaluation process. Analyzed survey results identified that, “Learner interface of the course” was the most important review factor. A simple and user-friendly interface is a key component in enhancing the quality of online courses. This paper focused on giving comparative clarification about factors considered under ‘Learner Interface’ in the web-based e-Learning content.","PeriodicalId":6738,"journal":{"name":"2021 13th International Conference on Information & Communication Technology and System (ICTS)","volume":"1 1","pages":"101-105"},"PeriodicalIF":0.0,"publicationDate":"2021-10-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"91525567","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}