
Latest publications from the 2021 13th International Conference on Information & Communication Technology and System (ICTS)

Systematic Analysis of Hateful Text Detection Using Machine Learning Classifiers
Tanzina Akter Tani, Tabassum Islam, Sayed Atique Newaz, N. Sultana
In today's internet-based world, social media is one of the most popular platforms through which users can vent feelings such as frustration, anger, and happiness without concern for distinguishing between moral and social values. Abusive or offensive posts of this kind cause social disturbances, crimes, and many unethical deeds, so there is a strong need to identify such texts/posts and remove them from social media. Different researchers have explored different text detection processes in related work. In our proposed work, three classifiers are used for detecting hateful text: Naïve Bayes (NB), Random Forest (RF), and Support Vector Machine (SVM). Bag of Words (BoW) and TF-IDF feature extraction methods are used to compare these three classifiers for both unigram and bigram features. To balance hateful and clean content, the Twitter dataset has been under-sampled. Text preprocessing, which is essential in NLP for producing better and more accurate results, has also been carried out in this work. In our results, Naïve Bayes provided the highest accuracy (89%) using the TF-IDF feature extraction model, whereas Random Forest provided the highest accuracy (88%) using Bag of Words (BoW) in the unigram case. Overall, we obtained much better performance with unigrams than with bigrams. Finally, we made a number of principal contributions.
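The simplest configuration described above, unigram Bag-of-Words features with a multinomial Naive Bayes classifier, can be sketched from scratch as follows; the tiny corpus and labels are toy placeholders, not the authors' Twitter dataset:

```python
import math
from collections import Counter, defaultdict

def train_nb(docs, labels):
    """Fit a multinomial NB model with add-one smoothing on unigram counts."""
    class_words = defaultdict(list)
    for doc, y in zip(docs, labels):
        class_words[y].extend(doc.lower().split())
    vocab = {w for words in class_words.values() for w in words}
    priors = {y: labels.count(y) / len(labels) for y in class_words}
    counts = {y: Counter(words) for y, words in class_words.items()}
    totals = {y: sum(c.values()) for y, c in counts.items()}
    return vocab, priors, counts, totals

def predict_nb(model, doc):
    vocab, priors, counts, totals = model
    scores = {}
    for y in priors:
        s = math.log(priors[y])
        for w in doc.lower().split():
            # Laplace (add-one) smoothing over the vocabulary
            s += math.log((counts[y][w] + 1) / (totals[y] + len(vocab)))
        scores[y] = s
    return max(scores, key=scores.get)

docs = ["you are awful and stupid", "have a great day friend",
        "i hate you so much", "what a lovely day"]
labels = ["hateful", "clean", "hateful", "clean"]
model = train_nb(docs, labels)
print(predict_nb(model, "i hate stupid people"))  # prints: hateful
```

Swapping the raw counts for TF-IDF weights, or extending the tokenizer to emit bigrams, reproduces the other configurations the paper compares.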
DOI: 10.1109/ICTS52701.2021.9608010 | Pages: 330-335 | Published: 2021-10-20
Citations: 1
Graph Algorithm for Anomaly Prediction in East Java Student Admission System
Dwi Sunaryono, Annas Nuril Iman, D. Purwitasari, A. B. Raharjo
Before the zoning policy, students or their parents tended to choose a recognized school with high educational quality despite its distance. New Student Admissions, or Penerimaan Peserta Didik Baru (PPDB), is a school zoning enrollment system that aims to reduce student travel distance. The online PPDB system requires home-location input supplemented with legal documents as a validation mechanism. However, falsified home residence, i.e., enrollment fraud, cannot be identified by the PPDB system. This study examines possible fraud cases from the PPDB enrollment ranks data. The ranks data forms a graph relationship between registrants and schools. Every record contains a longitude-latitude point, the main acceptance factor under PPDB policy. The process analyzes the connection between the distance-gap distribution derived from the ranks data and concurrent fraud cases. Because the distance-gap distribution still has missing values at several gap points, Kernel Density Estimation (KDE) is used to estimate those unknown values, yielding an estimated distance-gap distribution. The distance-gap distribution is affected by the residence locations plotted on a geographic map: an uncommon registrant location creates fluctuations in the distribution, and when the observed gap value exceeds the estimated distance-gap distribution, it is detected as enrollment fraud. Detection is handled with a graph algorithm that traverses the graph data and retrieves the ranked registrants of a school. The data are grouped every two meters, and each group's count is checked against the estimated distance-gap distribution. The graph algorithm is built on top of the PPDB system and was tested with several manipulated residence locations; it detected the manipulated data and ran fast, taking less than one second.
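The KDE step, estimating a smooth density from observed distance gaps so that missing gap points can be evaluated, can be sketched with a plain Gaussian kernel; the gap values and bandwidth below are hypothetical:

```python
import math

def gaussian_kde(samples, bandwidth=1.0):
    """Return a density estimate f(x) built from observed distance gaps."""
    n = len(samples)
    def f(x):
        return sum(math.exp(-0.5 * ((x - s) / bandwidth) ** 2)
                   for s in samples) / (n * bandwidth * math.sqrt(2 * math.pi))
    return f

# hypothetical distance gaps (in meters) between consecutive ranked registrants
gaps = [2.0, 2.5, 3.0, 3.5, 40.0]
density = gaussian_kde(gaps, bandwidth=1.0)

# the density at a gap point with no observation can now be estimated;
# registrants whose gap counts exceed the estimate would be flagged
print(round(density(3.2), 4))
```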
DOI: 10.1109/ICTS52701.2021.9608565 | Pages: 252-257 | Published: 2021-10-20
Citations: 0
Tracking Position of Airborne Target on SPx-Radar-Simulator Using Probabilistic Data Association Filter
M. Sahal, Zaidan Adenin Said, Rusdhianto Effendi Abdul Kadir, Z. Hidayat, Y. Bilfaqih, Abdullah Alkaff
Radar has various functions, one of which is tracking the position of airborne targets. In tracking the target position there is an obstacle, namely the uncertainty of data associations. To overcome it, the concept of data association can be used; one of the algorithms based on this concept is the Probabilistic Data Association Filter (PDAF). A single-target airborne position tracking system is tested on radar using the PDAF together with our proposed track maintenance algorithm. The test data come from simulating two motions on the SPx-Radar-Simulator, with false alarms originating from interference (clutter) generated in the environment around the simulated target. Test results show that the designed PDAF-based tracking system can track targets well, maintain tracking in cluttered environments, and maintain tracks in multitarget environments. The error between the original data and the prediction on the validated target is relatively small, although the altitude error differs noticeably when the motion has varying altitude conditions.
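The core PDAF measurement update, weighting each candidate measurement by its association probability and folding them into one combined innovation, can be sketched as below. This is a simplified illustration (gating and the covariance update are omitted, and all numbers are made up); it is not the authors' full track maintenance algorithm:

```python
import numpy as np

def pdaf_update(x, P, H, R, measurements, Pd=0.9, clutter_density=1e-3):
    """One PDAF update: weight each measurement by its association
    probability and fold them into a combined innovation."""
    S = H @ P @ H.T + R                       # innovation covariance
    S_inv = np.linalg.inv(S)
    K = P @ H.T @ S_inv                       # Kalman gain
    z_hat = H @ x                             # predicted measurement
    norm_const = np.sqrt(np.linalg.det(2 * np.pi * S))
    # Gaussian likelihood of each measurement, scaled by detection
    # probability and clutter density per the PDAF association model
    L = [Pd * np.exp(-0.5 * (z - z_hat) @ S_inv @ (z - z_hat))
         / (norm_const * clutter_density) for z in measurements]
    b0 = 1 - Pd                               # event: everything is clutter
    total = b0 + sum(L)
    betas = [l / total for l in L]            # association probabilities
    v_comb = sum(b * (z - z_hat) for b, z in zip(betas, measurements))
    return x + K @ v_comb, betas

x = np.array([0.0, 0.0])                      # predicted 2-D target position
P = np.eye(2) * 4.0                           # state covariance
H = np.eye(2)                                 # position observed directly
R = np.eye(2)                                 # measurement noise
zs = [np.array([0.5, 0.2]), np.array([6.0, 5.0])]  # target-like + clutter-like
x_new, betas = pdaf_update(x, P, H, R, zs)
print(betas)
```

The nearby measurement receives almost all the association weight, so the updated state moves toward it while the distant clutter return is effectively ignored.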
DOI: 10.1109/ICTS52701.2021.9608370 | Pages: 258-263 | Published: 2021-10-20
Citations: 1
Hiding Messages in Audio using Modulus Operation and Simple Partition
I. B. Prayogi, T. Ahmad, Ntivuguruzwa Jean de La Croix, Pascal Maniriho
Today's information technology tools allow people to share personal data and information through online media. Data security techniques have been developed to protect confidential messages intended only for certain parties, one of which is embedding those data in a specific medium. The hiding method proposed in this study uses a simple partition to split the 16-bit audio cover file into two groups of the same length; this simple partition technique improves the quality of the resulting stego file. Furthermore, an adaptive modulus-operation calculation method is developed whose application to sample differences increases the embedding capacity for secret messages. The final results show that secret messages up to a 700 kB payload can be successfully embedded while the PSNR is maintained up to 120.55 dB.
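The modulus-operation idea, minimally adjusting a sample so that its remainder under a modulus encodes a secret bit, can be illustrated on signed 16-bit samples. This is a generic sketch of modulus embedding with hypothetical sample values, not the paper's exact adaptive scheme over sample differences:

```python
def embed_bit(sample, bit, m=2):
    """Minimally adjust a signed 16-bit sample so that sample % m == bit."""
    r = sample % m
    if r == bit:
        return sample
    up = sample + (bit - r) % m        # nearest adjustment upward
    down = sample - (r - bit) % m      # nearest adjustment downward
    # prefer the smaller change while staying inside the 16-bit range
    for cand in sorted((up, down), key=lambda c: abs(c - sample)):
        if -32768 <= cand <= 32767:
            return cand
    return sample

def extract_bit(sample, m=2):
    return sample % m

samples = [1000, -2047, 32767, 15]   # hypothetical 16-bit audio samples
bits = [1, 0, 0, 1]                  # secret bits to hide
stego = [embed_bit(s, b) for s, b in zip(samples, bits)]
print(stego, [extract_bit(s) for s in stego])
```

Because every sample moves by at most m - 1 quantization steps, distortion stays small, which is how high PSNR values are preserved after embedding.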
DOI: 10.1109/ICTS52701.2021.9609028 | Pages: 51-55 | Published: 2021-10-20
Citations: 8
Dungeon's Room Generation Using Cellular Automata and Poisson Disk Sampling in Roguelike Game
Nur Muhammad Husnul Habib Yahya, Hadziq Fabroyir, D. Herumurti, Imam Kuswardayan, S. Arifiani
Procedural Content Generation (PCG) allows game developers to create game worlds more easily than building them manually. One of many PCG algorithms is Cellular Automata (CA). In this research, CA was used together with Poisson Disk Sampling (PDS) to generate two-dimensional dungeons in a roguelike game. PDS was modified to work in a grid-based environment to improve computational efficiency. Each generated room was connected to the others, and together the rooms formed a dungeon; the connections between rooms were built using the Depth-First Search (DFS) algorithm commonly used in maze generation. The created dungeons became the game world: players could explore and conquer the dungeons, collect resources, and spend them to upgrade their character. According to the evaluation, CA alone was not sufficient to generate dungeons with specific designs; additional pipeline stages were needed to adjust and control the end results. Despite these additions, the algorithm still ran adequately fast, at around 73 milliseconds.
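A minimal sketch of the CA stage, random fill followed by neighbour-count smoothing, which carves a cave-like room out of noise. The fill ratio and birth/survive thresholds are illustrative assumptions; the paper's exact rules and the PDS/DFS stages are not reproduced here:

```python
import random

def generate_room(w, h, fill=0.45, steps=4, birth=5, survive=4, seed=42):
    """Smooth a random wall/floor grid into a cave-like room with CA."""
    rng = random.Random(seed)
    grid = [[1 if rng.random() < fill else 0 for _ in range(w)]
            for _ in range(h)]
    for _ in range(steps):
        nxt = [[0] * w for _ in range(h)]
        for y in range(h):
            for x in range(w):
                # count wall neighbours; out-of-bounds counts as wall
                walls = sum(
                    grid[ny][nx] if 0 <= ny < h and 0 <= nx < w else 1
                    for ny in (y - 1, y, y + 1)
                    for nx in (x - 1, x, x + 1)
                    if (ny, nx) != (y, x))
                if grid[y][x]:
                    nxt[y][x] = 1 if walls >= survive else 0
                else:
                    nxt[y][x] = 1 if walls >= birth else 0
        grid = nxt
    return grid

room = generate_room(20, 12)
print("\n".join("".join("#" if c else "." for c in row) for row in room))
```

Running several smoothing steps merges isolated walls into organic cavern shapes; in the paper's pipeline, rooms produced this way would then be placed via PDS and linked via DFS.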
DOI: 10.1109/ICTS52701.2021.9608037 | Pages: 29-34 | Published: 2021-10-20
Citations: 0
Generating Team Quality Formula to Predict Product Quality in Software Engineering Project of College Students
Umi Sa’adah, Maulidan Bagus Afridian Rasyid, S. Rochimah, Umi Laili Yuhana
In the CHAOS Report 2020 released by the Standish Group, "a good team" is an essential factor in determining the success of a project. Therefore, team quality assessment is significant for team management. Furthermore, it is believed that the quality of the product developed reflects the quality of the development team. For this reason, it is necessary to measure team quality at any point so that mitigation steps can be taken to improve it. The study has two aims: (1) to build a team quality measurement model proving that team quality positively correlates with product quality, and (2) to produce a team quality formula for predicting product quality. We conducted the study on ten teams comprising 57 students who attended the Software Development Workshop class. The students applied self-assessment to rate team quality, while software development practitioners carried out product quality assessments. The relationship between the two was obtained using the Pearson correlation, and the team quality measurement formula was generated using a brute-force method. This study shows that development team quality has a positive correlation of 0.87 with the quality of the product developed. The team quality formula can therefore be used as a reference to predict the quality of the product developed.
DOI: 10.1109/ICTS52701.2021.9607916 | Pages: 106-111 | Published: 2021-10-20
Citations: 2
Sketch Generation From Real Object Images Using Generative Adversarial Network and Deep Reinforcement Learning
Shintya Rezky Rahmayanti, C. Fatichah, N. Suciati
Robotics and machine learning technologies have been applied in numerous fields, including the arts. Paul the Robot can draw sketches of human faces using a conventional convolution filter method, and Generative Adversarial Networks (GAN) have been successful in generating synthetic images. Research on sketch generation has been conducted either with Recurrent Neural Networks (RNN) or with Deep Reinforcement Learning using step-by-step stroke drawing. This research proposes a system to generate sketches from real object images using a GAN and Deep Reinforcement Learning. The training framework is based on Doodle-SDQ (Doodle with Stroke Demonstration and Deep Q-Network), which combines supervised learning and reinforcement learning. Real object images are converted into contour images by the GAN and serve as reference images for the reinforcement learning agent generating the sketch. Experiments were done by modifying pooling layers during the supervised learning stage and rare-exploration scenarios during the reinforcement learning stage. The resulting model reaches an average total reward of 2558.98 with an average pixel error of 0.0489, using 200 as the maximum step count, in an average sketch generation time of 3.29 seconds.
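The core reinforcement signal, rewarding a stroke by how much it reduces the pixel error against the reference contour image, can be sketched as follows. This is an illustrative reward shaping over toy binary canvases, not the exact Doodle-SDQ reward:

```python
def pixel_error(canvas, target):
    """Mean absolute per-pixel difference between canvas and reference."""
    n = len(canvas) * len(canvas[0])
    return sum(abs(c - t)
               for row_c, row_t in zip(canvas, target)
               for c, t in zip(row_c, row_t)) / n

def step_reward(prev_canvas, new_canvas, target):
    """Reward a stroke by how much it reduced the pixel error."""
    return pixel_error(prev_canvas, target) - pixel_error(new_canvas, target)

# toy 2x2 binary images: the agent draws one correct pixel
target = [[1, 0], [0, 1]]
blank = [[0, 0], [0, 0]]
stroke = [[1, 0], [0, 0]]
print(step_reward(blank, stroke, target))  # error drops from 0.5 to 0.25
```

Summing such per-stroke rewards over an episode (capped at a maximum step count, 200 in the paper) yields the total reward the agent maximizes.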
DOI: 10.1109/ICTS52701.2021.9608634 | Pages: 134-139 | Published: 2021-10-20
Citations: 0
Checking Wrong Decision and Wrong Pattern by Using A Graph-based Method
K. R. Sungkono, Erina Oktavia Putri, Habibatul Azkiyah, R. Sarno
Companies around the world consider fraud a crucial problem. Fraud can be caused by many things, including data manipulation and anomalies in business processes or standard operating procedures (SOP). Anomalies come in several types; two of them are the wrong decision and the wrong pattern. A wrong decision is a decision that does not accord with the standard list in the SOP, while a wrong sequence of activities is called a wrong pattern. To detect these two anomalies, this paper proposes a graph-based method. It creates rules for detecting a wrong pattern by measuring the similarity between SOP traces and the process, and checks the attributes of activities to detect a wrong decision. The evaluation uses an event log of the credit application process in a bank; the proposed graph-based method achieves 100% accuracy in checking wrong patterns and wrong decisions.
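The two checks can be illustrated with a tiny exact-match stand-in for the paper's similarity rules: a trace whose activity sequence matches no SOP trace is a wrong pattern, and an activity attribute outside the SOP's allowed values is a wrong decision. The credit-process activity names below are hypothetical:

```python
def check_trace(trace, sop_traces, allowed):
    """Flag a wrong pattern (sequence not in the SOP) and wrong decisions
    (activity attributes outside the SOP's allowed values)."""
    seq = [activity for activity, _ in trace]
    wrong_pattern = seq not in sop_traces
    wrong_decisions = [activity for activity, attr in trace
                       if activity in allowed and attr not in allowed[activity]]
    return wrong_pattern, wrong_decisions

# hypothetical SOP for a credit-application process
sop_traces = [["register", "check_credit", "decide", "notify"]]
allowed = {"decide": {"approve", "reject"}}

ok_trace = [("register", None), ("check_credit", None),
            ("decide", "approve"), ("notify", None)]
bad_trace = [("register", None), ("decide", "escalate"),
             ("check_credit", None), ("notify", None)]
print(check_trace(ok_trace, sop_traces, allowed))   # (False, [])
print(check_trace(bad_trace, sop_traces, allowed))  # (True, ['decide'])
```

The paper's method replaces the exact sequence match with a graph-based similarity measure between traces, but the two anomaly categories are distinguished the same way.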
DOI: 10.1109/ICTS52701.2021.9608229 | Pages: 184-189 | Published: 2021-10-20
Citations: 1
Analysis of Image Steganography using Wavelet and Cosine Transforms
Aulia Teaku Nururrahmah, T. Ahmad
In this cyber era, protecting data has become a must, and one way to do so is by implementing data hiding methods, or steganography. Steganography is a technique for hiding data or messages in a cover medium, which can be an audio, image, or video file. This paper examines two transform-based steganography methods, applying the DCT (Discrete Cosine Transform) and DWT (Discrete Wavelet Transform) to two types of cover images, general and medical. The purpose of this research is to analyze the performance of the embedding process using the low, middle, and combined low-middle frequency DCT coefficients, and to embed the payload bits into the Low-Low (LL), Low-High (LH), High-Low (HL), and High-High (HH) DWT sub-bands. The experimental results show that, in general, the DCT method has better PSNR (Peak Signal to Noise Ratio) and payload capacity than DWT. Within the DCT method, the combined low and middle frequency components yield a greater PSNR than the other components, at 51.6751, whereas in the DWT technique, the LH and HL sub-bands show better PSNR values than the other bands, at 43.6939 and 43.5576, respectively.
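The DWT sub-band decomposition and the PSNR metric from the abstract above can be illustrated with a short sketch. This is an assumption-laden example, not the paper's implementation: it uses a single-level Haar wavelet (averages and differences of pixel pairs) as the simplest DWT, and the standard PSNR definition for 8-bit images.

```python
import numpy as np

def haar_dwt2(img):
    """One-level 2-D Haar DWT: returns the LL, LH, HL, HH sub-bands."""
    x = np.asarray(img, dtype=float)
    # Row transform: pairwise averages (low-pass) and differences (high-pass).
    lo = (x[:, 0::2] + x[:, 1::2]) / 2
    hi = (x[:, 0::2] - x[:, 1::2]) / 2
    # Column transform applied to each half.
    ll = (lo[0::2, :] + lo[1::2, :]) / 2
    lh = (lo[0::2, :] - lo[1::2, :]) / 2
    hl = (hi[0::2, :] + hi[1::2, :]) / 2
    hh = (hi[0::2, :] - hi[1::2, :]) / 2
    return ll, lh, hl, hh

def psnr(cover, stego, peak=255.0):
    """Peak Signal to Noise Ratio in dB between cover and stego images."""
    mse = np.mean((np.asarray(cover, float) - np.asarray(stego, float)) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(peak ** 2 / mse)
```

Payload bits would be embedded by perturbing coefficients in one of the sub-bands (LH and HL in the paper's best DWT results), and PSNR then measures how far the stego image departs from the cover.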
{"title":"Analysis of Image Steganography using Wavelet and Cosine Transforms","authors":"Aulia Teaku Nururrahmah, T. Ahmad","doi":"10.1109/ICTS52701.2021.9609062","DOIUrl":"https://doi.org/10.1109/ICTS52701.2021.9609062","url":null,"abstract":"In this cyber era, protecting data has been a must, one of which is performed by implementing data hiding methods or steganography. It is a technique of hiding data or messages into a media cover, which can be an audio, image, or video file. This paper examines two transformed-based steganography, implementing DCT (Discrete Cosine Transform) and DWT (Discrete Wavelet Transform) to two types of cover images, general and medical. The purpose of this research is to analyze the performance of the embedding process using the low, middle, and low-middle frequency components of the DC coefficient and embed the payload bits into Low-Low (LL), Low-High (LH), High-Low (HL), High-High (HH) of the DWT bands. The experimental results show that, in general, the DCT method has better PSNR (Peak Signal to Noise Ratio) and payload capacity than DWT. Meanwhile, the combined low and middle frequency in the DCT method has a greater PSNR than other components in this transformation, which is 51.6751. Whereas in the DWT technique, LH and HL frequency components show better PSNR values than other DWT band components, which are 43.6939 and 43.5576, respectively.","PeriodicalId":6738,"journal":{"name":"2021 13th International Conference on Information & Communication Technology and System (ICTS)","volume":"2016 1","pages":"40-45"},"PeriodicalIF":0.0,"publicationDate":"2021-10-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"86652396","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
Pedagogical Importance of Learner Interface in Web-Based e-Learning Content
S. D. S. de S Sirisuriya, L. Ranathunge, S. Karunanayake, N. A. Abdullah
e-Learning has been revolutionizing the education system, based on the concept of learning occurring at any time and in any place. Today the whole world has adapted to the concept called the "new normal" due to the COVID-19 pandemic. When applying the new-normal idea to the education system, online education is more important than ever. At present, online learning is used not only for academic work but also for students' extracurricular activities and for conducting employee training sessions. With no face-to-face instruction, e-Learning provides a safe and effective alternative to classroom learning. Therefore, carefully designed and attractive content is key to the success of an e-Learning program, keeping the audience focused and interested. Hence, evaluating web-based e-Learning content is necessary. The evaluation process usually consists of pedagogical evaluation and content evaluation. This research study mainly focuses on automating the pedagogical evaluation component. In automating the pedagogical evaluation, identifying inconsistencies is the biggest challenge faced by pedagogical experts in the current manual reviewing process, because different institutions use different checklists to pedagogically evaluate their e-Learning content. The solution is to develop a calibrated checklist for use in the pedagogical evaluation process. This calibrated checklist was devised by studying existing checklists, creating a questionnaire, and conducting a survey with pedagogical experts to identify the essential review factors considered in the pedagogical evaluation process. The analyzed survey results identified "Learner interface of the course" as the most important review factor. A simple and user-friendly interface is a key component in enhancing the quality of online courses. This paper focuses on giving a comparative clarification of the factors considered under "Learner Interface" in web-based e-Learning content.
{"title":"Pedagogical Importance of Learner Interface in Web-Based e-Learning Content","authors":"S. D. S. de S Sirisuriya, L. Ranathunge, S. Karunanayake, N. A. Abdullah","doi":"10.1109/ICTS52701.2021.9608258","DOIUrl":"https://doi.org/10.1109/ICTS52701.2021.9608258","url":null,"abstract":"e-Learning has been revolutionizing education system based on the concept of learning occurring at any time and any place. Today the whole world adopts their life to the concept called “new normal” due to COVID-19 pandemic. When applying the new normal idea to the education system, online education is more important than ever. At present, online learning is applicable not only to learn academic work but also it helps to learn extracurricular activities for students as well as conducting training sessions for employees. With no face-to-face instructions, e-Learning provides a safe and effective alternative to classroom learning. Therefore, carefully designed and attractive content is the success of an e-Learning program that can keep the audience focused and interested. Hence, evaluating the web-based e-Learning content is necessary. The evaluation process usually consists of pedagogical evaluation and content evaluation. This research study is mainly focused on automating the pedagogical evaluation component. In automating the pedagogical evaluation, identifying inconsistencies is the biggest challenge faced by pedagogical experts in the current manual reviewing process, because different institutions use different checklists to pedagogically evaluate their e-Learning content. Developing a calibrated checklist to be used in the pedagogical evaluation process is the solution. This calibrated checklist was devised based on studying existing checklists, followed by creating a questionnaire, and a survey conducted with pedagogical experts to identify the essential review factors considered in the pedagogical evaluation process. Analyzed survey results identified that, “Learner interface of the course” was the most important review factor. A simple and user-friendly interface is a key component in enhancing the quality of online courses. This paper focused on giving comparative clarification about factors considered under ‘Learner Interface’ in the web-based e-Learning content.","PeriodicalId":6738,"journal":{"name":"2021 13th International Conference on Information & Communication Technology and System (ICTS)","volume":"1 1","pages":"101-105"},"PeriodicalIF":0.0,"publicationDate":"2021-10-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"91525567","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Journal
2021 13th International Conference on Information & Communication Technology and System (ICTS)