Design and Implementation of Moving Object Visual Tracking System using μ-Synthesis Controller
Pub Date: 2019-12-31 | DOI: 10.5614/itbj.ict.res.appl.2019.13.3.1 | Pages: 177-191
S. Saripudin, Modestus Oliver Asali, B. Trilaksono, T. Indriyanto
Considering the increasing use of security and surveillance systems, moving object tracking is an interesting research topic in the field of computer vision. In general, a moving object tracking system consists of two integrated parts: the video tracking part, which predicts the position of the target in the image plane, and the visual servo part, which controls the movement of the camera to follow the movement of objects in the image plane. For tracking purposes, the camera is used as a visual sensor mounted on a 2-DOF (yaw-pitch) manipulator platform in an eye-in-hand configuration. Although its operation is relatively simple, the yaw-pitch camera platform still needs a good control method to improve its performance. In this study, we propose a moving object tracking system on a prototype yaw-pitch platform. A μ-synthesis controller was used to control the visual servo part and keep the target in the center of the image plane. The experimental results showed that the proposed system works in real-time conditions with high tracking accuracy in both indoor and outdoor environments.
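To make the visual-servo geometry concrete, the sketch below maps a tracked target's pixel offset from the image center to yaw/pitch commands for the 2-DOF platform. It is a minimal stand-in, not the paper's controller: a plain proportional law replaces the μ-synthesis design, and the camera intrinsics and gains are assumed values.

```python
# Visual-servo sketch: drive the target toward the image center.
# The paper's mu-synthesis controller is replaced by a proportional law;
# focal lengths and image size below are assumptions, not paper values.
import math

FX = FY = 600.0   # assumed focal lengths in pixels
W, H = 640, 480   # assumed image resolution

def pixel_error_to_angles(u, v):
    """Angular offset (rad) of the target at pixel (u, v) from the optical axis."""
    yaw_err = math.atan2(u - W / 2, FX)    # positive: target right of center
    pitch_err = math.atan2(v - H / 2, FY)  # positive: target below center
    return yaw_err, pitch_err

def servo_step(u, v, k_yaw=0.5, k_pitch=0.5):
    """One proportional control step; returns yaw/pitch rate commands (rad/s)."""
    yaw_err, pitch_err = pixel_error_to_angles(u, v)
    return k_yaw * yaw_err, k_pitch * pitch_err

# Target detected 100 px right of and 50 px above the image center:
print(servo_step(W / 2 + 100, H / 2 - 50))
```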
{"title":"Design and Implementation of Moving Object Visual Tracking System using μ-Synthesis Controller","authors":"S. Saripudin, Modestus Oliver Asali, B. Trilaksono, T. Indriyanto","doi":"10.5614/itbj.ict.res.appl.2019.13.3.1","DOIUrl":"https://doi.org/10.5614/itbj.ict.res.appl.2019.13.3.1","url":null,"abstract":"Considering the increasing use of security and surveillance systems, moving object tracking systems are an interesting research topic in the field of computer vision. In general, a moving object tracking system consists of two integrated parts, namely the video tracking part that predicts the position of the target in the image plane, and the visual servo part that controls the movement of the camera following the movement of objects in the image plane. For tracking purposes, the camera is used as a visual sensor and applied to a 2-DOF (yaw-pitch) manipulator platform with an eye-in-hand camera configuration. Although its operation is relatively simple, the yaw-pitch camera platform still needs a good control method to improve its performance. In this study, we propose a moving object tracking system on a prototype yaw-pitch platform. A m-synthesis controller was used to control the movement of the visual servo part and keep the target in the center of the image plane. The experimental results showed relatively good results from the proposed system to work in real-time conditions with high tracking accuracy in both indoor and outdoor environments.","PeriodicalId":42785,"journal":{"name":"Journal of ICT Research and Applications","volume":"13 1","pages":"177-191"},"PeriodicalIF":0.6,"publicationDate":"2019-12-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46088261","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Ultrasound Nerve Segmentation Using Deep Probabilistic Programming
Pub Date: 2019-12-31 | DOI: 10.5614/itbj.ict.res.appl.2019.13.3.5 | Pages: 241-256
Iresha D. Rubasinghe, D. Meedeniya
Deep probabilistic programming combines the strengths of deep learning with probabilistic modeling for efficient and flexible computation in practice. As the field is still evolving, only a few expressive programming languages exist for uncertainty management. This paper discusses an application for the analysis of ultrasound nerve segmentation-based biomedical images. Our method uses the probabilistic programming language Edward with the U-Net model and generative adversarial networks under different optimizers. The segmentation process showed the lowest Dice loss (−0.54) and the highest accuracy (0.99) with the Adam optimizer in the U-Net model, with the least time consumption compared to the other optimizers. The smallest generative network loss obtained in the generative adversarial network model was 0.69, also for the Adam optimizer. The Dice loss, accuracy, time consumption, and output image quality in the results show the long-run applicability of deep probabilistic programming. Thus, we further propose a neuroscience decision support system based on the proposed approach.
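The Dice loss referenced above is a standard overlap measure for segmentation; a minimal NumPy version is sketched below. The smoothing constant is a common convention, not a value from the paper.

```python
# Dice loss sketch: 1 - Dice coefficient between binary masks.
# `smooth` avoids division by zero on empty masks (assumed convention).
import numpy as np

def dice_loss(pred, target, smooth=1.0):
    """1 - Dice coefficient; 0.0 means perfect overlap, 1.0 means none."""
    pred, target = pred.ravel(), target.ravel()
    intersection = np.sum(pred * target)
    dice = (2.0 * intersection + smooth) / (np.sum(pred) + np.sum(target) + smooth)
    return 1.0 - dice

pred = np.array([[1, 1, 0], [0, 1, 0]], dtype=float)
true = np.array([[1, 0, 0], [0, 1, 1]], dtype=float)
print(dice_loss(pred, true))  # ~0.29 for this partial overlap
```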
{"title":"Ultrasound Nerve Segmentation Using Deep Probabilistic Programming","authors":"Iresha D. Rubasinghe, D. Meedeniya","doi":"10.5614/itbj.ict.res.appl.2019.13.3.5","DOIUrl":"https://doi.org/10.5614/itbj.ict.res.appl.2019.13.3.5","url":null,"abstract":"Deep probabilistic programming concatenates the strengths of deep learning to the context of probabilistic modeling for efficient and flexible computation in practice. Being an evolving field, there exist only a few expressive programming languages for uncertainty management. This paper discusses an application for analysis of ultrasound nerve segmentation-based biomedical images. Our method uses the probabilistic programming language Edward with the U-Net model and generative adversarial networks under different optimizers. The segmentation process showed the least Dice loss (‑0.54) and the highest accuracy (0.99) with the Adam optimizer in the U-Net model with the least time consumption compared to other optimizers. The smallest amount of generative network loss in the generative adversarial network model gained was 0.69 for the Adam optimizer. The Dice loss, accuracy, time consumption and output image quality in the results show the applicability of deep probabilistic programming in the long run. Thus, we further propose a neuroscience decision support system based on the proposed approach.","PeriodicalId":42785,"journal":{"name":"Journal of ICT Research and Applications","volume":"13 1","pages":"241-256"},"PeriodicalIF":0.6,"publicationDate":"2019-12-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46164262","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Paraphrasing Method Based on Contextual Synonym Substitution
Pub Date: 2019-12-31 | DOI: 10.5614/itbj.ict.res.appl.2019.13.3.6 | Pages: 257-282
A. Barmawi, Ali Muhammad
Generating paraphrases is an important component of natural language processing and generation. Several applications use paraphrasing, for example linguistic steganography, recommender systems, and machine translation. One method for paraphrasing sentences is synonym substitution, such as the NGM-based paraphrasing method proposed by Gadag et al. The weakness of this method is that ambiguous meanings frequently occur because the paraphrasing process is based solely on n-grams, which negatively affects the naturalness of the paraphrased sentences. To overcome this problem, a contextual synonym substitution method is proposed, which aims to increase the naturalness of the paraphrased sentences. Using the proposed method, the paraphrasing process is based not only on n-grams but also on the context of the sentence, such that naturalness is increased. Based on the experimental results, the sentences generated using the proposed method had higher naturalness than the sentences generated using the original method.
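As a minimal illustration of the synonym-substitution building block, the sketch below swaps each word for its first WordNet synonym. The authors' contextual scoring is not reproduced; picking the first synonym is exactly the kind of context-free choice the paper improves on. It requires nltk and a one-time nltk.download('wordnet').

```python
# Context-free synonym substitution via WordNet (requires
# nltk.download('wordnet') once). A naive baseline, not the paper's method.
from nltk.corpus import wordnet

def substitute(word):
    """Return the first WordNet synonym for `word`, or `word` if none exists."""
    for syn in wordnet.synsets(word):
        for lemma in syn.lemmas():
            candidate = lemma.name().replace("_", " ")
            if candidate.lower() != word.lower():
                return candidate
    return word

sentence = "the quick dog ran home".split()
print(" ".join(substitute(w) for w in sentence))
```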
{"title":"Paraphrasing Method Based on Contextual Synonym Substitution","authors":"A. Barmawi, Ali Muhammad","doi":"10.5614/itbj.ict.res.appl.2019.13.3.6","DOIUrl":"https://doi.org/10.5614/itbj.ict.res.appl.2019.13.3.6","url":null,"abstract":"Generating paraphrases is an important component of natural language processing and generation. There are several applications that use paraphrasing, for example linguistic steganography, recommender systems, machine translation, etc. One method for paraphrasing sentences is by using synonym substitution, such as the NGM-based paraphrasing method proposed by Gadag et al. The weakness of this method is that ambiguous meanings frequently occur because the paraphrasing process is based solely on n-gram. This negatively affects the naturalness of the paraphrased sentences. For overcoming this problem, a contextual synonym substitution method is proposed, which aims to increase the naturalness of the paraphrased sentences. Using the proposed method, the paraphrasing process is not only based on n-gram but also on the context of the sentence such that the naturalness is increased. Based on the experimental result, the sentences generated using the proposed method had higher naturalness than the sentences generated using the original method.","PeriodicalId":42785,"journal":{"name":"Journal of ICT Research and Applications","volume":"13 1","pages":"257-282"},"PeriodicalIF":0.6,"publicationDate":"2019-12-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"48009527","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Robust Algorithm for Emoji Detection in Smartphone Screenshot Images
Pub Date: 2019-12-31 | DOI: 10.5614/itbj.ict.res.appl.2019.13.3.2 | Pages: 192-212
Bilal Bataineh, M. Y. Shambour
The increasing use of smartphones and social media apps for communication results in a massive number of screenshot images. These images enrich the written language through text and emojis. In this regard, several studies in the image analysis field have considered text; however, they have ignored the use of emojis. In this study, a robust two-stage algorithm for detecting emojis in screenshot images is proposed. The first stage localizes the regions of candidate emojis by using the proposed RGB-channel analysis method followed by a connected component method with a set of proposed rules. In the second, verification stage, candidate regions are classified as emoji or non-emoji using the proposed features with a decision tree classifier. Experiments were conducted to evaluate each stage independently and to assess the overall performance of the proposed algorithm using a self-collected dataset. The results showed that the proposed RGB-channel analysis method achieved better performance than the Niblack and Sauvola methods. Moreover, the proposed feature extraction method with a decision tree classifier achieved more satisfactory performance than the LBP feature extraction method with the Bayesian network, perceptron neural network, and decision table classifiers. Overall, the proposed algorithm exhibited high efficiency in detecting emojis in screenshot images.
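A rough sketch of the first stage is given below under assumed parameters: strongly colored pixels (emojis tend to be saturated, while text is near-grayscale) are flagged via the per-pixel spread across RGB channels, then grouped into candidate regions with connected components. The saturation threshold and minimum region size are illustrative, not the paper's rules.

```python
# Candidate-emoji localization sketch: channel-spread mask + connected
# components. Threshold and size rule are assumptions for illustration.
import numpy as np
from scipy import ndimage

def candidate_emoji_regions(rgb, sat_thresh=60, min_pixels=50):
    """Return bounding slices of colorful connected regions in an HxWx3 image."""
    rgb = rgb.astype(int)
    spread = rgb.max(axis=2) - rgb.min(axis=2)  # per-pixel RGB channel spread
    mask = spread > sat_thresh                  # colorful -> candidate emoji
    labels, n = ndimage.label(mask)
    slices = ndimage.find_objects(labels)
    sizes = ndimage.sum(mask, labels, range(1, n + 1))
    return [s for s, sz in zip(slices, sizes) if sz >= min_pixels]

img = np.zeros((20, 20, 3), dtype=np.uint8)
img[5:15, 5:15] = (255, 200, 0)  # a saturated square, like an emoji
print(candidate_emoji_regions(img))
```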
{"title":"A Robust Algorithm for Emoji Detection in Smartphone Screenshot Images","authors":"Bilal Bataineh, M. Y. Shambour","doi":"10.5614/itbj.ict.res.appl.2019.13.3.2","DOIUrl":"https://doi.org/10.5614/itbj.ict.res.appl.2019.13.3.2","url":null,"abstract":"The increasing use of smartphones and social media apps for communication results in a massive number of screenshot images. These images enrich the written language through text and emojis. In this regard, several studies in the image analysis field have considered text. However, they ignored the use of emojis. In this study, a robust two-stage algorithm for detecting emojis in screenshot images is proposed. The first stage localizes the regions of candidate emojis by using the proposed RGB-channel analysis method followed by a connected component method with a set of proposed rules. In the second verification stage, each of the emojis and non-emojis are classified by using proposed features with a decision tree classifier. Experiments were conducted to evaluate each stage independently and assess the performance of the proposed algorithm completely by using a self-collected dataset. The results showed that the proposed RGB-channel analysis method achieved better performance than the Niblack and Sauvola methods. Moreover, the proposed feature extraction method with decision tree classifier achieved more satisfactory performance than the LBP feature extraction method with all Bayesian network, perceptron neural network, and decision table rules. Overall, the proposed algorithm exhibited high efficiency in detecting emojis in screenshot images.","PeriodicalId":42785,"journal":{"name":"Journal of ICT Research and Applications","volume":"13 1","pages":"192-212"},"PeriodicalIF":0.6,"publicationDate":"2019-12-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"43466635","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Identifying Fake Facebook Profiles Using Data Mining Techniques
Pub Date: 2019-09-30 | DOI: 10.5614/itbj.ict.res.appl.2019.13.2.2
Mohammed Basil Albayati, A. Altamimi
Facebook, the popular online social network, has changed our lives. Users can create a customized profile to share information about themselves with others who have agreed to be their 'friend'. However, this gigantic social network can be misused to carry out malicious activities. Facebook faces the problem of fake accounts that enable scammers to violate users' privacy by creating fake profiles to infiltrate personal social networks. Many techniques have been proposed to address this issue. Most of them detect fake profiles/accounts based on the characteristics of the user profile. However, the limited profile data made publicly available by Facebook makes the existing approaches to fake profile identification difficult to apply. Therefore, this research utilized data mining techniques to detect fake profiles. Supervised (ID3 decision tree, k-NN, and SVM) and unsupervised (k-means and k-medoids) algorithms were applied to 12 behavioral and non-behavioral discriminative profile attributes from a dataset of 982 profiles. The results showed that ID3 had the highest accuracy in the detection process, while k-medoids had the lowest.
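As a hedged sketch of the supervised setup, the snippet below trains scikit-learn's DecisionTreeClassifier with the entropy criterion, which approximates ID3. The feature names and toy data are invented for illustration; the paper's 12 attributes and 982-profile dataset are not reproduced.

```python
# ID3-style fake-profile classification sketch. scikit-learn's entropy
# criterion approximates ID3; features and labels below are invented.
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split

# columns: friends_count, posts_per_week, has_profile_photo, account_age_days
X = [[850, 12, 1, 1500], [40, 0, 0, 20], [300, 5, 1, 900], [15, 30, 0, 7]]
y = [0, 1, 0, 1]  # 1 = fake profile

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)
clf = DecisionTreeClassifier(criterion="entropy").fit(X_tr, y_tr)
print(clf.predict(X_te))
```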
{"title":"Identifying Fake Facebook Profiles Using Data Mining Techniques","authors":"Mohammed Basil Albayati, A. Altamimi","doi":"10.5614/itbj.ict.res.appl.2019.13.2.2","DOIUrl":"https://doi.org/10.5614/itbj.ict.res.appl.2019.13.2.2","url":null,"abstract":"Facebook, the popular online social network, has changed our lives. Users can create a customized profile to share information about themselves with others that have agreed to be their ‘friend’. However, this gigantic social network can be misused for carrying out malicious activities. Facebook faces the problem of fake accounts that enable scammers to violate users’ privacy by creating fake profiles to infiltrate personal social networks. Many techniques have been proposed to address this issue. Most of them are based on detecting fake profiles/accounts, considering the characteristics of the user profile. However, the limited profile data made publicly available by Facebook makes it ineligible for applying the existing approaches in fake profile identification. Therefore, this research utilized data mining techniques to detect fake profiles. A set of supervised (ID3 decision tree, k-NN, and SVM) and unsupervised (k-Means and k-medoids) algorithms were applied to 12 behavioral and non-behavioral discriminative profile attributes from a dataset of 982 profiles. The results showed that ID3 had the highest accuracy in the detection process while k-medoids had the lowest accuracy.","PeriodicalId":42785,"journal":{"name":"Journal of ICT Research and Applications","volume":" ","pages":""},"PeriodicalIF":0.6,"publicationDate":"2019-09-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46460185","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Using Customer Emotional Experience from E-Commerce for Generating Natural Language Evaluation and Advice Reports on Game Products
Pub Date: 2019-09-30 | DOI: 10.5614/itbj.ict.res.appl.2019.13.2.5
Hamdan Gani, Kiyoshi Tomimatsu
Investigating customer emotional experience using natural language processing (NLP) is one way to obtain product insight. However, it relies on interpreting and representing the results understandably. Currently, the results of NLP are presented in numerical or graphical form, and human experts still need to provide an explanation in natural language. It is desirable to develop a computational system that can automatically transform NLP results into a descriptive report in natural language. The goal of this study was to develop a computational linguistic description method to generate evaluation and advice reports on game products. This study used NLP to extract emotional experiences (emotions and sentiments) from e-commerce customer reviews in the form of numerical information. This paper also presents a linguistic description method to generate evaluation and advice reports, adopting the Granular Linguistic Model of a Phenomenon (GLMP) method for analyzing the results of the NLP method. The test results showed that the proposed method could successfully generate evaluation and advice reports assessing the quality of five game products based on the emotional experience of customers.
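A toy sketch of the GLMP-style linguistic description step is shown below: a numeric sentiment score is mapped onto a graded linguistic label and rendered as a sentence. The thresholds and wording are assumptions, not the paper's granules.

```python
# Linguistic description sketch: numeric score -> graded label -> sentence.
# Thresholds and phrasing are assumed for illustration.
def describe(game, score):
    """Turn a sentiment score in [-1, 1] into a one-sentence evaluation."""
    if score > 0.5:
        label = "very positive"
    elif score > 0.0:
        label = "mildly positive"
    elif score > -0.5:
        label = "mildly negative"
    else:
        label = "very negative"
    return f"Customer sentiment toward {game} is {label} ({score:+.2f})."

print(describe("Game A", 0.72))
```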
{"title":"Using Customer Emotional Experience from E-Commerce for Generating Natural Language Evaluation and Advice Reports on Game Products","authors":"Hamdan Gani, Kiyoshi Tomimatsu","doi":"10.5614/itbj.ict.res.appl.2019.13.2.5","DOIUrl":"https://doi.org/10.5614/itbj.ict.res.appl.2019.13.2.5","url":null,"abstract":"Investigating customer emotional experience using natural language processing (NLP) is an example of a way to obtain product insight. However, it relies on interpreting and representing the results understandably. Currently, the results of NLP are presented in numerical or graphical form, and human experts still need to provide an explanation in natural language. It is desirable to develop a computational system that can automatically transform NLP results into a descriptive report in natural language. The goal of this study was to develop a computational linguistic description method to generate evaluation and advice reports on game products. This study used NLP to extract emotional experiences (emotions and sentiments) from e-commerce customer reviews in the form of numerical information. This paper also presents a linguistic description method to generate evaluation and advice reports, adopting the Granular Linguistic Model of a Phenomenon (GLMP) method for analyzing the results of the NLP method. The test result showed that the proposed method could successfully generate evaluation and advice reports assessing the quality of 5 game products based on the emotional experience of customers.","PeriodicalId":42785,"journal":{"name":"Journal of ICT Research and Applications","volume":" ","pages":""},"PeriodicalIF":0.6,"publicationDate":"2019-09-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"45983497","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Identification of Image Edge Using Quantum Canny Edge Detection Algorithm
Pub Date: 2019-09-30 | DOI: 10.5614/itbj.ict.res.appl.2019.13.2.4
D. Sundani, S. Widiyanto, Y. Karyanti, D. T. Wardani
Identification of image edges using edge detection is done to obtain images that are sharp and clear. The selection of the edge detection algorithm affects the result. The Canny operator has an advantage over other edge detection operators because of its ability to detect not only strong edges but also weak edges. Until now, Canny edge detection has been performed using classical computing, where data are expressed in bits, 0 or 1. This paper proposes the identification of image edges using a quantum Canny edge detection algorithm, where data are expressed in the form of quantum bits (qubits). Besides 0 or 1, a qubit can also be 0 and 1 simultaneously, so many more possible values can be obtained. There are three stages in the proposed method: the input image stage, the preprocessing stage, and the quantum edge detection stage. Visually, the results show that quantum Canny edge detection can detect more edges than classic Canny edge detection, with an average increase of 4.05%.
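To illustrate the qubit idea, the sketch below encodes a normalized pixel intensity as a superposition cos(θ)|0⟩ + sin(θ)|1⟩. This is a generic amplitude encoding, not the paper's specific quantum circuit.

```python
# Qubit amplitude-encoding sketch: a classical bit is 0 or 1, while a
# qubit carries amplitudes for both. Generic encoding, not the paper's circuit.
import math

def encode_pixel(intensity):
    """Amplitudes (a0, a1) of a qubit encoding a pixel intensity in [0, 1]."""
    theta = intensity * math.pi / 2
    return math.cos(theta), math.sin(theta)

a0, a1 = encode_pixel(0.25)
print(a0**2 + a1**2)  # measurement probabilities sum to 1.0
```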
{"title":"Identification of Image Edge Using Quantum Canny Edge Detection Algorithm","authors":"D. Sundani, S. Widiyanto, Y. Karyanti, D. T. Wardani","doi":"10.5614/itbj.ict.res.appl.2019.13.2.4","DOIUrl":"https://doi.org/10.5614/itbj.ict.res.appl.2019.13.2.4","url":null,"abstract":"Identification of image edges using edge detection is done to obtain images that are sharp and clear. The selection of the edge detection algorithm will affect the result. Canny operators have an advantage compared to other edge detection operators because of their ability to detect not only strong edges but also weak edges. Until now, Canny edge detection has been done using classical computing where data are expressed in bits, 0 or 1. This paper proposes the identification of image edges using a quantum Canny edge detection algorithm, where data are expressed in the form of quantum bits (qubits). Besides 0 or 1, a value can also be 0 and 1 simultaneously so there will be many more possible values that can be obtained. There are three stages in the proposed method, namely the input image stage, the preprocessing stage, and the quantum edge detection stage. Visually, the results show that quantum Canny edge detection can detect more edges compared to classic Canny edge detection, with an average increase of 4.05% .","PeriodicalId":42785,"journal":{"name":"Journal of ICT Research and Applications","volume":" ","pages":""},"PeriodicalIF":0.6,"publicationDate":"2019-09-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49559223","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Big Data Assisted CRAN Enabled 5G SON Architecture
Pub Date: 2019-09-30 | DOI: 10.5614/itbj.ict.res.appl.2019.13.2.1
K. Khurshid, A. Khan, Haroon Siddiqui, Imran Rashid, M. Hadi
The recent development of Big Data, the Internet of Things (IoT), and 5G network technology offers a plethora of opportunities to the IT industry and mobile network operators. 5G cellular technology promises connectivity for massive numbers of IoT devices while meeting low-latency data transmission requirements. A deficiency of the current 4G networks is that data from IoT devices and mobile nodes are merely passed on to the cloud; the communication infrastructure plays no part in data analysis. Instead of only passing data on to the cloud, the system could also contribute to data analysis and decision-making. In this work, a Big Data driven, self-optimized 5G network design is proposed that builds on the emerging technologies CRAN, NFV, and SDN. Some technical impediments in 5G network optimization are also discussed. A case study is presented to demonstrate how Big Data assists in solving the resource allocation problem.
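As a toy illustration of Big-Data-assisted resource allocation, the sketch below splits a fixed pool of resource blocks across cells in proportion to predicted traffic load. The cell names, loads, and pool size are invented; the paper's case study is not reproduced.

```python
# Load-proportional resource allocation sketch. All inputs are invented
# illustration values, not figures from the paper's case study.
def allocate(pool, predicted_load):
    """Split `pool` resource blocks across cells proportionally to load."""
    total = sum(predicted_load.values())
    return {cell: round(pool * load / total) for cell, load in predicted_load.items()}

print(allocate(100, {"cell_A": 40, "cell_B": 25, "cell_C": 35}))
```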
{"title":"Big Data Assisted CRAN Enabled 5G SON Architecture","authors":"K. Khurshid, A. Khan, Haroon Siddiqui, Imran Rashid, M. Hadi","doi":"10.5614/itbj.ict.res.appl.2019.13.2.1","DOIUrl":"https://doi.org/10.5614/itbj.ict.res.appl.2019.13.2.1","url":null,"abstract":"The recent development of Big Data, Internet of Things (IoT) and 5G network technology offers a plethora of opportunities to the IT industry and mobile network operators. 5G cellular technology promises to offer connectivity to massive numbers of IoT devices while meeting low-latency data transmission requirements. A deficiency of the current 4G networks is that the data from IoT devices and mobile nodes are merely passed on to the cloud and the communication infrastructure does not play a part in data analysis. Instead of only passing data on to the cloud, the system could also contribute to data analysis and decision-making. In this work, a Big Data driven self-optimized 5G network design is proposed using the knowledge of emerging technologies CRAN, NVF and SDN. Also, some technical impediments in 5G network optimization are discussed. A case study is presented to demonstrate the assistance of Big Data in solving the resource allocation problem.","PeriodicalId":42785,"journal":{"name":"Journal of ICT Research and Applications","volume":" ","pages":""},"PeriodicalIF":0.6,"publicationDate":"2019-09-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"47161189","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Tunnel Settlement Prediction by Transfer Learning
Pub Date: 2019-09-30 | DOI: 10.5614/itbj.ict.res.appl.2019.13.2.3
Qicai Zhou, Hehong Shen, Jiong Zhao, Xiaolei Xiong
Tunnel settlement has a significant impact on property security and personal safety. Accurate tunnel-settlement predictions can quickly reveal problems that may be addressed to prevent accidents. However, each acquisition point in the tunnel is only monitored once daily for around two months. This paper presents a new method for predicting tunnel settlement via transfer learning. First, a source model is constructed and trained by deep learning; then parameter transfer is used to transfer the knowledge gained from the source model to the target model, which has a small dataset. Based on this, the training complexity and training time of the target model can be reduced. The proposed method was tested on tunnel-settlement prediction in the tunnel of Shanghai metro line 13 at Jinshajiang Road and proven to be effective. Artificial neural networks and support vector machines were also tested for comparison. The results showed that the transfer-learning method provided the most accurate tunnel-settlement prediction.
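A minimal PyTorch sketch of the parameter-transfer step is given below: weights from a source model are copied into an identically shaped target model, the shared layers are frozen, and only the head is left trainable for fine-tuning on the small settlement dataset. The layer sizes are assumed, not taken from the paper.

```python
# Parameter-transfer sketch: copy source weights into the target model,
# freeze shared layers, fine-tune only the head. Layer sizes are assumed.
import torch.nn as nn

def make_model():
    return nn.Sequential(nn.Linear(10, 64), nn.ReLU(),
                         nn.Linear(64, 64), nn.ReLU(),
                         nn.Linear(64, 1))

source = make_model()   # assume this was trained on the abundant source data
target = make_model()
target.load_state_dict(source.state_dict())  # parameter transfer

for layer in list(target.children())[:-1]:   # freeze all but the output head
    for p in layer.parameters():
        p.requires_grad = False
```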
{"title":"Tunnel Settlement Prediction by Transfer Learning","authors":"Qicai Zhou, Hehong Shen, Jiong Zhao, Xiaolei Xiong","doi":"10.5614/itbj.ict.res.appl.2019.13.2.3","DOIUrl":"https://doi.org/10.5614/itbj.ict.res.appl.2019.13.2.3","url":null,"abstract":"Tunnel settlement has a significant impact on property security and personal safety. Accurate tunnel-settlement predictions can quickly reveal problems that may be addressed to prevent accidents. However, each acquisition point in the tunnel is only monitored once daily for around two months. This paper presents a new method for predicting tunnel settlement via transfer learning. First, a source model is constructed and trained by deep learning, then parameter transfer is used to transfer the knowledge gained from the source model to the target model, which has a small dataset. Based on this, the training complexity and training time of the target model can be reduced. The proposed method was tested to predict tunnel settlement in the tunnel of Shanghai metro line 13 at Jinshajiang Road and proven to be effective. Artificial neural network and support vector machines were also tested for comparison. The results showed that the transfer-learning method provided the most accurate tunnel-settlement prediction.","PeriodicalId":42785,"journal":{"name":"Journal of ICT Research and Applications","volume":"1 1","pages":""},"PeriodicalIF":0.6,"publicationDate":"2019-09-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"43158124","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Individual Expert Selection and Ranking of Scientific Articles Using Document Length
Pub Date: 2019-04-30 | DOI: 10.5614/ITBJ.ICT.RES.APPL.2019.13.1.3
F. Saputra, Taufik Djatna, L. T. Handoko
Individual expert selection and ranking is a challenging research topic that has received a lot of attention in recent years because of its importance for identifying experts in particular domains and for research fund allocation and management. In this work, scientific articles were used as the most common source for ranking expertise in particular domains. Previous studies considered only title and abstract content using language modeling. This study used the whole content of scientific documents obtained from Aminer citation data. A modified weighted language model (MWLM) is proposed that combines document length and number of citations as the prior document probability to improve precision. Also, the author's dominance in a single document is computed using the Learning-to-Rank (L2R) method. The evaluation results using p@n, MAP, MRR, r-prec, and bpref showed a precision enhancement. MWLM improved on the weighted language model (WLM) by p@n (4%), MAP (22.5%), and bpref (1.7%). MWLM also improved on the precision of a model that used author dominance by MAP (4.3%), r-prec (8.2%), and bpref (2.1%).
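The document-prior idea can be sketched as below: a language-model relevance score is weighted by a prior built from document length and citation count. The log-based combination is an assumption for illustration, not the paper's exact formula.

```python
# MWLM-style scoring sketch: relevance score weighted by a document prior
# built from length and citations. The exact combination is assumed.
import math

def document_prior(length, citations):
    """Prior proxy favoring longer, more-cited documents."""
    return math.log(1 + length) * math.log(2 + citations)

def mwlm_score(lm_score, length, citations):
    """Language-model relevance score scaled by the document prior."""
    return lm_score * document_prior(length, citations)

print(mwlm_score(0.08, length=4200, citations=15))
```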
{"title":"Individual Expert Selection and Ranking of Scientific Articles Using Document Length","authors":"F. Saputra, Taufik Djatna, L. T. Handoko","doi":"10.5614/ITBJ.ICT.RES.APPL.2019.13.1.3","DOIUrl":"https://doi.org/10.5614/ITBJ.ICT.RES.APPL.2019.13.1.3","url":null,"abstract":"Individual expert selection and ranking is a challenging research topic that has received a lot attention in recent years because of its importance related to referencing experts in particular domains and research fund allocation and management. In this work, scientific articles were used as the most common source for ranking expertise in particular domains. Previous studies only considered title and abstract content using language modeling. This study used the whole content of scientific documents obtained from Aminer citation data. The modified weighted language model (MWLM) is proposed that combines document length and number of citations as prior document probability to improve precision. Also, the author’s dominance in a single document is computed using the Learning-to-Rank (L2R) method. The evaluation results using p@n, MAP, MRR, r-prec, and bpref showed a precision enhancement. MWLM improved the weighted language model (WLM) by p@n (4%), MAP (22.5%), and bpref (1.7%). MWLM also improved the precision of a model that used author dominance by MAP (4.3%), r-prec (8.2%), and bpref (2.1%).","PeriodicalId":42785,"journal":{"name":"Journal of ICT Research and Applications","volume":"1 1","pages":""},"PeriodicalIF":0.6,"publicationDate":"2019-04-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"70736726","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}