Title: An Energy-Balanced Routing Algorithm for SDN-Based Wireless Sensor Networks
Authors: Huy Lê Đức, Binh Le Huu, Công Đỗ Thành, Giang Nguyễn Đỗ Hoàng
Pub Date: 2023-11-28 | DOI: 10.32913/mic-ict-research-vn.v2023.n2.1240
Abstract: In wireless sensor networks (WSNs), using energy efficiently to prolong the operating time of the sensor nodes is essential. In this paper, we propose a routing algorithm that takes the energy consumption of the sensor nodes into account. The goal of the proposed algorithm is to balance energy consumption and to minimize the number of nodes that must consume large amounts of energy, thereby extending their operating time. Our approach is to construct a weight function for the wireless links that incorporates the residual energy of the nodes, and then to use a centralized routing mechanism based on the software-defined networking (SDN) architecture to find the best-weighted route for data transmission. Simulation results in OMNeT++ show that the proposed algorithm increases the operating time of the nodes and improves network throughput compared with popular existing routing algorithms.
{"title":"Một thuật toán định tuyến cân bằng năng lượng trong mạng cảm biến không dây dựa trên SDN","authors":"Huy Lê Đức, Binh Le Huu, Công Đỗ Thành, Giang Nguyễn Đỗ Hoàng","doi":"10.32913/mic-ict-research-vn.v2023.n2.1240","DOIUrl":"https://doi.org/10.32913/mic-ict-research-vn.v2023.n2.1240","url":null,"abstract":"Trong mạng cảm biến không dây (WSN), việc sử dụng năng lượng sao cho hiệu quả để kéo dài thời gian hoạt động của các nút cảm biến là điều cần thiết. Trong bài báo này, chúng tôi đề xuất một thuật toán định tuyến có xét đến mức tiêu thụ năng lượng giữa các nút cảm biến. Mục tiêu của thuật toán được đề xuất là cân bằng mức tiêu thụ năng lượng, giảm thiểu số nút phải tiêu thụ nhiều năng lượng nhằm tăng thời gian hoạt động của chúng. Phương pháp của chúng tôi là xây dựng một hàm trọng số cho các kết nối không dây có chứa các tham số về năng lượng còn lại tại các nút. Sau đó, sử dụng cơ chế định tuyến tập trung dựa trên kiến trúc mạng điều khiển bằng phần mềm (SDN) để tìm lộ trình có trọng số tốt nhất để truyền dữ liệu. Kết quả mô phỏng trên OMNeT++ cho thấy rằng, thuật toán được đề xuất cho phép tăng thời gian hoạt động của các nút, tăng thông lượng mạng so với các thuật toán định tuyến phổ biến hiện hành.","PeriodicalId":432355,"journal":{"name":"Research and Development on Information and Communication Technology","volume":"49 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-11-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139222775","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Title: A Review of Cyber Security Risk Assessment for Web Systems During Their Deployment and Operation
Authors: Manh-Tuan Nguyen, Thi-Huong-Giang Vu
Pub Date: 2023-03-18 | DOI: 10.32913/mic-ict-research.v2023.n1.1089
Abstract: This paper presents the state of the art in security risk assessment of web systems. The process of assessing security risks and the process of developing and operating information systems in general, and web systems in particular, are described step by step, showing how risk assessment is performed during the deployment and operation of web systems. Based on this analysis, different methods for manual and automatic risk assessment are reviewed, focusing on methods using probability theory and Bayesian networks. The techniques developed for quantitative and qualitative assessment are presented and compared in terms of their objectives, scopes, and results to identify their advantages and limitations. Finally, approaches dedicated to assessing the risks of web systems are presented.
{"title":"A review of cyber security risk assessment for web systems during its deployment and operation","authors":"Manh-Tuan Nguyen, Thi-Huong-Giang Vu","doi":"10.32913/mic-ict-research.v2023.n1.1089","DOIUrl":"https://doi.org/10.32913/mic-ict-research.v2023.n1.1089","url":null,"abstract":"This paper presents the state of the arts in security risk assessment of web systems. The process of assessing security risks and the process of developing and operating information systems in general, web systems in particular, are depicted step by step, showing how the risk assessment is performed during the deployment and the operation of web systems. Based on this analysis, different methods related to the manual and automatic risk assessment are reviewed, focusing on the methods using probability theory and Bayesian networks. The techniques developed for quantitative and qualitative assessment are presented and compared in terms of their objectives, scopes, and results to pick out advantages and limits. Finally, the approaches dedicated to assessing the risks of web systems are presented.","PeriodicalId":432355,"journal":{"name":"Research and Development on Information and Communication Technology","volume":"33 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-03-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122318775","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Title: Deep Learning of Image Representations with a Convolutional Neural Network Autoencoder for Image Retrieval with Relevance Feedback
Authors: Quynh Dao Thi Thuy
Pub Date: 2023-03-18 | DOI: 10.32913/mic-ict-research.v2023.n1.1063
Abstract: Image retrieval with traditional relevance feedback encounters two problems: (1) the limited representational ability of handcrafted features, and (2) inefficiency with high-dimensional data such as image data. In this paper, we propose a framework based on a very deep convolutional neural network autoencoder for image retrieval, called AIR (Autoencoders for Image Retrieval). Our proposed framework learns feature vectors directly from the raw image in an unsupervised manner. In addition, it uses a hybrid of unsupervised and supervised learning to improve retrieval performance. The experimental results show that our method gives better results than some existing methods on the CIFAR-100 image set, which consists of 60,000 images.
Title: Location Fusion and Data Augmentation for Thoracic Abnormalities Detection in Chest X-Ray Images
Authors: Nguyen Thi Van Anh, Nguyen Duc Dung, Nguyen Thi Phuong Thuy
Pub Date: 2023-03-18 | DOI: 10.32913/mic-ict-research.v2023.n1.1172
Abstract: The application of deep learning in medical image diagnosis has been widely studied recently. Unlike general objects, thoracic abnormalities in chest X-ray radiographs are much harder for domain experts to label consistently. The problem's difficulty and the resulting inconsistency in data labeling degrade the performance of otherwise robust deep learning models. This paper presents two methods to improve the accuracy of thoracic abnormality detection in chest X-ray images. The first method fuses the locations of the same abnormality marked differently by radiologists. The second method applies mosaic data augmentation in the training process to enrich the training data. Experiments on the VinDr-CXR chest X-ray data show that combining the two methods improves predictive performance by up to 8% in F1-score and 9% in mean average precision (MAP).
{"title":"Location Fusion and Data Augmentation for Thoracic Abnormalites Detection in Chest X-Ray Images","authors":"Nguyen Thi Van Anh, Nguyen Duc Dung, Nguyen Thi Phuong Thuy","doi":"10.32913/mic-ict-research.v2023.n1.1172","DOIUrl":"https://doi.org/10.32913/mic-ict-research.v2023.n1.1172","url":null,"abstract":"The application of deep learning in medical image diagnosis has been widely studied recently. Unlike general objects, thoracic abnormalities in chest X-ray radiographs are much harder to label consistently by domain experts. Theproblem’s difficulty and inconsistency in data labeling lead to the downgraded performance of the robust deep learning models. This paper presents two methods to improve the accuracy of thoracic abnormalities detection in chest X-ray images. The first method is to fuse the locations of the same abnormality marked differently by radiologists. The second method is applying mosaic data augmentation in the training process to enrich the training data. Experiments on the VinDr-CXR chest X-ray data show that combining the two methods helps improve the predictive performance by up to 8% for F1-score and 9% for the mean average precision (MAP) score.","PeriodicalId":432355,"journal":{"name":"Research and Development on Information and Communication Technology","volume":"44 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-03-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114677007","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Title: Surveying Some Metaheuristic Algorithms for Solving the Maximum Clique Graph Problem
Authors: PhanTan Quoc
Pub Date: 2023-03-18 | DOI: 10.32913/mic-ict-research.v2023.n1.1197
Abstract: The maximum clique problem is a combinatorial optimization problem with many applications in science and engineering, such as social networks, telecommunication networks, and bioinformatics. The problem is NP-hard. There are many approaches to solving it, including exact algorithms, heuristic algorithms, and metaheuristic algorithms. In this paper, we survey approaches to the maximum clique problem based on metaheuristic algorithms. We evaluate the quality of this research using the DIMACS benchmark data sets. Our survey can serve as useful information for further research on the maximum clique problem.
{"title":"Surveying Some Metaheuristic Algorithms For Solving Maximum Clique Graph Problem","authors":"PhanTan Quoc","doi":"10.32913/mic-ict-research.v2023.n1.1197","DOIUrl":"https://doi.org/10.32913/mic-ict-research.v2023.n1.1197","url":null,"abstract":"Maximum clique graph problem is a combinatorial optimization problem that has many applications in science and engineering such as social networks, telecommunication networks, bioinformatics, etc. Maximum clique is a problem of class NP-hard. There are many approaches to solving the maximum clique graph problem such as algorithms to find exact solutions, heuristic algorithms, metaheuristic algorithms, etc. In this paper, we survey the approach to solving the maximum clique graph problem in the direction of metaheuristic algorithms. We evaluate the quality of these research based on the experimental data system DIMACS. Our survey can be useful information for further research on maximum clique graph problems.","PeriodicalId":432355,"journal":{"name":"Research and Development on Information and Communication Technology","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-03-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127048899","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Title: Estimation of the Number of MDS Matrices, Recursive MDS Matrices and Symmetric Recursive MDS Matrices from Reed-Solomon Codes
Authors: T. Luong
Pub Date: 2023-03-18 | DOI: 10.32913/mic-ict-research.v2023.n1.1105
Abstract: The diffusion layer of SPN block ciphers is usually built from MDS (Maximum Distance Separable) matrices, that is, matrices of maximum distance separable codes (MDS codes). MDS codes have long been studied in error-correcting code theory and have applications not only in coding theory but also in the design of block ciphers and hash functions. Because of this important role, there have been many studies on methods of constructing MDS matrices. In particular, recursive MDS matrices and symmetric recursive MDS matrices have especially important applications because they are very efficient to implement. In this paper, we estimate the number of MDS matrices, recursive MDS matrices, and symmetric recursive MDS matrices built from Reed-Solomon codes. This result is meaningful in determining the efficiency of building matrices from Reed-Solomon codes. The method can then be applied to find many MDS matrices, including secure and efficiently implementable symmetric recursive MDS matrices, for use in current block ciphers. Furthermore, recursive MDS matrices can be implemented efficiently using Linear Feedback Shift Registers (LFSRs), making them well suited to lightweight cryptographic algorithms and resource-constrained applications.
{"title":"Estimation for the number of MDS Matrices, Recursive MDS Matrices and Symmetric Recursive MDS Matrices from the Reed-Solomon Codes","authors":"T. Luong","doi":"10.32913/mic-ict-research.v2023.n1.1105","DOIUrl":"https://doi.org/10.32913/mic-ict-research.v2023.n1.1105","url":null,"abstract":"The diffusion layer of the SPN block ciphers is usually built on the basis of the MDS (Maximum Distance Separable) matrices which is the matrix of the maximum distance separable code (MDS code). MDS codes have long been studied in error correcting code theory and have applications not only in coding theory but also in the design of block ciphers and hash functions. Thanks to that important role, there have been many studies on methods of building MDS matrices. In particular, the recursive MDS matrices and the symmetric recursive MDS matrices have particularly important applications because they are very efficient for execution. In this paper, we will give an estimate of the number of MDS matrices, recursive MDS matrices and symmetric recursive MDS matrices built from Reed-Solomon codes. This result is meaningful in determining the efficiency from this method of building matrices based on the Reed-Solomon codes. From there, this method can be applied to find out many MDS matrices, secure and efficient symmetric recursive MDS matrices for execution to apply in current block ciphers. Furthermore, recursive MDS matrices can be efficiently implemented using Linear Feedback Shift Registers (LFSR), making them well suited for lightweight cryptographic algorithms, so suitable for limited resources application.","PeriodicalId":432355,"journal":{"name":"Research and Development on Information and Communication Technology","volume":"45 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-03-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128929429","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Title: Lambda Functions and Approximate Generalized Positive Boolean Dependencies
Authors: T. V. Nguyen, Huy Nguyen Xuan
Pub Date: 2022-10-11 | DOI: 10.32913/mic-ict-research.v2022.n2.1099
Abstract: The main purpose of this paper is to propose a lambda function and its application to the concept of a measure for comparing tuples of relations. We extend the generalized positive Boolean dependency to obtain a new type of dependency called the approximate generalized positive Boolean dependency. The results can be applied in constructing more complex databases, especially in allowing extended search capabilities over real-world data.
{"title":"Lambda Functions and Approximate Generalized Positive Boolean Dependencies","authors":"T. V. Nguyen, Huy Nguyen Xuan","doi":"10.32913/mic-ict-research.v2022.n2.1099","DOIUrl":"https://doi.org/10.32913/mic-ict-research.v2022.n2.1099","url":null,"abstract":"The main purpose of the paper is to proposea lambda function and its apply to the concept of measurein comparing tuples of relations. Extend the generalizedpositive Boolean dependency to obtain a new type ofdependency called approximate generalized positive Booleandependency.The results can be applied in constructing more complicateddatabases, especially allowing extended search capabilitiesfor the real-world data.","PeriodicalId":432355,"journal":{"name":"Research and Development on Information and Communication Technology","volume":"2018 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-10-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126604637","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Title: An Experimental Study of Fast Greedy Algorithms for Fair Allocation Problems
Authors: T. Nguyen, Le Dang Nguyen
Pub Date: 2022-09-30 | DOI: 10.32913/mic-ict-research.v2022.n2.1032
Abstract: This paper is concerned with two salient allocation problems in the fair division of indivisible goods, aiming at maximizing egalitarian and Nash product social welfare. These problems are computationally NP-hard, meaning that polynomial-time algorithms are impossible unless P = NP. Approximation algorithms, which return a near-optimal solution with a theoretical guarantee, have been widely used to tackle these problems. However, most of them have high computational complexity or are not easy to implement. It is therefore of great interest to explore fast greedy methods that can quickly produce a good solution. This paper presents an empirical study of the performance of several such methods. Interestingly, the obtained results show that fair allocation problems can be practically approximated by greedy algorithms.
Keywords: Fair allocation, exact algorithm, greedy algorithm, mixed-integer linear program, NP-hard.
I. INTRODUCTION. In this paper, we study the fair allocation problem, which has attracted growing interest during the last decades, with a wide range of real-world applications [1]. In short, this is a combinatorial optimization problem which asks to allocate m discrete items amongst a set of n agents (or players) so as to meet a certain notion of fairness. It is assumed that every item is "indivisible" and "non-sharable", that is, i) it cannot be broken into pieces before being allocated to agents, and ii) it cannot be shared by two or more agents. For example, laptops and cell phones are indivisible items which agents might not want to share with others. An allocation of items to agents is simply a partition of the whole set of items into n disjoint subsets. There are up to n^m such partitions, making the solution space large enough that an exhaustive search for an optimal solution is impossible. It now remains to define what a fair allocation is, a concept that is of independent interest in the field of Economic and Social Choice Theory [2, 3]. In general, there are many different ways of defining fairness, depending on particular applications. The most common way is either to use a so-called Collective Utility Function (CUF), which is a function for aggregating individual agents' utilities in a fair manner, or to follow an orthogonal method relying on determining the fair share of agents. Since we focus on the first method in this paper, we refer the reader to the paper [4] and the references therein for more details of the second method. Suppose that every agent evaluates the value of items through a utility function, which maps each subset of items to a numerical value representing the utility of the agent for that subset.
{"title":"An Experimental Study of Fast Greedy Algorithms for Fair Allocation Problems","authors":"T. Nguyen, Le Dang Nguyen","doi":"10.32913/mic-ict-research.v2022.n2.1032","DOIUrl":"https://doi.org/10.32913/mic-ict-research.v2022.n2.1032","url":null,"abstract":"This paper is concerned with two salient allocationproblems in fair division of indivisible goods, aiming atmaximizing egalitarian and Nash product social welfare.These problems are computationally NP-hard, meaning thatachieving polynomial time algorithms is impossible, unlessP = NP. Approximation algorithms, which return near-optimalsolution with a theoretical guarantee, have been widely usedfor tackling the problems. However, most of them are often ofhigh computational complexity or not easy to implement. It istherefore of great interest to explore fast greedy methods thatcan quickly produce a good solution. This paper presents anempirical study of the performance of several such methods.Interestingly, the obtained results show that fair allocationproblems can be practically approximated by greedy algorithms.Keywords: Fair allocation, exact algorithm, greedy algorithm,mixed-integer linear program, NP-hard.I. INTRODUCTIONIn this paper, we study the fair allocation problem, whichhas shown its growing interest during last decades, with awide range of real-world applications [1]. In short, this is acombinatorial optimization problem which asks to allocate???? discrete items amongst a set of ???? agents (or players)so as to meet a certain notion of fairness. It is assumedthat every item is “indivisible” and “non-sharable”, thatis, i) it cannot be broken in pieces before allocating toagents, and ii) it cannot be shared by two or more agents.For example, laptops and cell-phones are indivisible itemswhich agents might not want to share with others. Anallocation of items to agents is simply a partition of thewhole set of items into ???? disjoint subsets. There are up to???????? such partitions, making the solution space large enoughso that an exhaustive search for an optimal solution isimpossible.It now remains to define what a fair allocation is, aconcept that is of independent interest in the field ofEconomic and Social Choice Theory [2, 3]. In general, thereare many different ways of defining fairness, depending onparticular applications. The most common way is to eitheruse a so-called Collective Utility Function (CUF), which isa function for aggregating individual agents’ utilities in afair manner, or to follow an orthogonal method relying ondetermining the fair share of agents. Since we are focusingon the first method in this paper, we refer the reader tothe paper [4] and the references therein for more details ofthe second method. Suppose that every agent evaluates thevalue of items through a utility function, which maps eachsubset of items to a numerical value representing the utilityof the agent for the subset. 
Then, one can define a maxmin fair allocation to be the one that maximizes the \u0000 \u0000 \u0000","PeriodicalId":432355,"journal":{"name":"Research and Development on Information and Communication Technology","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-09-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122148898","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
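A minimal example of the fast greedy methods studied: for egalitarian welfare with additive utilities, a natural greedy rule hands each item (most valuable first) to a currently poorest agent. The item ordering and tie-breaking below are one common choice, not necessarily the exact variants benchmarked in the paper:

```python
def greedy_egalitarian(values):
    # values[i][j]: additive utility of agent i for item j. Greedily give
    # each item to a currently-poorest agent, aiming to maximize the
    # minimum utility (egalitarian social welfare).
    n, m = len(values), len(values[0])
    utility = [0.0] * n
    bundles = [[] for _ in range(n)]
    # Consider items in decreasing order of their maximum value to anyone,
    # so the big items get placed while utilities are still balanced.
    items = sorted(range(m), key=lambda j: max(v[j] for v in values),
                   reverse=True)
    for j in items:
        # Among the poorest agents, prefer the one that values item j most.
        i = min(range(n), key=lambda a: (utility[a], -values[a][j]))
        bundles[i].append(j)
        utility[i] += values[i][j]
    return bundles, min(utility)
```

For Nash product welfare, the analogous greedy rule gives each item to the agent whose bundle's product of utilities increases the most, which requires only a one-line change to the selection key.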
Title: Predicting Long Non-coding RNA-disease Associations using Multiple Features and Deep Learning
Pub Date: 2022-09-30 | DOI: 10.32913/mic-ict-research.v2022.n2.1069
Abstract: Various long non-coding RNAs (lncRNAs) have been shown to play crucial roles in different biological processes in the human body, including cell cycle control, transcription, translation, epigenetic regulation, splicing, differentiation, and immune response. Discovering lncRNA-disease associations promotes the understanding of complex human diseases at the molecular level and supports the diagnosis, treatment, and prevention of complex diseases. Discovering and verifying lncRNA-disease associations by biological experiments is costly, labor-intensive, and time-consuming. Therefore, it is crucial to develop computational methods that predict lncRNA-disease associations to save time and resources. In this paper, we propose a new method to predict lncRNA-disease associations using multiple features and deep learning. Our method uses a weighted k-nearest known neighbors algorithm as a preprocessing step to mitigate the data sparsity problem. It then combines linear and non-linear features, extracted by singular value decomposition and deep learning techniques respectively, to obtain better prediction performance. Our proposed method achieves decisive performance, with best AUC and AUPR values of 0.9702 and 0.8814, respectively, in LOOCV experiments. It is superior to the state-of-the-art SDLDA and NCPLDA methods in both the AUC and AUPR evaluation metrics, and can be considered a powerful tool for predicting lncRNA-disease associations.
{"title":"Predicting Long Non-coding RNA-disease Associations using Multiple Features and Deep Learning","authors":"","doi":"10.32913/mic-ict-research.v2022.n2.1069","DOIUrl":"https://doi.org/10.32913/mic-ict-research.v2022.n2.1069","url":null,"abstract":"Various long non-coding RNAs have been shownto play crucial roles in different biological processes includingcell cycle control, transcription, translation, epigenetic regulation, splicing, differentiation, immune response and so forthin the human body. Discovering lncRNA-disease associationspromotes the awareness of human complex disease at molecular level and support the diagnosis, treatment and prevention of complex diseases. It is costly, laboratory and timeconsuming to discover and verify lncRNA-disease associationsby biological experiments. Therefore, it is crucial to develop acomputational method to predict lncRNA-disease associationsto save time and resources. In this paper, we proposed a newmethod to predict lncRNA-disease associations using multiplefeatures and deep learning. Our method uses a weighted????-nearest known neighbors algorithm as a pre-processingstep to eliminate the impact of sparsity data problem. Andit combines the linear and non-linear features extracted bysingular value decomposition and deep learning techniques,respectively, to obtain better prediction performance. Ourproposed method achieves a decisive performance with thebest AUC and AUPR values of 0.9702 and 0.8814, respectively,under LOOCV experiments. It is superior to other stateof-the-art SDLDA and NCPLDA methods in both AUC andAUPR evaluation metrics. It could be considered as a powerfultool to predict lncRNA-disease associations.","PeriodicalId":432355,"journal":{"name":"Research and Development on Information and Communication Technology","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-09-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127644846","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Title: A Novel Reversible Data Hiding Based on an Improved Pixel Value Ordering Method
Authors: T. Cao, Dinh Phong Pham, Hoang Hiep Pham
Pub Date: 2022-09-30 | DOI: 10.32913/mic-ict-research.v2022.n2.1064
Abstract: Reversible data hiding based on the preservation of the sorted pixel value ordering (PVO) has been researched and extended recently due to its high embedding capacity and good image quality. Secret data are always embedded in the largest or smallest pixel of a sub-block, so each sub-block embeds two bits; the more blocks an image has, the higher the embedding capacity. In this paper, to obtain more sub-blocks, we divide the image into sub-blocks of 3 pixels and embed 2 bits in the maximum value and 1 bit in the minimum value. Each sub-block of the proposed embedding scheme can therefore hide three bits instead of two as in the original scheme. The experimental results show that the proposed scheme has a significantly higher embedding capacity.
{"title":"A Novel Revesible Data Hiding based on Improved Pixel Value Ordering Method","authors":"T. Cao, Dinh Phong Pham, Hoang Hiep Pham","doi":"10.32913/mic-ict-research.v2022.n2.1064","DOIUrl":"https://doi.org/10.32913/mic-ict-research.v2022.n2.1064","url":null,"abstract":"Reversible data hiding based on the preservationof sorted pixel value ordering (PVO) technique has beenresearched and expanded recently due to its high embeddingcapacity and good image quality. secret data is always embedded in the largest or smallest pixel of the sub-block. Thus,each sub-block will embed two bits. The more blocks an imagehas, the higher the embedding potential. In this paper, to havemore sub-blocks, the paper divides the image into sub-blockswith 3 pixels and embeds 2 bits in the maximum value and1 bit in the minimum value. Therefore, each sub-block of theproposed embedding scheme is able to hide three bits insteadof the two bits as in the original scheme. The experimentalresults also show that the proposed scheme has a significantlyhigher embedding capacity.","PeriodicalId":432355,"journal":{"name":"Research and Development on Information and Communication Technology","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-09-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"120956790","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}