Abstract Detection of local text reuse is central to a variety of applications, including plagiarism detection, origin detection, and information flow analysis. This paper evaluates and compares the effectiveness of fingerprint selection algorithms for the source retrieval stage of local text reuse detection. In total, six algorithms are compared – Every p-th, 0 mod p, Winnowing, Hailstorm, Frequency-biased Winnowing (FBW), as well as the proposed modified version of FBW (MFBW). Most of the previously published studies in local text reuse detection are based on datasets containing artificially generated, long, or unobfuscated text reuse. In this study, to evaluate the performance of the algorithms, a new dataset has been built containing real text reuse cases from Bachelor's and Master's theses (written in English in the field of computer science), where about half of the cases involve less than 1 % of the document text and about two-thirds of the cases involve paraphrasing. In the performed experiments, the overall best detection quality is reached by Winnowing, 0 mod p, and MFBW. The proposed MFBW algorithm is a considerable improvement over FBW and becomes one of the best performing algorithms. The software developed for this study is freely available at the author's website http://www.cs.rtu.lv/jekabsons/.
{"title":"Evaluation of Fingerprint Selection Algorithms for Local Text Reuse Detection","authors":"Gints Jēkabsons","doi":"10.2478/acss-2020-0002","DOIUrl":"https://doi.org/10.2478/acss-2020-0002","url":null,"abstract":"Abstract Detection of local text reuse is central to a variety of applications, including plagiarism detection, origin detection, and information flow analysis. This paper evaluates and compares effectiveness of fingerprint selection algorithms for the source retrieval stage of local text reuse detection. In total, six algorithms are compared – Every p-th, 0 mod p, Winnowing, Hailstorm, Frequency-biased Winnowing (FBW), as well as the proposed modified version of FBW (MFBW). Most of the previously published studies in local text reuse detection are based on datasets having either artificially generated, long-sized, or unobfuscated text reuse. In this study, to evaluate performance of the algorithms, a new dataset has been built containing real text reuse cases from Bachelor and Master Theses (written in English in the field of computer science) where about half of the cases involve less than 1 % of document text while about two-thirds of the cases involve paraphrasing. In the performed experiments, the overall best detection quality is reached by Winnowing, 0 mod p, and MFBW. The proposed MFBW algorithm is a considerable improvement over FBW and becomes one of the best performing algorithms. The software developed for this study is freely available at the author’s website http://www.cs.rtu.lv/jekabsons/.","PeriodicalId":41960,"journal":{"name":"Applied Computer Systems","volume":"100 4-1 1","pages":"11 - 18"},"PeriodicalIF":1.0,"publicationDate":"2020-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"84455560","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Abstract Deep learning is a branch of machine learning that is widely used in many artificial intelligence applications, including signal processing and computer vision. The present research investigates the use of deep learning to solve the hand gesture recognition (HGR) problem and proposes two models using deep learning architectures. The first model comprises a convolutional neural network (CNN) followed by a recurrent neural network with long short-term memory (RNN-LSTM). This model achieves an accuracy of up to 82 % when fed the colour channel and 89 % when fed the depth channel. The second model comprises two parallel convolutional neural networks, joined by a merge layer, followed by a recurrent neural network with long short-term memory, fed by RGB-D data. The latter model achieves an accuracy of up to 93 %.
{"title":"Hand Gesture Recognition in Video Sequences Using Deep Convolutional and Recurrent Neural Networks","authors":"Falah Obaid, Amin Babadi, Ahmad Yoosofan","doi":"10.2478/acss-2020-0007","DOIUrl":"https://doi.org/10.2478/acss-2020-0007","url":null,"abstract":"Abstract Deep learning is a new branch of machine learning, which is widely used by researchers in a lot of artificial intelligence applications, including signal processing and computer vision. The present research investigates the use of deep learning to solve the hand gesture recognition (HGR) problem and proposes two models using deep learning architecture. The first model comprises a convolutional neural network (CNN) and a recurrent neural network with a long short-term memory (RNN-LSTM). The accuracy of model achieves up to 82 % when fed by colour channel, and 89 % when fed by depth channel. The second model comprises two parallel convolutional neural networks, which are merged by a merge layer, and a recurrent neural network with a long short-term memory fed by RGB-D. The accuracy of the latest model achieves up to 93 %.","PeriodicalId":41960,"journal":{"name":"Applied Computer Systems","volume":"89 1","pages":"57 - 61"},"PeriodicalIF":1.0,"publicationDate":"2020-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"85475352","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Abstract The foundational features of multi-agent systems are communication and interaction with other agents. To achieve these features, agents have to transfer messages in a predefined format and semantics. The communication among these agents takes place with the help of ACL (Agent Communication Language), a predefined language for communication among agents that has been standardised by FIPA (the Foundation for Intelligent Physical Agents). FIPA-ACL defines different performatives for communication among agents. These performatives are generic, which makes them computationally expensive to use in a specific domain like e-commerce, and they do not define the exact meaning of communication for such a domain. In the present research, we introduced new performatives specifically for the e-commerce domain. The designed performatives are based on FIPA-ACL so that they can still support communication within diverse agent platforms. The proposed performatives are helpful in modelling e-commerce negotiation protocol applications using the multi-agent systems paradigm for efficient communication. For exact semantic interpretation of the proposed performatives, we also performed formal modelling of these performatives using BNF. The primary objective of our research was to provide a negotiation facility to agents working in an e-commerce domain in a succinct way, reducing the number of negotiation messages, the time consumption, and the network overhead on the platform. We used an e-commerce bidding case study among agents to demonstrate the efficiency of our approach. The results showed a considerable reduction in the total time required for the bidding process.
{"title":"Efficient Performative Actions for E-Commerce Agents","authors":"Awais Qasim, Hafiz Muhammad Basharat Ameen, Zeeshan Aziz, A. Khalid","doi":"10.2478/acss-2020-0003","DOIUrl":"https://doi.org/10.2478/acss-2020-0003","url":null,"abstract":"Abstract The foundational features of multi-agent systems are communication and interaction with other agents. To achieve these features, agents have to transfer messages in the predefined format and semantics. The communication among these agents takes place with the help of ACL (Agent Communication Language). ACL is a predefined language for communication among agents that has been standardised by the FIPA (Foundation for Intelligent Physical Agent). FIPA-ACL defines different performatives for communication among the agents. These performatives are generic, and it becomes computationally expensive to use them for a specific domain like e-commerce. These performatives do not define the exact meaning of communication for any specific domain like e-commerce. In the present research, we introduced new performatives specifically for e-commerce domain. Our designed performatives are based on FIPA-ACL so that they can still support communication within diverse agent platforms. The proposed performatives are helpful in modelling e-commerce negotiation protocol applications using the paradigm of multi-agent systems for efficient communication. For exact semantic interpretation of the proposed performatives, we also performed formal modelling of these performatives using BNF. The primary objective of our research was to provide the negotiation facility to agents, working in an e-commerce domain, in a succinct way to reduce the number of negotiation messages, time consumption and network overhead on the platform. We used an e-commerce based bidding case study among agents to demonstrate the efficiency of our approach. The results showed that there was a lot of reduction in total time required for the bidding process.","PeriodicalId":41960,"journal":{"name":"Applied Computer Systems","volume":"27 1","pages":"19 - 32"},"PeriodicalIF":1.0,"publicationDate":"2020-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"79681918","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Abstract The evolution of the software development process and the increasing complexity of software systems call for developers to pay close attention to the evolution of CASE tools for software development. This, in turn, has triggered the appearance of a new wave (or new generation) of such CASE tools. The authors of the paper have been working on the development of the so-called two-hemisphere model-driven approach and its supporting BrainTool for the past 10 years. This paper is a step forward in the research on the ability to use the two-hemisphere model-driven approach for system modelling at the problem domain level and to generate UML diagrams and software code from the two-hemisphere model. The paper discusses the usage of an anemic domain model instead of a rich domain model and offers the main principle of transformation of the two-hemisphere model into the former.
{"title":"Anemic Domain Model vs Rich Domain Model to Improve the Two-Hemisphere Model-Driven Approach","authors":"O. Ņikiforova, Konstantins Gusarovs","doi":"10.2478/acss-2020-0006","DOIUrl":"https://doi.org/10.2478/acss-2020-0006","url":null,"abstract":"Abstract Evolution of software development process and increasing complexity of software systems calls for developers to pay great attention to the evolution of CASE tools for software development. This, in turn, causes explosion for appearance of a new wave (or new generation) of such CASE tools. The authors of the paper have been working on the development of the so-called two-hemisphere model-driven approach and its supporting BrainTool for the past 10 years. This paper is a step forward in the research on the ability to use the two-hemisphere model driven approach for system modelling at the problem domain level and to generate UML diagrams and software code from the two-hemisphere model. The paper discusses the usage of anemic domain model instead of rich domain model and offers the main principle of transformation of the two-hemisphere model into the first one.","PeriodicalId":41960,"journal":{"name":"Applied Computer Systems","volume":"10 1","pages":"51 - 56"},"PeriodicalIF":1.0,"publicationDate":"2020-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"87777743","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Abstract Uniform multi-dimensional designs of experiments for effective research in computer modelling are in high demand. Combinations of several one-dimensional quasi-random sequences with a uniform distribution are used to create designs with high homogeneity, but their optimal choice is a separate, non-trivial problem. It is believed that the best results are currently achieved using Sobol's LPτ-sequences, but this does not hold for all combinations of them. The authors propose creating effective uniform designs with guaranteed, acceptably low discrepancy using recursive Rd-sequences, which require no additional research to find successful combinations of vector sets distributed in the unit hypercube. The authors performed a comparative analysis of both approaches using indicators of centred and wrap-around discrepancy and graphical visualisation based on Voronoi diagrams. The conclusion was drawn that the proposed approach is of practical use in cases where the design requirements allow settling for a near-optimal, low-discrepancy variant obtained automatically without additional research.
{"title":"The Construction of Effective Multi-Dimensional Computer Designs of Experiments Based on a Quasi-Random Additive Recursive Rd-sequence","authors":"V. Halchenko, R. Trembovetska, V. Tychkov, A. Storchak","doi":"10.2478/acss-2020-0009","DOIUrl":"https://doi.org/10.2478/acss-2020-0009","url":null,"abstract":"Abstract Uniform multi-dimensional designs of experiments for effective research in computer modelling are highly demanded. The combinations of several one-dimensional quasi-random sequences with a uniform distribution are used to create designs with high homogeneity, but their optimal choice is a separate problem, the solution of which is not trivial. It is believed that now the best results are achieved using Sobol’s LPτ-sequences, but this is not observed in all cases of their combinations. The authors proposed the creation of effective uniform designs with guaranteed acceptably low discrepancy using recursive Rd-sequences and not requiring additional research to find successful combinations of vectors set distributed in a single hypercube. The authors performed a comparative analysis of both approaches using indicators of centred and wrap-around discrepancies, graphical visualization based on Voronoi diagrams. The conclusion was drawn on the practical use of the proposed approach in cases where the requirements for the designs allowed restricting to its not ideal but close to it variant with low discrepancy, which was obtained automatically without additional research.","PeriodicalId":41960,"journal":{"name":"Applied Computer Systems","volume":"21 1","pages":"70 - 76"},"PeriodicalIF":1.0,"publicationDate":"2020-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"89933389","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Abstract The achievement of high-precision segmentation in medical image analysis has been an active direction of research over the past decade. Significant success in medical imaging tasks has been feasible due to the employment of deep learning methods, including convolutional neural networks (CNNs). Convolutional architectures have mostly been applied to homogeneous medical datasets with separate organs. Nevertheless, the segmentation of volumetric medical images of several organs remains an open question. In this paper, we investigate fully convolutional neural networks (FCNs) and propose a modified 3D U-Net architecture devoted to the processing of computed tomography (CT) volumetric images in automatic semantic segmentation tasks. To benchmark the architecture, we utilised the differentiable Sørensen-Dice similarity coefficient (SDSC) as a validation metric and optimised it on the training data by minimising the loss function. Our hand-crafted architecture was trained and tested on a manually compiled dataset of CT scans. The improved 3D U-Net architecture achieved an average SDSC score of 84.8 % on the testing subset across multiple abdominal organs. We also compared our architecture with recognised state-of-the-art results and demonstrated that 3D U-Net based architectures can achieve competitive performance and efficiency in the multi-organ segmentation task.
{"title":"Applying 3D U-Net Architecture to the Task of Multi-Organ Segmentation in Computed Tomography","authors":"Pavlo Radiuk","doi":"10.2478/acss-2020-0005","DOIUrl":"https://doi.org/10.2478/acss-2020-0005","url":null,"abstract":"Abstract The achievement of high-precision segmentation in medical image analysis has been an active direction of research over the past decade. Significant success in medical imaging tasks has been feasible due to the employment of deep learning methods, including convolutional neural networks (CNNs). Convolutional architectures have been mostly applied to homogeneous medical datasets with separate organs. Nevertheless, the segmentation of volumetric medical images of several organs remains an open question. In this paper, we investigate fully convolutional neural networks (FCNs) and propose a modified 3D U-Net architecture devoted to the processing of computed tomography (CT) volumetric images in the automatic semantic segmentation tasks. To benchmark the architecture, we utilised the differentiable Sørensen-Dice similarity coefficient (SDSC) as a validation metric and optimised it on the training data by minimising the loss function. Our hand-crafted architecture was trained and tested on the manually compiled dataset of CT scans. The improved 3D UNet architecture achieved the average SDSC score of 84.8 % on testing subset among multiple abdominal organs. We also compared our architecture with recognised state-of-the-art results and demonstrated that 3D U-Net based architectures could achieve competitive performance and efficiency in the multi-organ segmentation task.","PeriodicalId":41960,"journal":{"name":"Applied Computer Systems","volume":"35 1","pages":"43 - 50"},"PeriodicalIF":1.0,"publicationDate":"2020-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"86612474","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Abstract Predicting the stock market remains a challenging task due to the numerous influencing factors such as investor sentiment, firm performance, economic factors and social media sentiment. However, the profitability and economic advantage associated with accurate prediction of stock prices draw the interest of academics, economists, and financial analysts into researching this field. Despite the improvement in stock prediction accuracy, the literature argues that prediction accuracy can be further improved beyond its current measure by looking for newer information sources, particularly on the Internet. Using web news, financial tweets posted on Twitter, Google Trends, and forum discussions, the current study examines the association between public sentiment and the predictability of future stock price movement using an Artificial Neural Network (ANN). We evaluated the proposed predictive framework on stock data obtained from the Ghana Stock Exchange (GSE) between January 2010 and September 2019, and predicted the future stock value for time windows of 1 day, 7 days, 30 days, 60 days, and 90 days. We observed an accuracy of 49.4–52.95 % based on Google Trends, 55.5–60.05 % based on Twitter, 41.52–41.77 % based on forum posts, 50.43–55.81 % based on web news, and 70.66–77.12 % based on a combined dataset. Thus, we recorded an increase in prediction accuracy as several stock-related data sources were combined as input to our prediction model. We also established a high level of direct association between stock market behaviour and social networking sites. Therefore, based on the study outcome, we suggest that stock market investors utilise the information from web financial news, tweets, forum discussions, and Google Trends to effectively perceive future stock price movement and design effective portfolio/investment plans.
{"title":"Predicting Stock Market Price Movement Using Sentiment Analysis: Evidence From Ghana","authors":"Isaac Kofi Nti, Adebayo Felix Adekoya, B. Weyori","doi":"10.2478/acss-2020-0004","DOIUrl":"https://doi.org/10.2478/acss-2020-0004","url":null,"abstract":"Abstract Predicting the stock market remains a challenging task due to the numerous influencing factors such as investor sentiment, firm performance, economic factors and social media sentiments. However, the profitability and economic advantage associated with accurate prediction of stock price draw the interest of academicians, economic, and financial analyst into researching in this field. Despite the improvement in stock prediction accuracy, the literature argues that prediction accuracy can be further improved beyond its current measure by looking for newer information sources particularly on the Internet. Using web news, financial tweets posted on Twitter, Google trends and forum discussions, the current study examines the association between public sentiments and the predictability of future stock price movement using Artificial Neural Network (ANN). We experimented the proposed predictive framework with stock data obtained from the Ghana Stock Exchange (GSE) between January 2010 and September 2019, and predicted the future stock value for a time window of 1 day, 7 days, 30 days, 60 days, and 90 days. We observed an accuracy of (49.4–52.95 %) based on Google trends, (55.5–60.05 %) based on Twitter, (41.52–41.77 %) based on forum post, (50.43–55.81 %) based on web news and (70.66–77.12 %) based on a combined dataset. Thus, we recorded an increase in prediction accuracy as several stock-related data sources were combined as input to our prediction model. We also established a high level of direct association between stock market behaviour and social networking sites. Therefore, based on the study outcome, we advised that stock market investors could utilise the information from web financial news, tweet, forum discussion, and Google trends to effectively perceive the future stock price movement and design effective portfolio/investment plans.","PeriodicalId":41960,"journal":{"name":"Applied Computer Systems","volume":"94 1","pages":"33 - 42"},"PeriodicalIF":1.0,"publicationDate":"2020-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"81611518","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Abstract In this day and age, access to the Internet has become very easy, thereby making access to the different educational resources posted on the cloud even easier. Access to resources such as research journals, publications, and articles in periodicals is restricted in order to retain their authenticity and integrity, as well as to track and record their usage in the form of citations. This gives the author of the resource a fair share of credibility in the community, but this may not be the case with open educational resources such as lecture notes, presentations, test papers, and reports that are produced and used internally within one or multiple organisations. This calls for a system that stores a permanent and immutable repository of these resources in addition to keeping a track record of who utilises them. With the above-mentioned problem in mind, the present research explores how a Blockchain-based system called Block-ED can be used to help the educational community manage its resources in a way that avoids any unauthorised manipulation or alteration of the documents, as well as how this system can provide an innovative method of giving credibility to the creator of a resource whenever it is utilised.
{"title":"Block-ED: The Proposed Blockchain Solution for Effectively Utilising Educational Resources","authors":"Shareen Irshad, M. N. Brohi, Tariq Rahim Soomro","doi":"10.2478/acss-2020-0001","DOIUrl":"https://doi.org/10.2478/acss-2020-0001","url":null,"abstract":"Abstract In this day and age, access to the Internet has become very easy, thereby providing access to different educational resources posted on the cloud even easier. Open access to resources, such as research journals, publications, articles in periodicals etc. is restricted to retain their authenticity and integrity, as well as to track and record their usage in the form of citations. This gives the author of the resource his fair share of credibility in the community, but this may not be the case with open educational resources such as lecture notes, presentations, test papers, reports etc. that are produced and used internally within an organisation or multiple organisations. This calls for the need to build a system that stores a permanent and immutable repository of these resources in addition to keeping a track record of who utilises them. Keeping in view the above-mentioned problem in mind, the present research attempts to explore how a Blockchain based system called Block-ED can be used to help the educational community manage their resources in a way to avoid any unauthorised manipulations or alterations to the documents, as well as recognise how this system can provide an innovative method of giving credibility to the creator of the resource whenever it is utilised.","PeriodicalId":41960,"journal":{"name":"Applied Computer Systems","volume":"81 5 1","pages":"1 - 10"},"PeriodicalIF":1.0,"publicationDate":"2020-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"75743265","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Abstract Deconvolutional neural networks are a very accurate tool for semantic image segmentation. Segmenting curvilinear meandering regions is a typical task in computer vision applied to navigational, civil engineering, and defence problems. In the study, such regions of interest are modelled as meandering transparent stripes whose width is not constant. The stripe on a white background is formed by upper and lower non-parallel black curves so that the upper and lower image parts are completely separated. An algorithm for generating datasets of such regions is developed. It is revealed that deeper networks segment the regions more accurately. However, the segmentation is harder when the regions become bigger. This is why an alternative method of region segmentation, which segments the upper and lower image parts separately and subsequently unifies the results, is not effective. If the region of interest becomes bigger, it must be squeezed in order to avoid segmenting an empty image. Once the squeezed region is segmented, the image is rescaled back to its original size. To control the accuracy, the mean BF score, which has the lowest value among the accuracy indicators, should be maximised first.
{"title":"A Prototype Model for Semantic Segmentation of Curvilinear Meandering Regions by Deconvolutional Neural Networks","authors":"V. Romanuke","doi":"10.2478/acss-2020-0008","DOIUrl":"https://doi.org/10.2478/acss-2020-0008","url":null,"abstract":"Abstract Deconvolutional neural networks are a very accurate tool for semantic image segmentation. Segmenting curvilinear meandering regions is a typical task in computer vision applied to navigational, civil engineering, and defence problems. In the study, such regions of interest are modelled as meandering transparent stripes whose width is not constant. The stripe on the white background is formed by the upper and lower non-parallel black curves so that the upper and lower image parts are completely separated. An algorithm of generating datasets of such regions is developed. It is revealed that deeper networks segment the regions more accurately. However, the segmentation is harder when the regions become bigger. This is why an alternative method of the region segmentation consisting in segmenting the upper and lower image parts by subsequently unifying the results is not effective. If the region of interest becomes bigger, it must be squeezed in order to avoid segmenting the empty image. Once the squeezed region is segmented, the image is conversely rescaled to the original view. To control the accuracy, the mean BF score having the least value among the other accuracy indicators should be maximised first.","PeriodicalId":41960,"journal":{"name":"Applied Computer Systems","volume":"1 1","pages":"62 - 69"},"PeriodicalIF":1.0,"publicationDate":"2020-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"80129673","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Abstract The aim of the research is to study the effect of microwave Wi-Fi radiation on humans and plants. The paper investigates national standards for permissible exposure levels to microwave radiation, measures electric field intensity, and justifies a point of view regarding the safe use of microwave technologies based on multiple plant cultivation experiments at different distances from a Wi-Fi router. The results demonstrate that the radiation of Wi-Fi routers significantly impairs the growth, development, yield and, unexpectedly, the drought resistance of plants at short distances from the microwave source (up to 1 m to 2 m; –33 dBm to –43 dBm; >10 V/m). Slight effects are found up to about 4.5 m from a full-power home Wi-Fi router. As a result, suggestions are made for the safe and balanced use of modern wireless technologies, which can complement occupational safety and health regulations.
{"title":"Some Aspects of Good Practice for Safe Use of Wi-Fi, Based on Experiments and Standards","authors":"I. Gorbans, A. Jurenoks","doi":"10.2478/acss-2019-0020","DOIUrl":"https://doi.org/10.2478/acss-2019-0020","url":null,"abstract":"Abstract The aim of the research is to study the effect of microwave Wi-Fi radiation on humans and plants. The paper investigates national standards for permissible exposure levels to microwave radiation, measures electric field intensity and justifies the point of view regarding the safe use of microwave technologies based on multiple plant cultivation experiments at different distances from a Wi-Fi router. The results demonstrate that the radiation of Wi-Fi routers significantly impairs the growth, development, yield and unexpected drought resistance of plants at short distances from the microwave source (up to 1 m to 2 m; –33 dBm to –43 dBm; >10 V/m). Slight effects are found up to about 4.5 m from a full-power home Wi-Fi router. As a result, suggestions are made for safe and balanced use of modern wireless technologies, which can complement occupational safety and health regulations.","PeriodicalId":41960,"journal":{"name":"Applied Computer Systems","volume":"1 1","pages":"161 - 165"},"PeriodicalIF":1.0,"publicationDate":"2019-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"90447635","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}