
Applied Computer Systems: Latest Publications

Evaluation of Fingerprint Selection Algorithms for Local Text Reuse Detection
IF 1 | Pub Date: 2020-05-01 | DOI: 10.2478/acss-2020-0002
Gints Jēkabsons
Abstract Detection of local text reuse is central to a variety of applications, including plagiarism detection, origin detection, and information flow analysis. This paper evaluates and compares the effectiveness of fingerprint selection algorithms for the source retrieval stage of local text reuse detection. In total, six algorithms are compared – Every p-th, 0 mod p, Winnowing, Hailstorm, Frequency-biased Winnowing (FBW), as well as the proposed modified version of FBW (MFBW). Most of the previously published studies in local text reuse detection are based on datasets having either artificially generated, long, or unobfuscated text reuse. In this study, to evaluate the performance of the algorithms, a new dataset has been built containing real text reuse cases from Bachelor and Master Theses (written in English in the field of computer science), where about half of the cases involve less than 1 % of the document text while about two-thirds of the cases involve paraphrasing. In the performed experiments, the overall best detection quality is reached by Winnowing, 0 mod p, and MFBW. The proposed MFBW algorithm is a considerable improvement over FBW and becomes one of the best-performing algorithms. The software developed for this study is freely available at the author’s website http://www.cs.rtu.lv/jekabsons/.
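To make the fingerprint selection step concrete, the sketch below implements k-gram hashing with the winnowing rule of Schleimer et al. (select the rightmost minimum hash in every window) in Python. The values of k and w, the MD5-based hashing and the toy documents are illustrative assumptions; this is not the implementation published on the author's website.

# Illustrative sketch of k-gram fingerprinting with winnowing selection;
# parameters and hashing are assumptions, not the paper's implementation.
import hashlib

def kgram_hashes(text: str, k: int) -> list[int]:
    """Hash every k-character gram of the crudely normalised text."""
    text = " ".join(text.lower().split())
    return [int(hashlib.md5(text[i:i + k].encode()).hexdigest(), 16) % (1 << 32)
            for i in range(len(text) - k + 1)]

def winnow(hashes: list[int], w: int) -> set[tuple[int, int]]:
    """In every window of w consecutive hashes select the rightmost minimum."""
    fingerprints = set()
    for start in range(len(hashes) - w + 1):
        window = hashes[start:start + w]
        min_val = min(window)
        offset = max(i for i, h in enumerate(window) if h == min_val)
        fingerprints.add((start + offset, min_val))
    return fingerprints

doc_a = "Detection of local text reuse is central to a variety of applications."
doc_b = "Local text reuse detection is central to a variety of applications."
fp_a = winnow(kgram_hashes(doc_a, k=5), w=4)
fp_b = winnow(kgram_hashes(doc_b, k=5), w=4)
print("shared fingerprints:", len({h for _, h in fp_a} & {h for _, h in fp_b}))

The shared-fingerprint count is the kind of evidence the source retrieval stage uses to shortlist candidate source documents.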
Citations: 3
Hand Gesture Recognition in Video Sequences Using Deep Convolutional and Recurrent Neural Networks
IF 1 | Pub Date: 2020-05-01 | DOI: 10.2478/acss-2020-0007
Falah Obaid, Amin Babadi, Ahmad Yoosofan
Abstract Deep learning is a new branch of machine learning, which is widely used by researchers in many artificial intelligence applications, including signal processing and computer vision. The present research investigates the use of deep learning to solve the hand gesture recognition (HGR) problem and proposes two models using deep learning architecture. The first model comprises a convolutional neural network (CNN) and a recurrent neural network with a long short-term memory (RNN-LSTM). The model reaches an accuracy of up to 82 % when fed with the colour channel and 89 % when fed with the depth channel. The second model comprises two parallel convolutional neural networks, which are merged by a merge layer, and a recurrent neural network with a long short-term memory fed by RGB-D. The latter model reaches an accuracy of up to 93 %.
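As an illustration of the first model's overall structure (a per-frame CNN encoder followed by an LSTM over the frame sequence), a minimal PyTorch sketch is given below. The layer sizes, clip length and number of gesture classes are assumptions chosen for brevity, not the architecture reported in the paper.

# Minimal CNN + RNN-LSTM sketch for clip classification; sizes are illustrative.
import torch
import torch.nn as nn

class CnnLstmHGR(nn.Module):
    def __init__(self, n_classes: int = 10, feat_dim: int = 128):
        super().__init__()
        self.cnn = nn.Sequential(                       # per-frame feature extractor
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, feat_dim),
        )
        self.lstm = nn.LSTM(feat_dim, 64, batch_first=True)
        self.head = nn.Linear(64, n_classes)

    def forward(self, clips: torch.Tensor) -> torch.Tensor:
        # clips: (batch, time, channels, height, width)
        b, t, c, h, w = clips.shape
        feats = self.cnn(clips.view(b * t, c, h, w)).view(b, t, -1)
        _, (h_n, _) = self.lstm(feats)                  # last hidden state per clip
        return self.head(h_n[-1])                       # gesture class logits

logits = CnnLstmHGR()(torch.randn(2, 16, 3, 64, 64))    # 2 clips of 16 RGB frames
print(logits.shape)                                      # torch.Size([2, 10])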
Citations: 9
Efficient Performative Actions for E-Commerce Agents
IF 1 | Pub Date: 2020-05-01 | DOI: 10.2478/acss-2020-0003
Awais Qasim, Hafiz Muhammad Basharat Ameen, Zeeshan Aziz, A. Khalid
Abstract The foundational features of multi-agent systems are communication and interaction with other agents. To achieve these features, agents have to transfer messages in a predefined format and semantics. The communication among these agents takes place with the help of ACL (Agent Communication Language). ACL is a predefined language for communication among agents that has been standardised by the FIPA (Foundation for Intelligent Physical Agents). FIPA-ACL defines different performatives for communication among the agents. These performatives are generic, which makes them computationally expensive to use in a specific domain like e-commerce, and they do not define the exact meaning of communication for such a domain. In the present research, we introduced new performatives specifically for the e-commerce domain. Our designed performatives are based on FIPA-ACL so that they can still support communication within diverse agent platforms. The proposed performatives are helpful in modelling e-commerce negotiation protocol applications using the paradigm of multi-agent systems for efficient communication. For an exact semantic interpretation of the proposed performatives, we also modelled them formally using BNF. The primary objective of our research was to provide a negotiation facility to agents working in an e-commerce domain in a succinct way that reduces the number of negotiation messages, the time consumption and the network overhead on the platform. We used an e-commerce-based bidding case study among agents to demonstrate the efficiency of our approach. The results showed a considerable reduction in the total time required for the bidding process.
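The sketch below shows how a domain-specific performative can be carried inside an ACL-style message structure. The performative names and the message fields are hypothetical examples of the kind of e-commerce speech acts the paper argues for; they are not the set actually defined by the authors.

# Hypothetical e-commerce performatives wrapped in an ACL-style message.
from dataclasses import dataclass, field
from enum import Enum, auto

class EcommercePerformative(Enum):
    CALL_FOR_BID = auto()      # buyer opens a negotiation round
    BID = auto()               # seller answers with a priced offer
    COUNTER_OFFER = auto()     # buyer proposes a revised price
    ACCEPT_OFFER = auto()      # either side closes the deal
    REJECT_OFFER = auto()      # either side leaves the negotiation

@dataclass
class AclMessage:
    performative: EcommercePerformative
    sender: str
    receiver: str
    content: dict = field(default_factory=dict)
    conversation_id: str = ""

offer = AclMessage(EcommercePerformative.BID, "seller-agent", "buyer-agent",
                   {"item": "laptop", "price": 850.0}, conversation_id="neg-42")
print(offer.performative.name, offer.content["price"])

Because each act is named explicitly, a bidding round can be expressed in a handful of messages instead of being encoded indirectly through generic FIPA-ACL performatives, which is the efficiency argument the abstract makes.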
Citations: 2
The Construction of Effective Multi-Dimensional Computer Designs of Experiments Based on a Quasi-Random Additive Recursive Rd-sequence
IF 1 | Pub Date: 2020-05-01 | DOI: 10.2478/acss-2020-0009
V. Halchenko, R. Trembovetska, V. Tychkov, A. Storchak
Abstract Uniform multi-dimensional designs of experiments for effective research in computer modelling are in high demand. Combinations of several one-dimensional quasi-random sequences with a uniform distribution are used to create designs with high homogeneity, but their optimal choice is a separate problem whose solution is not trivial. It is believed that the best results are currently achieved using Sobol’s LPτ-sequences, but this is not observed in all cases of their combinations. The authors proposed creating effective uniform designs with a guaranteed, acceptably low discrepancy using recursive Rd-sequences, without requiring additional research to find successful combinations of vector sets distributed in a single hypercube. The authors performed a comparative analysis of both approaches using indicators of centred and wrap-around discrepancy and graphical visualisation based on Voronoi diagrams. A conclusion was drawn on the practical use of the proposed approach in cases where the design requirements allow restricting the design to a variant that is not ideal but close to it, with low discrepancy, obtained automatically without additional research.
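For readers unfamiliar with the construction, the sketch below generates points of an additive recursive Rd-sequence from the generalised golden ratio, the positive root of x^(d+1) = x + 1. It is a generic formulation of such sequences; the seed value of 0.5 is a common convention and the code is not the authors' implementation.

# Generic additive recursive R_d quasi-random sequence in the unit hypercube.
import numpy as np

def phi(d: int, iters: int = 50) -> float:
    """Fixed-point iteration for the positive root of x**(d + 1) = x + 1."""
    x = 2.0
    for _ in range(iters):
        x = (1.0 + x) ** (1.0 / (d + 1))
    return x

def rd_sequence(n_points: int, d: int, seed: float = 0.5) -> np.ndarray:
    """Return n_points quasi-random points in the d-dimensional unit hypercube."""
    alpha = np.array([(1.0 / phi(d)) ** (j + 1) for j in range(d)])
    n = np.arange(1, n_points + 1).reshape(-1, 1)
    return (seed + n * alpha) % 1.0          # additive recursion in closed form

design = rd_sequence(n_points=256, d=3)
print(design.shape, float(design.min()), float(design.max()))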
Citations: 8
Anemic Domain Model vs Rich Domain Model to Improve the Two-Hemisphere Model-Driven Approach
IF 1 | Pub Date: 2020-05-01 | DOI: 10.2478/acss-2020-0006
O. Ņikiforova, Konstantins Gusarovs
Abstract The evolution of the software development process and the increasing complexity of software systems call for developers to pay close attention to the evolution of CASE tools for software development. This, in turn, triggers the appearance of a new wave (or new generation) of such CASE tools. The authors of the paper have been working on the development of the so-called two-hemisphere model-driven approach and its supporting BrainTool for the past 10 years. This paper is a step forward in the research on the ability to use the two-hemisphere model-driven approach for system modelling at the problem domain level and to generate UML diagrams and software code from the two-hemisphere model. The paper discusses the usage of an anemic domain model instead of a rich domain model and offers the main principle of transforming the two-hemisphere model into the former.
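A minimal illustration of the two styles, using a hypothetical Order entity, is sketched below: in the anemic variant the entity is a plain data holder and the logic lives in a separate service, while in the rich variant the entity enforces its invariants and owns its behaviour. The example is generic and is not taken from the paper or from BrainTool.

# Anemic style: data-only entity plus an external service holding the logic.
from dataclasses import dataclass, field

@dataclass
class AnemicOrder:
    lines: list = field(default_factory=list)     # (price, quantity) tuples

class OrderService:
    @staticmethod
    def total(order: AnemicOrder) -> float:
        return sum(price * qty for price, qty in order.lines)

# Rich style: the entity validates its own state and exposes the behaviour.
@dataclass
class RichOrder:
    lines: list = field(default_factory=list)

    def add_line(self, price: float, qty: int) -> None:
        if price < 0 or qty <= 0:
            raise ValueError("invalid order line")   # invariant kept inside the entity
        self.lines.append((price, qty))

    def total(self) -> float:
        return sum(price * qty for price, qty in self.lines)

rich = RichOrder()
rich.add_line(9.99, 3)
print(OrderService.total(AnemicOrder([(9.99, 3)])), rich.total())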
Citations: 2
Applying 3D U-Net Architecture to the Task of Multi-Organ Segmentation in Computed Tomography
IF 1 | Pub Date: 2020-05-01 | DOI: 10.2478/acss-2020-0005
Pavlo Radiuk
Abstract The achievement of high-precision segmentation in medical image analysis has been an active direction of research over the past decade. Significant success in medical imaging tasks has become feasible due to the employment of deep learning methods, including convolutional neural networks (CNNs). Convolutional architectures have mostly been applied to homogeneous medical datasets with separate organs. Nevertheless, the segmentation of volumetric medical images of several organs remains an open question. In this paper, we investigate fully convolutional neural networks (FCNs) and propose a modified 3D U-Net architecture devoted to the processing of computed tomography (CT) volumetric images in automatic semantic segmentation tasks. To benchmark the architecture, we utilised the differentiable Sørensen-Dice similarity coefficient (SDSC) as a validation metric and optimised it on the training data by minimising the loss function. Our hand-crafted architecture was trained and tested on a manually compiled dataset of CT scans. The improved 3D U-Net architecture achieved an average SDSC score of 84.8 % on the testing subset across multiple abdominal organs. We also compared our architecture with recognised state-of-the-art results and demonstrated that 3D U-Net based architectures can achieve competitive performance and efficiency in the multi-organ segmentation task.
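A common way to make the Sørensen-Dice coefficient differentiable is the soft Dice formulation sketched below in PyTorch. The smoothing constant and the use of 1 - DSC as the training loss are standard conventions and are not necessarily identical to the authors' exact loss function.

# Soft (differentiable) Dice coefficient and the corresponding loss; a generic
# formulation, not necessarily the paper's exact implementation.
import torch

def soft_dice(pred: torch.Tensor, target: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    """pred: predicted probabilities, target: binary mask, same shape."""
    pred, target = pred.flatten(1), target.flatten(1)        # (batch, voxels)
    intersection = (pred * target).sum(dim=1)
    return (2.0 * intersection + eps) / (pred.sum(dim=1) + target.sum(dim=1) + eps)

def dice_loss(pred: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
    return 1.0 - soft_dice(pred, target).mean()              # minimise 1 - DSC

pred = torch.rand(2, 1, 8, 16, 16, requires_grad=True)       # toy CT-like volumes
target = (torch.rand(2, 1, 8, 16, 16) > 0.5).float()
loss = dice_loss(pred, target)
loss.backward()                                               # gradients flow to pred
print(float(loss))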
Citations: 12
Predicting Stock Market Price Movement Using Sentiment Analysis: Evidence From Ghana
IF 1 | Pub Date: 2020-05-01 | DOI: 10.2478/acss-2020-0004
Isaac Kofi Nti, Adebayo Felix Adekoya, B. Weyori
Abstract Predicting the stock market remains a challenging task due to the numerous influencing factors such as investor sentiment, firm performance, economic factors and social media sentiment. However, the profitability and economic advantage associated with accurate prediction of stock prices draw the interest of academicians and economic and financial analysts to research in this field. Despite the improvement in stock prediction accuracy, the literature argues that prediction accuracy can be further improved beyond its current level by looking for newer information sources, particularly on the Internet. Using web news, financial tweets posted on Twitter, Google Trends and forum discussions, the current study examines the association between public sentiment and the predictability of future stock price movement using an Artificial Neural Network (ANN). We evaluated the proposed predictive framework with stock data obtained from the Ghana Stock Exchange (GSE) between January 2010 and September 2019, and predicted the future stock value for time windows of 1 day, 7 days, 30 days, 60 days, and 90 days. We observed an accuracy of 49.4–52.95 % based on Google Trends, 55.5–60.05 % based on Twitter, 41.52–41.77 % based on forum posts, 50.43–55.81 % based on web news and 70.66–77.12 % based on a combined dataset. Thus, we recorded an increase in prediction accuracy as several stock-related data sources were combined as input to our prediction model. We also established a high level of direct association between stock market behaviour and social networking sites. Therefore, based on the study outcome, we advise that stock market investors utilise the information from web financial news, tweets, forum discussions, and Google Trends to effectively perceive future stock price movement and design effective portfolio/investment plans.
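The general pipeline (concatenate daily sentiment features with lagged price features, then train a small feed-forward ANN to classify the price movement) can be sketched as follows. The data below are synthetic placeholders rather than GSE data, and the network size is an assumption made only for illustration.

# Feed-forward ANN on combined sentiment + price features; synthetic data only.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_days = 500
sentiment = rng.normal(size=(n_days, 4))            # news, tweets, forum, trends scores
lagged_returns = rng.normal(scale=0.01, size=(n_days, 5))
X = np.hstack([sentiment, lagged_returns])
# synthetic "up/down" label loosely driven by sentiment, for illustration only
y = (sentiment.mean(axis=1) + rng.normal(scale=0.5, size=n_days) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, shuffle=False)
model = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=1000, random_state=0)
model.fit(X_tr, y_tr)
print(f"directional accuracy: {model.score(X_te, y_te):.2f}")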
Citations: 28
Block-ED: The Proposed Blockchain Solution for Effectively Utilising Educational Resources
IF 1 | Pub Date: 2020-05-01 | DOI: 10.2478/acss-2020-0001
Shareen Irshad, M. N. Brohi, Tariq Rahim Soomro
Abstract In this day and age, access to the Internet has become very easy, thereby making access to the different educational resources posted on the cloud even easier. Open access to resources such as research journals, publications, articles in periodicals, etc. is restricted to retain their authenticity and integrity, as well as to track and record their usage in the form of citations. This gives the author of the resource a fair share of credibility in the community, but this may not be the case with open educational resources such as lecture notes, presentations, test papers, reports, etc. that are produced and used internally within one or more organisations. This calls for a system that stores a permanent and immutable repository of these resources in addition to keeping a track record of who utilises them. Keeping the above-mentioned problem in mind, the present research explores how a Blockchain-based system called Block-ED can be used to help the educational community manage their resources in a way that avoids any unauthorised manipulations or alterations to the documents, and how this system can provide an innovative method of giving credibility to the creator of the resource whenever it is utilised.
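The core ledger mechanism behind such a system can be sketched as a hash-chained list of blocks in which every block records a resource fingerprint and a usage event, so that any later modification invalidates the chain. The field names and flow below are illustrative and do not describe the actual Block-ED design.

# Minimal hash-chained ledger of resource publications and usages.
import hashlib, json, time

def block_hash(block: dict) -> str:
    payload = json.dumps({k: v for k, v in block.items() if k != "hash"},
                         sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def add_block(chain: list, resource_id: str, action: str, user: str) -> None:
    block = {
        "index": len(chain),
        "timestamp": time.time(),
        "resource_id": resource_id,          # e.g. SHA-256 digest of the uploaded file
        "action": action,                    # "publish" or "use"
        "user": user,
        "prev_hash": chain[-1]["hash"] if chain else "0" * 64,
    }
    block["hash"] = block_hash(block)
    chain.append(block)

def chain_is_valid(chain: list) -> bool:
    return all(b["hash"] == block_hash(b) and
               (i == 0 or b["prev_hash"] == chain[i - 1]["hash"])
               for i, b in enumerate(chain))

ledger = []
add_block(ledger, "sha256:abc123", "publish", "lecturer-01")
add_block(ledger, "sha256:abc123", "use", "student-42")
print(chain_is_valid(ledger))                # True
ledger[0]["user"] = "someone-else"           # tampering with a past record
print(chain_is_valid(ledger))                # False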
Citations: 5
A Prototype Model for Semantic Segmentation of Curvilinear Meandering Regions by Deconvolutional Neural Networks
IF 1 | Pub Date: 2020-05-01 | DOI: 10.2478/acss-2020-0008
V. Romanuke
Abstract Deconvolutional neural networks are a very accurate tool for semantic image segmentation. Segmenting curvilinear meandering regions is a typical task in computer vision applied to navigational, civil engineering, and defence problems. In the study, such regions of interest are modelled as meandering transparent stripes whose width is not constant. The stripe on the white background is formed by the upper and lower non-parallel black curves so that the upper and lower image parts are completely separated. An algorithm for generating datasets of such regions is developed. It is revealed that deeper networks segment the regions more accurately. However, the segmentation becomes harder as the regions become bigger. This is why an alternative method of region segmentation, consisting of segmenting the upper and lower image parts and subsequently unifying the results, is not effective. If the region of interest becomes bigger, it must be squeezed in order to avoid segmenting an empty image. Once the squeezed region is segmented, the image is rescaled back to the original view. To control the accuracy, the mean BF score, which has the least value among the accuracy indicators, should be maximised first.
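The dataset generation step can be illustrated with the sketch below, which draws two non-parallel random curves on a white background and builds the stripe mask of the region between them. The curve parameterisation and image size are assumptions made for the example and do not reproduce the paper's generator.

# Synthetic meandering stripe: two non-parallel black curves on a white image
# plus the ground-truth mask of the region between them (illustrative only).
import numpy as np

def meandering_stripe(h: int = 128, w: int = 128, seed: int = 0):
    rng = np.random.default_rng(seed)
    x = np.linspace(0.0, 2.0 * np.pi, w)

    def curve(offset: float) -> np.ndarray:
        return (offset
                + rng.uniform(5, 15) * np.sin(rng.uniform(0.5, 2.0) * x + rng.uniform(0, np.pi))
                + rng.uniform(-10, 10) * x / (2.0 * np.pi))      # non-parallel drift

    upper = curve(h * 0.35)
    lower = np.maximum(curve(h * 0.65), upper + 3)               # keep a positive width
    rows = np.arange(h).reshape(-1, 1)
    mask = ((rows >= upper) & (rows <= lower)).astype(np.uint8)  # stripe region
    image = np.ones((h, w), dtype=np.float32)                    # white background
    cols = np.arange(w)
    image[np.clip(upper.astype(int), 0, h - 1), cols] = 0.0      # black upper curve
    image[np.clip(lower.astype(int), 0, h - 1), cols] = 0.0      # black lower curve
    return image, mask

image, mask = meandering_stripe()
print(image.shape, int(mask.sum()), "stripe pixels")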
Citations: 2
Minimal Total Weighted Tardiness in Tight-Tardy Single Machine Preemptive Idling-Free Scheduling
IF 1 | Pub Date: 2019-12-01 | DOI: 10.2478/acss-2019-0019
V. Romanuke
Abstract Two possibilities of obtaining the minimal total weighted tardiness in tight-tardy single machine preemptive idling-free scheduling are studied. The Boolean linear programming model, which allows obtaining the exactly minimal tardiness, becomes too time-consuming as either the number of jobs or the number of job parts increases. Therefore, a heuristic based on remaining available and processing periods is used instead. The heuristic always schedules 2 jobs with the minimal tardiness. In scheduling 3 to 7 jobs, the risk of missing the minimal tardiness is just 1.5 % to 3.2 %. It is expected that scheduling 12 and more jobs carries at most the same risk or an even lower one. In scheduling 10 jobs without a timeout, the heuristic is almost 1 million times faster than the exact model. The exact model is still applicable for scheduling 3 to 5 jobs, where the averaged computation time varies from 0.1 s to 1.02 s. However, the maximal computation time for 6 jobs is close to 1 minute. A further increase in the number of jobs may delay obtaining the minimal tardiness by at least a few minutes, but 7 jobs can still be scheduled within at worst 7 minutes. When scheduling 8 jobs or more, the exact model should be substituted with the heuristic.
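To make the objective concrete, the sketch below computes the total weighted tardiness of a preemptive idling-free schedule produced by a simple greedy dispatching rule (smallest remaining slack scaled by weight). The rule is only an illustration of this heuristic family; it is not the author's heuristic based on remaining available and processing periods, and the job data are made up.

# Greedy unit-time dispatching and the total weighted tardiness objective.
def greedy_schedule(jobs: list) -> int:
    """jobs: dicts with 'p' (processing units), 'r' (release), 'd' (due), 'w' (weight)."""
    remaining = [j["p"] for j in jobs]
    completion = [0] * len(jobs)
    t = 0
    while any(remaining):
        ready = [i for i, j in enumerate(jobs) if j["r"] <= t and remaining[i] > 0]
        if not ready:                        # idle only when no released job is unfinished
            t += 1
            continue
        # pick the released job with the tightest remaining slack per unit weight
        i = min(ready, key=lambda k: (jobs[k]["d"] - t - remaining[k]) / jobs[k]["w"])
        remaining[i] -= 1                    # process one unit; preemption is allowed
        t += 1
        if remaining[i] == 0:
            completion[i] = t
    return sum(j["w"] * max(0, completion[i] - j["d"]) for i, j in enumerate(jobs))

jobs = [{"p": 3, "r": 0, "d": 4, "w": 2},
        {"p": 2, "r": 1, "d": 3, "w": 3},
        {"p": 2, "r": 0, "d": 7, "w": 1}]
print("total weighted tardiness:", greedy_schedule(jobs))      # 5 for this instance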
Citations: 11