
Applied Computer Systems: Latest Publications

Minimal Total Weighted Tardiness in Tight-Tardy Single Machine Preemptive Idling-Free Scheduling
IF 1.0 | Q4 | COMPUTER SCIENCE, THEORY & METHODS | Pub Date: 2019-12-01 | DOI: 10.2478/acss-2019-0019
V. Romanuke
Abstract Two possibilities of obtaining the minimal total weighted tardiness in tight-tardy single machine preemptive idling-free scheduling are studied. The Boolean linear programming model, which allows obtaining the exactly minimal tardiness, becomes too time-consuming as either the number of jobs or the number of job parts increases. Therefore, a heuristic based on remaining available and processing periods is used instead. The heuristic always schedules 2 jobs with the minimal tardiness. In scheduling 3 to 7 jobs, the risk of missing the minimal tardiness is just 1.5 % to 3.2 %. Scheduling 12 or more jobs is expected to carry at most the same risk, or even a lower one. In scheduling 10 jobs without a timeout, the heuristic is almost 1 million times faster than the exact model. The exact model is still applicable for scheduling 3 to 5 jobs, where the average computation time varies from 0.1 s to 1.02 s. However, the maximal computation time for 6 jobs is close to 1 minute. Adding further jobs may delay obtaining the minimal tardiness by at least a few minutes, but 7 jobs can still be scheduled in at worst 7 minutes. When scheduling 8 jobs or more, the exact model should be substituted with the heuristic.
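The objective minimised in the paper is total weighted tardiness. A minimal sketch of the objective function follows; the job data below are illustrative, not taken from the paper.

```python
def total_weighted_tardiness(completion, due, weight):
    """Objective value: sum of w_j * max(0, C_j - d_j) over all jobs,
    where C_j is the completion time and d_j the due date of job j."""
    return sum(w * max(0, c - d) for c, d, w in zip(completion, due, weight))

# Example: three jobs with completion times, due dates and weights.
C = [4, 7, 9]
d = [5, 6, 6]
w = [2, 1, 3]
print(total_weighted_tardiness(C, d, w))  # 2*0 + 1*1 + 3*3 = 10
```

Both the exact Boolean linear programming model and the heuristic compared in the paper minimise this same quantity; they differ only in how the schedule is searched for.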
Citations: 11
Results From Expert Survey on System Analysis Process Activities
IF 1.0 | Q4 | COMPUTER SCIENCE, THEORY & METHODS | Pub Date: 2019-12-01 | DOI: 10.2478/acss-2019-0018
Laima Leimane, O. Ņikiforova
Abstract System analysis is a crucial and complex step in the software engineering process, which affects the overall success of the project and the quality of its outcome. Even though Agile methods have become widely popular, they impose no structure on requirements elicitation and specification, which can affect whether a project has a favourable outcome. Nevertheless, regardless of the approach chosen by industry practitioners, it is important to identify which activities are currently performed and to analyse the causes and possible issues that are encountered. The paper presents results from an expert survey on the importance of activities related to the requirements elicitation, analysis and specification process and on the use of tools to support this process. The Delphi method, used to evaluate the responses, is described. Lists of activities are ranked according to importance, and additional information on the expert responses is given in the paper. The information can give an insight into the activities and tools that are used in the industry.
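The abstract does not detail the ranking procedure; a common way to turn individual expert rankings into a single importance ordering in Delphi-style surveys is mean-rank aggregation. A hedged sketch, with hypothetical activity names:

```python
from statistics import mean

def aggregate_ranks(rankings):
    """rankings: list of dicts {activity: rank}, one per expert.
    Returns activities ordered by mean rank (lower rank = more important)."""
    activities = rankings[0].keys()
    avg = {a: mean(r[a] for r in rankings) for a in activities}
    return sorted(avg, key=avg.get)

# Three hypothetical experts ranking three activities (1 = most important).
experts = [
    {"elicitation": 1, "analysis": 2, "specification": 3},
    {"elicitation": 2, "analysis": 1, "specification": 3},
    {"elicitation": 1, "analysis": 3, "specification": 2},
]
print(aggregate_ranks(experts))  # ['elicitation', 'analysis', 'specification']
```

The actual study may use a different aggregation or consensus measure; this only illustrates the general shape of ranking survey responses.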
Citations: 1
Extracting TFM Core Elements From Use Case Scenarios by Processing Structure and Text in Natural Language
IF 1.0 | Q4 | COMPUTER SCIENCE, THEORY & METHODS | Pub Date: 2019-12-01 | DOI: 10.2478/acss-2019-0012
Erika Nazaruka, J. Osis, Viktorija Gribermane
Abstract Extracting core elements of the Topological Functioning Model (TFM) from use case scenarios requires processing both the structure and the natural language constructs in use case step descriptions. The processing steps are discussed in the present paper. Analysis of natural language constructs is based on outcomes provided by Stanford CoreNLP, a Natural Language Processing pipeline that allows analysing text at the paragraph, sentence and word levels. The proposed technique allows extracting actions, objects, results, preconditions, post-conditions and executors of the functional features, as well as cause-effect relations between them. However, its accuracy depends on the language constructs used and on the accuracy of the event flow specifications. The analysis of the results allows concluding that even use case specifications require a rigorous, or even uniform, structure of paths and sentences, as well as awareness of possible parsing errors.
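Stanford CoreNLP itself is a Java pipeline; the mapping from a dependency parse to TFM elements can nevertheless be illustrated in a few lines. The sketch below uses hand-written dependency triples in place of real parser output, and the relation-to-element mapping is a simplification for illustration, not the paper's full technique:

```python
def extract_functional_feature(dependencies):
    """dependencies: list of (relation, head, dependent) triples, in the
    shape a dependency parser such as Stanford CoreNLP would produce.
    Maps the root verb to the action, the nominal subject to the executor
    and the direct object to the object of a TFM functional feature."""
    feature = {"action": None, "executor": None, "object": None}
    for rel, head, dep in dependencies:
        if rel == "root":
            feature["action"] = dep
        elif rel == "nsubj":
            feature["executor"] = dep
        elif rel in ("obj", "dobj"):
            feature["object"] = dep
    return feature

# Parse of "The librarian registers the reader" (hand-written for the sketch).
parse = [("root", "ROOT", "registers"),
         ("nsubj", "registers", "librarian"),
         ("obj", "registers", "reader")]
print(extract_functional_feature(parse))
# {'action': 'registers', 'executor': 'librarian', 'object': 'reader'}
```

Preconditions, post-conditions and cause-effect relations would need sentence-level and scenario-level processing beyond this word-level sketch.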
Citations: 0
Genetic Algorithm Based Feature Selection Technique for Electroencephalography Data
IF 1.0 | Q4 | COMPUTER SCIENCE, THEORY & METHODS | Pub Date: 2019-12-01 | DOI: 10.2478/acss-2019-0015
Tariq Ali, Asif Nawaz, H. Sadia
Abstract High dimensionality is a well-known problem: a dataset may contain a huge number of features, yet few of them are helpful for a particular data mining task, for example, classification or clustering. Therefore, feature selection is used frequently to reduce the dataset dimensionality. Feature selection is a multi-objective task, which reduces dataset dimensionality, decreases the running time and, furthermore, enhances the expected precision. In the study, our goal is to reduce the number of features of electroencephalography data for eye state classification and achieve the same or even better classification accuracy with the least number of features. We propose a genetic algorithm-based feature selection technique with the KNN classifier. The accuracy with the feature subset selected by the proposed technique is improved as compared to the full feature set. Results prove that the classification precision of the proposed strategy is enhanced by 3 % on average when contrasted with the accuracy without feature selection.
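A minimal sketch of GA-based feature selection wrapped around a 1-NN classifier follows. The toy dataset, fitness penalty and GA settings are assumptions for illustration, not the paper's configuration:

```python
import random

random.seed(0)

# Toy dataset: feature 0 separates the classes; features 1-3 are noise.
X = [[0.1, 5.0, -3.0, 7.0], [0.2, -4.0, 6.0, -1.0],
     [0.15, 3.0, 2.0, 0.0], [0.9, 5.5, -2.5, 6.5],
     [1.0, -3.5, 5.5, -0.5], [0.95, 2.5, 1.5, 0.2]]
y = [0, 0, 0, 1, 1, 1]

def loo_1nn_accuracy(mask):
    """Leave-one-out 1-NN accuracy using only features where mask[j] == 1."""
    if not any(mask):
        return 0.0
    correct = 0
    for i in range(len(X)):
        dists = [(sum((X[i][j] - X[k][j]) ** 2 for j in range(4) if mask[j]), k)
                 for k in range(len(X)) if k != i]
        _, nearest = min(dists)
        correct += (y[nearest] == y[i])
    return correct / len(X)

def fitness(mask):
    # Reward accuracy, lightly penalise each selected feature.
    return loo_1nn_accuracy(mask) - 0.01 * sum(mask)

def ga(pop_size=20, generations=30, mut=0.1):
    """Elitist GA over bitmask chromosomes with one-point crossover."""
    pop = [[random.randint(0, 1) for _ in range(4)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        elite = pop[:pop_size // 2]
        children = []
        while len(children) < pop_size - len(elite):
            a, b = random.sample(elite, 2)
            cut = random.randint(1, 3)
            child = a[:cut] + b[cut:]
            child = [g ^ (random.random() < mut) for g in child]  # bit-flip mutation
            children.append(child)
        pop = elite + children
    return max(pop, key=fitness)

best = ga()
print(best, loo_1nn_accuracy(best))
```

With this construction the informative feature alone already achieves perfect leave-one-out accuracy, so the GA is rewarded for discarding the noise features, mirroring the paper's goal of equal or better accuracy with fewer features.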
Citations: 3
A Dataset-Independent Model for Estimating Software Development Effort Using Soft Computing Techniques
IF 1.0 | Q4 | COMPUTER SCIENCE, THEORY & METHODS | Pub Date: 2019-12-01 | DOI: 10.2478/acss-2019-0011
Mahdi Khazaiepoor, A. K. Bardsiri, F. Keynia
Abstract During recent years, numerous endeavours have been made in the area of software development effort estimation for calculating software costs in the preliminary development stages. These studies have resulted in a great many models. Despite this large body of effort, the substantial problems of the offered methods are their dependency on the dataset used and, sometimes, their lack of appropriate efficiency. The current article attempts to present a model for software development effort estimation that makes use of evolutionary algorithms and neural networks. The distinctive characteristics of this model are its lack of dependency on the dataset used as well as its high efficiency. To evaluate the proposed model, six different datasets have been used in the area of software effort estimation. The reason for using several datasets is to investigate whether the model performance is independent of the dataset used. The evaluation metrics have been MMRE, MdMRE and PRED (0.25). The results have indicated that the proposed model, besides delivering high efficiency in contrast to its counterparts, produces the best responses for all of the datasets used.
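The three evaluation metrics are standard in effort estimation and easy to compute: the magnitude of relative error (MRE) is |actual - predicted| / actual per project, MMRE and MdMRE are its mean and median over all projects, and PRED(0.25) is the fraction of projects with MRE at or below 0.25. A sketch with made-up effort values:

```python
from statistics import mean, median

def mre(actual, predicted):
    """Magnitude of relative error for one project."""
    return abs(actual - predicted) / actual

def evaluate(actuals, predictions, level=0.25):
    errors = [mre(a, p) for a, p in zip(actuals, predictions)]
    return {
        "MMRE": mean(errors),    # mean MRE (lower is better)
        "MdMRE": median(errors), # median MRE, robust to outliers
        "PRED": sum(e <= level for e in errors) / len(errors),
    }

actual = [100, 200, 400, 800]      # e.g. person-hours; illustrative values
predicted = [110, 150, 380, 1000]
print(evaluate(actual, predicted))
```

Lower MMRE/MdMRE and higher PRED(0.25) indicate a better estimator, which is the direction in which the paper compares its model against its counterparts.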
Citations: 3
Fuzzy Expert System Generalised Model for Medical Applications
IF 1.0 | Q4 | COMPUTER SCIENCE, THEORY & METHODS | Pub Date: 2019-12-01 | DOI: 10.2478/acss-2019-0016
Osée Muhindo Masivi
Abstract Over the past two decades, an exponential growth of medical fuzzy expert systems has been observed. These systems address specific forms of medical and health problems, resulting in differentiated models which are application dependent and may lack adaptability. This research proposes a generalised model encompassing the major features of specialised existing fuzzy systems. The model was obtained by generalisation modelling by design, in which the major components of the differentiated systems were identified and used as the components of the general model. The prototype shows that the proposed model allows medical experts to define fuzzy variables (rule base) for any medical application, and users to enter symptoms (fact base) and query their medical conditions from the designed generalised core inference engine. Further research may include adding more composition conditions, more combining techniques and more tests in several environments in order to check its precision, sensitivity and specificity.
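The kind of inference such a core engine performs can be illustrated with triangular membership functions, rule firing to the degree the antecedent holds, and a simple weighted-average defuzzification. The temperature variable, membership functions and rules below are hypothetical, not taken from the paper:

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fever_likelihood(temp_c):
    """Hypothetical two-rule base:
    rule 1: IF temperature IS high   THEN fever IS likely   (consequent 1.0)
    rule 2: IF temperature IS normal THEN fever IS unlikely (consequent 0.0)"""
    high = tri(temp_c, 37.0, 39.0, 41.0)     # membership of "high temperature"
    normal = tri(temp_c, 35.5, 36.8, 37.5)   # membership of "normal temperature"
    likely, unlikely = high, normal          # rule firing strengths
    if likely + unlikely == 0:
        return None  # no rule fires: input outside the defined universe
    # Weighted-average defuzzification over the two crisp consequents.
    return (likely * 1.0 + unlikely * 0.0) / (likely + unlikely)

print(fever_likelihood(37.2))  # roughly 0.19: mostly "normal", slightly "high"
```

In the generalised model the rule base would be supplied by medical experts per application, while this inference machinery stays fixed.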
Citations: 0
A New Big Data Model Using Distributed Cluster-Based Resampling for Class-Imbalance Problem
IF 1.0 | Q4 | COMPUTER SCIENCE, THEORY & METHODS | Pub Date: 2019-12-01 | DOI: 10.2478/acss-2019-0013
Duygu Sinanc Terzi, Ş. Sağiroğlu
Abstract The class imbalance problem, one of the common data irregularities, causes the development of under-represented models. To resolve this issue, the present study proposes a new cluster-based MapReduce design, entitled Distributed Cluster-based Resampling for Imbalanced Big Data (DIBID). The design aims at modifying the existing dataset to increase the classification success. Within the study, DIBID has been evaluated on public datasets under two strategies. The first strategy has been designed to present the success of the model on datasets with different imbalance ratios. The second strategy has been designed to compare the success of the model with other imbalanced big data solutions in the literature. According to the results, DIBID outperformed other imbalanced big data solutions in the literature and increased area-under-the-curve values by between 10 % and 24 % in the case study.
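The abstract does not give DIBID's MapReduce design itself, but the core idea of cluster-based resampling, undersampling the majority class evenly across its clusters so that no region of it is lost, can be sketched with toy one-dimensional data and a minimal k-means. This is an illustration of the general technique, not the paper's algorithm:

```python
import random

random.seed(1)

def kmeans(points, k, iters=20):
    """Minimal k-means for the sketch (1-D points); returns the clusters."""
    centers = random.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            clusters[min(range(k), key=lambda i: abs(p - centers[i]))].append(p)
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return clusters

def cluster_undersample(majority, minority, k=3):
    """Shrink the majority class towards the minority size, drawing evenly
    from each cluster so every region of the majority class stays represented."""
    clusters = kmeans(majority, k)
    per_cluster = max(1, len(minority) // k)
    sample = []
    for c in clusters:
        sample.extend(random.sample(c, min(per_cluster, len(c))))
    return sample

majority = [1.0, 1.1, 1.2, 5.0, 5.1, 5.2, 9.0, 9.1, 9.2, 9.3]
minority = [3.0, 3.1, 3.2]
balanced = cluster_undersample(majority, minority)
print(len(balanced), sorted(balanced))
```

In a MapReduce setting the clustering and per-cluster sampling would be distributed over partitions; the sketch keeps only the resampling logic.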
Citations: 3
Affective State Based Anomaly Detection in Crowd
IF 1.0 | Q4 | COMPUTER SCIENCE, THEORY & METHODS | Pub Date: 2019-12-01 | DOI: 10.2478/acss-2019-0017
Glorija Baliniskite, E. Lavendelis, Mara Pudane
Abstract To distinguish individuals with dangerous abnormal behaviours from the crowd, human characteristics (e.g., speed and direction of motion, interaction with other people), crowd characteristics (such as flow and density), the space available to individuals, etc. must be considered. The paper proposes an approach that considers individual and crowd metrics to determine anomaly. An individual's abnormal behaviour alone does not indicate a threat towards other individuals, as such behaviour can also be triggered by positive emotions or events. To avoid flagging individuals whose abnormal behaviour is unrelated to aggression and poses no danger, it is suggested to use the emotional state of individuals. The aim of the proposed approach is to automate video surveillance systems by enabling them to detect potentially dangerous situations automatically.
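One way to read the proposal is as a behavioural anomaly score gated by affective state, so that abnormal motion driven by positive emotion is not flagged. The sketch below is a loose illustration with assumed weights and thresholds, not the paper's model:

```python
def anomaly_score(speed_dev, flow_dev, density, valence):
    """Illustrative combination of metrics (weights are assumptions):
    speed_dev  - deviation of the individual's speed from the crowd's,
    flow_dev   - deviation from the dominant direction of flow,
    density    - local crowd density around the individual,
    valence    - estimated emotional valence (< 0 means negative affect).
    Behavioural deviation raises the score; a positive emotional state
    suppresses it, so that e.g. celebrating fans are not flagged."""
    behaviour = 0.5 * speed_dev + 0.3 * flow_dev + 0.2 * density
    affect_gate = 1.0 if valence < 0 else 0.3
    return behaviour * affect_gate

# Running against the flow in a dense crowd, with negative affect:
print(anomaly_score(0.9, 0.8, 0.7, valence=-0.6))  # high score
# Same motion pattern but a positive emotional state:
print(anomaly_score(0.9, 0.8, 0.7, valence=0.5))   # suppressed
```

A real system would estimate these inputs from video features rather than receive them directly.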
Citations: 2
Development of Ontology Based Competence Management Model for Non-Formal Education Services
IF 1.0 | Q4 | COMPUTER SCIENCE, THEORY & METHODS | Pub Date: 2019-12-01 | DOI: 10.2478/acss-2019-0014
Uldis Zandbergs, J. Grundspeņķis, Janis Judrups, Signe Brike
Abstract Competence management is a discipline that has recently regained popularity due to the growing demand for constantly higher competences of employees as well as graduates. One of the main implementation challenges of competence management is that, as a rule, it is based on experts' implicit knowledge. This is the reason why the transformation of implicit knowledge into explicit knowledge is practically unmanageable and, as a consequence, limits the ability to transfer already existing knowledge from one organisation to another. The paper proposes an ontology-based competence model that allows the reuse of existing competence frameworks in the field of non-formal education, where different competence frameworks need to be used together for the purpose of identification, assessment and development of customers' competences without forcing the organisations to change their routine competence management processes. The proposed competence model is used as a basis for the development of a competence management model on which IT tools that support competence management processes may be built. Several existing frameworks have been analysed, and the terminology used in them has been combined in a single model. The usage of the proposed model is discussed, and possible IT tools to support the competence management process are identified in the paper.
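The core idea, a shared ontology through which competence names from different frameworks are mapped to common concepts, can be sketched with a toy two-framework ontology. All names below are hypothetical:

```python
# Hypothetical mini-ontology: framework-specific competence names mapped
# to shared concepts, so two frameworks can be used together without
# either organisation changing its own terminology.
ONTOLOGY = {
    "framework_a": {"team leadership": "leadership",
                    "spreadsheet skills": "data_handling"},
    "framework_b": {"people management": "leadership",
                    "data literacy": "data_handling"},
}

def translate(competence, source, target):
    """Map a competence name from one framework to another via the shared
    concept, or return None when no common concept exists."""
    concept = ONTOLOGY[source].get(competence)
    if concept is None:
        return None
    for name, c in ONTOLOGY[target].items():
        if c == concept:
            return name
    return None

print(translate("team leadership", "framework_a", "framework_b"))
# people management
```

A real competence ontology would also carry competence levels, relations between concepts and assessment criteria; this shows only the cross-framework lookup that makes reuse possible.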
Citations: 6
A Semantic Retrieval System for Case Law
IF 1 Q4 COMPUTER SCIENCE, THEORY & METHODS Pub Date : 2019-05-01 DOI: 10.2478/acss-2019-0006
E. P. Ebietomere, G. Ekuobase
Abstract Legal reasoning, the core of legal practice in many countries, follows “stare decisis”, and its soundness is usually strengthened by the relevant case law consulted. However, accessing and retrieving relevant case law is tiring for legal practitioners and constitutes a serious drain on their productivity. Existing efforts at addressing this problem are conceptual, restrictive or unreliable. Specifically, existing semantic retrieval (SR) systems for case law still fall short of the exceptional retrieval precision desired. Ontology promises to meet this need if introduced into the SR system. Consequently, an ontology-based SR system for case law has been built using the systems analysis and design methodology; in particular, component-based software engineering and agile methodologies are employed to implement the system. Finally, the search and retrieval performance of the resulting SR system has been evaluated using the heuristic evaluation method. The retrieval system has been shown to achieve about 94 % precision, 80 % recall and 84 % F-measure. Overall, the paper implements the SR system for case law with excellent precision and affirms the superiority of the ontology approach over other semantic approaches to SR systems for document retrieval in the legal domain.
Applied Computer Systems, vol. 22, no. 1, pp. 38–48, 2019.
Citations: 3
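The 94 % precision, 80 % recall and 84 % F-measure quoted above are the standard set-based retrieval metrics. A minimal, self-contained sketch of how they are computed for a single query follows; the document IDs are hypothetical, and the paper's aggregate figures are presumably averaged over many queries:

```python
def precision_recall_f1(retrieved: set[str], relevant: set[str]) -> tuple[float, float, float]:
    """Standard set-based retrieval metrics.

    precision = |retrieved ∩ relevant| / |retrieved|
    recall    = |retrieved ∩ relevant| / |relevant|
    F1        = harmonic mean of precision and recall
    """
    hits = len(retrieved & relevant)
    precision = hits / len(retrieved) if retrieved else 0.0
    recall = hits / len(relevant) if relevant else 0.0
    if precision + recall == 0.0:
        return precision, recall, 0.0
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# Hypothetical single query: the system returns 5 cases, 4 of which are
# relevant, and the collection holds 5 relevant cases in total.
p, r, f = precision_recall_f1(
    retrieved={"c1", "c2", "c3", "c4", "c5"},
    relevant={"c1", "c2", "c3", "c4", "c6"},
)
print(p, r, f)  # 0.8 0.8 0.8
```

Note that the balanced F1 is the harmonic mean, so it always lies between precision and recall; other F-measure variants weight precision and recall differently.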