
Latest Publications in Applied Computer Systems

Some Aspects of Good Practice for Safe Use of Wi-Fi, Based on Experiments and Standards
IF 1.0 | Pub Date: 2019-12-01 | DOI: 10.2478/acss-2019-0020
I. Gorbans, A. Jurenoks
Abstract The aim of the research is to study the effect of microwave Wi-Fi radiation on humans and plants. The paper reviews national standards for permissible exposure levels to microwave radiation, measures electric field intensity and argues for a particular view of the safe use of microwave technologies based on multiple plant cultivation experiments at different distances from a Wi-Fi router. The results demonstrate that the radiation of Wi-Fi routers significantly impairs plant growth, development, yield and, unexpectedly, drought resistance at short distances from the microwave source (within 1 m to 2 m; –33 dBm to –43 dBm; >10 V/m). Slight effects are found up to about 4.5 m from a full-power home Wi-Fi router. As a result, suggestions are made for safe and balanced use of modern wireless technologies, which can complement occupational safety and health regulations.
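As a rough, back-of-envelope illustration of how received Wi-Fi power falls off with distance, the Python sketch below applies the standard free-space path-loss formula at 2.4 GHz. The 20 dBm EIRP value and the 0 dBi receive antenna are assumptions chosen for illustration only; real routers, antennas and indoor obstructions give lower readings, and this is not the measurement procedure used in the paper.

```python
import math

def free_space_path_loss_db(distance_m: float, freq_hz: float) -> float:
    """Free-space path loss in dB: 20*log10(4*pi*d*f/c)."""
    c = 3.0e8  # speed of light, m/s
    return 20.0 * math.log10(4.0 * math.pi * distance_m * freq_hz / c)

def received_power_dbm(eirp_dbm: float, distance_m: float, freq_hz: float = 2.4e9) -> float:
    """Received power at an isotropic (0 dBi) antenna under idealised free-space conditions."""
    return eirp_dbm - free_space_path_loss_db(distance_m, freq_hz)

if __name__ == "__main__":
    eirp = 20.0  # assumed 20 dBm (100 mW) EIRP, a typical regulatory limit for 2.4 GHz Wi-Fi
    for d in (0.5, 1.0, 2.0, 4.5):
        print(f"{d:>4.1f} m : {received_power_dbm(eirp, d):6.1f} dBm")
```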
Citations: 2
Results From Expert Survey on System Analysis Process Activities
IF 1.0 | Pub Date: 2019-12-01 | DOI: 10.2478/acss-2019-0018
Laima Leimane, O. Ņikiforova
Abstract System analysis is a crucial and complex step in the software engineering process, which affects the overall success of the project and the quality of its outcome. Even though Agile methods have become widely popular, they provide little structure for requirements elicitation and specification, which can affect whether a project has a favourable outcome. Nevertheless, regardless of the approach chosen by industry practitioners, it is important to identify which activities are currently performed and to analyse the causes and possible issues that are encountered. The paper presents results from an expert survey on the importance of activities related to the requirements elicitation, analysis and specification process and on the use of tools to support this process. The Delphi method, which is used to evaluate the responses, is described. Lists of activities are ranked according to importance, and additional information on expert responses is given in the paper. The information can give an insight into the activities and tools that are used in the industry.
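The abstract does not spell out how the expert rankings are aggregated; a common companion analysis in Delphi studies is to sum the ranks per activity and report Kendall's coefficient of concordance W as a measure of expert agreement. The sketch below shows that calculation on invented data; the activity names and the rank matrix are hypothetical, not the survey's results.

```python
import numpy as np

def rank_aggregation(ranks: np.ndarray):
    """ranks: (m experts, n activities) matrix of ranks 1..n, where 1 = most important."""
    m, n = ranks.shape
    rank_sums = ranks.sum(axis=0)                      # R_i, one per activity
    s = ((rank_sums - rank_sums.mean()) ** 2).sum()    # spread of rank sums
    w = 12.0 * s / (m ** 2 * (n ** 3 - n))             # Kendall's W in [0, 1]
    order = np.argsort(rank_sums)                      # lower rank sum = more important
    return order, w

if __name__ == "__main__":
    activities = ["stakeholder interviews", "use case modelling",
                  "prototyping", "requirements review"]        # hypothetical items
    ranks = np.array([[1, 2, 4, 3],                            # hypothetical expert answers
                      [1, 3, 4, 2],
                      [2, 1, 4, 3]])
    order, w = rank_aggregation(ranks)
    print("importance order:", [activities[i] for i in order])
    print(f"Kendall's W = {w:.2f}")
```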
Citations: 1
Extracting TFM Core Elements From Use Case Scenarios by Processing Structure and Text in Natural Language
IF 1.0 | Pub Date: 2019-12-01 | DOI: 10.2478/acss-2019-0012
Erika Nazaruka, J. Osis, Viktorija Gribermane
Abstract Extracting the core elements of the Topological Functioning Model (TFM) from use case scenarios requires processing both the structure and the natural language constructs in use case step descriptions. The processing steps are discussed in the present paper. The analysis of natural language constructs is based on the output of Stanford CoreNLP, a natural language processing pipeline that allows analysing text at the paragraph, sentence and word levels. The proposed technique allows extracting actions, objects, results, preconditions, post-conditions and executors of the functional features, as well as cause-effect relations between them. However, its accuracy depends on the language constructs used and on the accuracy of the specification of event flows. The analysis of the results allows concluding that even use case specifications require a rigorous, or even uniform, structure of paths and sentences, as well as awareness of possible parsing errors.
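As a minimal stand-in for the kind of dependency-based extraction described here, the sketch below pulls a verb (candidate action), its subject (candidate executor) and its object (candidate object) out of a single use case step. It uses spaCy rather than Stanford CoreNLP purely for brevity, and the dependency labels and heuristics are simplified assumptions, not the authors' extraction rules.

```python
import spacy

# A rough stand-in for the dependency-based extraction discussed in the paper;
# spaCy is used here instead of Stanford CoreNLP purely for brevity.
nlp = spacy.load("en_core_web_sm")

def extract_candidates(step: str):
    """Return (executor, action, object) candidates from one use case step."""
    doc = nlp(step)
    triples = []
    for token in doc:
        if token.pos_ == "VERB":
            subj = [c.text for c in token.children if c.dep_ in ("nsubj", "nsubjpass")]
            obj = [c.text for c in token.children if c.dep_ in ("dobj", "obj")]
            triples.append((subj, token.lemma_, obj))
    return triples

print(extract_candidates("The customer submits the order form."))
# e.g. [(['customer'], 'submit', ['form'])]
```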
Citations: 0
Fuzzy Expert System Generalised Model for Medical Applications
IF 1.0 | Pub Date: 2019-12-01 | DOI: 10.2478/acss-2019-0016
Osée Muhindo Masivi
Abstract Over the past two decades, an exponential growth of medical fuzzy expert systems has been observed. These systems address specific forms of medical and health problems, resulting in differentiated models that are application dependent and may lack adaptability. This research proposes a generalised model encompassing the major features of specialised existing fuzzy systems. Generalisation modelling by design was applied, in which the major components of the differentiated systems were identified and used as the components of the general model. The prototype shows that the proposed model allows medical experts to define fuzzy variables (rule base) for any medical application, and users to enter symptoms (fact base) and query their medical condition from the designed generalised core inference engine. Further research may include adding more composition conditions, more combining techniques and more tests in several environments in order to check its precision, sensitivity and specificity.
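To make the idea of a generalised core inference engine concrete, the sketch below implements one Mamdani-style inference step in plain Python: triangular membership functions, min-implication for rule firing and centroid defuzzification. The variables (body temperature, risk score) and the two rules are hypothetical examples, not taken from the paper.

```python
def tri(x, a, b, c):
    """Triangular membership function with feet a, c and peak b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def infer(temp_c: float):
    """Two hypothetical rules: high temperature -> high risk, normal temperature -> low risk."""
    mu_high_temp = tri(temp_c, 37.0, 40.0, 43.0)
    mu_normal_temp = tri(temp_c, 35.0, 36.8, 38.0)

    # Rule firing strengths clip the output sets (Mamdani min-implication).
    risk_universe = [i / 10.0 for i in range(0, 101)]        # risk score 0..10
    aggregated = []
    for r in risk_universe:
        high_risk = min(mu_high_temp, tri(r, 5.0, 10.0, 15.0))
        low_risk = min(mu_normal_temp, tri(r, -5.0, 0.0, 5.0))
        aggregated.append(max(high_risk, low_risk))

    # Centroid defuzzification over the aggregated output set.
    num = sum(r * m for r, m in zip(risk_universe, aggregated))
    den = sum(aggregated)
    return num / den if den else 0.0

print(f"risk score at 39.5 C: {infer(39.5):.2f}")
```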
Citations: 0
Genetic Algorithm Based Feature Selection Technique for Electroencephalography Data
IF 1.0 | Pub Date: 2019-12-01 | DOI: 10.2478/acss-2019-0015
Tariq Ali, Asif Nawaz, H. Sadia
Abstract High dimensionality is a well-known problem: a dataset may contain a huge number of features, yet not all of them are helpful for a particular data mining task, for example, classification or clustering. Therefore, feature selection is frequently used to reduce dataset dimensionality. Feature selection is a multi-objective task, which reduces dataset dimensionality, decreases the running time and, furthermore, improves the expected accuracy. In this study, our goal is to reduce the number of features of electroencephalography data for eye state classification and to achieve the same or even better classification accuracy with the smallest number of features. We propose a genetic algorithm-based feature selection technique with the KNN classifier. The accuracy obtained with the feature subset selected by the proposed technique is improved compared to the full feature set. Results show that the classification precision of the proposed strategy is enhanced by 3 % on average when contrasted with the accuracy without feature selection.
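A minimal sketch of genetic-algorithm feature selection wrapped around a KNN classifier is given below, with cross-validated accuracy of a binary feature mask as the fitness. The synthetic dataset, population size and operator choices are placeholder assumptions; the paper's EEG data and exact GA configuration are not reproduced.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=300, n_features=14, n_informative=6, random_state=0)

def fitness(mask: np.ndarray) -> float:
    """Cross-validated KNN accuracy on the selected feature subset."""
    if mask.sum() == 0:
        return 0.0
    clf = KNeighborsClassifier(n_neighbors=5)
    return cross_val_score(clf, X[:, mask.astype(bool)], y, cv=5).mean()

def ga_select(n_features, pop_size=20, generations=15, p_mut=0.05):
    pop = rng.integers(0, 2, size=(pop_size, n_features))
    for _ in range(generations):
        scores = np.array([fitness(ind) for ind in pop])
        # Binary tournament selection of parents.
        parents = pop[[max(rng.integers(0, pop_size, 2), key=lambda i: scores[i])
                       for _ in range(pop_size)]]
        # One-point crossover between neighbouring parents.
        children = parents.copy()
        for i in range(0, pop_size - 1, 2):
            cut = rng.integers(1, n_features)
            children[i, cut:], children[i + 1, cut:] = (parents[i + 1, cut:].copy(),
                                                        parents[i, cut:].copy())
        # Bit-flip mutation.
        flips = rng.random(children.shape) < p_mut
        children[flips] = 1 - children[flips]
        # Elitism: carry over the best individual of this generation.
        children[0] = pop[scores.argmax()]
        pop = children
    scores = np.array([fitness(ind) for ind in pop])
    return pop[scores.argmax()]

best = ga_select(X.shape[1])
print("selected features:", np.flatnonzero(best), "accuracy:", round(fitness(best), 3))
```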
Citations: 3
A Dataset-Independent Model for Estimating Software Development Effort Using Soft Computing Techniques
IF 1.0 | Pub Date: 2019-12-01 | DOI: 10.2478/acss-2019-0011
Mahdi Khazaiepoor, A. K. Bardsiri, F. Keynia
Abstract In recent years, numerous endeavours have been made in the area of software development effort estimation for calculating software costs in the preliminary development stages. These studies have resulted in a great many models. Despite these efforts, the substantial problems of the proposed methods are their dependency on the dataset used and, sometimes, their lack of appropriate efficiency. The current article presents a model for software development effort estimation that makes use of evolutionary algorithms and neural networks. The distinctive characteristic of this model is its lack of dependency on the dataset used as well as its high efficiency. To evaluate the proposed model, six different datasets from the area of software effort estimation have been used. The reason for using several datasets is to investigate the independence of the model performance from the dataset used. The evaluation metrics have been MMRE, MdMRE and PRED (0.25). The results indicate that the proposed model, besides delivering high efficiency in contrast to its counterparts, produces the best responses for all of the used datasets.
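MMRE, MdMRE and PRED(0.25) have standard definitions based on the magnitude of relative error (MRE) of each prediction; the sketch below computes them on made-up effort values purely for illustration.

```python
import numpy as np

def mre(actual, predicted):
    """Magnitude of relative error per project: |actual - predicted| / actual."""
    actual, predicted = np.asarray(actual, float), np.asarray(predicted, float)
    return np.abs(actual - predicted) / actual

def mmre(actual, predicted):
    return mre(actual, predicted).mean()

def mdmre(actual, predicted):
    return np.median(mre(actual, predicted))

def pred(actual, predicted, level=0.25):
    """Share of projects whose MRE does not exceed `level` (PRED(0.25) by default)."""
    return (mre(actual, predicted) <= level).mean()

actual = [120, 80, 300, 45, 60]        # hypothetical effort values (person-hours)
predicted = [100, 90, 280, 70, 58]
print(f"MMRE={mmre(actual, predicted):.3f}, "
      f"MdMRE={mdmre(actual, predicted):.3f}, "
      f"PRED(0.25)={pred(actual, predicted):.2f}")
```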
Citations: 3
Affective State Based Anomaly Detection in Crowd
IF 1.0 | Pub Date: 2019-12-01 | DOI: 10.2478/acss-2019-0017
Glorija Baliniskite, E. Lavendelis, Mara Pudane
Abstract To distinguish individuals with dangerous abnormal behaviour from the crowd, human characteristics (e.g., speed and direction of motion, interaction with other people), crowd characteristics (such as flow and density), the space available to individuals, etc. must be considered. The paper proposes an approach that considers individual and crowd metrics to determine anomaly. An individual's abnormal behaviour alone does not indicate a threat towards other individuals, as such behaviour can also be triggered by positive emotions or events. To exclude individuals whose abnormal behaviour is potentially unrelated to aggression and is not dangerous to their environment, it is suggested to use the emotional state of individuals. The aim of the proposed approach is to automate video surveillance systems by enabling them to automatically detect potentially dangerous situations.
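One plausible way to combine individual, crowd and affective cues into a single anomaly score is sketched below: a motion-deviation and density term gated by negative affect, so that abnormal but positive behaviour is suppressed. The weights, thresholds and the gating rule are illustrative assumptions, not the model proposed in the paper.

```python
from dataclasses import dataclass

@dataclass
class Observation:
    speed: float          # m/s, individual speed
    crowd_speed: float    # m/s, average speed of the surrounding crowd
    density: float        # persons / m^2 around the individual
    valence: float        # estimated affect in [-1, 1]; negative = distress/anger
    arousal: float        # estimated affect intensity in [0, 1]

def anomaly_score(o: Observation, w_motion=0.6, w_density=0.4) -> float:
    """Illustrative score: motion deviation and local density, gated by negative affect."""
    motion_dev = abs(o.speed - o.crowd_speed) / max(o.crowd_speed, 0.1)
    density_term = min(o.density / 4.0, 1.0)          # saturate at ~4 persons/m^2
    behaviour = w_motion * min(motion_dev, 1.0) + w_density * density_term
    # Suppress alarms for abnormal-but-positive behaviour (e.g., celebration).
    affect_gate = o.arousal * max(-o.valence, 0.0)
    return behaviour * affect_gate

obs = Observation(speed=3.2, crowd_speed=1.1, density=3.0, valence=-0.7, arousal=0.9)
print(f"anomaly score: {anomaly_score(obs):.2f}")   # closer to 1 = more suspicious
```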
Citations: 2
A New Big Data Model Using Distributed Cluster-Based Resampling for Class-Imbalance Problem
IF 1.0 | Pub Date: 2019-12-01 | DOI: 10.2478/acss-2019-0013
Duygu Sinanc Terzi, Ş. Sağiroğlu
Abstract The class imbalance problem, one of the common data irregularities, leads to the development of under-represented models. To resolve this issue, the present study proposes a new cluster-based MapReduce design, entitled Distributed Cluster-based Resampling for Imbalanced Big Data (DIBID). The design aims at modifying the existing dataset to increase classification success. Within the study, DIBID has been implemented on public datasets under two strategies. The first strategy has been designed to present the success of the model on datasets with different imbalance ratios. The second strategy has been designed to compare the success of the model with other imbalanced big data solutions in the literature. According to the results, DIBID outperformed other imbalanced big data solutions in the literature and increased area under the curve (AUC) values by between 10 % and 24 % in the case study.
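The general idea behind cluster-based resampling can be illustrated on a single machine (the paper's DIBID is a distributed MapReduce design, which is not reproduced here): cluster the majority class with k-means and undersample each cluster proportionally until the classes are balanced. The dataset and parameters below are placeholders.

```python
import numpy as np
from sklearn.cluster import KMeans

def cluster_based_undersample(X, y, majority_label, n_clusters=10, random_state=0):
    """Balance classes by sampling the majority class proportionally from k-means clusters."""
    rng = np.random.default_rng(random_state)
    maj_idx = np.flatnonzero(y == majority_label)
    min_idx = np.flatnonzero(y != majority_label)
    target = len(min_idx)                       # shrink the majority to the minority size

    labels = KMeans(n_clusters=n_clusters, n_init=10,
                    random_state=random_state).fit_predict(X[maj_idx])
    keep = []
    for c in range(n_clusters):
        members = maj_idx[labels == c]
        quota = max(1, round(target * len(members) / len(maj_idx)))
        keep.extend(rng.choice(members, size=min(quota, len(members)), replace=False))

    sel = np.concatenate([min_idx, np.array(keep, dtype=int)])
    return X[sel], y[sel]

# Hypothetical imbalanced data: roughly 95 % negatives, 5 % positives.
rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 8))
y = (rng.random(2000) < 0.05).astype(int)
Xb, yb = cluster_based_undersample(X, y, majority_label=0)
print("before:", np.bincount(y), "after:", np.bincount(yb))
```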
Citations: 3
Development of Ontology Based Competence Management Model for Non-Formal Education Services
IF 1.0 | Pub Date: 2019-12-01 | DOI: 10.2478/acss-2019-0014
Uldis Zandbergs, J. Grundspeņķis, Janis Judrups, Signe Brike
Abstract Competence management is a discipline that has recently regained popularity due to the growing demand for constantly higher competences of employees as well as graduates. One of the main implementation challenges of competence management is that, as a rule, it is based on experts' implicit knowledge. This is the reason why the transformation of implicit knowledge into explicit knowledge is practically unmanageable and, as a consequence, limits the ability to transfer already existing knowledge from one organisation to another. The paper proposes an ontology-based competence model that allows the reuse of existing competence frameworks in the field of non-formal education, where different competence frameworks need to be used together for the identification, assessment and development of customers' competences without forcing organisations to change their routine competence management processes. The proposed competence model is used as a basis for the development of a competence management model on which IT tools that support competence management processes may be built. Several existing frameworks have been analysed, and the terminology used in them has been combined into a single model. The usage of the proposed model is discussed, and possible IT tools to support the competence management process are identified in the paper.
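A minimal sketch of representing competences from different frameworks as an RDF graph is shown below using rdflib. The namespace, class and property names (Competence, definedIn, relatedTo, hasCompetence, competenceLevel) are hypothetical placeholders and do not reproduce the paper's ontology vocabulary.

```python
from rdflib import Graph, Literal, Namespace, RDF, RDFS

# Hypothetical namespace and terms; the paper's actual ontology is not reproduced here.
CM = Namespace("http://example.org/competence#")

g = Graph()
g.bind("cm", CM)

# A competence defined in one framework ...
g.add((CM.DataAnalysis, RDF.type, CM.Competence))
g.add((CM.DataAnalysis, RDFS.label, Literal("Data analysis")))
g.add((CM.DataAnalysis, CM.definedIn, CM.FrameworkA))

# ... mapped onto a related competence from another framework.
g.add((CM.StatisticalLiteracy, RDF.type, CM.Competence))
g.add((CM.StatisticalLiteracy, CM.definedIn, CM.FrameworkB))
g.add((CM.DataAnalysis, CM.relatedTo, CM.StatisticalLiteracy))

# A learner's assessed level for a competence.
g.add((CM.learner42, CM.hasCompetence, CM.DataAnalysis))
g.add((CM.learner42, CM.competenceLevel, Literal(3)))

# Query: which competences does learner42 hold, and what are they related to?
q = """
SELECT ?comp ?related WHERE {
    cm:learner42 cm:hasCompetence ?comp .
    OPTIONAL { ?comp cm:relatedTo ?related }
}
"""
for comp, related in g.query(q, initNs={"cm": CM}):
    print(comp, "->", related)
```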
Citations: 6
Shared Subscribe Hyper Simulation Optimization (SUBHSO) Algorithm for Clustering Big Data – Using Big Databases of Iran Electricity Market
IF 1.0 | Pub Date: 2019-05-01 | DOI: 10.2478/acss-2019-0007
Mesbaholdin Salami, F. Sobhani, M. Ghazizadeh
Abstract Many real-world problems involve big data with a large number of recorded fields and/or attributes. In such cases, data mining requires dimension reduction techniques, because conventional clustering methods face serious challenges in dealing with big data. The subspace selection method is one of the most important dimension reduction techniques. In such methods, a selected set of subspaces is substituted for the general dataset of the problem, and clustering is done using this set. This article introduces the Shared Subscribe Hyper Simulation Optimization (SUBHSO) algorithm to introduce optimized cluster centres to a set of subspaces. SUBHSO uses an optimization loop for modifying and optimizing the coordinates of the cluster centres with particle swarm optimization (PSO) and a fitness function calculated using Monte Carlo simulation. The case study on the big data of the Iran electricity market (IEM) has shown an improvement in the defined fitness function, which represents cluster cohesion and separation, relative to other dimension reduction algorithms.
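The sketch below shows a standard particle swarm optimisation loop over a set of cluster centres. The within-cluster sum of squares is used as a simple stand-in fitness; the paper's actual fitness is computed with Monte Carlo simulation on subspaces of the IEM data, which is not reproduced here. The data and PSO parameters are placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))           # placeholder data; one row per record
K = 4                                   # number of clusters

def wcss(centres: np.ndarray) -> float:
    """Within-cluster sum of squares; stands in for the paper's Monte-Carlo-based fitness."""
    d = np.linalg.norm(X[:, None, :] - centres[None, :, :], axis=2)
    return (d.min(axis=1) ** 2).sum()

def pso_cluster(n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5):
    dim = (K, X.shape[1])
    pos = rng.normal(size=(n_particles, *dim))      # each particle encodes K centres
    vel = np.zeros_like(pos)
    pbest, pbest_f = pos.copy(), np.array([wcss(p) for p in pos])
    gbest = pbest[pbest_f.argmin()].copy()

    for _ in range(iters):
        r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = pos + vel
        f = np.array([wcss(p) for p in pos])
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = pos[improved], f[improved]
        gbest = pbest[pbest_f.argmin()].copy()
    return gbest, pbest_f.min()

centres, fit = pso_cluster()
print("centres shape:", centres.shape, "best WCSS:", round(fit, 1))
```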
Citations: 1