Pub Date: 2020-11-01 | DOI: 10.1142/s2196888820500232
Ishwar Baidari, Channamma Patil
Clustering is a key method in unsupervised learning, with applications in data mining, pattern recognition and intelligent information processing. However, the number of groups to be formed, usually denoted as k, is a vital parameter for most existing clustering algorithms, as their results depend heavily on it. Finding the optimal k value is very challenging. This paper proposes a novel idea for finding the correct number of groups in a dataset based on data depth. The idea is to avoid the traditional process of running the clustering algorithm over a dataset once per candidate value and, further, to find the k value for a dataset without setting any specific search range for the k parameter. We experiment with different indices, namely CH, KL, Silhouette, Gap, CSP and the proposed method, on different real and synthetic datasets to estimate the correct number of groups. The experimental results on real and synthetic datasets indicate good performance of the proposed method.
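The paper's data-depth criterion is not detailed in the abstract, but one of the baseline indices it compares against, the Silhouette index, can be sketched in a few lines: for each point, compare the mean distance to its own cluster (a) with the mean distance to the nearest other cluster (b), and average (b - a) / max(a, b). A labeling with the right number of groups scores higher. The points and labelings below are invented for illustration.

```python
# A minimal silhouette sketch (one of the baseline indices, not the
# paper's proposed data-depth criterion). Toy 1-D points.
from math import dist

def silhouette(points, labels):
    """Mean silhouette coefficient over all points."""
    score = 0.0
    for i, p in enumerate(points):
        same = [dist(p, q) for j, q in enumerate(points)
                if labels[j] == labels[i] and j != i]
        a = sum(same) / len(same) if same else 0.0
        b = min(
            sum(dist(p, q) for j, q in enumerate(points) if labels[j] == c)
            / sum(1 for l in labels if l == c)
            for c in set(labels) if c != labels[i]
        )
        score += (b - a) / max(a, b)
    return score / len(points)

# Two well-separated groups on a line.
pts = [(0.0,), (0.5,), (1.0,), (10.0,), (10.5,), (11.0,)]
k2 = [0, 0, 0, 1, 1, 1]   # the "true" k = 2 grouping
k3 = [0, 0, 1, 2, 2, 2]   # an over-split k = 3 grouping
assert silhouette(pts, k2) > silhouette(pts, k3)
```

Running this over a range of candidate k values and picking the best score is exactly the k-fold search over the dataset that the proposed method is designed to avoid.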
Title: A Criterion for Deciding the Number of Clusters in a Dataset Based on Data Depth (Vietnam J. Comput. Sci.)
Pub Date: 2020-11-01 | DOI: 10.1142/s2196888820500220
J. Guerrero-Sosa, Víctor Hugo Menéndez Domínguez, M. Castellanos-Bolaños, L. Curi-Quintal
This paper presents an analysis of scientific collaboration through graph theory, based on a previous study focused on the collaborative work of researchers within an institution. The proposal also represents the inter-institutional collaboration of research groups by combining graph theory and data mining. The state of the art covers the concepts of scientific production, digital repositories, interoperability between repositories, the Open Science law in Mexico, graph theory and its use in previous studies of scientific collaboration, and the definition of research groups in Mexico. The methodology uses elements of knowledge extraction for data mining, involving retrieval, processing and visualization. Results present the collaboration status of the research groups at the Universidad Autónoma de Yucatán, both internally and externally. Internally, 22 groups were found, and each researcher collaborates, on average, with six other professors within the institution. In addition, consolidated research groups show the highest levels of production and collaboration at the national and international level, compared with less consolidated groups.
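The "six collaborators on average" figure is the mean degree of the co-authorship graph. A minimal sketch of that computation, with invented researcher names standing in for real co-authorship records:

```python
# Sketch: average number of distinct in-institution collaborators per
# researcher, from a co-authorship edge list (names are hypothetical).
from collections import defaultdict

def avg_collaborators(coauthor_pairs):
    neighbors = defaultdict(set)
    for a, b in coauthor_pairs:
        neighbors[a].add(b)
        neighbors[b].add(a)
    # Mean degree of the (undirected) collaboration graph.
    return sum(len(s) for s in neighbors.values()) / len(neighbors)

pairs = [("r1", "r2"), ("r1", "r3"), ("r2", "r3"), ("r3", "r4")]
print(avg_collaborators(pairs))
```

Using a set per researcher counts each collaborator once even when two people co-author several papers together, which is the natural reading of "collaborates with six other professors".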
Title: Analysis of Internal and External Academic Collaboration in an Institution Through Graph Theory (Vietnam J. Comput. Sci.)
Pub Date: 2020-11-01 | DOI: 10.1142/s2196888820500219
A. A. Neloy, R. A. Bindu, S. Alam, Ridwanul Haque, Md. Saif Khan, Nasim Mahmud Mishu, Shahnewaz Siddique
An improved version of Alpha-N, a self-powered, wheel-driven Automated Delivery Robot (ADR), is presented in this study. Alpha-N-V2 is capable of navigating autonomously by detecting and avoiding objects or obstacles in its path. For autonomous navigation and path planning, Alpha-N uses a vector map and calculates the shortest path by the Grid Count Method (GCM) of Dijkstra's Algorithm. An RFID Reading System (RRS) is assembled in Alpha-N to determine landmarks by reading Radio Frequency Identification (RFID) tags. With the help of the RFID tags, Alpha-N verifies the path between source and destination and calibrates its current position. Along with the RRS and GCM, an Object Detection Module (ODM), constructed with Faster R-CNN on a VGGNet-16 architecture, detects and avoids obstacles and supports the Path Planning System (PPS). In the testing phase, the ODM exhibits an accuracy of [Formula: see text], the RRS shows [Formula: see text] accuracy, and the PPS maintains an accuracy of [Formula: see text]. This version of Alpha-N shows significant improvement in performance and usability over the previous version.
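The abstract does not spell out the Grid Count Method, so the following is only a generic sketch of Dijkstra's algorithm on an occupancy grid with uniform per-cell cost, the kind of shortest-path search such a planner builds on:

```python
# Hedged sketch: Dijkstra shortest path on a 4-connected grid with
# uniform step cost (an assumption; the paper's GCM details are not
# given in the abstract). 0 = free cell, 1 = obstacle.
import heapq

def dijkstra_grid(grid, start, goal):
    """Return the shortest path length from start to goal, or None."""
    rows, cols = len(grid), len(grid[0])
    pq, seen = [(0, start)], set()
    while pq:
        cost, (r, c) = heapq.heappop(pq)
        if (r, c) == goal:
            return cost
        if (r, c) in seen:
            continue
        seen.add((r, c))
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                heapq.heappush(pq, (cost + 1, (nr, nc)))
    return None  # goal unreachable

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
print(dijkstra_grid(grid, (0, 0), (2, 0)))  # detours around the blocked row
```

In the robot, obstacle cells would come from the ODM's detections, and RFID landmarks would pin the start cell to a calibrated position.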
Title: Alpha-N-V2: Shortest Path Finder Automated Delivery Robot with Obstacle Detection and Avoiding System (Vietnam J. Comput. Sci.)
Pub Date: 2020-11-01 | DOI: 10.1142/s2196888820500256
J. Castro, J. Gabarró, M. Serna
Often, uncertainty is present in processes that are part of our routines, so having tools to understand the consequences of unpredictability is convenient. We introduce a general framework to deal with uncertainty in the realm of distribution sets, which are descriptions of imprecise probabilities. We propose several non-biased refinement strategies to obtain sensible forecasts about the results of uncertain processes. Initially, uncertainty in a system is modeled as the nondeterministic choice of its possible behaviors. Our refinement hypothesis translates non-determinism into imprecise probabilistic choices. Imprecise probabilities allow us to propose a notion of uncertainty refinement in terms of set inclusions. Later on, unpredictability is tackled through a strategic approach using uncertainty profiles and angel/daemon games (a/d-games). Here, imprecise probabilities form the set of mixed strategies, and Nash equilibria correspond to natural uncertainty refinements. We use this approach to study the performance of Web applications in terms of response times under stress conditions.
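The angel/daemon intuition can be shown on the smallest possible imprecise-probability model: a response is "fast" with probability known only to lie in an interval, and the two players pick the distribution in that set that is best or worst for expected response time. All numbers below are invented for the example; with a linear objective, the extremes sit at the interval endpoints.

```python
# Hedged toy example of a credal (imprecise-probability) bound on
# expected response time. The interval and times are invented.
def expected_time(p_fast, t_fast=0.1, t_slow=2.0):
    """Expected response time given P(fast) = p_fast (seconds)."""
    return p_fast * t_fast + (1 - p_fast) * t_slow

lo, hi = 0.6, 0.9  # imprecise probability: P(fast) is in [0.6, 0.9]
# The objective is linear in p_fast, so the optimum is at an endpoint.
angel = min(expected_time(p) for p in (lo, hi))   # optimistic refinement
daemon = max(expected_time(p) for p in (lo, hi))  # pessimistic refinement
print(angel, daemon)
```

Any refinement of the interval (a tighter subset, per the set-inclusion notion above) can only narrow the [angel, daemon] band of forecasts.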
Title: Refining Indeterministic Choice: Imprecise Probabilities and Strategic Thinking (Vietnam J. Comput. Sci.)
Pub Date: 2020-10-29 | DOI: 10.1142/s219688882150010x
Sebastian Schmied, Daniel Grossmann, Selvine G. Mathias, R. Mueller
An important part of the Industry 4.0 concept is the horizontal and vertical integration of manufacturing systems. Information exchange in traditional production environments happens through interfaces, i.e. connections between strictly defined senders and receivers, which limits the possibility of changing and extending the manufacturing system. A possible approach to enabling uniform information exchange between all system entities is the use of information models: semantic descriptions of the available data. The creation of these models needs to follow the manufacturing process, but also requires a degree of standardization to improve efficiency. Another challenge is the actual technical integration of the information into a common address space. This paper connects an approach for information modeling with a concept for dynamic aggregation. The approach is described with the help of a continuous example that uses OPC UA as middleware technology.
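The aggregation step can be sketched without the OPC UA stack itself: each source exposes an information model (here just a dictionary of node paths), and aggregation merges them into one common address space, namespaced by source so node names cannot collide. The machine and node names are hypothetical, and real OPC UA aggregation would of course work over server address spaces rather than plain dictionaries.

```python
# Minimal sketch (hypothetical names, not the OPC UA API) of merging
# per-machine information models into one common address space.
def aggregate(models):
    """models: {source_name: {node_path: value}} -> flat address space."""
    space = {}
    for source, nodes in models.items():
        for path, value in nodes.items():
            # Prefix with the source namespace to keep nodes distinct.
            space[f"{source}/{path}"] = value
    return space

models = {
    "Press01": {"Temperature": 74.5, "State": "Running"},
    "Mill02": {"Temperature": 61.0},
}
space = aggregate(models)
print(sorted(space))
```

Dynamic aggregation would re-run this merge whenever a source appears or changes, rather than hard-wiring sender/receiver interfaces.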
Title: Integration of Manufacturing Information via Dynamic Information Model Aggregation (Vietnam J. Comput. Sci.)
Pub Date: 2020-10-22 | DOI: 10.1142/s2196888821500081
S. Sowmyayani, V. Murugan, J. Kavitha
Falls are a serious problem among elderly people, and constant monitoring is important for detecting them. Numerous fall-detection methods are currently a significant area of research for safety purposes and for the healthcare industry. The objective of this paper is to identify falls of the elderly; the proposed method introduces keyframe-based fall detection into an elderly care system. Experiments were conducted on the University of Rzeszow (UR) Fall Detection dataset, the Fall Detection Dataset and the MultiCam dataset. The proposed method achieves accuracy rates of 99%, 98.15% and 99% on these datasets, respectively, and comparison with other methods shows it to be more accurate.
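The abstract does not specify how keyframes are chosen, but a common baseline is to keep only frames that differ from the last kept frame by more than a threshold, so that fall analysis runs on a small group of pictures instead of the full video. The sketch below uses toy intensity lists in place of real frames, and the threshold is an invented parameter:

```python
# Hedged sketch of keyframe selection by frame difference (a generic
# baseline, not necessarily the paper's exact method). Each "frame" is
# a toy list of pixel intensities.
def keyframes(frames, threshold):
    kept = [0]  # always keep the first frame
    for i in range(1, len(frames)):
        # L1 difference against the most recently kept frame.
        diff = sum(abs(a - b) for a, b in zip(frames[i], frames[kept[-1]]))
        if diff > threshold:
            kept.append(i)
    return kept

frames = [[10, 10], [10, 11], [40, 40], [41, 40], [90, 95]]
print(keyframes(frames, threshold=20))  # indices of selected keyframes
```

Near-duplicate frames (indices 1 and 3 here) are skipped, which is what makes keyframe-based pipelines cheap enough for constant monitoring.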
Title: Fall Detection in Elderly Care System Based on Group of Pictures (Vietnam J. Comput. Sci.)
Pub Date: 2020-08-01 | DOI: 10.1142/s219688882050013x
Aleksandra Klašnja-Milićević, M. Ivanović, Bela Stantic
Learning analytics, as a rapidly evolving field, offers an encouraging approach to understanding, optimizing and enhancing the learning process. Learners can interact with a learning analytics system through an adequate user interface; such systems enable features such as learning recommendations, visualizations, reminders, ratings and self-assessment possibilities. This paper proposes a framework for learning analytics aimed at improving personalized learning environments, encouraging learners' skills to monitor, adapt, and improve their own learning. It is an attempt to articulate the characterizing properties that reveal the association between learning analytics and personalized learning environments. A case study was performed to verify the data analysis approaches and to determine the validity and accuracy of learning analytics and its correspondence to learning profiles. The findings indicate that educational data for learning analytics are context specific, and variables carry different meanings and can have different implications for predicting learning success.
Title: Designing Personalized Learning Environments - The Role of Learning Analytics (Vietnam J. Comput. Sci.)
Pub Date: 2020-08-01 | DOI: 10.1142/s2196888820500165
F. Leon, P. Cașcaval, C. Bǎdicǎ
This paper addresses the optimal allocation of spare modules in large series-redundant systems in order to obtain a required reliability under cost constraints. Both active and standby redundancy are considered. Moreover, for a subsystem with standby redundancy, two cases are examined: in the first, all spares are maintained in a cold state (cold standby redundancy); in the second, one of the spares is maintained in warm conditions to reduce the time needed to put it into operation when the active module fails. To solve this optimization problem, an analytical method based on the Lagrange multipliers technique is first applied for the simpler case of active redundancy. The results are then improved using Pairwise Hill Climbing, an original fine-tuning algorithm. An alternative approach is an innovative evolutionary algorithm, RELIVE, in which an individual lives for several generations and improves its fitness through local search. These methods are especially needed for very large systems.
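For active redundancy, the series-system reliability is the product over subsystems of 1 - (1 - r_i)^n_i, where r_i is the module reliability and n_i the number of parallel modules. A pairwise move, shifting one module between two subsystems, can then improve reliability at constant total cost. The sketch below assumes unit module costs and is only an illustration of that pairwise-improvement idea, not the paper's exact algorithm:

```python
# Hedged sketch of pairwise improvement for active redundancy
# (unit module costs assumed; not the authors' exact procedure).
def reliability(r, n):
    """Series system of parallel subsystems: prod(1 - (1 - r_i)^n_i)."""
    prod = 1.0
    for ri, ni in zip(r, n):
        prod *= 1 - (1 - ri) ** ni
    return prod

def pairwise_hill_climb(r, n):
    """Shift one module between subsystems while reliability improves."""
    improved = True
    while improved:
        improved = False
        for i in range(len(n)):
            for j in range(len(n)):
                if i != j and n[i] > 1:
                    trial = n[:]
                    trial[i] -= 1
                    trial[j] += 1
                    if reliability(r, trial) > reliability(r, n):
                        n, improved = trial, True
    return n

r = [0.9, 0.7]   # module reliabilities per subsystem
n = [3, 1]       # initial allocation, total budget of 4 modules
best = pairwise_hill_climb(r, n)
print(best, round(reliability(r, best), 4))
```

Starting from [3, 1] (reliability about 0.699), one move to [2, 2] raises reliability to about 0.901: the weakest subsystem, not the most reliable one, deserves the spare.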
Title: Optimization Methods for Redundancy Allocation in Large Systems (Vietnam J. Comput. Sci.)
Pub Date: 2020-06-22 | DOI: 10.1142/S2196888820500244
S. S. M. Ziabari, Jan Treur
This paper addresses the influence of acute severe stress or extreme emotion using a network-oriented modeling methodology. Adaptive temporal-causal network modeling is an approach for addressing phenomena whose complexity cannot be, or can hardly be, examined in a real-world experiment. In the first phase, the suppression of existing network connections as a consequence of acute stress is modeled; in the second phase, after some time the suppression is relaxed and new learning of decision making in the presence of stress starts again.
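The core of a temporal-causal update can be sketched very compactly: each state moves toward the (bounded) weighted sum of its incoming impacts at a speed factor, and stress suppression would correspond to scaling down connection weights. This is a generic single-step sketch with invented states and weights, not the authors' full adaptive model:

```python
# Hedged sketch of one temporal-causal update step (generic form, not
# the paper's full adaptive model). States and weights are invented.
def step(states, weights, eta=0.5):
    """One synchronous update: X <- X + eta * (agg(impact) - X)."""
    new = {}
    for node, value in states.items():
        incoming = weights.get(node, [])
        if not incoming:
            new[node] = value  # source states keep their value
            continue
        impact = sum(w * states[src] for src, w in incoming)
        new[node] = value + eta * (min(impact, 1.0) - value)
    return new

states = {"stimulus": 1.0, "decision": 0.0}
weights = {"decision": [("stimulus", 0.8)]}  # one causal connection
states = step(states, weights)
print(states["decision"])
```

Suppressing the "stimulus" to "decision" weight toward zero (the first phase above) would freeze the decision state; restoring and re-learning the weight (the second phase) lets it respond again.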
Title: Decision Making Under Acute Stress Modeled by an Adaptive Temporal-Causal Network Model (Vietnam J. Comput. Sci.)
Pub Date: 2020-06-08 | DOI: 10.1142/s2196888820500207
Ayoub Bokhabrine, Ismaïl Biskri, Nadia Ghazzali
The analysis of numerical data, whether structured, semi-structured, or raw, is of paramount importance in many sectors of economic, scientific, or simply social activity. The process of extracting association rules is based on the lexical quality of the text and on the minimum support set by the user. In this paper, we implement a platform named "IDETEX" capable of extracting itemsets from textual data and using them for experimentation with different clustering methods, such as k-Medoids and hierarchical clustering. The experiments conducted demonstrate the potential of the proposed approach for defining similarity between segments.
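The frequent-itemset step behind such descriptors can be sketched at its simplest: count word pairs that co-occur in text segments and keep those meeting the user-set minimum support. The segments below are invented, and a real system like the one described would also mine larger itemsets (Apriori-style), not just pairs:

```python
# Hedged sketch: frequent word pairs over text segments under a
# user-set minimum support (pairs only; toy segments).
from itertools import combinations
from collections import Counter

def frequent_pairs(segments, min_support):
    counts = Counter()
    for seg in segments:
        # Deduplicate within a segment, sort for a canonical pair order.
        for pair in combinations(sorted(set(seg)), 2):
            counts[pair] += 1
    return {p for p, c in counts.items() if c >= min_support}

segments = [
    ["data", "mining", "rules"],
    ["data", "mining", "text"],
    ["data", "text"],
]
print(frequent_pairs(segments, min_support=2))
```

Segments sharing many frequent itemsets can then be treated as similar, which is the notion of inter-segment similarity the clustering experiments build on.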
Title: New Descriptors of Textual Records: Getting Help from Frequent Itemsets (Vietnam J. Comput. Sci.)