With the rapid development of data centers in smart cities, reducing energy consumption and improving network performance, as well as achieving economic benefits, are essential research topics. In particular, Data Center Networks do not always run at full capacity, which leads to significant energy consumption. This paper experiments with a range of optimization tools to find optimal solutions for an Integer Linear Programming (ILP) model of network power consumption. The study reports on experiments under three communication patterns (near, long, and random), measuring runtime and memory consumption in order to evaluate the performance of different ILP solvers. While the results show that, for the near traffic pattern, most of the tools rapidly converge to the optimal solution, CP-SAT provides the most stable performance and outperforms the other solvers for the long traffic pattern. For the random traffic pattern, Gurobi is the best choice, since it solves all the benchmark instances within the time limit and finds solutions one to two orders of magnitude faster than the other solvers.
{"title":"Integer Programming Based Optimization of Power Consumption for Data Center Networks","authors":"Gergely Kovásznai, Mohammed Nsaif","doi":"10.14232/actacyb.299115","DOIUrl":"https://doi.org/10.14232/actacyb.299115","url":null,"abstract":"With the quickly developing data centers in smart cities, reducing energy consumption and improving network performance, as well as economic benefits, are essential research topics. In particular, Data Center Networks do not always run at full capacity, which leads to significant energy consumption. This paper experiments with a range of optimization tools to find the optimal solutions for the Integer Linear Programming (ILP) model of network power consumption. The study reports on experiments under three communication patterns (near, long, and random), measuring runtime and memory consumption in order to evaluate the performance of different ILP solvers.While the results show that, for near traffic pattern, most of the tools rapidly converge to the optimal solution, CP-SAT provides the most stable performance and outperforms the other solvers for the long traffic pattern. On the other hand, for random traffic pattern, Gurobi can be considered to be the best choice, since it is able to solve all the benchmark instances under the time limit and finds solutions faster by 1 or 2 orders of magnitude than the other solvers do.","PeriodicalId":42512,"journal":{"name":"Acta Cybernetica","volume":"18 3","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-11-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135684244","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A profile describes a set of properties, e.g. a set of skills a person may have or a set of skills required for a particular job. Profile matching aims to determine how well a given profile fits a requested profile and vice versa. Fuzziness is naturally attached to this problem. The filter-based matching theory uses filters in lattices to represent profiles and matching values in the interval [0,1], so the lattice order refers to subsumption between the concepts in a profile. In this article the lattice is extended by additional information in the form of weighted extra edges that represent partial quantifiable relationships between these concepts. This gives rise to fuzzy filters, which permit a refinement of profile matching. Another way to introduce fuzziness is to treat profiles as fuzzy sets. In the present paper we combine these two approaches. Extra edges may introduce directed cycles in the directed graph of the ontology, and the structure of a lattice is lost. We provide a construction grounded in formal concept analysis to extend the original lattice and remove the cycles such that matching values determined over the extended lattice are exactly those resulting from the use of fuzzy filters in the case of crisp profiles. For fuzzy profiles we show how to modify the weighting construction while eliminating the directed cycles but still regaining the matching values. We also give sharp estimates for the growth of the number of vertices in this construction.
{"title":"Refined Fuzzy Profile Matching","authors":"Gábor Rácz, Attila Sali, Klaus-Dieter Schewe","doi":"10.14232/actacyb.277380","DOIUrl":"https://doi.org/10.14232/actacyb.277380","url":null,"abstract":"A profile describes a set of properties, e.g. a set of skills a person may have or a set of skills required for a particular job. Profile matching aims to determine how well a given profile fits to a requested profile and vice versa. Fuzzyness is naturally attached to this problem. The filter-based matching theory uses filters in lattices to represent profiles, and matching values in the interval [0,1], so the lattice order refers to subsumption between the concepts in a profile. In this article the lattice is extended by additional information in form of weighted extra edges that represent partial quantifiable relationships between these concepts. This gives rise to fuzzy filters, which permit a refinement of profile matching. Another way to introduce fuzzyness is to treat profiles as fuzzy sets. In the present paper we combine these two aproaches. Extra edges may introduce directed cycles in the directed graph of the ontology, and the structure of a lattice is lost. We provide a construction grounded in formal concept analysis to extend the original lattice and remove the cycles such that matching values determined over the extended lattice are exactly those resulting from the use of fuzzy filters in case of crisp profiles. For fuzzy profiles we show how to modify the weighting construction while eliminating the directed cycles but still regaining the matching values. We also give sharp estimates for the growth of the number of vertices in this construction.","PeriodicalId":42512,"journal":{"name":"Acta Cybernetica","volume":"30 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-09-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135257543","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Free-form multi-sided surfaces are often defined by side interpolants (also called ribbons), requiring the surface to connect to them with a prescribed degree of smoothness. I-patches represent a family of implicit surfaces defined by an arbitrary number of ribbons. While describing ribbons is a well-discussed problem in the case of parametric surfaces, defining implicit ribbons is a different task.
In this paper, we introduce a new representation, corner I-patches, where implicit corner interpolants are blended together. Corner interpolants are usually simpler, lower-degree surfaces than ribbons. The shape of the patch depends on a handful of scalar parameters; constraining them ensures continuity between adjacent patches. Corner I-patches have several favorable properties that can be exploited for design, volume rendering, or cell-based approximation of complex shapes.
{"title":"Corner-Based Implicit Patches","authors":"Ágoston Sipos","doi":"10.14232/actacyb.299598","DOIUrl":"https://doi.org/10.14232/actacyb.299598","url":null,"abstract":"Free-form multi-sided surfaces are often defined by side interpolants (also called ribbons), requiring that the surface has to connect to them with a prescribed degree of smoothness. I-patches represent a family of implicit surfaces defined by an arbitrary number of ribbons. While in the case of parametric surfaces describing ribbons is a well-discussed problem, defining implicit ribbons is a different task.
 In this paper, we introduce a new representation, corner I-patches, where implicit corner interpolants are blended together. Corner interpolants are usually simpler, lower-degree surfaces than ribbons. The shape of the patch depends on a handful of scalar parameters; constraining them ensures continuity between adjacent patches. Corner I-patches have several favorable properties that can be exploited for design, volume rendering, or cell-based approximation of complex shapes.","PeriodicalId":42512,"journal":{"name":"Acta Cybernetica","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-06-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134933348","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Probabilistic programs that can represent both probabilistic and non-deterministic choices are useful for creating reliability models of complex safety-critical systems that interact with humans or external systems. Such models are often quite complex, so their analysis can be hindered by state-space explosion. One common approach to dealing with this problem is the application of abstraction techniques. We present improvements to an abstraction-refinement scheme for the analysis of probabilistic programs, aiming to improve the scalability of the scheme by adapting modern techniques from qualitative software model checking, and to make the analysis results more reliable using better convergence checks. We implemented and evaluated the improvements in our Theta model checking framework.
{"title":"Towards Abstraction-based Probabilistic Program Analysis","authors":"D. Szekeres, I. Majzik","doi":"10.14232/actacyb.298287","DOIUrl":"https://doi.org/10.14232/actacyb.298287","url":null,"abstract":"Probabilistic programs that can represent both probabilistic and non-deterministic choices are useful for creating reliability models of complex safety-critical systems that interact with humans or external systems. Such models are often quite complex, so their analysis can be hindered by state-space explosion. One common approach to deal with this problem is the application of abstraction techniques. We present improvements for an abstraction-refinement scheme for the analysis of probabilistic programs, aiming to improve the scalability of the scheme by adapting modern techniques from qualitative software model checking, and make the analysis result more reliable using better convergence checks. We implemented and evaluated the improvements in our Theta model checking framework.","PeriodicalId":42512,"journal":{"name":"Acta Cybernetica","volume":"1 1","pages":""},"PeriodicalIF":0.4,"publicationDate":"2023-06-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"43081977","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A family is the smallest entity that forms the world, with specific characteristics. Members of a family may share similar DNA, which leads to similar physical appearances, including similar facial features. This paper proposes a dual convolutional neural network (CNN) with a pyramid attention network for image-based kinship verification problems. The dual CNN classifier is formed by paralleling the FaceNet CNN architecture, followed by a family-aware feature extraction network and three final fully-connected layers. A channel-wise pyramid attention network is added after the last convolutional layers of the FaceNet CNN architecture. The family-aware feature extraction network learns family-aware features using the SphereFace loss function. The final features used to classify a kin/non-kin pair are joint aggregation features combining the pyramid attention features and the family-aware features. At the end of the fully-connected layers, a softmax loss layer is attached to learn kinship verification as a binary classification problem. To analyze the performance of our proposed classifier, we performed extensive experiments on the Families in the Wild (FIW) kinship verification dataset, the largest dataset currently available for kinship verification. Experiments on the FIW dataset show that our proposed classifier achieves a best average accuracy of 68.05% in a single-classifier scenario and 68.73% in an ensemble-classifier scenario, which is comparable with other state-of-the-art methods.
{"title":"Dual Convolutional Neural Network Classifier with Pyramid Attention Network for Image-Based Kinship Verification","authors":"R. F. Rachmadi, I. Purnama, S. M. S. Nugroho, Y. Suprapto","doi":"10.14232/actacyb.296355","DOIUrl":"https://doi.org/10.14232/actacyb.296355","url":null,"abstract":"A family is the smallest entity that formed the world with specific characteristics. The characteristics of a family are that the member can/may share some similar DNA and leads to similar physical appearances, including similar facial features. This paper proposed a dual convolutional neural network (CNN) with a pyramid attention network for image-based kinship verification problems. The dual CNN classifier is formed by paralleling the FaceNet CNN architecture followed by family-aware features extraction network and three final fully-connected layers. A channel-wise pyramid attention network is added after the last convolutional layers of FaceNet CNN architecture. The family-aware features extraction network is used to learn family-aware features using the SphereFace loss function. The final features used to classify the kin/non-kin pair are joint aggregation features between the pyramid attention features and family-aware features. At the end of the fully connected layer, a softmax loss layer is attached to learn kinship verification via binary classification problems. To analyze the performance of our proposed classifier, we performed experiments heavily on the Family in The Wild (FIW) kinship verification dataset. The FIW kinship verification dataset is the largest dataset for kinship verification currently available. Experiments of the FIW dataset show that our proposed classifier can achieve the highest average accuracy of 68.05% on a single classifier scenario and 68.73% on an ensemble classifier scenario which is comparable with other state-of-the-art methods. ","PeriodicalId":42512,"journal":{"name":"Acta Cybernetica","volume":" ","pages":""},"PeriodicalIF":0.4,"publicationDate":"2023-06-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"45730148","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Culpable cybersecurity practices that threaten leading organizations have naturally led to countermeasures, including HoneyPots, and have spurred research innovations in various dimensions, such as ML-enabled threat prediction. This article proposes an explainable AI-assisted permissioned blockchain framework named EA-POT for predicting potential defaulters' IP addresses. EA-POT registers the predicted defaulters to a blockchain database, based on the suggestions provided by explainable AI and the approval of IP authorizers, in order to enhance immutability. Experiments were carried out at the IoT Cloud Research laboratory using three prediction models: Random Forest Modeling (RFM), Linear Regression Modeling (LRM), and Support Vector Machines (SVM). The observed experimental results for predicting the AWS HoneyPots are then explored. The proposed EA-POT framework reveals the procedure for including interpretable knowledge while blacklisting IPs that reach HoneyPots.
{"title":"EA-POT: An Explainable AI Assisted Blockchain Framework for HoneyPot IP Predictions","authors":"S. Benedict","doi":"10.14232/actacyb.293319","DOIUrl":"https://doi.org/10.14232/actacyb.293319","url":null,"abstract":"The culpable cybersecurity practices that threaten leading organizations are logically prone to establishing countermeasures, including HoneyPots, and bestow research innovations in various dimensions such as ML-enabled threat predictions. This article proposes an explainable AI-assisted permissioned blockchain framework named EA-POT for predicting potential defaulters' IP addresses. EA-POT registers the predicted defaulters based on the suggestions levied by explainable AI and the approval of IP authorizers to blockchain database to enhance immutability. Experiments were carried out at IoT Cloud Research laboratory using three prediction models such as Random Forest Modeling (RFM), Linear Regression Modeling (LRM), and Support Vector Machines (SVM); and, the observed experimental results for predicting the AWS HoneyPots were explored. The proposed EA-POT framework revealed the procedure to include interpretable knowledge while blacklisting IPs that reach HoneyPots.","PeriodicalId":42512,"journal":{"name":"Acta Cybernetica","volume":" ","pages":""},"PeriodicalIF":0.4,"publicationDate":"2022-11-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"44196998","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The design and operation of modern software systems exhibit a shift towards virtualization, containerization and service-based orchestration. Performance capacity engineering and resource utilization tuning become priority requirements in such environments. Measurement-based performance evaluation is the cornerstone of capacity engineering and designing for performance. Moreover, the increasing complexity of systems necessitates rigorous performance analysis approaches. However, empirical performance analysis lacks sophisticated model-based support similar to the functional design of the system. The paper proposes an ontology-based approach for facilitating and guiding the empirical evaluation throughout its various steps. Hyperledger Fabric (HLF), an open-source blockchain platform by the Linux Foundation, is modelled and evaluated as a pilot example of the approach, using the standard TPC-C performance benchmark workload.
{"title":"Adding Semantics to Measurements","authors":"Attila Klenik, A. Pataricza","doi":"10.14232/actacyb.295182","DOIUrl":"https://doi.org/10.14232/actacyb.295182","url":null,"abstract":"The design and operation of modern software systems exhibit a shift towards virtualization, containerization and service-based orchestration. Performance capacity engineering and resource utilization tuning become priority requirements in such environments. Measurement-based performance evaluation is the cornerstone of capacity engineering and designing for performance. Moreover, the increasing complexity of systems necessitates rigorous performance analysis approaches. However, empirical performance analysis lacks sophisticated model-based support similar to the functional design of the system. The paper proposes an ontology-based approach for facilitating and guiding the empirical evaluation throughout its various steps. Hyperledger Fabric (HLF), an open-source blockchain platform by the Linux Foundation, is modelled and evaluated as a pilot example of the approach, using the standard TPC-C performance benchmark workload.","PeriodicalId":42512,"journal":{"name":"Acta Cybernetica","volume":" ","pages":""},"PeriodicalIF":0.4,"publicationDate":"2022-09-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46902063","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Stability contractors, based on interval analysis, were introduced in recent work as a tool to verify stability domains for nonlinear dynamic systems. These contractors rely on the property that, in the case of provable asymptotic stability, a certain domain in a multi-dimensional state space is mapped into its interior after a certain integration time for continuous-time processes or after a certain number of discretization steps in a discrete-time setting. However, a disadvantage of the use of axis-aligned interval boxes in such computations is the omnipresent wrapping effect. As shown in this contribution, replacing classical interval representations by ellipsoidal domain enclosures reduces this undesirable effect. It also helps to find suitable ratios for the edge lengths when interval-based domain representations are investigated. Moreover, ellipsoidal domains naturally represent the possible regions of attraction of asymptotically stable equilibrium points that can be analyzed with the help of quadratic Lyapunov functions, for which stability criteria can be cast into linear matrix inequality (LMI) constraints. For that reason, this paper further presents possible interfaces of ellipsoidal enclosure techniques with LMI approaches. This combination aims at the maximization of those domains that can be proven to be stable for a discrete-time range-only localization algorithm in robotics. There, an Extended Kalman Filter (EKF) is applied to a system whose dynamics are characterized by a discrete-time integrator disturbance model with additive Gaussian noise. In this scenario, the measurement equations correspond to the distances between the object to be localized and beacons with known positions.
{"title":"Verifying Provable Stability Domains for Discrete-Time Systems Using Ellipsoidal State Enclosures","authors":"A. Rauh, Auguste Bourgois, L. Jaulin","doi":"10.14232/actacyb.293871","DOIUrl":"https://doi.org/10.14232/actacyb.293871","url":null,"abstract":"Stability contractors, based on interval analysis, were introduced in recent work as a tool to verify stability domains for nonlinear dynamic systems. These contractors rely on the property that - in case of provable asymptotic stability - a certain domain in a multi-dimensional state space is mapped into its interior after a certain integration time for continuous-time processes or after a certain number of discretization steps in a discrete-time setting. However, a disadvantage of the use of axis-aligned interval boxes in such computations is the omnipresent wrapping effect. As shown in this contribution, the replacement of classical interval representations by ellipsoidal domain enclosures reduces this undesirable effect. It also helps to find suitable ratios for the edge lengths if interval-based domain representations are investigated. Moreover, ellipsoidal domains naturally represent the possible regions of attraction of asymptotically stable equilibrium points that can be analyzed with the help of quadratic Lyapunov functions, for which stability criteria can be cast into linear matrix inequality (LMI) constraints. For that reason, this paper further presents possible interfaces of ellipsoidal enclosure techniques with LMI approaches. This combination aims at the maximization of those domains that can be proven to be stable for a discrete-time range-only localization algorithm in robotics. There, an Extended Kalman Filter (EKF) is applied to a system for which the dynamics are characterized by a discrete-time integrator disturbance model with additive Gaussian noise. In this scenario, the measurement equations correspond to the distances between the object to be localized and beacons with known positions.","PeriodicalId":42512,"journal":{"name":"Acta Cybernetica","volume":" ","pages":""},"PeriodicalIF":0.4,"publicationDate":"2022-05-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46338218","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Currently there are many attempts around the world to use computers, smartphones, tablets and other electronic devices in order to stop the spread of COVID-19. Most of these attempts focus on collecting information about infected people, in order to help healthy people avoid contact with them. However, social distancing decisions are still taken by governments empirically. That is, the authorities do not have an automated tool to recommend which decisions to make in order to maximize social distancing and minimize the impact on the economy. In this paper we address the aforementioned problem and design an algorithm that provides social distancing methods (i.e., what schools, shops, factories, etc. to close) that are efficient (i.e., that help reduce the spread of the virus) and have a low impact on the economy. In short: a) we propose several models (i.e., combinatorial optimization problems); b) we show some theoretical results regarding the computational complexity of the formulated problems; c) we give an algorithm for the most complex of the previously formulated problems; d) we implement and test our algorithm.
{"title":"Models and Algorithms for Social Distancing in Order to Stop the Spread of COVID-19","authors":"Alexandru-Catalin Popa","doi":"10.14232/actacyb.292146","DOIUrl":"https://doi.org/10.14232/actacyb.292146","url":null,"abstract":"Currently there are many attempts around the world to use computers, smartphones, tablets and other electronic devices in order to stop the spread of COVID-19. Most of these attempts focus on collecting information about infected people, in order to help healthy people avoid contact with them. However, social distancing decisions are still taken by the governments empirically. That is, the authorities do not have an automated tool to recommend which decisions to make in order to maximize social distancing and to minimize the impact for the economy. \u0000In this paper we address the aforementioned problem and we design an algorithm that provides social distancing methods (i.e., what schools, shops, factories, etc. to close) that are efficient (i.e., that help reduce the spread of the virus) and have low impact on the economy. \u0000On short: a) we propose several models (i.e., combinatorial optimization problems); b) we show some theoretical results regarding the computational complexity of the formulated problems; c) we give an algorithm for the most complex of the previously formulated problems; d) we implement and test our algorithm.","PeriodicalId":42512,"journal":{"name":"Acta Cybernetica","volume":"1 1","pages":""},"PeriodicalIF":0.4,"publicationDate":"2022-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"66816559","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In this paper, we are concerned with dense languages and non-primitive words. A language L is said to be dense if any string can be found as a substring of an element of L. In 2020, Ryoma Syn'ya proved that any regular language with positive asymptotic density always contains infinitely many non-primitive words. Since positive asymptotic density implies density, it is natural to ask whether his result can be generalized to a wider class of dense languages. In this paper, we obtain such a generalization.
{"title":"Dense Languages and Non Primitive Words","authors":"T. Koga","doi":"10.14232/actacyb.293457","DOIUrl":"https://doi.org/10.14232/actacyb.293457","url":null,"abstract":"In this paper, we are concerned with dense languages and non primitive words. A language L is said to be dense if any string can be found as a substring of element of L. In 2020, Ryoma Syn'ya proved that any regular language with positive asymptotic density always containsinfinitely many non-primitive words. Since positive asymptotic density implies density, it is natural to ask whether his result can be generalized for a wider class of dense languages. In this paper, we actually obtain such generalization.","PeriodicalId":42512,"journal":{"name":"Acta Cybernetica","volume":" ","pages":""},"PeriodicalIF":0.4,"publicationDate":"2022-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"45548751","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}