“On some innovations in teaching the formal semantics using software tools” — William Steingartner. Open Computer Science 11(1), pp. 2–11 (2020). DOI: 10.1515/comp-2020-0130

Abstract: In this work we discuss the motivation for innovations and the need for a teaching tool that visualizes the natural semantics method for imperative programming languages. We present the role of the teaching software, its design, development, and use in the teaching process. Our software module visualizes the natural-semantics evaluation of programs. It serves as a compiler with an environment that can visually interpret statements of the simple programming language Jane and depict them as a derivation tree representing the natural semantics method. We also give a formal definition of the programming language Jane as used in teaching formal semantics, together with the natural-semantics production rules for that language. Finally, we show how the teaching tool provides the individual visual steps in finding the meaning of a well-structured input program and depicts its complete natural-semantics representation.
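For orientation, natural (big-step) semantics derives judgements of the form ⟨S, s⟩ → s′, read "executing statement S in state s terminates in state s′". The classical rules for statement composition and the while-loop, in standard textbook form (the exact rule set for Jane is given in the paper itself), read:

```latex
\[
\frac{\langle S_1, s\rangle \to s' \qquad \langle S_2, s'\rangle \to s''}
     {\langle S_1 ; S_2, s\rangle \to s''}
\qquad
\frac{\langle S, s\rangle \to s' \qquad \langle \mathbf{while}\;b\;\mathbf{do}\;S,\, s'\rangle \to s''}
     {\langle \mathbf{while}\;b\;\mathbf{do}\;S,\, s\rangle \to s''}
\;\;\text{if } \mathcal{B}[\![b]\!]s = \mathbf{tt}
\]
```

A derivation tree stacks instances of such rules on top of one another; that tree is exactly the structure the described tool renders step by step.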
“A DSL for Resource Checking Using Finite State Automaton-Driven Symbolic Execution” — Endre Fülöp, Norbert Pataki. Open Computer Science 11(1), pp. 107–115 (2020). DOI: 10.1515/comp-2020-0120

Abstract: Static analysis is an essential way to find code smells and bugs. It checks the source code without executing it and requires no test cases, so its cost is lower than that of testing. Moreover, static analysis helps software engineering comprehensively, since it can be used to validate code conventions, measure software complexity, and execute code refactorings. Symbolic execution is a static analysis method in which variables (e.g. input data) are interpreted with symbolic values. Clang Static Analyzer is a powerful symbolic execution engine, based on the Clang compiler infrastructure, that can be used with C, C++, and Objective-C. Validating resource usage (e.g. files, memory) requires finite state automata (FSA) to model the state of a resource (e.g. a locked or acquired resource). In this paper, we argue for an approach in which automata are used during symbolic execution. The generic automaton can be customized for different resources. We present our domain-specific language for defining automata in terms of syntactic and semantic rules. We have developed a tool for this approach that parses the automaton and generates a Clang Static Analyzer checker usable in the symbolic execution engine. We show an example automaton in our domain-specific language and the use of the generated checker.
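The FSA idea can be illustrated outside the analyzer. The sketch below (the states, event names, and functions are illustrative assumptions, not the authors' DSL or the Clang Static Analyzer API) replays a trace of resource operations against a file-handle automaton and flags any transition the automaton does not allow:

```python
from enum import Enum, auto

class FileState(Enum):
    CLOSED = auto()
    OPEN = auto()

# Transition table for a file-handle automaton:
# (current state, event) -> next state; missing entries are violations.
TRANSITIONS = {
    (FileState.CLOSED, "open"): FileState.OPEN,
    (FileState.OPEN, "read"): FileState.OPEN,
    (FileState.OPEN, "write"): FileState.OPEN,
    (FileState.OPEN, "close"): FileState.CLOSED,
}

def check(events):
    """Replay a trace of API events against the automaton and report
    the first resource-usage violation, if any."""
    state = FileState.CLOSED
    for i, event in enumerate(events):
        nxt = TRANSITIONS.get((state, event))
        if nxt is None:
            return f"violation at event {i}: '{event}' in state {state.name}"
        state = nxt
    if state is not FileState.CLOSED:
        return "violation: resource leaked (never closed)"
    return "ok"
```

For instance, `check(["open", "read", "close"])` passes, while `check(["open", "close", "close"])` reports a double-close; in the paper's setting the "trace" is not a concrete run but the path explored by the symbolic execution engine.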
“Preface to Special Issue ‘Informatics 2019’” — William Steingartner. Open Computer Science 11(1), p. 1 (2020). DOI: 10.1515/comp-2020-0216

The Informatics 2019 conference was held in Poprad, Slovakia, on November 20–22, 2019. It has become a significant international forum where academic scientists, engineers, researchers, and young IT experts exchange and share their experiences and research results on most aspects of science and social research, discuss the practical challenges encountered and the solutions adopted, and bring forward new ideas. Research in computer science and technology has grown rapidly in recent years. Computer science has a large potential impact on interdisciplinary collaboration; moreover, computer technologies are often central to much scientific research. The main purpose of the conference was, in particular, to bring together people active in computer science and related fields and to present new research results. Many interesting papers were presented at the conference, so the selection process was not easy. The program committee formulated criteria for the first round of the selection process, and the selections were then confirmed by the session chairs in the second round. After this careful selection process, 19 articles were recommended for publication in this special issue. All articles in this edition have been significantly expanded by the authors with new scientific results and have been reviewed again. The topics covered in this special issue include theoretical computer science, artificial intelligence, programming paradigms, software engineering, applied informatics, computer networks, simulation, computer graphics, and virtual reality. This volume focuses on current problems in computer science and related fields, and it presents both theoretical and practical results and applications exploiting innovative approaches. It reflects the main thrust of research in computer science and related fields and establishes contacts with experts in these fields.

I would like to express my appreciation to the authors of all papers for their important contributions. Thanks to their fruitful research and helpful cooperation, it was possible to provide such broad coverage of this area of scientific research. I would also like to thank all reviewers for their valuable opinions and recommendations; their expertise and experience have contributed to increasing the impact of the submitted articles. My special thanks go to the journal's management for their kind cooperation and willingness to help with this special issue, as well as for the wonderful cooperation, which has become a nice tradition.
“Non-standard situation detection in smart water metering” — O. Kainz, E. Karpiel, R. Petija, M. Michalko, F. Jakab. Open Computer Science 11(1), pp. 12–21 (2020). DOI: 10.1515/comp-2020-0190

Abstract: In this paper, an algorithm for detecting non-standard situations in smart water metering, based on machine learning, is designed. The main categories of non-standard situation (anomaly) detection and two common anomaly detection methods are analyzed. The proposed solution must meet the requirements for correct, efficient, real-time detection of non-standard situations in actual water consumption, with minimal consumer intervention in its operation. Moreover, an extension of the original hardware solution is proposed and implemented to accommodate the needs of the detection algorithm. The final implemented and tested solution evaluates anomalies in water consumption at a given time on a specific day and month, using machine learning with a semi-supervised approach.
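To illustrate the semi-supervised setting (train on normal consumption only, then flag deviations), a minimal per-hour profile could look as follows; the hourly bucketing, the threshold factor `k`, and the function names are illustrative assumptions, not the paper's algorithm:

```python
import statistics
from collections import defaultdict

def fit_profile(readings, k=3.0):
    """Semi-supervised baseline: learn a per-hour consumption band
    from normal-only historical (hour, litres) pairs."""
    by_hour = defaultdict(list)
    for hour, litres in readings:
        by_hour[hour].append(litres)
    profile = {}
    for hour, values in by_hour.items():
        mean = statistics.fmean(values)
        std = statistics.pstdev(values)
        profile[hour] = (mean - k * std, mean + k * std)  # tolerated band
    return profile

def is_anomaly(profile, hour, litres):
    """Flag a reading that falls outside the learned band for its hour."""
    if hour not in profile:
        return True  # no baseline for this hour: treat as suspicious
    lo, hi = profile[hour]
    return not (lo <= litres <= hi)
```

A burst pipe (large reading at any hour) or night-time consumption in a normally idle household would fall outside the band; the paper's solution additionally conditions on day and month.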
“An experimental evaluation of refinement techniques for the subgraph isomorphism backtracking algorithms” — J. Mihelic, U. Cibej. Open Computer Science 11(1), pp. 33–42 (2020). DOI: 10.1515/comp-2020-0149

Abstract: In this paper, we study a well-known computationally hard problem, the subgraph isomorphism problem, in which the goal is, given a pattern graph and a target graph, to determine whether the pattern is a subgraph of the target. Numerous algorithms for solving the problem exist in the literature, and most of them are based on the backtracking approach. Since straightforward backtracking is usually slow, practical algorithms employ many algorithmic refinement techniques. The main goal of this paper is to study such refinement techniques and to determine their ability to speed up backtracking algorithms. To do this, we use the methodology of experimental algorithmics: we perform an experimental evaluation of the techniques and their combinations and thereby demonstrate their usefulness in practice.
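The baseline that such refinements accelerate can be sketched compactly. The code below (a generic textbook-style backtracker, not the authors' implementation) searches for a non-induced subgraph isomorphism and already includes two classic refinements: a high-degree-first variable ordering and degree-based candidate pruning:

```python
def subgraph_isomorphic(pattern, target):
    """Backtracking search for a (non-induced) subgraph isomorphism.
    `pattern` and `target` are adjacency dicts: node -> set of neighbours."""
    p_nodes = sorted(pattern, key=lambda v: -len(pattern[v]))  # high degree first

    def extend(mapping, used):
        if len(mapping) == len(p_nodes):
            return True  # every pattern node mapped consistently
        u = p_nodes[len(mapping)]
        for v in target:
            if v in used or len(target[v]) < len(pattern[u]):
                continue  # refinement: v cannot host u (degree too small)
            # consistency: already-mapped neighbours of u must map to neighbours of v
            if all(mapping[w] in target[v] for w in pattern[u] if w in mapping):
                mapping[u] = v
                used.add(v)
                if extend(mapping, used):
                    return True
                del mapping[u]
                used.discard(v)
        return False

    return extend({}, set())
```

Removing either refinement leaves the algorithm correct but enlarges the explored search tree, which is precisely the kind of effect the paper measures experimentally.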
“Weak Berge Equilibrium in Finite Three-person Games: Conception and Computation” — K. Kudryavtsev, U. Malkov. Open Computer Science 11(1), pp. 127–134 (2020). DOI: 10.1515/comp-2020-0210

Abstract: The paper proposes the concept of a weak Berge equilibrium. Unlike the Berge equilibrium, the moral basis of this equilibrium is the Hippocratic Oath principle “First, do no harm”. Every Berge equilibrium is a weak Berge equilibrium, but there exist weak Berge equilibria that are not Berge equilibria. The properties of the weak Berge equilibrium are investigated, and its existence in mixed strategies is established for finite games. Weak Berge equilibria for finite three-person non-cooperative games are computed.
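For reference, in a game with player set $N$, strategy sets $X_i$, and payoffs $u_i$, the classical Berge equilibrium condition (which the paper weakens; the precise weak variant is defined there) requires that no deviation by the *other* players can raise player $i$'s payoff:

```latex
\[
x^{*} \in X \text{ is a Berge equilibrium if }\quad
u_i(x_i^{*}, x_{-i}) \le u_i(x^{*})
\qquad \forall i \in N,\ \forall x_{-i} \in X_{-i}.
\]
```

This is the formal counterpart of "do no harm": each group of opponents, by playing $x_{-i}^{*}$, does the best it can for player $i$.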
“Unit Under Test Identification Using Natural Language Processing Techniques” — Matej Madeja, J. Porubän. Open Computer Science 11(1), pp. 22–32 (2020). DOI: 10.1515/comp-2020-0150

Abstract: Unit under test (UUT) identification is often difficult due to test smells, such as testing multiple UUTs in one test. Because tests best reflect the current product specification, they can be used to comprehend parts of the production code and the relationships between them. Because a test and its UUT share a similar vocabulary, five NLP techniques were applied to the source code of 5 popular GitHub projects in this paper, and the collected results were compared with manually identified UUTs. The tf-idf model achieved the best accuracy: 22% for the correct UUT, and 57% with a tolerance of up to fifth place in the ranking. These results were obtained after preprocessing the input documents with Java keyword removal and word splitting. The tf-idf model also achieved the best training time, and an index search takes under 1 s per request, so it could be used in an Integrated Development Environment (IDE) as a support tool in the future. It was also found that, among the preprocessing steps, word splitting improves accuracy the most, while removing Java keywords yields only a small improvement for tf-idf results. Removing comments only slightly worsens the accuracy of the Natural Language Processing (NLP) models. Word splitting was also the fastest step, with an average preprocessing time of 0.3 s for all documents in a project.
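The core ranking idea can be sketched as tf-idf vectors over identifier tokens compared by cosine similarity. This is not the authors' exact pipeline (preprocessing such as word splitting is omitted, the idf smoothing is one common choice, and all names are illustrative):

```python
import math
from collections import Counter

def tfidf_vector(tokens, df, n_docs):
    """tf-idf weights with add-one smoothing in the idf term."""
    tf = Counter(tokens)
    total = len(tokens)
    return {t: (c / total) * math.log((1 + n_docs) / (1 + df[t]))
            for t, c in tf.items()}

def cosine(a, b):
    dot = sum(w * b.get(t, 0.0) for t, w in a.items())
    na = math.sqrt(sum(w * w for w in a.values()))
    nb = math.sqrt(sum(w * w for w in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def rank_uut_candidates(test_tokens, documents):
    """Rank production classes by tf-idf similarity to a test's tokens.
    `documents` maps class name -> list of identifier tokens."""
    n = len(documents)
    df = Counter()  # document frequency of each term
    for tokens in documents.values():
        df.update(set(tokens))
    doc_vecs = {name: tfidf_vector(tokens, df, n)
                for name, tokens in documents.items()}
    query = tfidf_vector(test_tokens, df, n)
    return sorted(documents,
                  key=lambda name: cosine(query, doc_vecs[name]),
                  reverse=True)
```

The paper's top-1 vs. top-5 accuracies correspond to checking whether the manually identified UUT appears at position 0, or within the first five positions, of this ranking.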
“Optimality and Complexity Analysis of a Branch-and-Bound Method in Solving Some Instances of the Subset Sum Problem” — R. Kolpakov, M. Posypkin. Open Computer Science 11(1), pp. 116–126 (2020). DOI: 10.1515/comp-2020-0212

Abstract: In this paper we study the parallelization of a variant of the branch-and-bound method for solving the subset sum problem, a special case of the Boolean knapsack problem. The following natural approach is considered. In the first stage, one processor (the control processor) performs some number of steps of the algorithm on the given problem, generating a number of subproblems. In the second stage, the generated subproblems are sent to the other processors for solving (one subproblem per processor). The processors solve the received subproblems completely and return their solutions to the control processor, which selects the optimal solution of the initial problem among them. For this approach we formally define a model of parallel computing (the frontal parallelization scheme) and a notion of complexity of the frontal scheme. We study the asymptotic behavior of this complexity for two special cases of the subset sum problem.
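A sequential sketch of branch-and-bound for subset sum (maximize the attained sum without exceeding the capacity) helps fix the picture; under the frontal scheme, the subtrees generated near the root of this search tree are what the control processor would hand out to the other processors. The function name and the bounding rule below are illustrative, not the paper's exact variant:

```python
def subset_sum_bb(weights, capacity):
    """Branch-and-bound for subset sum: largest achievable sum <= capacity.
    Each node of the search tree decides whether to take or skip one item."""
    weights = sorted(weights, reverse=True)
    # suffix[i] = total weight of items i..end, used as an upper bound
    suffix = [0] * (len(weights) + 1)
    for i in range(len(weights) - 1, -1, -1):
        suffix[i] = suffix[i + 1] + weights[i]

    best = 0
    def branch(i, current):
        nonlocal best
        best = max(best, current)
        if i == len(weights) or best == capacity:
            return  # leaf, or provably optimal
        if current + suffix[i] <= best:
            return  # bound: even taking all remaining items cannot improve
        if current + weights[i] <= capacity:
            branch(i + 1, current + weights[i])  # take item i
        branch(i + 1, current)                   # skip item i

    branch(0, 0)
    return best
```

For example, `subset_sum_bb([7, 5, 3], 10)` finds 10 (taking 7 and 3). The paper's analysis concerns how the work of such a tree splits across processors when the first few levels are expanded centrally.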
“Graph automorphisms for compression” — U. Cibej, J. Mihelic. Open Computer Science 11(1), pp. 51–59 (2020). DOI: 10.1515/comp-2020-0186

Abstract: Detecting automorphisms is a natural way to identify redundant information present in structured data; when such redundancies are detected, they can be used for data compression. In this paper we explore two classes of graphs that capture this intuitive property of automorphisms. Symmetry-compressible graphs are the first class, which introduces the basic concepts but uses only global symmetries for compression. For the concept to be more practical, local symmetries are needed; thus, we extend the basic class to Near-Symmetry-compressible graphs. Furthermore, we develop two algorithms that can compress practical instances and evaluate them empirically on a set of realistic graphs.
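An automorphism is a permutation of the vertices that maps the edge set onto itself. For very small graphs this can be checked by brute force, which makes the redundancy being exploited concrete (this sketch is for illustration only; it is O(n!) and not the paper's algorithms, which rely on efficient symmetry detection):

```python
from itertools import permutations

def automorphisms(adj):
    """Enumerate all automorphisms of a small simple undirected graph.
    `adj` maps node -> set of neighbours."""
    nodes = sorted(adj)
    edges = {frozenset((u, v)) for u in adj for v in adj[u]}
    found = []
    for perm in permutations(nodes):
        pi = dict(zip(nodes, perm))
        mapped = {frozenset(pi[x] for x in e) for e in edges}
        if mapped == edges:  # permutation preserves the edge set
            found.append(pi)
    return found
```

A path on three vertices has 2 automorphisms (identity and reversal), a triangle has all 6; the larger the automorphism group, the more of the edge list is, in principle, recoverable from a compact description of the symmetry.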
“Design and Control of 7-DOF Omni-directional Hexapod Robot” — Marek Zak, Jaroslav Rozman, F. Zboril. Open Computer Science 11(1), pp. 80–89 (2020). DOI: 10.1515/comp-2020-0189

Abstract: Legged robots have great potential to travel across various types of terrain. Their many degrees of freedom enable them to navigate difficult terrain, narrow spaces, and various obstacles, and they can keep moving even after losing a leg. However, legged robots mostly move quite slowly. This paper deals with the design and construction of an omni-directional, seven-degrees-of-freedom hexapod (i.e., six-legged) robot equipped with omnidirectional wheels (two degrees of freedom are used, one for turning the wheel and one for the wheel itself), usable on flat terrain to increase travel speed, and with an additional coxa joint that makes the robot more robust when climbing inclined terrain. This unique combination of omnidirectional wheels and an additional coxa joint makes the robot not only much faster but also more robust on rough terrain, allowing it to traverse inclines of up to 40 degrees and to remain statically stable on slopes of up to 50 degrees. The robot is controlled by a terrain-adaptive movement controller that adjusts the robot's movement speed and gait according to terrain conditions.