This paper discusses possibilities of using mathematical models and methods from the field of artificial intelligence in modern digital control systems, along with a number of their characteristics. A new class of such models, formed by interpolation-extrapolation dependencies extracted from accumulated empirical data, is presented.
Based on the semantic theory of information developed by Yu.A. Schreider, the article considers the phenomenon of insight. It is shown that insight is an expansion of the thesaurus, in which new information emerges in the course of autocommunication. The presented scheme is illustrated with the insight experienced by Ignatius Loyola on the bank of the Cardoner River in August 1523.
The article proposes an optimized algorithm that allows the use of a fuzzy inference system with a large number of inference rules (about 10 million) on computing systems limited in RAM and CPU power. Optimization is achieved by redistributing the membership functions of the input variables and by dynamic formation of inference rules, which allows the fuzzy system to store only the conclusions of the rules and avoids an exhaustive search over the rule base during fuzzy inference.
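The idea of dynamic rule formation can be sketched as follows. This is a minimal illustration under assumptions not stated in the abstract: uniformly spaced triangular membership functions (so each crisp input activates at most two adjacent functions), a zero-order Sugeno-style inference, and hypothetical function names. With n inputs of k partitions each, only the 2^n activated rules are formed per query instead of scanning all k^n, and only the conclusions array is stored.

```python
import itertools

def tri_activation(x, lo, hi, k):
    """For input x in [lo, hi] partitioned into k uniform triangular
    membership functions, return (index, degree) pairs of the at most
    two functions that fire for this crisp value."""
    step = (hi - lo) / (k - 1)
    pos = (x - lo) / step
    i = int(pos)
    if i >= k - 1:              # right edge of the universe
        return [(k - 1, 1.0)]
    frac = pos - i
    return [(i, 1.0 - frac), (i + 1, frac)]

def infer(inputs, ranges, k, conclusions):
    """Zero-order Sugeno-style inference: `conclusions` maps a rule
    index tuple to a crisp output. Only the rules activated by the
    current inputs are formed on the fly, instead of iterating over
    all k**len(inputs) rules in the full base."""
    fired = [tri_activation(x, lo, hi, k)
             for x, (lo, hi) in zip(inputs, ranges)]
    num = den = 0.0
    for combo in itertools.product(*fired):   # at most 2**n combinations
        idx = tuple(i for i, _ in combo)
        w = 1.0
        for _, d in combo:
            w *= d                            # product t-norm
        num += w * conclusions[idx]
        den += w
    return num / den
```

In this sketch `conclusions` can be any indexable store (array, memory-mapped file, database), which is what makes the approach fit memory-constrained systems: the antecedent part of each rule is implicit in the index and never materialized.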
Semantic artificial intelligence (AI) is considered as an element of subject-subject communications and as almost the only effective means of solving problems in today's search and analytical systems. The role of semantic AI systems in making transparent and explainable decisions is explored; semantic tools are complemented by spectral models in analyzing the process of text generation.
Modern systems of open linked data, including Wikidata, are built on ontology representation standards such as RDF, RDF-star, RDFa, and OWL, and on query language standards such as SPARQL, GraphQL, etc. At the same time, standards for the representation of axioms such as SWRL, RIF, and even OWL are little used, since computations in Web ontologies based on axioms and rewriting rules are not widespread. In this regard, the article proposes to draw on the rich experience of the theory of algebraic computation and algebraic knowledge representation to create a convenient universal tool for constructing ontologies focused on computing answers to queries using axioms and rewriting rules, rather than facts alone. The possibility of building a standard for an Algebraic Web Ontology Language (Algebraic OWL) is considered, and some elements of such a standard are presented. The experience and methodologies of the Common Algebraic Specification Language (CASL), the Mathematica system, and the Haskell and Prolog programming languages are used.
This paper describes procedures for using the Z-score for text document classification. The author tested the efficiency of this approach on authorship attribution and genre classification tasks based on the analysis of the distribution of stop words. The paper finds that calculating this score from the raw counts of stop words produces a negative result, while calculating it from the deviations of stop word frequencies from the Zipfian score yields higher classification efficiency. Comparison against the previously developed Y-method demonstrated higher Z-score efficiency for text classification tasks.
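A generic Z-score classification scheme of the kind the abstract describes can be sketched as follows. This is a hedged illustration, not the author's exact procedure: it standardizes a document's stop-word frequencies against corpus statistics (in the style of Burrows' Delta) and assigns the nearest class profile; the Zipfian-deviation variant mentioned in the abstract would feed deviations from Zipf-predicted frequencies into the same pipeline instead of raw relative frequencies. All function names are hypothetical.

```python
def zscore_profile(freqs, mean, std):
    """Standardize a document's stop-word feature vector against
    corpus-wide statistics: z_i = (f_i - mu_i) / sigma_i."""
    return [(f - m) / s if s > 0 else 0.0
            for f, m, s in zip(freqs, mean, std)]

def classify(doc_freqs, class_profiles, mean, std):
    """Assign the document to the class whose mean z-score profile is
    closest in mean absolute difference (a Burrows'-Delta-style
    distance); `class_profiles` maps labels to z-score vectors."""
    z = zscore_profile(doc_freqs, mean, std)
    best, best_d = None, float("inf")
    for label, profile in class_profiles.items():
        d = sum(abs(a - b) for a, b in zip(z, profile)) / len(z)
        if d < best_d:
            best, best_d = label, d
    return best
```

Standardization is what makes heterogeneous stop-word features comparable: frequent words with large variance no longer dominate the distance.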
The problem of automatic semantic analysis in natural language processing systems is considered. It is shown that semantic role labeling, which consists in recognizing the predicate-argument structure of each sentence, is a promising method of analysis. A program developed for marking semantic roles in Russian texts is described, and 2000 lexical units are annotated in sample texts on aviation and astronautics. A unique inventory of semantic roles for the aerospace domain is formed. The frame semantics method and an ontological approach to frame organization in the knowledge base are proposed as a knowledge representation model for organizing the labeled semantic roles.