
Latest publications in Sistemnì tehnologìï

Mulsemedia data consolidation method
Pub Date : 2023-11-13 DOI: 10.34185/1562-9945-6-143-2022-06
Rvach Dmytro, Yevgeniya Sulema
The synchronization of multimodal data is one of the essential tasks in mulsemedia data processing. The concept of mulsemedia (MULtiple SEnsorial MEDIA) involves the registration, storage, processing, transmission and reproduction, by computer-based tools, of multimodal information about a physical object that humans can perceive through their senses. Such information includes audiovisual information (the object's appearance, acoustic properties, etc.), tactile information (surface texture, temperature), kinesthetic information (weight, the object's centre of gravity), and information about its taste, smell, etc. A person's perception of mulsemedia information is a process that unfolds over time. Because mulsemedia data is temporal, its registration should record the moments of time at which the relevant mulsemedia information existed or at which its perception was meaningful for the human observing the object. This paper presents a method that enables the consolidation and synchronization of mulsemedia data using the principles of multithreading. The method is universal and designed to combine data of different modalities in parallel threads. Applying it solves the problems associated with integrating data of different modalities and formats within the same time interval, and its effectiveness increases when multithreaded distributed computing is used. The method is intended for use in the development of mulsemedia software systems. The paper also proposes a modified JSON format (TJSON – Timeline JSON); a TJSON object is a complex data structure for representing synchronized mulsemedia data and supporting their further processing. The proposed method can be further extended with other approaches and technologies; for example, artificial intelligence methods can be applied to assess the correlation between data of different modalities, which can improve the method's accuracy and the quality of the output files.
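The abstract does not include the TJSON schema itself; the following Python sketch only illustrates the general idea under stated assumptions: per-modality streams are recorded in parallel threads and merged on a shared timeline into a timestamp-keyed structure. All names (record_modality, consolidate) and the field layout are illustrative, not taken from the paper.

```python
import json
import threading
from collections import defaultdict

def record_modality(name, samples, timeline, lock):
    """Append (timestamp, value) pairs of one modality to the shared timeline."""
    for t, value in samples:
        with lock:
            timeline[t].append({"modality": name, "value": value})

def consolidate(streams):
    """Merge several modality streams into a single timestamp-ordered structure."""
    timeline = defaultdict(list)
    lock = threading.Lock()
    threads = [threading.Thread(target=record_modality, args=(n, s, timeline, lock))
               for n, s in streams.items()]
    for th in threads:
        th.start()
    for th in threads:
        th.join()
    # A TJSON-like object (assumed layout): timestamps as keys, synchronized samples as values.
    return {"timeline": [{"t": t, "samples": timeline[t]} for t in sorted(timeline)]}

streams = {
    "audio":   [(0.0, "frame-a0"), (0.04, "frame-a1")],
    "tactile": [(0.0, {"temperature": 22.5}), (0.04, {"temperature": 22.6})],
}
print(json.dumps(consolidate(streams), indent=2))
```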
Citations: 0
Global near-earth space coverage by zones of the use of its observation devices: concept and algorithms
Pub Date : 2023-11-13 DOI: 10.34185/1562-9945-4-147-2023-05
Labutkina Tetyana, Ananko Ruslan
The results of the study are presented within the framework of the task of ensuring full coverage of a given range of heights above the Earth's surface (the region of space between two spheres sharing a common center at the center of the Earth) by the instantaneous zones of possible application of orbital surveillance devices carried by spacecraft in orbital groups of different heights in circular orbits. In the general case, solving the problem involves several orbital groupings of different heights on circular quasi-polar orbits, which in the simplified statement of the problem are assumed to be polar. The instantaneous zone of possible application of a surveillance device is simplified to a cone. Both the use of observation devices "up" (above the plane of the instantaneous local horizon of the spacecraft carrying the device) and "down" (below this plane) is considered. The proposed concept of solving the problem is based on selecting (by developing ways of applying known algorithms) a structure for each orbital grouping that ensures continuous coverage of a part of the given observation space (an area of guaranteed observation) whose boundaries are moved away from the location of the observation devices, and then filling the space with these areas. The work is devoted to the space theme, but by generalizing the statement of the problem, varying a number of its conditions and changing the "scale" of the input data, one can arrive at a variety of technical problems for which the proposed concept and the algorithms used in its implementation will be appropriate and acceptable (in part or in full), in particular when surveillance systems or systems for the complex application of technical-operations devices are created.
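As a purely illustrative aid to the cone simplification mentioned above, the sketch below (not from the paper) tests whether a point of the observed height shell lies inside a conical zone whose apex is at the spacecraft and whose axis points along nadir or zenith; the half-angle, altitudes and coordinates are arbitrary assumptions.

```python
import numpy as np

def covered(sat_pos, target, half_angle_deg, direction="down"):
    """True if target lies inside the cone with apex at sat_pos and the given half-angle."""
    sat_pos, target = np.asarray(sat_pos, float), np.asarray(target, float)
    axis = -sat_pos if direction == "down" else sat_pos      # nadir or zenith axis
    to_target = target - sat_pos
    cos_angle = np.dot(axis, to_target) / (np.linalg.norm(axis) * np.linalg.norm(to_target))
    return np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0))) <= half_angle_deg

R_EARTH = 6371.0                                   # km
sat = [R_EARTH + 800.0, 0.0, 0.0]                  # spacecraft on an 800 km circular orbit
point = [R_EARTH + 400.0, 300.0, 0.0]              # point of the observed height shell
print(covered(sat, point, half_angle_deg=40.0))    # True: the point falls inside the cone
```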
Citations: 0
Relational-separable models of monitoring processes at variable and unclear observation intervals
Pub Date : 2023-11-13 DOI: 10.34185/1562-9945-4-147-2023-01
Skalozub Vladyslav, Horiachkin Vadim, Murashov Oleg
The article is devoted to the development of combined models, methods and tools designed to solve current problems of modeling and analyzing monitoring-process data represented by time series with variable or fuzzy observation intervals (CHRPNI). A new relational separable model (RSM) and a combined quantile algorithm are proposed to increase the accuracy and efficiency of modeling and analyzing CHRPNI processes. The relational model is defined by a system of fuzzy relational relations of the first and second order obtained on the basis of the original data sequence. In the combined algorithm, the results of calculations obtained by the RSM and by the models of fuzzy relational relationships are generalized with an optimal selection of weighting factors for the individual components. Numerical modeling established that introducing combined process models in the CHRPNI case is rational and effective. Examples of data analysis for monitoring the rehabilitation of diabetic patients showed that the accuracy of indicator analysis and of their short-term forecasting can be ensured.
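The abstract does not give the combined quantile algorithm itself; the sketch below only illustrates the weighted-combination step in the simplest assumed form, where two component forecasts are blended with a least-squares weight. The function name, data and weighting rule are hypothetical and are not the authors' algorithm.

```python
import numpy as np

def combine_forecasts(y_true, f1, f2):
    """Pick weight w minimizing ||y - (w*f1 + (1-w)*f2)||^2, return w and the blend."""
    y, f1, f2 = map(np.asarray, (y_true, f1, f2))
    d = f1 - f2
    w = float(np.dot(y - f2, d) / np.dot(d, d))   # closed-form least-squares weight
    w = min(max(w, 0.0), 1.0)                     # keep the weight in [0, 1]
    return w, w * f1 + (1.0 - w) * f2

y  = np.array([1.0, 1.2, 1.4, 1.7, 2.1])          # observed monitoring values
f1 = np.array([0.9, 1.1, 1.5, 1.6, 2.2])          # e.g. one component model's forecast
f2 = np.array([1.2, 1.3, 1.3, 1.9, 1.9])          # e.g. a second component model's forecast
w, combined = combine_forecasts(y, f1, f2)
print(round(w, 3), combined)
```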
Citations: 0
Automated models of visual information processing
Pub Date : 2023-11-13 DOI: 10.34185/1562-9945-4-147-2023-09
Mohylnyi Oleksandr
The article presents a study devoted to the development and investigation of an automated model of visual information processing. The goal of the research was to create a comprehensive model capable of automatically processing and analyzing various forms of visual data, such as images and videos. The model is built on a combined approach that brings together various algorithms and methods of visual information processing. The literature review conducted within the scope of this study covered the existing methods and algorithms for visual information processing. Various image processing approaches were analyzed, including segmentation, pattern recognition, object classification and detection, and video analysis. The review identified the advantages and limitations of each approach and determined the areas of their application. The developed model showed high accuracy and efficiency in visual data processing and successfully copes with segmentation, recognition and classification of objects, as well as video analysis. The results of the study confirmed the superiority of the proposed model. Potential applications of the automated model are considered, such as medicine, robotics, security and many others. Limitations of the model, such as computational resource requirements and the quality of input data, are also noted. Further work can aim at optimizing the model, adapting it to specific tasks and expanding its functionality. In general, the study confirms the importance of automated models of visual information processing and their place in modern technologies. The results can be useful for the development of new systems based on visual data processing and contribute to progress in computer vision and artificial intelligence.
Citations: 0
USING SHARDING TO IMPROVE BLOCKCHAIN NETWORK SCALABILITY
Pub Date : 2023-11-13 DOI: 10.34185/1562-9945-6-143-2022-02
Gromova Viktoria, Borysenko Pavlo
Blockchain is a distributed and decentralized database for recording transactions. It is shared and maintained by network nodes, and its operation is secured using cryptography and consensus rules that allow all nodes to agree on a unique structure of the blockchain. However, modern blockchain solutions face network scalability issues stemming from different protocol design decisions. In this paper, we discuss sharding as a possible way to overcome the technical limitations of existing blockchain systems, along with the different forms of its practical realization presented in recent research spurred by the popularity of blockchain.
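As a generic illustration of the sharding idea (not tied to any of the protocols surveyed in the paper), the sketch below hashes the sender account to assign each transaction to one of several shards, so disjoint subsets of accounts can be processed in parallel; the account names and shard count are illustrative.

```python
import hashlib

NUM_SHARDS = 4

def shard_of(account: str) -> int:
    """Deterministically map an account identifier to a shard index."""
    digest = hashlib.sha256(account.encode()).digest()
    return int.from_bytes(digest[:8], "big") % NUM_SHARDS

transactions = [
    {"from": "alice", "to": "bob",   "amount": 5},
    {"from": "carol", "to": "dave",  "amount": 7},
    {"from": "erin",  "to": "frank", "amount": 2},
]

# Partition the transaction pool so each shard validates only its own subset.
shards = {i: [] for i in range(NUM_SHARDS)}
for tx in transactions:
    shards[shard_of(tx["from"])].append(tx)

for i, txs in shards.items():
    print(f"shard {i}: {len(txs)} transaction(s)")
```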
Citations: 0
Research of methods based on neural networks for the analysis of the tonality of the corps of the texts
Pub Date : 2023-11-13 DOI: 10.34185/1562-9945-4-147-2023-14
Ostrovska Kateryna, Stovpchenko Ivan, Pechenyi Denys
The object of the study is neural-network-based methods for analyzing the tonality of a corpus of texts. To achieve the goal set in the work, the following tasks must be solved: study the theoretical material on training deep neural networks and their features with respect to natural language processing; study the documentation of the TensorFlow library; develop models of convolutional and recurrent neural networks; implement linear and non-linear classification methods on bag-of-words and Word2Vec representations; and compare the accuracy and other quality indicators of the implemented neural network models with classical methods. TensorBoard is used to visualize training. The work shows the superiority of classifiers based on deep neural networks over classical classification methods, even when the Word2Vec model is used for vector representations of words. A recurrent neural network with LSTM blocks achieves the highest accuracy on this corpus of texts.
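A minimal sketch of the kind of classifier the abstract describes, using the TensorFlow Keras API: an embedding layer (which could be initialized with Word2Vec vectors) followed by an LSTM block and a sigmoid output for binary tonality. The vocabulary size, sequence length and random stand-in data are assumptions for illustration, not the authors' corpus or hyperparameters.

```python
import numpy as np
import tensorflow as tf

VOCAB, MAXLEN, EMB = 5000, 100, 128

model = tf.keras.Sequential([
    tf.keras.layers.Embedding(VOCAB, EMB),        # could be seeded with Word2Vec weights
    tf.keras.layers.LSTM(64),                     # recurrent block over the word sequence
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Random stand-in corpus: 200 "documents" of MAXLEN word indices with binary labels.
x = np.random.randint(1, VOCAB, size=(200, MAXLEN))
y = np.random.randint(0, 2, size=(200,))
model.fit(x, y, epochs=1, batch_size=32, verbose=0)
print(model.predict(x[:3], verbose=0).ravel())    # predicted tonality probabilities
```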
Citations: 0
Management of data flows in modern industry using blockchain
Pub Date : 2023-11-13 DOI: 10.34185/1562-9945-4-147-2023-11
Hnatushenko Viktoriia, Sytnyk Roman
Recent research and publications. "Industry 4.0" is a concept of industrial revolution based on the use of modern technologies and digital innovations in production and distribution processes. The concept of "Industry 4.0" was introduced to improve the competitiveness of European industry and to increase productivity and product quality. A blockchain is a distributed data structure that is replicated and shared among network members. The purpose of the study is to improve automation processes, increase efficiency, and reduce delays and errors in the information systems of industry and supply chains by using blockchain technologies in the construction of information systems. Main material of the study. The paper analyzes approaches and algorithms for data management in "Industry 4.0" information systems. Blockchain algorithms are compared with the classical approach of other databases in the client-server architecture. Conclusions. By implementing algorithms based on blockchain technology, namely the Merkle tree, digital signature technology, and consensus algorithms within the decentralized data storage framework of Distributed Ledger Technology, the automation and efficiency of data flow management are improved, providing a secure and transparent way to store and share data that reduces delays and errors in industry information systems and supply chains.
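As an illustration of the Merkle-tree building block mentioned in the conclusions, the sketch below computes a Merkle root over a batch of supply-chain records in the standard way (hash the leaves, then repeatedly hash concatenated pairs); it is a generic sketch with made-up records, not the implementation used in the paper.

```python
import hashlib

def sha256(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(records):
    """Root hash of a Merkle tree built over the given string records."""
    level = [sha256(r.encode()) for r in records]
    if not level:
        return sha256(b"")
    while len(level) > 1:
        if len(level) % 2:                 # duplicate the last hash on odd-sized levels
            level.append(level[-1])
        level = [sha256(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

batch = ["order#1: 10 units", "order#2: 4 units", "shipment#7 dispatched"]
print(merkle_root(batch).hex())            # any change to a record changes this root
```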
Citations: 0
Application of recurrent analysis to classify realizations of encephalograms
Pub Date : 2023-11-13 DOI: 10.34185/1562-9945-6-143-2022-08
Kirichenko Lyudmila, Zinchenko Petro
The current state of science and technology is characterized by a variety of methods and approaches to solving various tasks, including in the fields of time series analysis and computer vision. This work explores a novel approach to the classification of time series based on the analysis of brain activity using recurrence diagrams and deep neural networks. The work begins with an overview of recent achievements in time series analysis and the application of machine learning methods, emphasizing the importance of time series classification in domains such as medicine, finance and technology. Next, the methodology is described: time series are transformed into grayscale images using recurrence diagrams. The key idea is to use recurrence diagrams to visualize the structure of a time series and identify its nonlinear properties; the transformed information serves as input data for deep neural networks. An important aspect of the work is the selection of deep neural networks as classifiers for the obtained images. Specifically, residual neural networks are applied, known for their ability to effectively learn from and classify large volumes of data; their structure and advantages over other architectures are discussed. The experimental part of the work uses a dataset of brain activity that includes realizations from different states of a person, including epileptic seizures. The visualization and classification methods are applied to the binary classification of EEG realizations, in which the epileptic-seizure class is distinguished from the rest. The main evaluation metrics are accuracy, precision, recall and F1-score. The experimental results demonstrate high classification accuracy even for short EEG realizations, and the quality metrics indicate the potential effectiveness of this method for the automated diagnosis of epileptic seizures based on the analysis of brain signals. The conclusions highlight the importance of the proposed approach and its potential usefulness in domains where time series classification based on the analysis of brain activity and recurrence diagrams is required.
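A minimal sketch of the visualization step described above, under simple assumptions: a thresholded recurrence matrix of a one-dimensional signal, which can be saved as a grayscale image and passed to a convolutional classifier such as a residual network. The threshold rule and the synthetic stand-in signal are illustrative choices, not the authors' settings.

```python
import numpy as np

def recurrence_matrix(signal, threshold=None):
    """Binary recurrence matrix of a 1-D signal (1 where two samples are close)."""
    x = np.asarray(signal, float)
    dist = np.abs(x[:, None] - x[None, :])           # pairwise distances of samples
    if threshold is None:
        threshold = 0.1 * dist.max()                  # simple data-driven threshold
    return (dist <= threshold).astype(np.uint8)

t = np.linspace(0, 4 * np.pi, 200)
eeg_like = np.sin(t) + 0.1 * np.random.randn(t.size)  # stand-in for one EEG realization
rp = recurrence_matrix(eeg_like)
print(rp.shape, rp.mean())                             # image size and recurrence rate
```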
Citations: 0
Informativeness of statistical processing of experimental measurements by the modified Bush-Wind criterion
Pub Date : 2023-11-13 DOI: 10.34185/1562-9945-6-143-2022-03
Malaichuk Valentin, Klymenko Svitlana, Lysenko Nataliia
The use of effective decision-making criteria is very important, especially when it comes to ensuring information security. Controlled attributes, such as keyboard handwriting characteristics, the intensity of network attacks, and many others, are described by random variables whose distribution laws are usually unknown. Classical nonparametric statistics suggests comparing samples of random variables by rank-based homogeneity criteria that are independent of the type of distribution. Using the Van der Waerden shift criterion and the Klotz scale criterion, Bush and Wind proposed the combined Bush-Wind criterion, an asymptotically optimal nonparametric statistic for jointly testing the equality of two normal means and variances. The article considers the problem of testing the hypothesis of statistical homogeneity of two experimental measurement samples when the Van der Waerden and Klotz scores, which are formed from approximations of the inverse Gaussian function, are replaced by their analogues, the inverse distribution functions of logistic random variables. Computational experiments are carried out, and the informativeness of the classical Bush-Wind criterion and of its analogue based on the logistic inverse distribution function is investigated. The analogue of the Bush-Wind criterion proposed in this paper differs from the classical criterion by reducing computational complexity while maintaining efficiency. The empirical probabilities of recognizing sample homogeneity, obtained in computational experiments on samples of logistic, Rayleigh and exponential random variables, indicate nonparametricity, high sensitivity and the possibility of applying the criterion under conditions of limited experimental data. The modified Bush-Wind criterion is characterized by high information content and can be recommended for the statistical processing of experimental measurements.
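The sketch below only illustrates the score-based idea with the logistic quantile ln(p/(1-p)) in place of the inverse Gaussian, as the article proposes; how the shift and scale statistics are standardized and combined here (a sum of two squared z-scores under the permutation null) is an assumption for illustration and may differ from the exact Bush-Wind construction.

```python
import numpy as np

def standardized_score_stat(scores, ranks_x, m, n):
    """Z-statistic of the sum of scores over sample X under the permutation null."""
    N = m + n
    a_bar = scores.mean()
    t = scores[ranks_x - 1].sum()
    var = m * n / (N * (N - 1)) * ((scores - a_bar) ** 2).sum()
    return (t - m * a_bar) / np.sqrt(var)

def combined_logistic_criterion(x, y):
    x, y = np.asarray(x, float), np.asarray(y, float)
    m, n = len(x), len(y)
    N = m + n
    ranks = np.argsort(np.argsort(np.concatenate([x, y]))) + 1   # ranks 1..N
    p = np.arange(1, N + 1) / (N + 1)
    shift_scores = np.log(p / (1 - p))            # logistic analogue of Van der Waerden scores
    scale_scores = shift_scores ** 2              # logistic analogue of Klotz scores
    ranks_x = ranks[:m]
    z_shift = standardized_score_stat(shift_scores, ranks_x, m, n)
    z_scale = standardized_score_stat(scale_scores, ranks_x, m, n)
    return z_shift ** 2 + z_scale ** 2            # large values suggest inhomogeneity

rng = np.random.default_rng(1)
print(combined_logistic_criterion(rng.normal(0, 1, 30), rng.normal(0.8, 2, 30)))
```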
Citations: 0
Alternative to mean and least squares methods used in processing the results of scientific and technical experiments
Pub Date : 2023-11-13 DOI: 10.34185/1562-9945-4-147-2023-04
Ignatkin Valery, Dudnikov Volodymyr, Luchyshyn Taras, Alekseenko Serhii, Yushkevich Oleh, Karpova Tetyana, Khokhlova Tetyana, Khomosh Yuriy, Tikhonov Vasyl
Increasing the complexity and size of systems of various nature requires constant improvement of modeling and experimental verification of the obtained results. Each experiment can be conducted rigorously, the results of the studied process evaluated objectively, and the material obtained in one study extended to a series of other studies only if the experiments are correctly set up and processed. On the basis of experimental data, algebraic expressions called empirical formulas are selected; they are used when the analytical expression of some function is complex or does not yet exist at this stage of describing the object, system or phenomenon. When selecting empirical formulas, polynomials of the form y = A0 + A1x + A2x^2 + A3x^3 + … + Anx^n are widely used; they can approximate any measurement results expressed as continuous functions. It is especially valuable that, even if the exact expression of the solution (polynomial) is unknown, the values of the coefficients An can be determined using the methods of means and of least squares. However, least-squares estimates become biased when the noise in the data increases, since they are affected by the noise of the previous stages of information processing. Therefore, for real-time information processing procedures, a pseudo-inverse operation performed with recurrent formulas is proposed. This procedure successively updates (with a shift) the columns of a matrix of given size and performs pseudo-inversion at each step of information change. The approach is straightforward and takes advantage of the bounding method. With pseudo-inversion, the correctness of the calculations can be controlled at each step using the Penrose conditions. The need for pseudo-inversion can arise in optimization, in forecasting parameters and characteristics of systems of various purposes, in various problems of linear algebra and statistics, in presenting the structure of the obtained solutions, in understanding the incorrectness of the resulting solution in the sense of Hadamard-Tikhonov, and in identifying ways to regularize such solutions.
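A small sketch of the pseudo-inverse route described above: the polynomial coefficients A0..An are obtained by applying the Moore-Penrose pseudo-inverse of the Vandermonde matrix to the measurements, and the four Penrose conditions are checked to control the computation. The data and degree are illustrative assumptions, and NumPy's batch pinv is used here instead of the recurrent (column-by-column) updating scheme mentioned in the abstract.

```python
import numpy as np

def fit_polynomial(x, y, degree):
    """Least-squares polynomial coefficients via the Moore-Penrose pseudo-inverse."""
    V = np.vander(x, degree + 1, increasing=True)   # columns 1, x, x^2, ..., x^n
    V_pinv = np.linalg.pinv(V)
    coeffs = V_pinv @ y                             # coefficients A0..An
    # Penrose conditions: V V+ V = V,  V+ V V+ = V+,  (V V+)^T = V V+,  (V+ V)^T = V+ V
    checks = [
        np.allclose(V @ V_pinv @ V, V),
        np.allclose(V_pinv @ V @ V_pinv, V_pinv),
        np.allclose((V @ V_pinv).T, V @ V_pinv),
        np.allclose((V_pinv @ V).T, V_pinv @ V),
    ]
    return coeffs, all(checks)

x = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 2.5])
y = 1.0 + 2.0 * x - 0.5 * x**2 + np.random.default_rng(0).normal(0, 0.05, x.size)
coeffs, penrose_ok = fit_polynomial(x, y, degree=2)
print(np.round(coeffs, 3), penrose_ok)              # coefficients close to [1, 2, -0.5]
```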
Citations: 0