
Latest publications in Advanced Information Systems

METHOD FOR GENERATING A DATA SET FOR TRAINING A NEURAL NETWORK IN A TRANSPORT CONVEYOR MODEL
Pub Date : 2024-06-04 DOI: 10.20998/2522-9052.2024.2.09
O. Pihnastyi, G. Kozhevnikov, Anna Burduk
The object of research is the stochastic input flow of material arriving at the input of a conveyor-type transport system. The subject of research is a method for generating values of the stochastic input material flow of a transport conveyor in order to form a training data set for neural network models of the conveyor. The goal of the research is to develop a method for generating random values that yield realizations of the conveyor's input material flow with specified statistical characteristics, calculated from the results of previously performed experimental measurements. The article proposes a method for generating a data set for training a neural network in a model of a branched, extended transport conveyor. A method has been developed for constructing realizations of the stochastic input material flow of a transport conveyor. Dimensionless parameters are introduced to define similarity criteria for input material flows. The stochastic input material flow is represented as a series expansion in coordinate functions. To set the statistical characteristics, a material flow realization based on the results of experimental measurements is used. As a zero-order approximation for the expansion coefficients, which are random variables, the normal distribution law is used. Conclusion. It is shown that, as the time interval of the generated realization of the input material flow increases, its correlation function steadily tends to the theoretically determined correlation function. The required length of the time interval for the generated realization of the input material flow was estimated.
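As a rough illustration of the generation scheme described above, the sketch below builds one realization of the input flow as a mean level plus a series expansion in harmonic coordinate functions whose coefficients are drawn from a normal distribution. The function name, the choice of harmonics as coordinate functions, and all parameter values are illustrative assumptions rather than values from the paper.

```python
import numpy as np

def generate_flow_realization(mean_flow, coeff_std, duration, n_points, rng=None):
    """Sketch: one realization of a stochastic input material flow.

    The flow is modeled as a mean level plus a truncated series expansion in
    harmonic coordinate functions; the expansion coefficients are drawn from
    a normal distribution (the zero-order approximation mentioned above).
    """
    rng = np.random.default_rng() if rng is None else rng
    t = np.linspace(0.0, duration, n_points)
    flow = np.full_like(t, mean_flow)
    for k, sigma in enumerate(coeff_std, start=1):
        a_k = rng.normal(0.0, sigma)           # random expansion coefficients
        b_k = rng.normal(0.0, sigma)
        w = 2.0 * np.pi * k / duration         # k-th harmonic coordinate function
        flow += a_k * np.cos(w * t) + b_k * np.sin(w * t)
    return t, np.clip(flow, 0.0, None)         # a material flow cannot be negative

# Example: a training set of 1000 realizations for a neural-network model
dataset = [generate_flow_realization(100.0, [15.0, 10.0, 5.0], 3600.0, 720)[1]
           for _ in range(1000)]
```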
MODELING THE DISTRIBUTION OF EMERGENCY RELEASE PRODUCTS AT A NUCLEAR POWER PLANT UNIT
Pub Date : 2024-06-04 DOI: 10.20998/2522-9052.2024.2.03
Viktoriia Biliaieva, L. Levchenko, Iryna Myshchenko, O. Tykhenko, Vitalii Kozachyna
Although much attention is paid to the safe operation of nuclear power plants, an accident with a release of radionuclides remains possible. This is especially true in Ukraine, where nuclear reactors are threatened with damage as a result of military operations. The distribution of products of an emergency release of radioactive substances cannot be studied under laboratory conditions; therefore, the only tool for predicting the development of an accident is modeling the spread of a radionuclide cloud. The purpose of the research is to model the distribution of emergency release products at a nuclear power plant unit in a form suitable for operative assessment of the development of an accident. Results of the research: A mathematical model of the distribution of emission products of a nuclear power plant has been developed, which takes into account the initial activity of the emission products, the settling rate of radioactive particles, the wind speed components, and the change of radionuclide emission intensity over time. A technique has been developed for solving the boundary value problem in a computational domain of complex shape, taking into account obstacles to the spread of emission products. The use of the velocity potential equation in evolutionary form speeds up the calculation, and the chosen splitting scheme of an alternating-triangular method allows the velocity potential to be found explicitly at each splitting step. This enabled a software implementation of the CFD model. The visualized models of emission cloud distribution make it possible to determine the radiation situation at any point of the distribution zone. The developed model makes it possible to quickly predict the development of an accident in space and time, so that measures to protect people from exposure can be taken in the shortest possible time. Conclusions: The obtained emission cloud propagation models and their visualization make it possible to determine the state of environmental pollution under various initial conditions during the development of an accident.
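The paper's CFD model (velocity potential equation, alternating-triangular splitting) is not reproduced below; the sketch only illustrates the general kind of computation involved: explicit time stepping of an emission-product concentration field transported by wind, depleted by particle settling, and blocked by obstacles. All names and parameter values are assumptions for illustration.

```python
import numpy as np

def advect_cloud(c0, u, v, w_settle, dx, dt, n_steps, obstacles=None):
    """Illustrative explicit upwind scheme for an emission-product concentration field.

    c0        initial concentration on a 2D grid (e.g. a point release)
    u, v      wind speed components, assumed constant here for brevity
    w_settle  settling rate of radioactive particles, modeled as a simple sink
    obstacles boolean mask of grid cells blocked for the spread of the cloud
    """
    c = c0.copy()
    for _ in range(n_steps):
        # first-order upwind differences for transport by the wind
        dc_dx = (c - np.roll(c, 1, axis=1)) / dx if u >= 0 else (np.roll(c, -1, axis=1) - c) / dx
        dc_dy = (c - np.roll(c, 1, axis=0)) / dx if v >= 0 else (np.roll(c, -1, axis=0) - c) / dx
        c = c - dt * (u * dc_dx + v * dc_dy) - dt * w_settle * c
        c = np.clip(c, 0.0, None)
        if obstacles is not None:
            c[obstacles] = 0.0                 # no propagation through obstacles
    return c
```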
MEDOIDS AS A PACKING OF ORB IMAGE DESCRIPTORS
Pub Date : 2024-06-04 DOI: 10.20998/2522-9052.2024.2.01
O. Gorokhovatskyi, Olena Yakovleva
The aim of the research. The paper investigates the feasibility of matching medoids obtained from sets of ORB descriptors instead of matching the full sets of binary descriptors in an image classification problem. Research results. Several methods were proposed, including direct brute-force matching of medoids, grouping of medoids by class, and grouping of descriptors followed by calculation of medoids within each group. Numerical experiments were performed for all of these methods in order to compare classification accuracy and inference time. It is shown that using medoids allows processing time to be redistributed so that more of the computation is performed during preprocessing rather than during classification. According to modelling performed on the Leeds Butterfly dataset, matching images based on medoids can reach the same accuracy as matching descriptors (0.69–0.88 for different numbers of features). Medoids require additional computation during the preprocessing stage, but classification becomes faster: in our experiments, for models of comparable accuracy, classification was about 9–10 times faster while preprocessing took about 9–10 times longer. Finally, the efficiency of the proposed ideas was compared to a CNN trained and evaluated on the same data. As expected, the CNN required much more preprocessing (training) time, but the result is worth it: this approach provides the best classification accuracy and inference time. Conclusion. Medoid matching can reach the same accuracy as direct descriptor matching, while allowing the overall modeling time to be redistributed: preprocessing time increases and inference becomes faster.
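The core packing idea, replacing a set of binary ORB descriptors by its medoid, can be sketched as follows: the medoid is the descriptor with the minimal total Hamming distance to all others. The matching pipeline, grouping strategies, and parameters of the paper are not reproduced, and the commented OpenCV call is only an assumed usage example.

```python
import numpy as np

def hamming_medoid(descriptors):
    """Return the medoid of a set of binary ORB descriptors.

    descriptors: uint8 array of shape (n, 32), as produced by OpenCV's ORB
    (32 bytes = 256 bits per descriptor). The medoid minimizes the total
    Hamming distance to all other descriptors in the set.
    """
    bits = np.unpackbits(descriptors, axis=1)                   # (n, 256) array of 0/1
    dists = (bits[:, None, :] != bits[None, :, :]).sum(axis=2)  # pairwise Hamming distances
    return descriptors[np.argmin(dists.sum(axis=1))]

# Assumed usage with OpenCV:
# import cv2
# orb = cv2.ORB_create(nfeatures=500)
# _, des = orb.detectAndCompute(image, None)
# medoid = hamming_medoid(des)      # one compact representative per image or class
```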
DEEPFAKE DETECTION USING TRANSFER LEARNING-BASED XCEPTION MODEL
Pub Date : 2024-06-04 DOI: 10.20998/2522-9052.2024.2.10
Velusamy Rajakumareswaran, Surendran Raguvaran, Venkatachalam Chandrasekar, Sugavanam Rajkumar, Vijayakumar Arun
Justification of the purpose of the research. In recent times, several approaches for face manipulation in videos have become widely available to the public, making it easy for anyone to edit faces in video with realistic results. While beneficial in various domains, these methods could significantly harm society if employed to spread misinformation, so it is vital to reliably detect whether a face has been manipulated in a video sequence. In past works, convolutional neural networks have been used to detect such deepfakes; however, they require a large number of parameters and computations. To overcome these limitations and accurately detect deepfakes in videos, a transfer learning-based model named the Improved Xception model is proposed. Obtained results. The model is trained on extracted facial landmark features using robust training. The detection accuracy of the improved Xception model is evaluated alongside ResNet and Inception with respect to model loss, accuracy, ROC, training time, and the precision-recall curve. The outcomes confirm the success of the proposed model, which employs transfer learning techniques to identify fraudulent videos; furthermore, the method demonstrates a noteworthy 5% increase in efficiency compared to current systems.
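A minimal transfer-learning setup on top of Xception, in the spirit of the approach described above, might look as follows in Keras. This is the generic frozen-backbone recipe; the paper's Improved Xception architecture, its facial-landmark features, and its training schedule are not reproduced, and all hyperparameters are illustrative assumptions.

```python
import tensorflow as tf

def build_xception_detector(input_shape=(299, 299, 3)):
    """Sketch of a transfer-learning deepfake detector built on Xception."""
    base = tf.keras.applications.Xception(
        weights="imagenet", include_top=False, input_shape=input_shape)
    base.trainable = False                                    # freeze pretrained features
    x = tf.keras.layers.GlobalAveragePooling2D()(base.output)
    x = tf.keras.layers.Dropout(0.3)(x)
    out = tf.keras.layers.Dense(1, activation="sigmoid")(x)   # real vs. fake
    model = tf.keras.Model(base.input, out)
    model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
                  loss="binary_crossentropy",
                  metrics=["accuracy", tf.keras.metrics.AUC(name="roc_auc")])
    return model

# model = build_xception_detector()
# model.fit(train_ds, validation_data=val_ds, epochs=10)      # datasets assumed
```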
FPGA-BASED IMPLEMENTATION OF A GAUSSIAN SMOOTHING FILTER WITH POWERS-OF-TWO COEFFICIENTS
Pub Date : 2024-06-04 DOI: 10.20998/2522-9052.2024.2.05
A. Ivashko, Andrey Zuev, Dmytro Karaman, Miha Moškon
The purpose of the study is to develop methods for synthesizing a Gaussian filter that allows simplified hardware and software implementation, in particular a filter with powers-of-two coefficients. Such filters can provide effective denoising of images, including landscape maps, both natural and synthetically generated. The study also analyzes methods of FPGA implementation, comparing their hardware complexity, performance, and noise reduction with traditional Gaussian filters. Results. An algorithm for rounding filter coefficients to powers of two, providing an optimal approximation of the constructed filter to the original, is presented, along with examples of developed filters. The FPGA implementation, based on the Xilinx Artix-7 FPGA, is covered: filter structures, testing methods, simulation results, and verification of the scheme are discussed, and examples of the technological placement of the implemented scheme on the FPGA chip are provided. Comparative evaluations of FPGA resources and performance for the proposed and traditional Gaussian filters are carried out. Digital modeling of the filters and noise reduction estimates for noisy images of the terrain surface are presented. The developed algorithm approximates Gaussian filter coefficients by powers of two for a given window size and maximum number of bits with a relative error of no more than 0.18. Implementing the proposed filters on FPGA reduces hardware costs while maintaining comparable performance. Computer simulation shows that both traditional and proposed Gaussian filters effectively suppress additive white noise in images. The proposed filters improve the signal-to-noise ratio by 5–10 dB and practically match the filtering quality of traditional Gaussian filters.
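The sketch below builds a standard Gaussian kernel and rounds each coefficient to a power of two, which is the general idea behind the filters discussed above. The rounding rule used here is a naive nearest-power-of-two rule for illustration only; the paper's algorithm for an optimal approximation with relative error no more than 0.18 is not reproduced.

```python
import numpy as np

def gaussian_kernel(size, sigma):
    """Standard 2D Gaussian smoothing kernel, normalized to unit sum."""
    ax = np.arange(size) - (size - 1) / 2.0
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    return k / k.sum()

def round_to_powers_of_two(kernel, max_bits=8):
    """Naive rounding of each coefficient to the nearest power of two >= 2**-max_bits."""
    exponents = np.clip(np.round(np.log2(np.maximum(kernel, 2.0 ** -max_bits))),
                        -max_bits, 0)
    return 2.0 ** exponents

k = gaussian_kernel(5, sigma=1.0)
k2 = round_to_powers_of_two(k)
print("max relative error:", np.max(np.abs(k2 - k) / k))
```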
RESEARCH AND ANALYSIS OF EFFICIENCY INDICATORS OF CRITICAL INFRASTRUCTURES IN THE COMMUNICATION SYSTEM
Pub Date : 2024-06-04 DOI: 10.20998/2522-9052.2024.2.07
Bayram Ibrahimov, A. Hasanov, E. Hashimov
The efficiency indicators of the functioning of critical information infrastructures in the communication system are analyzed on the basis of the architectural concept of future networks. The object of the study is hardware and software complexes of special-purpose critical information infrastructures. Critical information infrastructure comprises information and telecommunication systems whose maintenance, reliability, and security are necessary for the safe operation of special-purpose enterprises. In order to avoid various security and reliability incidents, the studied critical infrastructure communication systems require constant analysis and updating of their operating rules. The subject of the research is a method for calculating quality indicators of the functioning of critical information infrastructures in communication systems. In this work, using the example of a communication system based on modern technologies, the sequence of actions for analyzing threats to the security of a critical information infrastructure facility is considered. The purpose of the study is to develop a new approach to creating methods for calculating indicators of efficiency, reliability, and information security of such systems. Based on the analysis, a method for calculating efficiency indicators of critical information infrastructures of communication systems is proposed, and analytical expressions important for further research are obtained. The main conclusions of the study can be implemented and used in critical infrastructures of communication systems to evaluate the quality of functioning of public computer and telecommunication systems.
COMPARATIVE ANALYSIS OF SPECTRAL ANOMALIES DETECTION METHODS ON IMAGES FROM ON-BOARD REMOTE SENSING SYSTEMS
Pub Date : 2024-06-04 DOI: 10.20998/2522-9052.2024.2.06
Artem Hurin, H. Khudov, Oleksandr Kostyria, Oleh Maslenko, Serhii Siadrystyi
The subject matter of the article is methods of detecting spectral anomalies in images from remote sensing systems. The goal is to conduct a comparative analysis of methods for detecting spectral anomalies in images from remote sensing systems. The tasks are: analysis of the main methods of detecting spectral anomalies in images from remote sensing systems; processing of images from remote sensing systems using the basic methods of detecting spectral anomalies; comparative assessment of the quality of these methods. The methods used are: digital image processing, the mathematical apparatus of matrix theory, mathematical modeling, optimization theory, and analytical and empirical methods of image comparison. The following results are obtained. The main methods of detecting spectral anomalies in images from remote sensing systems were analyzed. Processing of images from remote sensing systems using the basic methods of detecting spectral anomalies was carried out. A comparative assessment of the quality of these methods was performed. Conclusions. The spectral difference of the considered methods is expressed through the values of information indicators: Euclidean distance, Mahalanobis distance, brightness contrast, and Kullback-Leibler information divergence. Mathematical modeling of the considered methods was carried out for images with relatively “simple” and complicated backgrounds. It was established that when searching for a spectral anomaly in an image with a complicated background, the method based on the Kullback-Leibler divergence can be more effective than the other considered methods, although it is not optimal. When several areas of the image show high divergence values, they should be further investigated using the specified methods in order to determine the position of the spectral anomaly more accurately.
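Of the indicators listed in the conclusions, the Mahalanobis distance is the easiest to illustrate: a global RX-style detector scores each pixel by its Mahalanobis distance to the background statistics of the whole image. The sketch below is a generic detector of this kind, not the exact procedure of the methods compared in the article.

```python
import numpy as np

def mahalanobis_anomaly_map(image):
    """Per-pixel Mahalanobis distance to the global background statistics.

    image: float array of shape (H, W, B) with B spectral bands. Pixels far
    from the background mean in the Mahalanobis sense are candidate anomalies.
    """
    h, w, b = image.shape
    x = image.reshape(-1, b)
    mu = x.mean(axis=0)
    cov = np.cov(x, rowvar=False) + 1e-6 * np.eye(b)         # regularized covariance
    inv_cov = np.linalg.inv(cov)
    diff = x - mu
    d2 = np.einsum("ij,jk,ik->i", diff, inv_cov, diff)       # squared distances
    return np.sqrt(d2).reshape(h, w)

# anomaly_mask = mahalanobis_anomaly_map(img) > threshold    # threshold chosen empirically
```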
ENSURING THE FUNCTIONAL STABILITY OF THE INFORMATION SYSTEM OF THE POWER PLANT ON THE BASIS OF MONITORING THE PARAMETERS OF THE WORKING CONDITION OF COMPUTER DEVICES
Pub Date : 2024-06-04 DOI: 10.20998/2522-9052.2024.2.12
Oleg Barabash, Olha Svynchuk, I. Salanda, Viktor Mashkov, M. Myroniuk
The functional stability of the information system of a power plant is ensured by a complex of processes and mechanisms capable of maintaining normal operation of the system even in the event of errors, failures, or negative impacts. The aim of the research. An important aspect of ensuring the functional stability of an information system is monitoring its healthy state, as this helps to identify, analyze, and respond to any problems in a timely manner, ensuring reliable and uninterrupted operation of the system. A test diagnosis based on the principle of a wandering diagnostic core was chosen. Research results. An algorithm for detecting failures in the system has been developed, based on decoding the totality of the results of the system's test checks. The developed software application makes it possible to monitor the state of various components of the information system and detect possible problems or failures in a timely manner in order to support continuous operation of the system. The application increases the reliability of diagnostics, reduces diagnostic time, and carries out diagnostics with a specified completeness and depth, which are determined by the test task. Verification. To confirm the correctness of the developed software product, mathematical modeling of the process of diagnosing the information system was carried out; the system was divided into several subsystems containing a certain number of modules. For the division into subsystems, the number of modules in each subsystem is important: it should not exceed 30 modules. This limitation is due to the limited computing power of modern microprocessor technology when solving a class of NP-complete problems.
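The article's wandering-diagnostic-core algorithm is not reproduced here; as a rough sketch of what decoding the totality of test results can look like, the code below brute-forces the smallest set of faulty modules consistent with a syndrome of mutual test outcomes under classical PMC-style assumptions (a fault-free tester reports the true state of the tested module, a faulty tester may report anything). The exponential search also hints at why the subsystem size is limited to about 30 modules.

```python
from itertools import combinations

def decode_syndrome(results, t_max):
    """Brute-force syndrome decoding (a sketch, not the paper's algorithm).

    results[(i, j)] = 0/1 is the outcome of module i testing module j.
    Returns the smallest set of modules (up to t_max) whose faultiness is
    consistent with all recorded outcomes, or None if there is no such set.
    """
    modules = sorted({m for pair in results for m in pair})
    for t in range(t_max + 1):
        for faulty in combinations(modules, t):
            fset = set(faulty)
            consistent = all(
                (i in fset) or (outcome == (1 if j in fset else 0))
                for (i, j), outcome in results.items()
            )
            if consistent:
                return fset
    return None

# Example: module 2 is faulty; modules 0 and 1 test it and each other
syndrome = {(0, 1): 0, (1, 0): 0, (0, 2): 1, (1, 2): 1, (2, 0): 1}
print(decode_syndrome(syndrome, t_max=1))    # {2}
```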
IMAGE CLASSIFIER FOR FAST SEARCH IN LARGE DATABASES
Pub Date : 2024-06-04 DOI: 10.20998/2522-9052.2024.2.02
Valerii Filatov, Anna Filatova, Anatolii Povoroznyuk, Shakhin Omarov
Relevance. The avalanche-like growth of the amount of information on the Internet necessitates the development of effective methods for quickly processing such information in information systems. Clustering of news information is carried out by taking into account both the morphological analysis of texts and the graphic content. Thus, an urgent task is the clustering of images accompanying textual information on various web resources, including news portals. The subject of study is an image classifier that exhibits low sensitivity to the growth of information in databases. The purpose of the article is to increase the efficiency of searching for identical images in databases that receive a daily influx of 10–12 thousand images by developing such an image classifier. Methods used: mathematical modeling, content-based image retrieval, two-dimensional discrete cosine transform, image processing methods, decision-making methods. The following results were obtained. An image classifier with low sensitivity to the growth of database information has been developed, and its properties have been analyzed. The experiments demonstrated that clustering information based on images using the developed classifier is sufficiently fast and cost-effective in terms of information volume and computational power requirements.
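One common way to make identical-image search cheap is a short 2D-DCT signature (a pHash-style fingerprint): only the low-frequency coefficients are kept and binarized, so near-duplicates map to signatures a few bits apart. The sketch below illustrates this general technique; it is not the classifier developed in the article, and the parameter values are assumptions.

```python
import numpy as np
from scipy.fftpack import dct

def dct_signature(gray_image, hash_size=8):
    """Compact 2D-DCT signature for near-duplicate image search (pHash-style sketch).

    gray_image: 2D float array, already resized to a small square (e.g. 32x32).
    Keeping only the low-frequency DCT block yields a 64-bit signature that
    changes little when the image is re-encoded or rescaled.
    """
    coeffs = dct(dct(gray_image, axis=0, norm="ortho"), axis=1, norm="ortho")
    low = coeffs[:hash_size, :hash_size]                     # low-frequency block
    bits = (low > np.median(low)).flatten()
    return np.packbits(bits)                                 # 8 bytes per image

def hamming(sig_a, sig_b):
    """Number of differing bits; a small value means the images are likely identical."""
    return int(np.unpackbits(np.bitwise_xor(sig_a, sig_b)).sum())
```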
METHOD OF TESTING LARGE NUMBERS FOR PRIMALITY
Pub Date : 2024-06-04 DOI: 10.20998/2522-9052.2024.2.11
Vladimir Pevnev, Oles Yudin, Peter Sedlaček, Nina Kuchuk
The current stage of scientific and technological development requires ensuring information security across all domains of human activity. Confidential data and the wireless channels of remote control systems are particularly sensitive to various types of attacks. In these cases, various encryption systems are most commonly used for information protection, and large prime numbers are widely utilized in them. The subject of research is methods for generating prime numbers, which involve selecting candidates for primality and determining the primality of numbers. The objective of the work is the development and theoretical justification of a method for determining the primality of numbers, together with the results of its testing. The work addresses the following main tasks: analyze the most commonly used and latest algorithms, methods, approaches, and tools for primality testing of large numbers; propose and theoretically justify a method for determining the primality of large numbers; and test it. To achieve this aim, general scientific methods have been applied, including analysis of the subject area and mathematical apparatus, set theory, number theory, field theory, and experimental design for organizing and conducting experimental research. The following results have been obtained: modern methods for selecting candidates for primality testing of large numbers have been analyzed, options for generating large prime numbers have been considered, and the main shortcomings of these methods for practical use of the constructed prime numbers have been identified. Methods for determining candidates for primality testing of large numbers and a three-stage method for testing numbers for primality have been proposed and theoretically justified. Testing of the proposed method has demonstrated the correctness of the theoretical conclusions regarding its applicability to the stated problem. Conclusions. The use of a candidate selection strategy allows for a significant reduction in the number of tested numbers: for numbers of about 200 digits, the set of tested numbers is reduced to 8.82%, and as the size of the tested numbers increases, their share decreases further. The proposed method for primality testing is sufficiently simple and effective. The first two stages filter out all composite numbers except Carmichael numbers: in the first stage, trial division by the first ten prime numbers filters out over 80 percent of the tested numbers; in the second stage, composite numbers with factors greater than 29 are sieved out. In the third stage, Carmichael numbers are sieved out. The test is polynomial, deterministic, and unconditional.
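The staged structure described in the conclusions can be sketched as follows. Stage 1 (trial division by the first ten primes) matches the description above; the second and third stages below are only stand-ins, plain trial division and a Fermat check with several bases, because the paper's own criteria, including its deterministic, unconditional sieving of Carmichael numbers, are not reproduced here.

```python
SMALL_PRIMES = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]         # the first ten primes

def stage1(n):
    """Stage 1: trial division by the first ten primes (filters most candidates)."""
    return n in SMALL_PRIMES or all(n % p for p in SMALL_PRIMES)

def stage2(n, limit=10_000):
    """Stage 2 (stand-in): trial division by odd numbers above 29 up to `limit`."""
    p = 31
    while p * p <= n and p <= limit:
        if n % p == 0:
            return False
        p += 2
    return True

def stage3(n):
    """Stage 3 (stand-in): Fermat check with several bases; unlike the paper's
    third stage, this does not eliminate Carmichael numbers."""
    return all(pow(a, n - 1, n) == 1 for a in (2, 3, 5, 7) if n % a)

def is_probable_prime(n):
    return n > 1 and stage1(n) and stage2(n) and stage3(n)

print(is_probable_prime(2**89 - 1))                          # True: a Mersenne prime
```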