
Collection of selected papers of the III International Conference on Information Technology and Nanotechnology: Latest Publications

Network traffic analyzing algorithms on the basis of machine learning methods
R. I. Battalov, A. Nikonov, M. Gayanova, V. V. Berkholts, R. Gayanov
Traffic analysis systems are widely used for monitoring the network activity of users or of a specific user and for restricting client access to certain types of services (VPN, HTTPS) that make content analysis impossible. Algorithms for classifying encrypted traffic and detecting VPN traffic are proposed. Three algorithms for constructing classifiers are considered: MLP, RFT and KNN. The proposed classifier demonstrates recognition accuracy of up to 80% on a test sample. The MLP, RFT and KNN algorithms showed almost identical performance in all experiments. It was also found that the proposed classifiers work better when the network traffic flows are generated using short values of the time parameter (timeout). The novelty lies in the development of network traffic analysis algorithms based on a neural network, differing in the method of feature selection and generation, which makes it possible to classify the existing traffic of protected connections of selected users according to a predetermined set of categories.
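A minimal sketch of the comparison described above, using scikit-learn. The flow-level features (duration, packet counts, inter-arrival statistics), the labelled dataset files and the hyperparameters are hypothetical placeholders, and RFT is read here as a random-forest classifier; this is not the authors' exact pipeline.

```python
# Sketch: compare MLP, random forest (RFT) and KNN on flow-level features.
# X (e.g. flow duration, packet counts, inter-arrival statistics) and labels y
# (traffic category / VPN vs non-VPN) are assumed to be extracted beforehand.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.neural_network import MLPClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import accuracy_score

X = np.load("flow_features.npy")   # hypothetical: n_flows x n_features
y = np.load("flow_labels.npy")     # hypothetical: traffic categories

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y)

scaler = StandardScaler().fit(X_train)
X_train, X_test = scaler.transform(X_train), scaler.transform(X_test)

models = {
    "MLP": MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0),
    "RFT": RandomForestClassifier(n_estimators=200, random_state=0),
    "KNN": KNeighborsClassifier(n_neighbors=5),
}
for name, model in models.items():
    model.fit(X_train, y_train)
    acc = accuracy_score(y_test, model.predict(X_test))
    print(f"{name}: test accuracy = {acc:.2%}")
```

The flow-generation timeout mentioned in the abstract would enter upstream of this sketch, when raw packets are aggregated into flows and the feature matrix is built.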
{"title":"Network traffic analyzing algorithms on the basis of machine learning methods","authors":"R. I. Battalov, A. Nikonov, M. Gayanova, V. V. Berkholts, R. Gayanov","doi":"10.18287/1613-0073-2019-2416-445-456","DOIUrl":"https://doi.org/10.18287/1613-0073-2019-2416-445-456","url":null,"abstract":"Traffic analysis systems are widely used in monitoring the network activity of users or a specific user and restricting client access to certain types of services (VPN, HTTPS) which makes content analysis impossible. Algorithms for classifying encrypted traffic and detecting VPN traffic are proposed. Three algorithms for constructing classifiers are considered - MLP, RFT and KNN. The proposed classifier demonstrates recognition accuracy on a test sample up to 80%. The MLP, RFT and KNN algorithms had almost identical performance in all experiments. It was also found that the proposed classifiers work better when the network traffic flows are generated using short values of the time parameter (timeout). The novelty lies in the development of network traffic analysis algorithms based on a neural network, differing in the method of selection, generation and selection of features, which allows to classify the existing traffic of protected connections of selected users according to a predetermined set of categories.","PeriodicalId":10486,"journal":{"name":"Collection of selected papers of the III International Conference on Information Technology and Nanotechnology","volume":"215 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2019-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"75593223","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
Data-driven profiling of traffic flow with varying road conditions
O. Golovnin
The article describes the road, institutional and weather conditions that affect traffic flow. I propose a method for traffic flow profiling using a data-driven approach. The method operates with macroscopic traffic flow characteristics and detailed data on road conditions. The article presents the results of traffic flow speed and intensity profiling taking weather conditions into account. The study used road traffic and road condition data for the city of Aarhus, Denmark. The results showed that the method is effective for traffic flow forecasting under varying road conditions.
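A minimal sketch of data-driven profiling, assuming the observations are available as a table with timestamp, speed, intensity and a weather label; the file name, column names and the hourly aggregation granularity are hypothetical, not the paper's exact scheme.

```python
# Sketch: build speed/intensity profiles of a traffic flow conditioned on weather.
# Column names ("timestamp", "speed", "intensity", "weather") are assumptions
# about the input data layout.
import pandas as pd

df = pd.read_csv("aarhus_traffic.csv", parse_dates=["timestamp"])  # hypothetical file
df["hour"] = df["timestamp"].dt.hour

# Profile: mean speed and intensity per hour of day for each weather condition.
profile = (df.groupby(["weather", "hour"])[["speed", "intensity"]]
             .mean()
             .rename(columns={"speed": "mean_speed", "intensity": "mean_intensity"}))
print(profile.head())

# A profile like this can serve as a baseline forecast for a given hour
# under an expected weather condition.
```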
{"title":"Data-driven profiling of traffic flow with varying road conditions","authors":"O. Golovnin","doi":"10.18287/1613-0073-2019-2416-149-157","DOIUrl":"https://doi.org/10.18287/1613-0073-2019-2416-149-157","url":null,"abstract":"The article describes the road, institutional and weather conditions that affect the traffic flow. I proposed a method for traffic flow profiling using a data-driven approach. The method operates with macroscopic traffic flow characteristics and detailed data of road conditions. The article presents the results of traffic flow speed and intensity profiling taking into account weather conditions. The study used road traffic and conditions data for the city of Aarhus, Denmark. The results showed that the method is effective for traffic flow forecasting due to varying road conditions.","PeriodicalId":10486,"journal":{"name":"Collection of selected papers of the III International Conference on Information Technology and Nanotechnology","volume":"14 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2019-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"75737871","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 6
Decision support system in the task of ensuring information security of automated process control systems
A. Kirillova, V. Vasilyev, A. Nikonov, V. V. Berkholts
The problem of ensuring the information security of an automated process control system (APCS) is considered. An overview of the main regulatory documents on ensuring the safety of automated process control systems is given. For the operational solution of the tasks of ensuring information security of an APCS, the use of an intelligent decision support system (DSS) is proposed. An example of constructing and implementing decision rules within the DSS based on neuro-fuzzy models is considered.
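The abstract does not give the concrete rule base, so below is only a minimal sketch of how a fuzzy decision rule of the kind mentioned could be evaluated; the input variables, membership functions and the rule itself are hypothetical, and in a neuro-fuzzy DSS their parameters would be learned from data.

```python
# Sketch: evaluating a tiny Mamdani-style fuzzy rule for an ICS security DSS.
# Inputs, membership functions and the rule are illustrative assumptions only.
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    return np.maximum(np.minimum((x - a) / (b - a + 1e-9),
                                 (c - x) / (c - b + 1e-9)), 0.0)

# Hypothetical inputs: anomaly score of network activity, criticality of the node.
anomaly, criticality = 0.7, 0.9

# Rule: IF anomaly is HIGH AND criticality is HIGH THEN threat level is HIGH.
mu_anomaly_high = tri(anomaly, 0.5, 1.0, 1.5)
mu_criticality_high = tri(criticality, 0.5, 1.0, 1.5)
rule_strength = min(mu_anomaly_high, mu_criticality_high)   # fuzzy AND = min

print(f"Firing strength of the 'high threat' rule: {rule_strength:.2f}")
# In a neuro-fuzzy system the membership parameters (a, b, c) would be
# tuned by training rather than fixed by hand as here.
```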
{"title":"Decision support system in the task of ensuring information security of automated process control systems","authors":"A. Kirillova, V. Vasilyev, A. Nikonov, V. V. Berkholts","doi":"10.18287/1613-0073-2019-2416-477-486","DOIUrl":"https://doi.org/10.18287/1613-0073-2019-2416-477-486","url":null,"abstract":"The problem of ensuring the information security of an automated process control system (APCS) is considered. An overview of the main regulatory documents on ensuring the safety of automated process control systems is given. For the operative solution of the tasks of ensuring information security of the automated control system of technological processes it is proposed to use an intelligent decision support system (DSS). An example of the construction and implementation of decision rules in the composition of the DSS based on the use of neurofuzzy models is considered.","PeriodicalId":10486,"journal":{"name":"Collection of selected papers of the III International Conference on Information Technology and Nanotechnology","volume":"45 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2019-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"73160283","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Using genetic algorithm for generating optimal data sets to automatic testing the program code
K. Serdyukov, T V Avdeenko
In the present paper we propose an approach to automatic generation of a test data set based on the application of a genetic algorithm. We consider an original procedure for computing the weights of code operations, which are used to formulate the fitness function as the sum of these weights. The ultimate objective of the fitness function selection is maximization of code coverage by the generated test data set. The idea of the approach is that we first choose the most complex branches of the program code to account for in the fitness function. After a branch is taken into account, its weight is reset to zero in order to ensure maximum code coverage. By adjusting the algorithm, it is possible to ensure that the automatic test data generation algorithm finds the parts of the program code most distant from each other and, thus, a higher level of code coverage is attained. We give a detailed example illustrating the work and advantages of the considered approach and suggest further improvements of the method.
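A minimal sketch of the fitness idea described above (fitness as the sum of branch weights, with a covered branch's weight reset to zero). The program under test, its branch predicates, the weights and the GA parameters are all hypothetical; the point is only to show how zeroing a covered branch steers the search toward uncovered code.

```python
# Sketch: GA-style search for test inputs maximizing weighted branch coverage.
# The "program under test" is a toy; branch weights model branch complexity
# and are zeroed once the branch has been covered.
import random

# Hypothetical branches of the program under test: name -> (predicate, weight).
branches = {
    "b1": (lambda x: x > 100, 5.0),
    "b2": (lambda x: x % 7 == 0, 3.0),
    "b3": (lambda x: 40 <= x <= 50, 8.0),
}
weights = {name: w for name, (_, w) in branches.items()}

def fitness(x):
    """Sum of weights of the branches that input x exercises."""
    return sum(weights[name] for name, (pred, _) in branches.items() if pred(x))

def evolve(pop_size=20, generations=30):
    population = [random.randint(0, 200) for _ in range(pop_size)]
    covered = set()
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        best = population[0]
        for name, (pred, _) in branches.items():
            if pred(best) and name not in covered:
                covered.add(name)
                weights[name] = 0.0          # reset weight to steer search elsewhere
        parents = population[: pop_size // 2]
        offspring = [max(0, p + random.randint(-10, 10)) for p in parents]  # mutation
        population = parents + offspring
    return covered

print("Covered branches:", evolve())
```

In a real setting the predicates would come from instrumenting the program under test, and crossover as well as mutation of structured inputs would replace the integer perturbation used here.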
{"title":"Using genetic algorithm for generating optimal data sets to automatic testing the program code","authors":"K. Serdyukov, T V Avdeenko","doi":"10.18287/1613-0073-2019-2416-173-182","DOIUrl":"https://doi.org/10.18287/1613-0073-2019-2416-173-182","url":null,"abstract":"In present paper we propose an approach to automatic generation of test data set based on application of the genetic algorithm. We consider original procedure for computation of the weights of code operations used to formulate the fitness function being the sum of these weights. Terminal objective and result of fitness function selection is maximization of code coverage by generated test data set. The idea of the genetic algorithm application approach is that first we choose the most complex branches of the program code for accounting in the fitness function. After taking the branch into account its weight is reset to zero in order to ensure maximum code coverage. By adjusting the algorithm, it is possible to ensure that the automatic test data generating algorithm finds the most distant from each other parts of the program code and, thus, the higher level of code coverage is attained. We give a detailed example illustrating the work and advantages of considered approach and suppose further improvements of the method.","PeriodicalId":10486,"journal":{"name":"Collection of selected papers of the III International Conference on Information Technology and Nanotechnology","volume":"306 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2019-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"74972881","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 3
Industrial application of big data services in digital economy
Oleg Surnin, P. Sitnikov, Anastasia Khorina, A. Ivaschenko, A. Stolbova, N. Ilyasova
Nowadays, the world is moving toward automation. Appropriate programs for the implementation of industrial applications are developed by many companies. But is it so easy to implement systems capable of processing large amounts of information in production? Despite multiple positive results in research and development of Big Data technologies, their practical implementation and use remain challenging. At the same time, the most prominent trends of the digital economy require Big Data analysis in various problem domains. We carried out an analysis of existing data processing works. Based on a generalization of theoretical research and a number of real-economy projects in this area, this paper proposes an architecture of a software development kit that can be used as a solid platform to build industrial applications. A basic algorithm for processing data from various sources (sensors, corporate systems, etc.) was formed. Examples are given for the automobile industry with reference to Industry 4.0 paradigm implementation in practice. The given examples are illustrated by trend graphs and by a subject-area ontology of the automotive industry.
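The abstract mentions a basic algorithm for processing data from heterogeneous sources. A minimal sketch of one way such sources might be hidden behind a common interface is given below; the class names, sources and aggregation step are illustrative assumptions, not the SDK described in the paper.

```python
# Sketch: a common interface over heterogeneous data sources feeding one
# processing step. Concrete sources and the aggregation are assumptions.
from abc import ABC, abstractmethod
from typing import Iterable

class DataSource(ABC):
    @abstractmethod
    def read(self) -> Iterable[dict]:
        """Yield records as dictionaries with a common schema."""

class SensorSource(DataSource):
    def read(self):
        yield {"source": "sensor", "value": 42.0}   # hypothetical sensor reading

class ErpSource(DataSource):
    def read(self):
        yield {"source": "erp", "value": 17.5}      # hypothetical corporate record

def process(sources):
    records = [rec for src in sources for rec in src.read()]
    return sum(r["value"] for r in records) / len(records)

print("Mean value across sources:", process([SensorSource(), ErpSource()]))
```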
{"title":"Industrial application of big data services in digital economy","authors":"Oleg Surnin, P. Sitnikov, Anastasia Khorina, A. Ivaschenko, A. Stolbova, N. Ilyasova","doi":"10.18287/1613-0073-2019-2416-409-416","DOIUrl":"https://doi.org/10.18287/1613-0073-2019-2416-409-416","url":null,"abstract":"Nowadays, the world is moving to automation. Appropriate programs for the implementation of industrial applications are developed by many companies. But is it so easy to implement systems capable of processing large amounts of information in production? Despite multiple positive results in research and development of Big Data technologies, their practical implementation and use remain challenging. At the same time most prominent trends of digital economy require Big Data analysis in various problem domains. We carried out the analysis of existing data processing works. Based on generalization of theoretical research and a number of real economy projects in this area there is proposed in this paper an architecture of a software development kit that can be used as a solid platform to build industrial applications. Was formed a basic algorithm for processing data from various sources (sensors, corporate systems, etc.). Examples are given for automobile industry with a reference of Industry 4.0 paradigm implementation in practice. The given examples are illustrated by trends graphs and by subject area ontology of the automotive industry.","PeriodicalId":10486,"journal":{"name":"Collection of selected papers of the III International Conference on Information Technology and Nanotechnology","volume":"4 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2019-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"76033474","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 6
Optimization of computational complexity of lossy compression algorithms for hyperspectral images
L. Lebedev, A. O. Shakhlan
In this paper, we consider the problem of increasing the speed of a recognition-based compression algorithm for hyperspectral images (HSI). Two methods are proposed to reduce the computational complexity of a lossy compression algorithm. The first method is based on the use of compression results obtained with other parameters, including those of the recognition method. The second method is based on adaptive partitioning of hyperspectral image pixels into clusters and calculating similarity estimates only against the templates of one of the subsets. Theoretical and practical estimates of the increase in the speed of the compression algorithm are obtained.
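A minimal sketch of the second idea (partition pixels into clusters and compare each pixel only with the templates of its own cluster). The use of k-means, the random template set and the Euclidean distance are assumptions for illustration, not necessarily the authors' recognition method.

```python
# Sketch: restrict template matching to the cluster a pixel falls into,
# reducing the number of similarity evaluations per pixel.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
pixels = rng.random((10_000, 200))          # hypothetical HSI pixels: n x n_bands
templates = rng.random((64, 200))           # hypothetical template spectra

km = KMeans(n_clusters=8, n_init=10, random_state=0).fit(pixels)
template_cluster = km.predict(templates)    # assign each template to a cluster

def best_template(pixel, cluster_id):
    """Search only the templates belonging to the pixel's cluster."""
    idx = np.where(template_cluster == cluster_id)[0]
    if idx.size == 0:                        # fall back to the full set if empty
        idx = np.arange(len(templates))
    dists = np.linalg.norm(templates[idx] - pixel, axis=1)
    return idx[np.argmin(dists)]

labels = km.predict(pixels)
print("Pixel 0 matched to template", best_template(pixels[0], labels[0]))
```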
{"title":"Optimization of computational complexity of lossy compression algorithms for hyperspectral images","authors":"L. Lebedev, A. O. Shakhlan","doi":"10.18287/1613-0073-2019-2391-297-301","DOIUrl":"https://doi.org/10.18287/1613-0073-2019-2391-297-301","url":null,"abstract":"In this paper, we consider the solution of the problem of increasing the speed of the algorithm for hyperspectral images (HSI) compression, based on recognition methods. Two methods are proposed to reduce the computational complexity of a lossy compression algorithm. The first method is based on the use of compression results obtained with other parameters, including those of the recognition method. The second method is based on adaptive partitioning of hyperspectral image pixels into clusters and calculating the estimates of similarity only with the templates of one of the subsets. Theoretical and practical estimates of the increase in the speed of the compression algorithm are obtained.","PeriodicalId":10486,"journal":{"name":"Collection of selected papers of the III International Conference on Information Technology and Nanotechnology","volume":"5 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2019-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"82143822","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
Solution for the problem of the parameters identification for autoregressions with multiple roots of characteristic equations
N. Andriyanov, M. N. Sluzhivyi
When describing a real image using a mathematical model, the problem of identifying the model parameters is important. The identification itself is easier to perform when the particular type of model is known. In other words, if there is a number of models characterized by different properties and the correspondence between models and suitable image types is known, then the model to be used can be determined in advance. Therefore, in this paper we do not consider criteria for model selection, but perform the identification of parameters for autoregressive models, including those with multiple roots of characteristic equations. This is because the effectiveness of identification is verified on images generated by the model itself. However, even with this approach, where the model is known, one must first determine the order of the model. In this regard, an algorithm for determining the order of the model is investigated on the basis of the Yule-Walker equations, and the optimal parameters of the model are also found. In this case, the proposed algorithm can be used when processing real images.
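A minimal sketch of Yule-Walker estimation of autoregressive parameters from a one-dimensional signal (rows of an image could be treated analogously). The synthetic AR(2) test signal and the fixed model order are hypothetical; the multiple-roots case and the order-selection procedure of the paper are not reproduced here.

```python
# Sketch: estimate AR(p) coefficients from the empirical autocovariance via
# the Yule-Walker equations  R a = r, where R is the p x p Toeplitz matrix
# of autocovariances r_0..r_{p-1} and r = (r_1, ..., r_p).
import numpy as np
from scipy.linalg import toeplitz

def yule_walker(x, p):
    x = np.asarray(x, dtype=float) - np.mean(x)
    n = len(x)
    r = np.array([np.dot(x[: n - k], x[k:]) / n for k in range(p + 1)])
    R = toeplitz(r[:p])                       # autocovariance matrix
    return np.linalg.solve(R, r[1: p + 1])    # AR coefficients a_1..a_p

# Hypothetical AR(2) test signal: x_t = 0.75 x_{t-1} - 0.5 x_{t-2} + e_t
rng = np.random.default_rng(0)
x = np.zeros(5000)
for t in range(2, len(x)):
    x[t] = 0.75 * x[t - 1] - 0.5 * x[t - 2] + rng.normal()

print("Estimated AR(2) coefficients:", yule_walker(x, 2))  # close to [0.75, -0.5]
```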
{"title":"Solution for the problem of the parameters identification for autoregressions with multiple roots of characteristic equations","authors":"N. Andriyanov, M. N. Sluzhivyi","doi":"10.18287/1613-0073-2019-2391-79-85","DOIUrl":"https://doi.org/10.18287/1613-0073-2019-2391-79-85","url":null,"abstract":"When describing a real image using a mathematical model, the problem of model parameters identification is of importance. In this case the identification itself is easier to perform when a particular type of model is known. In other words, if there is a number of models characterized by different properties, then if there is a correspondence with the type of suitable images, then the model to be used can be determined in advance. Therefore, in this paper, we do not consider the criteria for model selection, but perform the identification of parameters for autoregressive models, including those with multiple roots of characteristic equations. This is due to the fact that the effectiveness of identification is verified by the images generated by this model. However, even using this approach where the model is known, one must first determine the order of the model. In this regard, on the basis of YuleWalker equations, an algorithm for determining the order of the model is investigated, and the optimal parameters of the model are also found. In this case the proposed algorithm can be used when processing real images.","PeriodicalId":10486,"journal":{"name":"Collection of selected papers of the III International Conference on Information Technology and Nanotechnology","volume":"32 1 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2019-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"89537449","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 4
Optimization of the process of 3D visualization of the model of urban environment objects generated on the basis of the attributive information from a digital map
M. P. Osipov, O. A. Chekodaev
The paper presents methods for optimizing the process of visualizing the urban environment model based on the characteristics of its presentation. Various approaches are described that reduce the computational complexity of visualizing three-dimensional models by optimizing the display of their geometry and the amount of video memory used. Methods are considered that allow optimizing both the scene as a whole and its individual components.
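The abstract does not name a specific technique, but one standard way to trade geometry detail for rendering speed and video memory, consistent with its description, is distance-based level-of-detail selection. The sketch below is a generic illustration with hypothetical thresholds and class names, not the paper's method.

```python
# Sketch: pick a level of detail (LOD) for each object from its distance to
# the camera, so distant buildings use coarser meshes and less video memory.
from dataclasses import dataclass
import math

@dataclass
class CityObject:
    name: str
    position: tuple          # (x, y, z)
    lod_meshes: list         # index 0 = most detailed

def select_lod(obj, camera_pos, thresholds=(50.0, 200.0, 800.0)):
    """Return the index of the mesh to render for this frame."""
    d = math.dist(obj.position, camera_pos)
    for level, limit in enumerate(thresholds):
        if d < limit:
            return min(level, len(obj.lod_meshes) - 1)
    return len(obj.lod_meshes) - 1   # farthest objects get the coarsest mesh

building = CityObject("block_12", (120.0, 0.0, 35.0), ["hi", "mid", "low"])
print("LOD index:", select_lod(building, camera_pos=(0.0, 0.0, 0.0)))
```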
{"title":"Optimization of the process of 3D visualization of the model of urban environment objects generated on the basis of the attributive information from a digital map","authors":"M. P. Osipov, O. A. Chekodaev","doi":"10.18287/1613-0073-2019-2416-534-541","DOIUrl":"https://doi.org/10.18287/1613-0073-2019-2416-534-541","url":null,"abstract":"The paper presents methods for optimizing the process of visualization of the urban environment model based on the characteristics of its presentation. Various approaches are described which provide a reduction in computational complexity in visualizing threedimensional models that can optimize the display of their geometry and the amount of video memory used. Methods are considered that allow optimizing both the scene as a whole and its individual components.","PeriodicalId":10486,"journal":{"name":"Collection of selected papers of the III International Conference on Information Technology and Nanotechnology","volume":"243 3","pages":""},"PeriodicalIF":0.0,"publicationDate":"2019-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"91435226","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
Local approximation of discrete processes by interpolation polynomials
A. A. Kolpakov, Y. Kropotov
This paper discusses the structure of the devices, and their defining formulas, used for local approximation with power-algebraic polynomials when the observed data are known exactly. A multichannel system for processing discrete sequences is considered. On the basis of the considered system, the acceleration of calculations by specialized computational modules is investigated. The research carried out has shown that the developed model of the multichannel data processing system makes it possible to substantially reduce data processing time.
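A minimal sketch of local approximation of a discrete sequence by a power polynomial fitted over a sliding window. The window length and polynomial degree are hypothetical choices; with a window of exactly degree + 1 samples the fit reduces to exact interpolation, which matches the "data known exactly" setting of the paper.

```python
# Sketch: approximate a discrete sequence locally by a low-degree power
# polynomial over a sliding window (here: degree 2, window of 7 samples).
import numpy as np

def local_poly_smooth(x, window=7, degree=2):
    half = window // 2
    t = np.arange(-half, half + 1)
    out = np.asarray(x, dtype=float).copy()
    for i in range(half, len(x) - half):
        coeffs = np.polyfit(t, x[i - half: i + half + 1], degree)
        out[i] = np.polyval(coeffs, 0)   # value of the local polynomial at the centre
    return out

rng = np.random.default_rng(1)
signal = np.sin(np.linspace(0, 4 * np.pi, 200)) + 0.2 * rng.normal(size=200)
smoothed = local_poly_smooth(signal)
print("RMS change after local approximation:", np.sqrt(np.mean((smoothed - signal) ** 2)))
```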
{"title":"Local approximation of discrete processes by interpolation polynomials","authors":"A. A. Kolpakov, Y. Kropotov","doi":"10.18287/1613-0073-2019-2416-104-110","DOIUrl":"https://doi.org/10.18287/1613-0073-2019-2416-104-110","url":null,"abstract":"This paper discusses the structure of the devices and their defining formulas used for local approximation using power-algebraic polynomials when the observed data are nown exactly. A multichannel system for processing discrete sequences is considered. On the basis of the considered system the research of acceleration of calculations in the system from specialized computational modules is carried out. The carried out researches have shown, that the developed model of multichannel data processing system allows to reduce essentially time for data processing.","PeriodicalId":10486,"journal":{"name":"Collection of selected papers of the III International Conference on Information Technology and Nanotechnology","volume":"13 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2019-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"88731211","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Comparative analysis of segmentation algorithms for the allocation of microcalcifications on mammograms
Y. Podgornova, S. S. Sadykov
Breast cancer is the most common disease of the current century in the female population of the world. The main task of most researchers is the detection of this pathology at an early stage (tumor size less than 7 mm), when a woman can still be helped. An indicator of this disease is the presence of small-point microcalcifications, located in groups within or in the immediate vicinity of the tumor. Microcalcifications are small point-like signs of cancer, resembling irregularly shaped grains of sand with sizes from 100 to 600 microns. The probability of breast cancer increases with the number of microcalcifications per unit area; for example, the probability of cancer is 80% if there are more than 15 microcalcifications per 1 sq. cm. Microcalcifications are often the only sign of breast cancer, therefore their detection, even in the absence of a tumor node, can be a harbinger of cancer. Image segmentation is one way to identify microcalcifications. The conducted research allowed us to choose the optimal segmentation algorithms for mammograms to highlight areas of microcalcifications for further analysis of their groups, sizes, and so on.
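A minimal sketch of one generic route for highlighting small bright candidates on a mammogram (white top-hat filtering followed by Otsu thresholding). This is a baseline for illustration, not one of the specific algorithms compared in the paper; the file name, structuring-element radius and area threshold are hypothetical.

```python
# Sketch: highlight small bright spots (microcalcification candidates) with a
# white top-hat transform and Otsu thresholding. Input and radii are assumptions.
from skimage import io, morphology, filters, measure

img = io.imread("mammogram.png", as_gray=True)          # hypothetical image
# The top-hat keeps bright structures smaller than the disk.
tophat = morphology.white_tophat(img, morphology.disk(5))
binary = tophat > filters.threshold_otsu(tophat)

labels = measure.label(binary)
regions = [r for r in measure.regionprops(labels) if r.area >= 3]
print(f"Candidate microcalcifications found: {len(regions)}")
```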
{"title":"Comparative analysis of segmentation algorithms for the allocation of microcalcifications on mammograms","authors":"Y. Podgornova, S. S. Sadykov","doi":"10.18287/1613-0073-2019-2391-121-127","DOIUrl":"https://doi.org/10.18287/1613-0073-2019-2391-121-127","url":null,"abstract":"Breast cancer is the most common disease of the current century in the female population of the world. The main task of the research of most scientists is the detection of this pathology at an early stage (the tumor size is less than 7 mm) when a woman can still be helped. An indicator of this disease is the presence of small-point microcalcifications, located in groups within or in the immediate circle of the tumor. Microcalcification is a small-point character at cancer, reminding grains of sand of irregular shape which sizes are from 100 to 600 microns. The probability of breast cancer increases with the increase in the number of microcalcifications per unit area. So, the probability of cancer is 80% if more than 15 microcalcifications on 1 sq. cm. The microcalcifications are often the only sign of breast cancer, therefore, their detection even in the absence of a tumor node could be a harbinger to cancer. Image segmentation is one way to identify microcalcifications. The conducted research allowed us to choose the optimal segmentation algorithms of mammograms to highlight areas of microcalcifications for further analysis of their groups, sizes, and so on.","PeriodicalId":10486,"journal":{"name":"Collection of selected papers of the III International Conference on Information Technology and Nanotechnology","volume":"31 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2019-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"87160744","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 7