
Latest Publications in Computer Science

Incorporating prior knowledge into style embedding for unsupervised text style transfer
Computer Speech and Language · IF 3.4 · CAS Zone 3 (Computer Science) · Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2026-10-01 · Epub Date: 2026-03-02 · DOI: 10.1016/j.csl.2026.101968
Yahao Hu , Wei Tao , Yifei Xie , Tianfeng Wang , Zhisong Pan
Text style transfer involves altering the style of a sentence to a specified style while preserving the style-independent content. A prevalent approach assigns an embedding to each style, facilitating control over the style of the generated sentence. However, in unsupervised learning, a vanilla style embedding tends to imitate training-corpus characteristics beyond the style attributes, leading to compromised generalization. Moreover, this approach may struggle to capture the relationships between different styles, further constraining transfer performance. In this paper, we introduce a novel approach that leverages the prior knowledge of Distinctiveness and Commonness to refine style embedding. Specifically, we employ contrastive learning to achieve distinctiveness by clustering positive samples together and distancing negative samples. Additionally, we explore conventional pooling strategies to extract the stylistic commonality across multiple samples of the same style, ultimately deriving a representative style embedding. Experiments on three benchmark datasets show that our proposed method outperforms several embedding-based baselines, confirming its efficacy.
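The "Commonness" pooling step described above can be sketched in a few lines. This is a minimal illustration, assuming toy sentence embeddings and mean pooling as the strategy; the paper's actual vectors and pooling choice are not reproduced here:

```python
# Illustrative sketch: derive a representative style embedding by mean-pooling
# embeddings of samples sharing one style label (the "Commonness" prior).
# The vectors and the mean-pooling choice are assumptions for illustration.

def mean_pool(embeddings):
    """Average a list of equal-length vectors into one representative vector."""
    dim = len(embeddings[0])
    n = len(embeddings)
    return [sum(vec[i] for vec in embeddings) / n for i in range(dim)]

# Three toy sentence embeddings assumed to share the same style label.
samples = [[1.0, 0.0, 2.0],
           [3.0, 0.0, 4.0],
           [2.0, 0.0, 0.0]]

style_embedding = mean_pool(samples)
print(style_embedding)  # [2.0, 0.0, 2.0]
```

The resulting vector then serves as the style's representative embedding; the contrastive "Distinctiveness" objective would additionally push embeddings of different styles apart.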
{"title":"Incorporating prior knowledge into style embedding for unsupervised text style transfer","authors":"Yahao Hu ,&nbsp;Wei Tao ,&nbsp;Yifei Xie ,&nbsp;Tianfeng Wang ,&nbsp;Zhisong Pan","doi":"10.1016/j.csl.2026.101968","DOIUrl":"10.1016/j.csl.2026.101968","url":null,"abstract":"<div><div>Text style transfer involves altering the style of a sentence to a specified style while maintaining the content that is independent of style. A prevalent approach assigns a embedding to each style, facilitating control over the style of the generated sentence. However, in unsupervised learning, vanilla style embedding tends to imitate training corpus characteristics beyond the style attributes, leading to compromised generalization capabilities. Moreover, this approach may struggle to capture the relationships between different styles, thereby further constraining the transfer performance. In this paper, we introduce a novel approach that leverages the prior knowledge of <em>Distinctiveness</em> and <em>Commonness</em> to refine style embedding. Specifically, we employ contrastive learning to achieve distinctiveness by clustering positive samples together and distancing negative samples. Additionally, we explore conventional pooling strategies to extract the stylistic commonality across multiple samples of the same style, ultimately deriving a representative style embedding. 
Experiments on three benchmark datasets show that our proposed method outperforms several embedding-based baselines, confirming the efficacy of our method.</div></div>","PeriodicalId":50638,"journal":{"name":"Computer Speech and Language","volume":"100 ","pages":"Article 101968"},"PeriodicalIF":3.4,"publicationDate":"2026-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147385917","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Bumper-guided representation interpolation for black-box unsupervised domain adaptation
Computer Speech and Language · IF 3.4 · CAS Zone 3 (Computer Science) · Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2026-10-01 · Epub Date: 2026-01-29 · DOI: 10.1016/j.csl.2026.101947
Jin-Seong Choi , Jae-Hong Lee , Joon-Hyuk Chang
Black-box unsupervised domain adaptation (BUDA) presents a challenging scenario in which only unlabeled target data are available, and access to the source model’s parameters is limited. Recent BUDA methods that rely on consistency training struggle with error accumulation caused by fixed source representations. In this paper, we propose a novel framework called bumper-guided representation interpolation (BGRI), which introduces a bumper model that interpolates between the source and target domain representation spaces. Using interpolated representations, the bumper model delivers generalized source information and enables stable and effective knowledge transfer to the target model. Through extensive experiments conducted in real-world scenarios across diverse acoustic and linguistic domains, BGRI consistently outperforms the existing BUDA approaches in terms of adaptation performance and robustness.
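The interpolation idea can be illustrated with a plain convex combination of feature vectors. This is a toy stand-in, assuming invented two-dimensional features; the bumper model itself is a learned network, not this formula:

```python
# Illustrative sketch: interpolating between source- and target-domain
# representations, h = lam * h_src + (1 - lam) * h_tgt. The feature vectors
# here are toy assumptions, not the paper's learned representations.

def interpolate(h_src, h_tgt, lam):
    """Convex combination of two equal-length feature vectors (0 <= lam <= 1)."""
    if not 0.0 <= lam <= 1.0:
        raise ValueError("lam must lie in [0, 1]")
    return [lam * s + (1.0 - lam) * t for s, t in zip(h_src, h_tgt)]

h_source = [1.0, 4.0]   # toy source-domain feature
h_target = [3.0, 0.0]   # toy target-domain feature
print(interpolate(h_source, h_target, 0.5))  # [2.0, 2.0]
```

Sweeping lam from 1 toward 0 moves the representation from the (fixed) source space toward the target space, which is the stabilizing role the bumper model plays.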
{"title":"Bumper-guided representation interpolation for black-box unsupervised domain adaptation","authors":"Jin-Seong Choi ,&nbsp;Jae-Hong Lee ,&nbsp;Joon-Hyuk Chang","doi":"10.1016/j.csl.2026.101947","DOIUrl":"10.1016/j.csl.2026.101947","url":null,"abstract":"<div><div>Black-box unsupervised domain adaptation (BUDA) presents a challenging scenario in which only unlabeled target data are available, and access to the source model’s parameters is limited. Recent BUDA methods that rely on consistency training struggle with error accumulation caused by fixed source representations. In this paper, we propose a novel framework called bumper-guided representation interpolation (BGRI), which introduces a bumper model that interpolates between the source and target domain representation spaces. Using interpolated representations, the bumper model delivers generalized source information and enables stable and effective knowledge transfer to the target model. Through extensive experiments conducted in real-world scenarios across diverse acoustic and linguistic domains, BGRI consistently outperforms the existing BUDA approaches in terms of adaptation performance and robustness.</div></div>","PeriodicalId":50638,"journal":{"name":"Computer Speech and Language","volume":"100 ","pages":"Article 101947"},"PeriodicalIF":3.4,"publicationDate":"2026-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147385935","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Online dominating set and coloring for geometric intersection graphs
Computational Geometry-Theory and Applications · IF 0.7 · CAS Zone 4 (Computer Science) · Q4 MATHEMATICS · Pub Date: 2026-09-01 · Epub Date: 2026-02-09 · DOI: 10.1016/j.comgeo.2026.102256
Minati De , Sambhav Khurana , Satyam Singh
We study the online minimum dominating set and minimum coloring problems in the context of geometric intersection graphs. We consider a graph parameter, the independent kissing number ζ, defined as the size of the largest induced star in the graph minus one. For a graph with independent kissing number ζ, we show that the well-known greedy algorithm achieves an optimal competitive ratio of ζ for the minimum dominating set and the minimum independent dominating set problems. However, for the minimum connected dominating set problem, we obtain a competitive ratio of at most 2ζ. To complement this, we prove that for the minimum connected dominating set problem, any deterministic online algorithm achieves a competitive ratio of at least 2(ζ − 1) for the geometric intersection graph of translates of a convex object in R². Next, for the minimum coloring problem, we obtain an algorithm with a competitive ratio of O(ζ′ log m) for geometric intersection graphs of α-fat objects in R^d having widths in [1, m], where ζ′ is the independent kissing number of the geometric intersection graph of α-fat objects having widths in [1, 2]. Finally, we investigate the value of ζ for geometric intersection graphs of various families of geometric objects.
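The greedy algorithm the abstract refers to admits a compact sketch: each arriving vertex is added to the dominating set only if no previously chosen vertex intersects it. The unit intervals used below as the geometric family are an illustrative assumption, not the paper's setting:

```python
# Illustrative sketch: the online greedy algorithm for minimum dominating set
# on a geometric intersection graph. Vertices arrive one at a time; a vertex
# joins the dominating set iff nothing already chosen dominates it. Unit
# intervals on the real line (given by left endpoints) are a toy example.

def intersects(a, b):
    """Unit intervals [a, a+1] and [b, b+1] intersect iff |a - b| <= 1."""
    return abs(a - b) <= 1.0

def greedy_online_dominating_set(arrivals):
    chosen = []
    for v in arrivals:
        if not any(intersects(v, u) for u in chosen):
            chosen.append(v)
    return chosen

# Intervals arriving online, identified by their left endpoints.
print(greedy_online_dominating_set([0.0, 0.5, 3.0, 3.8, 10.0]))
# [0.0, 3.0, 10.0]
```

By construction every arrival is dominated by some chosen vertex, and the chosen set is independent, which is why the same greedy set witnesses both the dominating set and the independent dominating set bounds.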
{"title":"Online dominating set and coloring for geometric intersection graphs","authors":"Minati De ,&nbsp;Sambhav Khurana ,&nbsp;Satyam Singh","doi":"10.1016/j.comgeo.2026.102256","DOIUrl":"10.1016/j.comgeo.2026.102256","url":null,"abstract":"<div><div>We study the online minimum dominating set and minimum coloring problems in the context of geometric intersection graphs. We consider a graph parameter: the independent kissing number <em>ζ</em>, which is the number equal to “the size of the largest induced star in the graph −1”. For a graph with an independent kissing number of <em>ζ</em>, we show that the famous greedy algorithm achieves an optimal competitive ratio of <em>ζ</em> for the minimum dominating set and the minimum independent dominating set problems. However, for the minimum connected dominating set problem, we obtain a competitive ratio of at most 2<em>ζ</em>. To complement this, we prove that for the minimum connected dominating set problem, any deterministic online algorithm achieves a competitive ratio of at least <span><math><mn>2</mn><mo>(</mo><mi>ζ</mi><mo>−</mo><mn>1</mn><mo>)</mo></math></span>, for the geometric intersection graph of translates of a convex object in <span><math><msup><mrow><mi>R</mi></mrow><mrow><mn>2</mn></mrow></msup></math></span>. 
Next, for the minimum coloring problem, we obtain an algorithm with a competitive ratio of <span><math><mi>O</mi><mrow><mo>(</mo><msup><mrow><mi>ζ</mi></mrow><mrow><mo>′</mo></mrow></msup><mi>log</mi><mo>⁡</mo><mi>m</mi><mo>)</mo></mrow></math></span> for geometric intersection graphs of <em>α</em>-fat objects in <span><math><msup><mrow><mi>R</mi></mrow><mrow><mi>d</mi></mrow></msup></math></span> having widths in <span><math><mo>[</mo><mn>1</mn><mo>,</mo><mi>m</mi><mo>]</mo></math></span>, where <span><math><msup><mrow><mi>ζ</mi></mrow><mrow><mo>′</mo></mrow></msup></math></span> is the independent kissing number of the geometric intersection graph of <em>α</em>-fat objects having widths in <span><math><mo>[</mo><mn>1</mn><mo>,</mo><mn>2</mn><mo>]</mo></math></span>. Finally, we investigate the value of <em>ζ</em> for geometric intersection graphs of various families of geometric objects.</div></div>","PeriodicalId":51001,"journal":{"name":"Computational Geometry-Theory and Applications","volume":"134 ","pages":"Article 102256"},"PeriodicalIF":0.7,"publicationDate":"2026-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146161961","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Optimal right angles crossing graphs
Computational Geometry-Theory and Applications · IF 0.7 · CAS Zone 4 (Computer Science) · Q4 MATHEMATICS · Pub Date: 2026-09-01 · Epub Date: 2026-02-09 · DOI: 10.1016/j.comgeo.2026.102255
Franz J. Brandenburg
A graph is an optimal right angle crossing graph (an optimal RAC graph for short) if it has n vertices and 4n − 10 edges and admits a straight-line drawing in the plane such that each edge is crossed at most once and edges cross only at a right angle. This implies that the drawing is 3T- or TTX-framed, that is, the outer face is a triangle that is adjacent to three triangles or to two triangles and a crossing. An optimal pseudo-RAC graph is the topological version of an optimal RAC graph, where the restrictions to straight-line edges and right angle crossings are dropped.
We show that every 3T-framed optimal pseudo-RAC graph is an optimal RAC graph, that is, 3T-framed optimal pseudo-RAC embeddings can be stretched and orthogonalized. This is not true for TTX-framed embeddings. There are n-vertex 3T- and TTX-framed optimal RAC graphs for every n ≥ 9, and eleven optimal RAC and fourteen optimal pseudo-RAC graphs with at most eight vertices. Optimal pseudo-RAC graphs can be recognized in O(n³) time, where the recognition algorithm demonstrates that every optimal pseudo-RAC graph has at most three 1-planar embeddings, in which edges are crossed at most once.
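The defining right-angle constraint on crossings is easy to state in code: two crossing straight-line edges must have perpendicular direction vectors. The segments below are invented examples, and the check deliberately tests only perpendicularity, not whether the segments actually cross:

```python
# Illustrative sketch: the right-angle condition of RAC drawings. Two
# straight-line edges meet at a right angle iff the dot product of their
# direction vectors is zero. Segments here are toy assumptions, and existence
# of a crossing point is not checked.

def direction(p, q):
    return (q[0] - p[0], q[1] - p[1])

def crosses_at_right_angle(seg1, seg2, tol=1e-9):
    """True when the two segments' direction vectors are perpendicular."""
    d1 = direction(*seg1)
    d2 = direction(*seg2)
    dot = d1[0] * d2[0] + d1[1] * d2[1]
    return abs(dot) <= tol

print(crosses_at_right_angle(((0, 0), (2, 2)), ((0, 2), (2, 0))))  # True
print(crosses_at_right_angle(((0, 0), (2, 1)), ((0, 2), (2, 0))))  # False
```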
{"title":"Optimal right angles crossing graphs","authors":"Franz J. Brandenburg","doi":"10.1016/j.comgeo.2026.102255","DOIUrl":"10.1016/j.comgeo.2026.102255","url":null,"abstract":"<div><div>A graph is an <em>optimal right angle crossing graph</em> (also called an optimal RAC graph for short) if it has n vertices and 4n–10 edges and admits a straight-line drawing in the plane such that each edge is crossed at most once and edges cross only at a right angle. This implies that the drawing is <em>3T-</em> or <em>TTX-framed</em>, that is, the outer face is a triangle that is adjacent to three triangles or to two triangles and a crossing. An optimal <em>pseudo-RAC graph</em> is the topological version of an optimal RAC graph, where the restrictions to straight-line edges and right angle crossings are dropped.</div><div>We show that every 3T-framed optimal pseudo-RAC graph is an optimal RAC graph, that is, 3T-framed optimal pseudo-RAC embeddings can be stretched and orthogonalized. This is not true for TTX-framed embeddings. There are <em>n</em>-vertex 3T- and TTX-framed optimal RAC graphs for every <span><math><mi>n</mi><mo>≥</mo><mn>9</mn></math></span>, and eleven optimal RAC and fourteen optimal pseudo-RAC graphs with at most eight vertices. 
Optimal pseudo-RAC graphs can be recognized in <span><math><mi>O</mi><mo>(</mo><msup><mrow><mi>n</mi></mrow><mrow><mn>3</mn></mrow></msup><mo>)</mo></math></span> time, where the recognition algorithm demonstrates that every optimal pseudo-RAC graph has at most three 1-planar embeddings, in which edges are crossed at most once.</div></div>","PeriodicalId":51001,"journal":{"name":"Computational Geometry-Theory and Applications","volume":"134 ","pages":"Article 102255"},"PeriodicalIF":0.7,"publicationDate":"2026-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146161879","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Semantic change detection of roads and bridges: A fine-grained dataset and multimodal frequency-driven detector
Pattern Recognition · IF 7.6 · CAS Zone 1 (Computer Science) · Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2026-08-01 · Epub Date: 2026-01-29 · DOI: 10.1016/j.patcog.2026.113191
Qing-Ling Shu , Si-Bao Chen , Xiao Wang , Zhi-Hui You , Wei Lu , Jin Tang , Bin Luo
Accurate detection of road and bridge changes is crucial for urban planning and transportation management, yet presents unique challenges for general change detection (CD). Key difficulties arise from maintaining the continuity of roads and bridges as linear structures and disambiguating visually similar land covers (e.g., road construction vs. bare land). Existing spatial-domain models struggle with these issues, further hindered by the lack of specialized, semantically rich datasets. To fill these gaps, we introduce the Road and Bridge Semantic Change Detection (RB-SCD) dataset. Unlike existing benchmarks that primarily focus on general land cover changes, RB-SCD is the first to systematically target 11 specific semantic change transition types (e.g., water → bridge) anchored to traffic infrastructure. This enables a detailed analysis of traffic infrastructure evolution. Building on this, we propose a novel framework, the Multimodal Frequency-Driven Change Detector (MFDCD). MFDCD integrates multimodal features in the frequency domain through two key components: (1) the Dynamic Frequency Coupler (DFC), which leverages wavelet transform to decompose visual features, enabling it to robustly model the continuity of linear transitions; and (2) the Textual Frequency Filter (TFF), which encodes semantic priors into frequency-domain graphs and applies filter banks to align them with visual features, resolving semantic ambiguities. Experiments demonstrate the state-of-the-art performance of MFDCD on RB-SCD and three public CD datasets. The code will be available at https://github.com/DaGuangDaGuang/RB-SCD.
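The wavelet decomposition step inside the Dynamic Frequency Coupler can be illustrated with a single-level Haar transform on a 1-D signal. This is a toy sketch under assumed inputs; the DFC operates on learned visual feature maps, not raw scalar signals:

```python
# Illustrative sketch: single-level Haar wavelet decomposition, the kind of
# low/high frequency split the Dynamic Frequency Coupler applies to visual
# features. The 1-D toy signal and plain-Python Haar step are assumptions.

def haar_decompose(signal):
    """Split an even-length signal into low-pass (pairwise averages) and
    high-pass (pairwise differences) halves, scaled by 1/sqrt(2) so the
    transform is orthonormal."""
    assert len(signal) % 2 == 0, "signal length must be even"
    s = 2 ** 0.5
    low = [(signal[i] + signal[i + 1]) / s for i in range(0, len(signal), 2)]
    high = [(signal[i] - signal[i + 1]) / s for i in range(0, len(signal), 2)]
    return low, high

low, high = haar_decompose([4.0, 2.0, 5.0, 5.0])
print(low)   # smooth (low-frequency) component
print(high)  # detail (high-frequency) component
```

The low band captures smooth structure such as the continuity of a road, while the high band isolates sharp transitions, which is the intuition behind modeling linear infrastructure in the frequency domain.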
{"title":"Semantic change detection of roads and bridges: A fine-grained dataset and multimodal frequency-driven detector","authors":"Qing-Ling Shu ,&nbsp;Si-Bao Chen ,&nbsp;Xiao Wang ,&nbsp;Zhi-Hui You ,&nbsp;Wei Lu ,&nbsp;Jin Tang ,&nbsp;Bin Luo","doi":"10.1016/j.patcog.2026.113191","DOIUrl":"10.1016/j.patcog.2026.113191","url":null,"abstract":"<div><div>Accurate detection of road and bridge changes is crucial for urban planning and transportation management, yet presents unique challenges for general change detection (CD). Key difficulties arise from maintaining the continuity of roads and bridges as linear structures and disambiguating visually similar land covers (e.g., road construction vs. bare land). Existing spatial-domain models struggle with these issues, further hindered by the lack of specialized, semantically rich datasets. To fill these gaps, we introduce the Road and Bridge Semantic Change Detection (RB-SCD) dataset. Unlike existing benchmarks that primarily focus on general land cover changes, RB-SCD is the first to systematically target 11 specific semantic change transition types (e.g., water → bridge) anchored to traffic infrastructure. This enables a detailed analysis of traffic infrastructure evolution. Building on this, we propose a novel framework, the Multimodal Frequency-Driven Change Detector (MFDCD). MFDCD integrates multimodal features in the frequency domain through two key components: (1) the Dynamic Frequency Coupler (DFC), which leverages wavelet transform to decompose visual features, enabling it to robustly model the continuity of linear transitions; and (2) the Textual Frequency Filter (TFF), which encodes semantic priors into frequency-domain graphs and applies filter banks to align them with visual features, resolving semantic ambiguities. Experiments demonstrate the state-of-the-art performance of MFDCD on RB-SCD and three public CD datasets. 
The code will be available at <span><span>https://github.com/DaGuangDaGuang/RB-SCD</span><svg><path></path></svg></span>.</div></div>","PeriodicalId":49713,"journal":{"name":"Pattern Recognition","volume":"176 ","pages":"Article 113191"},"PeriodicalIF":7.6,"publicationDate":"2026-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146174257","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
DAFS: A distribution-aware hierarchical feature selection method for long-tailed classification
Pattern Recognition · IF 7.6 · CAS Zone 1 (Computer Science) · Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2026-08-01 · Epub Date: 2026-02-06 · DOI: 10.1016/j.patcog.2026.113218
Yang Zhang , Jie Shi , Yanfang Liu , Hong Zhao
Feature selection for long-tailed data has become a research hotspot due to the high-dimensional features and imbalanced distributions of real-world data. Although some existing methods effectively balance the data, correctly classifying tail classes and distinguishing easily confused classes in long-tailed data remain two significant challenges. To address these issues, we propose a distribution-aware hierarchical feature selection method for long-tailed classification (DAFS). First, we embed sample-distribution-based punishment coefficients into the loss and regularization terms to balance feature weights for head and tail classes, which enhances the accuracy of classifying tail classes. Then, we use multi-granularity knowledge and similarities among classes to design feature-differentiation regularization terms that improve the distinguishability of easily confused classes. Finally, extensive experimental results demonstrate that DAFS outperforms ten other traditional and advanced feature selection methods on different datasets.
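One common way to realize distribution-based punishment coefficients is inverse-frequency class weighting, sketched below. The N/(K·n_c) formula and the toy class counts are assumptions chosen for illustration, not the coefficients defined in the paper:

```python
# Illustrative sketch: distribution-based punishment coefficients via
# inverse-frequency class weights, w_c = N / (K * n_c), so rare (tail)
# classes contribute more to the loss. Formula and counts are assumptions.

def class_weights(counts):
    """Map each class to N / (K * n_c); tail classes get weight > 1."""
    total = sum(counts.values())
    k = len(counts)
    return {c: total / (k * n) for c, n in counts.items()}

# Long-tailed toy distribution: one head class, two tail classes.
counts = {"head": 80, "tail_a": 15, "tail_b": 5}
w = class_weights(counts)
print(w)  # head weight < 1, tail weights > 1
```

Multiplying each sample's loss term by its class weight is the standard way such coefficients rebalance head and tail classes during training.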
{"title":"DAFS: A distribution-aware hierarchical feature selection method for long-tailed classification","authors":"Yang Zhang ,&nbsp;Jie Shi ,&nbsp;Yanfang Liu ,&nbsp;Hong Zhao","doi":"10.1016/j.patcog.2026.113218","DOIUrl":"10.1016/j.patcog.2026.113218","url":null,"abstract":"<div><div>Feature selection for long-tailed data has become a research hotspot due to high-dimensional features and imbalanced distributions in real-world data. Although some of them effectively balance the data, correctly classifying tail classes and distinguishing easy-confused classes in long-tailed data are still two significant challenges. To address these issues, we propose a distribution-aware hierarchical feature selection method for long-tailed classification (DAFS). First, we embed sample distribution-based punishment coefficients into loss and regularization terms to balance feature weights for head and tail classes, which enhances the accuracy of classifying tail classes. Then, we use multi-granularity knowledge and similarities among classes to design feature differentiation regularization terms for improving the distinguishability of easy-confused classes. Finally, extensive experimental results demonstrate that DAFS outperforms the other ten traditional and advanced feature selection methods on different datasets.</div></div>","PeriodicalId":49713,"journal":{"name":"Pattern Recognition","volume":"176 ","pages":"Article 113218"},"PeriodicalIF":7.6,"publicationDate":"2026-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146174265","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
CD-DPC: Centrifugal degree based density peaks clustering algorithm
Pattern Recognition · IF 7.6 · CAS Zone 1 (Computer Science) · Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2026-08-01 · Epub Date: 2026-02-06 · DOI: 10.1016/j.patcog.2026.113223
Linlin Ma , Hui Li , Xincheng Liu , Huihui Chu , Yue Guan , Yuzhen Zhao , Yawen Chen , Da Wang , Wenke Zang
The Density Peak Clustering (DPC) algorithm is simple and efficient. However, DPC and its variants identify clusters only by locating the centers of single or multiple sparse clusters, without considering the coherence of the clustering structure, so some clusters cannot be accurately captured. In addition, relative distance and density are used only to identify cluster centers and do not describe the relative positions of the remaining sample points. To address these issues, this paper proposes an adaptive density peak clustering algorithm based on centrifugal degree (CD-DPC). The centrifugal degree reflects the relative position of a sample point within its cluster. CD-DPC categorizes sample points into support, structural, coherent, and decoration points based on centrifugal degree. On this basis, the number of clusters is obtained automatically by applying different association methods to sample points with different centrifugal degrees, which greatly reduces the influence of human factors. Finally, the clustering results are further improved by introducing shared nearest neighbors for the final association of decoration points. Extensive experiments on synthetic and UCI datasets show that this algorithm outperforms other comparative algorithms.
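The two quantities underlying DPC, local density ρ and relative distance δ, can be sketched directly. The points and cutoff below are toy assumptions, and the centrifugal-degree refinement that CD-DPC adds on top is not reproduced:

```python
# Illustrative sketch of classic DPC scores: rho_i counts neighbors within a
# cutoff d_c; delta_i is the distance to the nearest higher-density point
# (or the farthest point, for the global density peak). Toy data and cutoff
# are assumptions; CD-DPC's centrifugal degree is not implemented here.

def dpc_scores(points, d_c):
    dist = lambda p, q: sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5
    n = len(points)
    rho = [sum(1 for j in range(n) if j != i and dist(points[i], points[j]) < d_c)
           for i in range(n)]
    delta = []
    for i in range(n):
        higher = [dist(points[i], points[j]) for j in range(n) if rho[j] > rho[i]]
        delta.append(min(higher) if higher else
                     max(dist(points[i], p) for p in points))
    return rho, delta

# One tight cluster around the origin plus a distant outlier.
pts = [(0, 0), (0, 1), (1, 0), (0.5, 0.5), (10, 10)]
rho, delta = dpc_scores(pts, d_c=0.8)
# The densest point, (0.5, 0.5), gets the largest delta: a cluster center.
```

Centers are the points with both large ρ and large δ; everything CD-DPC adds (centrifugal degree, point roles, shared nearest neighbors) refines how the remaining points are associated with those centers.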
{"title":"CD-DPC: Centrifugal degree based density peaks clustering algorithm","authors":"Linlin Ma ,&nbsp;Hui Li ,&nbsp;Xincheng Liu ,&nbsp;Huihui Chu ,&nbsp;Yue Guan ,&nbsp;Yuzhen Zhao ,&nbsp;Yawen Chen ,&nbsp;Da Wang ,&nbsp;Wenke Zang","doi":"10.1016/j.patcog.2026.113223","DOIUrl":"10.1016/j.patcog.2026.113223","url":null,"abstract":"<div><div>The Density Peak Clustering (DPC) algorithm is simple and efficient. But DPC and its variants identify clusters only by identifying the centers of single or multiple sparse clusters without considering the coherence of the clustering structure, which tends to result in clusters that cannot be accurately captured. In addition, relative distance and density are only used to identify the centers of clusters and do not provide a description of the relative positions of the remaining sample points. To address these issues, this paper proposes an adaptive density peak clustering algorithm based on centrifugal degree (CD-DPC). The centrifugal degree reflects the relative position of the sample points in the cluster. The CD-DPC categorizes sample points into support, structural, coherent and decoration points based on centrifugal degree. Based on this, the number of clusters is automatically obtained by using different association methods for sample points with different centrifugal degrees, which greatly reduces the influence of human factors. Finally, the clustering results are further improved by introducing shared nearest neighbors for the final association of decorated points. 
Extensive experiments on synthetic and UCI datasets show that this algorithm outperforms other comparative algorithms.</div></div>","PeriodicalId":49713,"journal":{"name":"Pattern Recognition","volume":"176 ","pages":"Article 113223"},"PeriodicalIF":7.6,"publicationDate":"2026-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146174446","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
MonoTDF: Temporal deep feature learning for generalizable monocular 3D object detection
Pattern Recognition · IF 7.6 · CAS Zone 1 (Computer Science) · Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2026-08-01 · Epub Date: 2026-01-28 · DOI: 10.1016/j.patcog.2026.113184
Xiu-Zhi Chen , Yi-Kai Chiu , Chih-Sheng Huang , Yen-Lin Chen
Monocular 3D object detection has gained significant attention due to its cost-effectiveness and practicality in real-world applications. However, existing monocular methods often struggle with depth estimation and spatial consistency, limiting their accuracy in complex environments. In this work, we introduce a Temporal Deep Feature Learning framework, which enhances monocular 3D object detection by integrating temporal features across sequential frames. Our approach leverages a novel deep feature auxiliary module based on convolutional recurrent structures, effectively capturing spatiotemporal information to improve depth perception and detection robustness. The proposed module is model-agnostic and can be seamlessly integrated into various existing monocular detection frameworks. Extensive experiments across multiple state-of-the-art monocular 3D object detection models demonstrate consistent performance improvements, particularly in detecting small or partially occluded objects. Our results highlight the effectiveness and generalizability of the proposed approach, making it a promising solution for real-world autonomous perception systems. The source code of this work is at: https://github.com/Shuray36/MonoTDF-Temporal-Deep-Feature-Learning-for-Generalizable-Monocular-3D-Object-Detection.
MonoTDF: Temporal deep feature learning for generalizable monocular 3D object detection. Xiu-Zhi Chen, Yi-Kai Chiu, Chih-Sheng Huang, Yen-Lin Chen. Pattern Recognition, Vol. 176, Article 113184 (2026). IF 7.6. DOI: 10.1016/j.patcog.2026.113184
Citations: 0
Multimodal behavioral analysis for autism spectrum disorder assessment
IF 7.6 · CAS Zone 1 (Computer Science) · Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2026-08-01 · Epub Date: 2026-02-07 · DOI: 10.1016/j.patcog.2026.113155
Yunxiu Zhao, Shigang Wang, Feiyong Jia, Honghua Li, Jinyang Wu, Jian Wei, Yan Zhao
Scale-dependent approaches have shown great potential in diagnosing autism spectrum disorder (ASD). However, such methods often involve lengthy evaluation procedures and require substantial resources, including trained professionals and specialized equipment, which significantly limit their scalability and feasibility for large-scale or routine clinical assessments. In this paper, we propose a novel multimodal behavioral signal analysis (MBSA) approach for the intelligent assessment of ASD. Specifically, we first leverage speech and visual cues to identify the Target Movement Area (TMA), thereby enhancing recognition efficiency. Then, an adaptive fine-tuning strategy is employed to improve the generalization and efficiency of pre-trained models in small-sample action recognition tasks. An attention-based detection method is further incorporated to strengthen the semantic understanding of observed behavioral patterns. To enable effective ASD classification, we develop a behavioral quantification scoring method that structurally models the relationship between behavioral features and disease indicators. We collected a multimodal behavioral database of 160 participants in a real clinical setting and assessed ASD using this data. Extensive experiments demonstrate that the proposed MBSA approach significantly outperforms many state-of-the-art methods. With competitive performance and a solid theoretical foundation, MBSA provides a practical and scalable solution for ASD screening and holds promise for broader applications in the intelligent diagnosis of other neurodevelopmental disorders.
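The behavioral quantification scoring method is only named in the abstract; the feature names, weights, and threshold below are invented for illustration. The underlying idea — map normalized behavioral features to a single score, then threshold it for classification — can be sketched as:

```python
# Hypothetical feature weights; the paper's actual scoring model is not given in the abstract.
WEIGHTS = {
    "repetitive_motion": 0.40,
    "eye_contact_avoidance": 0.35,
    "speech_atypicality": 0.25,
}

def behavior_score(features):
    """Map normalized behavioral features (each in 0..1) to a single risk score in 0..1."""
    return sum(WEIGHTS[k] * features.get(k, 0.0) for k in WEIGHTS)

def classify(features, threshold=0.5):
    """Threshold the aggregate score into a screening decision."""
    return "ASD-indicated" if behavior_score(features) >= threshold else "typical"
```

A real system would learn the weights and calibrate the threshold against clinical labels rather than fixing them by hand.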
Pattern Recognition, Vol. 176, Article 113155 (2026).
Citations: 0
Continual relation extraction with wake-sleep memory consolidation
IF 7.6 · CAS Zone 1 (Computer Science) · Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2026-08-01 · Epub Date: 2026-01-29 · DOI: 10.1016/j.patcog.2026.113192
Tingting Hang, Ya Guo, Jun Huang, Yirui Wu, Umapada Pal, Shivakumara Palaiahnakote
Continual Relation Extraction (CRE) has achieved significant success due to its ability to adapt to new relations without frequent retraining. However, existing methods still face challenges such as overfitting and representation bias. Inspired by the wake-sleep memory consolidation process of the human brain, this paper proposes a Wake-Sleep Memory Consolidation (WSMC) framework to address these issues systematically. During the wake phase, the model simulates the brain’s information processing mechanism, quickly encoding new relations and storing them in short-term memory. We also introduce the Experience Iterative Learning (EIL) approach, which dynamically adjusts the distribution of relation samples. This approach corrects the model’s representation bias and enhances memory stability through experience replay. During the sleep phase, the model consolidates existing knowledge by replaying long-term memory. Moreover, the framework generates diverse dream data from existing memory sets, thereby increasing the diversity of the training data and improving the model’s generalization capability. Experimental results show that WSMC significantly outperforms other CRE baseline methods on the FewRel and TACRED datasets. Our source code is available at https://github.com/Gyanis9/WSMC.git.
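The abstract does not specify how the experience-replay memory behind EIL is maintained. As one plausible sketch (all names hypothetical), a class-balanced buffer that keeps a fixed number of exemplars per relation — refreshed by per-relation reservoir sampling so old relations stay represented as new samples stream in — could look like:

```python
import random
from collections import defaultdict

class ReplayMemory:
    """Class-balanced rehearsal buffer: at most k exemplars per relation,
    kept representative of the stream via per-relation reservoir sampling."""

    def __init__(self, per_relation=5, seed=0):
        self.per_relation = per_relation
        self.store = defaultdict(list)   # relation -> stored exemplars
        self.seen = defaultdict(int)     # relation -> samples observed so far
        self.rng = random.Random(seed)

    def add(self, relation, sample):
        self.seen[relation] += 1
        bucket = self.store[relation]
        if len(bucket) < self.per_relation:
            bucket.append(sample)
        else:
            # Reservoir sampling: the new sample replaces a stored one
            # with probability k / n_seen, keeping the buffer unbiased.
            j = self.rng.randrange(self.seen[relation])
            if j < self.per_relation:
                bucket[j] = sample

    def replay_batch(self):
        """One exemplar per stored relation — a balanced rehearsal step."""
        return [self.rng.choice(bucket) for bucket in self.store.values() if bucket]
```

Balancing the replay batch across relations is one simple way to counteract the representation bias the paper targets, since no single frequent relation can dominate rehearsal.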
Pattern Recognition, Vol. 176, Article 113192 (2026).
Citations: 0
Book学术
Literature sharing · Smart journal selection · Latest publications · Sharing guidelines · Contact us: info@booksci.cn
Book学术 provides a free academic resource search service that helps researchers at home and abroad retrieve Chinese and English literature, committed to the most convenient, high-quality user experience.
Copyright © 2023 Book学术 All rights reserved.
京公网安备 11010802042870号 · 京ICP备2023020795号-1