Pub Date: 2024-05-11 | DOI: 10.1007/s10472-024-09944-8
Rashel Fam, Yves Lepage
We perform a study on the universal morphological analysis task: given a word form, generate the lemma (lemmatisation) and its corresponding morphosyntactic descriptions (MSD analysis). Experiments are carried out on the SIGMORPHON 2018 Shared Task: Morphological Reinflection dataset, which covers more than 100 languages with varying degrees of morphological richness under three data size conditions: low, medium and high. We consider three main approaches: morpheme-based (eager learning), holistic (lazy learning), and neural (eager learning). Performance is evaluated on the two subtasks of lemmatisation and MSD analysis. For the lemmatisation subtask, under all three data sizes, experimental results show that the holistic approach predicts more accurate lemmata, while the morpheme-based approach, when it errs, produces lemmata closer to the reference answers. For the MSD analysis subtask, under all three data sizes, the holistic approach achieves higher recall, while the morpheme-based approach is more precise. However, the trade-off between precision and recall leads to very similar overall F1 scores for the two systems. On the whole, neural approaches suffer under low-resource conditions, but they achieve the best performance of all approaches as the size of the training data increases.
{"title":"A study of universal morphological analysis using morpheme-based, holistic, and neural approaches under various data size conditions","authors":"Rashel Fam, Yves Lepage","doi":"10.1007/s10472-024-09944-8","DOIUrl":"https://doi.org/10.1007/s10472-024-09944-8","url":null,"abstract":"<p>We perform a study on the universal morphological analysis task: given a word form, generate the lemma (lemmatisation) and its corresponding morphosyntactic descriptions (MSD analysis). Experiments are carried out on the SIGMORPHON 2018 Shared Task: Morphological Reinflection Task dataset which consists of more than 100 different languages with various morphological richness under three different data size conditions: low, medium and high. We consider three main approaches: morpheme-based (eager learning), holistic (lazy learning), and neural (eager learning). Performance is evaluated on the two subtasks of lemmatisation and MSD analysis. For the lemmatisation subtask, under all three data sizes, experimental results show that the holistic approach predicted more accurate lemmata, while the morpheme-based approach produced lemmata closer to the answers when it produces the wrong answers. For the MSD analysis subtask, under all three data sizes, the holistic approach achieves higher recall, while the morpheme-based approach is more precise. However, the trade-off between precision and recall of the two systems leads to a very similar overall F1 score. On the whole, neural approaches suffer under low resource conditions, but they achieve the best performance in comparison to the other approaches when the size of the training data increases.</p>","PeriodicalId":7971,"journal":{"name":"Annals of Mathematics and Artificial Intelligence","volume":"42 1","pages":""},"PeriodicalIF":1.2,"publicationDate":"2024-05-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140930676","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-04-26 | DOI: 10.1007/s10472-024-09943-9
Seonho Park, Panos M. Pardalos
Estimating the data density is one of the challenging problems in the deep learning community. In this paper, we present a simple yet effective methodology for estimating the data density using the Donsker-Varadhan variational lower bound on the KL divergence, modelled with a deep neural network. We demonstrate that the optimal critic function associated with the Donsker-Varadhan representation of the KL divergence between the data distribution and the uniform distribution can estimate the data density. We also present the deep neural network-based model and its stochastic learning procedure. Experimental results and possible applications of the proposed method show that it is competitive with previous methods for data density estimation and lends itself to a wide range of applications.
{"title":"Deep data density estimation through Donsker-Varadhan representation","authors":"Seonho Park, Panos M. Pardalos","doi":"10.1007/s10472-024-09943-9","DOIUrl":"https://doi.org/10.1007/s10472-024-09943-9","url":null,"abstract":"<p>Estimating the data density is one of the challenging problem topics in the deep learning society. In this paper, we present a simple yet effective methodology for estimating the data density using the Donsker-Varadhan variational lower bound on the KL divergence and the modeling based on the deep neural network. We demonstrate that the optimal critic function associated with the Donsker-Varadhan representation on the KL divergence between the data and the uniform distribution can estimate the data density. Also, we present the deep neural network-based modeling and its stochastic learning procedure. The experimental results and possible applications of the proposed method demonstrate that it is competitive with the previous methods for data density estimation and has a lot of possibilities for various applications.</p>","PeriodicalId":7971,"journal":{"name":"Annals of Mathematics and Artificial Intelligence","volume":"19 1","pages":""},"PeriodicalIF":1.2,"publicationDate":"2024-04-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140806658","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-04-17 | DOI: 10.1007/s10472-024-09938-6
Jérémy Lemée, Danai Vachtsevanou, Simon Mayer, Andrei Ciortea
The ecological psychologist James J. Gibson defined the notion of affordances to refer to the action possibilities that environments offer to animals. In this paper, we show how (artificial) agents can discover and exploit affordances in a Multi-Agent System (MAS) environment to achieve their goals. To indicate to agents which affordances are present in their environment, and whether these are likely to help the agents achieve their objectives, the environment may expose signifiers that take into account the current situation of the environment and of the agent. On this basis, we define a Signifier Exposure Mechanism used by the environment to compute which signifiers should be exposed to agents, so that agents perceive only information about affordances likely to be relevant to them, thereby increasing their interaction efficiency. Agents can then interact with partially observable environments more efficiently, because the signifiers indicate the affordances they can exploit towards given purposes. Signifiers thereby facilitate the exploration and exploitation of MAS environments. Implementations of signifiers and of the Signifier Exposure Mechanism are presented within the context of a Hypermedia Multi-Agent System, and the utility of the approach is demonstrated through the development of a scenario.
{"title":"Signifiers for conveying and exploiting affordances: from human-computer interaction to multi-agent systems","authors":"Jérémy Lemée, Danai Vachtsevanou, Simon Mayer, Andrei Ciortea","doi":"10.1007/s10472-024-09938-6","DOIUrl":"10.1007/s10472-024-09938-6","url":null,"abstract":"<div><p>The ecological psychologist James J. Gibson defined the notion of affordances to refer to what action possibilities environments offer to animals. In this paper, we show how (artificial) agents can discover and exploit affordances in a Multi-Agent System (MAS) environment to achieve their goals. To indicate to agents what affordances are present in their environment and whether it is likely that these may help the agents to achieve their objectives, the environment may expose signifiers while taking into account the current situation of the environment and of the agent. On this basis, we define a Signifier Exposure Mechanism that is used by the environment to compute which signifiers should be exposed to agents in order to permit agents to only perceive information about affordances that are likely to be relevant to them, and thereby increase their interaction efficiency. If this is successful, agents can interact with partially observable environments more efficiently because the signifiers indicate the affordances they can exploit towards given purposes. Signifiers thereby facilitate the exploration and the exploitation of MAS environments. Implementations of signifiers and of the Signifier Exposure Mechanism are presented within the context of a Hypermedia Multi-Agent System, and the utility of this approach is presented through the development of a scenario.</p></div>","PeriodicalId":7971,"journal":{"name":"Annals of Mathematics and Artificial Intelligence","volume":"92 4","pages":"815 - 835"},"PeriodicalIF":1.2,"publicationDate":"2024-04-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s10472-024-09938-6.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140616802","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-04-11 | DOI: 10.1007/s10472-024-09940-y
Dominic Widdows, Aaranya Alexander, Daiwei Zhu, Chase Zimmerman, Arunava Majumder
This paper describes experiments showing that some tasks in natural language processing (NLP) can already be performed using quantum computers, though so far only with small datasets. We demonstrate various approaches to topic classification. The first uses an explicit word-based approach, in which word-topic weights are implemented as fractional rotations of individual qubits, and a phrase is classified based on the accumulation of these weights onto a scoring qubit, using entangling quantum gates. This is compared with more scalable quantum encodings of word embedding vectors, which are used to compute kernel values in a quantum support vector machine: this approach achieved an average of 62% accuracy on classification tasks involving over 10,000 words, which is the largest such quantum computing experiment to date. We describe a quantum probability approach to bigram modeling that can be applied to understand sequences of words and formal concepts, investigate a generative approximation to these distributions using a quantum circuit Born machine, and introduce an approach to ambiguity resolution in verb-noun composition using single-qubit rotations for simple nouns and 2-qubit entangling gates for simple verbs. The smaller systems presented have been run successfully on physical quantum computers, and the larger ones have been simulated. We show that statistically meaningful results can be obtained, but the quality of individual results varies much more using real datasets than using artificial language examples from previous quantum NLP research. Related NLP research is compared, partly with respect to contemporary challenges including informal language, fluency, and truthfulness.
{"title":"Near-term advances in quantum natural language processing","authors":"Dominic Widdows, Aaranya Alexander, Daiwei Zhu, Chase Zimmerman, Arunava Majumder","doi":"10.1007/s10472-024-09940-y","DOIUrl":"10.1007/s10472-024-09940-y","url":null,"abstract":"<div><p>This paper describes experiments showing that some tasks in natural language processing (NLP) can already be performed using quantum computers, though so far only with small datasets. We demonstrate various approaches to topic classification. The first uses an explicit word-based approach, in which word-topic weights are implemented as fractional rotations of individual qubits, and a phrase is classified based on the accumulation of these weights onto a scoring qubit, using entangling quantum gates. This is compared with more scalable quantum encodings of word embedding vectors, which are used to compute kernel values in a quantum support vector machine: this approach achieved an average of 62% accuracy on classification tasks involving over 10000 words, which is the largest such quantum computing experiment to date. We describe a quantum probability approach to bigram modeling that can be applied to understand sequences of words and formal concepts, investigate a generative approximation to these distributions using a quantum circuit Born machine, and introduce an approach to ambiguity resolution in verb-noun composition using single-qubit rotations for simple nouns and 2-qubit entangling gates for simple verbs. The smaller systems presented have been run successfully on physical quantum computers, and the larger ones have been simulated. We show that statistically meaningful results can be obtained, but the quality of individual results varies much more using real datasets than using artificial language examples from previous quantum NLP research. Related NLP research is compared, partly with respect to contemporary challenges including informal language, fluency, and truthfulness.</p></div>","PeriodicalId":7971,"journal":{"name":"Annals of Mathematics and Artificial Intelligence","volume":"92 5","pages":"1249 - 1272"},"PeriodicalIF":1.2,"publicationDate":"2024-04-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140582189","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-04-05 | DOI: 10.1007/s10472-024-09941-x
Serdar Kadıoğlu, Bernard Kleynhans, Xin Wang
Recommender systems have become the backbone of personalized services that provide tailored experiences to individual users, yet designing new recommendation applications with limited or no available training data remains a challenge. To address this issue, we focus on selecting the universe of items for experimentation in recommender systems by leveraging a recently introduced combinatorial problem. On the one hand, selecting a large set of items is desirable to increase their diversity. On the other hand, a smaller set of items enables rapid experimentation and minimizes the time and the amount of data required to train machine learning models. We first show how to optimize for such conflicting criteria using a multi-level optimization framework. Then, we shift our focus to the operational setting of a recommender system. In practice, to work effectively in a dynamic environment where new items are introduced to the system, we need to explore users’ behaviors and interests continuously. To that end, we show how to integrate the item selection approach with active learning to guide randomized exploration in an ongoing fashion. Our hybrid approach combines techniques from discrete optimization, unsupervised clustering, and latent text embeddings. Experimental results on well-known movie and book recommendation benchmarks demonstrate the benefits of optimized item selection and efficient exploration.
{"title":"Integrating optimized item selection with active learning for continuous exploration in recommender systems","authors":"Serdar Kadıoğlu, Bernard Kleynhans, Xin Wang","doi":"10.1007/s10472-024-09941-x","DOIUrl":"https://doi.org/10.1007/s10472-024-09941-x","url":null,"abstract":"<p>Recommender Systems have become the backbone of personalized services that provide tailored experiences to individual users, yet designing new recommendation applications with limited or no available training data remains a challenge. To address this issue, we focus on selecting the universe of items for experimentation in recommender systems by leveraging a recently introduced combinatorial problem. On the one hand, selecting a large set of items is desirable to increase the diversity of items. On the other hand, a smaller set of items enables rapid experimentation and minimizes the time and the amount of data required to train machine learning models. We first present how to optimize for such conflicting criteria using a multi-level optimization framework. Then, we shift our focus to the operational setting of a recommender system. In practice, to work effectively in a dynamic environment where new items are introduced to the system, we need to explore users’ behaviors and interests continuously. To that end, we show how to integrate the item selection approach with active learning to guide randomized exploration in an ongoing fashion. Our hybrid approach combines techniques from discrete optimization, unsupervised clustering, and latent text embeddings. Experimental results on well-known movie and book recommendation benchmarks demonstrate the benefits of optimized item selection and efficient exploration.</p>","PeriodicalId":7971,"journal":{"name":"Annals of Mathematics and Artificial Intelligence","volume":"105 1","pages":""},"PeriodicalIF":1.2,"publicationDate":"2024-04-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140582298","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-03-21 | DOI: 10.1007/s10472-024-09939-5
Vikram Voleti, Chris Finlay, Adam Oberman, Christopher Pal
Recent work has shown that Neural Ordinary Differential Equations (ODEs) can serve as generative models of images from the perspective of Continuous Normalizing Flows (CNFs). Such models offer exact likelihood calculation and invertible generation/density estimation. In this work we introduce a Multi-Resolution variant of such models (MRCNF) by characterizing the conditional distribution over the additional information required to generate a fine image that is consistent with the coarse image. We introduce a transformation between resolutions that leaves the log-likelihood unchanged. We show that this approach yields comparable likelihood values on various image datasets, with improved performance at higher resolutions and fewer parameters, using only one GPU. Further, we examine the out-of-distribution properties of MRCNFs and find that they are similar to those of other likelihood-based generative models.
{"title":"Multi-resolution continuous normalizing flows","authors":"Vikram Voleti, Chris Finlay, Adam Oberman, Christopher Pal","doi":"10.1007/s10472-024-09939-5","DOIUrl":"10.1007/s10472-024-09939-5","url":null,"abstract":"<div><p>Recent work has shown that Neural Ordinary Differential Equations (ODEs) can serve as generative models of images using the perspective of Continuous Normalizing Flows (CNFs). Such models offer exact likelihood calculation, and invertible generation/density estimation. In this work we introduce a Multi-Resolution variant of such models (MRCNF), by characterizing the conditional distribution over the additional information required to generate a fine image that is consistent with the coarse image. We introduce a transformation between resolutions that allows for no change in the log likelihood. We show that this approach yields comparable likelihood values for various image datasets, with improved performance at higher resolutions, with fewer parameters, using only one GPU. Further, we examine the out-of-distribution properties of MRCNFs, and find that they are similar to those of other likelihood-based generative models.</p></div>","PeriodicalId":7971,"journal":{"name":"Annals of Mathematics and Artificial Intelligence","volume":"92 5","pages":"1295 - 1317"},"PeriodicalIF":1.2,"publicationDate":"2024-03-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140203649","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-03-19 | DOI: 10.1007/s10472-024-09929-7
L. Thorne McCarty
This paper develops a theory of clustering and coding that combines a geometric model with a probabilistic model in a principled way. The geometric model is a Riemannian manifold with a Riemannian metric, $g_{ij}(\mathbf{x})$, which we interpret as a measure of dissimilarity. The probabilistic model consists of a stochastic process with an invariant probability measure that matches the density of the sample input data. The link between the two models is a potential function, $U(\mathbf{x})$, and its gradient, $\nabla U(\mathbf{x})$. We use the gradient to define the dissimilarity metric, which guarantees that our measure of dissimilarity will depend on the probability measure. Finally, we use the dissimilarity metric to define a coordinate system on the embedded Riemannian manifold, which gives us a low-dimensional encoding of our original data.
{"title":"Clustering, coding, and the concept of similarity","authors":"L. Thorne McCarty","doi":"10.1007/s10472-024-09929-7","DOIUrl":"10.1007/s10472-024-09929-7","url":null,"abstract":"<div><p>This paper develops a theory of <i>clustering</i> and <i>coding</i> that combines a geometric model with a probabilistic model in a principled way. The geometric model is a Riemannian manifold with a Riemannian metric, <span>({g}_{ij}(textbf{x}))</span>, which we interpret as a measure of <i>dissimilarity</i>. The probabilistic model consists of a stochastic process with an invariant probability measure that matches the density of the sample input data. The link between the two models is a potential function, <span>(U(textbf{x}))</span>, and its gradient, <span>(nabla U(textbf{x}))</span>. We use the gradient to define the dissimilarity metric, which guarantees that our measure of dissimilarity will depend on the probability measure. Finally, we use the dissimilarity metric to define a coordinate system on the embedded Riemannian manifold, which gives us a low-dimensional encoding of our original data.</p></div>","PeriodicalId":7971,"journal":{"name":"Annals of Mathematics and Artificial Intelligence","volume":"92 5","pages":"1197 - 1248"},"PeriodicalIF":1.2,"publicationDate":"2024-03-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140172695","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-03-01 | DOI: 10.1007/s10472-024-09936-8
Teddy Lazebnik, Avi Rosenfeld
Feature selection (FS) stability is an important topic of recent interest. Finding stable features is important for creating reliable, non-overfitted feature sets, which in turn can be used to generate machine learning models that have better accuracy, offer better explanations, and are less prone to adversarial attacks. Several definitions of FS stability are currently in wide use. In this paper, we demonstrate that existing stability metrics fail to quantify certain key properties of many datasets, such as resilience to data drift or to non-uniformly distributed missing values. To address this shortcoming, we propose a new definition for FS stability inspired by Lyapunov stability in dynamic systems. We show that the proposed definition is statistically different from the classical record-stability on $n=90$ datasets. We present the advantages and disadvantages of using Lyapunov and other stability definitions, and demonstrate three scenarios in each of which one of the three stability definitions is best suited.
{"title":"A new definition for feature selection stability analysis","authors":"Teddy Lazebnik, Avi Rosenfeld","doi":"10.1007/s10472-024-09936-8","DOIUrl":"10.1007/s10472-024-09936-8","url":null,"abstract":"<div><p>Feature selection (FS) stability is an important topic of recent interest. Finding stable features is important for creating reliable, non-overfitted feature sets, which in turn can be used to generate machine learning models with better accuracy and explanations and are less prone to adversarial attacks. There are currently several definitions of FS stability that are widely used. In this paper, we demonstrate that existing stability metrics fail to quantify certain key elements of many datasets such as resilience to data drift or non-uniformly distributed missing values. To address this shortcoming, we propose a new definition for FS stability inspired by Lyapunov stability in dynamic systems. We show the proposed definition is statistically different from the classical <i>record-stability</i> on (<span>(n=90)</span>) datasets. We present the advantages and disadvantages of using Lyapunov and other stability definitions and demonstrate three scenarios in which each one of the three proposed stability metrics is best suited.</p></div>","PeriodicalId":7971,"journal":{"name":"Annals of Mathematics and Artificial Intelligence","volume":"92 3","pages":"753 - 770"},"PeriodicalIF":1.2,"publicationDate":"2024-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s10472-024-09936-8.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140017398","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-02-20 | DOI: 10.1007/s10472-024-09934-w
Francine Chen, Yanxia Zhang, Minh Nguyen, Matt Klenk, Charlene Wu
{"title":"Correction: Personalized choice prediction with less user information","authors":"Francine Chen, Yanxia Zhang, Minh Nguyen, Matt Klenk, Charlene Wu","doi":"10.1007/s10472-024-09934-w","DOIUrl":"10.1007/s10472-024-09934-w","url":null,"abstract":"","PeriodicalId":7971,"journal":{"name":"Annals of Mathematics and Artificial Intelligence","volume":"92 3","pages":"771 - 771"},"PeriodicalIF":1.2,"publicationDate":"2024-02-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s10472-024-09934-w.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142412597","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}