Pub Date: 2024-01-31, DOI: 10.1007/s10618-024-01003-4
Abstract
The detection of central nodes in a network is a fundamental task in network science and graph data analysis. Over the past decades, numerous centrality measures have been proposed to characterize what constitutes a central node. However, few studies address this issue from a statistical inference perspective. In this paper, we formulate central node identification as a weighted kernel density estimation problem on graphs. This formulation provides a generic framework for recognizing central nodes. On one hand, some existing centrality evaluation metrics can be unified under this framework through the choice of kernel function. On the other hand, more effective methods for node centrality assessment can be developed through proper specification of the weighting coefficients. Experimental results on 20 simulated networks and 53 real networks show that our method outperforms six prior state-of-the-art centrality measures as well as two recently proposed centrality evaluation methods. To the best of our knowledge, this is the first work to address central node identification via weighted kernel density estimation.
Title: Central node identification via weighted kernel density estimation
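As an illustration of the formulation, here is a minimal sketch; all choices in it — a Gaussian kernel over BFS shortest-path distances, uniform node weights, and the bandwidth `h` — are our own illustrative assumptions, not the paper's specification. The density at node v is a weighted sum of kernel values of its graph distances to all other nodes, and the highest-density node is declared central.

```python
import math
from collections import deque

def bfs_distances(adj, src):
    """Unweighted shortest-path distances from src via BFS."""
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def kde_centrality(adj, weights=None, h=1.0):
    """Weighted kernel density estimate per node:
    f(v) = sum_u w_u * exp(-d(v, u)^2 / (2 h^2))."""
    nodes = list(adj)
    w = weights or {u: 1.0 for u in nodes}
    score = {}
    for v in nodes:
        d = bfs_distances(adj, v)
        score[v] = sum(w[u] * math.exp(-d[u] ** 2 / (2 * h * h))
                       for u in nodes if u in d)
    return score

# Star graph: the hub should receive the highest density.
adj = {0: [1, 2, 3, 4], 1: [0], 2: [0], 3: [0], 4: [0]}
scores = kde_centrality(adj)
central = max(scores, key=scores.get)
print(central)  # hub node 0
```

Swapping the kernel (e.g., a hard cutoff at distance 1 recovers degree-like behavior) is how different existing measures can be unified under one density estimate, which is the unification the abstract describes.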
Pub Date: 2024-01-09, DOI: 10.1007/s10618-023-00999-5
Marco Heyden, Edouard Fouché, Vadim Arzamasov, Tanja Fenn, Florian Kalinke, Klemens Böhm
Change detection is of fundamental importance when analyzing data streams. Detecting changes both quickly and accurately enables monitoring and prediction systems to react, e.g., by issuing an alarm or by updating a learning algorithm. However, detecting changes is challenging when observations are high-dimensional. In high-dimensional data, change detectors should not only be able to identify when changes happen, but also in which subspace they occur. Ideally, one should also quantify how severe they are. Our approach, ABCD, has these properties. ABCD learns an encoder-decoder model and monitors its accuracy over a window of adaptive size. ABCD derives a change score based on Bernstein’s inequality to detect deviations in terms of accuracy, which indicate changes. Our experiments demonstrate that ABCD outperforms its best competitor by up to 20% in F1-score on average. It can also accurately estimate changes’ subspace, together with a severity measure that correlates with the ground truth.
Title: Adaptive Bernstein change detector for high-dimensional data streams
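To illustrate the kind of test the abstract names, here is a minimal ADWIN-style sketch of change detection via Bernstein's inequality. The window splitting, the range bound `M`, and the confidence `delta` are illustrative assumptions; ABCD's actual score is computed on an encoder-decoder's accuracy over an adaptive window, which this sketch stands in for with a generic bounded error stream.

```python
import math

def bernstein_eps(var, n, M, delta):
    """Deviation bound from Bernstein's inequality for n samples
    bounded by M with variance var, at confidence delta."""
    ln = math.log(2.0 / delta)
    return math.sqrt(2.0 * var * ln / n) + 2.0 * M * ln / (3.0 * n)

def detect_change(errors, M=1.0, delta=0.05):
    """Flag a change if some split of the window yields half-means
    that differ by more than the summed Bernstein bounds.
    The returned split need not equal the true change location."""
    n = len(errors)
    mean = lambda xs: sum(xs) / len(xs)
    var = lambda xs, m: sum((x - m) ** 2 for x in xs) / len(xs)
    for k in range(2, n - 1):
        left, right = errors[:k], errors[k:]
        ml, mr = mean(left), mean(right)
        eps = (bernstein_eps(var(left, ml), len(left), M, delta) +
               bernstein_eps(var(right, mr), len(right), M, delta))
        if abs(ml - mr) > eps:
            return k
    return None

stream = [0.05] * 50 + [0.9] * 50   # accuracy drops mid-stream
print(detect_change(stream) is not None)  # True
```

Because Bernstein's bound uses the empirical variance, it is tighter than a Hoeffding-style bound on low-variance streams, which is one reason to prefer it for accuracy monitoring.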
Pub Date: 2024-01-05, DOI: 10.1007/s10618-023-00992-y
Zhanbo Liang, Jie Guo, Weidong Qiu, Zheng Huang, Shujun Li
With the rise of Web 2.0 platforms such as online social media, people's private information, such as their location, occupation and even family details, is often inadvertently disclosed through online discussions. It is therefore important to detect such unwanted privacy disclosures to help alert the people affected and the online platform. In this paper, privacy disclosure detection is modeled as a multi-label text classification (MLTC) problem, and a new privacy disclosure detection model is proposed to construct an MLTC classifier for detecting online privacy disclosures. The classifier takes an online post as input and outputs multiple labels, each reflecting a possible privacy disclosure. The proposed representation method combines three different sources of information: the input text itself, the label-to-text correlation and the label-to-label correlation. A double-attention mechanism combines the first two sources of information, and a graph convolutional network extracts the third, which is then used to help fuse the features extracted from the first two. Our extensive experimental results, obtained on a public dataset of privacy-disclosing posts on Twitter, demonstrate that the proposed privacy disclosure detection method significantly and consistently outperforms other state-of-the-art methods on all key performance indicators.
Title: When graph convolution meets double attention: online privacy disclosure detection with multi-label text classification
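The label-to-label correlation component rests on standard GCN propagation over a label graph, which the following numpy sketch illustrates. The toy co-occurrence matrix over three hypothetical privacy labels, the embedding sizes, and the random weights are our own illustrative assumptions, not the paper's configuration.

```python
import numpy as np

# Toy co-occurrence graph over 3 hypothetical privacy labels
# (location, occupation, family): edges where labels co-occur.
A = np.array([[0, 1, 1],
              [1, 0, 0],
              [1, 0, 0]], dtype=float)

# Symmetrically normalized adjacency with self-loops:
# A_hat = D^{-1/2} (A + I) D^{-1/2}
A_tilde = A + np.eye(3)
d = A_tilde.sum(axis=1)
D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
A_hat = D_inv_sqrt @ A_tilde @ D_inv_sqrt

# One GCN layer: H' = ReLU(A_hat @ H @ W), propagating label
# embeddings along co-occurrence edges.
rng = np.random.default_rng(0)
H = rng.normal(size=(3, 4))   # initial label embeddings
W = rng.normal(size=(4, 4))   # layer weights
H_out = np.maximum(A_hat @ H @ W, 0.0)
print(H_out.shape)  # (3, 4)
```

The propagated label embeddings `H_out` are what a model of this kind would then fuse with the text features produced by the attention layers.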
Pub Date: 2024-01-02, DOI: 10.1007/s10618-023-01000-z
Zhenxiang Cao, N. Seeuws, Maarten Vos, Alexander Bertrand
Title: Correction: A semi-supervised interactive algorithm for change point detection
Pub Date: 2023-12-29, DOI: 10.1007/s10618-023-00989-7
Moshe Unger, Michel Wedel, Alexander Tuzhilin
We propose the use of a deep learning architecture, called RETINA, to predict multi-alternative, multi-attribute consumer choice from eye movement data. RETINA directly uses the complete time series of raw eye-tracking data from both eyes as input to state-of-the-art Transformer and metric learning deep learning methods. Using the raw data as input eliminates the information loss that may result from first calculating fixations, deriving metrics from the fixation data and analysing those metrics, as has often been done in eye movement research, and allows us to apply deep learning to eye-tracking data sets of the size commonly encountered in academic and applied research. Using a data set of 112 respondents who made choices among four laptops, we show that the proposed architecture outperforms other state-of-the-art machine learning methods (standard BERT, LSTM, AutoML, logistic regression) calibrated on raw data or fixation data. The analysis of partial time and data segments reveals RETINA's ability to predict choice outcomes well before participants reach a decision: using a mere 5 s of data, the architecture achieves a predictive validation accuracy of over 0.7. We assess which features of the eye movement data contribute to RETINA's prediction accuracy, and make recommendations on how the proposed deep learning architecture can serve as a basis for future academic research, in particular its application to eye movements collected with front-facing video cameras.
Title: Predicting consumer choice from raw eye-movement data using the RETINA deep learning architecture
Pub Date: 2023-12-26, DOI: 10.1007/s10618-023-00994-w
Huizi Wu, Cong Geng, Hui Fang
Session-based recommendation (SR) aims to dynamically recommend items to a user based on the sequence of the user's most recent interactions. Most existing studies on SR adopt advanced deep learning methods. However, the majority consider only a single behavior type (e.g., click), while the few that consider multi-typed behaviors fail to take full advantage of the relationships between products (items). To this end, this paper proposes a novel approach, called Substitutable and Complementary Relationships from Multi-behavior Data (SCRM), to better exploit the relationships between products for effective recommendation. Specifically, we first construct substitutable and complementary graphs based on a user's sequential behaviors in every session by jointly considering 'click' and 'purchase' behaviors. We then design a denoising network to remove false relationships, and further impose constraints on the two relationships via a specially designed loss function. Extensive experiments on two e-commerce datasets demonstrate the superiority of our model over state-of-the-art methods, and the effectiveness of every component in SCRM.
Title: Session-based recommendation by exploiting substitutable and complementary relationships from multi-behavior data
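A hedged sketch of how such relation graphs might be derived from click/purchase sessions follows. The rule used here (co-clicked but not co-purchased → substitutable; co-purchased → complementary) is a simplified heuristic standing in for SCRM's actual construction, and all item names are invented.

```python
from itertools import combinations
from collections import defaultdict

def build_graphs(sessions):
    """Heuristic relation graphs from (clicked, purchased) sessions:
    items clicked together but not bought together -> substitutable;
    items purchased together -> complementary."""
    sub, comp = defaultdict(set), defaultdict(set)
    for clicked, purchased in sessions:
        bought = set(purchased)
        for a, b in combinations(sorted(bought), 2):
            comp[a].add(b); comp[b].add(a)
        for a, b in combinations(sorted(set(clicked)), 2):
            if not (a in bought and b in bought):
                sub[a].add(b); sub[b].add(a)
    return sub, comp

sessions = [
    (["laptopA", "laptopB", "mouse"], ["laptopA", "mouse"]),
    (["laptopB", "bag"], ["laptopB", "bag"]),
]
sub, comp = build_graphs(sessions)
print("laptopB" in sub["laptopA"])  # compared, not co-bought -> True
print("mouse" in comp["laptopA"])   # bought together -> True
```

Edges produced by a rule like this are inevitably noisy, which is precisely why a model such as SCRM needs the denoising network the abstract describes.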
Pub Date: 2023-12-12, DOI: 10.1007/s10618-023-00990-0
Ling Jian, Kai Shao, Ying Liu, Jundong Li, Xijun Liang
Distilling actionable patterns from large-scale streaming data in the presence of concept drift is a challenging problem, especially when the data is polluted with noisy labels. To date, various data stream mining algorithms have been proposed and extensively used in many real-world applications. Considering the functional complementarity of classical online learning algorithms, and with the goal of combining their advantages, we propose an Online Ensemble Classification (OEC) algorithm that integrates the predictions of different base online classification algorithms. OEC works by learning the weights of the base classifiers dynamically within the classical Normalized Exponentiated Gradient (NEG) framework. As a result, OEC inherits the adaptability and flexibility of concept drift-tracking online classifiers while maintaining the robustness of noise-resistant ones. Theoretically, we show that OEC is a low-regret algorithm, which makes it a good candidate for learning from noisy streaming data. Extensive experiments on both synthetic and real-world datasets demonstrate the effectiveness of the proposed OEC method.
Title: OEC: an online ensemble classifier for mining data streams with noisy labels
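The NEG weighting scheme the abstract refers to can be sketched as a multiplicative-weights update: each base classifier's weight is multiplied by exp(−η · loss) and the weights are renormalized, so members that keep losing least come to dominate the vote. The learning rate `eta` and the loss values below are illustrative, not taken from the paper.

```python
import math

def neg_update(weights, losses, eta=0.5):
    """Normalized exponentiated gradient step: scale each expert's
    weight by exp(-eta * loss), then renormalize to sum to 1."""
    scaled = [w * math.exp(-eta * l) for w, l in zip(weights, losses)]
    z = sum(scaled)
    return [s / z for s in scaled]

def ensemble_predict(weights, preds):
    """Weighted combination of base classifiers' probability outputs."""
    return sum(w * p for w, p in zip(weights, preds))

w = [1 / 3] * 3                        # three base online classifiers
for losses in [[0.9, 0.1, 0.5]] * 20:  # expert 1 keeps losing least
    w = neg_update(w, losses)
print(max(range(3), key=lambda i: w[i]))  # 1
```

Because a single noisy label only nudges the weights multiplicatively, no base classifier is discarded outright, which is how an ensemble of this form can stay robust while still tracking drift.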
Pub Date: 2023-12-08, DOI: 10.1007/s10618-023-00987-9
Nourhan Ahmed, Lars Schmidt-Thieme
Handling incomplete multivariate time series is an important and fundamental concern for a variety of domains. Existing time-series imputation approaches rely on basic assumptions regarding relationship information between sensors, posing significant challenges since inter-sensor interactions in the real world are often complex and unknown beforehand. Specifically, there is a lack of in-depth investigation into (1) the coexistence of relationships between sensors and (2) the incorporation of reciprocal impact between sensor properties and inter-sensor relationships for the time-series imputation problem. To fill this gap, we present the Structure-aware Decoupled imputation network (SaD), which is designed to model sensor characteristics and relationships between sensors in distinct latent spaces. Our approach is equipped with a two-step knowledge integration scheme that incorporates the influence between the sensor attribute information as well as sensor relationship information. The experimental results indicate that when compared to state-of-the-art models for time-series imputation tasks, our proposed method can reduce error by around 15%.
Title: Structure-aware decoupled imputation network for multivariate time series
Pub Date: 2023-12-01, Epub Date: 2023-06-19, DOI: 10.1007/s12070-023-03947-3
Kalyana Sundaram Chidambaram, Manjul Muraleedharan, Amit Keshri, Sabaratnam Mayilvaganan, Nazrin Hameed, Mohd Aqib, Arushi Kumar, Ravi Sankar Manogaran, Raj Kumar
Benign parotid tumors follow an indolent course and present as slow-growing painless swelling in the pre- and infra-auricular areas. The treatment of choice is surgery. Although the gold-standard technique is superficial parotidectomy, extracapsular dissection (ECD) is an alternative with the same outcome and fewer complications. This study discusses our experience with extracapsular dissection and the surgical nuances that yield better results. A retrospective study was done of histologically confirmed cases of pleomorphic adenoma of the parotid gland that underwent extracapsular dissection between September 2019 and March 2023. Demographic details, clinical characteristics and outcomes were evaluated. There were 33 patients, 16 females and 17 males, with a mean age of 32.75 years. All cases presented as slow-growing painless swelling with a mean duration of 5 years. Most of the tumors (94%) were between 2 and 4 cm in size, with a few larger than 4 cm. All underwent extracapsular dissection with complete excision. There was only one complication (a seroma) and no incidence of facial palsy in our experience with ECD. The goal of benign parotid surgery is complete removal of the tumor with minimal complications, which can be achieved with ECD: it offers good tumor clearance, lower complication rates and good cosmesis. Thus, this minimally invasive parotid surgery can be a worthwhile option in properly selected cases.
Title: The Outcomes and Surgical Nuances of Minimally Invasive Parotid Surgery for Pleomorphic Adenoma
Pub Date: 2023-11-18, DOI: 10.1007/s10618-023-00988-8
Sondre Sørbø, Massimiliano Ruocco
The field of time series anomaly detection is constantly advancing, with several methods available, making it a challenge to determine the most appropriate method for a specific domain. The evaluation of these methods is facilitated by the use of metrics, which vary widely in their properties. Despite the existence of new evaluation metrics, there is limited agreement on which metrics are best suited for specific scenarios and domains, and the most commonly used metrics have faced criticism in the literature. This paper provides a comprehensive overview of the metrics used for the evaluation of time series anomaly detection methods, and also defines a taxonomy of these based on how they are calculated. By defining a set of properties for evaluation metrics and a set of specific case studies and experiments, twenty metrics are analyzed and discussed in detail, highlighting the unique suitability of each for specific tasks. Through extensive experimentation and analysis, this paper argues that the choice of evaluation metric must be made with care, taking into account the specific requirements of the task at hand.
Title: Navigating the metric maze: a taxonomy of evaluation metrics for anomaly detection in time series
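One concrete example of the metric properties such a taxonomy must capture is the widely criticized point-adjust protocol, sketched below on our own toy labels: flagging a single point inside a true anomaly segment counts the whole segment as detected, which can dramatically inflate F1 relative to point-wise scoring.

```python
def f1(y_true, y_pred):
    """Plain point-wise F1 over binary anomaly labels."""
    tp = sum(t and p for t, p in zip(y_true, y_pred))
    fp = sum((not t) and p for t, p in zip(y_true, y_pred))
    fn = sum(t and (not p) for t, p in zip(y_true, y_pred))
    return 2 * tp / (2 * tp + fp + fn) if tp else 0.0

def point_adjust(y_true, y_pred):
    """If any point of a true anomaly segment is flagged, mark the
    whole segment as detected (the commonly criticized protocol)."""
    adj, i, n = list(y_pred), 0, len(y_true)
    while i < n:
        if y_true[i]:
            j = i
            while j < n and y_true[j]:
                j += 1
            if any(y_pred[i:j]):
                for k in range(i, j):
                    adj[k] = 1
            i = j
        else:
            i += 1
    return adj

y_true = [0, 1, 1, 1, 1, 0, 0, 0]
y_pred = [0, 0, 0, 1, 0, 0, 0, 0]  # one hit inside the segment
print(f1(y_true, y_pred))                        # 0.4
print(f1(y_true, point_adjust(y_true, y_pred)))  # 1.0
```

The gap between the two scores on the same prediction is exactly the kind of property difference that makes metric choice task-dependent, as the paper argues.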