Concept maps are visual tools for organizing knowledge, commonly used in education and design. Concept mapping often involves reading source materials while iteratively developing a conceptual model, a process in which feedback is crucial. Learners (e.g., students, designers) refer to reading materials and receive feedback from instructors (e.g., teachers, stakeholders) based on the maps they create. However, the annotations learners make while reading, such as highlights, are usually not visible to instructors, which limits how tailored that feedback can be. We propose incorporating annotation practices into concept mapping: learners highlight text and link these highlights to existing or newly created concepts in their concept map, so that instructors can consult both the map and the relevant readings when giving feedback. This vision is realized through Concept&Go, a plug-in for the CmapCloud editor that supports the interplay between mapping, reading, and feedback during concept mapping. The effectiveness of this approach is demonstrated through a focus group (n=5) and a UTAUT evaluation (n=12). Concept&Go is publicly available.
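As a rough illustration of the highlight-to-concept links described above, here is a minimal Python sketch; the class and field names are hypothetical and do not reflect Concept&Go's actual data model.

```python
from dataclasses import dataclass, field

# Hypothetical data model: not Concept&Go's actual schema, just an
# illustration of linking reading-material highlights to map concepts.

@dataclass
class Highlight:
    document_id: str   # which reading the highlight comes from
    start: int         # character offset where the highlight begins
    end: int           # character offset where it ends
    text: str          # the highlighted excerpt

@dataclass
class Concept:
    concept_id: str
    label: str
    highlights: list[Highlight] = field(default_factory=list)

    def attach(self, h: Highlight) -> None:
        """Link a highlight to this concept so an instructor can trace
        the concept back to the passage that motivated it."""
        self.highlights.append(h)

# A learner highlights a passage and links it to a new concept.
c = Concept(concept_id="c1", label="Feedback loops")
c.attach(Highlight("reading-03", 120, 168, "feedback is crucial"))
print(f"{c.label}: {len(c.highlights)} linked highlight(s)")
```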
Numerous challenges and open problems have emerged with the advent of multi-model data. In most cases, single-model solutions cannot be straightforwardly extended, and new, efficient approaches must be found. In addition, since there are no standards for combining and managing multiple models, the situation is even more complicated and confusing for users.
This paper deals with the most important aspect of data management: querying. To let the user work with all popular models, we base our solution on an abstract categorical representation of multi-model data, which can be viewed as a graph. To unify the querying of multi-model data, we let the user query this categorical graph using MMQL, a SPARQL-based, model-agnostic query language. Each query is then decomposed and translated into the languages of the underlying systems, and the intermediate results are combined into a final categorical result that can be expressed in any selected format. Support for cross-model redundancy makes it possible to create distinct query plans and choose the optimal one. We also introduce MM-quecat, a proof-of-concept implementation of our solution.
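To illustrate the decomposition step described above, consider the following hedged sketch. MMQL's real syntax and MM-quecat's internals are not given in the abstract, so the query, the kind-to-system mapping, and the routing logic below are all invented for illustration.

```python
# Hypothetical sketch of MMQL query decomposition: a SPARQL-like pattern
# is split across the underlying single-model systems.

MMQL_QUERY = """
SELECT ?name ?total
WHERE {
    ?customer name    ?name .    # stored in a relational table
    ?customer placed  ?order .   # stored as graph edges
    ?order    total   ?total .   # stored in a document collection
}
"""

# Which property of the categorical graph lives in which system;
# this mapping is invented for illustration.
KIND_TO_SYSTEM = {
    "name": "postgres",
    "placed": "neo4j",
    "total": "mongodb",
}

def decompose(query: str) -> dict[str, list[str]]:
    """Group triple patterns by the system holding their data."""
    parts: dict[str, list[str]] = {}
    for line in query.splitlines():
        line = line.split("#")[0].strip().rstrip(".").strip()
        tokens = line.split()
        if len(tokens) == 3 and tokens[0].startswith("?"):
            system = KIND_TO_SYSTEM.get(tokens[1], "unknown")
            parts.setdefault(system, []).append(line)
    return parts

for system, patterns in decompose(MMQL_QUERY).items():
    print(system, "->", patterns)
```

Each per-system fragment would then be translated into that system's native language (e.g., SQL for the relational part), and the partial results joined back into a categorical result.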
ClinicalTrials.gov hosts an online database with over 440,000 medical studies (as of 2023) evaluating drugs, supplements, medical devices, and behavioral treatments. Target users include scientists, medical researchers, pharmaceutical companies, and other public and private institutions. Although ClinicalTrials.gov offers some filtering capabilities, it provides no visualization or reporting tools and no historical data; only the most recent state of each trial is visible to users. To fill these functionality gaps, we present Tri-AL: an open-source data platform for clinical trial visualization, information extraction, historical analysis, and reporting. This paper describes the design and functionality of Tri-AL, including a programmable module to incorporate machine learning models and extract disease-specific data from unstructured trial reports, which we demonstrate using Alzheimer’s disease reporting as a case study. We also highlight the use of Tri-AL for trial participation analysis in terms of sex, gender, race, and ethnicity. The source code is publicly available at https://github.com/pouyan9675/Tri-AL.
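The programmable extraction module could be pictured along the following lines. This is a hypothetical interface, not Tri-AL's actual API (see the repository above for the real code), and the rule-based extractor stands in for a trained ML model.

```python
from abc import ABC, abstractmethod

# Hypothetical plug-in interface in the spirit of Tri-AL's programmable
# ML component; the real interface may differ.

class ExtractionModule(ABC):
    """Turns an unstructured trial description into structured fields."""

    @abstractmethod
    def extract(self, trial_text: str) -> dict[str, str]:
        ...

class AlzheimersModule(ExtractionModule):
    """Toy rule-based stand-in for an ML model extracting
    Alzheimer's-specific attributes."""

    def extract(self, trial_text: str) -> dict[str, str]:
        text = trial_text.lower()
        return {
            "amyloid_mentioned": str("amyloid" in text),
            "biomarker_mentioned": str("biomarker" in text),
        }

module = AlzheimersModule()
print(module.extract("Phase 3 study of an anti-amyloid antibody ..."))
```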
Abnormal electricity usage detection discovers and diagnoses anomalous consumption behavior by monitoring and analyzing electricity usage in the power system, and improving its accuracy is an active research topic. Most studies use neural networks for anomaly detection but ignore the effect of missing electricity data on detection performance; missing-value completion is an important way to improve data quality and thereby optimize detection. Moreover, most studies model only the temporal features of electricity data and overlook the potential correlations among spatial features. This paper therefore proposes an electricity anomaly detection model based on multi-feature fusion and contrastive learning, which integrates temporal and spatial features to jointly accomplish anomaly detection. For temporal representation learning, an improved bidirectional LSTM imputes missing values in the electricity data and is combined with a CNN to capture consumption behavior patterns in the temporal data. For spatial representation learning, a GCN and a Transformer are used to fully explore the complex correlations among the data. In addition, to further improve detection performance, we design a gated fusion module and draw on contrastive learning to strengthen the representation of electricity data. Finally, experiments demonstrate that the proposed method effectively improves the performance of electricity behavior anomaly detection.
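As a concrete illustration of the gated fusion idea, the following PyTorch sketch fuses a temporal and a spatial feature vector with a learned gate. The dimensions and structure are assumptions; the paper's actual module may differ.

```python
import torch
import torch.nn as nn

# Minimal sketch of a gated fusion layer, assuming both branches emit
# d-dimensional vectors per sample.

class GatedFusion(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        self.gate = nn.Linear(2 * dim, dim)  # learns how much to trust each branch

    def forward(self, temporal: torch.Tensor, spatial: torch.Tensor) -> torch.Tensor:
        # g in (0, 1) weighs the temporal branch against the spatial one.
        g = torch.sigmoid(self.gate(torch.cat([temporal, spatial], dim=-1)))
        return g * temporal + (1.0 - g) * spatial

fusion = GatedFusion(dim=64)
t = torch.randn(32, 64)    # BiLSTM+CNN temporal features (batch of 32)
s = torch.randn(32, 64)    # GCN+Transformer spatial features
print(fusion(t, s).shape)  # torch.Size([32, 64])
```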
Business Process Simulation (BPS) is an approach to analyze the performance of business processes under different scenarios. For example, BPS allows us to estimate the impact of adding one or more resources on the cycle time of a process. The starting point of BPS is a process model annotated with simulation parameters (a BPS model). BPS models may be manually designed, based on information collected from stakeholders and from empirical observations, or automatically discovered from historical execution data. Regardless of its provenance, a key question when using a BPS model is how to assess its quality. In particular, in a setting where we are able to produce multiple alternative BPS models of the same process, this question becomes: How to determine which model is better, to what extent, and in what respect? In this context, this article studies the question of how to measure the quality of a BPS model with respect to its ability to accurately replicate the observed behavior of a process. Rather than pursuing a one-size-fits-all approach, the article recognizes that a process covers multiple perspectives. Accordingly, the article outlines a framework that can be instantiated in different ways to yield quality measures that tackle different process perspectives. The article defines a number of concrete quality measures and evaluates these measures with respect to their ability to discern the impact of controlled perturbations on a BPS model, and their ability to uncover the relative strengths and weaknesses of two approaches for automated discovery of BPS models. The evaluation shows that the proposed measures not only capture how close a BPS model is to the observed behavior, but they also help us to identify the sources of discrepancies.
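One way such a perspective-specific measure could be instantiated is sketched below, comparing the cycle-time distributions of an observed log and a simulated log with the 1-Wasserstein (earth mover's) distance. This is an illustration of the framework's idea, not necessarily one of the article's concrete measures.

```python
from scipy.stats import wasserstein_distance

def cycle_times(log: list[dict]) -> list[float]:
    """Cycle time of each case = last event timestamp - first event timestamp."""
    by_case: dict[str, list[float]] = {}
    for event in log:
        by_case.setdefault(event["case"], []).append(event["timestamp"])
    return [max(ts) - min(ts) for ts in by_case.values()]

observed = [
    {"case": "c1", "timestamp": 0.0}, {"case": "c1", "timestamp": 5.0},
    {"case": "c2", "timestamp": 1.0}, {"case": "c2", "timestamp": 9.0},
]
simulated = [
    {"case": "s1", "timestamp": 0.0}, {"case": "s1", "timestamp": 6.0},
    {"case": "s2", "timestamp": 2.0}, {"case": "s2", "timestamp": 8.0},
]

# Lower distance = the BPS model replicates the temporal perspective better.
print(wasserstein_distance(cycle_times(observed), cycle_times(simulated)))
```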
Collective entity linking generally outperforms independent entity linking because it considers the interdependencies among entities. However, existing collective entity linking methods often have high time complexity, do not fully exploit the relationship information in heterogeneous information networks (HINs), and largely depend on features specific to Wikipedia. To address these problems, this paper proposes PathEL, a novel collective entity linking method based on relationship paths in HINs. PathEL classifies complex relationships in a HIN into 1-hop paths and three types of 2-hop paths, measures entity correlation by the path information among entities, and combines this with textual semantic information to realize collective entity linking. To cope with the high complexity of collective entity linking, we further combine a variable sliding-window data processing method with a two-step pruning strategy: the sliding window limits the number of entity mentions in each window, and the pruning strategy reduces the number of candidate entities. Finally, experimental results on three benchmark datasets verify that the proposed model outperforms the baseline models on entity linking; on the AIDA CoNLL dataset, it improves precision, recall, and F1 over the second-ranked model by 1.61%, 1.54%, and 1.57%, respectively.
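The two complexity-reduction ideas can be sketched as follows; the candidate scores are placeholders rather than PathEL's path-based correlation measures.

```python
# Minimal sketch: a variable sliding window caps how many mentions are
# disambiguated jointly, and pruning keeps only top-k candidates per mention.

def sliding_windows(mentions: list[str], max_size: int):
    """Yield windows of at most `max_size` consecutive mentions."""
    for i in range(0, len(mentions), max_size):
        yield mentions[i:i + max_size]

def prune(candidates: dict[str, list[tuple[str, float]]], k: int):
    """First pruning step: keep the k highest-scored candidates per mention."""
    return {
        m: sorted(cands, key=lambda c: c[1], reverse=True)[:k]
        for m, cands in candidates.items()
    }

mentions = ["Paris", "France", "Seine", "Louvre", "Mona Lisa"]
candidates = {"Paris": [("Paris_(city)", 0.9), ("Paris_Hilton", 0.4),
                        ("Paris_(Texas)", 0.2)]}

for window in sliding_windows(mentions, max_size=3):
    print("jointly link:", window)
print(prune(candidates, k=2))
```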
Data repairing algorithms are extensively studied for improving data quality. Denial constraints (DCs) are commonly employed to state quality specifications that data should satisfy and hence facilitate data repairing, since DCs are general enough to subsume many other dependencies. Data in practice are usually frequently updated, which motivates the quest for efficient incremental repairing techniques in response to data updates. In this paper, we present the first incremental algorithm for repairing DC violations. Specifically, given a relational instance I consistent with a set Σ of DCs, and a set ΔI of tuple insertions to I, our aim is to find a set ΔI′ of tuple insertions such that Σ is satisfied on I + ΔI′. We first formalize the problem of incremental data repairing with DCs and prove its complexity. We then present techniques that combine auxiliary indexing structures to efficiently identify the DC violations incurred by ΔI w.r.t. Σ, and further develop an efficient repairing algorithm that computes ΔI′ by resolving these violations. Finally, using both real-life and synthetic datasets, we conduct extensive experiments to demonstrate the effectiveness and efficiency of our approach.
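A minimal sketch of incremental violation detection with an auxiliary index follows, using a single FD-style DC (no two tuples may agree on ssn but differ on name). The paper's algorithm handles general DCs and more principled repairs; this only illustrates the index-and-resolve idea.

```python
# Existing instance I, already consistent with the DC.
existing = [
    {"ssn": "111", "name": "Alice"},
    {"ssn": "222", "name": "Bob"},
]
index = {t["ssn"]: t for t in existing}  # auxiliary index on ssn

# Incoming tuple insertions (the ΔI of the abstract).
insertions = [
    {"ssn": "111", "name": "Alicia"},   # violates the DC against Alice
    {"ssn": "333", "name": "Carol"},    # clean
]

repaired = []
for t in insertions:
    clash = index.get(t["ssn"])
    if clash is not None and clash["name"] != t["name"]:
        # Resolve the violation: here we repair the inserted tuple by
        # aligning it with the existing value (one possible repair choice).
        t = {**t, "name": clash["name"]}
    index.setdefault(t["ssn"], t)
    repaired.append(t)

print(repaired)  # the DC holds on the instance extended with these tuples
```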
Waiting times in a business process often arise when a case transitions from one activity to another. Accordingly, analyzing the causes of waiting times in activity transitions can help analysts identify opportunities for reducing the cycle time of a process. This paper proposes a process mining approach to decompose the observed waiting time in each activity transition into multiple direct causes and to analyze the impact of each identified cause on the cycle time efficiency of the process. The approach is implemented as a software tool called Kronos, which process analysts can use to upload event logs and obtain an analysis of the causes of waiting times. The proposed approach was empirically evaluated using synthetic event logs to verify its ability to discover different direct causes of waiting times, and its applicability is demonstrated on a real-life process. Interviews with process mining experts confirm that Kronos is useful and easy to use for identifying improvement opportunities related to waiting times.
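To make the decomposition concrete, the following sketch splits the waiting time of one activity transition into a resource-contention share and an unattributed remainder. Kronos distinguishes several direct causes (e.g., batching, prioritization, resource unavailability), so this is a deliberately simplified illustration.

```python
from dataclasses import dataclass

@dataclass
class Event:
    case: str
    activity: str
    resource: str
    start: float
    end: float

def waiting_breakdown(prev: Event, nxt: Event, busy: list[tuple[float, float]]):
    """Split the wait between prev.end and nxt.start into the overlap with
    the target resource's busy intervals vs. the remainder."""
    wait_start, wait_end = prev.end, nxt.start
    contention = sum(
        max(0.0, min(wait_end, b_end) - max(wait_start, b_start))
        for b_start, b_end in busy
    )
    total = wait_end - wait_start
    return {"total": total, "contention": contention,
            "unattributed": total - contention}

a = Event("c1", "Review", "r1", start=0.0, end=2.0)
b = Event("c1", "Approve", "r2", start=7.0, end=8.0)
r2_busy = [(1.0, 5.0)]  # r2 was working on another case until t=5
print(waiting_breakdown(a, b, r2_busy))  # contention: 3.0, unattributed: 2.0
```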
Analyzing multivariate time series data is crucial for many real-world applications, such as power forecasting, traffic flow forecasting, and industrial anomaly detection. Recently, universal frameworks for time series representation based on representation learning have received widespread attention due to their ability to capture changes in the distribution of time series data. However, when confronted with multivariate time series data, existing time series representation learning models merely apply contrastive learning to construct positive and negative samples for each variable at the timestamp level, and then use a contrastive loss to encourage the model to learn similarities among positive samples and dissimilarities among negative samples per variable. As a result, they fail to fully exploit the latent-space dependencies between pairs of variables. To address this problem, we propose Contrastive Learning Enhanced by Graph Neural Networks for Universal Multivariate Time Series Representation (COGNet), which has three distinctive features. (1) COGNet is a comprehensive self-supervised learning model that combines autoencoders and contrastive learning methods. (2) We introduce graph feature representation blocks on top of the backbone encoder, which extract the adjacency features of each variable with the other variables. (3) COGNet uses a graph contrastive loss to learn graph feature representations. Experimental results across multiple public datasets indicate that COGNet outperforms existing methods in time series prediction and anomaly detection tasks.
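The graph contrastive loss could look roughly like the following NT-Xent-style sketch over per-variable graph embeddings from two augmented views; COGNet's exact formulation may differ.

```python
import torch
import torch.nn.functional as F

def graph_contrastive_loss(z1: torch.Tensor, z2: torch.Tensor,
                           temperature: float = 0.5) -> torch.Tensor:
    """z1, z2: (num_variables, dim) embeddings of the same variables under
    two views; variable i in z1 and variable i in z2 form the positive pair,
    all other cross-view pairs serve as negatives."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature   # pairwise cosine similarities
    targets = torch.arange(z1.size(0))   # positives sit on the diagonal
    return F.cross_entropy(logits, targets)

z_view1 = torch.randn(8, 32)  # e.g., GNN output over the variable graph
z_view2 = torch.randn(8, 32)  # same graph under a different augmentation
print(graph_contrastive_loss(z_view1, z_view2).item())
```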