Representation learning has been applied to Electronic Health Records (EHR) for medical concept embedding and downstream predictive analytics tasks with promising results. Medical ontologies can also be integrated to guide the learning so that the embedding space better aligns with existing medical knowledge. Yet, properly carrying out the integration is non-trivial. Medical concepts that are similar according to a medical ontology may not necessarily be close in the embedding space learned from the EHR data, as medical ontologies organize medical concepts for their own specific objectives. Any integration methodology that ignores this underlying inconsistency will result in sub-optimal medical concept embeddings and, in turn, degrade the performance of the downstream tasks. In this article, we propose a novel representation learning framework called ADORE (ADaptive Ontological REpresentations) that allows medical ontologies to adapt their structures for more robust integration with the EHR data. ADORE first learns multiple embeddings for each category in the ontology via an attention mechanism. At the same time, it supports an adaptive integration of categorical and multi-relational ontologies in the embedding space using a category-aware graph attention network. We evaluate the performance of ADORE on a number of predictive analytics tasks using two EHR datasets. Our experimental results show that the medical concept embeddings obtained by ADORE outperform state-of-the-art methods on all the tasks. More importantly, ADORE can produce clinically meaningful sub-categorization of existing ontological categories and yield attention values that further enhance model interpretability.
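The core idea of keeping multiple embeddings per ontological category and mixing them with attention can be sketched as follows. This is an illustrative simplification, not ADORE's exact formulation; the function and variable names are hypothetical:

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a 1-D score vector.
    z = np.exp(x - x.max())
    return z / z.sum()

def attend_category_embeddings(concept_vec, category_embs):
    """Score each of a category's multiple embeddings against a medical
    concept vector and return their attention-weighted mixture, so the
    category representation adapts to the concept at hand."""
    scores = category_embs @ concept_vec  # dot-product attention scores
    weights = softmax(scores)             # attention weights sum to 1
    return weights @ category_embs, weights
```

The attention weights returned alongside the mixture are what can be inspected for interpretability: they indicate which sub-embedding of a category a given concept attends to.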
The known set of genetic factors involved in the development of several types of cancer has been considerably expanded, easing the design and implementation of better therapeutic strategies. The automatic diagnosis of cancer, however, remains a complex task because of the high heterogeneity of tumors and the biological variability between samples. In this work, a long short-term memory (LSTM) network-based model is proposed for diagnosing cancer from transcript-based data. An efficient method that transforms data into gene/isoform expression-based rankings was formulated, allowing us to embed important information directly in the relative order of the elements of a ranking, which can subsequently ease the classification of samples. The proposed predictive model leverages the power of deep recurrent neural networks, learning the patterns present in the individual rankings of isoforms that describe each sample of the population. To evaluate the suitability of the proposal, an extensive experimental study was conducted on 17 transcript-based datasets. The results showed the effectiveness of this novel approach and also indicated that the gene/isoform expression-based rankings contain valuable information that can lead to more effective cancer diagnosis.
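The ranking transformation at the heart of this approach, ordering a sample's isoforms by expression so that information is carried in relative position rather than raw magnitude, can be sketched as follows (a minimal illustration; the paper's exact transform and any tie-breaking rules may differ):

```python
import numpy as np

def expression_to_ranking(expr):
    """Convert a vector of gene/isoform expression values into a ranking:
    position i holds the index of the i-th most expressed element, so the
    sequence encodes relative order rather than absolute expression."""
    expr = np.asarray(expr, dtype=float)
    return np.argsort(-expr)  # indices sorted by descending expression

# Example: a sample with four isoform expression values
print(expression_to_ranking([0.1, 2.5, 0.7, 1.3]).tolist())  # → [1, 3, 2, 0]
```

A sequence of indices like this is a natural input for a recurrent model such as an LSTM, which can learn order-dependent patterns across the ranking.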
This article introduces a novel architecture for two objectives, recommendation and interpretability, in a unified model. We leverage textual content as a source of interpretability in content-aware recommender systems. The goal is to characterize user preferences with a set of human-understandable attributes, each described by a single word, enabling comprehension of the user interests behind item adoptions. This is achieved via a dedicated architecture, interpretable by design, involving two components for recommendation and interpretation. In particular, we seek an interpreter that accepts a holistic user representation from a recommender and outputs a set of activated attributes describing the user's preferences. Besides encoding interpretability properties such as fidelity, conciseness, and diversity, the proposed memory network-based interpreter enables the generalization of user representations by discovering relevant attributes that go beyond the textual content of a user's adopted items. We design experiments involving both human-grounded and functionally-grounded evaluations of interpretability. Results on four real-world datasets show that our proposed model not only discovers highly relevant attributes for interpreting user preferences, but also achieves comparable or better recommendation accuracy than a series of baselines.
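The interpreter's role, mapping a user representation onto a small set of activated word-level attributes, can be sketched roughly as below. This is a hypothetical simplification of a memory-network read step, not the paper's actual model:

```python
import numpy as np

def interpret_user(user_vec, attribute_memory, k=3):
    """Score a user representation against a memory of word-level
    attribute embeddings and return the indices of the top-k activated
    attributes together with their scores (a concise explanation set)."""
    scores = attribute_memory @ user_vec   # one score per attribute word
    top = np.argsort(-scores)[:k]          # k highest-scoring attributes
    return top, scores[top]
```

Keeping `k` small corresponds to the conciseness property the abstract mentions: the user is explained by only a handful of human-readable words.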
The rapid advancement of the Internet of Things (IoT) and Artificial Intelligence (AI) has catalyzed the development of adaptive traffic control systems (ATCS) for smart cities. In particular, deep reinforcement learning (DRL) models produce state-of-the-art performance and have great potential for practical applications. In existing DRL-based ATCS, the controlled signals collect traffic state information from nearby vehicles, and optimal actions (e.g., switching phases) are then determined based on the collected information. The DRL models fully “trust” that vehicles send true information to the traffic signals, making the ATCS vulnerable to adversarial attacks with falsified information. In view of this, this article formulates, for the first time, a novel task in which a group of vehicles cooperatively send falsified information to “cheat” a DRL-based ATCS in order to reduce their total travel time. To solve the proposed task, we develop CollusionVeh, a generic and effective vehicle-colluding framework composed of a road situation encoder, a vehicle interpreter, and a communication mechanism. We employ our framework to attack established DRL-based ATCS and demonstrate that the total travel time of the colluding vehicles can be significantly reduced with a reasonable number of learning episodes, and that the colluding effect decreases as the number of colluding vehicles increases. Additionally, insights and suggestions for the real-world deployment of DRL-based ATCS are provided. The research outcomes could help improve the reliability and robustness of ATCS and better protect smart mobility systems.
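The threat model, colluding vehicles perturbing the state features they report to the signal controller while other vehicles report truthfully, can be illustrated with a toy sketch (this is not CollusionVeh itself; the feature layout and perturbation are hypothetical):

```python
import numpy as np

def falsify_observation(true_obs, colluders, delta):
    """Return the observation the DRL controller actually receives:
    colluding vehicles (by index) add a perturbation to their reported
    features, while non-colluders report their true values."""
    obs = np.array(true_obs, dtype=float)
    obs[colluders] += delta  # e.g., inflate reported waiting-time features
    return obs

# Four vehicles' reported features; vehicles 0 and 2 collude.
reported = falsify_observation([5.0, 3.0, 7.0, 2.0], colluders=[0, 2], delta=4.0)
print(reported.tolist())  # → [9.0, 3.0, 11.0, 2.0]
```

In the attack setting, the perturbation itself would be learned so as to steer the controller's phase-switching decisions in the colluders' favor, rather than fixed as here.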