An Approach for Incorporating Context in Building Probabilistic Predictive Models
J. Wu, William Hsu, A. Bui. 2012 IEEE Second International Conference on Healthcare Informatics, Imaging and Systems Biology (HISB 2012), September 27, 2012. DOI: 10.1109/HISB.2012.30

With the increasing amount of information collected through clinical practice and scientific experimentation, a growing challenge is how to utilize available resources to construct predictive models that facilitate clinical decision making. Clinicians often have questions related to the treatment and outcome of a medical problem for individual patients; however, few tools exist that leverage the large collection of patient data and scientific knowledge to answer these questions. Without appropriate context, existing data that were collected for a specific task may not be suitable for creating new models that answer different questions. This paper presents an approach that leverages available structured and unstructured data to build a probabilistic predictive model that assists physicians with answering clinical questions about individual patients. Various challenges related to transforming available data into an end-user application are addressed: problem decomposition, variable selection, context representation, automated extraction of information from unstructured data sources, model generation, and development of an intuitive application to query the model and present the results. We describe our efforts toward building a model that predicts the risk of vasospasm in aneurysm patients.
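One concrete form such a probabilistic query can take is Bayes' rule over a patient's observed findings. The sketch below is a minimal naive-Bayes illustration with invented variables and probabilities; it is not the paper's model.

```python
def posterior(prior, likelihoods):
    """P(outcome | findings) for a binary outcome under naive Bayes.

    likelihoods maps each observed finding to a pair
    (P(finding | outcome), P(finding | no outcome)).
    """
    p_yes, p_no = prior, 1.0 - prior
    for p_f_yes, p_f_no in likelihoods.values():
        p_yes *= p_f_yes
        p_no *= p_f_no
    return p_yes / (p_yes + p_no)

# Invented example: risk of vasospasm given two findings from the record
findings = {
    "thick_subarachnoid_clot": (0.8, 0.3),  # P(f | vasospasm), P(f | none)
    "age_under_50": (0.6, 0.4),
}
risk = posterior(prior=0.3, likelihoods=findings)
print(f"{risk:.2f}")  # 0.63
```

Each additional finding multiplies the odds, which is why variable selection (which findings enter the model) matters so much in the pipeline described above.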
Azathioprine-Induced Comorbidity Network Reveals Patterns and Predictors of Nephrotoxicity and Neutrophilia
Vishal N. Patel, D. Kaelber. HISB 2012. DOI: 10.1109/HISB.2012.28

We sought to examine the frequencies and patterns of nephrotoxicity and neutrophilia due to azathioprine (AZA), and to develop a prototype method for using large de-identified electronic health record (EHR) datasets to aid in post-market drug surveillance. We leveraged a de-identified database of over 10 million patient EHRs to construct a network of comorbidities induced by administration of AZA, where comorbidities were defined by baseline-controlled laboratory values. To gauge the significance of the identified disease patterns, we calculated the relative risk of developing a comorbidity pair relative to a control cohort of patients taking one of 12 other anti-rheumatic agents. Nephrotoxicity, as gauged by elevations in creatinine, was present in 11% of patients taking AZA, and this frequency was significantly higher than in patients taking other anti-rheumatic agents (RR: 1.2, 95% CI: 1.04-1.43). Neutrophilia was highly prevalent (45%) in the population and was also unique to AZA (RR: 1.2, 95% CI: 1.17-1.28). Using a comorbidity network analysis, we hypothesized that the joint consideration of anemia (low hemoglobin) and an elevated lactate dehydrogenase (LDH > 190 IU/L) may serve as a predictor of impending renal dysfunction. Indeed, these two laboratory values provide approximately 100% sensitivity in predicting subsequent elevations in creatinine. Furthermore, the predictive power is unique to AZA: jointly considering anemia and an elevated LDH provides only 50% sensitivity in predicting creatinine elevations with other anti-rheumatic agents. Our work demonstrates that the construction of comorbidity networks from de-identified EHR datasets can provide both sufficient insight and statistical power to uncover novel patterns and predictors of disease.
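Relative risks with 95% confidence intervals, as reported above, can be computed from a standard 2x2 cohort layout using the log-RR standard error. A minimal sketch; the counts below are made up for illustration and are not taken from the study:

```python
import math

def relative_risk(exposed_events, exposed_total, control_events, control_total):
    """Relative risk of an event in an exposed vs. a control cohort,
    with a 95% CI derived from the standard error of ln(RR)."""
    risk_exposed = exposed_events / exposed_total
    risk_control = control_events / control_total
    rr = risk_exposed / risk_control
    # Standard error of ln(RR) for cohort (cumulative incidence) data
    se = math.sqrt(1 / exposed_events - 1 / exposed_total
                   + 1 / control_events - 1 / control_total)
    lo = math.exp(math.log(rr) - 1.96 * se)
    hi = math.exp(math.log(rr) + 1.96 * se)
    return rr, (lo, hi)

# Hypothetical counts: 110 of 1000 AZA patients vs. 90 of 1000 controls
rr, (lo, hi) = relative_risk(110, 1000, 90, 1000)
print(f"RR = {rr:.2f}, 95% CI ({lo:.2f}, {hi:.2f})")
```

A CI that excludes 1.0, as with the nephrotoxicity finding (1.04-1.43), is what marks the excess risk as statistically significant.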
A Study on Studies: Exploring the Metadata Associated with dbGaP Studies
Karen Truong, Mike Conway. HISB 2012. DOI: 10.1109/HISB.2012.51

The database of Genotypes and Phenotypes (dbGaP) was developed by the National Heart, Lung, and Blood Institute (NHLBI) to archive genome-wide association study (GWAS) data. As of July 17, 2012, dbGaP contained 305 top-level studies. The metadata for each study (available from the dbGaP website) are organized into distinct sections, including a study description, inclusion/exclusion criteria, policies for authorized access requests, MeSH terms, PubMed identifiers, study histories, and the names of principal and co-investigators. Here we tabulate the salient characteristics of dbGaP metadata as part of the Phenotype Discoverer (PhD) project, a research project at the University of California San Diego Division of Biomedical Informatics that aims to enhance the "searchability" of the current dbGaP website through the alignment of phenotypes to a standard information model. In particular, we are interested in using the extracted metadata (PubMed identifiers, principal investigator names, associated journal names, etc.) as input to a statistical text analysis.
Building an Ontology of Phenotypes for Existing GWAS Studies
N. Alipanah, Hyeon-eui Kim, L. Ohno-Machado. HISB 2012. DOI: 10.1109/HISB.2012.36

The database of Genotypes and Phenotypes (dbGaP) archives the results of genome-wide association studies (GWAS). dbGaP contains a multitude of phenotype variables, but they are not harmonized across studies. Unfortunately, dbGaP lacks semantic relations among its variables, which prevents efficient information retrieval and accurate searches for studies that contain common phenotypes. Our goal is to standardize dbGaP information to allow accurate, reusable, and quick retrieval of information.
SPOT the Drug! An Unsupervised Pattern Matching Method to Extract Drug Names from Very Large Clinical Corpora
A. Coden, D. Gruhl, Neal Lewis, M. Tanenblatt, J. Terdiman. HISB 2012. DOI: 10.1109/HISB.2012.16

Although structured electronic health records are becoming more prevalent, much information about patient health is still recorded only in unstructured text. "Understanding" these texts has been a focus of natural language processing (NLP) research for many years, with some remarkable successes, yet there is more work to be done. Knowing the drugs patients take is critical not only for understanding patient health (e.g., for drug-drug or drug-enzyme interactions), but also for secondary uses, such as research on treatment effectiveness. Several drug dictionaries have been curated, such as RxNorm, the FDA's Orange Book, and NCI, with a focus on prescription drugs. Developing these dictionaries is a challenge, but keeping them up-to-date in the face of a rapidly advancing field is even more challenging: it is critical to identify grapefruit as a "drug" for a patient who takes the prescription medicine Lipitor, due to their known adverse interaction. To discover other, new adverse drug interactions, a large number of patient histories often need to be examined, necessitating not only accurate but also fast algorithms to identify pharmacological substances. In this paper we propose a new algorithm, SPOT, which identifies drug names that can be used as new dictionary entries from a large corpus, where a "drug" is defined as a substance intended for use in the diagnosis, cure, mitigation, treatment, or prevention of disease. Measured against a manually annotated reference corpus, we present precision and recall values for SPOT. SPOT is language and syntax independent, can be run efficiently to keep dictionaries up-to-date, and can also suggest words and phrases that may be misspellings or uncatalogued synonyms of a known drug. We show how SPOT's lack of reliance on NLP tools makes it robust in analyzing clinical medical text. SPOT is a generalized bootstrapping algorithm, seeded with a known dictionary, that automatically extracts the context within which each drug is mentioned. We define three features of such contexts: support, confidence, and prevalence. Finally, we present the performance tradeoffs depending on the thresholds chosen for these features.
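The bootstrapping idea can be illustrated with a toy sketch: seed a dictionary, collect the (left word, right word) contexts in which seed drugs occur, score each context by support (how often it occurs) and confidence (how often its slot filler is a known drug), and propose slot fillers of high-scoring contexts as candidate drug names. The corpus, thresholds, and "fooamycin" below are invented; this is an illustration of the general technique, not the authors' implementation:

```python
from collections import defaultdict

def spot_candidates(corpus, seed_drugs, min_support=2, min_confidence=0.5):
    """Propose new drug-name candidates from contexts learned around seed drugs."""
    seed_drugs = {d.lower() for d in seed_drugs}
    occurrences = defaultdict(list)  # (left, right) context -> slot fillers seen there
    for sentence in corpus:
        tokens = sentence.lower().split()
        for i in range(1, len(tokens) - 1):
            ctx = (tokens[i - 1], tokens[i + 1])
            occurrences[ctx].append(tokens[i])
    candidates = set()
    for ctx, fillers in occurrences.items():
        support = len(fillers)                 # how often this context occurs
        hits = sum(f in seed_drugs for f in fillers)
        confidence = hits / support            # fraction of fillers that are known drugs
        if support >= min_support and confidence >= min_confidence and hits > 0:
            candidates.update(f for f in fillers if f not in seed_drugs)
    return candidates

corpus = [
    "patient was started on lisinopril for hypertension",
    "patient was started on metformin for diabetes",
    "patient was started on fooamycin for infection",
]
print(spot_candidates(corpus, {"lisinopril", "metformin"}))  # {'fooamycin'}
```

Because two of the three fillers of the context ("on", "for") are seed drugs, the context is trusted and its unknown filler is proposed; raising `min_confidence` trades recall for precision, which is the tradeoff the abstract describes.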
A Robust Feature Selection Method for Novel Pre-microRNA Identification Using a Combination of Nucleotide-Structure Triplets
Petra Stepanowsky, Jihoon Kim, L. Ohno-Machado. HISB 2012. DOI: 10.1109/HISB.2012.20

MicroRNAs are a class of small non-coding RNAs that play an important role in post-transcriptional regulation of gene products. Identification of novel microRNAs is difficult because the set of validated microRNAs is still small and diverse. Existing feature selection methods use different combinations of features related to the biogenesis of microRNAs, but their performance evaluations are not comprehensive. We developed a robust feature selection method using a combination of three types of nucleotide-structure triplets, the minimum free energy of the secondary structure of precursor microRNAs, and other extracted characteristics. We compared our new combination feature set and three other previously published sets using three different classifiers: logistic regression, support vector machine, and random forest. Our proposed feature set was not only robust across all classifier methods, but also had the highest classification performance, as measured by the area under the ROC curve.
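The area under the ROC curve used for this comparison can be computed without any ML library via its Mann-Whitney equivalence: AUC is the probability that a randomly chosen positive example scores higher than a randomly chosen negative one. A small self-contained sketch; the scores and labels are illustrative, not from the paper:

```python
def auc(scores, labels):
    """AUC via the Mann-Whitney statistic: P(score_pos > score_neg),
    counting ties as 1/2. labels are 1 (positive) or 0 (negative)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical classifier scores for 4 precursor candidates (1 = real pre-miRNA)
scores = [0.9, 0.8, 0.4, 0.3]
labels = [1,   0,   1,   0]
print(auc(scores, labels))  # 0.75: three of the four pos/neg pairs ranked correctly
```

Because it depends only on the ranking of scores, the same function can score logistic regression, SVM, and random forest outputs on a common footing, which is what makes AUC a natural yardstick for the three-classifier comparison above.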
Knowledge-Based Biomedical Word Sense Disambiguation: An Evaluation and Application to Clinical Document Classification
Vijay Garla, C. Brandt. HISB 2012. DOI: 10.1109/HISB.2012.12

Motivation: Word Sense Disambiguation (WSD) methods automatically assign an unambiguous concept to an ambiguous term based on context, and are important to many text processing tasks. In this study, we developed and evaluated a knowledge-based WSD method that uses semantic similarity measures derived from the Unified Medical Language System (UMLS), and we evaluated the contribution of WSD to clinical text classification. Results: We evaluated our system on biomedical WSD datasets; our system compares favorably to other knowledge-based methods. We evaluated the contribution of our WSD system to clinical document classification on the 2007 Computational Medicine Challenge corpus. Machine learning classifiers trained on disambiguated concepts significantly outperformed those trained using all concepts. Availability: We integrated our WSD system with MetaMap and cTAKES, two popular biomedical natural language processing systems. We released all code required to reproduce our results, along with all tools developed as part of this study, as open source at http://code.google.com/p/ytex.
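The knowledge-based idea, pick the sense whose concept is most similar to the concepts in the surrounding context, can be sketched with a toy is-a hierarchy and a path-based similarity (similarity = 1 / (1 + shortest-path length)). The hierarchy and the term senses below are invented placeholders, not UMLS content or the authors' similarity measure:

```python
# Toy is-a hierarchy: child -> parent (a stand-in for a UMLS-like source)
PARENT = {
    "cold_disease": "respiratory_disorder",
    "cold_temperature": "physical_quality",
    "cough": "respiratory_disorder",
    "respiratory_disorder": "disorder",
    "disorder": "entity",
    "physical_quality": "entity",
}

def path_similarity(a, b):
    """1 / (1 + length of the shortest path between a and b in the is-a tree)."""
    def ancestors(node):
        chain, d = {}, 0
        while node is not None:
            chain[node] = d
            node, d = PARENT.get(node), d + 1
        return chain
    ca, cb = ancestors(a), ancestors(b)
    dist = min(ca[n] + cb[n] for n in ca if n in cb)
    return 1.0 / (1.0 + dist)

def disambiguate(senses, context_concepts):
    """Choose the sense with the highest total similarity to the context concepts."""
    return max(senses, key=lambda s: sum(path_similarity(s, c) for c in context_concepts))

# "cold" is ambiguous; a context mentioning "cough" should select the disease sense
print(disambiguate(["cold_disease", "cold_temperature"], ["cough"]))  # cold_disease
```

The disease sense wins because it shares a close ancestor (respiratory_disorder) with the context concept, while the temperature sense meets it only at the root; that is the core intuition behind similarity-based WSD.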
Text Mining for Personal Health Information on Twitter
Marina Sokolova, Yasser Jafer, D. Schramm. HISB 2012. DOI: 10.1109/HISB.2012.37

With millions of people discussing their Personal Health Information (PHI) online, there is a need for tools that can extract and analyze such information. We introduce two semantics-based methods for mining PHI: one uses WordNet as a source of health-related knowledge, the other uses terms describing personal relations. Incorporating semantics yields a significant improvement in retrieval of text containing PHI (paired t-test, P = 0.0001).
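The paired t-test used for the significance claim compares matched per-query results from the two conditions: t = mean(differences) / (stdev(differences) / sqrt(n)). A self-contained sketch; the per-query scores below are invented, not the study's data:

```python
import math
import statistics

def paired_t(xs, ys):
    """Paired t statistic for two matched samples, e.g. per-query retrieval
    scores with and without semantic features."""
    diffs = [x - y for x, y in zip(xs, ys)]
    n = len(diffs)
    return statistics.mean(diffs) / (statistics.stdev(diffs) / math.sqrt(n))

# Invented per-query F-scores: semantic method vs. baseline
semantic = [0.71, 0.68, 0.75, 0.70, 0.73]
baseline = [0.62, 0.60, 0.66, 0.61, 0.64]
t = paired_t(semantic, baseline)
print(round(t, 2))
```

Pairing each query with itself across conditions removes per-query variance, which is why the paired test can declare a consistent but modest improvement significant.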
Designing Clinical Data Presentation in Electronic Dental Records Using Cognitive Task Analysis Methods
T. Thyvalikakath, Michael P. Dziabiak, Raymond Johnson, M. Torres-Urquidy, J. Yabes, T. Schleyer. HISB 2012. DOI: 10.1109/HISB.2012.24

Despite many decades of research on the effective development of clinical systems in medicine, the adoption of health information technology to improve patient care continues to be slow, especially in ambulatory settings. This applies to dentistry as well, a primary care discipline with approximately 137,000 practitioners in the United States. A critical reason for slow adoption is the poor usability of clinical systems, which makes it difficult for providers to navigate through the information and obtain an integrated view of patient data. Cognitive science methods have shown significant promise in meaningfully informing the design, development, and assessment of clinical information systems. In most cases, these methods have been applied to evaluate the design of systems after they have been developed; very few studies have used cognitive engineering methods to support the design process itself. Our research seeks to address this gap in knowledge: how cognitive engineering methods can be optimally applied to inform the system design process. This project studied the cognitive processes and information management strategies used by dentists during a typical patient exam and applied the results to inform the design of an electronic dental record interface. The results of this study will contribute to designing clinical systems that improve cognitive support for clinicians during patient care. Such a system has the potential to enhance the quality and safety of patient care, as well as reduce healthcare costs.
Identifying Provider Counseling Practices Using Natural Language Processing: Gout Example
Olga V. Patterson, G. Kerr, J. Richards, C. Nunziato, D. Maron, R. Amdur, S. Duvall. HISB 2012. DOI: 10.1109/HISB.2012.52

National guidelines for a number of health conditions recommend that practitioners assess and reinforce patients' adherence to specific diet and lifestyle modifications. Counseling interventions have been shown to have a long-term positive effect on patient adherence, but the extent to which physicians comply with these guidelines is unknown. Evidence of counseling provided by practitioners is recorded only as free text in electronic medical records. To identify physicians' counseling practices, we developed a natural language processing system to detect text documentation of dietary counseling in gout patients.
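A rule-based starting point for this kind of detection is a pattern that requires a counseling verb and a dietary topic in the same sentence. The keyword lists and the notes below are invented for illustration and are far simpler than a full clinical NLP system:

```python
import re

# Hypothetical lexicons: counseling acts and gout-relevant dietary topics
COUNSEL = r"\b(counsel(?:ed|ing)?|advised|discussed|educated)\b"
DIET = r"\b(diet(?:ary)?|purine|alcohol|red meat|seafood)\b"

def has_dietary_counseling(note):
    """True if any sentence mentions both a counseling act and a dietary topic."""
    for sentence in re.split(r"[.!?]", note):
        s = sentence.lower()
        if re.search(COUNSEL, s) and re.search(DIET, s):
            return True
    return False

notes = [
    "Gout flare resolved. Patient counseled on low-purine diet and alcohol reduction.",
    "Started allopurinol 100 mg daily. Follow up in 3 months.",
]
print([has_dietary_counseling(n) for n in notes])  # [True, False]
```

Restricting both cues to one sentence avoids false positives such as a note that mentions diet in the history but counseling about an unrelated topic; a production system would also need negation and attribution handling.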