Niharika R Bollumpally, Andrew C Evans, Scott W Gleave, Alexander R Gromadzki, G. Learmonth
Title: A Machine Learning Approach to Workflow Prioritization
DOI: 10.1109/SIEDS.2019.8735589
Published in: 2019 Systems and Information Engineering Design Symposium (SIEDS), April 2019
Citations: 0
Abstract
Our client, S&P Global, is a leading provider of cross-industry data products whose success is largely dependent on the timeliness and quality of its data. The company relies heavily on manual search across a variety of public documents to update internal records, making workflow prioritization an important component of the timeliness of its value proposition. Given the broad scope of prioritizing a highly granular workflow, our team aimed to leverage operational metadata at the lowest level: information extraction. Rather than parsing the documents themselves, we aimed to preserve parsimony in developing a model capable of providing actionable insight into workflow optimization. The selected model was trained using gradient tree boosting with a logistic output, predicting the probability of task success. By combining a number of previously unused features, we were able to classify tasks that resulted in an update to any of our client's expansive datasets. Classification performance was measured with ROC-AUC and recall for the positive outcome class. Given the 98% F1 score achieved when predicting at this level, we constructed a priority score at a higher level of granularity, where the implementation of a rating system is of more practical use to our client in scheduling. The model was trained on our client's financial domain data from 2018, with hopes of generalizing our findings to other domains in the future.
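The modeling approach described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the paper's operational-metadata features and data are proprietary, so synthetic data stands in, and scikit-learn's `GradientBoostingClassifier` is assumed as a representative gradient tree-boosting model with a logistic (probability) output. The group-averaging step for the priority score is likewise a hypothetical interpretation of aggregating to a coarser granularity.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score, recall_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for task-level operational metadata
# (the paper's actual features are not public).
X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Gradient tree boosting; the log-loss objective yields a logistic
# output, i.e. a predicted probability of task success.
model = GradientBoostingClassifier(random_state=0)
model.fit(X_train, y_train)

# Probability that a task results in a record update.
p_success = model.predict_proba(X_test)[:, 1]

# Task-level evaluation: ROC-AUC and positive-class recall.
auc = roc_auc_score(y_test, p_success)
rec = recall_score(y_test, model.predict(X_test))
print(f"ROC-AUC: {auc:.3f}, positive-class recall: {rec:.3f}")

# A coarser-granularity "priority score": mean predicted success
# probability per group of tasks (arbitrary groups of 50 here),
# which could rank work queues for scheduling.
priority = p_success.reshape(-1, 50).mean(axis=1)
```

Ranking groups by `priority` then gives a simple rating that a scheduler could consume, which matches the abstract's motivation for moving from per-task predictions to a coarser priority score.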