{"title":"2020年B2D2LM研讨会主席致辞","authors":"Shuihua Wang","doi":"10.1109/ucc48980.2020.00011","DOIUrl":null,"url":null,"abstract":"Due to the proliferation of biomedical imaging modalities such as Photoacoustic Tomography, Computed Tomography (CT), etc., massive amounts of biomedical data are being generated on a daily basis. How can we utilize such big data to build better health profiles and better predictive models so that we can better diagnose and treat diseases and provide a better life for humans? In the past years, many successful learning methods such as deep learning were proposed to answer this crucial question, which has social, economic, as well as legal implications. However, several significant problems plague the processing of big biomedical data, such as data heterogeneity, data incompleteness, data imbalance, and high dimensionality. What is worse is that many data sets exhibit multiple such problems. Most existing learning methods can only deal with homogeneous, complete, class-balanced, and moderate-dimensional data. Therefore, data preprocessing techniques including data representation learning, dimensionality reduction, and missing value imputation should be developed to enhance the applicability of deep learning methods in real-world applications of biomedicine.","PeriodicalId":125849,"journal":{"name":"2020 IEEE/ACM 13th International Conference on Utility and Cloud Computing (UCC)","volume":"54 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2020-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Message from the B2D2LM 2020 Workshop Chairs\",\"authors\":\"Shuihua Wang\",\"doi\":\"10.1109/ucc48980.2020.00011\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Due to the proliferation of biomedical imaging modalities such as Photoacoustic Tomography, Computed Tomography (CT), etc., massive amounts of biomedical data are being generated on a daily basis. How can we utilize such big data to build better health profiles and better predictive models so that we can better diagnose and treat diseases and provide a better life for humans? In the past years, many successful learning methods such as deep learning were proposed to answer this crucial question, which has social, economic, as well as legal implications. However, several significant problems plague the processing of big biomedical data, such as data heterogeneity, data incompleteness, data imbalance, and high dimensionality. What is worse is that many data sets exhibit multiple such problems. Most existing learning methods can only deal with homogeneous, complete, class-balanced, and moderate-dimensional data. 
Therefore, data preprocessing techniques including data representation learning, dimensionality reduction, and missing value imputation should be developed to enhance the applicability of deep learning methods in real-world applications of biomedicine.\",\"PeriodicalId\":125849,\"journal\":{\"name\":\"2020 IEEE/ACM 13th International Conference on Utility and Cloud Computing (UCC)\",\"volume\":\"54 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2020-12-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2020 IEEE/ACM 13th International Conference on Utility and Cloud Computing (UCC)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ucc48980.2020.00011\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2020 IEEE/ACM 13th International Conference on Utility and Cloud Computing (UCC)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ucc48980.2020.00011","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Due to the proliferation of biomedical imaging modalities such as photoacoustic tomography and computed tomography (CT), massive amounts of biomedical data are being generated every day. How can we use such big data to build better health profiles and better predictive models, so that we can diagnose and treat diseases more effectively and improve people's quality of life? In recent years, many successful learning methods, such as deep learning, have been proposed to answer this crucial question, which has social, economic, and legal implications. However, several significant problems plague the processing of big biomedical data, including data heterogeneity, data incompleteness, class imbalance, and high dimensionality. Worse still, many data sets exhibit several of these problems at once, whereas most existing learning methods can only deal with homogeneous, complete, class-balanced, and moderate-dimensional data. Therefore, data preprocessing techniques, including data representation learning, dimensionality reduction, and missing value imputation, should be developed to enhance the applicability of deep learning methods in real-world biomedical applications.
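As a purely illustrative aside (not part of the workshop material), the sketch below shows, under the assumption of a scikit-learn environment and a synthetic tabular data set, what a minimal preprocessing pipeline of the kind alluded to above might look like: missing-value imputation followed by standardization and dimensionality reduction on high-dimensional, class-imbalanced data.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.impute import SimpleImputer
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for biomedical tabular data: high-dimensional and imbalanced.
X, y = make_classification(n_samples=500, n_features=200, n_informative=20,
                           weights=[0.9, 0.1], random_state=0)

# Simulate data incompleteness by masking roughly 10% of the entries at random.
rng = np.random.default_rng(0)
X[rng.random(X.shape) < 0.10] = np.nan

# Preprocessing: impute missing values, standardize, then reduce dimensionality.
preprocess = Pipeline([
    ("impute", SimpleImputer(strategy="mean")),
    ("scale", StandardScaler()),
    ("reduce", PCA(n_components=20)),
])
X_reduced = preprocess.fit_transform(X)
print(X_reduced.shape)  # (500, 20): compact features ready for a downstream learner

This is only a schematic example; in practice the choice of imputation strategy, representation learning method, and rebalancing scheme depends heavily on the modality and the specific problems exhibited by the data.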