{"title":"不确定性:神经网络背后的思想引领我们超越KL分解和区间域","authors":"M. Beer, O. Kosheleva, V. Kreinovich","doi":"10.1109/SSCI50451.2021.9660145","DOIUrl":null,"url":null,"abstract":"In many practical situations, we know that there is a functional dependence between a quantity $q$ and quantities a1,…, an, but the exact form of this dependence is only known with uncertainty. In some cases, we only know the class of possible functions describing this dependence. In other cases, we also know the probabilities of different functions from this class - i.e., we know the corresponding random field or random process. To solve problems related to such a dependence, it is desirable to be able to simulate the corresponding functions, i.e., to have algorithms that transform simple intervals or simple random variables into functions from the desired class. Many of the real-life dependencies are very complex, requiring a large amount of computation time even if we ignore the uncertainty. So, to make simulation of uncertainty practically feasible, we need to make sure that the corresponding simulation algorithm is as fast as possible. In this paper, we show that for this objective, ideas behind neural networks lead to the known Karhunen-Loevc decomposition and interval field techniques - and also that these ideas help us go - when necessary - beyond these techniques.","PeriodicalId":255763,"journal":{"name":"2021 IEEE Symposium Series on Computational Intelligence (SSCI)","volume":"17 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-12-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Uncertainty: Ideas Behind Neural Networks Lead Us Beyond KL- Decomposition and Interval Fields\",\"authors\":\"M. Beer, O. Kosheleva, V. Kreinovich\",\"doi\":\"10.1109/SSCI50451.2021.9660145\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"In many practical situations, we know that there is a functional dependence between a quantity $q$ and quantities a1,…, an, but the exact form of this dependence is only known with uncertainty. In some cases, we only know the class of possible functions describing this dependence. In other cases, we also know the probabilities of different functions from this class - i.e., we know the corresponding random field or random process. To solve problems related to such a dependence, it is desirable to be able to simulate the corresponding functions, i.e., to have algorithms that transform simple intervals or simple random variables into functions from the desired class. Many of the real-life dependencies are very complex, requiring a large amount of computation time even if we ignore the uncertainty. So, to make simulation of uncertainty practically feasible, we need to make sure that the corresponding simulation algorithm is as fast as possible. 
In this paper, we show that for this objective, ideas behind neural networks lead to the known Karhunen-Loevc decomposition and interval field techniques - and also that these ideas help us go - when necessary - beyond these techniques.\",\"PeriodicalId\":255763,\"journal\":{\"name\":\"2021 IEEE Symposium Series on Computational Intelligence (SSCI)\",\"volume\":\"17 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2021-12-05\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2021 IEEE Symposium Series on Computational Intelligence (SSCI)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/SSCI50451.2021.9660145\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2021 IEEE Symposium Series on Computational Intelligence (SSCI)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/SSCI50451.2021.9660145","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Uncertainty: Ideas Behind Neural Networks Lead Us Beyond KL- Decomposition and Interval Fields
In many practical situations, we know that there is a functional dependence between a quantity $q$ and quantities $a_1, \ldots, a_n$, but the exact form of this dependence is known only with uncertainty. In some cases, we know only the class of possible functions describing this dependence. In other cases, we also know the probabilities of different functions from this class, i.e., we know the corresponding random field or random process. To solve problems related to such a dependence, it is desirable to be able to simulate the corresponding functions, i.e., to have algorithms that transform simple intervals or simple random variables into functions from the desired class. Many real-life dependencies are very complex, requiring a large amount of computation time even if we ignore the uncertainty. So, to make the simulation of uncertainty practically feasible, we need to make sure that the corresponding simulation algorithm is as fast as possible. In this paper, we show that for this objective, ideas behind neural networks lead to the known Karhunen-Loève decomposition and interval field techniques, and that these ideas also help us go beyond these techniques when necessary.
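To make the two techniques named in the abstract concrete, here is a minimal sketch (not taken from the paper) of what "transforming simple intervals or simple random variables into functions" can look like: a truncated Karhunen-Loève expansion of a Gaussian random field, and an interval field whose basis coefficients range over given intervals. The squared-exponential kernel, the grid on [0, 1], the truncation level, and the polynomial basis with its coefficient bounds are all illustrative assumptions, not the authors' construction.

```python
import numpy as np

def kl_sample(n_points=200, length_scale=0.2, n_terms=10, rng=None):
    """One realization f(x) ~ sum_i sqrt(lambda_i) * xi_i * phi_i(x) of a
    zero-mean Gaussian field, via the discrete Karhunen-Loeve decomposition."""
    rng = np.random.default_rng() if rng is None else rng
    x = np.linspace(0.0, 1.0, n_points)
    # Assumed covariance kernel: C[i, j] = exp(-(x_i - x_j)^2 / (2 l^2))
    diff = x[:, None] - x[None, :]
    cov = np.exp(-0.5 * (diff / length_scale) ** 2)
    # eigh returns ascending eigenvalues; flip so the dominant modes come first
    eigvals, eigvecs = np.linalg.eigh(cov)
    eigvals, eigvecs = eigvals[::-1], eigvecs[:, ::-1]
    # "Simple random variables" -> function: independent N(0, 1) coefficients
    xi = rng.standard_normal(n_terms)
    f = eigvecs[:, :n_terms] @ (np.sqrt(np.maximum(eigvals[:n_terms], 0.0)) * xi)
    return x, f

def interval_field_sample(basis, lower, upper, rng=None):
    """One admissible realization of an interval field f(x) = sum_i a_i * psi_i(x),
    where each coefficient a_i may lie anywhere in [lower_i, upper_i]."""
    rng = np.random.default_rng() if rng is None else rng
    alpha = rng.uniform(lower, upper)  # "simple intervals" -> coefficients
    return basis @ alpha

x, f = kl_sample()
# Hypothetical polynomial basis and coefficient intervals, purely for illustration:
basis = np.stack([np.ones_like(x), x, x ** 2], axis=1)
g = interval_field_sample(basis, np.array([-1.0, 0.0, -0.5]), np.array([1.0, 2.0, 0.5]))
print(f.shape, g.shape)  # (200,) (200,)
```

In both functions, the cost of one realization is dominated by a single matrix-vector product once the basis is precomputed, which illustrates the abstract's point that the speed of the simulation algorithm is what makes uncertainty simulation practically feasible.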