{"title":"Coded Caching with Optimized Shared-Cache Sizes","authors":"Emanuele Parrinello, P. Elia","doi":"10.1109/ITW44776.2019.8989173","DOIUrl":null,"url":null,"abstract":"This work studies the K-user broadcast channel where each user is assisted by one of $\\Lambda$ caches with a cumulative memory constraint that is equal to t times the size of the library, and where each cache serves an arbitrary number of users. In this setting, under the assumption of uncoded cache placement, no prior scheme is known to achieve a sum degrees of freedom $(\\mathrm{D}\\mathrm{o}$F) of $t+1$, other than in the uniform case where all caches serve an equal number of users. We here show for the first time that allowing an optimized memory allocation across the caches as a function of the number of users served per cache, provides for the aforementioned DoF. A subsequent index-coding based converse proves that this performance can be close to optimal for bounded values of t.","PeriodicalId":214379,"journal":{"name":"2019 IEEE Information Theory Workshop (ITW)","volume":"18 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2019-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"13","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2019 IEEE Information Theory Workshop (ITW)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ITW44776.2019.8989173","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 13
Abstract
This work studies the $K$-user broadcast channel where each user is assisted by one of $\Lambda$ caches subject to a cumulative memory constraint equal to $t$ times the size of the library, and where each cache serves an arbitrary number of users. In this setting, under the assumption of uncoded cache placement, no prior scheme is known to achieve a sum degrees of freedom (DoF) of $t+1$, other than in the uniform case where all caches serve an equal number of users. We show here, for the first time, that optimizing the memory allocation across the caches as a function of the number of users served per cache achieves the aforementioned DoF. A subsequent index-coding-based converse proves that this performance is close to optimal for bounded values of $t$.
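To make the setting concrete, the following is a minimal sketch of the memory-allocation idea described in the abstract. The proportional rule used here (memory proportional to the number of users served) is a hypothetical illustration chosen for simplicity; the paper's actual optimized allocation is not specified in the abstract. Only the cumulative constraint $\sum_\lambda M_\lambda = t \cdot N$ and the target sum-DoF $t+1$ come from the text above.

```python
# Sketch of the shared-cache setting: Lambda caches, K users,
# cumulative memory budget t * N (N = library size in files).
# The proportional allocation rule is an assumption for illustration only.

def allocate_memory(users_per_cache, t, library_size):
    """Split a cumulative budget of t * library_size across the caches,
    proportionally to the number of users each cache serves
    (a hypothetical rule; the paper derives the optimized one)."""
    K = sum(users_per_cache)                 # total number of users
    total_memory = t * library_size          # cumulative memory constraint
    return [total_memory * k / K for k in users_per_cache]

users_per_cache = [4, 2, 1, 1]   # Lambda = 4 caches serving K = 8 users (non-uniform)
t, N = 2, 100                    # cumulative memory = t * N = 200 files
memories = allocate_memory(users_per_cache, t, N)

assert abs(sum(memories) - t * N) < 1e-9     # cumulative constraint holds
print("per-cache memories:", memories)        # prints [100.0, 50.0, 25.0, 25.0]
print("target sum-DoF:", t + 1)               # the DoF the paper's scheme achieves
```

Note that in the uniform case (`users_per_cache = [2, 2, 2, 2]`) this rule reduces to equal-sized caches, matching the previously known setting the abstract refers to.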