Message from the AI4I 2022 General Co-Chairs
Pub Date : 2022-09-01  DOI: 10.1109/ai4i54798.2022.00005
Channel Pruning in Quantization-aware Training: An Adaptive Projection-gradient Descent-shrinkage-splitting Method
Zhijian Li, J. Xin
Pub Date : 2022-04-09  DOI: 10.1109/AI4I54798.2022.00015
We propose an adaptive projection-gradient descent-shrinkage-splitting method (APGDSSM) to integrate penalty-based channel pruning into quantization-aware training (QAT). APGDSSM searches for weights concurrently in the quantized subspace and the sparse subspace. It uses a shrinkage operator and a splitting technique to create sparse weights, and a Group Lasso penalty to push the weight sparsity into channel sparsity. In addition, we propose a novel complementary transformed l1 penalty to stabilize training under extreme compression.
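The abstract gives no pseudocode, so as a rough illustration only, here is a minimal numpy sketch (not the authors' implementation) of the two shrinkage steps it names: elementwise soft-thresholding, the proximal step for an l1-type penalty, and channel-wise group soft-thresholding, the proximal step for the Group Lasso penalty that zeroes whole channels. The quantization projection step of APGDSSM is omitted, and all names, shapes, and the step sizes below are hypothetical.

```python
import numpy as np

def soft_threshold(w, lam):
    # Elementwise shrinkage (prox of lam * ||w||_1):
    # pulls every weight toward zero by lam, zeroing small entries.
    return np.sign(w) * np.maximum(np.abs(w) - lam, 0.0)

def group_soft_threshold(w, lam):
    # Channel-wise shrinkage (prox of lam * sum_g ||w_g||_2), treating
    # each output channel of a conv weight as one group, so a whole
    # channel is zeroed (pruned) when its norm falls below lam.
    flat = w.reshape(w.shape[0], -1)
    norms = np.linalg.norm(flat, axis=1, keepdims=True)
    scale = np.maximum(1.0 - lam / np.maximum(norms, 1e-12), 0.0)
    return (flat * scale).reshape(w.shape)

# Hypothetical usage: one gradient-descent-then-shrinkage iteration.
w = np.random.randn(16, 3, 3, 3)           # (out_ch, in_ch, kH, kW)
grad = np.random.randn(*w.shape)           # stand-in for a loss gradient
lr, lam = 0.1, 0.05
w = group_soft_threshold(w - lr * grad, lr * lam)  # prunes whole channels
```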
{"title":"Channel Pruning in Quantization-aware Training: an Adaptive Projection-gradient Descent-shrinkage-splitting Method","authors":"Zhijian Li, J. Xin","doi":"10.1109/AI4I54798.2022.00015","DOIUrl":"https://doi.org/10.1109/AI4I54798.2022.00015","url":null,"abstract":"We propose an adaptive projection-gradient descentshrinkage- splitting method (APGDSSM) to integrate penalty based channel pruning into quantization-aware training (QAT). APGDSSM concurrently searches weights in both the quantized subspace and the sparse subspace. APGDSSM uses shrinkage operator and a splitting technique to create sparse weights, as well as the Group Lasso penalty to push the weight sparsity into channel sparsity. In addition, we propose a novel complementary transformed l1 penalty to stabilize the training for extreme compression.","PeriodicalId":345427,"journal":{"name":"2022 5th International Conference on Artificial Intelligence for Industries (AI4I)","volume":"40 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-04-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123548734","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}