
Latest Publications in IEEE MultiMedia

IEEE Computer Graphics and Applications
IF 3.2 | CAS Tier 4, Computer Science | Q2 (COMPUTER SCIENCE, HARDWARE & ARCHITECTURE) | Pub Date: 2024-01-03 | DOI: 10.1109/mmul.2023.3339958
Citations: 0
IEEE Computer Architecture Letters
IF 3.2 | CAS Tier 4, Computer Science | Q2 (COMPUTER SCIENCE, HARDWARE & ARCHITECTURE) | Pub Date: 2024-01-03 | DOI: 10.1109/mmul.2023.3339931
Citations: 0
Computing in Science & Engineering
IF 3.2 | CAS Tier 4, Computer Science | Q2 (COMPUTER SCIENCE, HARDWARE & ARCHITECTURE) | Pub Date: 2024-01-03 | DOI: 10.1109/mmul.2023.3339939
Citations: 0
IEEE Computer Society - Call for Papers
IF 3.2 | CAS Tier 4, Computer Science | Q2 (COMPUTER SCIENCE, HARDWARE & ARCHITECTURE) | Pub Date: 2024-01-03 | DOI: 10.1109/mmul.2023.3339935
Citations: 0
Drive Diversity & Inclusion in Computing
IF 3.2 | CAS Tier 4, Computer Science | Q2 (COMPUTER SCIENCE, HARDWARE & ARCHITECTURE) | Pub Date: 2024-01-03 | DOI: 10.1109/mmul.2023.3339930
Citations: 0
aVCSR: Adaptive Video Compressive Sensing Using Region-of-Interest Detection in the Compressed Domain
IF 3.2 | CAS Tier 4, Computer Science | Q2 (COMPUTER SCIENCE, HARDWARE & ARCHITECTURE) | Pub Date: 2023-12-14 | DOI: 10.1109/mmul.2023.3342062
Jian Yang, Haixin Wang, Ittetsu Taniguchi, Yibo Fan, Jinjia Zhou
Existing video compressive sensing (CS) techniques with fixed sampling rates can deliver satisfactory reconstructed quality but necessitate large transmission bandwidth. To overcome this challenge, region-of-interest (ROI)-based CS algorithms have been introduced to allocate different coding resources between ROI and non-ROI segments. However, excessively neglecting the non-ROI in these algorithms leads to unsatisfactory average quality in the eventual reconstruction. In this article, we integrate the ideas of these methods and propose a novel adaptive video CS approach using a low-complexity ROI detection method in the compressed domain. The ROI is detected and sampled by calculating the measurement variance between the reference frame and the subsequent frames. Conversely, the non-ROI is not transmitted but is reconstructed from the reference frame using the corresponding position information. In addition, we present a compact method for adapting the threshold value, which allows each frame of a video to have its own threshold rather than an artificially predetermined fixed value. Moreover, a reference-frame-updating strategy is developed to improve the versatility of the entire framework. Extensive experimental results demonstrate that, compared to state-of-the-art counterparts, our proposed method achieves superior performance while tackling diverse scenes and using a lower sampling rate.
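The core idea of the abstract — flag blocks as ROI when the variance of the measurement difference against a reference frame exceeds a per-frame (rather than fixed) threshold — can be sketched as follows. This is an illustrative toy, not the authors' implementation; the block layout, the use of the frame-mean as the adaptive threshold, and all names are assumptions.

```python
import numpy as np

def detect_roi_blocks(ref_meas, cur_meas):
    """Mark blocks whose compressed-domain measurements deviate from the
    reference frame as ROI (illustrative sketch, not the aVCSR code).

    ref_meas, cur_meas: (num_blocks, m) CS measurements per block.
    Returns a boolean ROI mask over blocks.
    """
    # Per-block variance of the measurement difference.
    diff_var = np.var(cur_meas - ref_meas, axis=1)
    # Adaptive per-frame threshold: the mean score of this frame,
    # instead of a hand-tuned fixed constant.
    threshold = diff_var.mean()
    return diff_var > threshold

# Toy example: 16 blocks, 32 measurements each; blocks 0-3 "move".
rng = np.random.default_rng(0)
ref = rng.normal(size=(16, 32))
cur = ref.copy()
cur[:4] += rng.normal(scale=2.0, size=(4, 32))  # simulate motion in 4 blocks
mask = detect_roi_blocks(ref, cur)
print(mask[:4].all(), mask[4:].any())  # True False
```

Only blocks flagged by the mask would then be sampled and transmitted; the rest would be filled in from the reference frame, as the abstract describes.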
Citations: 0
Knowledge of Cervical Cancer Screening and Prevention by Human Papillomavirus Deoxyribonucleic Acid and Human Papillomavirus Vaccination among Women Attending a Tertiary Care Centre.
IF 0.5 | CAS Tier 4, Computer Science | Q2 (COMPUTER SCIENCE, HARDWARE & ARCHITECTURE) | Pub Date: 2023-08-01 | DOI: 10.31729/jnma.8248
Sapana Amatya Vaidya, Lisa Roka Magar, Silpina Budha Magar

Introduction: Cervical cancer is one of the leading causes of morbidity and mortality among women globally as well as in Nepal. It is attributable to persistent infection by high-risk human papillomavirus, especially human papillomavirus-16 and human papillomavirus-18. The aim of this study was to find out the knowledge of cervical cancer screening and prevention by human papillomavirus deoxyribonucleic acid and human papillomavirus vaccination among women attending a tertiary care centre.

Methods: A descriptive cross-sectional study was conducted among patients attending the outpatient Department of Gynaecology of a tertiary care centre from 18 March to 30 April 2023. After the sample size was calculated, convenience sampling was used and a survey questionnaire on knowledge of cervical cancer screening and prevention by human papillomavirus deoxyribonucleic acid and human papillomavirus vaccination was administered. The point estimate was calculated at a 95% confidence interval.

Results: Among 508 women, 42 (8.27%) (5.86-10.64, 95% Confidence Interval) had knowledge of cervical cancer screening and prevention by human papillomavirus deoxyribonucleic acid and human papillomavirus vaccination. Of the total sample of 508 respondents, 164 (32.28%) knew about cervical cancer, 15 (2.95%) knew about HPV infection, 14 (2.76%) knew that HPV infection causes cervical cancer, and 21 (4.13%) knew that HPV is transmitted through multiple sex partners.

Conclusions: Knowledge of cervical cancer screening and prevention by human papillomavirus deoxyribonucleic acid and human papillomavirus vaccination among women is very low. This study recommends health education and awareness programmes to increase such knowledge.

Keywords: cervical cancer; human papillomavirus; pap smear; sexual intercourse; vaccination.
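The confidence interval in the Results can be reproduced (to rounding) with a normal-approximation (Wald) interval for a proportion. The snippet below is an illustrative check, not the authors' analysis code; small differences from the published 5.86-10.64 may stem from rounding or a different interval method.

```python
import math

def wald_ci(successes, n, z=1.96):
    """Normal-approximation (Wald) 95% CI for a proportion, in percent."""
    p = successes / n
    se = math.sqrt(p * (1 - p) / n)
    return 100 * (p - z * se), 100 * (p + z * se)

lo, hi = wald_ci(42, 508)
print(f"{42/508:.2%} (95% CI {lo:.2f}-{hi:.2f})")
```

With 42 of 508, this yields a point estimate of about 8.27% and an interval of roughly 5.87-10.66, consistent with the reported figures.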

Citations: 0
Edge Distraction-aware Salient Object Detection
IF 3.2 | CAS Tier 4, Computer Science | Q2 (COMPUTER SCIENCE, HARDWARE & ARCHITECTURE) | Pub Date: 2023-07-01 | DOI: 10.1109/MMUL.2023.3235936
Sucheng Ren, Wenxi Liu, Jianbo Jiao, Guoqiang Han, Shengfeng He
Integrating low-level edge features has been proven to be effective in preserving clear boundaries of salient objects. However, the locality of edge features makes it difficult to capture globally salient edges, leading to distraction in the final predictions. To address this problem, we propose to produce distraction-free edge features by incorporating cross-scale holistic interdependencies between high-level features. In particular, we first formulate our edge feature extraction process as a boundary-filling problem. In this way, we enforce edge features to focus on closed boundaries instead of those disconnected background edges. Second, we propose to explore cross-scale holistic contextual connections between every position pair of high-level feature maps regardless of their distances across different scales. It selectively aggregates features at each position based on its connections to all the others, simulating the “contrast” stimulus of visual saliency. Finally, we present a complementary features integration module to fuse low- and high-level features according to their properties. Experimental results demonstrate that our proposed method outperforms previous state-of-the-art methods on the benchmark datasets, with a fast inference speed of 30 FPS on a single GPU.
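The "connections between every position pair" idea is essentially a non-local aggregation: each position gathers features from all other positions, weighted by pairwise similarity. A minimal single-scale sketch (the paper's version is cross-scale and learned; everything here, including the plain dot-product affinity, is a simplifying assumption):

```python
import numpy as np

def holistic_aggregation(feat):
    """Aggregate features at every position from all positions, weighted
    by pairwise similarity (minimal non-local sketch, single scale).

    feat: (C, H, W) feature map. Returns an aggregated (C, H, W) map.
    """
    C, H, W = feat.shape
    x = feat.reshape(C, H * W)                 # flatten spatial positions
    affinity = x.T @ x                         # (HW, HW) pairwise similarity
    # Row-wise softmax: each query position's weights over all positions.
    affinity = np.exp(affinity - affinity.max(axis=1, keepdims=True))
    affinity /= affinity.sum(axis=1, keepdims=True)
    out = x @ affinity.T                       # weighted feature aggregation
    return out.reshape(C, H, W)

rng = np.random.default_rng(1)
f = rng.normal(size=(8, 4, 4))
g = holistic_aggregation(f)
print(g.shape)  # (8, 4, 4)
```

Because the weights depend on similarity to every other position, a position that stands out from the rest receives a distinctive aggregate, loosely mirroring the "contrast" stimulus the abstract mentions.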
Citations: 0
Content-Aware Latent Semantic Direction Fusion for Multi-Attribute Editing
IF 3.2 | CAS Tier 4, Computer Science | Q2 (COMPUTER SCIENCE, HARDWARE & ARCHITECTURE) | Pub Date: 2023-07-01 | DOI: 10.1109/MMUL.2023.3285550
Xiwen Wei, Yihan Tang, Si Wu
For facial attribute editing, significant progress has been made in discovering semantic directions in the latent space of StyleGAN: manipulation is performed by mapping an input image to a latent code and then moving along a direction associated with a target attribute. In this setting, multi-attribute editing typically needs a sequential transformation process, which may cause ineffective manipulation or a cumulative deviation of irrelevant attributes. In this work, we aim to simultaneously manipulate multiple attributes through a single transformation. Toward this end, we propose a StyleGAN-based latent semantic direction fusion model, referred to as StyleLSF. It has two learnable components: a content-aware direction predictor that learns to infer the latent directions associated with preset attributes, and a fusion network that fuses the directions for the target attributes and yields a single translation vector. We further ensure irrelevant-attribute preservation by imposing an attribute-aware feature consistency regularization approach.
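To make the "single translation vector" idea concrete: with fixed directions, a multi-attribute edit reduces to a weighted sum applied in one step. StyleLSF replaces this naive linear fusion with a learned, content-aware fusion network; the sketch below only illustrates the single-transformation baseline, and all names and shapes are assumptions.

```python
import numpy as np

def fuse_directions(w, directions, strengths):
    """Apply several attribute edits as ONE latent translation
    (naive linear fusion; StyleLSF learns this fusion instead).

    w: (d,) latent code; directions: (k, d) unit semantic directions;
    strengths: (k,) edit magnitudes for the target attributes.
    """
    translation = strengths @ directions      # single fused vector
    return w + translation

rng = np.random.default_rng(2)
w = rng.normal(size=512)
dirs = rng.normal(size=(3, 512))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
w_edit = fuse_directions(w, dirs, np.array([1.5, -0.5, 2.0]))

# For fixed linear directions, the fused one-step edit matches the
# sequential three-step edit exactly:
w_seq = w + 1.5 * dirs[0] - 0.5 * dirs[1] + 2.0 * dirs[2]
print(np.allclose(w_edit, w_seq))  # True
```

The benefit of a learned fusion appears precisely where this equivalence breaks down: when directions interact or depend on the input content, a single learned translation can avoid the cumulative drift of sequential edits.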
Citations: 0
Optimizing Multidimensional Perceptual Quality in Online Interactive Multimedia
IF 3.2 | CAS Tier 4, Computer Science | Q2 (COMPUTER SCIENCE, HARDWARE & ARCHITECTURE) | Pub Date: 2023-07-01 | DOI: 10.1109/MMUL.2023.3277851
Benjamin W. Wah, Jingxi X. Xu
Network latencies and losses in online interactive multimedia applications may lead to a degraded perception of quality, such as lower interactivity or sluggish responses. We can measure these degradations in perceptual quality by the just-noticeable difference, awareness, or probability of noticeability ($p_{\text{note}}$); the latter measures the likelihood that subjects can notice a change from a reference to a modified reference. In our previous work, we developed an efficient method for finding the perceptual quality for one metric under simplex control. However, integrating the perceptual qualities of several metrics is a heuristic. In this article, we present a formal approach to optimally combine the perceptual quality of multiple metrics into a joint measure that shows their tradeoffs. Our result shows that the optimal balance occurs when the $p_{\text{note}}$ values of all the component metrics are equal. Furthermore, our approach leads to an algorithm with a linear (instead of combinatorial) complexity in the number of metrics. Finally, we present the application of our method in two case studies: one on VoIP for finding the optimal operating points, and the second on fast-action games for hiding network delays while maintaining the consistency of action orders.
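The equal-$p_{\text{note}}$ optimum suggests a simple way to see why a linear-complexity algorithm is plausible: search over a single common $p_{\text{note}}$ level, with each candidate level costing one pass over the metrics. The bisection below is an illustrative toy under assumed monotone resource-demand functions, not the paper's algorithm.

```python
def equalize_pnote(resource_for, budget, lo=0.0, hi=1.0, iters=60):
    """Bisection for the common p_note level at which total resource
    demand equals the budget; each step costs O(k) in the number of
    metrics k (illustrative sketch under assumed monotone demands).

    resource_for: list of decreasing functions r_i(p) -> resource needed
    so that metric i's probability of noticeability is at most p.
    """
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        demand = sum(r(mid) for r in resource_for)
        if demand > budget:
            lo = mid      # too ambitious a target: allow higher p_note
        else:
            hi = mid      # budget slack: push the common p_note lower
    return hi

# Toy demands r_i(p) = c_i * (1 - p) with c = 1, 2, 3 and budget 1.5:
# total demand is 6 * (1 - p), so the balanced level is p = 0.75.
fns = [lambda p, c=c: c * (1 - p) for c in (1.0, 2.0, 3.0)]
p_star = equalize_pnote(fns, budget=1.5)
print(round(p_star, 4))  # 0.75
```

Because all metrics sit at the same $p_{\text{note}}$ at the optimum, one scalar search suffices, avoiding a combinatorial search over per-metric levels.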
Citations: 0