
Latest publications from the 2021 International Conference on Culture-oriented Science & Technology (ICCST)

Discussion on Conceptual Form of The Third Generation Camera Robot System
Pub Date : 2021-11-01 DOI: 10.1109/ICCST53801.2021.00010
Jing He, Shitou Liu, Yixin Zhao, Qiang Liu, Yixue Dou
By reviewing the development of camera robot systems, this paper summarizes the functions and application characteristics of the first two generations. Based on the practical requirements of film shooting and the natural evolution path of the technology, it proposes the conceptual form of a fully automatic third-generation camera robot system designed for photographer control. To clearly explain the major differences between the third generation and the earlier two, the paper adopts a comparative narrative. Considering the application, development status, on-site requirements, and engineering limitations of camera robot systems in recent years, it discusses the design principles and functional advantages of the third-generation system and proposes the corresponding key research directions.
Citations: 0
Research on Poverty Alleviation Path of Film and Television Design in the Context of Media Convergence
Pub Date : 2021-11-01 DOI: 10.1109/ICCST53801.2021.00044
Guobin Peng, Xudong Pi, Jiajia Zhang
As a cause shared by the whole of society, poverty alleviation work rests on exploring diversified alleviation paths within the overall project. Media convergence combines the characteristic advantages of traditional and new media, providing a breakthrough for poverty alleviation in both media channels and content creation. With the rapid development of science and technology and the popularization of digitalization, the artistic language of film and television design has gradually shown its advantages in the poverty alleviation pattern, improving the transmission efficiency of poverty-alleviation information. The stratified genres of film and television broaden the audience, allowing poverty-alleviation information to reach all social groups through media and video symbols and thus indirectly advancing poverty alleviation.
Citations: 0
Full-Reference Video Quality Assessment Based on Spatiotemporal Visual Sensitivity
Pub Date : 2021-11-01 DOI: 10.1109/ICCST53801.2021.00071
Huiyuan Fu, Da Pan, Ping Shi
Video streaming services have become one of the most important businesses of network service providers. Accurately predicting a video's perceptual quality score helps provide high-quality video services. Many video quality assessment (VQA) methods attempt to simulate the human visual system (HVS) to achieve better performance. In this paper, we propose a full-reference video quality assessment (FR-VQA) method named DeepVQA-FBSA, based on spatiotemporal visual sensitivity. It first uses a convolutional neural network (CNN) to obtain a visual sensitivity map for each frame from the input spatiotemporal information. The sensitivity maps are then used to derive per-frame perceptual features, which we call frame-level features. These frame-level features are fed into a Feature Based Self-attention (FBSA) module, fused into video-level features, and used to predict the video quality score. Experimental results show that the predictions of our method are highly consistent with subjective evaluation results.
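The fusion step of such a pipeline can be illustrated with a minimal self-attention pooling sketch (not the paper's implementation; the shapes, scaled dot-product scoring, and mean pooling are assumptions):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def fuse_frame_features(frame_feats):
    """Fuse frame-level features (T, d) into one video-level feature (d,)
    with scaled dot-product self-attention followed by mean pooling."""
    T, d = frame_feats.shape
    scores = frame_feats @ frame_feats.T / np.sqrt(d)  # (T, T) frame-to-frame similarity
    attn = softmax(scores, axis=-1)                    # attention weights over frames
    attended = attn @ frame_feats                      # (T, d) context-aware frame features
    return attended.mean(axis=0)                       # pool to a single video-level vector

video_feat = fuse_frame_features(np.random.rand(8, 16))
```

A regression head on `video_feat` would then predict the quality score.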
Citations: 2
Movie Scene Argument Extraction with Trigger Action Information
Pub Date : 2021-11-01 DOI: 10.1109/ICCST53801.2021.00103
Qian Yi, Guixuan Zhang, Jie Liu, Shuwu Zhang
The movie scene argument is an essential part of a movie scene, and extracting it helps in understanding the movie plot. In this paper, we propose a movie scene argument extraction model that uses the trigger action's paraphrase as additional information to improve argument extraction. Specifically, we obtain the paraphrase of the trigger from a dictionary and employ an attention mechanism to encode it into an argument-oriented embedding vector. We then use this vector together with the instance embedding for argument extraction. Experimental results on a movie scene event extraction dataset and a widely used open-domain event extraction dataset demonstrate the effectiveness of our model.
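The attention encoding step can be sketched as follows (a toy illustration, not the paper's model; the query vector and dot-product scoring are assumptions):

```python
import numpy as np

def encode_paraphrase(paraphrase_vecs, query_vec):
    """Encode paraphrase token vectors (n, d) into one argument-oriented
    embedding (d,) via dot-product attention against a query vector."""
    scores = paraphrase_vecs @ query_vec      # (n,) relevance of each paraphrase token
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                  # softmax attention weights
    return weights @ paraphrase_vecs          # weighted sum -> argument-oriented embedding

# The result would be concatenated with the instance embedding
# before the argument classifier.
arg_embed = encode_paraphrase(np.random.rand(5, 8), np.random.rand(8))
```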
Citations: 1
Named Entity Recognition of traditional architectural text based on BERT
Pub Date : 2021-11-01 DOI: 10.1109/ICCST53801.2021.00047
Yifu Li, Wenjun Hou, Bing Bai
Traditional architecture is an important carrier of traditional culture. With deep learning models, relevant entities can be extracted automatically from unstructured texts, providing data support for the protection and inheritance of traditional architecture. However, research on text information extraction in this field has not been carried out effectively. In this paper, a dataset of nearly 50,000 words in this field is collected, organized, and annotated; five types of entity labels are defined; annotation specifications are clarified; and a Named Entity Recognition method based on a pre-trained model is proposed. A BERT (Bidirectional Encoder Representations from Transformers) pre-trained model captures dynamic word-vector information, a Bi-directional Long Short-Term Memory (BiLSTM) module captures bidirectional contextual information from the forward and backward sequences, and a Conditional Random Field (CRF) module completes the classification mapping between labels. Experiments show that, compared with other models, the proposed BERT-BiLSTM-CRF model achieves better recognition in this field, with an F1 score of 95.45%.
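The CRF layer in such a tagger picks the label sequence that maximizes emission plus transition scores, typically via Viterbi decoding. A minimal sketch (generic CRF decoding, not the paper's code):

```python
import numpy as np

def viterbi_decode(emissions, transitions):
    """Best label path given per-token emission scores (T, L) and
    label-to-label transition scores (L, L), as in a linear-chain CRF."""
    T, L = emissions.shape
    score = emissions[0].copy()          # best score ending in each label at t=0
    back = np.zeros((T, L), dtype=int)   # backpointers
    for t in range(1, T):
        # total[i, j]: best path ending in label i at t-1, then label j at t
        total = score[:, None] + transitions + emissions[t]
        back[t] = total.argmax(axis=0)
        score = total.max(axis=0)
    path = [int(score.argmax())]
    for t in range(T - 1, 0, -1):        # follow backpointers
        path.append(int(back[t][path[-1]]))
    return path[::-1]
```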
Citations: 0
Lightweight Image Super-Resolution via Dual Feature Aggregation Network
Pub Date : 2021-11-01 DOI: 10.1109/ICCST53801.2021.00104
Shang Li, Guixuan Zhang, Zhengxiong Luo, Jie Liu, Zhi Zeng, Shuwu Zhang
With the power of deep learning, super-resolution (SR) methods have enjoyed a dramatic boost in performance. However, they usually have large model sizes and high computational complexity, which hinders their application on devices with limited memory and computing power. Some lightweight SR methods address this by directly designing shallower architectures, but this hurts SR performance. In this paper, we propose the dual feature aggregation strategy (DFA). It enhances feature utilization via feature reuse, which largely improves representation ability while introducing only marginal computational cost; a smaller model can thus achieve better cost-effectiveness. Specifically, DFA consists of local and global feature aggregation modules (LAM and GAM), which work together to adaptively fuse hierarchical features along the channel and spatial dimensions. Extensive experiments suggest that the proposed network performs favorably against state-of-the-art SR methods in terms of visual quality, memory footprint, and computational complexity.
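Channel-wise and spatial aggregation of hierarchical features can be sketched like this (a toy illustration under assumed gating and averaging choices, not the LAM/GAM design from the paper):

```python
import numpy as np

def _sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(feat):
    """Reweight channels of a (C, H, W) feature map by a sigmoid gate
    on each channel's global average response."""
    C = feat.shape[0]
    gate = _sigmoid(feat.reshape(C, -1).mean(axis=1))   # (C,)
    return feat * gate[:, None, None]

def spatial_attention(feat):
    """Reweight spatial positions by a sigmoid gate on the
    cross-channel mean response."""
    gate = _sigmoid(feat.mean(axis=0))                  # (H, W)
    return feat * gate[None, :, :]

def dual_aggregate(feats):
    """Fuse hierarchical (C, H, W) features, reusing every level's output
    along both the channel and spatial dimensions."""
    fused = sum(channel_attention(f) + spatial_attention(f) for f in feats)
    return fused / (2 * len(feats))

out = dual_aggregate([np.random.rand(3, 4, 4) for _ in range(2)])
```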
Citations: 0
Research on the Development and Application of Virtual Reality Simulation Technology in Investigation and Research Courses
Pub Date : 2021-11-01 DOI: 10.1109/ICCST53801.2021.00028
Xing Fang, K. Un, Xi Zhang
The purpose of this study is to develop investigation and research courses, improve design education methods, and adapt to the rapid changes of the platform era. To improve design education in these courses, we built a virtual reality simulation experiment based on head-tracking VR that displays investigation results, allowing users to investigate subjects presented by a variety of NPCs and apply the findings to their designs. The core elements include user virtual-reality control, a virtual AI system, and research information collection. Although the content of the simulation is simple, the experience gained through it has a positive impact on the design education of the investigation team.
Citations: 0
A Survey upon College Students’ Consumption Behavior on Short Video Platforms
Pub Date : 2021-11-01 DOI: 10.1109/ICCST53801.2021.00133
Caiwen Zhao, Gu Wang, Guowei Li, Li Ding
The rapid development of the short video industry has not only offered individuals innovative ways to entertain and socialize but has also fostered a new online consumption mode. However, consumption on short video platforms involves a flurry of problems, such as pandering to general consumption trends, blind consumption, lax quality management of some products, and difficulty protecting consumers’ rights and interests. In this paper, a questionnaire survey was employed to investigate and analyze college students’ usage and consumption behavior on short video platforms. We find that (1) the content and style of short videos, (2) the personal charisma of the vlogger, (3) the user’s personal preferences, and (4) the platform’s purchase mode are the main factors affecting college students’ consumption. The results offer a reference for short video platforms seeking to improve user stickiness.
Citations: 0
A Robust video watermarking approach based on QR code
Pub Date : 2021-11-01 DOI: 10.1109/ICCST53801.2021.00079
Zhuojie Gao, Zhixian Niu, Baoning Niu, Hu Guan, Ying Huang, Shuwu Zhang
Video watermarking embeds a copyright mark, called a watermark, into video frames to prove ownership of the video copyright. Compared with the more mature image watermarking algorithms, video watermarking algorithms require higher robustness. This paper encodes the watermark as a QR code to make full use of the QR code's high fault tolerance, and proposes a watermark generation and decoding strategy based on the characteristics of QR codes, which improves the robustness of the watermarking algorithm. Experimental results show that the algorithm is more robust than algorithms that use a random binary string or a scrambled QR code as the watermark.
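As a toy illustration of embedding a bit pattern (such as flattened QR-code modules) into a frame, here is a least-significant-bit scheme; the paper's actual embedding strategy is not specified here, and real robust watermarking would operate in a transform domain rather than on raw pixel LSBs:

```python
import numpy as np

def embed_bits(frame, bits):
    """Embed a flat uint8 bit array into the least significant bit of the
    first len(bits) pixels of a grayscale frame; returns a new frame."""
    flat = frame.astype(np.uint8).flatten()        # copy, original untouched
    flat[:len(bits)] = (flat[:len(bits)] & 0xFE) | bits
    return flat.reshape(frame.shape)

def extract_bits(frame, n):
    """Recover the first n embedded bits from a watermarked frame."""
    return frame.flatten()[:n] & 1
```

A QR decoder's error correction would then tolerate a fraction of flipped bits when the frame is compressed or attacked.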
Citations: 0
Calculation and simulation of loudspeaker power based on cultural complex
Pub Date : 2021-11-01 DOI: 10.1109/ICCST53801.2021.00109
Zhen Li, Pengyang Ji, Lifeng Wu, Hui Ren, Yuqing Chen
Little research has addressed how to calculate loudspeaker power for the sound reinforcement system of a cultural complex. This paper analyzes the calculation methods proposed by domestic and foreign scholars and, on that basis, puts forward an improved method that calculates the total loudspeaker power from the space volume of the hall (LPSV). Using an existing algorithm and LPSV, the loudspeaker power in the cultural complex is calculated; with LPSV, the total loudspeaker power required in a hall can be obtained directly from the hall's volume. A 3D model of the hall is built in EASE to calculate and simulate sound reinforcement parameters such as sound pressure level, articulation loss, and the rapid speech transmission index. In terms of maximum sound pressure level, sound field nonuniformity, and transmission frequency characteristics, the results meet the “Design specification of hall sound amplification system” (GB50371-2006). This study provides a theoretical basis for configuring loudspeakers in the sound reinforcement systems of cultural complexes and has considerable application value.
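The paper's LPSV method works from hall volume. As general background only (not the LPSV formula), the classic distance-based calculation derives the required electrical power from the point-source law SPL(r) = sensitivity + 10·log10(P) − 20·log10(r); the headroom default below is an assumption:

```python
import math

def required_power(target_spl, distance_m, sensitivity_db, headroom_db=10.0):
    """Electrical power (W) needed to reach target_spl (dB) at distance_m
    (meters) with a loudspeaker of the given 1 W / 1 m sensitivity (dB),
    using the free-field inverse-square law plus headroom."""
    needed_db = target_spl + headroom_db + 20 * math.log10(distance_m) - sensitivity_db
    return 10 ** (needed_db / 10)

# e.g. a 96 dB (1 W / 1 m) loudspeaker delivering 96 dB at 10 m with no
# headroom needs 20 dB of electrical gain, i.e. 100 W.
```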
Citations: 0