{"title":"在为自然语言处理而设计的基础设施内实施语言模型","authors":"Bartosz Walkowiak, Tomasz Walkowiak","doi":"10.24425/ijet.2024.149525","DOIUrl":null,"url":null,"abstract":"This paper explores cost-effective alternatives for resource-constrained environments in the context of language models by investigating methods such as quantization and CPUbased model implementations. The study addresses the computational efficiency of language models during inference and the development of infrastructure for text document processing. The paper discusses related technologies, the CLARIN-PL infrastructure architecture, and implementations of small and large language models. The emphasis is on model formats, data precision, and runtime environments (GPU and CPU). It identifies optimal solutions through extensive experimentation. In addition, the paper advocates for a more comprehensive performance evaluation approach. Instead of reporting only average token throughput, it suggests considering the curve’s shape, which can vary from constant to monotonically increasing or decreasing functions. Evaluating token throughput at various curve points, especially for different output token counts, provides a more informative perspective.","PeriodicalId":13922,"journal":{"name":"International Journal of Electronics and Telecommunications","volume":null,"pages":null},"PeriodicalIF":0.5000,"publicationDate":"2024-03-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Implementation of language models within an infrastructure designed for Natural Language Processing\",\"authors\":\"Bartosz Walkowiak, Tomasz Walkowiak\",\"doi\":\"10.24425/ijet.2024.149525\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"This paper explores cost-effective alternatives for resource-constrained environments in the context of language models by investigating methods such as quantization and CPUbased model implementations. The study addresses the computational efficiency of language models during inference and the development of infrastructure for text document processing. The paper discusses related technologies, the CLARIN-PL infrastructure architecture, and implementations of small and large language models. The emphasis is on model formats, data precision, and runtime environments (GPU and CPU). It identifies optimal solutions through extensive experimentation. In addition, the paper advocates for a more comprehensive performance evaluation approach. Instead of reporting only average token throughput, it suggests considering the curve’s shape, which can vary from constant to monotonically increasing or decreasing functions. 
Evaluating token throughput at various curve points, especially for different output token counts, provides a more informative perspective.\",\"PeriodicalId\":13922,\"journal\":{\"name\":\"International Journal of Electronics and Telecommunications\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":0.5000,\"publicationDate\":\"2024-03-23\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"International Journal of Electronics and Telecommunications\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.24425/ijet.2024.149525\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q4\",\"JCRName\":\"TELECOMMUNICATIONS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"International Journal of Electronics and Telecommunications","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.24425/ijet.2024.149525","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q4","JCRName":"TELECOMMUNICATIONS","Score":null,"Total":0}
Citations: 0
Abstract
This paper explores cost-effective alternatives for resource-constrained environments in the context of language models by investigating methods such as quantization and CPU-based model implementations. The study addresses the computational efficiency of language models during inference and the development of infrastructure for text document processing. The paper discusses related technologies, the CLARIN-PL infrastructure architecture, and implementations of small and large language models. The emphasis is on model formats, data precision, and runtime environments (GPU and CPU). It identifies optimal solutions through extensive experimentation. In addition, the paper advocates for a more comprehensive performance evaluation approach. Instead of reporting only average token throughput, it suggests considering the curve's shape, which can vary from constant to monotonically increasing or decreasing functions. Evaluating token throughput at various curve points, especially for different output token counts, provides a more informative perspective.
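
To make the suggested evaluation concrete, the sketch below measures token throughput at several output token counts instead of reporting a single average. It is a minimal illustration, not the paper's benchmark code: it assumes a Hugging Face transformers causal language model with greedy decoding, and the model name and the grid of output lengths are placeholder choices.

import time
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder model; the paper's experiments cover other small and large models.
model_name = "gpt2"
device = "cuda" if torch.cuda.is_available() else "cpu"

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name).to(device).eval()

prompt = "Language model infrastructures for text document processing"
inputs = tokenizer(prompt, return_tensors="pt").to(device)
prompt_len = inputs["input_ids"].shape[1]

# Sample points along the output-length axis; the grid is an illustrative choice
# kept within the placeholder model's context window.
for n_tokens in (32, 128, 256, 512):
    start = time.perf_counter()
    with torch.no_grad():
        output = model.generate(
            **inputs,
            min_new_tokens=n_tokens,   # force exactly n_tokens generated tokens
            max_new_tokens=n_tokens,
            do_sample=False,           # greedy decoding for repeatable timing
        )
    elapsed = time.perf_counter() - start
    generated = output.shape[1] - prompt_len
    print(f"{n_tokens:5d} output tokens: {generated / elapsed:7.1f} tokens/s")

Plotting tokens per second against output length shows whether the throughput curve is roughly constant, monotonically increasing, or decreasing, which is the shape information that a single averaged figure hides.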