
Computer Science Research Notes: Latest Publications

Real-Time Visual Analytics for Remote Monitoring of Patient’s Health
Pub Date : 2023-07-01 DOI: 10.24132/csrn.3301.61
Maryam Boumrah, S. Garbaya, A. Radgui
The recent proliferation of advanced data collection technologies for Patient Generated Health Data (PGHD) has made remote health monitoring more accessible. However, the complex nature of the large volume of medically generated data presents a significant challenge for traditional patient monitoring approaches and impedes the effective extraction of useful information. In this context, it is imperative to develop a robust and cost-effective framework that provides scalability and handles the heterogeneity of PGHD in real time. Such a system could serve as a reference and guide future research on monitoring patients undergoing treatment at home. This study presents a real-time visual analytics framework offering insightful visual representations of multimodal big data. The proposed system was designed following the principles of User Centered Design (UCD) to ensure that it meets the needs and expectations of medical practitioners. The usability of the framework was evaluated by applying it to the visualization of kinematic data of patients’ upper-limb movements during neuromotor rehabilitation exercises.
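
As a rough illustration of the kind of real-time visualization pipeline described here, the sketch below streams simulated upper-limb kinematic samples into a live matplotlib plot. The `read_sample` function and all parameter values are hypothetical stand-ins for an actual PGHD feed, not part of the authors' framework.

```python
# Minimal sketch (not the authors' framework): stream simulated upper-limb
# kinematic samples and update a plot in real time. Replace read_sample with
# a real sensor/PGHD stream in practice.
from collections import deque
import math, random

import matplotlib.pyplot as plt
from matplotlib.animation import FuncAnimation

WINDOW = 200                                   # samples kept on screen
times, angles = deque(maxlen=WINDOW), deque(maxlen=WINDOW)

def read_sample(t):
    """Hypothetical PGHD reading: elbow angle in degrees with sensor noise."""
    return 45 + 30 * math.sin(t / 10) + random.gauss(0, 2)

fig, ax = plt.subplots()
line, = ax.plot([], [])
ax.set_xlabel("sample")
ax.set_ylabel("elbow angle [deg]")

def update(frame):
    times.append(frame)
    angles.append(read_sample(frame))
    line.set_data(times, angles)
    ax.relim(); ax.autoscale_view()            # keep the view on the data
    return line,

anim = FuncAnimation(fig, update, interval=50, cache_frame_data=False)
plt.show()
```
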
Citations: 0
Semi-Supervised Learning Approach for Fine Grained Human Hand Action Recognition in Industrial Assembly
Pub Date : 2023-07-01 DOI: 10.24132/csrn.3301.58
Fabian Sturm, Rahul Sathiyababu, E. Hergenroether, M. Siegel
Until now, it has been impossible to imagine industrial manual assembly without humans, owing to their flexibility and adaptability. But the assembly process does not always benefit from human intervention. The error-proneness of the assembler due to disturbance, distraction or inattention calls for intelligent support of the employee, and the permanently occurring, repetitive data patterns make the task well suited for deep learning approaches. However, labels for the data are not always sufficiently available. In this work, a spatio-temporal transformer model is used to address the scarcity of labels in an industrial setting. A pseudo-labeling method from the field of semi-supervised transfer learning is applied for model training, and the entire architecture is adapted to the fine-grained recognition of human hand actions in assembly. This implementation significantly improves the generalization of the model during training across different variations of strong and weak classes from the ground truth and demonstrates that deep learning technologies can be used in an industrial setting even with few labels. Beyond the main goal of improving the generalization capabilities of the model by using less data during training and exploring different variations of appropriate ground truth and new classes, the recognition capabilities of the model are improved by adding convolution to the temporal embedding layer, which increases test accuracy by over 5% compared to a similar predecessor model.
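
The pseudo-labeling idea itself is generic and can be sketched independently of the paper's transformer architecture. The minimal PyTorch step below trains on a labeled batch and adds a weighted loss on unlabeled samples whose predicted confidence exceeds a threshold; the threshold, the weight and the `model`/`optimizer` objects are assumptions, not the paper's exact training recipe.

```python
# Minimal pseudo-labeling step (generic sketch, not the paper's exact recipe):
# supervised loss on the labeled batch plus a weighted loss on unlabeled
# samples whose predicted class confidence exceeds a threshold.
import torch
import torch.nn.functional as F

def semi_supervised_step(model, optimizer, x_lab, y_lab, x_unlab,
                         threshold=0.95, unlab_weight=1.0):
    model.train()
    optimizer.zero_grad()

    # Supervised part.
    loss = F.cross_entropy(model(x_lab), y_lab)

    # Pseudo-labels from the current model (no gradient through the targets).
    with torch.no_grad():
        probs = F.softmax(model(x_unlab), dim=1)
        conf, pseudo = probs.max(dim=1)
        keep = conf >= threshold            # only confident predictions

    if keep.any():
        loss = loss + unlab_weight * F.cross_entropy(
            model(x_unlab[keep]), pseudo[keep])

    loss.backward()
    optimizer.step()
    return loss.item()
```
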
Citations: 0
Error-Robust Indoor Augmented Reality Navigation: Evaluation Criteria and a New Approach
Pub Date : 2023-07-01 DOI: 10.24132/csrn.3301.17
Oliver Scheibert, Jannis Möller, S. Grogorick, M. Eisemann
Tracking errors severely impact the effectiveness of augmented reality display techniques for indoor navigation. In this work we examine the sources of error and the accuracy of existing tracking technologies. We derive important design criteria for robust display techniques and present objective evaluation criteria, which allow indoor navigation techniques to be assessed without, or in preparation for, quantitative user studies. Based on these criteria we propose a new error-tolerant display technique called Bending Words, in which words move along the navigation path to guide the user. Bending Words outperforms the other evaluated display techniques on many of the tested criteria and provides a robust, error-tolerant alternative to established augmented reality indoor navigation display techniques.
Citations: 0
Raytracing Renaissance: An elegant framework for modeling light at Multiple Scales
Pub Date : 2023-07-01 DOI: 10.24132/csrn.3301.2
S. Semwal
Ray tracing remains of interest to the Computer Graphics community with its elegant framing of how light interacts with objects, its ability to easily support multiple light sources, and its simple framework for merging synthetic and real cameras. The recent trend of providing implementations at the chip level means that ray tracing's constant quest for realism will propel its use in real-time applications. AR/VR, animation, the 3D games industry, large-scale 3D simulations, and future social computing platforms are just a few examples of possible major impact. Ray tracing is also appealing to the HCI community because it extends well along 3D space and time, seamlessly blending synthetic and real cameras at multiple scales to support storytelling. This presentation will include a few milestones from my work, such as the Slicing Extent technique and Directed Safe Zones. Our recent applications of machine learning techniques for creating novel synthetic views, which could also open a future doorway to handling dynamic scenes with more compute power as needed, will also be presented. It is once again a renaissance for ray tracing, which for the last 50+ years has remained the most elegant technique for modeling light phenomena in virtual worlds at whatever scale compute power could support.
Citations: 0
The Method of Mixed States for Interactive Editing of Big Point Clouds
Pub Date : 2023-07-01 DOI: 10.24132/csrn.3301.21
W. Benger, A. Voicu, R. Baran, Loredana Gonciulea, Cosmin Barna, F. Steinbacher
We present a novel methodological approach for the interactive editing of big point clouds. Based on the mathematics of fiber bundles, the proposed approach models a data structure that is efficient for visualization, modification and I/O, and includes an unlimited multi-level set of editing states useful for expressing and maintaining multiple undo histories. Backed by HDF5 as a high-performance file format, this data structure naturally allows persistent storage of the history of modification actions, a unique new feature of our approach. The challenges of visually based manual editing of big point clouds are discussed and a suitable rendering solution is presented. The implemented solution and the features that follow from the underlying methodology are compared with two major mainstream applications that also provide point-cloud editing tools.
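
One way to picture an HDF5-backed history of modification actions is an append-only group of edit records stored next to the raw points, as in the h5py sketch below. The layout, group names and state flags are illustrative assumptions, not the authors' actual fiber-bundle schema.

```python
# Illustrative h5py layout (assumed, not the authors' schema): the raw points
# are stored once, and every editing action appends a small record holding the
# affected point indices and the state they were switched to, so the history
# of modifications can be replayed or undone later.
import numpy as np
import h5py

DELETED, KEPT = 0, 1

with h5py.File("pointcloud_edits.h5", "w") as f:
    points = np.random.rand(100_000, 3).astype(np.float32)
    f.create_dataset("points/xyz", data=points)
    f.create_group("edits")

def record_edit(path, indices, new_state, note=""):
    """Append one editing action as a persistent record."""
    with h5py.File(path, "a") as f:
        edits = f["edits"]
        rec = edits.create_group(f"edit_{len(edits):06d}")
        rec.create_dataset("indices", data=np.asarray(indices, dtype=np.int64))
        rec.attrs["state"] = new_state
        rec.attrs["note"] = note

def current_states(path):
    """Replay the stored history to obtain the current per-point state."""
    with h5py.File(path, "r") as f:
        states = np.full(f["points/xyz"].shape[0], KEPT, dtype=np.int8)
        for name in sorted(f["edits"]):
            rec = f["edits"][name]
            states[rec["indices"][...]] = rec.attrs["state"]
    return states

record_edit("pointcloud_edits.h5", [10, 11, 12], DELETED, note="manual cleanup")
```
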
Citations: 0
Sex Classification of Face Images using Embedded Prototype Subspace Classifiers
Pub Date : 2023-07-01 DOI: 10.24132/csrn.3301.7
A. Hast
In recent academic literature, Sex and Gender have become synonyms, even though distinct definitions do exist. This gives rise to the question: which of the two are face image classifiers actually identifying? It will be argued and explained why CNN-based classifiers will generally identify gender, while feeding face recognition feature vectors into a neural network will tend to verify sex rather than gender. It is shown for the first time how state-of-the-art sex classification can be performed using Embedded Prototype Subspace Classifiers (EPSC) and how the projection depth can be learned efficiently. The automatic gender classification produced by the InsightFace project is used as a baseline and compared to the results given by the EPSC, which takes the feature vectors produced by InsightFace as input. It turns out that the projection depth needed for these face feature vectors is much larger than, for example, when classifying MNIST or similar data. Therefore, one important contribution is a simple method to determine the optimal depth for any kind of data. Furthermore, it is shown how the weights in the final layer can be set in order to make the choice of depth stable and independent of the kind of learning data. The resulting EPSC is extremely lightweight and yet very accurate, reaching over 98% accuracy on several datasets.
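
The essential subspace idea behind this kind of classifier can be illustrated without the prototype embedding and depth-learning machinery of EPSC: fit a per-class PCA basis of fixed depth and assign a sample to the class whose subspace captures most of its energy. The sketch below is that simplification only; the fixed `depth` parameter stands in for the projection depth that the paper learns automatically.

```python
# Simplified subspace classifier (illustration of the general idea behind
# EPSC, not the paper's method): each class is represented by the top-d
# principal directions of its feature vectors; a sample is assigned to the
# class whose subspace preserves the largest share of its (centered) norm.
import numpy as np

class SubspaceClassifier:
    def __init__(self, depth=10):
        self.depth = depth
        self.bases = {}   # class label -> (class mean, d x D basis)

    def fit(self, X, y):
        for c in np.unique(y):
            Xc = X[y == c]
            mean = Xc.mean(axis=0)
            # Rows of Vt are the principal directions of the centered class data.
            _, _, Vt = np.linalg.svd(Xc - mean, full_matrices=False)
            self.bases[c] = (mean, Vt[: self.depth])
        return self

    def predict(self, X):
        labels = sorted(self.bases)
        scores = []
        for c in labels:
            mean, B = self.bases[c]
            Z = (X - mean) @ B.T                  # coordinates in the subspace
            scores.append((Z ** 2).sum(axis=1))   # projected energy per sample
        return np.asarray(labels)[np.argmax(scores, axis=0)]

# Usage with 512-d face-recognition embeddings (shapes are assumptions):
# clf = SubspaceClassifier(depth=40).fit(train_embeddings, train_labels)
# pred = clf.predict(test_embeddings)
```
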
Citations: 0
Operational theater generation by a descriptive language
Pub Date : 2023-07-01 DOI: 10.24132/csrn.3301.19
Matis Ghiotto, B. Desbenoit, Romain Raffin
3D landscape generation is an interdisciplinary field that requires expertise in both computer graphics and geographic information systems (GIS). It is a complex and time-consuming process. In this paper, we present a new approach to simplify the 3D environment generation process by creating a go-between data model containing a list of available source data and the steps to use them. To feed the data model, we introduce a formal language that describes the process's sequence. We propose an adapted format, designed to be both human-readable and machine-readable, allowing for easy creation and modification of the scenery. We demonstrate the utility of our approach by implementing a prototype system that generates 3D landscapes with a use case fit for multipurpose simulation. Our system takes a description as input and outputs a complete 3D environment, including terrain and feature elements such as buildings created by the chosen geometrical processes. Experiments show that our approach reduces the time and effort required to generate a 3D environment, making it accessible to a wider range of users without extensive knowledge of GIS. In conclusion, our custom language and implementation provide a simple and effective solution to the complexity of 3D terrain generation, making it a valuable tool for users in the area.
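
The paper's language itself is not reproduced in the abstract, but the general pattern (a human- and machine-readable description listing source data and ordered generation steps, executed by a small interpreter) can be sketched as follows. The JSON schema, step names and routines are invented for illustration and are not the authors' format.

```python
# Hypothetical scene description and a tiny interpreter that dispatches each
# step to a generation routine. The schema (keys, step names) is invented to
# illustrate the "description in, 3D environment out" pattern; it is not the
# paper's language.
import json

DESCRIPTION = """
{
  "sources": {"dem": "srtm_tile_42.tif", "buildings": "osm_extract.geojson"},
  "steps": [
    {"op": "load_terrain", "source": "dem", "resolution_m": 30},
    {"op": "extrude_buildings", "source": "buildings", "default_height_m": 8},
    {"op": "export", "format": "gltf", "path": "theater.gltf"}
  ]
}
"""

def load_terrain(scene, source, resolution_m):
    scene["terrain"] = f"heightfield from {source} at {resolution_m} m"

def extrude_buildings(scene, source, default_height_m):
    scene["buildings"] = f"extruded footprints from {source} ({default_height_m} m)"

def export(scene, format, path):
    print(f"writing {format} scene to {path}: {scene}")

OPS = {"load_terrain": load_terrain,
       "extrude_buildings": extrude_buildings,
       "export": export}

def build(description_text):
    desc = json.loads(description_text)
    scene = {"sources": desc["sources"]}
    for step in desc["steps"]:
        op = step.pop("op")
        OPS[op](scene, **step)   # each step names the routine and its inputs
    return scene

build(DESCRIPTION)
```
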
Citations: 0
Polychromatism of all light waves: new approach to the analysis of the physical and perceptive color aspects
Pub Date : 2023-07-01 DOI: 10.24132/csrn.3301.43
Justyna Niewiadomska-Kaplar
Research on light vision mechanisms in biosystems and on the mechanisms behind deficits in color discrimination [1] reveals that not only white light is polychromatic but all light waves are. The spectrum of white light is composed of aggregations of only 4 monochromatic waves: magenta UV 384 nm, cyan 432 nm, yellow 576 nm and magenta IR 768 nm, grouped into 5 bi-chromatic waves: cinnabar red (magenta IR + yellow), green (yellow + cyan), indigo (cyan + magenta UV), and two semi-bright bi-chromatic waves - porphyry IR (a semi-infrared wave composed of the magenta IR 768 nm wave and the colorless infrared wave at 864 nm) and porphyry UV (a semi-ultraviolet wave composed of the magenta UV 384 nm wave and the colorless ultraviolet wave at 288 nm). Light waves composed in this way create light sensations through the mechanism of additive synthesis. The method allows a new approach to interpreting the composition of light waves, the phenomenon of color decomposition, and the additive synthesis that constitutes the principle of color production in computers. The new, elaborate models of color physics also form the basis for interpreting the mechanisms of color vision.
Citations: 0
Fast Incremental Image Reconstruction with CNN-enhanced Poisson Interpolation
Pub Date : 2023-07-01 DOI: 10.24132/csrn.3301.24
Blaž Erzar, Žiga Lesar, Matija Marolt
We present a novel method for image reconstruction from scattered data based on multigrid relaxation of the Poisson equation and convolutional neural networks (CNN). We first formulate the image reconstruction problem as a Poisson equation with irregular boundary conditions, then propose a fast multigrid method for solving such an equation, and finally enhance the reconstructed image with a CNN to recover the details. The method works incrementally so that additional points can be added, and the number of points does not affect the reconstruction speed. Furthermore, the multigrid and CNN techniques ensure that the output image resolution has only a minor impact on the reconstruction speed. We evaluated the method on the CompCars dataset, where it achieves up to 40% error reduction compared to a reconstruction-only approach and 9% compared to a CNN-only approach.
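
A stripped-down version of the relaxation stage can be written in a few lines: treat the scattered samples as Dirichlet constraints, relax the interior with Jacobi sweeps, and solve coarse-to-fine to mimic the multigrid acceleration. The sketch below uses plain Laplace (harmonic) interpolation and omits the CNN enhancement, so it illustrates the idea rather than the authors' solver.

```python
# Scattered-data interpolation by coarse-to-fine relaxation (simplified
# illustration, not the paper's Poisson/CNN pipeline).
import numpy as np

def relax(img, mask, vals, iters=60):
    """Jacobi sweeps: unknown pixels take the mean of their 4 neighbours,
    known pixels stay clamped to their sample values (Dirichlet data)."""
    out = np.where(mask, vals, img)
    for _ in range(iters):
        p = np.pad(out, 1, mode="edge")
        neigh = (p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:]) / 4.0
        out = np.where(mask, vals, neigh)
    return out

def reconstruct(h, w, ys, xs, samples, levels=4):
    """Coarse-to-fine (cascadic) reconstruction from scattered samples."""
    img = None
    for lvl in range(levels - 1, -1, -1):
        s = 2 ** lvl
        hh, ww = -(-h // s), -(-w // s)          # ceiling division
        vals = np.zeros((hh, ww)); cnt = np.zeros((hh, ww))
        np.add.at(vals, (ys // s, xs // s), samples)
        np.add.at(cnt, (ys // s, xs // s), 1)
        mask = cnt > 0
        vals[mask] /= cnt[mask]                  # average samples per cell
        if img is None:                          # coarsest level: flat start
            img = np.full((hh, ww), samples.mean())
        else:                                    # upsample previous solution
            img = np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)[:hh, :ww]
        img = relax(img, mask, vals)
    return img

# Tiny usage example with synthetic scattered samples of a smooth function.
rng = np.random.default_rng(0)
h, w, n = 128, 128, 800
ys, xs = rng.integers(0, h, n), rng.integers(0, w, n)
samples = np.sin(ys / 10.0) + np.cos(xs / 13.0)
image = reconstruct(h, w, ys, xs, samples)
```
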
Citations: 0
Detection of Dangerous Situations Near Pedestrian Crossings using In-Car Camera
Pub Date : 2023-07-01 DOI: 10.24132/csrn.3301.41
M. Kubanek, Lukasz Karbowiak, J. Bobulski
The paper presents a method for detecting dangerous situations near pedestrian crossings using an in-car camera system. The approach utilizes deep learning-based object detection to detect pedestrians and vehicles and analyzes their behavior to identify potential hazards. The system incorporates vehicle sensor data for enhanced accuracy. Evaluation results show high accuracy in detecting dangerous situations. The proposed system can potentially enhance pedestrian and driver safety in urban transportation.
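
Since the abstract does not specify the detector or the hazard rule, the following is only a generic sketch: an off-the-shelf COCO detector from torchvision finds persons and cars, and a frame is flagged when a confident person detection inside an assumed crossing region comes within a pixel-distance threshold of a car. The region, thresholds and class choices are all assumptions.

```python
# Rough sketch only (the paper's model and hazard rules are not given in the
# abstract): detect persons and cars in a frame with a pretrained COCO
# detector, then flag frames where a confident person and car detection are
# closer than a pixel threshold inside a fixed crossing region.
import torch
import torchvision
from torchvision.transforms.functional import to_tensor

PERSON, CAR = 1, 3                       # COCO category ids
CROSSING = (300, 400, 900, 700)          # x1, y1, x2, y2 - assumed region

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

def box_center(b):
    return ((b[0] + b[2]) / 2, (b[1] + b[3]) / 2)

def inside(pt, region):
    return region[0] <= pt[0] <= region[2] and region[1] <= pt[1] <= region[3]

@torch.no_grad()
def dangerous(frame_rgb, score_thr=0.6, dist_thr=120.0):
    """frame_rgb: HxWx3 uint8 image. Returns True if a confident person
    detection near the crossing is closer than dist_thr pixels to a car."""
    out = model([to_tensor(frame_rgb)])[0]
    keep = out["scores"] >= score_thr
    boxes, labels = out["boxes"][keep], out["labels"][keep]
    people = [box_center(b) for b, l in zip(boxes, labels) if l == PERSON]
    cars = [box_center(b) for b, l in zip(boxes, labels) if l == CAR]
    for p in people:
        if not inside(p, CROSSING):
            continue
        for c in cars:
            if ((p[0] - c[0]) ** 2 + (p[1] - c[1]) ** 2) ** 0.5 < dist_thr:
                return True
    return False
```
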
Citations: 0