Pub Date: 2021-09-01 | DOI: 10.1142/s1793351x21400079
M. Karimi, Edwin Babaians, Martin Oelsch, E. Steinbach
Robust attitude and heading estimation with respect to a known reference is an essential component of many indoor robotic applications. Affordable Attitude and Heading Reference Systems (AHRS) typically use low-cost solid-state MEMS-based sensors. On such systems, heading precision is degraded by drift in the gyroscope measurements and by distortions of the sensed Earth magnetic field. This paper presents a novel approach for robust indoor heading estimation based on skewed redundant inertial and magnetic sensors. Recurrent Neural Network (RNN)-based fusion performs robust heading estimation and compensates for external magnetic field anomalies. We use our previously described correlation-based filter model to preprocess the data and to strengthen perturbation mitigation. Our experimental results show that the proposed scheme successfully mitigates the anomalies in the saturated indoor environment and achieves a Root-Mean-Square Error of less than [Formula: see text] in long-term use.
Title: Deep Fusion of a Skewed Redundant Magnetic and Inertial Sensor for Heading State Estimation in a Saturated Indoor Environment (Int. J. Semantic Comput.)
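The paper's fusion is RNN-based and its correlation-based filter is not detailed in this abstract. As a minimal illustration of the underlying problem (gyro integration is accurate short-term but drifts, while the magnetometer is drift-free but distortion-prone), here is a complementary-filter sketch; the function name, gain, and sample rate are illustrative assumptions, not the paper's method:

```python
import math

def fuse_heading(gyro_rates, mag_headings, dt=0.01, alpha=0.98):
    """Blend gyro integration (short-term accurate, drifts over time) with
    magnetometer headings (drift-free, but prone to field distortions)."""
    heading = mag_headings[0]
    for rate, mag in zip(gyro_rates, mag_headings):
        gyro_pred = heading + rate * dt
        # wrap the magnetometer innovation to [-pi, pi) before blending
        err = (mag - gyro_pred + math.pi) % (2 * math.pi) - math.pi
        heading = gyro_pred + (1 - alpha) * err
    return heading % (2 * math.pi)

# A gyro with a constant +0.05 rad/s bias while the true heading stays at 1.0 rad:
est = fuse_heading([0.05] * 2000, [1.0] * 2000, dt=0.01)
print(round(est, 2))
```

Pure gyro integration would accumulate about 1 rad of drift over this 20 s window; the blended estimate instead settles at a small constant offset near the true heading.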
Pub Date: 2021-09-01 | DOI: 10.1142/s1793351x21500069
Mira Kim, Hsiang-Shun Shih, P. Sheu
Influence analysis is one of the most important research topics in social network analysis. In particular, a growing number of researchers and advertisers are interested in the area of influence maximization (IM). The concept of influence among people or organizations has been the core basis for making business decisions as well as for performing everyday social activities. In this research, we begin by extending a new influence diffusion model, the information diffusion model (IDM), with various constraints. We incorporate colors and additional node constraints. By adding colors and constraints for different types of nodes in a graph, we are able to answer complex queries on multi-dimensional graphs such as 'find at most two of the most important genes that are related to lung disease and heart disease'. More specifically, we discuss the following variations of IM-IDM: Colorblind IM-IDM, Colored IM-IDM, and Colored IM-IDM with constraints. We also present experimental results that demonstrate the effectiveness of our model and algorithms.
Title: Diffusion-Based Influence Maximization in GOLAP (Int. J. Semantic Comput.)
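The IM-IDM variants are not specified in this abstract; as a generic sketch of the influence-maximization setting they extend, classic greedy seed selection over a toy deterministic diffusion looks like this (the graph, names, and diffusion rule are invented for illustration):

```python
def spread(graph, seeds):
    """Deterministic diffusion: every out-neighbor of an active node activates."""
    active, frontier = set(seeds), set(seeds)
    while frontier:
        frontier = {v for u in frontier for v in graph.get(u, [])} - active
        active |= frontier
    return active

def greedy_im(graph, k):
    """Pick k seeds, each maximizing the marginal spread (classic greedy IM)."""
    seeds = []
    for _ in range(k):
        best = max((n for n in graph if n not in seeds),
                   key=lambda n: len(spread(graph, seeds + [n])))
        seeds.append(best)
    return seeds

toy = {"a": ["b"], "b": ["c"], "c": [], "d": ["e"], "e": []}
print(greedy_im(toy, 2))  # ['a', 'd']: the two chains' sources cover all 5 nodes
```

Colored IM-IDM would additionally restrict which nodes may be seeded or counted, e.g. by filtering candidates on a node-type attribute.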
Pub Date: 2021-09-01 | DOI: 10.1142/s1793351x21400080
A. J. Golkowski, M. Handte, Peter Roch, P. Marrón
For many application areas such as autonomous navigation, the ability to accurately perceive the environment is essential. For this purpose, a wide variety of well-researched sensor systems is available that can be used to detect obstacles or navigation targets. Stereo cameras have emerged as a very versatile sensing technology in this regard due to their low hardware cost and high fidelity. Consequently, much work has been done to integrate them into mobile robots. However, the existing literature focuses on presenting the concepts and algorithms used to implement the desired robot functions on top of a given camera setup. As a result, the rationale for and impact of choosing this camera setup are usually neither discussed nor described. Thus, when designing the stereo camera system for a mobile robot, there is little general guidance beyond isolated setups that worked for a specific robot. To close this gap, this paper studies the impact of the physical setup of a stereo camera system in indoor environments. To do this, we present the results of an experimental analysis in which we use a given software setup to estimate the distance to an object while systematically changing the camera setup. In doing so, we vary the three main parameters of the physical camera setup, namely the angle and distance between the cameras and the field of view, as well as a softer parameter, the resolution. Based on the results, we derive several guidelines on how to choose these parameters for an application.
Title: An Experimental Analysis of the Effects of Different Hardware Setups on Stereo Camera Systems (Int. J. Semantic Comput.)
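The analysis is experimental, but the first-order effect of baseline and focal length on distance estimation can be seen from the standard pinhole stereo relation Z = f*B/d (assuming parallel camera axes; the numbers below are illustrative, not from the paper):

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Pinhole stereo: Z = f * B / d, with f in pixels, baseline B in meters."""
    return focal_px * baseline_m / disparity_px

def depth_error_per_pixel(focal_px, baseline_m, disparity_px):
    """Depth change caused by a one-pixel disparity error; grows roughly with Z^2,
    so far objects suffer far more from matching noise than near ones."""
    z = depth_from_disparity(focal_px, baseline_m, disparity_px)
    return abs(z - depth_from_disparity(focal_px, baseline_m, disparity_px + 1))

# A wider baseline yields more disparity for the same scene point, i.e. finer
# depth resolution, at the cost of a larger minimum overlap distance.
print(depth_from_disparity(700, 0.12, 35))  # 2.4 (meters)
```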
Pub Date: 2021-08-05 | DOI: 10.1142/S1793351X21500045
Zanyar Zohourianshahzadi, J. Kalita
Inspired by how the human brain employs more neural pathways when focus on a subject increases, we introduce a novel twin cascaded attention model that outperforms a state-of-the-art image captioning model originally implemented with a single channel of attention for the visual grounding task. Visual grounding ensures that the words in the caption sentence are grounded in particular regions of the input image. After a deep learning model is trained on the visual grounding task, it employs the learned patterns of visual grounding and of object order in caption sentences when generating captions. We report the results of our experiments on three image captioning tasks on the COCO dataset, using standard image captioning metrics to show the improvements achieved by our model over the previous image captioning model. The results gathered from our experiments suggest that employing more parallel attention pathways in a deep neural network leads to higher performance. Our implementation of NTT is publicly available at: https://github.com/zanyarz/NeuralTwinsTalk.
Title: Neural Twins Talk and Alternative Calculations (Int. J. Semantic Comput.)
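The abstract does not detail the twin cascaded attention architecture; the following sketch only illustrates the general idea of running two parallel attention pathways over the same features and combining their outputs. The per-pathway query projections and the averaging combination are assumptions made for illustration:

```python
import math

def softmax(xs):
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def attend(query, keys, values):
    """One attention pathway: dot-product scores -> softmax -> weighted sum."""
    scores = [sum(q * k for q, k in zip(query, key)) for key in keys]
    weights = softmax(scores)
    dim = len(values[0])
    return [sum(w * v[i] for w, v in zip(weights, values)) for i in range(dim)]

def twin_attend(query, keys, values, proj_a, proj_b):
    """Two parallel pathways with different query projections, averaged."""
    qa = [q * p for q, p in zip(query, proj_a)]
    qb = [q * p for q, p in zip(query, proj_b)]
    out_a = attend(qa, keys, values)
    out_b = attend(qb, keys, values)
    return [(a + b) / 2 for a, b in zip(out_a, out_b)]

feats = [[1.0, 0.0], [0.0, 1.0]]  # two toy image-region features
ctx = twin_attend([1.0, 0.0], feats, feats, [1.0, 1.0], [0.5, 2.0])
print([round(c, 3) for c in ctx])
```

Because each pathway's softmax weights sum to one, their average is still a convex combination of the region features, so the context vector stays in the features' span.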
Pub Date: 2021-06-01 | DOI: 10.1142/S1793351X21500057
Renjith P. Ravindran, K. N. Murthy
Word embeddings have recently become a vital part of many Natural Language Processing (NLP) systems. Word embeddings are a suite of techniques that represent the words of a language as vectors in an n-dimensional real space, and they have been shown to encode a significant amount of syntactic and semantic information. When used in NLP systems, these representations have resulted in improved performance across a wide range of NLP tasks. However, it is not clear how syntactic properties interact with the more widely studied semantic properties of words, or which factors in the modeling formulation encourage embedding spaces to pick up more of the syntactic than the semantic behavior of words. We investigate several aspects of word embedding spaces and modeling assumptions that maximize syntactic coherence, that is, the degree to which words with similar syntactic properties form distinct neighborhoods in the embedding space. We do so in order to understand which of the existing models maximize syntactic coherence, making them a more reliable source for extracting syntactic category (POS) information. Our analysis shows that the syntactic coherence of S-CODE is superior to that of more popular and more recent embedding techniques such as Word2vec, fastText, GloVe and LexVec, when measured under compatible parameter settings. Our investigation also gives deeper insights into the geometry of the embedding space with respect to syntactic coherence and how this is influenced by context size, word frequency, and the dimensionality of the embedding space.
Title: Syntactic Coherence in Word Embedding Spaces (Int. J. Semantic Comput.)
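Syntactic coherence, the degree to which syntactically similar words form neighborhoods, can be approximated by a nearest-neighbor purity measure. This toy sketch makes the idea concrete; the vectors, tags, and k=1 setting are invented, and the paper's exact measure may differ:

```python
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def syntactic_coherence(vectors, pos_tags, k=1):
    """Mean fraction of each word's k nearest neighbors sharing its POS tag."""
    words = list(vectors)
    total = 0.0
    for w in words:
        neighbors = sorted((x for x in words if x != w),
                           key=lambda x: cosine(vectors[w], vectors[x]),
                           reverse=True)
        total += sum(pos_tags[x] == pos_tags[w] for x in neighbors[:k]) / k
    return total / len(words)

toy = {"run": [1.0, 0.1], "walk": [0.9, 0.2], "cat": [0.1, 1.0], "dog": [0.2, 0.9]}
tags = {"run": "VERB", "walk": "VERB", "cat": "NOUN", "dog": "NOUN"}
print(syntactic_coherence(toy, tags))  # 1.0: every nearest neighbor shares its POS
```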
Pub Date: 2021-06-01 | DOI: 10.1142/S1793351X2140002X
Olav A. Norgård Rongved, S. Hicks, Vajira Lasantha Thambawita, H. Stensland, E. Zouganeli, Dag Johansen, Cise Midoglu, M. Riegler, P. Halvorsen
Developing systems for the automatic detection of events in video is a task that has gained attention in many areas, including sports. In particular, event detection for soccer videos has been studied widely in the literature. However, there are still a number of shortcomings in the state of the art, such as high latency, which makes it challenging to operate at the live edge. In this paper, we present an algorithm to detect events in soccer videos in real time, using 3D convolutional neural networks. We test our algorithm on three different datasets from SoccerNet, the Swedish Allsvenskan, and the Norwegian Eliteserien. Overall, the results show that we can detect events with high recall, low latency, and accurate time estimation. The trade-off is a slightly lower precision compared to the current state of the art, which has higher latency and performs better when a less accurate time estimation is acceptable. In addition to the presented algorithm, we perform an extensive ablation study on how the different parts of the training pipeline affect the final results.
Title: Using 3D Convolutional Neural Networks for Real-time Detection of Soccer Events (Int. J. Semantic Comput.)
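The detection model itself is a 3D CNN; the real-time aspect hinges on turning a stream of per-window confidence scores into timestamped events with low latency. A toy post-processing sketch (the threshold, window length, and frame rate are illustrative assumptions, not the paper's values):

```python
def detect_events(scores, fps=25, window=16, threshold=0.8):
    """Report a (time, confidence) event at every local score maximum above
    threshold; one score per non-overlapping window of `window` frames."""
    events = []
    for i, s in enumerate(scores):
        left = scores[i - 1] if i > 0 else float("-inf")
        right = scores[i + 1] if i + 1 < len(scores) else float("-inf")
        if s >= threshold and s >= left and s > right:
            events.append(((i * window) / fps, s))
    return events

clip_scores = [0.1, 0.2, 0.9, 0.4, 0.1, 0.85, 0.3]
print(detect_events(clip_scores))  # two events, at 1.28 s and 3.2 s
```

Because each window is scored as soon as its frames arrive, the detection delay is bounded by the window length plus one inference pass, which is what makes live-edge operation plausible.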
Pub Date: 2021-05-13 | DOI: 10.1142/S1793351X21400018
Aysegül Özkaya Eren, M. Sert
Generating audio captions is a new research area that combines audio and natural language processing to create meaningful textual descriptions for audio clips. Previous studies addressing this problem mostly use encoder-decoder models without considering semantic information. To fill this gap, we present a novel encoder-decoder architecture using bi-directional Gated Recurrent Units (BiGRU) with audio and semantic embeddings. We extract semantic embeddings by obtaining the subjects and verbs of the audio clip captions, and combine these embeddings with audio embeddings to feed the BiGRU-based encoder-decoder model. To enable semantic embeddings for the test audio clips, we introduce a Multilayer Perceptron classifier that predicts the semantic embeddings of those clips. We also present exhaustive experiments to show the efficiency of different features and datasets for our proposed model on the audio captioning task. To extract audio features, we use log Mel energy features, VGGish embeddings, and pretrained audio neural network (PANN) embeddings. Extensive experiments on two audio captioning datasets, Clotho and AudioCaps, show that our proposed model outperforms state-of-the-art audio captioning models across different evaluation metrics, and that using the semantic information improves the captioning performance.
Title: Audio Captioning with Composition of Acoustic and Semantic Information (Int. J. Semantic Comput.)
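As a toy sketch of the composition step: a semantic embedding (here a bag-of-words indicator over a subject/verb vocabulary, a crude stand-in for the paper's subject/verb extraction) is concatenated with the audio embedding before encoding. All names, the vocabulary, and the values are illustrative assumptions:

```python
def semantic_embedding(caption, vocab):
    """Toy semantic embedding: indicator vector over a subject/verb vocabulary."""
    words = set(caption.lower().split())
    return [1.0 if w in words else 0.0 for w in vocab]

def fuse(audio_emb, sem_emb):
    """Concatenate audio and semantic embeddings into one encoder input."""
    return audio_emb + sem_emb

vocab = ["dog", "barks", "rain", "falls"]
sem = semantic_embedding("a dog barks loudly", vocab)
print(fuse([0.3, 0.7], sem))  # [0.3, 0.7, 1.0, 1.0, 0.0, 0.0]
```

At test time no caption exists, which is why the paper trains a classifier to predict the semantic part from the audio alone.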
Pub Date: 2021-04-25 | DOI: 10.1142/S1793351X21300016
Maria Krommyda, Verena Kantere
As the Internet of Things (IoT) systems gain in popularity, an increasing number of Big Data sources are available. Ranging from small sensor networks designed for household use to large fully auto...
Title: Spatial Data Management in IoT Systems: Solutions and Evaluation (Int. J. Semantic Comput.)
Pub Date: 2020-12-01 | DOI: 10.1142/S1793351X20500075
Vinesh Ravuri, Projna Paromita, Karel Mundnich, Amrutha Nadarajan, Brandon M. Booth, Shrikanth S. Narayanan, Theodora Chaspari
Hospital workers often experience burnout due to demanding job responsibilities and long work hours. Data yielded by ambulatory monitoring, combined with machine learning algorithms, can afford us a better understanding of the naturalistic processes that contribute to this burnout. Motivated by the challenges of accurately tracking well-being in real life, prior work has investigated group-specific machine learning (GS-ML) models that are tailored to groups of participants. We examine a novel GS-ML approach for estimating well-being from real-life multimodal measures collected in situ from hospital workers. In contrast to the majority of prior work, which uses pre-determined clustering criteria, we propose an iterative procedure that refines participant clusters based on the representations learned by the GS-ML models. Motivated by prior work that highlights the differential impact of job demands on well-being, we further explore the participant clusters in terms of demographic and job-related attributes. Results indicate that the GS-ML models mostly outperform general models in estimating well-being constructs. The GS-ML models further exhibit different degrees of predictive power for each participant cluster, as distinguished by age, education, occupational role, and number of supervisees. The observed discrepancies in the GS-ML model decisions are discussed in association with algorithmic bias.
Title: Investigating Group-Specific Models of Hospital Workers' Well-Being: Implications for Algorithmic Bias (Int. J. Semantic Comput.)
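The iterative refinement idea can be sketched with a toy stand-in: fit a per-cluster model, reassign each participant to the cluster whose model fits their data best, and repeat until stable. Here the "model" is just a mean predictor and the data are scalar sequences; the actual GS-ML models and multimodal features are not specified in this abstract:

```python
def refine_clusters(data, k, iters=10):
    """Iteratively refine participant clusters by model fit rather than by a
    pre-determined criterion (toy analogue of representation-driven grouping)."""
    assign = {p: i % k for i, p in enumerate(data)}  # initial round-robin split
    for _ in range(iters):
        # fit a per-cluster "model": the mean of its members' observations
        models = {}
        for c in range(k):
            members = [v for p, vals in data.items() if assign[p] == c for v in vals]
            models[c] = sum(members) / len(members) if members else 0.0
        # reassign each participant to the best-fitting cluster (lowest error)
        new = {p: min(range(k), key=lambda c: sum((v - models[c]) ** 2 for v in vals))
               for p, vals in data.items()}
        if new == assign:
            break
        assign = new
    return assign

participants = {"p1": [1.0, 1.2], "p2": [0.9, 1.1], "p3": [5.0, 5.2], "p4": [4.9, 5.1]}
print(refine_clusters(participants, 2))
```

Starting from an arbitrary split, the procedure converges to grouping the two low-scoring and the two high-scoring participants together, illustrating how cluster membership can emerge from model fit alone.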