Pub Date: 2022-06-01 | DOI: 10.26599/IJCS.2022.9100015
Jin Diao;Zhangbing Zhou;Guangli Shi
Named entity recognition (NER) is a fundamental technique in natural language processing that provides preconditions for tasks such as natural language question reasoning, text matching, and semantic text similarity. Compared with English, the challenge of Chinese NER lies in the noise caused by the complex meanings, diverse structures, and ambiguous semantic boundaries of the Chinese language itself. At the same time, compared with specific domains, open-domain entity types are more complex and changeable, and the number of entities is considerably larger, which makes open-domain Chinese NER more difficult. Moreover, existing open-domain NER methods have low recognition rates. This paper therefore proposes a method based on the bidirectional long short-term memory conditional random field (BiLSTM-CRF) model that leverages integrated learning to improve the efficiency of Chinese NER. Compared with single models, including CRF, BiLSTM-CRF, and gated recurrent unit-CRF (GRU-CRF), the proposed method significantly improves the accuracy of open-domain Chinese NER.
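The integration step can be illustrated with a per-token majority vote over the BIO tag sequences produced by several labelers such as CRF, BiLSTM-CRF, and GRU-CRF. This is a minimal sketch of ensemble-style combination, not the paper's exact scheme; the tags and the `ensemble_tags` helper are hypothetical.

```python
from collections import Counter

def ensemble_tags(predictions):
    """Majority-vote BIO tags from several sequence labelers.

    predictions: one tag sequence per model, all of equal length.
    Ties fall back to the first model's tag (Counter preserves
    insertion order for equal counts).
    """
    voted = []
    for token_tags in zip(*predictions):
        voted.append(Counter(token_tags).most_common(1)[0][0])
    return voted

# Three hypothetical models tagging the same four-token sentence.
preds = [
    ["B-PER", "I-PER", "O", "B-LOC"],
    ["B-PER", "O",     "O", "B-LOC"],
    ["B-PER", "I-PER", "O", "O"],
]
print(ensemble_tags(preds))  # ['B-PER', 'I-PER', 'O', 'B-LOC']
```

In practice the vote could be weighted by each model's validation accuracy; the unweighted version is shown only to make the combination step concrete.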
Title: Leveraging Integrated Learning for Open-Domain Chinese Named Entity Recognition
International Journal of Crowd Science, vol. 6, no. 2, pp. 74-79
PDF: https://ieeexplore.ieee.org/iel7/9736195/9815841/09815847.pdf
Dynamic heterogeneous graphs comprise different types of events with temporal labels. In many real-world scenarios, the temporal order of different event types implies causal relationships between them. However, existing methods for modeling dynamic heterogeneous graphs neglect these underlying causal relationships: the prediction of a new event can be misled by historical events of irrelevant types, degrading performance. To tackle this issue, this paper first explicitly defines the causality of event types through a heterogeneous causality graph, exploiting such causality from the perspective of the graph structure. Second, it proposes the event type causality based continuous-time heterogeneous attention network (ECHN) to model dynamic heterogeneous graphs. During prediction, ECHN aggregates features according to the strength of the causal relationships between event types, exploiting causality from the perspective of the modeling algorithm. Using event type causality weakens the negative effect of irrelevant events. Experimental results demonstrate that ECHN outperforms state-of-the-art methods in the link prediction task. The authors believe this is the first study to explicitly model the causality of event types in dynamic heterogeneous graphs.
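The core aggregation idea, attention scores scaled by the causal strength between event types, can be sketched as follows. This is an illustrative simplification assuming dot-product scoring; ECHN's actual architecture, and the event types and weights used below, are not taken from the paper.

```python
import math

def causal_attention(query, target_type, neighbors, strength):
    """Aggregate historical-event features with attention scores scaled
    by the causal strength between each event's type and the target type.
    A sketch of the idea only, not ECHN's actual scoring function.

    query:       feature vector of the event being predicted
    target_type: type of the event being predicted
    neighbors:   list of (event_type, feature_vector) historical events
    strength:    dict (event_type, target_type) -> causal weight in [0, 1]
    """
    scores = []
    for ev_type, feat in neighbors:
        dot = sum(q * f for q, f in zip(query, feat))
        # Down-weight events whose type has little causal influence
        # on the target event's type.
        scores.append(dot * strength.get((ev_type, target_type), 0.0))
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    weights = [e / total for e in exps]
    dim = len(query)
    return [sum(w * feat[i] for w, (_, feat) in zip(weights, neighbors))
            for i in range(dim)]

# A hypothetical "purchase" prediction attends more to causally linked
# "click" events than to unrelated "scroll" events.
out = causal_attention(
    query=[1.0, 0.0], target_type="purchase",
    neighbors=[("click", [1.0, 0.0]), ("scroll", [0.0, 1.0])],
    strength={("click", "purchase"): 1.0},
)
```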
Title: Link Prediction in Continuous-Time Dynamic Heterogeneous Graphs with Causality of Event Types
Authors: Jiarun Zhu; Xingyu Wu; Muhammad Usman; Xiangyu Wang; Huanhuan Chen
Pub Date: 2022-06-01 | DOI: 10.26599/IJCS.2022.9100013
International Journal of Crowd Science, vol. 6, no. 2, pp. 80-91
PDF: https://ieeexplore.ieee.org/iel7/9736195/9815841/09815844.pdf
Pub Date: 2022-06-01 | DOI: 10.26599/IJCS.2022.9100018
Title: Call for Papers: Special Issue on Cyber-Physical-Social Systems and Smart Environments
International Journal of Crowd Science, vol. 6, no. 2, p. 110
PDF: https://ieeexplore.ieee.org/iel7/9736195/9815841/09815578.pdf
Pub Date: 2022-04-15 | DOI: 10.26599/IJCS.2022.9100004
Aoqiang Xing;Hongbo Sun
Crowd phenomena are widespread in human society, but they cannot be observed easily in the real world, so research on them cannot follow traditional approaches. Simulation is one of the most effective means of supporting studies of crowd phenomena. As model-based scientific activities, crowd science simulations require extra effort on member models, which reflect individuals characterized by heterogeneity, large scale, and multiplicate connections. Unfortunately, collecting enormous numbers of real members is difficult in practice, so generating large numbers of crowd-equivalent member models from real members is an urgent problem. This paper proposes a crowd equivalence-based massive member model generation method, which proceeds in the following steps. The first step is member metamodel definition, which provides patterns and member model data elements for member model definition. The second step is member model definition, which defines the types, quantities, and attributes of member models for member model generation. The third step is crowd network definition and generation, which defines and generates an equivalent large-scale crowd network according to the numerical characteristics of existing networks. On the basis of the structure of this large-scale crowd network, connections among member models are established and regarded as social relationships among real members. The last step is member model generation, which, based on the previous steps, generates the types, attributes, and connections of the member models. According to the quality-time model of crowd intelligence level measurement, a crowd-oriented equivalence for crowd networks is derived on the basis of numerical characteristics. A massive member model generation tool is developed according to the proposed method. The member models generated by this tool possess multiplicate connections and attributes, which satisfy the requirements of crowd science simulations well. The member model generation method based on crowd equivalence is verified through simulations, and the tool supports crowd science simulations and crowd science studies.
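The generation pipeline above can be sketched end to end: create typed members with sampled attributes, then wire them into a random network whose average degree matches a target value, standing in for the numerical characteristics of a real crowd network. All type names, attribute ranges, and parameters below are hypothetical.

```python
import random

def generate_members(type_specs, avg_degree, seed=0):
    """Generate typed member models, then connect them into a random
    crowd network with a target average degree.

    type_specs: dict member_type -> (count, {attr_name: (lo, hi)})
    avg_degree: target average number of connections per member
    """
    rng = random.Random(seed)
    members = []
    for mtype, (count, attrs) in type_specs.items():
        for _ in range(count):
            members.append({
                "type": mtype,
                "attrs": {k: rng.uniform(lo, hi) for k, (lo, hi) in attrs.items()},
                "links": set(),   # indices of connected members
            })
    n = len(members)
    target_edges = int(avg_degree * n / 2)
    # Add random distinct pairs until the edge count hits the target.
    while sum(len(m["links"]) for m in members) // 2 < target_edges:
        a, b = rng.sample(range(n), 2)
        members[a]["links"].add(b)
        members[b]["links"].add(a)
    return members

members = generate_members(
    {"worker": (8, {"skill": (0.0, 1.0)}),
     "firm":   (2, {"budget": (10.0, 100.0)})},
    avg_degree=2,
)
```

A real implementation would match richer numerical characteristics (degree distribution, clustering) rather than only the average degree; this sketch just makes the "define, then wire" order of the steps concrete.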
Title: A Crowd Equivalence-Based Massive Member Model Generation Method for Crowd Science Simulations
International Journal of Crowd Science, vol. 6, no. 1, pp. 23-33
PDF: https://ieeexplore.ieee.org/iel7/9736195/9745472/09758665.pdf
Pub Date: 2022-04-15 | DOI: 10.26599/IJCS.2022.9100003
Michael Safo Oduro;Han Yu;Hong Huang
A phenomenon that increasingly interests investors, companies, and entrepreneurs involved in crowdfunding, particularly on the Kickstarter website, is identifying the metrics that make such campaigns markedly successful. This study seeks to gauge the importance of key predictive variables or features through statistical analysis, to identify model-based machine learning methods that predict a campaign's success, and to compare the selected machine learning algorithms. To achieve these objectives and maximize insight into the dataset, feature engineering was performed first. Machine learning models, including logistic regression (LR), linear discriminant analysis (LDA), quadratic discriminant analysis (QDA), support vector machines (SVMs), and random forest analysis with bagging and boosting, were then trained and compared via cross-validation approaches in terms of their test error rates, F1 score, accuracy, precision, and recall. Across the three cross-validation approaches, the test error rates and the other classification metrics identified bagging and gradient boosting as the more robust methods for predicting the success of Kickstarter projects. The major research objectives of this paper have been achieved by assessing the performance of key statistical learning methods, which guides the choice of learning method and gives a measure of the quality of the ultimately chosen model. Bayesian semi-parametric approaches are left for future research; these methods allow an effectively unbounded number of parameters to capture the underlying distributions of even more complex data.
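The comparison methodology, training several classifiers and scoring them fold by fold, can be sketched with a generic k-fold harness. The nearest-centroid model and the synthetic single-feature campaign data below are stand-ins for the paper's models and Kickstarter features, chosen only to keep the sketch self-contained.

```python
import random

def k_fold_scores(X, y, fit, predict, k=5, seed=0):
    """Generic k-fold cross-validation returning per-fold accuracy."""
    idx = list(range(len(X)))
    random.Random(seed).shuffle(idx)
    folds = [idx[i::k] for i in range(k)]
    scores = []
    for i in range(k):
        held_out = set(folds[i])
        train = [j for j in idx if j not in held_out]
        model = fit([X[j] for j in train], [y[j] for j in train])
        correct = sum(predict(model, X[j]) == y[j] for j in folds[i])
        scores.append(correct / len(folds[i]))
    return scores

# Synthetic campaigns: a single feature (say, log funding goal);
# low-goal campaigns succeed (label 1).
rng = random.Random(1)
X = [[rng.uniform(0, 10)] for _ in range(100)]
y = [1 if x[0] < 5 else 0 for x in X]

def fit_centroid(Xs, ys):
    # Per-class mean of the feature: a nearest-centroid classifier.
    return {c: sum(x[0] for x, t in zip(Xs, ys) if t == c) /
               sum(1 for t in ys if t == c)
            for c in (0, 1)}

def predict_centroid(means, x):
    return min(means, key=lambda c: abs(x[0] - means[c]))

scores = k_fold_scores(X, y, fit_centroid, predict_centroid)
print(scores)
```

Swapping in LR, LDA, QDA, or boosted trees is a matter of changing the `fit`/`predict` pair; the harness and the per-fold accuracy comparison stay the same.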
Title: Predicting the Entrepreneurial Success of Crowdfunding Campaigns Using Model-Based Machine Learning Methods
International Journal of Crowd Science, vol. 6, no. 1, pp. 7-16
PDF: https://ieeexplore.ieee.org/iel7/9736195/9745472/09758663.pdf
Pub Date: 2022-04-15 | DOI: 10.26599/IJCS.2022.9100005
Jianran Liu;Wen Ji
Agents always exist in an interactive environment, and over time their intelligence is affected by it. Agents need to coordinate their interactions with different environmental factors to achieve the optimal intelligence state. We consider an agent's interaction with the environment as an action-reward process: an agent balances the rewards it receives by acting on various environmental factors. Drawing on the concept of agent-environment interaction in reinforcement learning, this paper calculates the optimal mode of interaction between an agent and its environment, aiming to help agents maintain the best intelligence state for as long as possible. For a concrete interaction scenario, this paper takes food collocation as an example: the evolution process between an agent and the environment is constructed, and the advantages and disadvantages of the evolutionary environment are reflected in the evolution status of the agent. Our practical case study using dietary combinations demonstrates the feasibility of this interactive balance.
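The action-reward process can be sketched as an epsilon-greedy bandit: the agent repeatedly chooses among environmental factors (here, hypothetical food options), observes noisy rewards, and its estimates converge toward the best interaction mode. The reward values are illustrative, not from the paper.

```python
import random

def run_bandit(rewards, steps=2000, eps=0.1, seed=0):
    """Epsilon-greedy action-reward loop.

    rewards: true mean reward of each action (environmental factor).
    Returns the agent's reward estimates and how often each action
    was chosen.
    """
    rng = random.Random(seed)
    est = [0.0] * len(rewards)
    counts = [0] * len(rewards)
    for _ in range(steps):
        if rng.random() < eps:
            a = rng.randrange(len(rewards))          # explore
        else:
            a = max(range(len(rewards)), key=lambda i: est[i])  # exploit
        r = rewards[a] + rng.gauss(0, 0.1)           # noisy reward
        counts[a] += 1
        est[a] += (r - est[a]) / counts[a]           # incremental mean
    return est, counts

# Three hypothetical food options with different mean rewards.
est, counts = run_bandit([0.2, 0.8, 0.5])
```

After enough steps the agent concentrates its choices on the option with the highest mean reward, which is the "optimal interaction mode" in this toy setting.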
Title: Evolution of Agents in the Case of a Balanced Diet
International Journal of Crowd Science, vol. 6, no. 1, pp. 1-6
PDF: https://ieeexplore.ieee.org/iel7/9736195/9745472/09758662.pdf
The Internet of Things (IoT) is in a stage of rapid development, with hundreds of millions of sensing nodes and intelligent terminals undertaking the tasks of sensing and transmitting data. Data collection is the key to realizing data analysis and intelligent IoT applications, but the life cycle of an IoT network is limited by the energy of its nodes, and a complex computing model imposes a serious or even unbearable burden on them. In this study, we use a data prediction method to exploit the temporal correlation of the data and adjust the spatial sampling rate on the basis of the spatial correlation of the sensory data to further reduce the data volume. Specifically, an improved and optimized derivative-based prediction (DBP) method increases the time interval between sensed readings to further reduce energy consumption. Based on the spatial characteristics of the sensed data, substituting the data of similar nodes reduces the sampling rate, and a probabilistic wake-up strategy is adopted to adapt to the spatial correlation of the sensed data. On the basis of node priority, an optimized greedy algorithm is proposed to select appropriate dominating nodes, eliminating redundant nodes and improving network energy utilization. Experiments show that our scheme reduces network energy consumption while ensuring data reliability.
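The temporal side of the scheme, transmitting only when reality drifts from a model that both the node and the sink share, can be sketched with a linear dual-prediction filter. This is a generic dead-band sketch, not the paper's tuned DBP variant; the slope, start value, and tolerance below are assumptions.

```python
def dead_band_suppression(readings, model_slope, model_start, tol=0.5):
    """Dual-prediction sketch: sensor and sink share a linear model
    value(t) = base_v + model_slope * (t - base_t). The node transmits
    only when the real reading drifts more than `tol` from the shared
    prediction; both sides then rebase the model on the sent value.
    Returns the (timestep, value) pairs actually transmitted."""
    sent = []
    base_t, base_v = 0, model_start
    for t, v in enumerate(readings):
        predicted = base_v + model_slope * (t - base_t)
        if abs(v - predicted) > tol:
            sent.append((t, v))        # transmit and rebase the model
            base_t, base_v = t, v
    return sent

# A smooth trend with one abrupt jump: only the jump is transmitted.
sent = dead_band_suppression([0, 1, 2, 3, 10, 11, 12],
                             model_slope=1.0, model_start=0.0)
print(sent)  # [(4, 10)]
```

Increasing `tol` lengthens the interval between transmissions at the cost of reconstruction accuracy, which is exactly the energy/reliability trade-off the study tunes.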
Title: Energy-Efficient Sensory Data Collection Based on Spatiotemporal Correlation in IoT Networks
Authors: Jine Tang; Shuang Wu; Lingxiao Wei; Weijing Liu; Taishan Qin; Zhangbing Zhou; Junhua Gu
Pub Date: 2022-04-15 | DOI: 10.26599/IJCS.2022.9100007
International Journal of Crowd Science, vol. 6, no. 1, pp. 34-43
PDF: https://ieeexplore.ieee.org/iel7/9736195/9745472/09758664.pdf
Pub Date: 2022-04-15 | DOI: 10.26599/IJCS.2022.9100001
Tao Jiang;Fengjian Kang;Wei Guo;Wei He;Lei Liu;Xudong Lu;Yonghui Xu;Lizhen Cui
In recent years, neural networks have been widely used in natural language processing, especially in sentence similarity modeling. Most previous studies focus on the current sentence alone, ignoring the commonsense knowledge related to it. Commonsense knowledge can be remarkably useful for understanding the semantics of sentences. This paper proposes CK-Encoder, which effectively acquires commonsense knowledge to improve the performance of sentence similarity modeling. Specifically, the model first generates a commonsense knowledge graph for the input sentence and then encodes this graph with a graph convolutional network. In addition, CKER, a framework combining CK-Encoder with a sentence encoder, is introduced. Experiments on two sentence similarity tasks demonstrate that CK-Encoder effectively acquires commonsense knowledge and improves a model's ability to understand sentences.
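The graph-convolution step over the commonsense graph can be sketched in plain Python: each node's features are averaged with its neighbors' (self-loop included), then passed through a linear map and ReLU. The adjacency matrix, features, and weights below are toy values, not CK-Encoder's learned parameters.

```python
def gcn_layer(adj, feats, weight):
    """One graph-convolution step: mean-aggregate each node's
    neighborhood features (self-loop included), then apply a linear
    map followed by ReLU."""
    n = len(adj)
    out = []
    for i in range(n):
        neigh = [j for j in range(n) if adj[i][j] or i == j]
        # Mean of the neighborhood's feature vectors.
        agg = [sum(feats[j][k] for j in neigh) / len(neigh)
               for k in range(len(feats[0]))]
        # Linear transform, then ReLU.
        row = [max(0.0, sum(agg[k] * weight[k][c] for k in range(len(agg))))
               for c in range(len(weight[0]))]
        out.append(row)
    return out

# Two connected concept nodes with one-hot features; identity weights.
out = gcn_layer(adj=[[0, 1], [1, 0]],
                feats=[[1.0, 0.0], [0.0, 1.0]],
                weight=[[1.0, 0.0], [0.0, 1.0]])
print(out)  # [[0.5, 0.5], [0.5, 0.5]]
```

After one layer each concept node already mixes in its neighbor's features, which is how graph convolution lets commonsense relations inform the sentence representation.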
Title: CK-Encoder: Enhanced Language Representation for Sentence Similarity
International Journal of Crowd Science, vol. 6, no. 1, pp. 17-22
PDF: https://ieeexplore.ieee.org/iel7/9736195/9745472/09758661.pdf
Pub Date: 2022-04-15 | DOI: 10.26599/IJCS.2022.9100008
Title: Message from Editors-in-Chief
International Journal of Crowd Science, vol. 6, no. 1, p. i
PDF: https://ieeexplore.ieee.org/iel7/9736195/9745472/09758681.pdf
Pub Date: 2022-04-15 | DOI: 10.26599/IJCS.2022.9100006
Yourong Wang;Lei Zhao
Although there is a consensus that the housing market is deeply affected by credit policies, little research is available on the impact of credit policies on housing market liquidity. Moreover, housing market liquidity is not scientifically quantified or monitored in China. To help the government monitor housing market fluctuations intelligently and make efficient policies in time, the dynamic relationship between credit policy and housing liquidity needs to be fully understood. Using second-hand housing transaction data in Beijing from 2013 to 2018, this paper applies a time-varying parameter vector autoregressive (TVP-VAR) model and reveals several important results. First, loosening credit policy improves housing market liquidity, whereas credit tightening reduces it. Second, both the direction and the duration of these impacts are time-varying and sensitive to market conditions: when the housing market is downward, the liquidity-improving effect of a loose credit policy is weak, and when the market is upward, liquidity is more sensitive to monetary policy. Finally, housing market confidence serves as an intermediary between credit policy and housing market liquidity. These results are of great significance for improving the intelligence and efficiency of the government in monitoring and regulating the housing market. Several policy recommendations are discussed to regulate the housing market and stabilize market expectations.
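A full TVP-VAR will not fit in a few lines, but its key ingredient, regression coefficients that drift with market conditions, can be illustrated in one dimension by recursive least squares with a forgetting factor. This is a sketch under simplifying assumptions, not the paper's estimator; the series and parameters are synthetic.

```python
import random

def tvp_ar1(series, forget=0.95):
    """Track a time-varying AR(1) coefficient with recursive least
    squares and a forgetting factor, so recent observations dominate.
    Returns the coefficient's path over time."""
    phi, p = 0.0, 1.0          # coefficient estimate and scaled variance
    path = []
    for t in range(1, len(series)):
        x, y = series[t - 1], series[t]
        p = p / forget                      # inflate variance (forget old data)
        k = p * x / (1.0 + p * x * x)       # Kalman-style gain
        phi += k * (y - phi * x)            # update toward the new observation
        p -= k * x * p                      # shrink variance after the update
        path.append(phi)
    return path

# Synthetic AR(1) data with true coefficient 0.8 plus noise.
rng = random.Random(0)
series = [1.0]
for _ in range(300):
    series.append(0.8 * series[-1] + rng.gauss(0, 0.5))
path = tvp_ar1(series)
print(path[-1])
```

The forgetting factor plays the role that stochastic coefficient drift plays in a TVP-VAR: the estimate stays responsive to regime changes, such as a shift from a downward to an upward housing market, instead of averaging over the whole sample.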
Title: Credit Policy and Housing Market Liquidity: An Empirical Study in Beijing Based on the TVP-VAR Model
International Journal of Crowd Science, vol. 6, no. 1, pp. 44-52
PDF: https://ieeexplore.ieee.org/iel7/9736195/9745472/09758666.pdf