INTRODUCTION: This study was designed to give a comprehensively updated bibliometric summary of employee performance when faced with cognitive dissonance, in light of recent imperatives and expanding scholarly interest. OBJECTIVE: This research provides a detailed view of references, cited sources, and countries through network maps, a relevant-sources map with table, a relevant-authors map with table, a network map of keywords frequently used by authors, a citations-per-year graph, and a keyword co-occurrence network map. METHOD: The Scopus database was used to retrieve and analyse a large data set. The Biblioshiny software was used for the analysis, and the results were verified with VOSviewer. The methodological procedure centres on a mixed (combined) set of techniques. 400 Scopus-indexed articles and 5 conference papers were included in this bibliometric review, prepared with the help of the Biblioshiny and VOSviewer software. RESULT: The results reveal that employee performance depends on employees' beliefs and attitudes, two factors that fall under cognitive dissonance theory (CDT). CONCLUSION: It is also fruitful for organizations to study CDT for organizational development and employee performance growth.
Channi Sachdeva, Veer P. Gangwar. "A Bibliometric Analysis of Employee Performance in the Context of Cognitive Dissonance Using Visualizing Networks." ICST Transactions on Scalable Information Systems, 2023-12-20. DOI: 10.4108/eetsis.4655
Stock markets are frequently among the most volatile places to invest. The decision to buy or sell stocks is heavily influenced by statistical analysis of prior stock performance and external circumstances, and all these variables are employed to maximize profitability. Stock value prediction is a hard task that necessitates a solid computational foundation to compute longer-term share values, and because stock prices are interconnected within the market, they are all the harder to forecast. Financial data, a category that includes historical time-series data, provides a great deal of information and is frequently employed in data-analysis tasks. In this context, this research provides a unique optimisation strategy for stock price prediction based on a Multi-Layer Sequential Long Short Term Memory (MLS LSTM) model and the Adam optimizer. To make reliable predictions, the MLS LSTM algorithm uses normalised time-series data separated into time steps to assess the relationship between past and future values. Furthermore, it solves the vanishing gradient problem that plagues basic recurrent neural networks.
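The data preparation this abstract describes, min-max normalising the price series and slicing it into fixed-length time steps, can be sketched as follows; `make_windows` and the toy price series are illustrative names and values, not taken from the paper:

```python
import numpy as np

def make_windows(prices, n_steps):
    """Min-max normalise a price series and slice it into
    (past window, next value) pairs, as an LSTM-style model
    consumes past time steps to predict the following value.
    Assumes the series is not constant (max > min)."""
    prices = np.asarray(prices, dtype=float)
    lo, hi = prices.min(), prices.max()
    scaled = (prices - lo) / (hi - lo)          # values now in [0, 1]
    X, y = [], []
    for i in range(len(scaled) - n_steps):
        X.append(scaled[i:i + n_steps])         # past n_steps values
        y.append(scaled[i + n_steps])           # value to predict
    return np.array(X), np.array(y)

# Toy series of six closing prices, three-step windows
X, y = make_windows([10, 11, 12, 13, 14, 15], n_steps=3)
# X has shape (3, 3): three windows of three past (scaled) values each
```

Each row of `X` together with the matching entry of `y` is one supervised training example; the same windowing applies regardless of how many LSTM layers are stacked on top.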
Jyoti Prakash Behura, Sagar Dhanaraj Pande, Janjhyman Venkata Naga Ramesh. "Stock Price Prediction using Multi-Layered Sequential LSTM." ICST Transactions on Scalable Information Systems, 2023-12-13. DOI: 10.4108/eetsis.4585
Zhèng-Hóng Lin, Yuzhong Zhou, Yuliang Yang, Jiahao Shi, Jie Lin
In recent years, big AI models have demonstrated remarkable performance in various artificial intelligence (AI) tasks. However, their widespread use has introduced significant challenges in terms of model transmission and training. This paper addresses these challenges by proposing a solution that involves the compression and transmission of large models using deep learning techniques, thereby ensuring the efficiency of model training. To achieve this objective, we leverage deep convolutional networks to design a novel approach for compressing and transmitting large models. Specifically, deep convolutional networks are employed for model compression, providing an effective means to reduce the size of large models without compromising their representational capacity. The proposed framework also includes carefully devised encoding and decoding strategies to guarantee the restoration of model integrity after transmission. Furthermore, a tailored loss function is designed for model training, facilitating the optimization of both the transmission and training performance within the system. Through experimental evaluation, we demonstrate the efficacy of the proposed approach in addressing the challenges associated with large model transmission and training. The results showcase the successful compression and subsequent accurate reconstruction of large models, while maintaining their performance across various AI tasks. This work contributes to the ongoing research in enhancing the practicality and efficiency of deploying large models in real-world AI applications.
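The compress-transmit-restore pipeline can be illustrated with a deliberately simple stand-in: the paper compresses models with deep convolutional networks, but a truncated SVD of a single weight matrix shows the same encode/decode round trip and how the compression ratio arises; all function names here are illustrative:

```python
import numpy as np

def compress_weights(W, rank):
    """Encoder side: truncated-SVD compression of one weight matrix,
    keeping only the top-`rank` singular components for transmission."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    return U[:, :rank], s[:rank], Vt[:rank, :]

def restore_weights(U, s, Vt):
    """Decoder side: rebuild the (approximate) weight matrix."""
    return (U * s) @ Vt

rng = np.random.default_rng(0)
W = rng.standard_normal((256, 256))          # one layer's weights
U, s, Vt = compress_weights(W, rank=32)
W_hat = restore_weights(U, s, Vt)

sent = U.size + s.size + Vt.size             # floats actually transmitted
ratio = W.size / sent                        # roughly 4x fewer values on the wire
```

A learned convolutional encoder, as in the paper, plays the same role as `compress_weights` but can exploit structure that a fixed linear factorisation cannot.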
"Compression and Transmission of Big AI Model Based on Deep Learning." ICST Transactions on Scalable Information Systems, 2023-12-11. DOI: 10.4108/eetsis.3803
Henry Villarreal-Torres, Julio C. Angeles-Morales, Jenny E. Cano-Mejía, Carmen Mejía-Murillo, Gumercindo Flores-Reyes, Oscar Cruz-Cruz, Manuel Urcia-Quispe, Manuel Palomino-Márquez, Miguel Solar-Jara, Reyna Escobedo-Zarzosa
Artificial intelligence is reshaping our society through data-science-driven process innovation, which makes it possible to identify the academic and sociodemographic factors that contribute to late payments among university students, to detect the students concerned, and to make timely decisions when implementing prevention and correction programs, thereby avoiding student dropout caused by this economic problem and ensuring educational success in a meaningful and focused way. In this sense, the research aims to compare the performance metrics of classification models for late payments among students of a private university by using AutoML algorithms from various existing platforms and solutions, such as AutoKeras, AutoGluon, HyperOPT, MLJar, and H2O, on a data set of 8,495 records, together with data-balancing techniques. The implementation and execution of the various algorithms yielded similar metrics, based on the parameters and optimization functions chosen automatically by each tool, with the best performance obtained by the H2O platform through the Stacked Ensemble algorithm: accuracy = 0.778, F1 = 0.870, recall = 0.904, and precision = 0.839.
The research can be extended to other contexts or areas of knowledge due to the growing interest in automated machine learning, providing researchers with a valuable tool in data science without the need for deep knowledge.
"Comparative analysis of performance of AutoML algorithms: Classification model of payment arrears in students of a private university." ICST Transactions on Scalable Information Systems, 2023-12-06. DOI: 10.4108/eetsis.4550
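As a quick sanity check on the reported Stacked Ensemble metrics: F1 is the harmonic mean of precision and recall, so the three reported values should be (and are) mutually consistent:

```python
def f1_score(precision, recall):
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# Reported H2O Stacked Ensemble metrics from the study
precision, recall = 0.839, 0.904
f1 = f1_score(precision, recall)   # ~0.870, matching the reported F1
```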
In this paper, we categorise and critically evaluate the current modelling and analysis approaches and procedures created by researchers and scientists for inventory management systems across different sectors such as healthcare, supply chain, and routing problems. Furthermore, we discuss recent trends and advancements in inventory management systems that deal with shortages. Based on our literature review, we propose a comprehensive research structure that is appropriate in the current environment and helpful in guiding future study directions.
Ankit Dubey, R. Kumar. "Recent Trends and Advancements in Inventory Management." ICST Transactions on Scalable Information Systems, 2023-12-05. DOI: 10.4108/eetsis.4543
Yuliang Yang, Zhèng-Hóng Lin, Yuzhong Zhou, Jiahao Shi, Jie Lin
With the rapid advancement of artificial intelligence and wireless communication technologies, the abundance of textual information has grown significantly, accompanied by a plethora of multidimensional metrics such as innovation, application prospects, key technologies, and expected outcomes. Extracting valuable insights from these multifaceted indicators and establishing an effective composite evaluation weighting framework poses a pivotal challenge in text information processing. In response, we propose in this paper a novel approach to textual information processing that leverages multi-dimensional indicator weights (MDIWs). Our method involves extracting semantic information from text and inputting it into an LSTM-based textual information processor (TIP) to generate MDIWs. These MDIWs are then processed to create a judgment matrix, followed by eigenvalue decomposition and normalization, capturing intricate semantic relationships. Our framework enhances the comprehension of multi-dimensional aspects within textual data, offering potential benefits in various applications such as sentiment analysis, information retrieval, and content summarization. Experimental results underscore the effectiveness of our approach in refining and utilizing MDIWs for improved understanding and decision-making. This work contributes to the enhancement of text information processing by offering a structured approach to address the complexity of multidimensional metric evaluation, thus enabling more accurate and informed decision-making in various domains.
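The weighting step described above, building a judgment matrix and extracting weights by eigenvalue decomposition and normalization, resembles the classic AHP priority-vector computation, which can be sketched as follows; the 3x3 matrix is a hypothetical example over three of the named indicators, not data from the paper:

```python
import numpy as np

def indicator_weights(judgment):
    """Derive indicator weights from a pairwise judgment matrix via its
    principal eigenvector, normalised to sum to 1 (AHP-style)."""
    vals, vecs = np.linalg.eig(np.asarray(judgment, dtype=float))
    principal = np.real(vecs[:, np.argmax(np.real(vals))])
    principal = np.abs(principal)        # eigenvector sign is arbitrary
    return principal / principal.sum()

# Hypothetical judgment matrix over three indicators:
# (innovation, application prospects, key technologies)
J = [[1,     3,   5],
     [1 / 3, 1,   2],
     [1 / 5, 1 / 2, 1]]
w = indicator_weights(J)   # weights sum to 1; largest weight on innovation
```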
"Textual Information Processing Based on Multi-Dimensional Indicator Weights." ICST Transactions on Scalable Information Systems, 2023-12-04. DOI: 10.4108/eetsis.3805
The aim of this study is to explore the policy considerations that should be taken into account regarding the practical introduction of affirmative action policies in the field of education. For this purpose, we analysed the 100 most relevant YouTube videos produced between 2015 and 2023 using network analysis, the aim being to utilize the material they provide on affirmative action and reflect it in future education policies. As a result, nine key policy considerations for introducing affirmative action policies in the field of education were derived.
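A minimal sketch of the kind of network analysis described, counting keyword co-occurrences across documents to build a weighted network; the video titles below are hypothetical stand-ins for the actual corpus of 100 videos:

```python
from collections import Counter
from itertools import combinations

def cooccurrence_edges(documents):
    """Count how often each pair of terms appears in the same document,
    yielding weighted edges for a keyword co-occurrence network.
    Pairs are stored in sorted order so (a, b) and (b, a) merge."""
    edges = Counter()
    for doc in documents:
        terms = sorted(set(doc.lower().split()))
        edges.update(combinations(terms, 2))
    return edges

# Hypothetical video titles standing in for the analysed videos
titles = [
    "affirmative action in education policy",
    "education policy debate on affirmative action",
    "college admissions and affirmative action",
]
edges = cooccurrence_edges(titles)
# ("action", "affirmative") co-occurs in all three titles
```

In practice one would also strip stop words and feed the weighted edge list into a graph library for centrality and community detection.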
Ji-Hyun Jang. "Topic Modelling Analysis to Explore Policy Considerations Regarding the Practical Introduction of Affirmative Action in the Field of Education." ICST Transactions on Scalable Information Systems, 2023-11-14. DOI: 10.4108/eetsis.4386
Fusheng Wei, Jiajia Huang, Jingming Zhao, Huakun Que
This study delves into Internet of Things (IoT) networks wherein a transmitting source communicates information to a designated recipient. Signal attenuation challenges the direct transmission of information from the source to the recipient. To surmount this obstacle, we investigate IoT network communication facilitated by multi-hop relays, whereby multiple relays collaboratively convey data from the source to the recipient across intermediate stages. For the considered IoT networks augmented by multi-hop relays, we assess system performance by analyzing the probability of transmission outage, deriving an analytical expression for the occurrence of IoT network outage. Additionally, we gauge the system's effectiveness by examining the attainable transmission rate, for which an analytical expression is furnished to assess the IoT data rate. The empirical results, along with the analytical findings, are subsequently presented to validate the formulated expressions in the context of IoT networks empowered by multi-hop relays. Notably, the utilization of multi-hop relaying emerges as an efficacious strategy for substantially expanding the coverage scope of IoT networks.
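Under the textbook assumption of independent Rayleigh-faded hops with decode-and-forward relaying (the paper derives its own analytical expression, which need not match this model), the end-to-end outage probability of a multi-hop link can be sketched as:

```python
import math

def hop_outage(avg_snr, threshold_snr):
    """Per-hop outage under Rayleigh fading: the instantaneous SNR is
    exponentially distributed, so P(SNR < threshold) = 1 - exp(-threshold/avg)."""
    return 1.0 - math.exp(-threshold_snr / avg_snr)

def end_to_end_outage(avg_snrs, threshold_snr):
    """A decode-and-forward multi-hop link is in outage if ANY hop fails,
    so the link succeeds only when every hop succeeds."""
    p_ok = 1.0
    for g in avg_snrs:
        p_ok *= 1.0 - hop_outage(g, threshold_snr)
    return 1.0 - p_ok

# Hypothetical 3-hop link: each hop has a linear average SNR of 10
# (10 dB), and outage is declared below an instantaneous SNR of 1 (0 dB)
p = end_to_end_outage([10.0, 10.0, 10.0], threshold_snr=1.0)
```

The product form makes the coverage trade-off explicit: each added hop shortens the per-hop distance (raising each average SNR) but multiplies in another chance of failure.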
"Outage Probability Analysis of Multi-hop Relay Aided IoT Networks." ICST Transactions on Scalable Information Systems, 2023-11-14. DOI: 10.4108/eetsis.3780
Tuti Dharmawati, Loso Judijanto, Endang Fatmawati, Abdul Rokhim, Faria Ruhana, Moh Erkamim
INTRODUCTION: Quantum computing technology has become a center of attention in various scientific disciplines, including economic analysis. The adoption of quantum computing in economic analysis offers tremendous potential to improve the processing of complex economic data and provide deep insights. However, the use of quantum technology in the context of distributed information systems also raises several challenges, including data security and the limitations of quantum technology. OBJECTIVE: This research aims to investigate the implications of adopting quantum computing in economic analysis, with a focus on distributed information systems. METHODS: This research was carried out using a descriptive qualitative approach, with data derived from the results of relevant research and previous studies. The collected data will be processed and analyzed to gain a deeper understanding of the adoption of quantum computing in economic analysis in distributed information systems. RESULTS: This research then finds that the adoption of quantum computing in economic analysis has the potential to increase efficiency, accuracy, and depth of economic insight. However, limitations of current quantum technologies, including quantum errors, limited scale of operations, and data security issues, limit their applications. In the long term, research and development will be key to overcoming these obstacles and maximizing the potential of this technology in economic analysis. CONCLUSION: The long-term implications include increased economic competitiveness and significant changes in the way economic decision-making is carried out, assuming that ethical and regulatory issues are also carefully considered.
"Adoption of Quantum Computing in Economic Analysis: Potential and Challenges in Distributed Information Systems." ICST Transactions on Scalable Information Systems, 2023-11-13. DOI: 10.4108/eetsis.4373
Syamsuddin, Saharuddin, Yusrizal, Tuti Dharmawati, Endang Fatmawati
INTRODUCTION: Global supply chain management is a critical component in the increasingly complex and connected world of modern business. In the era of globalization, companies face pressure to increase efficiency, transparency, and security in their supply chains. Blockchain technology has emerged as a potential solution to address some of these challenges by enabling more decentralized, transparent, and efficient supply chain management. However, the use of this technology in global supply chain management also raises several issues related to regulation, law, and collaboration with third parties. OBJECTIVE: This research then aims to explore the potential of blockchain technology in global supply chain management and understand the regulatory framework needed to support the implementation of this technology. METHOD: This research was carried out using a qualitative approach. The data used in this research comes from various research results and previous studies that are relevant to the discussion. RESULTS: The results of this research then found that the use of blockchain technology in global supply chain management promises to increase transparency, efficiency, and security. Smart contracts enable the automation of business processes, reducing costs and increasing visibility of operations. Collaboration with third parties is an important strategy in increasing supply chain efficiency. Regulation, data security, and international harmonization remain challenges. CONCLUSION: Defining the legal status of smart contracts and protecting data is key. Effective collaboration with third parties requires good communication and a mature strategy. With a deep understanding of blockchain technology and proper regulation, companies can maximize their benefits to create an efficient, transparent, and reliable supply chain.
"Utilizing Blockchain Technology in Global Supply Chain Management: An Exploration of Scalable Information Systems." ICST Transactions on Scalable Information Systems, 2023-11-13. DOI: 10.4108/eetsis.4374