Comparative Analysis of Different Trust Metrics of User-User Trust-Based Recommendation System
Falguni Roy, M. Hasan
Pub Date: 2022-10-02 | DOI: 10.7494/csci.2022.23.3.4227
Information overload is one of the biggest challenges facing any website today, especially e-commerce websites; it arises from the rapid growth of information on the World Wide Web combined with easy internet access. Collaborative-filtering-based recommender systems are among the most effective tools for countering information overload, as they filter relevant information for users according to their interests. However, existing systems face significant limitations such as data sparsity, low accuracy, cold-start, and malicious attacks. To alleviate these issues, trust relationships, either between users or between items, are incorporated into the system; such a system is known as a trust-based recommender system (TBRS). From the user perspective, a TBRS exploits the reliability between users to generate more accurate and trusted recommendations. This study presents a comparative analysis of different trust metrics in the context of how a TBRS defines trust, covering twenty-four trust metrics in terms of methodology, trust properties and measurement, validation approaches, and the datasets used in experiments.
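As a concrete illustration of the user-user trust idea surveyed here, one simple family of metrics derives trust from rating agreement on co-rated items. The function and its agreement threshold below are illustrative assumptions for exposition, not a metric taken from the paper:

```python
def user_user_trust(ratings_a, ratings_b, delta=1.0):
    """Trust of user A in user B: the fraction of co-rated items on which
    their ratings differ by at most `delta`. Returns 0.0 when the users
    share no rated items (a cold-start pair)."""
    common = set(ratings_a) & set(ratings_b)
    if not common:
        return 0.0
    agreements = sum(1 for item in common
                     if abs(ratings_a[item] - ratings_b[item]) <= delta)
    return agreements / len(common)

alice = {"i1": 5, "i2": 3, "i3": 1}
bob   = {"i1": 4, "i2": 1, "i4": 5}
print(user_user_trust(alice, bob))  # 1 of 2 co-rated items agree -> 0.5
```

A recommender would then weight neighbours' ratings by such trust scores instead of (or in addition to) plain rating similarity.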
Performance measurement with high performance computer of HW-GA anomaly detection algorithms for streaming data
Jakup Fondaj, Zirije Hasani, Samedin Krrabaj
Pub Date: 2022-10-02 | DOI: 10.7494/csci.2022.23.3.4389
Anomaly detection matters in every sector, including health, education, and business: knowing what is going wrong with data or a digital system helps people make decisions, and detecting anomalies in real-time Big Data is crucial. Because handling real-time data requires speed, the aim of this paper is to measure the performance of our previously proposed HW-GA algorithm against other anomaly detection algorithms. We analyze several factors that may affect HW-GA's performance, such as the visualization of results, the amount of data, and the performance of the computers used; algorithm execution time and CPU usage are the parameters measured in the evaluation. A further aim is to test HW-GA on large amounts of data, to verify whether it finds the possible anomalies, and to compare the results with other algorithms. The experiments are carried out in R on several datasets: real Covid-19 and e-dnevnik data, and three benchmarks from the Numenta datasets. The real data have no known anomalies, while in the benchmark data the anomalies are known, so we can evaluate how the algorithms behave in both situations. The novelty of this paper is that performance is tested on three different computers, one of which is a high-performance computer.
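HW-GA pairs Holt-Winters forecasting with a genetic algorithm for parameter search (per the authors' earlier work). The sketch below shows only the underlying idea of flagging points whose forecast residual is extreme, using plain exponential smoothing; the `alpha` and `k` values are illustrative assumptions, not the paper's tuned parameters:

```python
def smoothing_anomalies(series, alpha=0.5, k=2.0):
    """Flag indices whose one-step-ahead smoothing residual deviates from
    the mean residual by more than k standard deviations."""
    level = series[0]
    residuals = []
    for x in series[1:]:
        residuals.append(x - level)           # one-step-ahead forecast error
        level = alpha * x + (1 - alpha) * level
    mean = sum(residuals) / len(residuals)
    std = (sum((r - mean) ** 2 for r in residuals) / len(residuals)) ** 0.5
    return [i + 1 for i, r in enumerate(residuals)
            if abs(r - mean) > k * std]

data = [10, 11, 10, 12, 11, 10, 50, 11, 10, 12]
print(smoothing_anomalies(data))  # the spike at index 6 is flagged: [6]
```

A streaming deployment would maintain `level` and the residual statistics incrementally instead of recomputing them per batch.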
The Impact of n-stage Latent Dirichlet Allocation on Analysis of Headline Classification
Zekeriya Anil Guven, B. Diri, Tolgahan Cakaloglu
Pub Date: 2022-10-02 | DOI: 10.7494/csci.2022.23.3.4622
Data analysis becomes difficult as the amount of data grows. In particular, extracting meaningful insights from vast amounts of data, and grouping data by shared features without human intervention, requires advanced methodologies. Topic modeling methods address this problem in text analysis for downstream tasks such as sentiment analysis, spam detection, and news classification. In this research, we benchmark several classifiers, namely Random Forest, AdaBoost, Naive Bayes, and Logistic Regression, using the classical LDA and n-stage LDA topic modeling methods for feature extraction in headline classification. We run our experiments on publicly available Turkish and English datasets with 3 and 5 classes. We demonstrate that n-stage LDA as a feature extractor obtains state-of-the-art performance for any downstream classifier; Random Forest was the most successful algorithm on both datasets.
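The classical-LDA half of this pipeline can be sketched with scikit-learn. The toy headlines, topic count, and classifier settings below are illustrative assumptions; the paper's n-stage variant additionally reruns LDA on a vocabulary pruned by topic-word weights:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.ensemble import RandomForestClassifier

headlines = ["team wins league final", "striker scores late goal",
             "markets rally on earnings", "central bank raises rates"]
labels = ["sport", "sport", "economy", "economy"]

# Bag-of-words counts -> per-headline topic proportions via LDA.
counts = CountVectorizer().fit_transform(headlines)
lda = LatentDirichletAllocation(n_components=2, random_state=0)
topic_features = lda.fit_transform(counts)  # shape: (4 docs, 2 topics)

# The topic proportions serve as the feature vector for the classifier.
clf = RandomForestClassifier(random_state=0).fit(topic_features, labels)
print(clf.predict(topic_features))
```

With such a tiny corpus the topics are not meaningful; the point is only the shape of the pipeline: text to topic proportions to classifier.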
A DHCR_SmartNet: A smart Devanagari Handwritten Character Recognition using Level-wised CNN Architecture DHCR_SmartNet
S. Deore
Pub Date: 2022-10-02 | DOI: 10.7494/csci.2022.23.3.4487
Handwritten script recognition is a vital application of machine learning. Applications such as automatic number-plate detection, PIN-code detection, and the management of historical documents are drawing increasing attention to it. English is the most widely spoken language, so there has been a great deal of research into identifying its script by machine; Devanagari is a popular script used by a huge number of people in the Indian subcontinent. This paper presents a level-wise, efficient transfer-learning approach on the VGG16 convolutional neural network (CNN) for identifying isolated handwritten Devanagari characters. It also introduces a new, publicly accessible dataset of Devanagari characters comprising 5,800 samples covering 12 vowels, 36 consonants, and 10 digits. Initially, a simple CNN is implemented and trained on this new small dataset; in the next stage, a transfer-learning approach is applied to the VGG16 model; and in the last stage, a fine-tuned, efficient VGG16 model is implemented. The fine-tuned model achieves training and testing accuracies of 98.16% and 96.47%, respectively.
Data Structures for Categorical Path Counting Queries
Meng He, Serikzhan Kazi
Pub Date: 2022-10-01 | DOI: 10.4230/LIPIcs.CPM.2021.15 | pp. 97-111
Consider an ordinal tree T on n nodes, each of which is assigned a category from an alphabet [σ] = {1, 2, ..., σ}. We preprocess T to support categorical path counting queries, which ask for the number of distinct categories occurring on the path in T between two query nodes x and y. For this problem, we propose a linear-space data structure with query time O(√n · lg lg σ / lg w), where w = Ω(lg n) is the word size in the word-RAM. As shown in our proof, under the assumption that matrix multiplication cannot be solved in faster-than-cubic time with only combinatorial methods, our result is optimal save for polylogarithmic speed-ups. For a trade-off parameter 1 ≤ t ≤ n, we propose an O(n + n²/t²)-word data structure with O(t · lg lg σ / lg w) query time. We also consider c-approximate categorical path counting queries, which must return an approximation to the number of distinct categories occurring on the query path by counting each such category at least once and at most c times; we describe a linear-space data structure that supports 2-approximate queries in O(lg n / lg lg n) time. Next, we generalize categorical path counting queries to weighted trees. Here, a query specifies two nodes x, y and an orthogonal range Q; the answer to the resulting categorical path range counting query is the number of distinct categories occurring on the path from x to y when only the nodes with weights falling inside Q are considered. We propose an O(n lg lg n + (n/t)⁴)-word data structure with O(t lg lg n) query time, or an O(n + (n/t)⁴)-word data structure with O(t lg n) query time. For an appropriate choice of the trade-off parameter t, this implies a linear-space data structure with O(n^{3/4} lg n) query time. We then extend the approach to trees weighted with vectors from [n]^d, where d ≥ 2 is a constant integer. We present a data structure with O(n lg^{d−1+ε} n + (n/t)^{2d+2}) words of space and O(t · lg^{d−1} n · (lg lg n)^{d−2}) query time; for an O(n · polylog n)-space solution, one thus has O(n^{(2d+1)/(2d+2)} · polylog n) query time. The inherent difficulty revealed by the lower bound we proved motivated us to consider data structures based on sketching. In unweighted trees, we propose a sketching data structure for the approximate categorical path counting problem, which asks for a (1 ± ε)-approximation (i.e., within a 1 ± ε factor of the true answer) of the number of distinct categories on the given path, with probability 1 − δ, where 0 < ε, δ < 1 are constants. The data structure occupies O(n + (n/t) lg n) words of space for a query time of O(t lg n). For trees weighted with d-dimensional weight vectors (d ≥ 1), we propose a data structure with O((n + (n/t) lg n) lg n) words of space and O(t lg^{d+1} n) query time. All these problems generalize the corresponding categorical range counting problems in Euclidean space R^{d+1}, for the respective d, by replacing one of the dimensions with a tree topology.
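For contrast with these sublinear-query structures, the baseline they improve on is a direct O(n)-per-query traversal of the path. A minimal stdlib sketch (the tree layout and categories below are illustrative):

```python
from collections import deque

def path_categories(adj, category, x, y):
    """Count distinct categories on the x-to-y path of a tree: BFS from x
    records parent pointers, then y is walked back up to x. O(n) per query,
    versus the sublinear bounds of the dedicated data structures."""
    parent = {x: None}
    queue = deque([x])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in parent:
                parent[v] = u
                queue.append(v)
    cats = set()
    node = y
    while node is not None:  # walk from y back up to x
        cats.add(category[node])
        node = parent[node]
    return len(cats)

#      0(a)
#     /    \
#   1(b)   2(a)
#   /
# 3(c)
adj = {0: [1, 2], 1: [0, 3], 2: [0], 3: [1]}
category = {0: "a", 1: "b", 2: "a", 3: "c"}
print(path_categories(adj, category, 3, 2))  # path 3-1-0-2: {c, b, a} -> 3
```

The hardness connection in the abstract is intuitive from this baseline: answering many such queries at once resembles a combinatorial matrix product over category indicator vectors.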
Online scheduling of parallelizable jobs in the directed acyclic graphs and speed-up curves models
Ben Moseley, Ruilong Zhang, S. Zhao
Pub Date: 2022-10-01 | DOI: 10.2139/ssrn.4043347 | pp. 24-38
ArNLI: Arabic Natural Language Inference for Entailment and Contradiction Detection
Khloud Al Jallad, Nada Ghneim
Pub Date: 2022-09-28 | DOI: 10.7494/csci.2023.24.2.4378
Natural language inference (NLI) is a hot research topic in natural language processing, and contradiction detection between sentences is a special case of NLI. It is considered a difficult NLP task, with a large influence when added as a component of applications such as question-answering systems and text summarization. Arabic is one of the most challenging low-resource languages for contradiction detection because of its rich lexicon and semantic ambiguity. We created a dataset of more than 12k sentence pairs, named ArNLI, that will be publicly available. Moreover, we applied a new model inspired by the Stanford contradiction-detection solutions proposed for English. Our approach detects contradictions between pairs of Arabic sentences using a contradiction vector combined with a language-model vector as input to a machine learning model. We analyzed the results of different traditional machine learning classifiers and compared their performance on our created dataset (ArNLI) and on automatic translations of the English PHEME and SICK datasets. The best results were achieved with a Random Forest classifier, with accuracies of 99%, 60%, and 75% on PHEME, SICK, and ArNLI, respectively.
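To make the "contradiction vector" idea concrete, a toy sketch of the kind of pair features such a vector might hold (the feature set and the tiny negation list below are illustrative assumptions, not the paper's actual features):

```python
NEGATIONS = {"not", "no", "never", "لا", "لم", "ليس"}  # tiny illustrative list

def contradiction_vector(sent_a, sent_b):
    """Toy pair features: Jaccard word overlap, length difference, and a
    negation-mismatch flag (exactly one sentence contains a negation)."""
    a, b = set(sent_a.lower().split()), set(sent_b.lower().split())
    overlap = len(a & b) / len(a | b)  # Jaccard similarity of word sets
    neg_mismatch = int(bool(a & NEGATIONS) != bool(b & NEGATIONS))
    return [overlap, abs(len(a) - len(b)), neg_mismatch]

print(contradiction_vector("the market is open", "the market is not open"))
# high overlap plus a negation mismatch is a typical contradiction signal
```

Such a feature vector, concatenated with language-model features, would then be fed to a classifier like the Random Forest used in the paper.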
On Reachable Assignments under Dichotomous Preferences
Takehiro Ito, Naonori Kakimura, Naoyuki Kamiyama, Yusuke Kobayashi, Yuta Nozaki, Y. Okamoto, K. Ozeki
Pub Date: 2022-09-21 | DOI: 10.48550/arXiv.2209.10262
We consider the problem of determining whether a target item assignment can be reached from an initial item assignment by a sequence of pairwise exchanges of items between agents. In particular, we consider the situation where each agent has a dichotomous preference over the items; that is, each agent evaluates each item as acceptable or unacceptable. Furthermore, we assume that communication between agents is limited, and the relationship is represented by an undirected graph. Then, a pair of agents can exchange their items only if they are connected by an edge and the involved items are acceptable. We prove that this problem is PSPACE-complete even when the communication graph is complete (that is, every pair of agents can exchange their items), and that it can be solved in polynomial time when the input graph is a tree.
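The reachability question has a direct, exponential-state BFS formulation over assignments, which clarifies the exchange rule even though it is only practical for tiny instances (consistent with the PSPACE-completeness result). The instance below, and the reading that "acceptable" means each agent accepts the item it receives, are illustrative assumptions:

```python
from collections import deque

def reachable(assignment, target, edges, acceptable):
    """BFS over assignments: agents u, v joined by a communication edge may
    swap items iff each finds the item it would receive acceptable."""
    start, goal = tuple(assignment), tuple(target)
    seen, queue = {start}, deque([start])
    while queue:
        state = queue.popleft()
        if state == goal:
            return True
        for u, v in edges:
            if state[v] in acceptable[u] and state[u] in acceptable[v]:
                nxt = list(state)
                nxt[u], nxt[v] = nxt[v], nxt[u]
                nxt = tuple(nxt)
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append(nxt)
    return False

# Path graph 0-1-2; every agent accepts every item, so any permutation
# reachable by adjacent swaps is reachable.
edges = [(0, 1), (1, 2)]
acceptable = {0: {"a", "b", "c"}, 1: {"a", "b", "c"}, 2: {"a", "b", "c"}}
print(reachable(["a", "b", "c"], ["c", "b", "a"], edges, acceptable))  # True
```

The state space has up to n! assignments, so this search is exponential in general; the paper's tree algorithm avoids enumerating it.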