Pub Date: 2023-10-10 | DOI: 10.1016/j.peva.2023.102382
Mehdi Karamollahi, Carey Williamson, Martin Arlitt
In this paper, we develop a synthetic workload model for the Zoom network application based on empirical Zoom traffic measurements from a campus network. We then use this model in a simulation study of Zoom network traffic at the campus scale. The simulation results show that hybrid learning places a substantial load on the campus network. Additional simulation experiments investigate the potential benefits of locally-hosted Zoom infrastructure, improved load balancing strategies for Zoom servers, and multicast delivery for Zoom network traffic. The simulation results show that the multicast approach offers the greatest potential benefit for improving Zoom performance on our campus network.
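As a rough illustration of what a session-level synthetic workload generator looks like, the sketch below draws Poisson session arrivals with exponential durations. The distributions and parameter values (`arrival_rate`, `mean_duration_min`, `bitrate_mbps`) are placeholders for illustration, not the empirically fitted Zoom model from the paper.

```python
import random

def synthetic_zoom_sessions(n_sessions, arrival_rate=1.0,
                            mean_duration_min=45.0, bitrate_mbps=2.5,
                            seed=0):
    """Generate (start_minute, duration_minutes, bitrate_mbps) tuples:
    Poisson session arrivals, exponential durations. Illustrative
    parameters, not the paper's measured distributions."""
    rng = random.Random(seed)
    t, sessions = 0.0, []
    for _ in range(n_sessions):
        t += rng.expovariate(arrival_rate)          # inter-arrival gap
        sessions.append((t, rng.expovariate(1.0 / mean_duration_min),
                         bitrate_mbps))
    return sessions

def offered_load(sessions, t):
    """Aggregate bitrate (Mbps) of sessions active at time t."""
    return sum(b for (s, d, b) in sessions if s <= t < s + d)
```

Feeding such traces into a campus-scale simulator is then a matter of summing `offered_load` over the simulated links.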
Title: "Simulation modeling of Zoom traffic on a campus network: A case study" (Performance Evaluation, vol. 162, Article 102382)
Pub Date: 2023-10-10 | DOI: 10.1016/j.peva.2023.102373
Kumar Saurav, Rahul Vaze
We consider a single source–destination pair, where information updates (in short, updates) arrive at the source at arbitrary time instants. For each update, its size, i.e., the service time required for complete transmission to the destination, is also arbitrary. At any time, the source may choose which update to transmit, incurring a transmission cost proportional to the duration of transmission. We consider the age of information (AoI) metric, which quantifies the staleness of the information at the destination. At any time, the AoI equals the difference between the current time and the arrival time (at the source) of the latest update that has been completely transmitted (to the destination). The goal is to find a causal (i.e., online) scheduling policy that minimizes the sum of the AoI and the transmission cost, where the possible decisions at any time are (i) whether to preempt the update under transmission upon arrival of a new update, and (ii) if no update is under transmission, which update to transmit (among the available updates). In this paper, we propose a causal policy called SRPT+ that, at each time, (i) preempts the update under transmission if a new update arrives with a smaller size (compared to the remaining size of the update under transmission), and (ii) if no update is under transmission, then, from the set of available updates with size less than a threshold (a function of the transmission cost and the current AoI), begins to transmit the update that maximizes the ratio of the reduction in AoI upon complete transmission (if not preempted in the future) to the remaining size. We characterize the performance of SRPT+ using the competitive ratio, i.e., the ratio of the cost of the causal policy to the cost of an optimal offline policy (which knows the entire input in advance), maximized over all possible inputs. We show that the competitive ratio of SRPT+ is at most 5. In the special case with no transmission cost, we further show that the competitive ratio of SRPT+ is at most 3.
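A minimal sketch of the two decision rules, under a simplified reading of the abstract: the `threshold` argument stands in for the paper's function of transmission cost and current AoI, and the AoI-reduction formula assumes the chosen update finishes without preemption. All names are illustrative.

```python
def srpt_plus_select(updates, now, last_delivered_arrival, threshold):
    """Rule (ii): among available updates with remaining size below
    `threshold` (a stand-in for the paper's function of transmission
    cost and current AoI), pick the one maximizing
    (AoI reduction on completion) / (remaining size).
    `updates`: dicts with 'arrival' and 'remaining' (remaining service
    time). Returns the chosen update, or None if none is eligible."""
    aoi = now - last_delivered_arrival
    eligible = [u for u in updates if u['remaining'] < threshold]
    if not eligible:
        return None
    # If u completes without preemption, AoI drops from aoi + remaining
    # to (now - arrival) + remaining; the reduction is aoi - (now - arrival).
    return max(eligible,
               key=lambda u: (aoi - (now - u['arrival'])) / u['remaining'])

def should_preempt(current_remaining, new_size):
    """Rule (i): preempt iff the newly arrived update is strictly
    smaller than the remaining size of the update in service."""
    return new_size < current_remaining
```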
Title: "Minimizing age of information under arbitrary arrival model with arbitrary packet size" (Performance Evaluation, vol. 162, Article 102373)
Pub Date: 2023-10-06 | DOI: 10.1016/j.peva.2023.102370
Diletta Olliaro , Marco Ajmone Marsan , Simonetta Balsamo , Andrea Marin
We consider a multiserver queue where jobs request a varying number of servers for a random service time. The requested number of servers is assigned to each job in First-In First-Out (FIFO) order. When the number of free servers is not sufficient to accommodate the next job in line, that job and any subsequent jobs in the queue are forced to wait. As a result, not all available servers are allocated to jobs if the next job requires more servers than are currently free. This queuing system is often called a Multiserver Job Queuing Model (MJQM).
In this paper, we study the behavior of an MJQM under saturation, i.e., when the waiting line always contains jobs to be served. We categorize jobs into two classes: the first class consists of jobs that require only one server, while the second class includes jobs that require a larger number of servers. We obtain the system utilization and the throughput of the two job classes for the case in which the number of servers requested by jobs of the second class equals the number of available servers, using a simple approach that allows for a general distribution of the service time of second-class jobs. Hence, we derive the stability condition of the non-saturated MJQM under these assumptions. Additionally, we develop an approximate analysis for the case in which the jobs of the second class require a fraction of the available servers.
Based on analytical and numerical results, we highlight interesting system properties and insights.
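The saturated regime with class-2 jobs demanding all c servers can be sketched as a short event-driven simulation. Exponential service times and all parameter values are illustrative simplifications (the paper's analysis allows a general class-2 service distribution).

```python
import heapq
import random

def saturated_mjqm(c=8, p2=0.2, mu1=1.0, mu2=1.0, n_jobs=20000, seed=1):
    """Saturated FIFO multiserver-job queue: class-1 jobs need 1 server,
    class-2 jobs need all c servers (the exact case in the paper).
    Returns (utilization, class-1 throughput, class-2 throughput)."""
    rng = random.Random(seed)
    t, busy, busy_area = 0.0, 0, 0.0
    departures = []                      # min-heap of (finish_time, servers_held)
    done = [0, 0]
    for _ in range(n_jobs):              # saturation: a job is always waiting
        cls = 2 if rng.random() < p2 else 1
        need = c if cls == 2 else 1
        while c - busy < need:           # FIFO head-of-line blocking
            ft, held = heapq.heappop(departures)
            busy_area += busy * (ft - t)
            t, busy = ft, busy - held
        rate = mu2 if cls == 2 else mu1
        heapq.heappush(departures, (t + rng.expovariate(rate), need))
        busy += need
        done[cls - 1] += 1
    while departures:                    # drain remaining jobs
        ft, held = heapq.heappop(departures)
        busy_area += busy * (ft - t)
        t, busy = ft, busy - held
    return busy_area / (c * t), done[0] / t, done[1] / t
```

Because a class-2 job at the head of the line must wait for every server to drain, utilization stays strictly below 1 even under saturation, which is the effect the exact analysis quantifies.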
Title: "The saturated Multiserver Job Queuing Model with two classes of jobs: Exact and approximate results" (Performance Evaluation, vol. 162, Article 102370)
Pub Date: 2023-10-05 | DOI: 10.1016/j.peva.2023.102374
F. Serhan Daniş , Cem Ersoy , A. Taylan Cemgil
We construct a practical and real-time probabilistic framework for fine target tracking. In our scenario, a Bluetooth Low-Energy (BLE) device navigating in the environment publishes BLE packets that are captured by stationary BLE sensors. The aim is to accurately estimate the live position of the BLE device emitting these packets. The framework is built upon a hidden Markov model (HMM), the components of which are determined with a combination of heuristic and data-driven approaches. In the data-driven part, we rely on fingerprints formed in advance by extracting received signal strength indicators (RSSI) from the packets. These data are then transformed into probabilistic radio-frequency maps that are used to measure the likelihood of an RSSI observation at a given position. The heuristic part involves the movement of the tracked object. Having no access to any inertial information about the object, this movement is modeled with Gaussian densities whose model parameters are determined heuristically. The practicality of the framework comes from the small parameter set used to discretize the components of the HMM. By tuning these parameters, such as the grid size of the area, the mask size, and the covariance of the Gaussian, probabilistic filtering becomes tractable for discrete state spaces. The filtering is then performed by the forward algorithm given the instantaneous sequential RSSI measurements. The performance of the system is evaluated by computing the mean squared errors between the most probable positions at each time step and their corresponding ground-truth positions. We report the statistics of the error distributions and achieve promising results. Finally, the approach is also evaluated in terms of runtime and memory usage.
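One forward-algorithm update on a discretized position grid can be sketched as follows. The 1-D Gaussian motion kernel is a simplified stand-in for the paper's 2-D mask, and all sizes and names are illustrative assumptions.

```python
import numpy as np

def forward_step(alpha, trans, lik):
    """One forward-algorithm update on a discretized grid:
    alpha : (n,) current filtering distribution over grid cells
    trans : (n, n) row-stochastic transition matrix (motion kernel)
    lik   : (n,) likelihood of the new RSSI vector at each cell,
            read off the precomputed probabilistic RF maps
    Returns the normalized posterior filtering distribution."""
    post = lik * (trans.T @ alpha)
    return post / post.sum()

def gaussian_kernel_1d(n, sigma):
    """Toy Gaussian motion model on a 1-D grid (the paper uses a 2-D
    mask whose size and covariance are tuning parameters)."""
    idx = np.arange(n)
    k = np.exp(-0.5 * ((idx[None, :] - idx[:, None]) / sigma) ** 2)
    return k / k.sum(axis=1, keepdims=True)
```

The live position estimate at each step is then simply the grid cell maximizing the filtering distribution (`alpha.argmax()`).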
Title: "Probabilistic indoor tracking of Bluetooth Low-Energy beacons" (Performance Evaluation, vol. 162, Article 102374)
Pub Date: 2023-09-20 | DOI: 10.1016/j.peva.2023.102368
Chengzhen Meng, Hongjun Dai
Nowadays, various AI applications based on Convolutional Neural Networks (CNNs) are widely deployed on GPU-accelerated devices. However, due to the lack of visibility into GPU internal scheduling, accurately modeling the performance of CNN inference tasks, or estimating the latency of CNN tasks that are executing or waiting on the GPU, is challenging. This hampers multi-model scheduling across multiple devices and real-time CNN inference. Therefore, in this paper, we propose a method to estimate the forward execution time of a convolutional layer of arbitrary shape on a GPU. The proposed method divides an explicit General Matrix Multiplication (GEMM) convolution operation into a series of estimatable GPU operations and constructs performance models at the level of sub-operations rather than hardware instructions or entire models. The proposed method can also be easily adapted to different hardware devices or underlying algorithm implementations, since it focuses on the variation of execution time with the input data scale rather than on specific instructions or hardware actions. In experiments on four typical CUDA-compatible platforms, the proposed method has an average error rate of less than 5% for convolutional layers in several practical CNN models, and about an 8% error rate when estimating the GEMM convolution implementations provided by the cuDNN library. The experiments show that the proposed method can predict the forward execution time of convolutional layers of arbitrary size in CNN inference tasks on different GPU models.
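The idea of modeling at the sub-operation level can be sketched as fitting a simple linear time-versus-scale model per sub-operation and summing the predictions. The decomposition into im2col plus matrix multiply and the scale functions below are illustrative assumptions, not the paper's exact breakdown.

```python
import numpy as np

def fit_time_model(sizes, times):
    """Fit t ~ a*size + b by least squares for one GPU sub-operation.
    `sizes`: 1-D array of input data scales (e.g. matrix elements or
    FLOPs); `times`: the corresponding measured kernel times."""
    A = np.vstack([sizes, np.ones_like(sizes)]).T
    (a, b), *_ = np.linalg.lstsq(A, times, rcond=None)
    return a, b

def predict_conv_time(shape, models):
    """Sum predicted times over the sub-operations of an explicit GEMM
    convolution. `models` maps a sub-op name to ((a, b), scale_fn),
    where scale_fn derives that sub-op's data scale from the layer
    shape; both the names and the scale functions are hypothetical."""
    return sum(a * scale_fn(shape) + b
               for (a, b), scale_fn in models.values())
```

New hardware or a new implementation then only requires re-measuring a handful of sub-operation timing curves, which is the portability argument the abstract makes.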
Title: "A hardware-independent time estimation method for inference process of convolutional layers on GPU" (Performance Evaluation, vol. 162, Article 102368)
Pub Date: 2023-09-11 | DOI: 10.1016/j.peva.2023.102367
R. Sri Prakash, Nikhil Karamchandani, Sharayu Moharir
We consider the problem of service hosting, in which a service provider can dynamically rent edge resources via short-term contracts to ensure better quality of service for its customers. The service can also be partially hosted at the edge, in which case customers' requests can be partially served at the edge. The total cost incurred by the system is modeled as a combination of the rent cost, the service cost incurred due to latency in serving customers, and the fetch cost incurred by the bandwidth used to fetch the code/databases of the service from the cloud servers to host the service at the edge. In this paper, we compare multiple hosting policies with regret as a metric, defined as the difference between the cost incurred by the policy and the cost of the optimal policy over a time horizon T. In particular, we consider the Retro Renting (RR) and Follow The Perturbed Leader (FTPL) policies proposed in the literature and provide performance guarantees on their regret. We show that under i.i.d. stochastic arrivals, the RR policy has linear regret while the FTPL policy has constant regret. Next, we propose a variant of FTPL, namely Wait then FTPL (W-FTPL), which also has constant regret while exhibiting much better dependence on the fetch cost. We also show that under adversarial arrivals, the RR policy has linear regret while both FTPL and W-FTPL have O(√T) regret, which is order-optimal.
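A sketch of the FTPL idea for this problem, under a simplified cost model (rent per slot when hosted, per-request service cost when not, and a fetch cost on each switch to hosting) with a single exponential perturbation. The constants, the perturbation scheme, and the cost accounting are assumptions for illustration, not the paper's exact policy.

```python
import random

def ftpl_hosting(request_loads, rent=1.0, fetch=10.0, service=0.5,
                 eta=1.0, seed=0):
    """Follow-The-Perturbed-Leader sketch for edge hosting. Each slot,
    compare the perturbed cumulative cost of the static 'always host'
    option against 'never host', and host iff the perturbed host cost
    is lower. Returns the total cost incurred."""
    rng = random.Random(seed)
    gamma = rng.expovariate(1.0 / eta)   # one-time perturbation, mean eta
    cum_host = cum_miss = 0.0            # costs of the two static options
    hosted, total = False, 0.0
    for x in request_loads:              # x = requests arriving this slot
        cum_host += rent
        cum_miss += service * x
        want = cum_host - gamma < cum_miss   # perturbed leader says: host
        if want and not hosted:
            total += fetch               # fetch the service to the edge
        hosted = want
        total += rent if hosted else service * x
    return total
```

The W-FTPL variant would additionally wait for an initial interval (growing with the fetch cost) before its first fetch, which is what improves the fetch-cost dependence.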
Title: "On the regret of online edge service hosting" (Performance Evaluation, vol. 162, Article 102367)
NFV (Network Functions Virtualization) is a technology that provides network services through virtualization. While virtualization by itself provides a flexible architecture, environments in which physical legacy equipment and virtual machines coexist are also considered, in order to meet a wide range of requirements from both users and service providers. Motivated by such hybrid systems, we propose queueing models with two types of service facilities: legacy servers and virtual machines. The key feature is that while legacy servers are always on standby, virtual machines need setup time to become ready for service, because they are shut down to reduce power consumption when no jobs are waiting. With delay-sensitive real-time services in mind, we evaluate the performance of the queueing models, in particular the delay and energy efficiency.
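A toy discrete-event version of such a hybrid system can be sketched with one always-on legacy server and one VM that needs an exponential setup time when woken and shuts down when it idles. This is an illustrative sketch of the modeled behavior with assumed parameters, not the paper's exact multi-server model.

```python
import heapq
import random

def hybrid_queue_sim(lam=1.2, mu=1.0, theta=2.0, n_jobs=20000, seed=2):
    """One always-on legacy server plus one VM with exp(theta) setup
    that shuts down when idle; Poisson(lam) arrivals, exp(mu) services,
    FIFO queue. Returns the mean response time."""
    rng = random.Random(seed)
    exp = rng.expovariate
    t, seq = 0.0, 0
    ev = [(exp(lam), 0, 'arr')]          # (time, tiebreak, kind)
    queue = []                           # arrival times of waiting jobs
    leg_job = vm_job = None              # arrival time of job in service
    vm = 'off'                           # 'off' | 'setup' | 'busy'
    resp, served = 0.0, 0
    while served < n_jobs:
        t, _, kind = heapq.heappop(ev)
        seq += 1
        if kind == 'arr':
            heapq.heappush(ev, (t + exp(lam), seq, 'arr'))
            if leg_job is None:          # legacy server free: serve now
                leg_job = t
                heapq.heappush(ev, (t + exp(mu), seq, 'leg'))
            else:
                queue.append(t)
                if vm == 'off':          # wake the VM (setup begins)
                    vm = 'setup'
                    heapq.heappush(ev, (t + exp(theta), seq, 'set'))
        elif kind == 'leg':              # legacy service completion
            resp += t - leg_job; served += 1
            if queue:
                leg_job = queue.pop(0)
                heapq.heappush(ev, (t + exp(mu), seq, 'leg'))
            else:
                leg_job = None
        elif kind == 'set':              # VM setup complete
            if queue:
                vm, vm_job = 'busy', queue.pop(0)
                heapq.heappush(ev, (t + exp(mu), seq, 'vm'))
            else:
                vm = 'off'               # nothing waiting: shut down
        else:                            # VM service completion
            resp += t - vm_job; served += 1
            if queue:
                vm_job = queue.pop(0)
                heapq.heappush(ev, (t + exp(mu), seq, 'vm'))
            else:
                vm, vm_job = 'off', None
    return resp / served
```

Sweeping the setup rate `theta` in such a simulation exposes the delay/energy trade-off the paper analyzes: slow setup saves power but inflates the delay of jobs that trigger a wake-up.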
Title: "Modeling and performance analysis of hybrid systems by queues with setup time" by Mitsuki Sato, Kohei Kawamura, Ken'ichi Kawanishi, Tuan Phung-Duc (Performance Evaluation, vol. 162, Article 102366; published 2023-09-04; DOI: 10.1016/j.peva.2023.102366)
The demand for electricity at Charging Stations (CSs) by Electric Vehicle (EV) users is increasing tremendously. However, EV users still face limited resources at the CSs, both in terms of the number of parking spaces equipped with a charging point and in terms of available power. This paper deals with the choice between two CSs by EV users in a competitive environment. The stochastic nature of arrivals and departures at the CSs is modeled by a queueing system. A queueing game is studied in which the EV users are the players and choose the CS that gives the highest expected energy received. An approximation of the expected energy received at the CSs is derived theoretically, and the quality of this approximation is illustrated and analyzed numerically through simulations. The existence and uniqueness of the equilibrium of the game is proved, and bounds on the Price of Anarchy (PoA) are provided. Moreover, the model is simulated using a discrete-event framework, and a sensitivity analysis of the main metrics of the system with respect to the average parking duration and the power sizing coefficient is provided. The results show that the utility of EV users at equilibrium is close to the optimal utility. This study can help a Charging Point Operator (CPO) design incentives for EV users, for instance to limit parking duration, so as to improve the social welfare of the EV users.

Title: "A loss queueing game for electric vehicle charging performance evaluation" by Alix Dupont, Yezekael Hayel, Tania Jiménez, Olivier Beaude, Jean-Baptiste Breal (Performance Evaluation, vol. 161, Article 102350; published 2023-09-01; DOI: 10.1016/j.peva.2023.102350)
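The station-choice equilibrium can be sketched numerically: modeling each CS as a loss system, bisect on the fraction of users choosing CS 1 until both stations look equally attractive. The Erlang-B acceptance probability used here is a crude stand-in for the paper's expected-energy utility, and all parameters are illustrative.

```python
def erlang_b(a, c):
    """Erlang-B blocking probability for offered load a and c spaces,
    via the standard stable recursion B(k) = a*B(k-1)/(k + a*B(k-1))."""
    b = 1.0
    for k in range(1, c + 1):
        b = a * b / (k + a * b)
    return b

def equilibrium_split(lam, mu, c1, c2, tol=1e-9):
    """Bisect on the fraction p of EV users choosing CS 1 so that the
    acceptance probabilities at the two stations are equal -- a
    Wardrop-style equilibrium sketch. lam: total arrival rate;
    1/mu: mean parking duration; c1, c2: charging spaces."""
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        p = (lo + hi) / 2
        u1 = 1 - erlang_b(p * lam / mu, c1)
        u2 = 1 - erlang_b((1 - p) * lam / mu, c2)
        if u1 > u2:
            lo = p       # CS 1 still more attractive: send more users there
        else:
            hi = p
    return (lo + hi) / 2
```

With asymmetric capacities (c1 ≠ c2), the split shifts toward the larger station, which is the kind of load imbalance a CPO's incentives could correct.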
Pub Date: 2023-09-01 | DOI: 10.1016/j.peva.2023.102351
Xuchuang Wang , Hong Xie , Pinghui Wang , John C.S. Lui
User abandonment behaviors are quite common in recommendation applications such as online shopping recommendation and news recommendation. To maximize its total "reward" under the risk of user abandonment, the online platform needs to carefully optimize its recommendations, because inappropriate recommendations can lead to users abandoning the platform, which results in a short learning duration and reduces the cumulative reward. To address this problem, we formulate a new online decision model and propose an algorithmic framework that transfers similar users' information via parametric estimation and employs this knowledge to optimize later decisions. The framework's theoretical guarantees depend on the requirements placed on its transfer learning oracle and online decision oracle. We then design an online learning algorithm consisting of two components that fulfill the corresponding oracles' requirements. We also conduct extensive experiments to demonstrate our algorithm's performance.
{"title":"Optimizing recommendations under abandonment risks: Models and algorithms","authors":"Xuchuang Wang , Hong Xie , Pinghui Wang , John C.S. Lui","doi":"10.1016/j.peva.2023.102351","DOIUrl":"https://doi.org/10.1016/j.peva.2023.102351","url":null,"abstract":"<div><p>User abandonment behaviors are quite common in recommendation applications such as online shopping recommendation and news recommendation. To maximize its total “reward” under the risk of user abandonment, the online platform needs to carefully optimize its recommendations for its users. Because inappropriate recommendations can lead to user abandoning the platform, which results in a short learning duration and reduces the cumulative reward. To address this problem, we formulate a new online decision model and propose an algorithmic framework to <em>transfer similar users’ information</em><span> via parametric estimation, and employ this knowledge to </span><em>optimize later decisions</em><span>. The framework’s theoretical guarantees depend on requirements for its transfer learning oracle and online decision oracle. We then design an online learning algorithm consisting of two components that fulfills each corresponding oracle’s requirements. We also conduct extensive experiments to demonstrate our algorithm’s performance.</span></p></div>","PeriodicalId":19964,"journal":{"name":"Performance Evaluation","volume":"161 ","pages":"Article 102351"},"PeriodicalIF":2.2,"publicationDate":"2023-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49757519","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2023-09-01DOI: 10.1016/j.peva.2023.102354
Li Tang, Scott Pakin
Characterization of program execution plays a key role in performance improvement. Numerous transformations are applied at each step as a program is lowered from source code to a compiler intermediate representation, to machine language, and finally to microarchitecture-specific execution. The unpredictable effect of each transformation step can lead a notionally superior algorithm to exhibit inferior performance once actually run, and it can be hard to discern which step in the transformation path contradicted the code developer’s assumptions.
Conventional approaches to program-execution characterization consider the behavior after only a single one of those steps, which limits the information that can be provided to the user. To help address this myopic view of program execution, this paper presents a novel cross-level characterization approach for understanding the behavior of program execution at the different levels involved in writing, compiling, and running a program. We show that this approach provides a richer view of the sources of performance gains and losses and helps characterize program execution more accurately.
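As a toy illustration of why a single-level view can mislead (my own example, not the paper’s method): judged at the source level by asymptotic complexity, merge sort dominates insertion sort, yet counting the comparisons actually executed on already-sorted input reverses the ranking. This is the kind of cross-level mismatch the characterization aims to expose:

```python
def insertion_sort(a):
    """O(n^2) in the worst case, but near-linear work on sorted input."""
    a = list(a)
    comps = 0
    for i in range(1, len(a)):
        j = i
        while j > 0:
            comps += 1
            if a[j - 1] > a[j]:
                a[j - 1], a[j] = a[j], a[j - 1]
                j -= 1
            else:
                break   # already in place: one comparison per element
    return a, comps

def merge_sort(a):
    """O(n log n) always, including comparisons it does not 'need'."""
    comps = 0
    def sort(xs):
        nonlocal comps
        if len(xs) <= 1:
            return xs
        mid = len(xs) // 2
        left, right = sort(xs[:mid]), sort(xs[mid:])
        out, i, j = [], 0, 0
        while i < len(left) and j < len(right):
            comps += 1
            if left[i] <= right[j]:
                out.append(left[i]); i += 1
            else:
                out.append(right[j]); j += 1
        out.extend(left[i:]); out.extend(right[j:])
        return out
    return sort(list(a)), comps

data = list(range(1000))          # already sorted: the "easy" case
s1, c1 = insertion_sort(data)
s2, c2 = merge_sort(data)
# the notionally inferior algorithm executes far fewer comparisons here
```

A view that stops at the source-level cost model never sees this; a view that only times the binary never explains it.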
{"title":"CLC: A cross-level program characterization method","authors":"Li Tang, Scott Pakin","doi":"10.1016/j.peva.2023.102354","DOIUrl":"https://doi.org/10.1016/j.peva.2023.102354","url":null,"abstract":"<div><p>Characterization of program execution plays a key role in performance improvement. There are numerous transformations applied to each step that a program takes on its lowering from source code to a compiler intermediate representation to machine language to microarchitecture-specific execution. The unpredictable benefit of each transformation step could lead a notionally superior algorithm to exhibit inferior performance once actually run, and it can be hard to discern which step in the transformation path contradicted the code developer’s assumptions.</p><p>Conventional approaches to program-execution characterization consider the behavior after only a single one of those steps, which limits the information that can be provided to the user. To help address the issue of myopic views of program execution, this paper presents a novel cross-level characterization approach for understanding the behavior of program execution at different levels in the process of writing, compiling, and running a program. We show that this approach provides a richer view of the sources of performance gains and losses and helps identify program execution in a more accurate manner.</p></div>","PeriodicalId":19964,"journal":{"name":"Performance Evaluation","volume":"161 ","pages":"Article 102354"},"PeriodicalIF":2.2,"publicationDate":"2023-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49701144","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}