Logistics Service Openness Strategy of Online Platforms with Vertical Differentiation and Endogenous Service Level
Yihong Hu, Yongrui Duan, Shengnan Qu, Jiazhen Huo
Pub Date: 2024-01-03 | DOI: 10.1142/s0217595923400225
This paper explores the strategic motivation for a platform to open its superior logistics service to a third-party seller when the service level is endogenous. We consider a Stackelberg game between the platform and a seller whose product consumers perceive as lower-value than the platform's. We characterize the equilibrium under two schemes, opening or not opening the service, and present conditions under which the platform opens the service and the seller accepts it. In equilibrium, the platform's logistics service remains at the same level before and after opening. In particular, we show that the platform's motivation for opening the service is not simply to collect extra revenue but to mitigate price competition and secure its own demand and price. We find that the platform is always willing to open its logistics system, because doing so provides an additional tool to influence the seller's pricing behavior and thereby improves the platform's own profit. When the commission rate is high, the platform is even willing to subsidize the seller's use of the logistics service. A Pareto improvement for both firms can be realized when consumers are highly sensitive to the service level, although consumers themselves are worse off after the service opening in most cases. Our analysis offers insights into why one retailer would provide high-quality service to its rival when retailers differentiate in both price and service.
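To illustrate the backward-induction logic of a Stackelberg pricing game of the kind studied above, the following toy sketch uses a generic linear-demand duopoly in which the leader anticipates the follower's best response. This is not the paper's model (which includes vertical differentiation, an endogenous service level, and a commission rate); the demand functions and the substitution parameter `gamma` are illustrative assumptions.

```python
import numpy as np

GAMMA = 0.5  # assumed cross-price substitution parameter

def seller_best_response(p1, gamma=GAMMA):
    # Follower maximizes p2 * (1 - p2 + gamma*p1); the first-order
    # condition 1 - 2*p2 + gamma*p1 = 0 gives the response below.
    return (1.0 + gamma * p1) / 2.0

def platform_profit(p1, gamma=GAMMA):
    # Leader's profit, with the follower's reaction substituted in.
    p2 = seller_best_response(p1, gamma)
    return p1 * (1.0 - p1 + gamma * p2)

# Backward induction: the leader optimizes over its own price,
# anticipating the follower's best response (grid search here).
grid = np.linspace(0.0, 2.0, 20001)
p1_star = grid[np.argmax(platform_profit(grid))]

# Closed form from the leader's FOC for comparison:
# p1* = (1 + gamma/2) / (2 * (1 - gamma**2 / 2)).
p1_closed = (1 + GAMMA / 2) / (2 * (1 - GAMMA**2 / 2))
```

The grid-search maximizer should match the closed-form Stackelberg price; the same two-stage structure underlies the equilibrium characterizations in the paper.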
Unification of Higher-Order Dual Programs Over Cones
S. Saini, N. Kailey
Pub Date: 2023-12-30 | DOI: 10.1142/s0217595923500422
Author Index Volume 40
Pub Date: 2023-12-01 | DOI: 10.1142/s0217595923990014
Proximal alternating direction method of multipliers with convex combination proximal centers
Danqing Zhou, Haiwen Xu, Junfeng Yang
Pub Date: 2023-11-08 | DOI: 10.1142/s021759592350029x
The proximal alternating direction method of multipliers (PADMM) is a classical primal-dual splitting method for separable convex optimization problems with linear equality constraints, which arise in, e.g., signal and image processing, machine learning, and statistics. In this paper, we propose a new variant of PADMM, called PADMC, whose proximal centers are constructed as convex combinations of the iterates. PADMC can take advantage of problem structure while preserving the desirable properties of classical PADMM. We establish convergence of the iterates as well as [Formula: see text] ergodic and [Formula: see text] nonergodic sublinear convergence rates measured by the function residual and the feasibility violation, where [Formula: see text] denotes the iteration number. Moreover, we propose two fast variants of PADMC: one achieves a faster [Formula: see text] ergodic rate when one of the component functions is strongly convex, and the other ensures a faster [Formula: see text] nonergodic rate measured by the constraint violation. Finally, preliminary numerical results on the LASSO and elastic-net regularization problems demonstrate the performance of the proposed methods.
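The splitting that PADMM builds on can be illustrated on the LASSO, the first of the two test problems mentioned. The sketch below is plain ADMM with a fixed quadratic proximal term, not the authors' PADMC (whose proximal centers are convex combinations of iterates); the penalty parameter, iteration count, and synthetic data are illustrative assumptions.

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of t*||.||_1 (elementwise shrinkage).
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def admm_lasso(A, b, lam, rho=5.0, iters=500):
    """Classical ADMM for min 0.5*||Ax - b||^2 + lam*||x||_1,
    written with the splitting x - z = 0. A baseline sketch only."""
    m, n = A.shape
    x, z, u = np.zeros(n), np.zeros(n), np.zeros(n)
    AtA, Atb = A.T @ A, A.T @ b
    # Factor (A^T A + rho*I) once; it is reused in every x-update.
    L = np.linalg.cholesky(AtA + rho * np.eye(n))
    for _ in range(iters):
        # x-update: minimize the smooth part plus the quadratic penalty.
        rhs = Atb + rho * (z - u)
        x = np.linalg.solve(L.T, np.linalg.solve(L, rhs))
        # z-update: proximal step on the l1 term.
        z = soft_threshold(x + u, lam / rho)
        # Dual ascent on the scaled multiplier.
        u = u + x - z
    return z

# Synthetic sparse-recovery instance (illustrative data).
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 10))
x_true = np.zeros(10)
x_true[:3] = [2.0, -1.5, 1.0]
b = A @ x_true + 0.01 * rng.standard_normal(40)
x_hat = admm_lasso(A, b, lam=0.5)
```

PADMC would replace the fixed proximal center in the x-update with a convex combination of past iterates; the paper's rate results are stated for that variant, not for this baseline.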
An accelerated double-proximal gradient algorithm for DC programming
Gaoxi Li, Ying Yi, Yingquan Huang
Pub Date: 2023-11-04 | DOI: 10.1142/s0217595923500288
The double-proximal gradient algorithm (DPGA) is a new variant of the classical difference-of-convex algorithm (DCA) for solving difference-of-convex (DC) optimization problems. In this paper, we propose an accelerated double-proximal gradient algorithm (ADPGA) for DC programming in which the objective function consists of three convex modules, only one of which is smooth. We establish convergence of the sequence generated by our algorithm when the objective function satisfies the Kurdyka–Łojasiewicz (KŁ) property and show that its convergence rate is no weaker than that of DPGA. In numerical experiments on an image-processing model, ADPGA reduces the number of iterations by 43.57% and the running time by 43.47% on average compared with DPGA.
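As a rough illustration of the DC splitting such algorithms target, the sketch below applies a basic proximal-gradient DCA step to the standard l1-minus-l2 sparse-recovery model, in which the concave part is linearized by a subgradient at each iteration. This is not the authors' ADPGA (which handles three convex modules and adds acceleration); the model, step size, and data are illustrative assumptions.

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of t*||.||_1 (elementwise shrinkage).
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def dc_prox_grad(A, b, lam=0.1, iters=300):
    """Proximal-gradient DCA sketch for the DC model
       min 0.5*||Ax - b||^2 + lam*||x||_1 - lam*||x||_2,
    i.e., g - h with h = lam*||x||_2 linearized at each iterate."""
    m, n = A.shape
    # Step size 1/L, with L the Lipschitz constant of the smooth part.
    alpha = 1.0 / np.linalg.norm(A, 2) ** 2
    x = np.zeros(n)
    for _ in range(iters):
        # Subgradient of the concave part -lam*||x||_2 (0 at the origin).
        nx = np.linalg.norm(x)
        s = x / nx if nx > 0 else np.zeros(n)
        # Forward step on the smooth + linearized terms ...
        grad = A.T @ (A @ x - b) - lam * s
        # ... then a proximal (backward) step on lam*||.||_1.
        x = soft_threshold(x - alpha * grad, alpha * lam)
    return x

# Synthetic sparse-recovery instance (illustrative data).
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 10))
x_true = np.zeros(10)
x_true[:3] = [2.0, -1.5, 1.0]
b = A @ x_true + 0.01 * rng.standard_normal(40)
x_hat = dc_prox_grad(A, b)
```

Each iteration solves a convex subproblem obtained by linearizing the concave part, which is the core DCA mechanism that both DPGA and ADPGA refine with proximal terms.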