Artificial intelligence (AI) refers to the ability of machines or software to mimic or even surpass human intelligence in a given cognitive task. While humans learn by both induction and deduction, the success of current AI is rooted in induction, relying on its ability to detect statistical regularities in task input -- an ability learnt from a vast amount of training data using enormous computation resources. We examine the performance of such a statistical AI in a human task through the lens of four factors: task learnability, statistical resources, computation resources, and learning techniques. We then propose a three-phase visual framework to understand the evolving relationship between AI and jobs. Based on this conceptual framework, we develop a simple economic model of competition to show the existence of an inflection point for each occupation. Before AI performance crosses the inflection point, human workers always benefit from an improvement in AI performance; after the inflection point, human workers become worse off whenever such an improvement occurs. To offer empirical evidence, we first argue that AI performance has passed the inflection point for the occupation of translation but not for the occupation of web development. We then study how the launch of ChatGPT, which led to a significant improvement in AI performance on many tasks, has affected workers in these two occupations on a large online labor platform. Consistent with the inflection point conjecture, we find that translators are negatively affected by the shock in terms of both the number of accepted jobs and the earnings from those jobs, while web developers are positively affected by the very same shock. Given the potentially large disruption of employment by AI, more studies on more occupations using data from different platforms are urgently needed.
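The inflection-point logic can be sketched with a deliberately minimal toy model (not the paper's actual specification): suppose AI of quality a expands overall task demand while shrinking the human share of the work. Human earnings then peak at an interior a*, rising before it and falling after it. All functional forms and parameter values below are illustrative assumptions.

```python
# Toy sketch of an inflection point in human earnings as AI improves.
# Assumed forms: demand grows linearly in AI quality a (market expansion),
# while the human share of work shrinks linearly (substitution).

def human_earnings(a: float, d0: float = 100.0, g: float = 3.0) -> float:
    """Human earnings at AI performance a in [0, 1].

    d0 is a baseline market size and g a demand-expansion strength;
    both are illustrative values, not estimated from the paper's data.
    """
    demand = d0 * (1.0 + g * a)   # AI grows the overall market
    human_share = 1.0 - a         # AI substitutes for human labor
    return demand * human_share

def inflection_point(g: float = 3.0) -> float:
    """Argmax of (1 + g*a) * (1 - a): a* = (g - 1) / (2 * g)."""
    return (g - 1.0) / (2.0 * g)

a_star = inflection_point()  # 1/3 when g = 3
assert human_earnings(a_star - 0.05) < human_earnings(a_star)  # AI gains help workers before a*
assert human_earnings(a_star + 0.05) < human_earnings(a_star)  # and hurt them after a*
```

In this toy, occupations differ only through g: a larger g (AI strongly expands the market, as arguably for web development) pushes the inflection point further out than a small g (as arguably for translation).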
{"title":"AI and Jobs: Has the Inflection Point Arrived? Evidence from an Online Labor Platform","authors":"Dandan Qiao, Huaxia Rui, Qian Xiong","doi":"arxiv-2312.04180","DOIUrl":"https://doi.org/arxiv-2312.04180","url":null,"abstract":"Artificial intelligence (AI) refers to the ability of machines or software to\u0000mimic or even surpass human intelligence in a given cognitive task. While\u0000humans learn by both induction and deduction, the success of current AI is\u0000rooted in induction, relying on its ability to detect statistical regularities\u0000in task input -- an ability learnt from a vast amount of training data using\u0000enormous computation resources. We examine the performance of such a\u0000statistical AI in a human task through the lens of four factors, including task\u0000learnability, statistical resource, computation resource, and learning\u0000techniques, and then propose a three-phase visual framework to understand the\u0000evolving relation between AI and jobs. Based on this conceptual framework, we\u0000develop a simple economic model of competition to show the existence of an\u0000inflection point for each occupation. Before AI performance crosses the\u0000inflection point, human workers always benefit from an improvement in AI\u0000performance, but after the inflection point, human workers become worse off\u0000whenever such an improvement occurs. To offer empirical evidence, we first\u0000argue that AI performance has passed the inflection point for the occupation of\u0000translation but not for the occupation of web development. We then study how\u0000the launch of ChatGPT, which led to significant improvement of AI performance\u0000on many tasks, has affected workers in these two occupations on a large online\u0000labor platform. 
Consistent with the inflection point conjecture, we find that\u0000translators are negatively affected by the shock both in terms of the number of\u0000accepted jobs and the earnings from those jobs, while web developers are\u0000positively affected by the very same shock. Given the potentially large\u0000disruption of AI on employment, more studies on more occupations using data\u0000from different platforms are urgently needed.","PeriodicalId":501487,"journal":{"name":"arXiv - QuantFin - Economics","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2023-12-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138554783","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Chen Chris Gong, Falko Ueckerdt, Christoph Bertram, Yuxin Yin, David Bantje, Robert Pietzcker, Johanna Hoppe, Michaja Pehl, Gunnar Luderer
Decarbonizing China's energy system requires both greening the power supply and end-use electrification. While the latter is speeding up with electric vehicle adoption, a rapid power sector transformation can be technologically and institutionally challenging. Using an integrated assessment model, we analyze the synergy between power sector decarbonization and end-use electrification in China's net-zero pathway from a system perspective. We show that even with a slower coal power phase-out, reaching a high electrification rate of 60% by 2050 is a robust optimal strategy. Comparing the emission intensity of typical end-use applications, we find that most have reached parity with incumbent fossil fuel technologies even under China's current power mix, owing to efficiency gains. Since a 10-year delay in the coal power phase-out can result in additional cumulative emissions of 28% (4%) of the global 1.5°C (2°C) CO2 budget, policy measures should be undertaken today to ensure a power sector transition without unexpected delays.
{"title":"Robust CO2-abatement from early end-use electrification under uncertain power transition speed in China's netzero transition","authors":"Chen Chris Gong, Falko Ueckerdt, Christoph Bertram, Yuxin Yin, David Bantje, Robert Pietzcker, Johanna Hoppe, Michaja Pehl, Gunnar Luderer","doi":"arxiv-2312.04332","DOIUrl":"https://doi.org/arxiv-2312.04332","url":null,"abstract":"Decarbonizing China's energy system requires both greening the power supply\u0000and end-use electrification. While the latter speeds up with the electric\u0000vehicle adoption, a rapid power sector transformation can be technologically\u0000and institutionally challenging. Using an integrated assessment model, we\u0000analyze the synergy between power sector decarbonization and end-use\u0000electrification in China's net-zero pathway from a system perspective. We show\u0000that even with a slower coal power phase-out, reaching a high electrification\u0000rate of 60% by 2050 is a robust optimal strategy. Comparing emission intensity\u0000of typical end-use applications, we find most have reached parity with\u0000incumbent fossil fuel technologies even under China's current power mix due to\u0000efficiency gains. 
Since a 10-year delay in coal power phase-out can result in\u0000an additional cumulative emission of 28% (4%) of the global 1.5{deg}C\u0000(2{deg}C) CO2 budget, policy measures should be undertaken today to ensure a\u0000power sector transition without unexpected delays.","PeriodicalId":501487,"journal":{"name":"arXiv - QuantFin - Economics","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2023-12-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138554126","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Andrew Thomas Goheen, Zayon Deshon Mobley-Wright, Rayne Symone Strother, Ryan Thomas Guthrie
This research report aims to investigate the feasibility of establishing a nuclear presence at Western Michigan University. The report will analyze the potential benefits and drawbacks of introducing nuclear technology to WMU's campus. The study will also examine the current state of nuclear technology and its applications in higher education. The report will conclude with a recommendation on whether WMU should pursue the establishment of a nuclear presence on its campus.
{"title":"The Feasibility of Establishing A Nuclear Presence at Western Michigan University","authors":"Andrew Thomas Goheen, Zayon Deshon Mobley-Wright, Rayne Symone Strother, Ryan Thomas Guthrie","doi":"arxiv-2312.03249","DOIUrl":"https://doi.org/arxiv-2312.03249","url":null,"abstract":"This research report journal aims to investigate the feasibility of\u0000establishing a nuclear presence at Western Michigan University. The report will\u0000analyze the potential benefits and drawbacks of introducing nuclear technology\u0000to WMUs campus. The study will also examine the current state of nuclear\u0000technology and its applications in higher education. The report will conclude\u0000with a recommendation on whether WMU should pursue the establishment of a\u0000nuclear presence on its campus.","PeriodicalId":501487,"journal":{"name":"arXiv - QuantFin - Economics","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2023-12-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138547442","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
As governments race to implement new climate adaptation policies that prepare for more frequent flooding, they must seek policies that are effective for all communities and uphold climate justice. This requires evaluating policies not only on their overall effectiveness but also on whether their benefits are felt across all communities. We illustrate the importance of considering such disparities for flood adaptation using the FEMA National Flood Insurance Program Community Rating System and its dataset of ~2.5 million flood insurance claims. We use CausalFlow, a causal inference method based on deep generative models, to estimate the treatment effect of flood adaptation interventions based on a community's income, diversity, population, flood risk, educational attainment, and precipitation. We find that the program saves communities $5,000 to $15,000 per household. However, these savings are not evenly spread across communities. For example, for low-income communities, savings sharply decline as flood risk increases, in contrast to their high-income counterparts, all else being equal. Even among low-income communities, there is a gap in savings between predominantly white and non-white communities: savings in predominantly white communities can be higher by more than $6,000 per household. As communities worldwide ramp up efforts to reduce losses inflicted by floods, simply prescribing a series of flood adaptation measures is not enough. Programs must provide communities with the necessary technical and economic support to compensate for historical patterns of disenfranchisement, racism, and inequality. Future flood adaptation efforts should go beyond reducing losses overall and aim to close existing gaps, to equitably support communities in the race for climate adaptation.
{"title":"Exposing Disparities in Flood Adaptation for Equitable Future Interventions","authors":"Lidia Cano Pecharroman, ChangHoon Hahn","doi":"arxiv-2312.03843","DOIUrl":"https://doi.org/arxiv-2312.03843","url":null,"abstract":"As governments race to implement new climate adaptation policies that prepare\u0000for more frequent flooding, they must seek policies that are effective for all\u0000communities and uphold climate justice. This requires evaluating policies not\u0000only on their overall effectiveness but also on whether their benefits are felt\u0000across all communities. We illustrate the importance of considering such\u0000disparities for flood adaptation using the FEMA National Flood Insurance\u0000Program Community Rating System and its dataset of $sim$2.5 million flood\u0000insurance claims. We use ${rm C{scriptsize AUSAL}F{scriptsize LOW}}$, a\u0000causal inference method based on deep generative models, to estimate the\u0000treatment effect of flood adaptation interventions based on a community's\u0000income, diversity, population, flood risk, educational attainment, and\u0000precipitation. We find that the program saves communities $5,000--15,000 per\u0000household. However, these savings are not evenly spread across communities. For\u0000example, for low-income communities savings sharply decline as flood-risk\u0000increases in contrast to their high-income counterparts with all else equal.\u0000Even among low-income communities, there is a gap in savings between\u0000predominantly white and non-white communities: savings of predominantly white\u0000communities can be higher by more than $6000 per household. As communities\u0000worldwide ramp up efforts to reduce losses inflicted by floods, simply\u0000prescribing a series flood adaptation measures is not enough. Programs must\u0000provide communities with the necessary technical and economic support to\u0000compensate for historical patterns of disenfranchisement, racism, and\u0000inequality. 
Future flood adaptation efforts should go beyond reducing losses\u0000overall and aim to close existing gaps to equitably support communities in the\u0000race for climate adaptation.","PeriodicalId":501487,"journal":{"name":"arXiv - QuantFin - Economics","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2023-12-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138556901","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The well-being of individuals in a crowd is interpreted as a product of the crossover of individuals from heterogeneous communities, which may occur via interactions with other crowds. The index moving-direction entropy, corresponding to the diversity of the moving directions of individuals, is introduced to represent such inter-community crossover. Multiscale moving-direction entropies, computed over geographical meshes of various sizes, are used to capture the information flow owing to human movements from/to various crowds. The generated map of high values of multiscale moving-direction entropy is shown to coincide significantly with people's preference to live in each region.
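The core quantity can be sketched as Shannon entropy over binned movement headings, computed per mesh cell and averaged across several mesh sizes. This is a simplified reading of the abstract: the heading binning, the mesh handling, and the rule for combining scales below are assumptions, not the authors' exact procedure.

```python
import math
from collections import Counter, defaultdict

def direction_entropy(moves, n_bins=8):
    """Shannon entropy (bits) of movement headings binned into n_bins sectors.

    moves: list of (dx, dy) displacement vectors within one mesh cell.
    High entropy = movements in many directions = more crossover.
    """
    if not moves:
        return 0.0
    bins = Counter()
    for dx, dy in moves:
        angle = math.atan2(dy, dx) % (2 * math.pi)          # heading in [0, 2*pi)
        bins[int(angle / (2 * math.pi) * n_bins) % n_bins] += 1
    total = sum(bins.values())
    return -sum((c / total) * math.log2(c / total) for c in bins.values())

def multiscale_entropy(tracks, mesh_sizes=(1.0, 2.0, 4.0)):
    """Mean per-cell direction entropy at several mesh sizes, averaged over scales.

    tracks: list of ((x, y), (dx, dy)) position/displacement pairs.
    The simple mean over scales is an assumed aggregation rule.
    """
    per_scale = []
    for size in mesh_sizes:
        cells = defaultdict(list)
        for (x, y), move in tracks:
            cells[(int(x // size), int(y // size))].append(move)  # assign to mesh cell
        per_scale.append(sum(direction_entropy(m) for m in cells.values()) / len(cells))
    return sum(per_scale) / len(per_scale)
```

For example, four individuals moving in four perpendicular directions within one cell yield 2 bits of direction entropy, whereas a crowd all moving the same way yields 0.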
{"title":"Generating a Map of Well-being Regions using Multiscale Moving Direction Entropy on Mobile Sensors","authors":"Yukio Ohsawa, Sae Kondo, Yi Sun, Kaira Sekiguchi","doi":"arxiv-2312.02516","DOIUrl":"https://doi.org/arxiv-2312.02516","url":null,"abstract":"The well-being of individuals in a crowd is interpreted as a product of the\u0000crossover of individuals from heterogeneous communities, which may occur via\u0000interactions with other crowds. The index moving-direction entropy\u0000corresponding to the diversity of the moving directions of individuals is\u0000introduced to represent such an inter-community crossover. Multi-scale moving\u0000direction entropies, composed of various geographical mesh sizes to compute the\u0000index values, are used to capture the information flow owing to human movements\u0000from/to various crowds. The generated map of high values of multiscale moving\u0000direction entropy is shown to coincide significantly with the preference of\u0000people to live in each region.","PeriodicalId":501487,"journal":{"name":"arXiv - QuantFin - Economics","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2023-12-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138535081","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Nir Chemaya, Lin William Cong, Emma Jorgensen, Dingyue Liu, Luyao Zhang
DeFi is transforming financial services by removing intermediaries and producing a wealth of open-source data. This transformation is propelled by Layer 2 (L2) solutions, aimed at boosting network efficiency and scalability beyond current Layer 1 (L1) capabilities. This study addresses the lack of detailed L2 impact analysis by examining over 50 million transactions from Uniswap. Our dataset, featuring transactions from L1 and L2 across networks like Ethereum and Polygon, provides daily indices revealing adoption, scalability, and decentralization within the DeFi space. These indices help to elucidate the complex relationship between DeFi and L2 technologies, advancing our understanding of the ecosystem. The dataset is enhanced by an open-source Python framework for computing decentralization indices, adaptable for various research needs. This positions the dataset as a vital resource for machine learning endeavors, particularly deep learning, contributing significantly to the development of Blockchain as Web3's infrastructure.
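As a rough illustration of what a decentralization index computed from transaction data can look like, the sketch below uses normalized Shannon entropy of per-actor activity shares. This is a hypothetical stand-in: the paper's open-source framework may define its indices quite differently.

```python
import math

def decentralization_index(volumes):
    """Normalized Shannon entropy of activity shares.

    volumes: per-address (or per-pool) daily transaction counts or volumes.
    Returns 0 when one actor dominates entirely, 1 when activity is spread
    evenly across all actors. Illustrative definition only.
    """
    total = sum(volumes)
    shares = [v / total for v in volumes if v > 0]
    if len(shares) <= 1:
        return 0.0                      # a single active actor: no decentralization
    entropy = -sum(s * math.log(s) for s in shares)
    return entropy / math.log(len(shares))   # divide by the maximum possible entropy
```

On a day where four addresses trade equally, the index is 1.0; if one address does almost everything, it falls toward 0.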
{"title":"Uniswap Daily Transaction Indices by Network","authors":"Nir Chemaya, Lin William Cong, Emma Jorgensen, Dingyue Liu, Luyao Zhang","doi":"arxiv-2312.02660","DOIUrl":"https://doi.org/arxiv-2312.02660","url":null,"abstract":"DeFi is transforming financial services by removing intermediaries and\u0000producing a wealth of open-source data. This transformation is propelled by\u0000Layer 2 (L2) solutions, aimed at boosting network efficiency and scalability\u0000beyond current Layer 1 (L1) capabilities. This study addresses the lack of\u0000detailed L2 impact analysis by examining over 50 million transactions from\u0000Uniswap. Our dataset, featuring transactions from L1 and L2 across networks\u0000like Ethereum and Polygon, provides daily indices revealing adoption,\u0000scalability, and decentralization within the DeFi space. These indices help to\u0000elucidate the complex relationship between DeFi and L2 technologies, advancing\u0000our understanding of the ecosystem. The dataset is enhanced by an open-source\u0000Python framework for computing decentralization indices, adaptable for various\u0000research needs. This positions the dataset as a vital resource for machine\u0000learning endeavors, particularly deep learning, contributing significantly to\u0000the development of Blockchain as Web3's infrastructure.","PeriodicalId":501487,"journal":{"name":"arXiv - QuantFin - Economics","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2023-12-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138535096","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This paper addresses a continuous-time contracting model that extends the problem introduced by Sannikov and later rigorously analysed by Possamaï and Touzi. In our model, a principal hires a risk-averse agent to carry out a project. Specifically, the agent can perform two different tasks, namely increasing the instantaneous growth rate of the project's value and reducing the likelihood of accidents occurring. To compensate for these costly actions, the principal offers a continuous stream of payments throughout the entire duration of the contract, which concludes at a random time, potentially resulting in a lump-sum payment. We examine the consequences of introducing accidents, modelled by a compound Poisson process that negatively impacts the project's value. Furthermore, we investigate whether certain economic scenarios are still characterised by a golden parachute, as in Sannikov's model. A golden parachute refers to a situation in which the agent stops working and subsequently receives compensation, either as a lump-sum payment leading to termination of the contract or as a continuous stream of payments, thereby corresponding to a pension.
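The accident process can be illustrated with a small simulation: project value grows at a controlled rate while accidents arrive as a compound Poisson process with exponentially distributed losses. The drift-only dynamics and all parameter values are illustrative assumptions (the contracting models in question also include Brownian noise, omitted here for brevity).

```python
import random

def simulate_project_value(x0=100.0, mu=0.05, accident_rate=0.5,
                           mean_loss=4.0, horizon=10.0, dt=0.01, seed=1):
    """Euler simulation of a project value with proportional drift mu and
    compound-Poisson accidents.

    Accidents arrive at Poisson rate `accident_rate` per unit time; each
    inflicts an exponentially distributed loss with mean `mean_loss`.
    All parameter values are illustrative, not taken from the paper.
    """
    rng = random.Random(seed)
    x, t = x0, 0.0
    while t < horizon:
        x += mu * x * dt                       # growth from the agent's effort
        if rng.random() < accident_rate * dt:  # accident occurs in this small step
            x -= rng.expovariate(1.0 / mean_loss)
        t += dt
    return x
```

With accidents switched off (accident_rate=0), the value compounds deterministically to about 100 * e^0.5 ≈ 165 over the horizon; turning accidents on can only lower the terminal value, which is the trade-off the agent's second task controls.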
{"title":"Golden parachutes under the threat of accidents","authors":"Dylan Possamaï, Chiara Rossato","doi":"arxiv-2312.02101","DOIUrl":"https://doi.org/arxiv-2312.02101","url":null,"abstract":"This paper addresses a continuous-time contracting model that extends the\u0000problem introduced by Sannikov and later rigorously analysed by Possama\"{i}\u0000and Touzi. In our model, a principal hires a risk-averse agent to carry out a\u0000project. Specifically, the agent can perform two different tasks, namely to\u0000increase the instantaneous growth rate of the project's value, and to reduce\u0000the likelihood of accidents occurring. In order to compensate for these costly\u0000actions, the principal offers a continuous stream of payments throughout the\u0000entire duration of a contract, which concludes at a random time, potentially\u0000resulting in a lump-sum payment. We examine the consequences stemming from the\u0000introduction of accidents, modelled by a compound Poisson process that\u0000negatively impact the project's value. Furthermore, we investigate whether\u0000certain economic scenarii are still characterised by a golden parachute as in\u0000Sannikov's model. 
A golden parachute refers to a situation where the agent\u0000stops working and subsequently receives a compensation, which may be either a\u0000lump-sum payment leading to termination of the contract or a continuous stream\u0000of payments, thereby corresponding to a pension.","PeriodicalId":501487,"journal":{"name":"arXiv - QuantFin - Economics","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2023-12-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138535141","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Kyle R. Myers, Wei Yang Tham, Jerry Thursby, Marie Thursby, Nina Cohodes, Karim Lakhani, Rachel Mural, Yilun Xu
We introduce a new survey of professors at roughly 150 of the most research-intensive institutions of higher education in the US. We document seven new features of how research-active professors are compensated, how they spend their time, and how they perceive their research pursuits: (1) there is more inequality in earnings within fields than there is across fields; (2) institutions, ranks, tasks, and sources of earnings can account for roughly half of the total variation in earnings; (3) there is significant variation across fields in the correlations between earnings and different kinds of research output, but these account for a small amount of earnings variation; (4) measuring professors' productivity in terms of output-per-year versus output-per-research-hour can yield substantial differences; (5) professors' beliefs about the riskiness of their research are best predicted by their fundraising intensity, their risk-aversion in their personal lives, and the degree to which their research involves generating new hypotheses; (6) older and younger professors have very different research outputs and time allocations, but their intended audiences are quite similar; (7) personal risk-taking is highly predictive of professors' orientation towards applied, commercially-relevant research.
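Finding (2) is a variance-decomposition statement. A minimal sketch of how a share of earnings variation "accounted for" by observable groups (institutions, ranks, and so on) can be computed is the between-group over total variance ratio below, on toy data; the paper's actual estimation is richer than this single-factor version.

```python
def variance_explained(earnings, groups):
    """Share of earnings variance accounted for by group membership:
    between-group variance divided by total variance. Toy illustration
    of attributing earnings variation to categories like institution or rank.
    """
    n = len(earnings)
    mean = sum(earnings) / n
    total = sum((e - mean) ** 2 for e in earnings)
    by_group = {}
    for e, g in zip(earnings, groups):
        by_group.setdefault(g, []).append(e)
    # variance of group means around the grand mean, weighted by group size
    between = sum(len(v) * ((sum(v) / len(v)) - mean) ** 2
                  for v in by_group.values())
    return between / total
```

If group membership perfectly predicts earnings, the ratio is 1; if group means are identical, it is 0; "roughly half" corresponds to a value near 0.5.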
{"title":"New Facts and Data about Professors and their Research","authors":"Kyle R. Myers, Wei Yang Tham, Jerry Thursby, Marie Thursby, Nina Cohodes, Karim Lakhani, Rachel Mural, Yilun Xu","doi":"arxiv-2312.01442","DOIUrl":"https://doi.org/arxiv-2312.01442","url":null,"abstract":"We introduce a new survey of professors at roughly 150 of the most\u0000research-intensive institutions of higher education in the US. We document\u0000seven new features of how research-active professors are compensated, how they\u0000spend their time, and how they perceive their research pursuits: (1) there is\u0000more inequality in earnings within fields than there is across fields; (2)\u0000institutions, ranks, tasks, and sources of earnings can account for roughly\u0000half of the total variation in earnings; (3) there is significant variation\u0000across fields in the correlations between earnings and different kinds of\u0000research output, but these account for a small amount of earnings variation;\u0000(4) measuring professors' productivity in terms of output-per-year versus\u0000output-per-research-hour can yield substantial differences; (5) professors'\u0000beliefs about the riskiness of their research are best predicted by their\u0000fundraising intensity, their risk-aversion in their personal lives, and the\u0000degree to which their research involves generating new hypotheses; (6) older\u0000and younger professors have very different research outputs and time\u0000allocations, but their intended audiences are quite similar; (7) personal\u0000risk-taking is highly predictive of professors' orientation towards applied,\u0000commercially-relevant research.","PeriodicalId":501487,"journal":{"name":"arXiv - QuantFin - Economics","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2023-12-03","publicationTypes":"Journal 
Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138535095","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The main objective of this paper is to develop a martingale-type solution to optimal consumption-investment choice problems ([Merton, 1969] and [Merton, 1971]) under time-varying incomplete preferences driven by externalities such as patience, socialization effects, and market volatility. The market is composed of multiple risky assets and multiple consumption goods, and there are in addition multiple fluctuating preference parameters with inexact values connected to imprecise tastes. Utility maximization is a multi-criteria problem with possibly function-valued criteria. To arrive at a complete characterization of the solutions, we first motivate and introduce a set-valued stochastic process for the dynamics of multi-utility indices and formulate the optimization problem in a topological vector space. We then modify a classical scalarization method to allow for infiniteness and randomness in dimensions and prove equivalence to the original problem. Illustrative examples are given throughout to demonstrate the practical interest and applicability of the method. The link between the original problem and a dual problem is also briefly discussed. Finally, using Malliavin calculus with stochastic geometry, we find optimal investment policies to be generally set-valued, each of whose selectors admits a four-way decomposition involving an additional indecisiveness risk-hedging portfolio. Our results point to new directions for optimal consumption-investment choices in the presence of incomparability and time inconsistency, and signal potentially testable assumptions on the variability of asset prices. Simulation techniques for set-valued processes are studied to show how the resulting optimal policies can be computed in practice.
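The classical scalarization idea the paper builds on can be sketched in a finite, deterministic setting: collapse several utility criteria into one scalar objective via weights, then maximize that scalar. The paper's contribution is extending this to infinitely many random criteria; the snippet below only illustrates the basic mechanism, with made-up criteria and choices.

```python
def weighted_sum_scalarization(utilities, weights, choices):
    """Classical weighted-sum scalarization of a multi-criteria problem.

    utilities: list of functions, each mapping a choice to one criterion value.
    weights: nonnegative weights, one per criterion.
    choices: the feasible set to search over.
    Returns the choice maximizing the weighted sum of criteria.
    """
    def scalar(c):
        return sum(w * u(c) for w, u in zip(weights, utilities))
    return max(choices, key=scalar)

# Two made-up criteria preferring c near 1 and c near 3, respectively.
criteria = [lambda c: -(c - 1) ** 2, lambda c: -(c - 3) ** 2]
best = weighted_sum_scalarization(criteria, [0.5, 0.5], [0, 1, 2, 3, 4])  # compromise: 2
```

Varying the weights traces out different Pareto-optimal compromises between the criteria, which is the basic equivalence the paper generalizes to its set-valued, stochastic setting.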
{"title":"Optimal Consumption--Investment Problems under Time-Varying Incomplete Preferences","authors":"Weixuan Xia","doi":"arxiv-2312.00266","DOIUrl":"https://doi.org/arxiv-2312.00266","url":null,"abstract":"The main objective of this paper is to develop a martingale-type solution to\u0000optimal consumption--investment choice problems ([Merton, 1969] and [Merton,\u00001971]) under time-varying incomplete preferences driven by externalities such\u0000as patience, socialization effects, and market volatility. The market is\u0000composed of multiple risky assets and multiple consumption goods, while in\u0000addition there are multiple fluctuating preference parameters with inexact\u0000values connected to imprecise tastes. Utility maximization is a multi-criteria\u0000problem with possibly function-valued criteria. To come up with a complete\u0000characterization of the solutions, first we motivate and introduce a set-valued\u0000stochastic process for the dynamics of multi-utility indices and formulate the\u0000optimization problem in a topological vector space. Then, we modify a classical\u0000scalarization method allowing for infiniteness and randomness in dimensions and\u0000prove results of equivalence to the original problem. Illustrative examples are\u0000given to demonstrate practical interests and method applicability\u0000progressively. The link between the original problem and a dual problem is also\u0000discussed, relatively briefly. Finally, using Malliavin calculus with\u0000stochastic geometry, we find optimal investment policies to be generally\u0000set-valued, each of whose selectors admits a four-way decomposition involving\u0000an additional indecisiveness risk-hedging portfolio. Our results touch on new\u0000directions for optimal consumption--investment choices in the presence of\u0000incomparability and time inconsistency, also signaling potentially testable\u0000assumptions on the variability of asset prices. 
Simulation techniques for\u0000set-valued processes are studied for how solved optimal policies can be\u0000computed in practice.","PeriodicalId":501487,"journal":{"name":"arXiv - QuantFin - Economics","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2023-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138535083","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Martin Lnenicka, Anastasija Nikiforova, Mariusz Luterek, Petar Milic, Daniel Rudmark, Sebastian Neumaier, Caterina Santoro, Cesar Casiano Flores, Marijn Janssen, Manuel Pedro Rodríguez Bolívar
Open government and open (government) data are seen as tools to create new opportunities, eliminate or at least reduce information inequalities, and improve public services. More than a decade of such efforts has produced a wealth of experience, practices, and perspectives from which to learn. This paper focuses on the benchmarking of open data initiatives over the years and attempts to identify patterns observed among European countries that could lead to disparities in the development, growth, and sustainability of open data ecosystems. To do this, we studied benchmarks and indices published over recent years (57 editions of 8 artifacts) and conducted a comparative case study of eight European countries, identifying patterns among them across potentially relevant contexts such as e-government, open government data, and open data indices and rankings, as well as other contexts relevant for the country under consideration. Using a Delphi method, we reached consensus within a panel of experts and validated a final list of 94 patterns, including their frequency of occurrence among the studied countries and their effects on the respective countries. Finally, we took a closer look at developments in the identified contexts over the years and defined 21 recommendations for more resilient and sustainable open government data initiatives and ecosystems, and for future steps in this area.
{"title":"Identifying patterns and recommendations of and for sustainable open data initiatives: a benchmarking-driven analysis of open government data initiatives among European countries","authors":"Martin Lnenicka, Anastasija Nikiforova, Mariusz Luterek, Petar Milic, Daniel Rudmark, Sebastian Neumaier, Caterina Santoro, Cesar Casiano Flores, Marijn Janssen, Manuel Pedro Rodríguez Bolívar","doi":"arxiv-2312.00551","DOIUrl":"https://doi.org/arxiv-2312.00551","url":null,"abstract":"Open government and open (government) data are seen as tools to create new\u0000opportunities, eliminate or at least reduce information inequalities and\u0000improve public services. More than a decade of these efforts has provided much\u0000experience, practices, and perspectives to learn how to better deal with them.\u0000This paper focuses on benchmarking of open data initiatives over the years and\u0000attempts to identify patterns observed among European countries that could lead\u0000to disparities in the development, growth, and sustainability of open data\u0000ecosystems. To do this, we studied benchmarks and indices published over the\u0000last years (57 editions of 8 artifacts) and conducted a comparative case study\u0000of eight European countries, identifying patterns among them considering\u0000different potentially relevant contexts such as e-government, open government\u0000data, open data indices and rankings, and others relevant for the country under\u0000consideration. Using a Delphi method, we reached a consensus within a panel of\u0000experts and validated a final list of 94 patterns, including their frequency of\u0000occurrence among studied countries and their effects on the respective\u0000countries. 
Finally, we took a closer look at the developments in identified\u0000contexts over the years and defined 21 recommendations for more resilient and\u0000sustainable open government data initiatives and ecosystems and future steps in\u0000this area.","PeriodicalId":501487,"journal":{"name":"arXiv - QuantFin - Economics","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2023-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138535142","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}