Prediction in international relations is hard, sometimes: A commentary on Tetlock et al. (2023)

Paul Poast
FUTURES & FORESIGHT SCIENCE, vol. 6, no. 1. Published 2023-10-05. DOI: 10.1002/ffo2.171
Prediction is hard, especially about the future. But not always. Predicting human behavior at the extremes is fairly easy. Within reason, it's quite straightforward to predict what someone will do tomorrow, at least with respect to their day-to-day routine. It's called a “routine” for a reason. At the other extreme, over eons of human existence, it's quite plausible to predict that the continents will reconnect, dramatically altering the current geographic balance of power. Even further out, although humans could well explore the universe and even establish new homes outside of Earth, we also know, at least according to our current knowledge, that the universe will suffer from heat death.

However, those extremes are not what we care about. The relevant time frame, as acknowledged by the Tetlock et al. piece, is between these extremes, say several years or even a few decades from now. On the one hand, examples of amazingly accurate predictions based on long-term forecasts do seem possible. Perhaps the classic example is John Maynard Keynes' Economic Consequences of the Peace. Noting that the Treaty of Versailles had “nothing to make the defeated Central Empires into good neighbors, nothing to stabilize the new States of Europe, nothing to reclaim Russia,” he predicted, quite ominously and perhaps more accurately than even he realized, that “great privation and great risks to society have become unavoidable” (Keynes, 1919, pp. 226 & 255).

And yet, for each prediction that exhibits such accuracy, there are many that are, quite frankly, way off. Consider a data-rich enterprise in which accurate forecasts are sought after and valued: population growth. Forecasts of population growth over decades are notoriously difficult despite great effort to make them sound. The uncertainty in such forecasts needs to be explicit because, as demographer Lee (2011, p. 572) observed, “population projections motivate painful decisions about tax increases, benefit cuts, retirement age, and measures to offset global warming, we need careful measures of their uncertainty”.

Rather than “cherry picking” a particularly good or bad prediction from the past, Tetlock et al. provide a systematic assessment of medium-term prediction accuracy. Specifically, they assess the Expert Political Judgment project, evaluating the forecasts offered by project participants in two years, 1988 and 1997. Moreover, rather than considering a range of topics, the authors reassess the experts’ predictive judgments on two “slower moving” topics: stability versus change in national borders, and nuclear-power status. By the year 2022, 25 years had passed since the latter set of forecasts and 34 years since the first. This offers ample time for the predictions offered in those years to pan out. If medium-term geopolitical forecasting is in any way possible, it will be found here.

What they find encouraging from the perspective of medium-term forecasting is that, in both issue-area domains, the forecasters performed well, with correct classification rates of over 90%. But there is a twist. Expert forecasters outperformed nonexpert forecasters in the nonproliferation domain (especially with respect to the false-positive rate), but not in the border-change domain. Hence, as the authors remark (on page 16), “Expertise failed to translate into accuracy on over half of the questions: those on border-change/secession.” In other words, while forecasting is good, expertise appears overrated.
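To make concrete how both groups can score over 90% correct while still differing on false positives, here is a minimal sketch of the underlying confusion-matrix arithmetic. The counts are hypothetical and purely illustrative; they are not taken from the Tetlock et al. study.

```python
# Illustrative confusion-matrix arithmetic for binary geopolitical forecasts.
# All counts below are hypothetical, NOT the Tetlock et al. data.

def accuracy(tp, fp, tn, fn):
    """Share of all forecasts that were correct."""
    return (tp + tn) / (tp + fp + tn + fn)

def false_positive_rate(fp, tn):
    """Share of non-events wrongly forecast to occur."""
    return fp / (fp + tn)

# Hypothetical nonproliferation forecasts: the event is rare, so a
# forecaster who over-predicts proliferation racks up false positives
# even while overall accuracy stays high.
nonexpert = dict(tp=3, fp=8, tn=88, fn=1)   # hypothetical counts
expert    = dict(tp=3, fp=2, tn=94, fn=1)   # hypothetical counts

for name, c in [("nonexpert", nonexpert), ("expert", expert)]:
    print(name,
          "accuracy:", round(accuracy(**c), 2),
          "FPR:", round(false_positive_rate(c["fp"], c["tn"]), 2))
```

With these made-up counts both forecasters clear 90% accuracy, yet their false-positive rates differ fourfold, which is the kind of gap overall accuracy alone would hide.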

Of course, the question is why? Why did expertise seem to offer an advantage in the realm of non-proliferation forecasts (such as whether Iran will acquire the bomb in a given time period), but not border change forecasts (such as whether Ukraine and Russia would go to war over their border)?

One explanation is that nuclear proliferation is a topic sufficiently technical in nature that a well-informed “civilian,” that is, a nonexpert, is almost guaranteed to miss key insights. Someone reading the news and receiving updates on Iran's nuclear program would still likely miss or misinterpret critical aspects of that program's progress. Consistent with the high false-positive rate of the nonexperts in this issue domain, nonexperts overestimate the ease of developing a nuclear weapon.

This stands in contrast to border disputes. Border clashes and border disputes do not appear to require the same level of technical expertise and, hence, it is (relatively) easier for nonexperts to conceptualize and understand the prospects of a militarized conflict. Moreover, although neither event—a border change or a country acquiring a nuclear weapon—is common, disputes and conflicts over borders are more common than the development of a nuclear program. Hence, compared with nuclear programs, there is less opportunity to accumulate a high false-positive rate.

This assessment of the ability of experts and nonexperts to forecast political events is valuable, but it also points to a more fundamental question: are these even the right type of predictions to be evaluating?

The analysts have a theory in their minds that leads them to draw their respective inferences. But a good theory will not specify a simple yes-or-no outcome. It will be conditional: “yes if this, no if that.” This is why Friedman and Zeckhauser (2012) argued that intelligence analysts should focus on assessing uncertainty rather than eliminating it: the world is too complex and history too contingent for specific events to be fully predictable.

Consider how the field of international relations handled the end of the Cold War. Although there was much lamenting and evaluation over why scholars failed to predict the end, including considerable opprobrium directed toward realism, a core theoretical framework of international politics, more important were the prognostications for what was to come (Lebow, 1994). The predictions were wide ranging, from claims of history being over to the emergence of a new world order. Most infamously, the eminent realist scholar Mearsheimer (1990) predicted that Europe would head “Back to the Future,” meaning a return to war and violence on a scale not seen since 1945. With the Balkan Wars of the 1990s and the current Russo–Ukraine War, Europe as a whole has been far from peaceful since the end of the Cold War. However, the Western and Central European powers did not come to blows and, in that sense, Europe's future did not look like its past. One could say that Mearsheimer's prediction failed.

However, assessing Mearsheimer's prediction as a failure is to miss a critical point: his prediction was conditional. He specified, in the first footnote of the piece, that his argument presumed NATO's dissolution: if NATO dissolves, Europe will return to instability. That did not come to pass. Indeed, the opposite transpired, with NATO expanding its membership starting with the former Warsaw Pact countries of Hungary, the Czech Republic, and Poland in 1999. Given his premise that NATO's demise was necessary for Europe to go “Back to the Future,” it seems that his forecast was actually quite accurate.
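The conditional structure of such a forecast can be made explicit in probabilistic terms. The notation below is a generic sketch, not drawn from Mearsheimer or Tetlock et al.:

```latex
% A conditional forecast commits to P(E | C), the probability of the
% event E given its premise C, not to the unconditional P(E):
\[
  P(\text{instability} \mid \text{NATO dissolves})
  \;\neq\;
  P(\text{instability}).
\]
% Observing "no instability" in a world where NATO persisted therefore
% does not, by itself, falsify the conditional claim: the premise C
% never held, so the forecast was never put to the test.
```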

All of this simply underscores the difficulty of prediction. Tetlock et al. provide a valuable assessment of whether and how accurate predictions can be made on medium-term events. However, future assessments must do more to account for the contingent and conditional nature of prediction. Analysts and policy makers will gain from not simply knowing if an event could occur, but why it could occur.
