
Latest publications in LISP and Functional Programming

Incremental reduction in the lambda calculus
Pub Date: 1990-05-01 DOI: 10.1145/91556.91679
J. Field, T. Teitelbaum
An incremental algorithm is one that takes advantage of the fact that the function it computes is to be evaluated repeatedly on inputs that differ only slightly from one another, avoiding unnecessary duplication of common computations. We define here a new notion of incrementality for reduction in the untyped λ-calculus and describe an incremental reduction algorithm, Λinc. We show that Λinc has the desirable property of performing non-overlapping reductions on related terms, yet is simple enough to allow a practical implementation. The algorithm is based on a novel λ-reduction strategy that may prove useful in a non-incremental setting as well. Incremental λ-reduction can be used to advantage in any setting where an algorithm is specified in a functional or applicative manner.
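To fix intuitions, here is a minimal sketch of the baseline such an algorithm improves on: a one-step, leftmost-outermost (normal-order) β-reducer over untyped λ-terms, written in Scheme with terms as S-expressions. It is not Λinc itself (the incremental bookkeeping across related terms is the paper's contribution), and it assumes a gensym procedure, nonstandard but provided by most Schemes, for fresh names.

    ;; Terms: symbol | (lambda (x) body) | (f a)
    ;; Capture-avoiding substitution term[x := val].
    (define (subst term x val)
      (cond ((symbol? term) (if (eq? term x) val term))
            ((eq? (car term) 'lambda)
             (let ((y (caadr term)) (body (caddr term)))
               (if (eq? y x)
                   term                        ; x is shadowed; nothing to do
                   (let ((fresh (gensym)))     ; rename y to avoid capture
                     (list 'lambda (list fresh)
                           (subst (subst body y fresh) x val))))))
            (else (list (subst (car term) x val)
                        (subst (cadr term) x val)))))

    ;; One leftmost-outermost beta step; #f when term is in normal form.
    (define (step term)
      (cond ((symbol? term) #f)
            ((eq? (car term) 'lambda)
             (let ((b (step (caddr term))))
               (and b (list 'lambda (cadr term) b))))
            ((and (pair? (car term)) (eq? (caar term) 'lambda))
             (subst (caddr (car term)) (caadr (car term)) (cadr term)))
            (else
             (let ((f (step (car term))))
               (if f
                   (list f (cadr term))
                   (let ((a (step (cadr term))))
                     (and a (list (car term) a))))))))

    (define (normalize term)                   ; iterate to beta-normal form
      (let ((next (step term)))
        (if next (normalize next) term)))

    (normalize '((lambda (x) x) (lambda (y) y)))  ; => (lambda (y) y)

An incremental reducer in the paper's sense would additionally reuse reduction work across calls to normalize on inputs that differ only slightly.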
Citations: 44
A module system for Scheme
Pub Date: 1990-05-01 DOI: 10.1145/91556.91573
Pavel Curtis, James Rauen
This paper presents a module system designed for large-scale programming in Scheme. The module system separates specifications of objects from their implementations, permitting the separate development, compilation, and testing of modules. The module system also includes a robust macro facility. We discuss our design goals, the design of the module system, implementation issues, and our future plans.
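The paper's concrete module syntax is not reproduced on this page; as a rough modern analogue of the specification/implementation split it describes, an R7RS define-library likewise separates an exported interface from a hidden body. The stack example below is ours, chosen for illustration, not taken from the paper.

    (define-library (stack)
      (export make-stack push! pop!)   ; the specification clients may rely on
      (import (scheme base))
      (begin                           ; the implementation, invisible outside
        (define (make-stack) (list '()))
        (define (push! s x) (set-car! s (cons x (car s))))
        (define (pop! s)
          (let ((x (caar s)))
            (set-car! s (cdar s))
            x))))

A client then writes (import (stack)) and sees only the three exported names, so the representation can change without touching code compiled against the specification.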
Citations: 38
A compositional analysis of evaluation-order and its application
Pub Date: 1990-05-01 DOI: 10.1145/91556.91658
M. Draghicescu, S. Iyer
We present a compositional definition of the order of evaluation of variables in a lazy first-order functional language. Unlike other published work, our analysis applies to all evaluation strategies which may use strictness information to change the normal (lazy) order of evaluation. At the same time it can be adapted to pure lazy evaluation, yielding a sharper analysis in this case. It can also be adapted to take advantage of any information about the order in which primitive functions evaluate their arguments. The time complexity of the method is that of strictness analysis. We also present a compositional definition of the set of variables which denote locations where the result of an expression might be stored. This analysis yields a simple solution to the aliasing problem. Using these two analyses we develop a new algorithm for the destructive update problem.
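To see what the destructive-update application buys, compare the two update routines below, a hand-written Scheme illustration rather than the paper's algorithm: when the analysis proves that the argument vector cannot be observed again after the call, a compiler may select the in-place version and avoid an O(n) copy.

    (define (update-copy v i x)     ; safe default: copy, then modify
      (let ((w (vector-copy v)))
        (vector-set! w i x)
        w))

    (define (update! v i x)         ; legal only when v is provably dead
      (vector-set! v i x)           ; after the call site
      v)

The evaluation-order and aliasing analyses supply exactly the "is v observed later?" facts that make choosing update! sound.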
Citations: 15
Trap architectures for Lisp systems
Pub Date: 1990-05-01 DOI: 10.1145/91556.91595
Douglas Johnson
Recent measurements of Lisp systems show a dramatic skewing of operation frequency. For example, small integer (fixnum) arithmetic dominates most programs, but other number types can occur on almost any operation. Likewise, few memory references trigger special handling for garbage collection, but nearly all memory operations could trigger such special handling. Systems like SPARC and SPUR have shown that small amounts of special hardware can significantly reduce the need for inline software checks by trapping when an unusual condition is detected. A system's trapping architecture now becomes key to performance. In most systems, the trap architecture is intended to handle errors (e.g., address faults) or conditions requiring large amounts of processing (e.g., page faults). The requirements for Lisp traps are quite different. In particular, the trap frequency is higher, processing time per trap is shorter, and most traps need to be handled in the user's address space and context. This paper looks at these requirements, evaluates current trap architectures, and proposes enhancements for meeting those requirements. These enhancements increase performance for Lisp by 11%-35% at a cost of about 1.6% more CPU logic. They also aid debugging in general and speed floating point exception handling.
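The inline software check that such trap hardware eliminates looks roughly like the Scheme sketch below. The 30-bit fixnum range and the slow-path name are stand-ins chosen for illustration; on SPARC-style hardware the fast path becomes a single tagged add that traps to the slow path instead of branching.

    (define fixnum-max (- (expt 2 29) 1))      ; a typical 30-bit tagged fixnum
    (define (small-int? x)
      (and (exact-integer? x) (<= (- -1 fixnum-max) x fixnum-max)))
    (define (generic-add a b) (+ a b))         ; placeholder for the full
                                               ; bignum/float/ratio dispatch
    (define (gen+ a b)
      (if (and (small-int? a) (small-int? b))  ; paid on every single addition
          (+ a b)                              ; fast path: one machine add
          (generic-add a b)))                  ; rare path: other number types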
Citations: 21
Context information for lazy code generation
Pub Date: 1990-05-01 DOI: 10.1145/91556.91665
H. R. Nielson, F. Nielson
Functional languages like Miranda and Haskell employ a non-strict semantics. This is important for the functional programming style as it allows one to compute with infinite data structures. However, a straightforward implementation of the language will result in a rather inefficient implementation and therefore it is often combined with strictness analysis. A sticky version of the analysis is used to collect the information and annotate the program so that the information can be used by the subsequent passes of the compiler. The strictness analysis and its correctness properties are well understood by means of abstract interpretation whereas its sticky version is more subtle. — The purpose of the present paper is therefore to investigate how far one can go without introducing a sticky version of the analysis and thereby avoid the correctness problems connected with it.
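A tiny hand-evaluated example of the information at stake (ours, not the paper's analysis): under a non-strict semantics every argument arrives as a thunk, modeled here with R7RS delay and force from (scheme lazy). Since pick always demands x and never y, a strictness-informed code generator could pass x by value and build a thunk only for y.

    (define (pick x y) (force x))          ; strict in x, ignores y

    (pick (delay (* 6 7))
          (delay (error "never forced")))  ; => 42; the second thunk stays cold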
Citations: 9
Comparing mark-and-sweep and stop-and-copy garbage collection
Pub Date: 1990-05-01 DOI: 10.1145/91556.91597
B. Zorn
Stop-and-copy garbage collection has been preferred to mark-and-sweep collection in the last decade because its collection time is proportional to the size of reachable data and not to the memory size. This paper compares the CPU overhead and the memory requirements of the two collection algorithms extended with generations, and finds that mark-and-sweep collection requires at most a small amount of additional CPU overhead (3-6%) but requires an average of 20% (and up to 40%) less memory to achieve the same page fault rate. The comparison is based on results obtained using trace-driven simulation with large Common Lisp programs.
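The asymmetry being measured shows up even in a toy collector; the sketch below is illustrative Scheme only, whereas the paper's numbers come from trace-driven simulation of large Common Lisp runs. Marking touches only reachable cells, a cost a copying collector also pays, while the sweep must scan every allocated cell; copying instead pays by reserving a second semispace.

    ;; A "heap" here is simply the list of all allocated cons cells.
    (define (mark roots)                    ; cost proportional to live data
      (let loop ((stack roots) (marked '()))
        (cond ((null? stack) marked)
              ((or (not (pair? (car stack))) (memq (car stack) marked))
               (loop (cdr stack) marked))
              (else
               (let ((c (car stack)))       ; mark c, then trace its fields
                 (loop (cons (car c) (cons (cdr c) (cdr stack)))
                       (cons c marked)))))))

    (define (sweep heap marked)             ; cost proportional to heap size
      (let loop ((h heap) (live '()))
        (cond ((null? h) live)
              ((memq (car h) marked)
               (loop (cdr h) (cons (car h) live)))
              (else (loop (cdr h) live))))) ; a real GC would free (car h)

    (define a (cons 1 2))
    (define b (cons a 3))
    (define junk (cons 4 5))
    (sweep (list a b junk) (mark (list b))) ; => the two live cells; junk dies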
Citations: 76
Speculative computation in Multilisp
Pub Date: 1989-06-05 DOI: 10.1145/91556.91644
R. Osborne
We present experimental evidence that performing computations in parallel before their results are known to be required can yield performance improvements over conventional approaches to parallel computing. We call such eager computation of expressions speculative computation, as opposed to the conventional mandatory computation used in almost all contemporary parallel programming languages and systems. The two major requirements for speculative computation are: 1) a means to control computation to favor the most promising computations, and 2) a means to abort computation and reclaim computation resources. We discuss these requirements in the parallel symbolic language Multilisp and present a sponsor model for speculative computation in Multilisp which handles control and reclamation of computation in a single, elegant framework. We outline an implementation of this sponsor model and present performance results for several applications of speculative computation. The results demonstrate that our support for speculative computation adds expressive and computational power to Multilisp, with observed performance improvements as great as 26 times over conventional approaches to parallel computation.
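The vehicle for this in Multilisp is future, which starts evaluating an expression before its value is known to be needed, and touch, which blocks until the value arrives. The sketch below keeps that program shape but substitutes sequential stand-ins (delay/force) so it runs in plain R7RS Scheme; real futures run in parallel, and the paper's sponsor and abort machinery is not shown.

    ;; Sequential stand-ins: a real Multilisp future evaluates e eagerly,
    ;; in parallel with its consumer.
    (define-syntax future
      (syntax-rules () ((_ e) (delay e))))
    (define (touch p) (force p))

    ;; Speculative OR: both branches are started before we know which one
    ;; is needed; a sponsor-based system would abort the losing branch
    ;; and reclaim its resources.
    (define (spec-or left right)
      (let ((a (future (left)))
            (b (future (right))))
        (if (touch a) #t (touch b))))

    (spec-or (lambda () (= 6 (* 2 3)))
             (lambda () (error "losing branch")))  ; => #t; b is never touched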
Citations: 80