Conference papers

What We Eval in the Shadows: A Large-Scale Study of Eval in R Programs

Abstract: Most dynamic languages allow users to turn text into code using various functions, often named eval, with language-dependent semantics. The widespread use of these reflective functions hinders static analysis and prevents compilers from performing optimizations. This paper aims to provide a better sense of why programmers use eval. Understanding why eval is used in practice is key to finding ways to mitigate its negative impact. We have reasons to believe that reflective feature usage is language- and application-domain-specific; we focus on data science code written in R and compare our results to previous work that analyzed web programming in JavaScript. We analyze 49,296,059 calls to eval from 240,327 scripts extracted from 15,401 R packages. We find that eval is indeed in widespread use; R's eval is more pervasive, and arguably more dangerous, than what was previously reported for JavaScript.
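The abstract's core point is that eval-style functions parse and execute a string at runtime, so the executed code is invisible to static analysis. A minimal sketch of this, using Python's built-in eval as an analogue of R's eval(parse(text = ...)) (the string and variable names here are illustrative, not drawn from the study):

```python
# Text that will become code only at runtime.
source_text = "1 + 2 * 3"

# eval() parses and evaluates the string as an expression.
# A static analyzer looking at this file sees only a string literal,
# not the arithmetic it encodes -- the problem the paper studies at scale in R.
result = eval(source_text)
print(result)  # 7
```

In R the equivalent is two explicit steps, `parse(text = ...)` to build an expression and `eval(...)` to run it, which is why the paper can count and classify the expressions that reach eval.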
Contributor: Pierre Donat-Bouillud
Submitted on: Monday, October 11, 2021 - 2:02:15 PM
Last modification on: Thursday, October 14, 2021 - 10:25:07 AM
Long-term archiving on: Wednesday, January 12, 2022 - 7:51:49 PM


Publisher files allowed on an open archive



Aviral Goel, Pierre Donat-Bouillud, Filip Křikava, Christoph M. Kirsch, Jan Vitek. What We Eval in the Shadows: A Large-Scale Study of Eval in R Programs. Proceedings of the ACM on Programming Languages, Oct 2021, Chicago, United States. ⟨10.1145/3485502⟩. ⟨hal-03373248⟩