12 Yorihiro et al.
25. K. Havelund, D. Peled, and D. Ulus. First-order temporal logic monitoring with
BDDs. FMSD, 56(1–3):1–21, 2020.
26. K. Havelund and G. Roşu. Monitoring programs using rewriting. In ASE, pages
135–143, 2001.
27. java.util.Collections. https://docs.oracle.com/javase/8/docs/api/java/util/Collections.html.
28. JGit. http://www.eclipse.org/jgit.
29. D. Jin, P. O. Meredith, C. Lee, and G. Roşu. JavaMOP: Efficient parametric
runtime monitoring framework. In ICSE Demo, pages 1427–1430, 2012.
30. M. Kim, S. Kannan, I. Lee, O. Sokolsky, and M. Viswanathan. Java-MaC: A
run-time assurance tool for Java programs. In RV, pages 218–235, 2001.
31. C. Lee, D. Jin, P. O. Meredith, and G. Roşu. Towards categorizing and formalizing
the JDK API. Technical report, Computer Science Dept., UIUC, 2012.
32. O. Legunsen, N. Al Awar, X. Xu, W. U. Hassan, G. Roşu, and D. Marinov. How
effective are existing Java API specifications for finding bugs during runtime veri-
fication? ASEJ, 26(4):795–837, 2019.
33. O. Legunsen, F. Hariri, A. Shi, Y. Lu, L. Zhang, and D. Marinov. An extensive
study of static regression test selection in modern software evolution. In FSE,
pages 583–594, 2016.
34. O. Legunsen, W. U. Hassan, X. Xu, G. Roşu, and D. Marinov. How good are the
specs? A study of the bug-finding effectiveness of existing Java API specifications.
In ASE, pages 602–613, 2016.
35. O. Legunsen, D. Marinov, and G. Roşu. Evolution-aware monitoring-oriented
programming. In ICSE NIER, pages 615–618, 2015.
36. O. Legunsen, A. Shi, and D. Marinov. STARTS: STAtic Regression Test Selection.
In ASE Demo, pages 949–954, 2017.
37. O. Legunsen, Y. Zhang, M. Hadzi-Tanovic, G. Roşu, and D. Marinov. Techniques
for evolution-aware runtime verification. In ICST, pages 300–311, 2019.
38. M. Leucker and C. Schallhart. A brief account of runtime verification. In Formal
Languages and Analysis of Contract-Oriented Software, pages 293–303, 2007.
39. Y. Liu, J. Zhang, P. Nie, M. Gligoric, and O. Legunsen. More precise regression
test selection via reasoning about semantics-modifying changes. In ISSTA, pages
664–676, 2023.
40. B. Miranda, I. Lima, O. Legunsen, and M. d’Amorim. Prioritizing runtime verifi-
cation violations. In ICST, pages 297–308, 2020.
41. P. W. O’Hearn. Continuous reasoning: Scaling the impact of formal methods. In
LICS, pages 13–25, 2018.
42. F. Palomba and A. Zaidman. Does refactoring of test smells induce fixing flaky
tests? In ICSME, pages 1–12, 2017.
43. G. Reger, H. C. Cruz, and D. Rydeheard. MarQ: Monitoring at runtime with QEA.
In TACAS, pages 596–610, 2015.
44. F. B. Schneider. Enforceable security policies. TISSEC, 3(1):30–50, 2000.
45. A. Shi, A. Gyori, O. Legunsen, and D. Marinov. Detecting assumptions on de-
terministic implementations of non-deterministic specifications. In ICST, pages
80–90, 2016.
46. A. Shi, M. Hadzi-Tanovic, L. Zhang, D. Marinov, and O. Legunsen. Reflection-aware
static regression test selection. PACMPL, 3(OOPSLA):1–29, 2019.
47. A. Shi, W. Lam, R. Oei, T. Xie, and D. Marinov. iFixFlakies: A framework for
automatically fixing order-dependent flaky tests. In FSE, pages 545–555, 2019.
48. Maven Surefire. https://maven.apache.org/surefire.