A CATALOGUE OF MATHEMATICAL FORMULAS INVOLVING π , WITH ANALYSIS

ABSTRACT. This paper presents a catalogue of mathematical formulas and iterative algorithms for evaluating the mathematical constant π , ranging from Archimedes’ 2200-year-old iteration to some formulas that were discovered only in the past few decades. Computer implementations and timing results for these formulas and algorithms are also included. In particular, timings are presented for evaluations of various infinite series formulas to approximately 10,000-digit precision, for evaluations of various integral formulas to approximately 4,000-digit precision, and for evaluations of several iterative algorithms to approximately 100,000-digit precision, all based on carefully designed comparative computer runs.

The mathematical constant known as π = 3.141592653589793 . . . is undeniably the most famous and arguably the most important mathematical constant. Mathematicians since the days of Archimedes, up to and including the present day, have analyzed its properties and computed its numerical value.
The question of whether π is given by a simple ratio or algebraic construction has transfixed mathematicians since ancient times. Squaring the circle, i.e., constructing a square with the same area as a given circle using classical ruler-and-compass procedures, was one of the three premier unsolved problems of ancient Greek mathematics. In the 1760s, the Swiss-French mathematician Johann Heinrich Lambert first proved that π is irrational [27]. Then in 1882, the German mathematician Ferdinand von Lindemann proved that π is transcendental [28], meaning that π is not the root of any polynomial with integer or rational coefficients. Among other things, Lindemann's result brought a merciful end to the countless attempts over the centuries to square the circle. This is because any point or line segment that can be constructed using classical ruler-and-compass procedures is provably given by a finite algebraic expression involving only the basic arithmetic operations and square roots, and thus is the root of an integer coefficient polynomial of degree 2^d for some integer d.
Attempts to compute numerical values of π are as old as π itself. Archimedes was the first to devise a rigorous scheme, based on inscribed and circumscribed polygons. He obtained the bounds 3 10/71 < π < 3 1/7, or, in other words, 3.1408 . . . < π < 3.1428 . . . [4]. Subsequently other mathematicians, in Europe, India, China and the Middle East, used Archimedes' approach to compute more accurate values. For example, about 500 CE, the Indian mathematician Aryabhata found π to four digits, and, at roughly the same time if not before, the Chinese mathematician Tsu Chung-Chih found π to seven digits. These and later computations were spurred by the invention of positional base-10 arithmetic with zero, by unknown Indian mathematicians in the first two or three centuries CE, a discovery which certainly deserves to be ranked as among the most important mathematical discoveries of all time [7].
With the development of calculus by Newton and Leibniz in the 1600s, and the discovery of infinite series and other formulas using calculus, π was computed to tens, then hundreds of digits, culminating with Shanks' 1874 hand computation to 707 digits (alas, only the first 527 were correct). With the advent of the computer in the 1940s, π was computed first to thousands of digits, then to millions, then to billions of digits by the end of the twentieth century. Since then, the pace of progress has continued uninterrupted. In the latest computation, announced on 18 August 2021, a record 62.8 trillion decimal digits were computed by a team of Swiss researchers [22]. For other details on the historical computation of π, see [9].
One intriguing development in this area was the discovery, in 1997, of a formula for π that permits one to calculate a string of binary or base-16 digits of π, beginning at an arbitrary starting position, without needing to calculate any of the digits that came before [14]. Since 1997, numerous other formulas with this property have been found for a variety of other mathematical constants. One of these formulas was used in 2010 to calculate a string of binary digits of π beginning at the two quadrillionth position [30]. Others were used to calculate base-64 digits of π^2, base-729 digits of π^2, and base-4096 digits of Catalan's constant, in each case beginning at position 10 trillion. For additional details, see [13] and [3].
This paper presents a collection of 72 formulas and algorithms involving π that have been found by mathematicians over the years. While a comprehensive collection is of course not possible, preference is given in this collection to formulas that satisfy the following criteria:
• Formulas that give π or a very simple expression involving π explicitly, as opposed to implicit relations such as e^{iπ} + 1 = 0.
• Formulas that give π or a very simple expression involving π as an infinite series, definite integral or simple iterative algorithm.
• Formulas that involve simple notation, such as summations, integrals, binomial coefficients, exponentials, logarithms, etc., that would be familiar to anyone who has completed a beginning calculus course.
• Formulas that are relatively new, discovered within the last 100 years or so.
Included in this listing are several formulas for π that have actually been used in large calculations of π, both before and since the rise of computer technology. These include formulas (2) through (5) prior to the 20th century, and formulas (6), (7), (11), (12), (13), (14), (16), (18), (69) and (71) in the late 20th and early 21st century. Several of these formulas, as we will see, are quite efficient. Formula (11) (due to Ramanujan), for example, adds roughly eight correct digits per term, while formula (12) (due to the Chudnovskys) adds roughly 14 digits per term.
Formulas (13) through (18) have the intriguing property, mentioned above, that they permit digits (in certain specific bases) of the constant specified on the left-hand side to be calculated beginning at an arbitrary starting position, without needing to calculate any of the digits that came before, by means of relatively simple algorithms. Formulas (13) and (14) have been used in computations of high-order binary digits of π [17, Sec. 3.4-3.6], while formula (16) has been used in computations of high-order binary digits of π^2, and formula (18) has been used in computations of high-order base-3 digits of π^2 [13]. Numerous other recently discovered formulas that possess the arbitrary digit-computation property for various mathematical constants are catalogued in [3].
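To make this digit-computation property concrete, the following sketch computes a hexadecimal digit of π at an arbitrary position using only double-precision arithmetic and modular exponentiation. It assumes the standard published statement of the BBP formula, π = Σ_{k≥0} 16^{−k} (4/(8k+1) − 2/(8k+4) − 1/(8k+5) − 1/(8k+6)); the function name and parameter are illustrative, not from the paper's own codes.

```python
def pi_hex_digit(d):
    """Return the hex digit of pi at position d+1 after the point.

    Computes the fractional part of 16^d * pi from the BBP series
    pi = sum_k 16^{-k} (4/(8k+1) - 2/(8k+4) - 1/(8k+5) - 1/(8k+6)),
    using three-argument pow() so every intermediate stays small.
    """
    def series(j):
        # fractional part of sum_k 16^{d-k} / (8k+j)
        s = 0.0
        for k in range(d + 1):
            s = (s + pow(16, d - k, 8 * k + j) / (8 * k + j)) % 1.0
        for k in range(d + 1, d + 20):   # a few rapidly vanishing tail terms
            s += 16.0 ** (d - k) / (8 * k + j)
        return s % 1.0

    frac = (4 * series(1) - 2 * series(4) - series(5) - series(6)) % 1.0
    return "%X" % int(16 * frac)
```

For example, pi_hex_digit(0) yields the first hexadecimal digit of the fractional part of π. Because only the fractional part of each scaled term is retained, no digits before position d are ever computed, which is precisely the property described above.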
Many of these formulas are relatively new, in the sense that they were discovered only in the past few decades. The formulas mentioned in the previous paragraph are certainly in this category, having been discovered only since 1996. Many of the formulas from (19) through (50) were not well known until recently. Formulas (64) through (67) are also relatively new, in the sense that they are part of a class of integral formulas that are the subject of current research [10,11,12]. Formula (69) was discovered in 1976. Formulas (70), (71) and (72) were first published in 1984.

A CATALOGUE OF FORMULAS FOR π
Then both a_k and b_k converge to π: each iteration decreases the length of the interval between a_k and b_k (which contains π) by a factor of approximately four.
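A minimal floating-point sketch of this interval iteration, assuming the classical harmonic/geometric-mean form of Archimedes' polygon doubling with starting values a_0 = 2√3 and b_0 = 3 (scaled perimeters of circumscribed and inscribed hexagons):

```python
import math

# Archimedes' polygon iteration in harmonic/geometric-mean form
# (assumed starting values: circumscribed and inscribed hexagons).
a = 2 * math.sqrt(3)   # upper bound, a_0 = 2*sqrt(3)
b = 3.0                # lower bound, b_0 = 3
for k in range(25):
    a = 2 * a * b / (a + b)    # a_{k+1}: harmonic mean of a_k and b_k
    b = math.sqrt(a * b)       # b_{k+1}: geometric mean of a_{k+1} and b_k
# after 25 doublings the interval [b, a] has shrunk by roughly 4^25,
# so both bounds agree with pi to near double precision
```

Each pass costs one square root, and the bracketing interval shrinks by only a factor of about four, which is why this scheme needs so many iterations compared with the modern algorithms below.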
Then p_k converges quadratically to π: each iteration approximately doubles the number of correct digits.
• (The Borwein cubic iteration). Set a_0 = 1/3 and s_0 = (√3 − 1)/2. Iterate, beginning with k = 0,
r_{k+1} = 3 / (1 + 2 (1 − s_k^3)^{1/3}),    s_{k+1} = (r_{k+1} − 1) / 2,    a_{k+1} = r_{k+1}^2 a_k − 3^k (r_{k+1}^2 − 1).
Then 1/a_k converges cubically to π: each iteration approximately triples the number of correct digits. The Borweins also published a quadratically convergent algorithm, but that is not listed here.
• (The Borwein quartic iteration). Set a_0 = 6 − 4√2 and y_0 = √2 − 1. Iterate, beginning with k = 0,
y_{k+1} = (1 − (1 − y_k^4)^{1/4}) / (1 + (1 − y_k^4)^{1/4}),    a_{k+1} = a_k (1 + y_{k+1})^4 − 2^{2k+3} y_{k+1} (1 + y_{k+1} + y_{k+1}^2).
Then 1/a_k converges quartically to π: each iteration approximately quadruples the number of correct digits. Brent has shown that the Borwein quartic iteration is in fact mathematically equivalent to two iterations of the Brent-Salamin algorithm [20].
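The quartic iteration can be sketched in a few lines of high-precision arithmetic. The sketch below uses Python's decimal module purely for illustration (the working precision of 60 digits and the iteration count of 3 are ad hoc choices, not from the paper's timed codes); the recurrences are the standard published form of the Borwein quartic iteration:

```python
from decimal import Decimal, getcontext

getcontext().prec = 60          # ad hoc working precision: 60 digits

sqrt2 = Decimal(2).sqrt()
a = 6 - 4 * sqrt2               # a_0 = 6 - 4*sqrt(2)
y = sqrt2 - 1                   # y_0 = sqrt(2) - 1
for k in range(3):
    t = (1 - y ** 4).sqrt().sqrt()   # (1 - y_k^4)^(1/4), computed once per pass
    y = (1 - t) / (1 + t)            # y_{k+1}
    a = a * (1 + y) ** 4 - Decimal(2) ** (2 * k + 3) * y * (1 + y + y * y)
pi_approx = 1 / a                # 1/a_k converges quartically to pi
```

With the number of correct digits roughly quadrupling each pass, three iterations already exhaust the 60-digit working precision.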
Then 1/a_k converges nonically to π: each iteration approximately multiplies the number of correct digits by nine.

CREDITS
• Formula (1) was discovered by Leibniz and Gregory in the 1600s. Formula (2) was attributed to Euler in 1738. Formula (3) was discovered about the same time by Machin [17, p. 105]. The related arctangent-based formulas (4), (5), (6) and (7) have also been used in large computations of π.
• Formula (11) is due to Ramanujan, and was used by Gosper in 1986 to compute π to over 17 million digits. The similar but more complicated formula (12) is due to David and Gregory Chudnovsky, and was used by them to compute π to over one billion decimal digits [17, p. 108].
• Formula (13) is known as the "BBP" formula for π, named for the initials of the co-authors of the 1997 paper where it was first presented [14], [17, pp. 119-124]. It was discovered by a computer program running the "PSLQ" algorithm of mathematician-sculptor Helaman Ferguson [24,15]. Formula (14) is a variant of the BBP formula due to Bellard [17, p. 124]. Formula (15) was found by Helaman Ferguson and independently by Adamchik and Wagon, who first published it [1].
• Formula (16) appeared in [14]. Formulas (17) and (18) are due to David Broadhurst [21].
• Some of the summation formulas involving factorials and combinatorial coefficients (i.e., formulas (19) through (50)) were found by Ramanujan; others are due to David and Gregory Chudnovsky. The Chudnovskys had these and many other formulas of this general type inscribed on the floor of their research center at Brooklyn Polytechnic University in New York City [23]. Four exceptions are formula (36), which is due to Ramanujan but appeared in [19, p. 188], formulas (45) and (46), which are due to Guillera [26], and formula (50), which is due to Almkvist and Guillera [2].
• Formulas (51) through (63) have been known for many years; many are from [18, pp. 5, 48, 320-321].
• Formulas (64) through (66) are examples of recent discoveries, by computational methods involving the PSLQ algorithm [24,15], in the theory of box integrals [11,12].
Formula (65), for instance, can be thought of as specifying the average distance from the origin to a random point in the unit 3-cube.

PERFORMANCE RESULTS
One question that frequently arises in discussions of formulas and algorithms for π is how they compare when implemented on the computer. To that end, we present here timings for a carefully designed set of comparative computer runs. Timings are presented for the infinite series summation formulas (using 10,000-digit precision), for the integral formulas (using 4,000-digit precision), and for the iterative algorithm formulas (using 100,000-digit precision).

Software. The present author tried several different approaches to these timings, including Mathematica software and various high-precision arithmetic libraries. One difficulty with Mathematica implementations is that it is difficult to control how much symbolic manipulation and simplification is being done "under the covers" of the user code. Also, evaluation of the integral formulas is problematic using Mathematica, because it is difficult to control details of the implementation, even when specifying the method to be used.
In the end, the author decided to base the timings below on Fortran implementations utilizing the author's MPFUN-2015 package [5], in particular the MPFUN-MPFR version of MPFUN-2015. This is a high-level multiprecision package, in the sense that it permits one to perform arbitrarily high-precision computations in a Fortran program by making only a few changes to standard double-precision code. For the most part, one only needs to declare high-precision variables to be of a certain datatype, and then the software automatically calls the requisite lower-level routines from the package whenever one of these variables appears in an expression. The package supports both high-precision real and high-precision complex datatypes. The MPFUN-2015 package is entirely thread-safe, so that applications using the package can be performed safely in shared-memory parallel implementations. For full details, see [5].
The MPFUN-2015 package is available in two versions, MPFUN-Fort and MPFUN-MPFR. The MPFUN-Fort version is written entirely in Fortran, and thus is a simple matter to compile, install and use. The MPFUN-MPFR version has the same functionality as MPFUN-Fort, but calls the MPFR library [25] for all lower-level computations. At the present time the MPFR library features the fastest runtimes of any arbitrary precision floating-point library [29]. It also produces results that are correctly rounded to the last bit. Thus while the installation process of the MPFUN-MPFR version is more involved (because both the MPFR library and the GMP library must be installed first, using administrator privilege), it features very fast run times.
A newer version of this package, MPFUN-2020, which features significantly faster all-Fortran run times, is now available [6], although it was not used for these computations.

Evaluation of summation formulas.
The summation formulas (formula (1) through formula (50)) were all evaluated using a consistent approach and coding style. It is important to note that these are purely numerical evaluations: symbolic manipulations and simplifications, such as noting that Σ_{n≥0} (−1)^n / ((2n + 1) 8^{2n+1}) = arctan(1/8), were not employed. Some straightforward computational simplifications were employed, typical of those that would be utilized in any efficient implementation. For example, numerators and denominators were separately evaluated, because they are integers, and powers such as 2^{2n+1} and binomial coefficients such as (2n choose n) were evaluated incrementally from iteration to iteration in a loop. Each individual summation was performed only until its terms were less than 10^−10000.
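To illustrate this incremental style of evaluation, here is a small sketch (in Python, not one of the timed Fortran codes) that sums the elementary series π = Σ_{n≥0} 2^{n+1} (n!)^2 / (2n+1)!. Rather than recomputing factorials at each step, each term is obtained from the previous one by the rational factor t_{n+1}/t_n = (n+1)/(2n+3), and summation stops once the terms fall below the target tolerance; the 50-digit precision and 10^−45 cutoff are illustrative choices:

```python
from decimal import Decimal, getcontext

getcontext().prec = 50

# pi = sum_{n>=0} 2^{n+1} (n!)^2 / (2n+1)!   (terms shrink roughly by half)
# term ratio: t_{n+1}/t_n = (n+1)/(2n+3), so each term costs O(1) to update
t = Decimal(2)          # n = 0 term: 2^1 * (0!)^2 / 1! = 2
s = Decimal(0)
n = 0
while t > Decimal(10) ** -45:
    s += t
    n += 1
    t = t * n / (2 * n + 1)   # multiply by (n)/(2n+1) after incrementing n
```

The same pattern, with integer numerators and denominators carried separately, is what makes series such as (11) and (12) so fast in practice: no factorial or binomial coefficient is ever evaluated from scratch.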
It should be added, though, that advanced techniques such as "multisectioning" and "divide and conquer" strategies were not employed. These terms refer to evaluating sections of consecutive terms in the summation, with integer coefficients in the numerators and denominators factored out as much as possible. Such techniques do not make much difference for runs up to 10,000 digits or so, as in these tests, but have been employed in computations of π to many millions of digits. The most successful of these implementations is the "y-cruncher" program of Alexander J. Yee [31], which, based on the Chudnovsky formula (12), has been used in the most recent computations of π, including the August 2021 record computation mentioned above.
The timings for the summation formula computer runs are presented in Table 1. As mentioned above, these timings are based on runs to 10,000-digit precision. In some cases, timings are not listed, because these formulas would require astronomical numbers of terms to produce 10,000-digit results.

Evaluation of integral formulas. The integral formulas are more challenging to evaluate to high precision than the summation formulas or iterative algorithms, but are still nonetheless fairly reasonable if performed using an efficient quadrature (numerical integration) scheme. To that end, the author used variations of the tanh-sinh algorithm for all of the integral formulas (formulas (51) through (67)). The tanh-sinh scheme, while often not quite as efficient as Gaussian quadrature for entirely regular integrand functions, nonetheless has significant advantages for this type of very high-precision computation [16].
Given a function f(t) defined on [−1, 1], the tanh-sinh quadrature rule is

∫_{−1}^{1} f(x) dx ≈ h Σ_{j=−N}^{N} w_j f(x_j),

where g(t) = tanh((π/2) · sinh t), and the abscissas x_j and weights w_j are given by

x_j = g(hj),    w_j = g′(hj) = (π/2) · cosh(hj) / cosh^2((π/2) · sinh(hj)).    (74)

The tanh-sinh scheme can be used for functions on any finite interval. A variation of the tanh-sinh scheme known as the exp-sinh scheme, based on the function g(t) = exp((π/2) · sinh t), can be employed for integrals on a semi-infinite interval such as (0, ∞). The sinh-sinh scheme, based on g(t) = sinh((π/2) · sinh t), can be employed for integrals on the entire real line.
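The rule above can be sketched in double precision as follows; the fixed step h and truncation point N are ad hoc choices adequate for machine precision (a production code, such as the author's, would instead select them from the working precision):

```python
import math

def tanh_sinh(f, h=2.0 ** -6, N=320):
    """Approximate the integral of f over [-1, 1] by the tanh-sinh rule:
    h * sum_j w_j f(x_j), with x_j = tanh((pi/2) sinh(jh)) and
    w_j = (pi/2) cosh(jh) / cosh^2((pi/2) sinh(jh))."""
    total = 0.0
    for j in range(-N, N + 1):
        u = (math.pi / 2) * math.sinh(j * h)
        w = (math.pi / 2) * math.cosh(j * h) / math.cosh(u) ** 2
        total += w * f(math.tanh(u))
    return h * total

# the integral of 2/(1+x^2) over [-1,1] equals pi
approx = tanh_sinh(lambda x: 2 / (1 + x * x))
```

Because the weights w_j decay doubly exponentially in |j|, the truncated sum converges extremely rapidly, and halving h roughly doubles the number of correct digits for well-behaved integrands.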
One advantage of the tanh-sinh scheme is that it can be readily used for problems, such as formulas (56) and (58), whose integrand functions (or their higher-order derivatives) have vertical derivatives or singularities at one or both endpoints. It is often not easy to determine whether or not an integral has such a singularity. For example, consider the integral ∫_0^1 sin^p(πx) ζ(p, x) dx, where ζ(p, x) denotes the Hurwitz zeta function. When p = 3, this integrand function and its higher derivatives are all regular, and can be integrated using Gaussian quadrature, but when p = 7/2 = 3.5, while the plot of this function looks unremarkable, its fourth and higher derivatives have singularities at the endpoints, and Gaussian quadrature fails badly. By contrast, the tanh-sinh quadrature rule easily integrates this function to high precision [8].
Another major advantage of the tanh-sinh scheme for very-high-precision computation is that the cost of computation of abscissas and weights, using (74), increases only linearly with N , whereas the abscissa-weight computation in Gaussian quadrature increases quadratically with N . Given that the number of evaluation points required to achieve a given precision increases roughly linearly with the number of digits, this means that the cost of computing abscissas and weights for the Gaussian scheme increases roughly cubically with the precision desired, compared with quadratically with tanh-sinh, even before the increases in the cost of the arithmetic with higher precision are considered. For example, computation of tanh-sinh abscissas and weights sufficient to evaluate the integral problems in this paper to approximately 4,000-digit accuracy required only 167 seconds, whereas the corresponding calculations for Gaussian quadrature would require over 100 hours run time (and the Gaussian scheme, as mentioned above, could not be used for formulas (56) and (58), because of singularities at endpoints).
Timings for evaluations of the integral formulas are shown in Table 2. The tanh-sinh scheme was employed for all formulas except formulas (57), (58) and (63), which employed the exp-sinh scheme.

Evaluations of multiple integrals.
As noted in the introduction, formulas (64), (65) and (66) derive from a recent study of box integrals [11,12]. In this study, high-precision numerical values of these integrals (and others) were used, in conjunction with the PSLQ integer relation algorithm [24,15], to numerically discover the relations indicated. A similar approach was taken to numerically discover evaluations of integrals such as formula (67) that derive from the Ising theory of mathematical physics [10]. Indeed, such studies overwhelmingly demonstrate the value of very high-precision quadrature in experimental mathematics.
The Archimedes iteration is mathematically equivalent to the scheme sketched by the ancient Greek mathematician Archimedes in approximately 250 BCE. For details on how this iterative formula is derived, and a rigorous proof that it converges to π, see [4].
The remaining iterative algorithms have the remarkable property that they converge quadratically (formula (69)), cubically (formula (70)), quartically (formula (71)) and nonically (formula (72)), respectively, meaning that the number of correct digits approximately doubles, triples, quadruples and increases ninefold, respectively, with each iteration, provided of course that all iterations are performed with a level of numeric precision that is at least as high as the precision desired for the final result.
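For instance, the quadratically convergent Brent-Salamin (Gauss-Legendre) iteration, formula (69), can be sketched as follows. The recurrences are the standard published form; the 60-digit working precision and seven-pass loop, via Python's decimal module, are illustrative choices rather than the paper's timed Fortran implementation:

```python
from decimal import Decimal, getcontext

getcontext().prec = 60   # ad hoc working precision: 60 digits

# Brent-Salamin (Gauss-Legendre) AGM iteration; correct digits roughly
# double with each pass
a = Decimal(1)
b = 1 / Decimal(2).sqrt()
t = Decimal(1) / 4
p = Decimal(1)
for _ in range(7):
    a_next = (a + b) / 2             # arithmetic mean
    b = (a * b).sqrt()               # geometric mean
    t -= p * (a - a_next) ** 2       # accumulate the correction term
    p *= 2
    a = a_next
pi_approx = (a + b) ** 2 / (4 * t)
```

Note the proviso in the text: every pass must be carried out at (at least) the full target precision, since the quadratic convergence estimate assumes exact arithmetic.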
These iterations were implemented in an entirely straightforward fashion. The only change from the formulas as stated was to save the results of some intermediate expressions, such as the value of (1 − y_k^4)^{1/4} in (71), rather than recomputing them each time they appear.

TIMING RESULTS
Timing results for the summation formulas (using 10,000-digit precision), the integral formulas (using 4,000-digit precision) and the iterative algorithms (using 100,000-digit precision) are given in Table 1, Table 2 and Table 3, respectively. These runs were performed on a single processor of a 2019 Mac Pro with a 3 GHz 8-core Intel Xeon E5 processor and 32 GB RAM, using the MPFUN-MPFR package, version 9, with version 4.0.2 of the MPFR library and version 6.1.2 of the GMP library.
The final results of each calculation were checked against the reference values to verify that the relative errors met the prescribed tolerance. Each of the summation formula results met the relative error tolerance 10^−10000, except formulas (9), (19), (31) and (50) (which met a relative error of 10^−9998). Each of the integral formula results met the relative error tolerance 10^−4000, except formula (51) (10^−3948), formula (58) (10^−3688) and formula (63) (10^−3954). Each of the iterative algorithm results met the relative error tolerance 10^−100000.
These results clearly indicate a very wide range in timings, even among formulas of the same class. Among the summation formulas, timings ranged from 0.12 seconds for formula (11) (a formula due to Ramanujan that has been used in some recent large computations of π) to 45.53 seconds for formula (31), which, in spite of its similar outward appearance to other formulas involving binomial coefficients, converges very slowly. And, of course, several other formulas ((1), (10), (20) and (47)) converge so slowly that no timings are presented, since evaluations of π to 10,000-digit precision using these formulas would require astronomically long run times. For example, evaluating π to 10,000-digit precision strictly using Gregory's series for π (formula (1)) would require evaluating roughly 10^10000 terms and vastly more run time than the age of the universe.
From a computational perspective, the integral formulas are more challenging, as they require advanced techniques for evaluation to high precision, as noted above. Even here, though, we observe vast differences in run time, ranging from 1.26 seconds using the simple integral in formula (52) to 100.45 seconds for the integral in formula (57). The large run time here mostly reflects the cost of computing the exponential function in (57). Note, however, that a complicated integrand function by itself does not guarantee a very long run time. The most complicated integral in the list, namely formula (67), which was evaluated using the equivalent but still very complicated 1-D integral in formula (78), required only 35.2 seconds to produce a 4,000-digit value.
The iterative algorithm formulas show a particularly dramatic contrast in run times. Here, as mentioned above, the timings are for computations using 100,000-digit arithmetic. The Archimedean iteration, namely formula (68), required 166,133 iterations and 1015.39 seconds run time to produce a result accurate to 100,000 digits. But each of the more modern iterations, specifically the Brent-Salamin algorithm (69) and the three Borwein algorithms (70), (71) and (72), required between 0.11 and 0.14 seconds, which are truly remarkable speeds for 100,000-digit results.
It should be noted, however, that although the Brent-Salamin algorithm (69) and the Borwein quartic algorithm (71) have been used in several recent large computations of π, the Chudnovsky formula (12) is now the most widely used formula for very large computations of π. Even though it is significantly slower than the Brent-Salamin and Borwein formulas in the tests reported in this paper, advanced techniques such as multisectioning and divide-and-conquer strategies can be employed with this formula, techniques that prevail in computations to millions, billions or trillions of digits.

Table 3. Timings for evaluations of iterative algorithm formulas to approximately 100,000-digit precision.