Which Coefficients Matter Most—Consecutive <inline-formula><tex-math notation="LaTeX">$k$</tex-math></inline-formula>-Out-of-<inline-formula><tex-math notation="LaTeX">$n$</tex-math></inline-formula>:<inline-formula><tex-math notation="LaTeX">$F$</tex-math></inline-formula> Systems Revisited

Consecutive-<inline-formula><tex-math notation="LaTeX">$k$</tex-math></inline-formula>-out-of-<inline-formula><tex-math notation="LaTeX">$n$</tex-math></inline-formula>:F systems are among the most well-studied types of networks in reliability theory. They have been used in safety-critical environments, such as nuclear power plants or hospitals' emergency backup power supplies, as well as in classical transportation problems, such as public water systems and oil/gas pipelines. Exact formulae for the reliability polynomial of a consecutive system have been known for quite a long time, and several alternatives for computing the reliability polynomial exactly are also known. However, when dealing with large consecutive systems, exact calculations become prohibitive, and approximations/bounds are the common route. We begin this article by providing an in-depth review of many known bounds. Next, we focus on the coefficients of the reliability polynomial of a consecutive system in its Bernstein form. By deriving shape properties of these coefficients, we are able to identify new bounds. Our approach is uncommon for this case, as none of the previously used bounding techniques has looked closely at each and every coefficient. This is probably the reason why we obtain tight bounds at low complexity costs. Finally, detailed simulations provide strong evidence of the fidelity of the proposed bounds.


I. INTRODUCTION
CONSECUTIVE systems were initially introduced as r-successive-out-of-n:F in [33] and renamed consecutive-k-out-of-n:F just one year later [16]. On a "reliability timescale," consecutive systems came rather late into play, almost 30 years after the concepts of majority-voting and multiplexing (both gate-level reliability schemes) were introduced by John von Neumann in January 1952, and published in April 1956 [51]. Soon afterward, in September 1956, Moore and Shannon [36] introduced hammock networks, the first device-level reliability scheme. Such schemes were targeting computations (a major concern for early computers), while consecutive systems aim at communications, hence on a diverging path. Despite such differences, both hammock and consecutive-k-out-of-n:F systems belong to the class of device-level reliability schemes (although "devices" could end up being reasonably "complex blocks"). Such systems can be abstracted as networks/graphs; network reliability is a field pioneered by Moore and Shannon in [36], and it has evolved significantly (see [15], [18], [42], [11], and [10]). The major problems in network reliability are to determine the two-terminal [36], k-terminal [53], and all-terminal [11] reliability of a network, all known to be very difficult in general (#P-complete [33], [35], [45], [50]). That is why even the best algorithms are time consuming [17], [28], [35], and lower and upper bounds, as well as approximations [25], have been investigated as efficient alternatives to exact but tedious computations.
Even though the reliability polynomial of a consecutive-k-out-of-n:F system can be computed exactly in symbolic form by state-of-the-art algorithms, this is possible only for reasonably small values of n and k, while numeric evaluations allow for much larger values (e.g., n = 10 000 and k ≤ 20 were presented in [23]). Hence, bounds have been reported early on (starting in 1981 [16]), and constantly improved upon over time (see [15], [18], [23], [42], and [44]). While most of the bounding techniques tend to use generic approaches (e.g., a direct bounding scheme on the reliability polynomial), more complex techniques do exist outside the realm of consecutive systems. Indeed, bounding/approximating some of the coefficients of the reliability polynomial [12], [41], [4], [46] (speeding up the estimation of those coefficients), followed by an exact or approximate polynomial evaluation, turns out to provide more accurate approximations. All of the many different bounding approaches reveal wide trade-offs between accuracy and time complexity. Inspired by some of these techniques, and by combinatorial interpretations of the reliability coefficients in the Bernstein basis, we will propose new bounding techniques.

A. Contributions
In this article, the reliability polynomial of a consecutive-k-out-of-n:F system [Rel(k, n; p)] will be expressed in the Bernstein basis. Despite being quite common in network reliability, this basis was rarely exploited to its full potential for consecutive systems. The most common basis for consecutive systems requires the computation of n/k coefficients, while the Bernstein basis requires n + 1 coefficients (out of which the first n/k are 0, while the last k are binomial coefficients). This gap between the two representations apparently disfavors the Bernstein basis. However, the information contained in the coefficients N_{n,k,i} (in the Bernstein basis) is more useful from a reliability point of view, as it has a clear combinatorial meaning. More precisely, one can use N_{n,k,i} to deduce analytical properties, e.g., the local behavior of Rel(k, n; p) at p = 0, p = 0.5, and p = 1. Another argument in favor of the Bernstein basis is that evaluating (numerically) a polynomial in the Bernstein basis can be performed in a stable manner using de Casteljau's algorithm. This makes a polynomial in the Bernstein basis less prone to (computational) errors (see [24]), an important aspect when targeting large n and k. All of these support the title, as we believe that "the coefficients which matter most" are those in the Bernstein basis.
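As a concrete illustration of the stability argument, here is a minimal sketch (in Python; the function names are ours, not the paper's) of de Casteljau's algorithm for evaluating a polynomial given its Bernstein coefficients, such as the n_{n,k,i} above:

```python
from math import comb

def de_casteljau(coeffs, p):
    """Evaluate sum_i coeffs[i] * C(n,i) * p**i * (1-p)**(n-i), n = len(coeffs)-1,
    by repeated convex combinations: no explicit binomials, no large powers."""
    b = list(coeffs)
    while len(b) > 1:
        b = [(1 - p) * lo + p * hi for lo, hi in zip(b, b[1:])]
    return b[0]

def bernstein_direct(coeffs, p):
    """Naive evaluation of the same Bernstein-form polynomial (for comparison)."""
    n = len(coeffs) - 1
    return sum(c * comb(n, i) * p**i * (1 - p)**(n - i)
               for i, c in enumerate(coeffs))
```

Because the Bernstein basis is a partition of unity, feeding all-ones coefficients returns exactly 1 for any p, a handy sanity check; every intermediate value stays inside the convex hull of the inputs, which is the source of the numerical stability mentioned above.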
Overall, there are three basic methods to compute Rel(k, n; p) in the Bernstein basis.
1) Compute Rel(k, n; p) in a different basis, particular to consecutive systems ([19], [30], [37], [43]), and either perform a change of basis (using linear algebra), or deduce the Bernstein form (using combinatorial/probabilistic methods).
2) Use Markov chains to compute Rel(k, n; p) directly, employing linear algebra operations, as proposed in [26].
3) Compute the coefficients N_{n,k,i} individually. This can be done using enumerative combinatorics or probabilities [30], [37], [39], [43], or based on an algebraic formula [22].
In this article, we carry out a thorough investigation of the coefficients N_{n,k,i}. More precisely, we will provide the following contributions.
a) The valid range of parameters for the known bounds: We make an in-depth assessment of the existing bounding techniques from the literature. While this part of the article might be considered a cursory survey of the state of the art, we do complement the reported results with additional properties. In particular, for each relevant bounding technique we discuss in detail the conditions on the parameters n, k, and p. As these bounds cover a reliability polynomial, it is only natural to identify the region of values of p where such bounds are valid, i.e., where they evaluate inside the [0, 1] interval. Indeed, any values of p where a bound evaluates outside this interval should clearly be excluded, a fact that many articles leave wanting. In order not to exceed the definition domain, two trivial conditions could be imposed, namely max{0, v} and min{v, 1}, where v is the value of the bound. We do clarify such aspects, which makes for fairer comparisons among the existing bounds. In addition, we take a particular interest in the interval p ∈ [0.5, 1]. We prove that bounds such as those in [3] and [23] are valid for any p ≥ 0.5 as long as k is asymptotically larger than log_2(n). This particular value of k has lately been mentioned in connection with optimal linear consecutive systems [5], [7].
b) Shape properties of the Bernstein coefficients: We decompose each N_{n,k,i} into an alternating sum of positive terms (which we will call F_{n,k,i,j}). In order to gain a better understanding of the structural properties of N_{n,k,i}, we analyze the shape of the sequence F_{n,k,i,j} in the parameters i and j. We discover analytical properties of these sequences; more precisely, we prove that the sequence F_{n,k,i,j} is log-concave, hence unimodal, in both i and j.
c) Upper and lower bounds for the entire interval [0, 1]: Next, we establish a general condition on the parameters n, k, i under which the sequence F_{n,k,i,j} in j is decreasing. The main reason for looking at this sequence is that it allows us to set a local bound on each coefficient N_{n,k,i}. Indeed, as we shall show later, one can efficiently establish upper and lower bounds on each and every coefficient, which lead to many bounding polynomials. One of the key features of such a bounding is that it is valid for the entire domain of definition p ∈ [0, 1], which makes it appealing for a variety of different applications. Our simulations will show that our bounds, although not valid for all the coefficients and all values of p, become valid when p > p_0 (a threshold value to be determined).
d) Upper and lower bounds for the sub-interval [0.5, 1]: While the majority of the coefficients will easily satisfy the bounding conditions from the previous point, some coefficients need closer inspection. Two characteristics emerge from numerical simulations. First, for small values of k the sequence F_{n,k,i,j} in j is not decreasing. Hence, we propose to restrict the bounding interval to p > 0.5, where our bounds become valid. Second, we noticed that, in general, the first coefficients are the most time-consuming ones to evaluate/bound, while the last coefficients can be computed exactly using only a few terms F_{n,k,i,j} (j ≤ 3). When p > 0.5, almost all coefficients become efficiently computable using our bounds.
e) Evaluating performance versus state-of-the-art bounding techniques: Last, but not least, we will provide numerical simulations comparing our technique with other existing solutions. Although the technique we present here is rather conventional in network reliability, it has not been used for consecutive systems. In addition, as we shall prove, it gives significantly better results from several points of view. We will show through extensive simulations that computing the exact values for some of the coefficients and bounding the remaining ones allows us to approximate the reliability polynomial using lower and upper bounds that are sharper than the state-of-the-art results [23], [38], [39].
This article builds on a conference paper [22], where the idea of using the Bernstein form for bounding the coefficients of a consecutive system was proposed. However, that short conference paper included only a small fraction of the results, and simulations were performed only for small values of n.

B. Outline
Section II starts by introducing consecutive systems. It focuses mainly on computational aspects related to the reliability polynomial of consecutive-k-out-of-n:F systems. Next, in Section III, we provide a detailed list of approximations, with a strong emphasis on various practical aspects, such as efficiency, complexity, and parameter constraints. Section IV begins with a subsection on the analytical properties of the reliability polynomial coefficients and ends with two techniques for bounding the reliability polynomial. Numerical simulations are provided in Section V. Finally, Section VI concludes this article. For readability, all the proofs are included in the Appendices.

A. Definitions
A consecutive-k-out-of-n:F system corresponds to a sequence of n independent, identically distributed (i.i.d.) Bernoulli trials, with common probability of success p, in which the system itself is deemed to have failed if the sequence includes a run of at least k consecutive failures, and to have succeeded otherwise. The reliability of the system is the probability Rel(k, n; p) that it succeeds. We can write this probability as a homogeneous polynomial of degree n in p and q (where q = 1 − p), as follows: $Rel(k, n; p) = \sum_{i=0}^{n} N_{n,k,i}\, p^i q^{n-i}$. The coefficient N_{n,k,i} is the number of sequences of n trials that include exactly i successes, in which the longest consecutive run of failures has length strictly less than k. This representation is known in the literature as the N-form of the reliability polynomial [2], [9], [18], and the coefficients can be understood in terms of pathsets and cutsets of the underlying graph. Notice that N_{n,k,i} satisfies $0 \leq N_{n,k,i} \leq \binom{n}{i}$ for any i ∈ {0, 1, . . ., n} [9]. By rewriting $N_{n,k,i} = n_{n,k,i} \binom{n}{i}$, we also have that the n_{n,k,i} are the coefficients of Rel(k, n; p) in the Bernstein basis. To emphasize this aspect, we will also call this the Bernstein form.
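The definition above translates directly into a brute-force, exponential-time reference implementation, useful for validating faster methods on small n; a sketch, with names of our own choosing:

```python
from itertools import product

def longest_failure_run(seq):
    """Length of the longest run of 0s (failures) in a 0/1 sequence."""
    best = run = 0
    for bit in seq:
        run = 0 if bit else run + 1
        best = max(best, run)
    return best

def n_form_coefficients(n, k):
    """N_{n,k,i}: sequences of n trials with exactly i successes whose
    longest failure run is strictly shorter than k (brute force, O(2^n))."""
    N = [0] * (n + 1)
    for seq in product((0, 1), repeat=n):
        if longest_failure_run(seq) < k:
            N[sum(seq)] += 1
    return N

def reliability(n, k, p):
    """Rel(k, n; p) = sum_i N_{n,k,i} p^i q^(n-i), with q = 1 - p."""
    return sum(Ni * p**i * (1 - p)**(n - i)
               for i, Ni in enumerate(n_form_coefficients(n, k)))
```

For n = 5, k = 4 this enumeration reproduces the coefficient sequence 0, 3, 10, 10, 5, 1 highlighted later in Fig. 1.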
a) Notations and conventions: Integers and real numbers will be denoted by lowercase symbols, e.g., n, k, p, q. We will use bold symbols M, v to denote matrices and vectors. Particular functions, such as the Lambert W function, will be denoted by LW(n). Also, we will employ the usual small-o, big-O notations for asymptotic behavior.

B. Computing the Reliability Polynomial
There are several distinct techniques for computing the reliability polynomial of consecutive systems (see [13] for a comprehensive list). We shall mention here only those that are relevant to this work, i.e., algebraic methods (using linear algebra and Markov chains), and combinatorial methods. We do not seek to provide complexity analyses of these methods; however, complexities will be emphasized when worth mentioning.
1) Markov Chain and Linear Algebra: Although the Markov chain approach for reliability computation of consecutive systems was discovered only in the mid-eighties, it is one of the most efficient methods in terms of time complexity. Initially suggested by Griffith and Govindarajulu [27], it was detailed and established by Fu [26] in 1987.
Proposition 1 (see [26]): Let n, k be positive integers, let $I_k$ denote the identity matrix of dimension k, and let $\mathbf{1}_k$ denote the column vector of k ones. Let M be the square matrix of size k + 1 given by $M = \begin{pmatrix} p\,\mathbf{1}_k & q I_k \\ 0 & (0 \cdots 0\; 1) \end{pmatrix}$. Then $Rel(k, n; p) = (1, 0, \ldots, 0)\, M^n\, (1, \ldots, 1, 0)^T$. If an exact numerical method is used, the time complexity is O(k^c log(n)), where c is the matrix multiplication constant. This makes the method appealing for large values of k and n. It is to be mentioned that one could also generate the reliability polynomial symbolically using this method and, as we shall demonstrate, the resulting polynomial is in Bernstein form. Unfortunately, an exact symbolic matrix exponentiation has exponential time complexity [29], and, even if performing the row multiplication first, the complexity is O(k^2 n).
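A minimal numeric sketch of Proposition 1's Markov chain (our own naming; state j < k tracks the current run of consecutive failures, state k is absorbing, and exponentiation by squaring is what yields the log(n) factor):

```python
def mat_mul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def mat_pow(M, n):
    """Exponentiation by squaring: O(log n) matrix multiplications."""
    size = len(M)
    R = [[float(i == j) for j in range(size)] for i in range(size)]
    while n:
        if n & 1:
            R = mat_mul(R, M)
        M = mat_mul(M, M)
        n >>= 1
    return R

def rel_markov(n, k, p):
    """Rel(k, n; p) via the (k+1)-state chain of Proposition 1."""
    q = 1.0 - p
    M = [[0.0] * (k + 1) for _ in range(k + 1)]
    for j in range(k):
        M[j][0] = p        # a success resets the failure run
        M[j][j + 1] = q    # a failure extends the run
    M[k][k] = 1.0          # absorbing: the system has failed
    Mn = mat_pow(M, n)
    # start in state 0; survival = probability mass left in states 0..k-1
    return sum(Mn[0][j] for j in range(k))
```

For example, rel_markov(3, 2, 0.5) agrees with the direct count: 5 of the 8 length-3 sequences avoid two consecutive failures, i.e., Rel(2, 3; 0.5) = 0.625.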
Proposition 2: Let Rel(k, n; p) be obtained using Proposition 1. Then, Rel(k, n; p) is written in Bernstein form.
2) Combinatorial Methods: a) de Moivre's formula: While consecutive systems were introduced only in 1981 [16], the associated counting problem on which they rely was proposed and solved for the first time by de Moivre [19]. The following result [49] is simply a rephrasing of de Moivre's solution in a modern writing style.
Proposition 3: Let n, k, l be positive integers and denote. Moreover, there is an algorithm that computes Rel(k, n; p) in time O(n^2/k^2). b) A common basis for consecutive systems: Expanding the terms in Proposition 3 leads to another well-known form of Rel(k, n; p). Similar transformations have been investigated since the 1980s [30], [34], [43], [48]; they either handle the generating functions of N_{n,k,n−i}, or rearrange the terms from Proposition 3, and lead to the following.
Proposition 4: For any strictly positive integer j define the functions $f_j(n, k; p) = q^{jk} p^j \binom{n-jk}{j} + q^{jk} p^{j-1} \binom{n-jk}{j-1}$. Then, for any n and k, $Rel(k, n; p) = 1 + \sum_{j \geq 1} (-1)^j f_j(n, k; p)$. Moreover, there is an algorithm that computes Rel(k, n; p) in time O(n^2/k^2). c) Bernstein basis using the standard multinomial coefficient: The coefficients of the reliability polynomial in Bernstein form come from a well-known bins-and-balls counting problem, namely: "What is the number of ways in which n identical balls can be distributed among a sequence of i distinct bins, such that bins may be empty, and no bin may contain more than k balls?" The answer to this problem is given by the standard multinomial coefficient, denoted $\binom{i}{n}_k$. The algebraic description of $\binom{i}{a}_k$ is the following: $\binom{i}{a}_k$ is the coefficient of $x^a$ in $(1 + x + \cdots + x^k)^i$ (5)
Fig. 1. Triangle of the coefficients of (1 + x + x^2 + x^3)^{j+1} for j ≤ 12. The sequence 0, 3, 10, 10, 5, 1 (in orange) corresponds to N_{5,4,i}, their sum being 29. All values inside the red rectangle are binomial coefficients. The sums $\sum_i N_{n,4,i}$, equal to 2^n Rel(4, n; 0.5), are the tetranacci numbers (A000078 in OEIS [40]).
with $\binom{i}{a}_1$ the usual binomial coefficient, and $\binom{i}{a}_k = 0$ for a > ik. Notice that (5) is a natural way to extend the concept of Pascal's triangle (see [8] for more details).
Theorem 5 (see [22]): We have $N_{n,k,i} = \binom{i+1}{n-i}_{k-1}$. Fig. 1 illustrates all the coefficients N_{n,k,i} for k = 4 and n ≤ 12. It is now straightforward to deduce a(nother) formula for the reliability polynomial of a consecutive system in Bernstein form.
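Theorem 5 can be checked mechanically by expanding (1 + x + ... + x^(k-1))^(i+1) and reading off the coefficient of x^(n-i); the sketch below (our own naming) reproduces the sequence 0, 3, 10, 10, 5, 1 of Fig. 1:

```python
def poly_coeff(i, a, k):
    """Coefficient of x**a in (1 + x + ... + x**k)**i, i.e., the
    'standard multinomial coefficient' of the text."""
    coeffs = [1]                       # the constant polynomial 1
    for _ in range(i):
        new = [0] * (len(coeffs) + k)
        for d, c in enumerate(coeffs):
            for e in range(k + 1):     # multiply by 1 + x + ... + x^k
                new[d + e] += c
        coeffs = new
    return coeffs[a] if 0 <= a < len(coeffs) else 0

def N(n, k, i):
    """Theorem 5: N_{n,k,i} is the coefficient of x^(n-i) in
    (1 + x + ... + x^(k-1))^(i+1) -- the i successes leave i+1 gaps,
    each holding at most k-1 failures."""
    return poly_coeff(i + 1, n - i, k - 1)
```

For k = 2 the same identity collapses to the plain binomial coefficient C(i+1, n−i), matching the remark that the last rows of the triangle are binomial coefficients.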
d) Closed form using combinatorial sums: Similar expressions were mentioned in [30], [34], and [39]. All of them can be deduced from the alternating sum formula $N_{n,k,i} = \sum_{j \geq 0} (-1)^j \binom{i+1}{j} \binom{n-jk}{i}$ (with the convention $\binom{m}{i} = 0$ for m < i), which leads to the following. Proposition 6: Let r < n/2 be a strictly positive integer; then (8) follows. By rewriting Proposition 6 backward, i.e., in p^{n−i} q^i, one recovers the results reported in [30], [34], and [39].
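The alternating sum can be validated against a direct count on small instances; a sketch (function names are ours, and the zero convention for the binomials is made explicit):

```python
from itertools import product
from math import comb

def N_alt(n, k, i):
    """N_{n,k,i} = sum_j (-1)^j C(i+1, j) C(n - j*k, i),
    with the convention C(m, i) = 0 whenever m < i."""
    total = 0
    for j in range(i + 2):             # C(i+1, j) = 0 beyond j = i + 1
        m = n - j * k
        total += (-1) ** j * comb(i + 1, j) * (comb(m, i) if m >= i else 0)
    return total

def N_brute(n, k, i):
    """Direct count: 0/1 words of length n, exactly i ones, no k-run of 0s."""
    def ok(w):
        run = 0
        for bit in w:
            run = 0 if bit else run + 1
            if run >= k:
                return False
        return True
    return sum(1 for w in product((0, 1), repeat=n) if sum(w) == i and ok(w))
```

The identity is the classical inclusion-exclusion over which of the i + 1 gaps between successes receive k or more failures.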
Proposition 7: There is an algorithm that computes Rel(n, k; p) in Bernstein-form in time O(n 2 ).
At first sight this result is not very encouraging, as O(n^2) > O(n^2/k^2) (given in Propositions 3 and 4). In fact, computing all the multinomial coefficients can be done in O(n log n) [1], but such an advantage seems unfair, as the other approaches have not been properly optimized. Still, if taking the time complexity with respect to the number of coefficients #coef, all of them are O(#coef^2), as #coef is n/k for Propositions 3 and 4, and n for Proposition 6.
Remark 8: Having more coefficients follows from using the Bernstein form, and the advantages are: i) a finer/better characterization, and ii) very stable calculations. In particular, one can determine the local behavior of Rel(k, n; p) at 0 and at 1 by simply looking at its first nonzero coefficient N_{n,k,i_{n,k}}, and at N_{n,k,n−k}, respectively. Also, Rel(k, n; 0.5) has a particular combinatorial interpretation in terms of generalized Fibonacci numbers, as $2^n Rel(k, n; 0.5) = F^{(k)}_{n+2}$ (see [5], [7]). For k = 2, we have $N_{n,2,i} = \binom{i+1}{n-i}$ for 0 ≤ i ≤ n (see also [43]). For k = 3 we have a similar closed form.
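The generalized-Fibonacci interpretation of Remark 8 is easy to illustrate: the counts 2^n Rel(k, n; 0.5) satisfy a k-step additive recurrence (Fibonacci for k = 2, tetranacci for k = 4, cf. Fig. 1); a brute-force sketch with our own naming:

```python
from itertools import product

def count_no_k_run(n, k):
    """2^n * Rel(k, n; 0.5): binary strings of length n with no k consecutive 0s."""
    def ok(w):
        run = 0
        for bit in w:
            run = 0 if bit else run + 1
            if run >= k:
                return False
        return True
    return sum(1 for w in product((0, 1), repeat=n) if ok(w))

# For n >= k these counts obey the k-step ("k-generalized Fibonacci")
# recurrence c(n) = c(n-1) + c(n-2) + ... + c(n-k): condition on the
# leading block of j < k zeros followed by a one.
```

For n = 5, k = 4 the count is 29, the tetranacci value appearing in Fig. 1.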

C. Particular Cases
This form was determined using an identity from [8]. The case k = 3 was also reported in [20]. b) Large values of k: From Proposition 6 one can infer the following particular cases.
1) For any k ≥ n/2 we have
2) For any smaller k, a similar expression follows. Operating a change of basis, the following alternatives can be established.
Corollary 9: 1) For any k ≥ n/2 we have ([21], [48]) 2) For any smaller k, a similar alternative follows. In closing this section: when k is linear in n, or when k is small (e.g., k = 1, 2, 3), simplified combinatorial formulae exist; for the remaining possible values of k, no results are known yet, except for the generic formulae/methods.

III. BOUNDING THE RELIABILITY OF CONSECUTIVE SYSTEMS
Many lower and upper bounds for consecutive systems have been proposed, among the best ones being those of Muselli [38], [39] and of Beiu and Dȃuş [23]. As we shall see, many of the proposed bounds are subject to various constraints on the parameters n, k, p, imposed by 0 ≤ Rel(k, n; p) ≤ 1 for any p ∈ [0, 1]. The original articles have rarely, if at all, discussed these constraints in detail. Therefore, in this section, we will try to shed light on the different ways each and every bound is constrained. Such analyses are necessary in order to allow for a fair comparison of the proposed bounds. We also mention that the bounds to be discussed have time complexity ranging from O(log k) (for simple formulas), through O(log k log n) (formulas including a few exponential terms), up to O(n log k log n) (formulas including sums of exponential terms).
Chiang and Niu [16] were the first to propose upper and lower bounds on consecutive systems.
Proposition 10 (see [16]): Notice that these bounds are valid for any n, k, p. Indeed, both the upper and the lower bound are functions with image in [0, 1]. In particular, when q is close to 0 (p close to 1), the two bounds approach the exact reliability. Also, when q = 0.5 the two bounds converge to the same value, as long as k is at least of order √n. One year later, Salvia [47] proposed new bounds. Proposition 11 (see [47]): Here, some constraints are needed. For the lower bound, we observe that q has to satisfy q ≤ 1/(n − k + 1) in order for the bound to be positive. Using an identical argument, we can show that both the upper and lower bounds in [47] approach the exact value for p close to 1. Also, when q = 0.5, asymptotics for the lower and upper bounds can be derived. A few years after Salvia presented the bounds (17), a new approach for describing consecutive systems was introduced by Chao and Lin [14]. First, it was shown that under some particular constraints and k ≤ 4, the reliability of a consecutive system can be approximated by a Poisson distribution. This approach was followed by Fu [26] and Barbour et al. [3]. We deliberately omit Fu's results because: i) Fu's lower bound equals the lower bound of Chiang and Niu [16]; and ii) the upper bound is not as tight as Chiang and Niu's [16] upper bound. However, it is Barbour, Chryssaphinou, and Roos [3] who showed that the Poisson approximation provides improvements.
Proposition 12 (see [3]): Compared to the previous two bounds, Barbour's method has two drawbacks: i) it is computationally more demanding (a tower of exponentials); and ii) the constraints on the parameters are both stricter and more complicated to derive. One can see from Fig. 2 that for small values of k the valid interval for p gets quite limited.
Considering p_0 = 0.5 as a significant threshold, let us examine what happens in this particular case.
The condition on the lower bound implies p ≥ 1/(k + 1). Using the trivial bound on the reliability for the rest of the definition domain, we set 0 as the lower bound.
Still, the main drawback of Muselli's bounds is their numerical complexity. This comes from the need to evaluate a tower of exponentials in order to determine the bounds (for any value of p or q).
Regarding p_0 = 0.5, the same limits as in the case of Barbour et al. [3] are obtained, supporting the soundness of these bounds. Last, when q is close to 0 (i.e., p close to 1), we notice that the lower bound approaches 1 − O(1/n^α). A different bounding technique was investigated by Beiu and Dȃuş in [6] and [23].
Proposition 15 (see [23]): In Fig. 3, we plot the valid region of the parameters p, k for the bounds from [23] when n = 1000. The red curves are defined by the pairs (p_k, k), where p_k is the solution to an equation involving the function g from [23]. The function g is positive over the interval [0, 1], increasing on [0, 1/(k+1)] and decreasing afterward; it has roots as long as n ≥ n_k, and no roots for n < n_k.
From Proposition 16, we deduce that when n < n_k all values of p ∈ [0, 1] are valid for the bounds in [23]. As one can see from Fig. 3, the nonvalid interval shrinks as k increases. This follows from the fact that g(p, k) is decreasing in k (see the Appendix).
One can put a rather simple limit on the parameter p_k^{(1)} (the blue line in Fig. 3). This yields a condition on k and n for the case when we restrict the devices to satisfy p ≥ 0.5.
Proposition 17: Let n be a positive integer. Then, for any k asymptotically larger than log_2(n), the upper and lower bounds in [23] are valid for any p ≥ 0.5.
Asymptotically, for n → ∞, the condition simplifies. We have computed the lower limit on k for n = 1000 and have obtained k ≥ 9.95. The blue line in Fig. 3 evaluates at k = 9 with p = 0.535 and at k = 10 with p = 0.498. This shows that for k ≥ 10 the blue curve is constantly to the left of the vertical line p = 0.5 (see the brick-wall pattern in Fig. 3).

IV. APPROXIMATING THE COEFFICIENTS OF THE RELIABILITY POLYNOMIAL
First of all, we shall focus on the convexity properties of the coefficients of the reliability polynomial of consecutive systems in Bernstein form. We will prove that: 1) any coefficient can be written as an alternating sum of strictly positive terms; 2) these positive terms form a log-concave sequence; 3) for some particular indices i this sequence is decreasing.
Using all of these, we will set upper and lower bounds on the coefficients, and thus retrieve polynomials which lower and upper bound Rel(k, n; p).

A. Analytical Properties of the Coefficients
An in-depth analysis of the coefficients N_{n,k,i} will reveal, for the first time, shape properties such as concavity. For any positive integers n, k, i, j denote $F_{n,k,i,j} \triangleq \binom{i+1}{j} \binom{n-jk}{i}$. From the definition of N_{n,k,i} and using [8] we have $N_{n,k,i} = \sum_{j=0}^{\lfloor (n-i)/k \rfloor} (-1)^j F_{n,k,i,j}$. Let us analyze the sequence F_{n,k,i,j}. a) Log-concavity and unimodality in j: Lemma 18: $\{F_{n,k,i,j}\}_{j=0,\ldots,\lfloor (n-i)/k \rfloor}$ is log-concave. Lemma 19: For any i ∈ {0, . . ., n} there is an integer ε_{i,n,k} such that, for any j ≥ ε_{i,n,k}, we have F_{n,k,i,j} > F_{n,k,i,j+1}. Typically, for each value i we can determine the threshold j = ε_{i,n,k} where F_{n,k,i,j} changes monotonicity. Let us see how ε_{i,n,k} behaves as a function of i. b) Log-concavity in i: Proposition 20: Let n, k, j be positive integers. The sequence {F_{n,k,i,j}} in i is log-concave (hence unimodal) with maximum at i*_j. The unimodality in i has a straightforward implication for the behavior of the parameter ε_{i,n,k}. Indeed, since the sequence F_{n,k,i,j} is unimodal in i, the value of j where F_{n,k,i,j} changes monotonicity in j will increase with i until it reaches a maximum, and then decrease. In Fig. 4, we plot ε_{i,n,k} for n = 1000 and 4 ≤ k ≤ 20. Notice that ε_{i,n,k} < ε_{i,n,k−1}, and that the maximum of the sequence ε_{i,n,k} in i shifts left, from position i = 250 (k = 4) to position i = 50 (k = 20). This maximum value decreases rather fast at the beginning, 48, 41, 35 (k ∈ {4, 5, 6}), and slower toward the end, 15, 14, 13, 12 (k ∈ {17, 18, 19, 20}). Also, ε depends on n, k, and i, and is always strictly smaller than (n − i)/k. This parameter is important when bounding the coefficients: when ε_{i,n,k} > 0, the first terms of the sequence are increasing; hence, truncated sums cannot be used solely for creating nontrivial upper and lower bounds on N_{n,k,i}. If ε_{i,n,k} = 0, then the sequence F_{n,k,i,j} is decreasing in j.
In this case, one can easily put an upper/lower bound on the coefficients, the question being how large this region is.
The large green region in Fig. 5 represents the (k, i) pairs where ε_{i,n,k} = 0. For n = 100, the percentage of pairs where the condition ε_{i,n,k} = 0 does not hold (computed for 2 ≤ k ≤ 51) is 8.66%. When all values of k are considered, the percentage drops to 4.3%. If n = 500, the percentage of nonvalid pairs is 2%, while for n = 1000 this drops to 1.3%.
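The quantities F_{n,k,i,j} and the threshold ε_{i,n,k} are straightforward to compute exactly; the sketch below (our own naming) also checks the log-concavity claimed by Lemma 18 on a small sweep:

```python
from math import comb

def F(n, k, i, j):
    """F_{n,k,i,j} = C(i+1, j) * C(n - j*k, i), zero when n - j*k < i."""
    m = n - j * k
    return comb(i + 1, j) * (comb(m, i) if m >= i else 0)

def F_sequence(n, k, i):
    return [F(n, k, i, j) for j in range((n - i) // k + 1)]

def epsilon(n, k, i):
    """Smallest e with F_{n,k,i,j} > F_{n,k,i,j+1} for every j >= e
    (trailing zero terms are dropped first)."""
    seq = F_sequence(n, k, i)
    while len(seq) > 1 and seq[-1] == 0:
        seq.pop()
    e = 0
    for j in range(len(seq) - 1):
        if seq[j] <= seq[j + 1]:
            e = j + 1
    return e

def is_log_concave(seq):
    """a_j^2 >= a_{j-1} * a_{j+1} for all interior j (Lemma 18)."""
    return all(seq[j] ** 2 >= seq[j - 1] * seq[j + 1]
               for j in range(1, len(seq) - 1))
```

For instance, for n = 5, k = 4, i = 1 the sequence is 5, 2, which is decreasing from the start, so ε_{1,5,4} = 0.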

B. Bounding the Coefficients
We introduce here techniques for bounding the coefficients of a consecutive system. We will start with bounds valid for any p ∈ [0, 1], obtained for the case ε_{i,n,k} = 0, and afterward restrict to p > 0.5 for the case ε_{i,n,k} > 0.
1) The Case ε_{i,n,k} = 0: As we have previously mentioned, when ε_{i,n,k} = 0 we can bound the coefficients. Let $U_{n,s,i} \triangleq \sum_{j=0}^{2s} (-1)^j F_{n,k,i,j}$ and $L_{n,s,i} \triangleq \sum_{j=0}^{2s-1} (-1)^j F_{n,k,i,j}$. A particularly interesting case is when the sequence F_{n,k,i,j} is decreasing, a fact that determines the values of i, depending on n and k, for which U_{n,s,i} and L_{n,s,i} are upper, respectively lower, bounds for N_{n,k,i}.
Proposition 21: For any n, k, i satisfying $\binom{n}{i} > (i + 1)\binom{n-k}{i}$, the sequence $\{F_{n,k,i,j}\}_{j=0,\ldots,\lfloor (n-i)/k \rfloor}$ is decreasing. Moreover, U_{n,s,i} and L_{n,s,i} are upper and lower bounds for N_{n,k,i}.
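Proposition 21 can be exercised numerically: whenever the stated condition holds, the truncated alternating sums sandwich N_{n,k,i}. A sketch, with names of our own choosing:

```python
from math import comb

def F(n, k, i, j):
    """F_{n,k,i,j} = C(i+1, j) * C(n - j*k, i), zero when n - j*k < i."""
    m = n - j * k
    return comb(i + 1, j) * (comb(m, i) if m >= i else 0)

def N_exact(n, k, i):
    """N_{n,k,i} as the full alternating sum of the F terms."""
    return sum((-1) ** j * F(n, k, i, j) for j in range((n - i) // k + 1))

def U_bound(n, k, i, s):
    """Partial sum ending on a +F term: an upper bound once {F_j} decreases."""
    return sum((-1) ** j * F(n, k, i, j) for j in range(2 * s + 1))

def L_bound(n, k, i, s):
    """Partial sum ending on a -F term: the matching lower bound."""
    return sum((-1) ** j * F(n, k, i, j) for j in range(2 * s))
```

This is the classical alternating-series argument: once the terms decrease, stopping after a positive term overshoots the limit and stopping after a negative term undershoots it.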
Proposition 22: Let r > 3 be an integer. For any k < n/(r − 1) and any i ∈ {i_{n,k} + 1, . . ., n − rk}, we have either r = 2s + 1 or r = 2(s + 1), each case leading to the corresponding bound. It is worth mentioning that as the parameter s is increased, the bounds approach the exact value. This also depends on k; e.g., when n/4 ≤ k < n/3 we have s = 1, while for n/8 ≤ k < n/7 we have s = 3, which implies that there are only a few computations to be carried out for larger values of k. As k decreases, the value of s increases, and the number of coefficients that have to be considered (for these computations) increases accordingly. Still, it is always possible to stop the "bounding process" for any value of k < n/3 at any intermediate step. As an example, for n/8 ≤ k < n/7, instead of taking s = 3, one might consider less sharp bounds by taking s = 2, or even s = 1, hence reducing the computations. Such trade-offs depend on the (accuracy) requirements imposed by designers.
2) The Case ε_{i,n,k} > 0, or From Local to Global Behavior: One possible solution is to clip the values L_{n,s,i}, U_{n,s,i} that fall outside the interval $[0, \binom{n}{i}]$. This method was suggested in [22], by setting $U_{n,s,i} = \min\{U_{n,s,i}, \binom{n}{i}\}$ and $L_{n,s,i} = \max\{L_{n,s,i}, 0\}$.
In Fig. 6, we plot the heatmap of the conditions for the first coefficients when n = 1000 and k = 12. In general, one of the bounds reaches the interval $[0, \binom{n}{i}]$ more rapidly. This suggests that instead of waiting for both bounds to close in on the desired interval, one could stop the computations for one of the bounds well before the minimum value of j_m is achieved. This is exactly where p = 0.5 comes into play.
Corollary 23: Let n, k be positive integers. For any k < n/(r − 1) we have (27). While the first method of bounding (ε_{i,n,k} = 0) looks locally at each and every coefficient, the second bounding technique (ε_{i,n,k} > 0) considers the coefficients globally. Indeed, if in the first case we know that the contribution of each coefficient will provide a valid bound for the sum, in the second case some of the approximated coefficients might be outside the valid interval, while their sum will still be inside the [0, 2^n] co-domain. On top of this, when looking at p ∈ [0.5, 1] we give even more freedom to our approximations. Now, the question waiting for an answer is: what are the values of k for which these bounds hold?
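The clipping recipe of [22] can be sketched end-to-end: clip the truncated sums into [0, C(n,i)] coefficientwise, then evaluate the two resulting polynomials. Function names are ours, and this is an illustration under the stated assumptions, not the paper's exact procedure:

```python
from math import comb

def F(n, k, i, j):
    """F_{n,k,i,j} = C(i+1, j) * C(n - j*k, i), zero when n - j*k < i."""
    m = n - j * k
    return comb(i + 1, j) * (comb(m, i) if m >= i else 0)

def clipped_coefficient_bounds(n, k, s):
    """Truncated alternating sums for every i, clipped into [0, C(n,i)]
    as suggested in [22]; returns (lower, upper) coefficient lists."""
    lower, upper = [], []
    for i in range(n + 1):
        L = sum((-1) ** j * F(n, k, i, j) for j in range(2 * s))
        U = sum((-1) ** j * F(n, k, i, j) for j in range(2 * s + 1))
        lower.append(max(L, 0))
        upper.append(min(U, comb(n, i)))
    return lower, upper

def eval_N_form(coeffs, p):
    """sum_i c_i p^i (1-p)^(n-i) for a coefficient list c_0..c_n."""
    n = len(coeffs) - 1
    return sum(c * p**i * (1 - p)**(n - i) for i, c in enumerate(coeffs))
```

For moderate k (e.g., n = 30, k = 10, s = 1) the two clipped polynomials bracket the exact reliability at every p, since almost all truncated sums are already complete.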

V. SIMULATIONS AND RESULTS
In order to visualize the general trend of these approximations, we have computed Rel(k, n; p) for several values of n and k, as well as the upper and lower bounds enumerated in this article. We have used distinct symbols for Chiang and Niu [16], Salvia [47], Barbour et al. [3], Muselli [38], Dȃuş and Beiu [23], and the bounds we have introduced in this article. Both in our case and in Dȃuş and Beiu's [23], we have computed the first three bounds, i.e., we have chosen L_{n,s,i} and U_{n,s,i} with s = 1, 2, 3.
We have started with n = 100 and considered k = 6, because log_2(100) = 6.64, allowing us to illustrate the local behavior when p is close to 0, 0.5, and 1. As indicated in Section IV, when p is close to 1 all bounds provide very good approximations for Rel [see Fig. 7(c), where the vertical axis is 1 − Rel(6, 100; p)]. In addition, at 0.5 we do know that all bounds, except [16], [48], converge toward the correct value [see Fig. 7(b)]. What remains to be analyzed carefully is the local behavior toward p = 0. As per Fig. 7(a), we see that Chiang and Niu's [16] as well as Muselli's upper bounds outperform all the others. However, our lower bounds outperform all the others starting with the smallest value of s, namely s = 1. In order to get a clearer view, we shall compute and illustrate the local behavior at p = 0 of all the bounds for several different parameters.
For n = 100, we have selected the following four values k = 8, 10, 15, 20, corresponding approximately to k = log₂ n, k = √n, and to k linear in n (e.g., k = n/6), respectively. In Fig. 8, we plot the exact reliability polynomial (yellow line) together with all the upper and lower bounds. For s > 1 our lower bounds are always closer to the exact reliability than all the aforementioned bounds. When k is linear in n [Fig. 8(c), (d)] the only bounds close to ours are those of Dȃuş and Beiu [23]. Indeed, the pointwise distance between these two bounds decreases as p increases. However, for small values of p we significantly outperform the results from [23]. A similar trend can be seen for the upper bounds. In this case Muselli's bounds outperform ours on particular intervals. This happens either because we have constrained our results by s (for reducing computations), or because we bound systems with small k using only a few options. Even in such cases, for k linear in n we outperform the bounds from [38] with the second-term approximation s = 2.
Chiang and Niu's [16] and Muselli's are the only upper bounds outperforming ours when k is sublinear in n [see k = 8, 10 in Fig. 8(a) and (b)]. Although for p = 0.5 and toward p = 1 the bounds approach the correct values [see Fig. 7(b), (c)], we decided to determine the distances between the exact value and the different bounds. Fig. 9 shows the absolute errors between the lower/upper bounds and Rel(10, 100; p). Looking at Fig. 9(a), we notice that our lower bound provides the smallest absolute errors for any value of p. Regarding the upper bounds, we have already seen that for p ≤ 0.1 Muselli's as well as Chiang and Niu's [16] are the best ones [see Fig. 9(b)]. However, for p > 0.1 the absolute errors between our bound and Rel(k, n; p) decrease sharply, making it by far the best, significantly smaller than any of the others, with the Dȃuş and Beiu bounds the only competitive ones.
At the next simulation step, we have increased n to 500 and computed the exact reliability as well as all the upper and lower bounds for k = 10, 20, 40, 60 (i.e., log₂(n) + 1 ≈ 9.96, √n ≈ 22.3, n/12 ≈ 41.6, n/8 = 62.5) in Fig. 10. Our lower bounds outperform all the other lower bounds for p < 0.1 and any k. As k increases our bounds, both lower and upper, become sharper, while the others move away from the exact reliability for small values of p. We notice the same behavior as for n = 100, namely that small values of k are more favorable for the upper bounds of Chiang and Niu [16] and Muselli [see Fig. 10(a), (b)].
A snapshot at p = 0.5 for n = 500 and k increasing from 4 to 14 is presented in Fig. 11. This covers the region of interest for k, going from a small value to log₂ n and toward √n. These numerical results support our theoretical claims from Section III, as follows: 1) Salvia [47] and Chiang and Niu [16] provide weak estimates for values of k in the lower range, improving only for larger values of k; 2) all the other bounds converge to the exact value starting from k = log₂ n; 3) for the smallest values of k the best bounds are one of our lower bounds and one of Muselli's upper bounds. Another interesting fact is that s = 1 is sufficient for our bounds to provide a sharp estimate of the reliability for n = 500 and k ≥ 9, which corresponds to the critical value log₂ n.
For larger values of n, e.g., n = 1000, 2000, 5000, the results reveal the same type of behavior; therefore, we have omitted them. We still want to mention that our bounds get better and better as k becomes larger, even in the simplest case s = 1.
All the simulations presented here, including a few of the theoretical results detailed in the previous sections, are synthesized in compact form in Table I. This gives a glimpse of the advantages and disadvantages of the different bounds, which obviously one would want to be as simple as possible. Still, two other important characteristics of a bound (besides how simple its formula is) are: i) how accurately the bound can be evaluated, and ii) how close such evaluations are to the exact reliability values (over the whole range of p). That is why we have included a column for the accuracy of the formulas (linked to the exponentials which need to be evaluated), as well as for tightness when p is close to 0 and to 0.5. We have omitted a similar column for p close to 1, as in this range all the bounds perform very well [see Fig. 7(c)].

VI. CONCLUSION
Although consecutive systems have been thoroughly investigated for over 40 years, it looks like they might still hold some secrets. For example, it has just been shown that the roots of their reliability polynomials are unbounded [31], and that the roots of circular consecutive systems grow even faster (than those of linear consecutive ones) [32]. In addition, very fresh results have been reported on using Bayesian networks to evaluate the reliability of consecutive systems [52], as well as a fully polynomial-time randomized approximation scheme (FPRAS) for estimating two-terminal reliability [25]. This article follows on this path by revisiting upper and lower bounds from a new angle. More precisely, we have looked at the Bernstein form of the reliability polynomial and decided to closely scrutinize its coefficients.
While reviewing the state of the art of various bounding techniques, we have analyzed in detail the ranges of values of p where different upper and lower bounds are valid. This gives a clearer picture of their applicability, allowing for fairer comparisons.
We have shown that any coefficient of the reliability polynomial of consecutive systems can be written as an alternating sum of the elements of a positive log-concave sequence. Such
analytical properties provide one of the main ingredients for our new upper and lower bounds on the coefficients. As opposed to state-of-the-art techniques, which look at the system globally, our approach to bounding reliability starts from the local bounding of each and every coefficient. Obviously, this is more efficient locally but, as all our simulations reveal, also more accurate globally.
The approximations proposed here depend on two parameters: p (and its range of values), and s (a complexity parameter). While the first imposes a rather limited constraint, the second one brings significant improvements. Numerical results show that when s = 2 or s = 3 our bounds outperform state-of-the-art bounds for small values of k, let alone the case when k is sublinear in n, where s = 1 suffices.

APPENDIX A PROOF OF PROPOSITION 2
Let us begin by stating two useful lemmas. The following notation will be used: let M be a matrix whose elements are indexed as M[i][j]; also, M[i] will denote the ith row of M.
Lemma 25: Let M be a square matrix of size k + 1 defined as in Proposition 2. Then, for all i ∈ {1, …, k}, $M^n[1][i]$ is a polynomial expressed in Bernstein form.
Proof: For an arbitrary k we proceed by induction on n. It is trivial to verify that the result is valid for n = 1 and n = 2. Hence, we assume that our statement is true for all m < n. Now, let us prove that $M^{m+1}$ also satisfies the conditions of our lemma.
The number of arithmetic operations needed for computing each power $(-pq^k)^j$ is given in Table II. A precomputing step evaluates $q^k$ and $-pq^k$, which requires $\log k + 1$ multiplications. Each term can then be obtained by evaluating one binomial coefficient $\binom{n-jk}{j}$ (requiring $2j$ arithmetic operations: $j$ multiplications and $j$ divisions), one multiplication of the previous power by $(-pq^k)$, and one multiplication of these two factors. Notice that $q^k$, $-pq^k$, and $(-pq^k)^j$ have already been computed (see Table II), as well as a significant part of $\binom{n-(j+1)k}{j}$. We still need one multiplication for $\binom{n-(j+1)k}{j} \times (-pq^k)^j$, plus the additions for computing the sum. These make for $\lfloor n/k \rfloor - 1$ multiplications and $\lfloor n/k \rfloor - 1$ additions. Obviously, there is no $n^2/k^2$ term in this second part. Finally, by counting one multiplication of $q^k$ by the second part, and one subtraction, we end up with $O(n^2/k^2)$. We proceed similarly for Proposition 4. Proof: For computing $f_j(n, k; p) = q^{jk} p^j \binom{n-jk}{j} + q^{jk} p^{j-1} \binom{n-jk}{j-1}$, we rewrite this as … This implies that the maximum value of the index $j$ in $N_{n,k,i}$ is $(n - i)/k < r$, from the condition that $k$ satisfies. Also, notice that for any strictly positive integer $l$ and $i \in \{n - lk + 1, \ldots, n - (l - 1)k\}$ we have $j \in \{0, 1, \ldots, l - 1\}$. This implies that Rel(k, n; p) equals …

APPENDIX D PROOF OF PROPOSITION 7
Proof: Any algorithm evaluating Rel(k, n; p) in Bernstein form needs to start by computing all the n + 1 coefficients $N_{n,k,i}$. Unfortunately, using (8) to generate these coefficients is inefficient, as many combinations end up being calculated more than once (for different coefficients).
A better solution is to rely on the generalized Pascal triangle presented in Fig. 1, avoiding repeated computations. In addition, we replace the k − 1 additions (for adding k numbers; see the blue rectangle in Fig. 1) by just two operations: one addition and one subtraction (see the yellow highlights in Fig. 1).

This approach requires about $(n+1)^2/2$ (i.e., the "area" of the triangle) evaluations of two operations each. We could be more accurate, and subtract from $(n+1)^2/2$ those "areas" where no computations are needed, i.e., where coefficients are known to be 1 (first row and column), or 0, or the 1, 2, 3, … sequence (second column), as well as those "areas" where fewer arithmetic operations suffice. Still, these do not change the time complexity of such an approach, which remains $O(n^2)$.
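The triangle-based computation above can be sketched as follows. This is a minimal illustration under our reading of the recurrence (conditioning on the length of the trailing run of failures); for simplicity it keeps the plain window sum of up to k terms, which the one-addition-one-subtraction refinement of Fig. 1 would replace:

```python
from itertools import product

def bernstein_coeffs(n: int, k: int) -> list:
    """N[i] = number of component states with i failed components and no k
    consecutive failures, i.e., the Bernstein coefficients N_{n,k,i},
    built row by row as a generalized Pascal triangle."""
    C = [[0] * (n + 1) for _ in range(n + 1)]
    C[0][0] = 1
    for m in range(1, n + 1):
        for i in range(m + 1):
            # the all-failed state of length m survives only while m < k
            total = 1 if (i == m and m < k) else 0
            # condition on the length j of the trailing run of failures
            # (a k-term window sum; the paper replaces it by one addition
            # and one subtraction per entry)
            for j in range(min(i, k - 1, m - 1) + 1):
                total += C[m - 1 - j][i - j]
            C[m][i] = total
    return C[n]

def rel_bernstein(k: int, n: int, p: float) -> float:
    """Evaluate Rel(k, n; p) = sum_i N_{n,k,i} * p^(n-i) * q^i."""
    q = 1.0 - p
    return sum(c * p ** (n - i) * q ** i
               for i, c in enumerate(bernstein_coeffs(n, k)))
```

With the window sum replaced by the incremental update, each of the roughly $(n+1)^2/2$ entries costs a constant number of operations, matching the $O(n^2)$ claim.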

APPENDIX E PROOF OF PROPOSITION 13
Proof: Substituting p = q = 1/2 into the equation $v_1(n, k, p) - e_1(n, k, p) = 0$ one gets the solution … Taking the Taylor expansion in k one obtains … This also implies that $n > 2\ln(2)\, k 2^k$. Turning this into an equation in n, i.e., $n = 2\ln(2)\, k 2^k$, one gets a lower bound on k as a function of n. Solving $n = 2\ln(2)\, k 2^k$ for k (an equation of the generic type $x e^x = y$) can be done using the LW function. Indeed, the equation can be rewritten as …, which has a single solution $p^*$ in (0, 1). We thus deduce that g(k, p) is increasing for $0 \le p \le p^*$ and decreasing for $p^* \le p \le 1$, with g(k, 0) = g(k, 1) = 0. This establishes the positivity and monotonicity properties of g(k, p). The maximum of g(k, p) is thus $g(k, p^*) = \frac{n-k}{k+1}\left(\frac{k}{k+1}\right)^k$. It follows that g(k, p) crosses the horizontal line y = 1 in two points $p_k^{(0)}, p_k^{(1)}$ as long as its maximum is greater than 1, i.e., when the following inequality holds: $n > k + (k+1)\left(\frac{k+1}{k}\right)^k$. For $n < k + (k+1)\left(\frac{k+1}{k}\right)^k$ the function g(k, p) is always smaller than 1 and, hence, the interval where the bounds from [23] hold is [0, 1].
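The Lambert-W step can be checked numerically. With $u = k\ln 2$, the equation $n = 2\ln(2)\,k2^k$ reads $u e^u = n/2$, so $k = \mathrm{LW}(n/2)/\ln 2$. A small sketch (the helper `lambert_w` is our own plain Newton iteration, not from the paper):

```python
import math

def lambert_w(x: float) -> float:
    """Principal branch of w * e^w = x (x > 0), by Newton iteration."""
    w = math.log(x + 1.0)                     # rough starting point
    for _ in range(100):
        ew = math.exp(w)
        step = (w * ew - x) / (ew * (w + 1.0))
        w -= step
        if abs(step) < 1e-13:
            break
    return w

def critical_k(n: float) -> float:
    """Solve n = 2 ln(2) * k * 2^k for k: with u = k ln(2) the equation
    becomes u * e^u = n / 2, hence u = LW(n / 2)."""
    return lambert_w(n / 2.0) / math.log(2.0)
```

Substituting the returned k back into $2\ln(2)\,k2^k$ recovers n, confirming the change of variable.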

APPENDIX G PROOF OF PROPOSITION 17
Proof: To find the limiting value of k we need to solve the equation …, which admits the solution $k = n - \mathrm{LW}(\ln(2)\, 2^n)/\ln(2)$. Using the first two terms in the asymptotic expansion of LW, we obtain the desired result.
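This solution can be sanity-checked numerically: truncating the asymptotic expansion $\mathrm{LW}(x) \sim \ln x - \ln\ln x$ collapses $n - \mathrm{LW}(\ln(2)\,2^n)/\ln(2)$ to roughly $\log_2 n$. A sketch (again with our own Newton-iteration helper for LW):

```python
import math

def lambert_w(x: float) -> float:
    """Principal branch of w * e^w = x (x > 0), by Newton iteration."""
    w = math.log(x + 1.0)
    for _ in range(100):
        ew = math.exp(w)
        step = (w * ew - x) / (ew * (w + 1.0))
        w -= step
        if abs(step) < 1e-13:
            break
    return w

n = 50
x = math.log(2.0) * 2.0 ** n                  # argument ln(2) * 2^n
k_exact = n - lambert_w(x) / math.log(2.0)    # the stated solution
# first two terms of the asymptotic expansion: LW(x) ~ ln(x) - ln(ln(x));
# algebraically this reduces k to log2(n + log2(ln 2)), i.e., about log2(n)
k_two_terms = n - (math.log(x) - math.log(math.log(x))) / math.log(2.0)
```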

APPENDIX H PROOF OF LEMMA 18
Proof: Define $G_{n,k,i,j} = F_{n,k,i,j+1}/F_{n,k,i,j}$. … This implies that $G_{n,k,i,j}$ is decreasing in j, i.e., $G_{n,k,i,0} > G_{n,k,i,1} > \cdots > G_{n,k,i,\lfloor (n-i)/k \rfloor}$. Using the definition of $G_{n,k,i,j}$, we obtain … This implies that for any j in the domain of definition we have $F_{n,k,i,j-1}\, F_{n,k,i,j+1} < F^2_{n,k,i,j}$, concluding the proof.
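The closed form of $F_{n,k,i,j}$ is not restated in this excerpt; assuming the standard inclusion–exclusion term $F_{n,k,i,j} = \binom{n-i+1}{j}\binom{n-jk}{i-jk}$ (our reading, to be checked against the paper's (8)), both the alternating-sum identity and the log-concavity in j can be verified numerically on small systems:

```python
from itertools import product
from math import comb

def F(n: int, k: int, i: int, j: int) -> int:
    """Assumed inclusion-exclusion term F_{n,k,i,j} (our reconstruction):
    C(n-i+1, j) * C(n-jk, i-jk), taken as 0 outside its support."""
    if j < 0 or i - j * k < 0 or n - j * k < 0 or n - i + 1 < 0:
        return 0
    return comb(n - i + 1, j) * comb(n - j * k, i - j * k)

def N(n: int, k: int, i: int) -> int:
    """Alternating sum giving the Bernstein coefficient N_{n,k,i}."""
    return sum((-1) ** j * F(n, k, i, j) for j in range(i // k + 1))
```

For n = 12, k = 3 the alternating sums reproduce a brute-force count of the working states, and $F_{n,k,i,j-1} F_{n,k,i,j+1} \le F^2_{n,k,i,j}$ holds on the positive part of each sequence.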

APPENDIX I PROOF OF LEMMA 19
Proof: Let us suppose that there is an integer $i_0$ for which no such ε exists. This means that $F_{n,k,i_0,2m} < F_{n,k,i_0,2m+1}$ for any value of $m \in \{0, \ldots, \lfloor (n-i_0)/k \rfloor\}$, so that $\sum_m (F_{n,k,i_0,2m} - F_{n,k,i_0,2m+1}) < 0$, which is impossible. … Hence, for any $i \le i^*_j$ we have $F_{n,k,i+1,j} \ge F_{n,k,i,j}$, while for $i \ge i^*_j$ we have $F_{n,k,i+1,j} \le F_{n,k,i,j}$.

Proof of Proposition 22
Proof: It is straightforward to verify from the properties of $F_{n,k,i,j}$ that $L_{n,s,i}$ and $U_{n,s,i}$ (resp. $U_{n,s-1,i}$) are bounds for $N_{n,k,i}$. Hence, the bounds for the reliability polynomial in Bernstein form can easily be deduced.

APPENDIX K PROOF OF PROPOSITION 24
Proof: Solving $2^n - 2^{n-k}\binom{n-k+2}{2} = 0$ for k gives the desired result. For the asymptotic expansion we use the expansion of LW.
a) Small values of k: Particular cases are known for k = 2 and k = 3. When k = 2 we have $N_{n,2,i} = \binom{n-i+1}{i}$.
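The displayed formula is truncated in our copy; $\binom{n-i+1}{i}$ is our reconstruction (i failures placed in the n − i + 1 gaps between working components, at most one per gap). It is easy to cross-check by brute force on a small system:

```python
from itertools import product
from math import comb

def count_states(n: int, i: int, k: int) -> int:
    """Brute-force count of length-n component states with i failures
    (0 = failed) and no k consecutive failures."""
    return sum(
        1 for s in product("01", repeat=n)
        if s.count("0") == i and "0" * k not in "".join(s)
    )
```

For k = 2 the counts match $\binom{n-i+1}{i}$ for every i, and their total over i is the Fibonacci number $F_{n+2}$, as expected for binary strings avoiding two consecutive zeros.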

Fig. 3. Condition on p, k for n = 1000 (Proposition 15). Green stripes represent the valid region of parameters, while the red curves denote its limit. The blue line is an upper bound on the right part of the red curve.

Fig. 6. Heatmap of $\sum_{j=0}^{j_m} (-1)^j F_{n,k,i,j} \in [0, \binom{n}{i}]$ for n = 1000 and k = 12. Rows represent increasing values of $j_m$, and columns represent the value of the index i. Blue pixels denote valid conditions, magenta denotes negative values of the approximation, while dark magenta denotes values larger than $\binom{n}{i}$.
It is worth mentioning that, although much simpler and computationally more efficient, the Chiang and Niu upper and lower bounds are competitive with respect to Muselli's.


From Table I it looks like, for getting the tightest approximations, one should combine one of the lower bounds introduced in this article with one of Muselli's upper bounds.

Since each entry [1][i] is a polynomial in Bernstein form for any j ∈ {1, …, k}, it is trivial to deduce the desired result. Lemma 26: Let A be a square matrix of size k + 1, s.t. A[1][i] is a polynomial in the Bernstein basis for any i ∈ {1, …, k}.

TABLE II: ARITHMETIC OPERATIONS FOR COMPUTING $\beta_{n,k,\lfloor n/(k+1) \rfloor}$
Proof: Let us begin with the log-concavity. By definition, and after reducing the terms, we obtain $H_{n,k,i,j} \le 1$, which implies that $F_{n,k,i,j}$ is log-concave in i. Computing the solution in i of $H_{n,k,i,j} = 1$ is equivalent to finding the index where the monotonicity changes. The positive ratio is $H_{n,k,i,j} = \frac{F_{n,k,i+1,j}\, F_{n,k,i-1,j}}{F^2_{n,k,i,j}} = \frac{i(i+2)(i-j+1)(n-jk-i)}{(i+1)^2 (n-jk-i)(i-j+2)}$. Since $i(i+2) < (i+1)^2$ we …