What is $1+\tfrac{1}{2}+\tfrac{1}{4}+\tfrac{1}{8}+\tfrac{1}{16}+\dotsb$? One may think of this geometrically: We start with a square of edge length $1$. Its area is given by $1\cdot 1=1$, which is the first summand. Now we take a second square of the same size and repeatedly cut it in half, alternately using horizontal and vertical cuts. The first cut yields a rectangle of area $\tfrac{1}{2}$, then we get a square of area $\tfrac{1}{4}$, in the next step a rectangle of area $\tfrac{1}{8}$, and so on. We may now arrange those rectangles in a specific way:
Combining all areas, we obtain a $1\times 2$ rectangle with total area $2$. Hence, we expect the value of the infinite sum $1+\tfrac{1}{2}+\tfrac{1}{4}+\tfrac{1}{8}+\dotsb$ to be $2$. When considering the partial sums of that infinite sum, we get a similar result:
$1 = 1,\qquad 1+\tfrac{1}{2} = 1.5,\qquad 1+\tfrac{1}{2}+\tfrac{1}{4} = 1.75,\qquad 1+\tfrac{1}{2}+\tfrac{1}{4}+\tfrac{1}{8} = 1.875,\qquad \dotsc$
The partial sums seemingly tend towards $2$. This supports our assumption that $1+\tfrac{1}{2}+\tfrac{1}{4}+\tfrac{1}{8}+\dotsb = 2$ is a sensible definition for this infinite sum.
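As a quick numerical sanity check, one may also compute the first few partial sums with a short Python snippet (a minimal sketch of our own; the function name `partial_sum` is not from the text):

```python
# Partial sums of 1 + 1/2 + 1/4 + 1/8 + ...
def partial_sum(n):
    """Sum of the summands (1/2)^0 + (1/2)^1 + ... + (1/2)^n."""
    return sum((1 / 2) ** k for k in range(n + 1))

for n in range(6):
    print(n, partial_sum(n))
# 1.0, 1.5, 1.75, 1.875, 1.9375, 1.96875 -- the values approach 2
```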
So we just assigned a specific value to a particular infinite sum. Now, we would like to extend this intuitive concept to an exact definition for infinite sums. This opens up a set of questions:
How can we determine the value of an arbitrary infinite sum?
Are there perhaps infinite sums which cannot be assigned a value?
And if so, how can we tell whether a given infinite sum can be assigned a value or not?
In this chapter, we will use the concept of a series to formally define the value of an infinite sum. Series will be defined via partial sums, which are finite and hence easy to evaluate. In the following chapters, we will see that there actually are infinite sums which cannot be assigned a value, and we will derive some criteria for deciding whether an infinite sum can be evaluated or not.
A finite sum (as you may suspect from its name) is nothing else but a sum with finitely many summands. There is an efficient way of denoting such a sum, which we have already seen in Sum and product (missing). Instead of writing
$a_1 + a_2 + a_3 + \dotsb + a_n$,
we used the notation
$\sum_{k=1}^{n} a_k$.
Here, $k$ is the summation index, which assumes all integer values starting from $1$ up to its final value $n$. For each assumed $k$, the expression $a_k$ yields a summand, and all of these summands are finally added up. This principle is made clear in the following animation:
Example (Example for a finite sum)
Let us consider the finite sum
Here, assumes all integer values starting from through . The function assigning to each index its respective summand is given by , so we map . That means, for the summand is , for it is and so on, until we have for . The final sum we obtain is now:
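To make this mechanism concrete, here is a small Python sketch of a finite sum written in sigma notation; the summand $k^2$ and the bounds $1$ to $5$ are chosen purely for illustration and need not match the example above:

```python
# A finite sum in sigma notation: the sum of k^2 for k = 1, ..., 5 (values chosen for illustration)
summands = [k ** 2 for k in range(1, 6)]  # the index k runs through 1, 2, 3, 4, 5
print(summands)       # [1, 4, 9, 16, 25] -- one summand per index value
print(sum(summands))  # 55 -- all summands added up
```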
As we now know how to define finite sums, we may proceed with the formal definition of an infinite sum. We start with the form which seems intuitively most plausible:
$a_1 + a_2 + a_3 + a_4 + \dotsb$
and consider the sequence of partial sums:
$(S_n)_{n\in\mathbb{N}} = \left(a_1,\ a_1+a_2,\ a_1+a_2+a_3,\ \dotsc\right)$
This sequence will be used to define infinite sums. $S_n$ is the sum over the first $n$ summands, and hence a finite sum:
$S_n = a_1 + a_2 + \dotsb + a_n = \sum_{k=1}^{n} a_k$
These partial sums are parts of the infinite sum. Formally, we may define:
Definition (Partial sum)
Let $(a_k)_{k\in\mathbb{N}}$ be a sequence in $\mathbb{R}$. Then, the following sum is called the $n$-th partial sum:
$S_n := \sum_{k=1}^{n} a_k$
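For instance (an illustration of our own, not an example from the text): for the sequence $a_k = \tfrac{1}{k}$, the third partial sum is $S_3 = \sum_{k=1}^{3} \tfrac{1}{k} = 1 + \tfrac{1}{2} + \tfrac{1}{3} = \tfrac{11}{6}$.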
The value of an infinite sum should be the limit of its partial sums:
$a_1 + a_2 + a_3 + \dotsb = \lim_{n\to\infty} S_n = \lim_{n\to\infty} \sum_{k=1}^{n} a_k$
We may first construct the sequence of all possible partial sums and then consider its limit. This sequence of partial sums is defined to be a series. We denote it by $\sum_{k=1}^{\infty} a_k$. This notation is very similar to that of the $n$-th partial sum $\sum_{k=1}^{n} a_k$. But instead of writing down the final index at which the summation has to be stopped, we use the infinity symbol $\infty$, emphasizing that our summation does not end. The formal definition now reads:
Definition (Series)
For a real-valued sequence $(a_k)_{k\in\mathbb{N}}$, the series $\sum_{k=1}^{\infty} a_k$ is defined as the sequence of partial sums $(S_n)_{n\in\mathbb{N}}$:
$\sum_{k=1}^{\infty} a_k := (S_n)_{n\in\mathbb{N}} = \left(\sum_{k=1}^{n} a_k\right)_{n\in\mathbb{N}}$
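The viewpoint "a series is the sequence of its partial sums" can be mirrored in code: a series is then a stream of partial sums rather than a single number. The following Python sketch is our own illustration (the name `partial_sums` is not from the text):

```python
from itertools import islice

def partial_sums(a):
    """Yield the partial sums S_1, S_2, S_3, ... of a sequence given as a function a(k)."""
    total = 0.0
    k = 1
    while True:
        total += a(k)
        yield total
        k += 1

# The series over a_k = (1/2)^(k-1), viewed as a sequence: its first five members
print(list(islice(partial_sums(lambda k: (1 / 2) ** (k - 1)), 5)))
# [1.0, 1.5, 1.75, 1.875, 1.9375]
```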
Next, we identify the outcome of the infinite summation with the limit of the partial sum sequence. This sequence of partial sums is just an ordinary sequence: it either has a limit, or it diverges. If the partial sum sequence diverges, then the infinite sum / series is also said to diverge. In case it actually converges towards a certain limit, then this limit value is also assigned to the infinite sum. Technically, an infinite sum is nothing else but the limit of its partial sum sequence. This limit is denoted by $\sum_{k=1}^{\infty} a_k$ as well:
Definition (Limit of a series)
The limit of a series $\sum_{k=1}^{\infty} a_k$ is the limit of the corresponding partial sum sequence:
$\sum_{k=1}^{\infty} a_k := \lim_{n\to\infty} S_n = \lim_{n\to\infty} \sum_{k=1}^{n} a_k$
Hint
In the article "Cauchy criterion for series" (Cauchy-Kriterium für Reihen) we will see that only almost all summands, i.e. all but finitely many of them, are relevant for the convergence behavior (convergent / divergent). That means: if we change the value of finitely many summands of the series, the convergence behavior stays the same. However, the limit of a convergent series may of course change if we replace some of its summands.
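As a small illustration of our own, take the series from the introduction: changing its first summand from $1$ to $5$ does not affect convergence, but it shifts the limit,
$1 + \tfrac{1}{2} + \tfrac{1}{4} + \tfrac{1}{8} + \dotsb = 2, \qquad\text{whereas}\qquad 5 + \tfrac{1}{2} + \tfrac{1}{4} + \tfrac{1}{8} + \dotsb = 2 - 1 + 5 = 6.$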
As we already noticed, the expression $\sum_{k=1}^{\infty} a_k$ is used to denote either the sequence of partial sums (= the series) or its limit (= the value of the series). This contradicts the basic principle that notations in mathematics should always be unambiguous! The expression $\sum_{k=1}^{\infty} a_k$ cannot describe two different objects (a sequence and a number) at once. Instead, we have to infer the correct meaning from the context. This problem is also treated in the book "Analysis 1" by Otto Forster:
"The symbol $\sum_{k=1}^{\infty} a_k$ may mean two things:
The sequence of partial sums.
In case of convergence, the limit $\lim_{n\to\infty} \sum_{k=1}^{n} a_k$."
– Otto Forster in "Analysis 1"[1], translated from German
If we, for instance, say "the series $\sum_{k=1}^{\infty} a_k$ converges" or talk about "the series $\sum_{k=1}^{\infty} a_k$" in general, then the expression is meant to describe a sequence (first meaning). If, instead, $\sum_{k=1}^{\infty} a_k$ is treated like a number in calculations, then the expression denotes the limit (second meaning). So we need to pay attention to what is meant whenever the expression $\sum_{k=1}^{\infty} a_k$ is used: the sequence of partial sums or its limit.
We just formally defined the idea of an infinite sum:
We defined the sum of the first $n$ summands as the $n$-th partial sum $S_n = \sum_{k=1}^{n} a_k$.
We called the sequence of these partial sums a "series". The limit of this sequence was defined to be the value of our infinite sum.
Example: geometric series with $q = \tfrac{1}{2}$
Let us consider again the infinite sum $1+\tfrac{1}{2}+\tfrac{1}{4}+\tfrac{1}{8}+\dotsb$ given in the beginning. According to our definition, this sum corresponds to the series $\sum_{k=0}^{\infty} \left(\tfrac{1}{2}\right)^k$. First, let us compute its partial sums:
$S_n = \sum_{k=0}^{n} \left(\tfrac{1}{2}\right)^k = 1 + \tfrac{1}{2} + \tfrac{1}{4} + \dotsb + \tfrac{1}{2^n}$
The infinite sum is equivalent to the sequence of those partial sums:
$\sum_{k=0}^{\infty} \left(\tfrac{1}{2}\right)^k = (S_n)_{n\in\mathbb{N}_0} = \left(\sum_{k=0}^{n} \left(\tfrac{1}{2}\right)^k\right)_{n\in\mathbb{N}_0}$
We may directly give the result for any $n$-th partial sum, using the geometric summation formula $\sum_{k=0}^{n} q^k = \frac{1-q^{n+1}}{1-q}$ for $q \neq 1$. Plugging in $q = \tfrac{1}{2}$, this yields:
$S_n = \frac{1-\left(\tfrac{1}{2}\right)^{n+1}}{1-\tfrac{1}{2}} = 2 - \left(\tfrac{1}{2}\right)^{n}$
Hence, our series corresponds to the following sequence:
$\left(2 - \left(\tfrac{1}{2}\right)^{n}\right)_{n\in\mathbb{N}_0} = \left(1,\ \tfrac{3}{2},\ \tfrac{7}{4},\ \tfrac{15}{8},\ \dotsc\right)$
This sequence converges, as $\lim_{n\to\infty} \left(\tfrac{1}{2}\right)^{n} = 0$ (geometric sequence with $|q| = \tfrac{1}{2} < 1$). Therefore, the value of our series equals $2$:
$\sum_{k=0}^{\infty} \left(\tfrac{1}{2}\right)^k = \lim_{n\to\infty} S_n = \lim_{n\to\infty} \left(2 - \left(\tfrac{1}{2}\right)^{n}\right) = 2$
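A quick numerical cross-check of the closed form (a small Python sketch of our own):

```python
# Check that the closed form 2 - (1/2)^n matches the directly computed partial sums
for n in range(6):
    direct = sum((1 / 2) ** k for k in range(n + 1))  # S_n computed term by term
    closed = 2 - (1 / 2) ** n                          # closed form from the geometric sum formula
    print(n, direct, closed)                           # both values agree and tend to 2
```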
As we have seen, a series $\sum_{k=1}^{\infty} a_k$ is the same thing as the sequence of its partial sums $(S_n)_{n\in\mathbb{N}}$. Let us assume that this series converges. That means, the limit $\lim_{n\to\infty} S_n$ exists and equals the value of the series. Hence, $\sum_{k=1}^{\infty} a_k = \lim_{n\to\infty} S_n$.
Let us consider the difference between this limit and the partial sums. The difference between the value of the series (the limit) and the $n$-th partial sum is called the $n$-th remainder. We may also think of it as the "error" made when approximating the value of the series by the $n$-th partial sum.
The formal definition of the $n$-th remainder reads:
Definition ($n$-th remainder of a series)
Let $\sum_{k=1}^{\infty} a_k$ be a series. Then the $n$-th remainder is given by the following series:
$R_n := \sum_{k=n+1}^{\infty} a_k$
The remainders will therefore take the following form:
$R_n = a_{n+1} + a_{n+2} + a_{n+3} + \dotsb$
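Continuing the geometric example from above on our own (there the summation starts at $k=0$, so $R_n$ collects all summands after those of $S_n$), the remainder can even be written down explicitly:
$R_n = \sum_{k=n+1}^{\infty} \left(\tfrac{1}{2}\right)^k = 2 - S_n = 2 - \left(2 - \left(\tfrac{1}{2}\right)^{n}\right) = \left(\tfrac{1}{2}\right)^{n}$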
Now, consider the sequence of remainders $(R_n)_{n\in\mathbb{N}}$. How do we expect this sequence to behave? We already mentioned above that for a convergent series, the remainders should tend to zero, so $\lim_{n\to\infty} R_n = 0$ makes sense. We will prove this claim in the following theorem:
Theorem (sequence of remainders)
Let $\sum_{k=1}^{\infty} a_k$ be a convergent series. Then, the sequence of remainders $(R_n)_{n\in\mathbb{N}}$ converges towards $0$.
Proof (sequence of remainders)
Since the series converges, the limit $S := \sum_{k=1}^{\infty} a_k = \lim_{n\to\infty} S_n$ exists. Now
$R_n = \sum_{k=n+1}^{\infty} a_k = \sum_{k=1}^{\infty} a_k - \sum_{k=1}^{n} a_k = S - S_n$
Using the calculation rules for limits, we obtain
$\lim_{n\to\infty} R_n = \lim_{n\to\infty} \left(S - S_n\right) = S - \lim_{n\to\infty} S_n = S - S = 0$
Therefore, $(R_n)_{n\in\mathbb{N}}$ is a null sequence.
Hint
Usually, it is not possible to write down an explicit formula for the sequence of remainders $(R_n)_{n\in\mathbb{N}}$. However, in many cases one may give estimates or bounds for them. For instance, for alternating series the Leibniz criterion (Leibniz-Kriterium) yields an "error estimate", i.e. an upper bound for the remainder. Taylor series allow for such error estimates as well.
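To illustrate such an error estimate with an example of our own, consider the alternating harmonic series $\sum_{k=1}^{\infty} \frac{(-1)^{k+1}}{k} = \ln 2$. The Leibniz criterion bounds the $n$-th remainder by the first omitted summand, $|R_n| \leq \frac{1}{n+1}$, which the following Python sketch checks numerically:

```python
import math

# Alternating harmonic series: 1 - 1/2 + 1/3 - 1/4 + ... = ln(2)
# Leibniz error estimate: |R_n| <= first omitted summand 1/(n + 1)
value = math.log(2)
for n in (10, 100, 1000):
    S_n = sum((-1) ** (k + 1) / k for k in range(1, n + 1))
    remainder = value - S_n
    print(n, abs(remainder), 1 / (n + 1), abs(remainder) <= 1 / (n + 1))  # bound holds: True
```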