## Limits of Functions

We can easily understand the limits of functions if we clearly understand the limits of sequences. The intuitive picture of the limit of a sequence is that the value of a_n infinitely approaches a constant value \alpha as the index n increases. This picture is precisely characterized by the definition that for any \varepsilon > 0, there exists a natural number N(\varepsilon) such that for all n \geq N(\varepsilon), we have | \, a_n \, - \, \alpha \, | < \varepsilon.

On the other hand, the intuitive picture of the limit of a function is that the value of f(x) infinitely approaches a constant value \alpha as the value of x with x \neq a approaches a. If we regard the variable x as corresponding to the index n, we find that this picture of the limit of a function is essentially the same as that of a sequence. The difference between these pictures is in the phrase “as the value of x with x \neq a approaches a.” In the case of sequences, “as the index n increases” is expressed by the phrase “for all n \geq N(\varepsilon)” since the value of N(\varepsilon) is generally large. In contrast, in the case of functions, “as the value of x with x \neq a approaches a” is expressed by:

0 < | \, x \, - \, a \, | < \delta(\varepsilon),

where \delta denotes the Greek letter called “delta.” Moreover, we consider that \delta(\varepsilon) is small. In fact, recalling that | \, x \, - \, a \, | means the distance between x and a, the inequality | \, x \, - \, a \, | < \delta(\varepsilon) implies that x is in a ball (interval) with radius \delta(\varepsilon) and center a. Therefore, x is close to a if \delta(\varepsilon) is small. In addition, we have x \neq a because of 0 < | \, x \, - \, a \, |. Thus, we arrive at the following definition of the limit of a function:

For any \varepsilon > 0, there exists a positive real number \delta(\varepsilon) such that for all x satisfying 0 < | \, x \, - \, a \, | < \delta(\varepsilon), we have | \, f(x) \, - \, \alpha \, | < \varepsilon.

When f(x) satisfies this condition, we say that f(x) converges to \alpha as x \to a, which is denoted by \displaystyle\lim_{ x \to a} f(x) = \alpha. Moreover, \alpha is called the limit value of f(x) as x \to a. This is the definition of the limit of a function by the epsilon-delta argument. The key point of this definition is that \delta is a function of \varepsilon. The existence of the function \delta(\varepsilon) guarantees the convergence of a function in the same way as that of a sequence.

I understand that \delta is a function of \varepsilon, and that the existence of \delta(\varepsilon) guarantees the convergence of a function. However, I wonder whether the value of \delta(\varepsilon) decreases as \varepsilon decreases. In the case of sequences, the value of N(\varepsilon) increases as \varepsilon decreases.

In general, the value of \delta(\varepsilon) decreases as \varepsilon decreases. I think that the following figure is helpful for understanding this property intuitively.

In order to understand the definition of the limit of a function, we will show \displaystyle\lim_{x \to 2} f(x) = 4, where f(x) = x^2. That is,

for any \varepsilon > 0, there exists a positive real number \delta(\varepsilon) such that for all x satisfying 0 < | \, x \, - \, 2 \, | < \delta(\varepsilon), we have | \, f(x) \, - \, 4 \, | < \varepsilon.

Note the last inequality | \, f(x) \, - \, 4 \, | < \varepsilon in the above condition. Since

| \, f(x) \, - \, 4 \, | = | \, x^2 \, - \, 4 \, | = | \, x \, - \, 2 \, | | \, x \, + \, 2 \, |,

we seek the range of x where | \, x \, - \, 2 \, | \, | \, x \, + \, 2 \, | < \varepsilon holds. We consider that the value of | x + 2 | can be approximated by 4 because we take x in a small neighborhood of 2 when examining the limit of f(x) as x \to 2. Therefore, we assume that | x + 2 | < 5 holds, where we add 1 to 4 as a margin. Noting that | x \, - \, 2 | \, | x + 2 | < 5 | x \, - \, 2 |, we define

\delta(\varepsilon) = \frac{\varepsilon}{5}.

Then, for all x satisfying 0 < | x \, - 2 | < \delta(\varepsilon), we have

|f(x) \, - \, 4| = |x-2| |x+2| < 5 | x-2| < 5 \, \delta(\varepsilon) = 5 \cdot \frac{\varepsilon}{5} = \varepsilon.

The above argument holds under the assumption | x + 2 | < 5. This inequality is equivalent to -5 < x + 2 < 5; i.e., -7 < x < 3. Hence, | x + 2 | < 5 is guaranteed as long as we restrict the range of x to satisfy 1 < x < 3; i.e., | x \, - 2 | < 1. Thus, instead of assuming | x + 2 | < 5, we modify \delta(\varepsilon) as follows:

\delta(\varepsilon) = \min( \frac{\varepsilon}{5}, \ 1 ),

where \min(x, y) denotes the minimum value of x and y. We note that \delta(\varepsilon) \leq \displaystyle\frac{\varepsilon}{5} and \delta(\varepsilon) \leq 1 hold.

Let x satisfy 0 < | \, x \, - \, 2 \, | < \delta(\varepsilon). Then, noting that \delta(\varepsilon) \leq 1, we have | x \, - 2 | < 1, which leads to | x + 2 | < 5. Therefore, we have

|f(x) \, - \, 4| = |x - 2| |x + 2| < 5 | x -2| < 5 \, \delta(\varepsilon) \leq 5 \cdot \frac{\varepsilon}{5} = \varepsilon.

Thus, for any \varepsilon > 0, there exists a positive real number \delta(\varepsilon) = \min( \displaystyle\frac{\varepsilon}{5}, 1 ) such that for all x satisfying 0 < | \, x \, - \, 2 \, | < \delta(\varepsilon), we have | \, f(x) \, - \, 4 \, | < \varepsilon.
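The choice \delta(\varepsilon) = \min( \frac{\varepsilon}{5}, \ 1 ) can also be tested numerically. The following Python sketch (the helper names are ours, not part of the argument) samples points x with 0 < | x \, - \, 2 | < \delta(\varepsilon) and confirms | x^2 \, - \, 4 | < \varepsilon:

```python
# Numerical sanity check of delta(eps) = min(eps/5, 1) for lim_{x->2} x^2 = 4.
# Helper names are illustrative, not part of the text's argument.

def delta(eps):
    return min(eps / 5, 1)

def check(eps, samples=10_000):
    d = delta(eps)
    for k in range(1, samples + 1):
        off = d * k / (samples + 1)       # 0 < off < d
        for x in (2 - off, 2 + off):      # both sides of 2, so x != 2
            if not abs(x**2 - 4) < eps:
                return False
    return True

assert all(check(eps) for eps in (1.0, 0.1, 1e-3, 1e-6))
```

Of course, passing such a check is not a proof; it only illustrates that the inequality | \, f(x) \, - \, 4 \, | < \varepsilon holds at every sampled point.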

Exercise 8: Let f(x) = \sqrt{x}. Show that \displaystyle\lim_{ x \to 1 }f(x) = 1.

In the same way as for the limits of sequences, we can express the definition of the limit of a function by using logic symbols as follows:

^{\forall}\varepsilon >0, \ ^{\exists}\delta(\varepsilon) >0 \ \ s.t. \ \ 0 < | x \, - \, a| < \delta(\varepsilon) \ \Longrightarrow \ | \, f(x) \, - \, \alpha \, | < \varepsilon,

where the symbol “ \Longrightarrow” means “implies.” Therefore, this expression using logic symbols means:

For any \varepsilon > 0, there exists a positive real number \delta(\varepsilon) such that 0 < | \, x \, - \, a \, | < \delta(\varepsilon) implies | \, f(x) \, - \, \alpha \, | < \varepsilon,

which is superficially different from the previous definition.

Why do we use the logical symbol “ \Longrightarrow” when we express the definition of the limit of a function by using logic symbols?

I think that the part after s.t. should be expressed by

^{\forall}x \in \{ \, x \, | \, 0 < | x \, - \, a| < \delta(\varepsilon) \, \}, \ \ | \, f(x) \, - \, \alpha \, | < \varepsilon.

This expression is precise, and prevents misunderstanding and confusion when we consider the negation of the definition of the limit of a function. However, the above expression using the symbols of set theory may be unnecessarily complicated. Therefore, the expression “ 0 < | x \, - \, a| < \delta(\varepsilon) \ \Longrightarrow \ | \, f(x) \, - \, \alpha \, | < \varepsilon” is widely used by convention. Similarly, in the case of the limits of sequences, the expression “ n \geq N(\varepsilon) \Longrightarrow | a_n \ - \ \alpha | < \varepsilon” is widely used by convention.

Certainly. Recalling how to negate a statement, I see that the negation of the definition of the limit of a function is given by

^{\exists}\varepsilon >0, \ ^{\forall}\delta >0, \ ^{\exists}x(\delta) \in \{ \, x \, | \, 0 < | x \, - \, a| < \delta \, \}, \ | \, f(x) \, - \, \alpha \, | \geq \varepsilon.

Probably, I could not have obtained this correct result if I had used the expression with the symbol “ \Longrightarrow” for the definition of the limit of a function.
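As a concrete illustration of the negated statement (our example, not one from the text), consider f(x) = x/|x| near a = 0 with the candidate limit \alpha = 1. Taking \varepsilon = \frac{1}{2}, for every \delta > 0 the witness x(\delta) = -\frac{\delta}{2} lies in the punctured \delta-neighborhood of 0, yet | \, f(x(\delta)) \, - \, 1 \, | = 2 \geq \varepsilon:

```python
# Illustration (our example, not from the text): witnessing the negation for
# f(x) = x/|x|, a = 0, candidate limit alpha = 1.  With eps = 1/2 fixed, for
# every delta > 0 we exhibit x(delta) with 0 < |x - 0| < delta and
# |f(x) - alpha| >= eps, so lim_{x->0} f(x) = 1 fails.

def f(x):
    return x / abs(x)          # +1 for x > 0, -1 for x < 0 (undefined at 0)

eps, alpha = 0.5, 1.0

def witness(delta):
    return -delta / 2          # x(delta): inside the punctured neighborhood

for delta in (1.0, 0.1, 1e-4, 1e-9):
    x = witness(delta)
    assert 0 < abs(x - 0) < delta
    assert abs(f(x) - alpha) >= eps    # |(-1) - 1| = 2 >= 1/2
```

Note that the witness x(\delta) is a function of \delta, exactly as in the negated statement above.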

In the case of sequences, it was shown that \displaystyle\lim_{n \to \infty} (a_n + b_n) = \alpha + \beta if \displaystyle\lim_{n \to \infty} a_n = \alpha and \displaystyle\lim_{n \to \infty} b_n=\beta. In the case of functions, applying the same lines of argument, we can obtain a similar result. Here, we use the above expression using logic symbols for the definition of the limit of a function.

Example 3: Show that \displaystyle\lim_{x \to a} (f(x) + g(x)) = \alpha +\beta if \displaystyle\lim_{x \to a} f(x) = \alpha and \displaystyle\lim_{x \to a} g(x) = \beta.

[Solution] Let h(x) = f(x) + g(x). We will show that \displaystyle\lim_{x \to a} h(x) = \alpha +\beta; i.e.,

^{\forall}\varepsilon>0, \ ^{\exists}\delta(\varepsilon) >0 \ \ s.t. \ \ 0 < | x \, - \, a | < \delta(\varepsilon) \ \Longrightarrow \ | h(x) \, - \, (\alpha+\beta) | < \varepsilon.

Our goal is to prove the existence of a function \delta(\varepsilon).

From the given conditions \displaystyle\lim_{x \to a} f(x) = \alpha and \displaystyle\lim_{x \to a} g(x) = \beta, we have

^{\forall}\varepsilon_1>0, \ ^{\exists}\delta_1(\varepsilon_1) >0 \ \ s.t. \ \ 0 < | x \, - \, a| < \delta_1(\varepsilon_1) \ \Longrightarrow \ |f(x) \, - \, \alpha| < \varepsilon_1

and

^{\forall}\varepsilon_2>0, \ ^{\exists}\delta_2(\varepsilon_2) > 0 \ \ s.t. \ \ 0 < | x \, - \, a| < \delta_2(\varepsilon_2)\ \Longrightarrow \ |g(x) \, - \, \beta| < \varepsilon_2.

In order to prove the existence of \delta(\varepsilon), we define \delta(\varepsilon) by using \delta_1(\varepsilon_1) and \delta_2(\varepsilon_2), whose existence has already been guaranteed.

Since \varepsilon_1 and \varepsilon_2 are arbitrary, taking \varepsilon_1 = \displaystyle\frac{\varepsilon}{2} and \varepsilon_2 = \displaystyle\frac{\varepsilon}{2}, we have

^{\exists}\delta_1(\displaystyle\frac{\varepsilon}{2}) >0 \ \ s.t. \ \ 0 < | x \, - \, a| < \delta_1(\displaystyle\frac{\varepsilon}{2}) \ \Longrightarrow \ |f(x) \, - \, \alpha| < \displaystyle\frac{\varepsilon}{2}

and

^{\exists}\delta_2(\displaystyle\frac{\varepsilon}{2}) >0 \ \ s.t. \ \ 0 < | x \, - \, a| < \delta_2(\displaystyle\frac{\varepsilon}{2}) \ \ \Longrightarrow |g(x) \, - \, \beta| < \displaystyle\frac{\varepsilon}{2}.

For ^{\forall}\varepsilon >0, we define

\delta(\varepsilon) = \min( \delta_1(\displaystyle\frac{\varepsilon}{2}), \ \delta_2(\displaystyle\frac{\varepsilon}{2}) ).

Then, noting \delta(\varepsilon) \leq \delta_1(\displaystyle\frac{\varepsilon}{2}) and \delta(\varepsilon) \leq \delta_2(\displaystyle\frac{\varepsilon}{2}), it follows from 0 < | x \, - \, a| < \delta(\varepsilon) that 0 < | x \, - \, a| < \delta_1(\displaystyle\frac{\varepsilon}{2}) and 0 < | x \, - \, a| < \delta_2(\displaystyle\frac{\varepsilon}{2}). Therefore, for all x satisfying 0 < | x \, - \, a| < \delta(\varepsilon), we have

\begin{array}{l} {\large|} h(x) \, - \, (\alpha + \beta) {\large|} \\[2ex] \ \ \ = {\large|} (f(x) + g(x) ) \, - \, (\alpha + \beta) {\large|} \\[2ex] \ \ \ = {\large|} (f(x) \, - \, \alpha ) \, + \, ( g(x) \, - \, \beta) {\large|} \\[2ex] \ \ \ \leq {\large|} f(x) \, - \, \alpha {\large|} + {\large|} g(x) \, - \, \beta {\large|} \\[2ex] \ \ \ < \displaystyle\frac{\varepsilon}{2} +\displaystyle\frac{\varepsilon}{2} = \varepsilon. \end{array}

This completes the proof.

In the above solution, we defined the function \delta(\varepsilon) by using the functions \delta_1(\varepsilon_1) and \delta_2(\varepsilon_2), following the same strategy that was used in the argument for sequences. However, note the use of “ \min” in the definition of \delta(\varepsilon), which differs from the “ \max” used in the definition of N(\varepsilon) for sequences.
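The solution is constructive, so it can be illustrated numerically. As a concrete example (the functions are our own choice, not part of the solution), take f(x) = x^2 and g(x) = 3x with a = 2, so \alpha = 4 and \beta = 6. Using \delta_1 from the earlier x^2 argument and a direct \delta_2 for 3x, the combined \delta(\varepsilon) = \min( \delta_1(\frac{\varepsilon}{2}), \ \delta_2(\frac{\varepsilon}{2}) ) can be checked in Python:

```python
# Concrete illustration (our choice of functions, not from the solution):
# f(x) = x^2, g(x) = 3x, a = 2, so alpha = 4, beta = 6, alpha + beta = 10.
# delta1 comes from the earlier x^2 argument; delta2 from |3x - 6| = 3|x - 2|.

def delta1(e):
    return min(e / 5, 1)

def delta2(e):
    return e / 3

def delta(eps):
    # the combined delta(eps) = min(delta1(eps/2), delta2(eps/2))
    return min(delta1(eps / 2), delta2(eps / 2))

def check(eps, samples=5_000):
    d = delta(eps)
    for k in range(1, samples + 1):
        off = d * k / (samples + 1)       # 0 < off < d
        for x in (2 - off, 2 + off):
            if not abs((x**2 + 3 * x) - 10) < eps:
                return False
    return True

assert all(check(eps) for eps in (1.0, 0.01, 1e-5))
```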

Exercise 9: Let \displaystyle\lim_{x \to a} f(x) = \alpha and \displaystyle\lim_{x \to a} g(x) = \beta.

(1) Show that there exists M > 0 such that the following assertion holds:

There exists \rho > 0 such that for all x satisfying 0 < | x \, - \, a| < \rho, we have | f(x) | < M.

(2) Show \displaystyle\lim_{x \to a} f(x)g(x) = \alpha\beta.

## Continuity of Functions

When the graph of a function is not cut off and does not have a hole, we say that the function is continuous. To be precise, we say that a function f(x) is continuous at x = a if

(A) the value of f(x) is defined at x = a,

and

(B) the limit value \displaystyle\lim_{x \to a} f(x) exists, and \displaystyle\lim_{x \to a} f(x) = f(a) holds.

For example, the function presented in Figure 1 is continuous at x = a because the limit value \displaystyle\lim_{x \to a} f(x) exists and \displaystyle\lim_{x \to a} f(x) = f(a) holds; the graph of the function is not cut off and does not have a hole at x = a. In contrast, the functions presented in Figures 2 and 3 are not continuous at x = a. In the case of Figure 2, the limit value \displaystyle\lim_{x \to a} f(x) does not exist; the graph of the function is cut off at x = a. In the case of Figure 3, \displaystyle\lim_{x \to a} f(x) exists, but \displaystyle\lim_{x \to a} f(x) = f(a) does not hold; the graph of the function has a hole at x = a.

By using the definition of the limit of a function, the above condition (B), that the limit value \displaystyle\lim_{x \to a} f(x) exists and \displaystyle\lim_{x \to a} f(x) = f(a) holds, is given by:

For any \varepsilon > 0, there exists a real number \delta(\varepsilon) > 0 such that for all x satisfying 0 < | \, x \, - \, a \, | < \delta(\varepsilon), we have | \, f(x) \, - \, f(a) \, | < \varepsilon.

However, noting that the value of f(x) is defined at x = a, the last inequality in this statement | \, f(x) \, - \, f(a) \, | < \varepsilon holds at x = a. In fact, we have | \, f(a) \, - \, f(a) \, | = 0 < \varepsilon. Therefore, the above statement can be slightly modified as follows:

For any \varepsilon > 0, there exists a real number \delta(\varepsilon) > 0 such that for all x satisfying | \, x \, - \, a \, | < \delta(\varepsilon), we have | \, f(x) \, - \, f(a) \, | < \varepsilon.

This is exactly the condition that f(x) is continuous at x = a when the value of f(x) is defined at x = a.

In order to understand the definition of the continuity of a function, we will show that f(x) = x^3 is continuous at x = a. Since it is clear that the value of f(x) is defined for all x, our goal is to show the following:

For any \varepsilon > 0, there exists a real number \delta(\varepsilon) > 0 such that for all x satisfying | \, x \, - \, a \, | < \delta(\varepsilon), we have | \, f(x) \, - \, f(a) \, | < \varepsilon.

As was explained before, noting the last inequality in the above assertion | \, f(x) \, - \, f(a) \, | < \varepsilon, we seek the range of x where this inequality holds. Since

| \, f(x) \, - \, f(a) \, | = | \, x^3 \, - \, a^3 \, | = | \, x \, - \, a \,| | \, x^2 \, + \, xa \, + \, a^2 \,|,

we must seek the range of x where |x \, - \, a| | x^2 + xa + a^2 | < \varepsilon holds.

When x is close to a, we can consider that | x^2 + xa + a^2 | is approximated by 3a^2. Therefore, when x is close to a, we can consider that

| x^2 + xa + a^2 | < 3a^2 + 1

holds, where we add 1 to 3a^2 as a margin. We define

\delta(\varepsilon) = \frac{\varepsilon}{3a^2 + 1}.

Then, for all x satisfying | x \, - \, a | < \delta(\varepsilon), we have

| \, f(x) \, - \, f(a) \, | = | \, x \, - \, a \,| | \, x^2 \, + \, xa \, + \, a^2 \, | < \delta(\varepsilon) \cdot (3a^2 + 1 ) = \varepsilon.

This result is obtained by using the inequality | x^2 + xa + a^2 | < 3a^2 + 1. However, we have not yet specified the range of x where this inequality holds.

Suppose | x \, - \, a | < \varepsilon. Then, we have |x| < |a| + \varepsilon because x is in a ball (interval) with radius \varepsilon and center a. Therefore, we have

\begin{array}{l} |x^2 + xa + a^2| \leq |x^2| + |xa| + |a^2| \\[2ex] \ \ \ \ < (|a| + \varepsilon)^2 + |a|(|a| + \varepsilon) + a^2 \\[2ex] \ \ \ \ = |a|^2 + 2 |a| \varepsilon + \varepsilon^2 + |a|^2 + |a| \varepsilon + a^2 \\[2ex] \ \ \ \ = 3a^2 + 3|a|\varepsilon + \varepsilon^2. \end{array}

Moreover, if \varepsilon < 1, then, by \varepsilon^2 < \varepsilon, we have

\begin{array}{l} |x^2 + xa + a^2| < 3a^2 + 3|a| \varepsilon + \varepsilon^2 \\[2ex] \ \ \ \ < 3a^2 + 3|a| \varepsilon + \varepsilon \\[2ex] \ \ \ \ = 3a^2 + (3|a| + 1) \varepsilon. \end{array}

Consequently, noting that (3|a| + 1) \varepsilon <1 implies \varepsilon < 1, when 0< \varepsilon < \displaystyle\frac{1}{3|a|+1}, we see that | x \, - \, a | < \varepsilon implies | x^2 + xa + a^2 | < 3a^2 + 1.

Thus, for any \varepsilon satisfying 0< \varepsilon < \displaystyle\frac{1}{3|a|+1}, we define

\delta(\varepsilon) = \frac{\varepsilon}{3a^2 + 1}.

Then, | x \, - \, a | < \delta(\varepsilon) implies | x \, - \, a | < \varepsilon by \displaystyle\frac{\varepsilon}{3a^2+1} \leq \varepsilon. Therefore, for all x satisfying | x \, - \, a | < \delta(\varepsilon), we have | x^2 + xa + a^2 | < 3a^2 + 1, which leads to

\begin{array}{l} | f(x) \, - \, f(a) | = | x \, - \, a| |x^2 + xa + a^2| \\[2ex] \ \ \ \ < | x \, - \, a| (3a^2 + 1) \\[2ex] \ \ \ \ < \delta(\varepsilon) (3a^2 + 1) \\[2ex] \ \ \ \ = \displaystyle\frac{\varepsilon}{3a^2 + 1} \cdot (3a^2 + 1) = \varepsilon. \end{array}

Hence, we see that f(x) = x^3 is continuous at x = a. It should be noted that the restriction 0< \varepsilon < \displaystyle\frac{1}{3|a|+1} is imposed on \varepsilon. As seen in the case of sequences, there is no problem in imposing such a restriction on \varepsilon, because only small values of \varepsilon matter: if the defining condition holds for every sufficiently small \varepsilon, it automatically holds for larger \varepsilon as well.
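The choice \delta(\varepsilon) = \displaystyle\frac{\varepsilon}{3a^2 + 1} can be sanity-checked numerically for several values of a, respecting the restriction 0 < \varepsilon < \frac{1}{3|a|+1}. This Python sketch (not part of the proof) does so:

```python
# Numerical sanity check (not part of the proof) of delta(eps) = eps/(3a^2 + 1)
# for the continuity of f(x) = x^3 at x = a, with eps restricted to
# 0 < eps < 1/(3|a| + 1) as in the argument above.

def delta(eps, a):
    assert 0 < eps < 1 / (3 * abs(a) + 1)
    return eps / (3 * a**2 + 1)

def check(a, eps, samples=5_000):
    d = delta(eps, a)
    for k in range(1, samples + 1):
        off = d * k / (samples + 1)       # 0 < off < d
        for x in (a - off, a + off):
            if not abs(x**3 - a**3) < eps:
                return False
    return True

assert all(check(a, eps=0.1 / (3 * abs(a) + 1)) for a in (-3.0, 0.0, 0.5, 10.0))
```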

From the definition of the continuity of a function, I understand that \delta is a function of \varepsilon because \delta depends on \varepsilon. However, in the above example, we define \delta = \displaystyle\frac{\varepsilon}{3a^2 + 1}. Therefore, I think that \delta can be regarded as a function of a as well as of \varepsilon.

That’s a very good point. You are completely right.

It is usual that a function is defined on a set on the real line {\bf R}. This set is called the “domain.” In many cases, the domain of a function is given by an interval on {\bf R}. For example, when the domain of a function f(x) is the interval I = [a, \, b], the interval I is given by the set

[a, \, b] = \{ \, x \, | \, a \leq x \leq b \, \}.

In this case, the value of the function f(x) is defined for x satisfying a \leq x \leq b. We can define a function on an interval that does not include its endpoints, such as

(a, \, b) = \{ \, x \, | \, a < x < b \, \}.

Moreover, the real line denoted by {\bf R} can be regarded as an interval, and is often expressed by {\bf R } = (-\infty, \, \infty).

Suppose that the domain of a function f(x) is an interval I. Then, we say that f(x) is continuous on I if for any a \in I, f(x) is continuous at x = a. That is, the definition that f(x) is continuous on I is as follows:

For any a \in I and any \varepsilon > 0, there exists a real number \delta(\varepsilon ; a) > 0 such that for all x \in I satisfying | \, x \, - \, a \, | < \delta(\varepsilon ; a), we have | \, f(x) \, - \, f(a) \, | < \varepsilon.

It should be noted that \delta is a function of \varepsilon and a in this definition. For example, the function f(x) = x^3 is continuous on {\bf R}. In fact, as was explained above, for any a \in {\bf R} and for any \varepsilon > 0 satisfying 0< \varepsilon < \displaystyle\frac{1}{3|a|+1}, there exists a real positive number \delta(\varepsilon ; a) defined by

\delta(\varepsilon ; a) = \displaystyle\frac{\varepsilon}{3a^2 + 1}

such that for all x satisfying | \, x \, - \, a \, | < \delta(\varepsilon ; a), we have | \, f(x) \, - \, f(a) \, | < \varepsilon.

I understand that when a function f(x) is continuous on an interval I, \delta is a function of a \in I and \varepsilon > 0. However, I wonder whether there exist functions for which \delta can be taken as a function of \varepsilon only.

Yes, there are. When \delta is a function of \varepsilon only, we say that f(x) is uniformly continuous on I. As seen in the following exercise, there are many uniformly continuous functions. The uniform continuity of functions often plays an important role in advanced mathematical analysis.
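As a simple illustration (our example, not one from the text), f(x) = 2x + 1 is uniformly continuous on {\bf R}: since | \, f(x) \, - \, f(a) \, | = 2 | \, x \, - \, a \, |, the single choice \delta(\varepsilon) = \frac{\varepsilon}{2} works at every point a simultaneously. A numerical sketch:

```python
# Illustration (our example): f(x) = 2x + 1 is uniformly continuous on R,
# since |f(x) - f(a)| = 2|x - a| < eps whenever |x - a| < eps/2; the same
# delta(eps) = eps/2 works at every point a simultaneously.

def f(x):
    return 2 * x + 1

def delta(eps):
    return eps / 2             # depends on eps only: uniform continuity

def check(eps, points=(-1e6, -1.0, 0.0, 3.5, 1e6), samples=1_000):
    d = delta(eps)
    for a in points:           # widely spread points, one shared delta
        for k in range(1, samples + 1):
            off = d * k / (samples + 1)
            for x in (a - off, a + off):
                if not abs(f(x) - f(a)) < eps:
                    return False
    return True

assert all(check(eps) for eps in (1.0, 1e-3))
```

Contrast this with f(x) = x^3 above, where \delta(\varepsilon ; a) shrinks as |a| grows.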

Exercise 10: Let f(x) = x^2.

(1) Show that f(x) is continuous at any p \in {\bf R}.

(2) Show that f(x) is uniformly continuous on an interval [a, b], where a and b are finite real numbers.

(3) Is f(x) uniformly continuous on {\bf R}?

## Infimum and Supremum

Consider the set

S = \{ \ 1, \, \frac{1}{2}, \, \frac{1}{3}, \, \frac{1}{4}, \ \cdots \ \ \} = \{ \ \frac{1}{n} \ | \ n = 1, \, 2, \, 3, \cdots \ \ \}.

It is easy to see that the maximum of S is 1. In fact, 1 is an element of S, and \displaystyle\frac{1}{n} \leq 1 holds for all \displaystyle\frac{1}{n} \in S. In contrast, the minimum of S does not exist. However, we can consider that 0 plays almost the same role as the minimum, because there is no element of S less than 0, and there exists an element of S in any small neighborhood of 0. Nevertheless, 0 is not the minimum of S, because 0 is not an element of S; it is only the lower limit of S.

Let A be a set on the real line {\bf R}. We say that \ell \in {\bf R} is the infimum of A if

(I) for all x \in A, we have x \geq \ell,

and

(II) for any \varepsilon > 0, there exists x(\varepsilon) \in A such that x(\varepsilon) < \ell + \varepsilon.

The infimum of A is denoted by \inf A. Note that x(\varepsilon) in condition (II) depends on \varepsilon; i.e., x(\varepsilon) is a function of \varepsilon.

It follows from (I) that there is no element of A less than \ell. Moreover, noting (I), we see that x(\varepsilon) \in A in condition II satisfies

\ell \leq x(\varepsilon) < \ell + \varepsilon,

which intuitively means that x(\varepsilon) exists between \ell and \ell + \varepsilon. Therefore, x(\varepsilon) can be made arbitrarily close to \ell by taking \varepsilon sufficiently small. In other words, condition II is crucial for the assertion that \ell is the lower limit of A.

It should be noted that the infimum of A is not necessarily an element of A. This implies that the conditions for the infimum are looser than those for the minimum, because the minimum of A must be an element of A. In other words, if we add \ell \in A to the above conditions of the infimum, then \ell would be the minimum of A because \ell would be an element of A such that x \geq \ell holds for all x \in A.

Here, we confirm that if \ell is the minimum of A, then \ell is the infimum of A. It is clear that condition I holds. Moreover, for any \varepsilon > 0, we define

x(\varepsilon) = \ell \in A.

Then, we have x(\varepsilon) < \ell + \varepsilon, which implies that condition II holds.

I think that it is not easy to hit upon the idea of defining x(\varepsilon) = \ell for every \varepsilon, since this choice does not depend on \varepsilon at all.

That makes sense. I think that this constant choice is a blind spot for beginners.

In summary, the minimum of a set is the infimum of the set. In contrast, the infimum of a set is not necessarily the minimum of the set. However, the infimum of a set is also the minimum of the set if the infimum is an element of the set.

We now verify that 0 is the infimum of the set S = \{ \ 1, \, \displaystyle\frac{1}{2}, \, \displaystyle\frac{1}{3}, \, \displaystyle\frac{1}{4}, \ \cdots \ \ \}. Since we have

0 < \displaystyle\frac{1}{n}

for any \displaystyle\frac{1}{n} \in S, condition I for the infimum holds. Moreover, for any \varepsilon > 0, we define n(\varepsilon) as the minimum natural number greater than \displaystyle\frac{1}{\varepsilon}. Then, \displaystyle\frac{1}{n(\varepsilon)} \in S, and by using n(\varepsilon) > \displaystyle\frac{1}{\varepsilon}, we have

\frac{1}{n(\varepsilon)} < \varepsilon = 0 + \varepsilon.

Therefore, condition II for the infimum holds. Thus, we see \inf S = 0.
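The witness n(\varepsilon) above can be computed directly: the minimum natural number greater than \frac{1}{\varepsilon} is \lfloor \frac{1}{\varepsilon} \rfloor + 1. The following Python check (illustrative only) confirms that \frac{1}{n(\varepsilon)} always falls between 0 and 0 + \varepsilon:

```python
import math

# The witness n(eps) from the argument above: the minimum natural number
# strictly greater than 1/eps, so that 1/n(eps) < 0 + eps.

def n(eps):
    return math.floor(1 / eps) + 1

for eps in (2.0, 0.5, 0.1, 0.003):
    k = n(eps)
    assert k >= 1 and k > 1 / eps     # k is a natural number greater than 1/eps
    assert 0 < 1 / k < eps            # 1/k is an element of S inside (0, eps)
```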

Similarly, we say that m \in {\bf R} is the supremum of A if

(I) for all x \in A, we have x \leq m,

and

(II) for any \varepsilon > 0, there exists x(\varepsilon) \in A such that x(\varepsilon) > m \, - \, \varepsilon.

The supremum of A, denoted by \sup A, is not necessarily an element of A. If we add m \in A to the above conditions, then m is the maximum of A. Moreover, the maximum of A is the supremum of A.

Exercise 11: Find the infimum and supremum of the set S = \{ \ \displaystyle\frac{n}{n+1} \ | \ n = 1, \, 2, \, 3, \cdots \ \ \}.

Finally, we explain the greatest lower bound and the least upper bound. In what follows, we use simplifications utilizing logic symbols to describe statements.

Let S be a set on the real line {\bf R}. We say that S is bounded below if

^{\exists}a \in {\bf R} s.t. ^{\forall}x \in S, \ x \geq a

holds, where a is said to be a lower bound of S. Similarly, we say that S is bounded above if

^{\exists}b \in {\bf R} s.t. ^{\forall}x \in S, \ x \leq b

holds, where b is said to be an upper bound of S. Moreover, we say that S is bounded if S is bounded both below and above. For example,

S = \{ \ x \ | \ x > 0 \} is bounded below.

S = \{ \ x \ | \ x < 1 \} is bounded above.

S = \{ \ x \ | \ 0< x < 1 \} is bounded.

When S is bounded below, we say that \ell \in {\bf R} is the greatest lower bound of S if \ell is the maximum of the set consisting of the lower bounds of S:

S_L = \{ \ x_a \ | \ ^{\forall}x \in S, \ x_a \leq x \ \}.

Similarly, when S is bounded above, we say that m \in {\bf R} is the least upper bound of S if m is the minimum of the set consisting of the upper bounds of S:

S_U = \{ \ x_b \ | \ ^{\forall}x \in S, \ x_b \geq x \ \}.

Example 4: Let S be a set on the real line {\bf R} that is bounded below. Show that if the infimum of S exists, then the greatest lower bound of S exists, and coincides with the infimum of S.

[Solution] Let \ell = \inf S. We will show that \ell is the maximum of S_L, the set consisting of the lower bounds of S. It follows from condition I for the infimum that

^{\forall}x \in S, \ \ell \leq x,

which leads to \ell \in S_L. Next, we will show

^{\forall}x_a \in S_L, \ x_a \leq \ell.

Assume that this assertion is false. Then,

^{\exists}x' \in S_L s.t. \ \ell < x',

which means that there exists x' \in S_L satisfying \ell < x'. By using x' \in S_L, we see that

^{\forall}x \in S, \ x' \leq x

holds. Let \varepsilon' = x' \, - \, \ell > 0. Then, we cannot take x \in S satisfying

x < \ell + \varepsilon' = x',

which contradicts condition II for the infimum. Therefore, the assertion ^{\forall}x_a \in S_L, \ x_a \leq \ell \, is true. Thus, \ell is the maximum of S_L. This completes the proof.

Exercise 12: Let S be a set on the real line {\bf R} that is bounded below. Show that if the greatest lower bound of S exists, then the infimum of S exists, and coincides with the greatest lower bound of S.

Suppose that S is a set on the real line {\bf R} that is bounded below. From the above example and exercise, we see that if either the infimum of S or the greatest lower bound of S exists, then the other exists, and they coincide. The intuitive picture of the process of finding the infimum of a set is to find the lower limit from inside the set. In contrast, that of finding the greatest lower bound of a set is to put a lid on the set perfectly from the outside.

When a set S is bounded below, can we prove the existence of the infimum or the greatest lower bound of S?

No. In general, we cannot prove the existence of the infimum or the greatest lower bound, and hence must admit it as an axiom. This is called the completeness axiom of real numbers. For more details, you should consult advanced calculus textbooks.

Similarly, when S is a set on the real line {\bf R} that is bounded above, we see that if either the supremum of S or the least upper bound of S exists, then the other exists, and they coincide.

Exercise 13: Let A and B be bounded sets on {\bf R}. Show that \inf B \leq \inf A and \sup A \leq \sup B if A \subset B.

Exercise 14: Let \{ a_n \} be a decreasing sequence; i.e., \{ a_n \} satisfies a_1 \geq a_2 \geq a_3 \geq \cdots \geq a_n \geq \cdots.

Show that \displaystyle\lim_{n \to \infty} a_n = \alpha if \{ a_n \} is bounded below, where \alpha is given by \alpha = \inf\{ \ a_n \ | \ n = 1, 2, 3, \cdots \ \}. Note that a similar assertion holds for an increasing sequence.
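As a numerical illustration of the assertion in Exercise 14 (not a proof), consider the decreasing sequence a_n = 1 + \frac{1}{n}, which is bounded below and has \alpha = \inf\{ \ a_n \ | \ n = 1, 2, 3, \cdots \ \} = 1:

```python
import math

# Numerical illustration (not a proof) of the assertion in Exercise 14,
# using the decreasing sequence a_n = 1 + 1/n, which is bounded below and
# has alpha = inf{ a_n | n = 1, 2, 3, ... } = 1.

def a(n):
    return 1 + 1 / n

alpha = 1.0

# a_n is decreasing and bounded below by alpha
assert all(a(n) >= a(n + 1) > alpha for n in range(1, 1000))

# convergence to alpha: |a_n - alpha| < eps for all n >= N(eps),
# where N(eps) = floor(1/eps) + 1 serves as a witness
def N(eps):
    return math.floor(1 / eps) + 1

for eps in (0.5, 0.1, 0.003):
    assert all(abs(a(n) - alpha) < eps for n in range(N(eps), N(eps) + 1000))
```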

## Concluding Remarks

We have come to the end of our explanation of the basics of the epsilon-delta argument. The idea of the epsilon-delta argument is to formulate our naive intuition about the limits of functions as the problem of the existence of a function \delta = \delta(\varepsilon). Although this idea is quite simple, the epsilon-delta argument requires some technical training in using logic symbols, estimating inequalities, negating statements, and so on. We will be happy if this website helps calculus beginners to understand the basic underlying ideas and to overcome the technical barriers to using the epsilon-delta argument.