A Gem from Charles-Ange Laisant

In a 1905 article, Charles-Ange Laisant, a French politician and mathematician, introduced the following theorem:

Given a function f with inverse f^{-1}, then

\displaystyle \boxed{(y=f(x) \iff x = f^{-1}(y), F'(x) = f(x)) \implies \int f^{-1}(y)\;dy = y f^{-1}(y) - F(x) + C \quad\quad\quad(1)}

where C is an arbitrary real constant.

It can also be stated equivalently as:

Given a function f with inverse f^{-1}, then

\displaystyle \boxed{(y=f(x) \iff x=f^{-1}(y), F'(y)=f^{-1}(y)) \implies \int f(x)\;dx = x f(x)-F(y) +C \quad\quad\quad(2)}

where C is an arbitrary real constant.

Moreover, this theorem gives

\displaystyle \boxed{(y=f(x) \iff x = f^{-1}(y), F'(x) = f(x)) \implies \int\limits_{a}^{b} f^{-1}(y)\;dy = bf^{-1}(b)-af^{-1}(a) - (F(f^{-1}(b))-F(f^{-1}(a)))\quad\quad\quad(3)}

and

\displaystyle \boxed{(y=f(x) \iff x = f^{-1}(y), F'(y) = f^{-1}(y)) \implies \int\limits_{a}^{b}f(x)\;dx=bf(b)-af(a) - (F(f(b))-F(f(a)))\quad\quad\quad(4)}

Frequently, obtaining an antiderivative of f^{-1} is easier than finding one for f. In such cases, replacing integrals of f with integrals involving f^{-1} is advantageous.

For example, let f=\arcsin, so that f^{-1} = \sin. We have

\displaystyle F'(y) = f^{-1}(y) = \sin(y) \implies F(y) = \int \sin(y)\; dy = -\cos(y) + C_1.

As a result,

\displaystyle \int \arcsin(x)\;dx \overset{(2)}{=} x \arcsin(x) - F(y)

= x \arcsin(x) - (-\cos(y)+C_1)

y=f(x)=\arcsin(x) \implies \left(-\frac{\pi}{2} \le y \le \frac{\pi}{2}, x = \sin(y)\right)

\implies \cos(y) = +\sqrt{1-\sin^2(y)} = \sqrt{1-x^2}.

= x \arcsin(x) + \sqrt{1-x^2} + C, \quad C=-C_1.

That is,

\displaystyle\int \arcsin(x)\;dx = x \arcsin(x) + \sqrt{1-x^2}+C.
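As a quick numerical sanity check of this result (a Python sketch, not part of the original argument; `midpoint` is an illustrative composite midpoint-rule integrator):

```python
import math

def midpoint(f, a, b, n=100_000):
    # composite midpoint rule
    h = (b - a) / n
    return sum(f(a + (k + 0.5) * h) for k in range(n)) * h

# antiderivative from the formula above: x*arcsin(x) + sqrt(1 - x^2)
def F(x):
    return x * math.asin(x) + math.sqrt(1.0 - x * x)

numeric = midpoint(math.asin, 0.0, 0.5)
print(abs(numeric - (F(0.5) - F(0.0))) < 1e-8)  # True
```

The numerically computed \int_0^{0.5} \arcsin(x)\,dx agrees with F(0.5)-F(0.0) to high precision.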

Another illuminating example is as follows:

\displaystyle \int\limits_{0}^{1}\log(x)\;dx

= \displaystyle \lim\limits_{a \rightarrow 0^+}\int\limits_{a}^{1}\log(x)\;dx

f = \log \implies f^{-1} = \exp.

\overset{(4)}{=} \displaystyle\lim\limits_{a \rightarrow 0^+}\left(x\log(x)\bigg|_{a}^{1}-(e^{\log(1)} - e^{\log(a)})\right)

= \displaystyle\lim\limits_{a \rightarrow 0^+}\left(1\cdot\log(1)-a\cdot\log(a)-(1-a)\right)

= \lim\limits_{a \rightarrow 0^+}(-a\cdot\log(a) - (1-a)).

Since \boxed{\lim\limits_{a \rightarrow 0^+} a\cdot\log(a)=0}, \; \lim\limits_{a \rightarrow 0^+} 1-a = 1, we have

\displaystyle \int\limits_{0}^{1}\log(x)\;dx= -1.
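We can sample the truncated integrals numerically to watch them approach -1 (a Python sketch; `midpoint` is an illustrative composite midpoint-rule integrator):

```python
import math

def midpoint(f, a, b, n=200_000):
    # composite midpoint rule; never samples the singular endpoint x = 0
    h = (b - a) / n
    return sum(f(a + (k + 0.5) * h) for k in range(n)) * h

# shrink the lower limit toward 0+, mirroring the limit a -> 0+ above
for eps in (1e-2, 1e-4, 1e-6):
    print(eps, midpoint(math.log, eps, 1.0))
```

The printed values tend to -1 as the lower limit shrinks.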


Prove \displaystyle \boxed{(y=f(x) \iff x = f^{-1}(y), F'(x) = f(x)) \implies \int f^{-1}(y)\;dy = y f^{-1}(y) - F(x) + C \quad\quad\quad(1)}

\frac{d}{dy} (yf^{-1}(y)-F(x) + C)

= \left(\frac{d}{dy}y\right)f^{-1}(y) + y\frac{d}{dy}f^{-1}(y) -\frac{d}{dy}F(x) + \frac{d}{dy}C

\overset{y=f(x)}{=} f^{-1}(y) + f(x) \frac{d}{dy}f^{-1}(y) -\frac{d}{dx}F(x)\frac{d}{dy}x + 0

\overset{F'(x)=f(x), x=f^{-1}(y)}{=} f^{-1}(y) +f(x)\frac{d}{dy}f^{-1}(y) -f(x)\frac{d}{dy}f^{-1}(y)

= f^{-1}(y)

\implies yf^{-1}(y) - F(x) + C is an antiderivative of f^{-1}(y).

Therefore, \displaystyle \int f^{-1}(y)\;dy = y f^{-1}(y) - F(x) + C.


Prove \displaystyle \boxed{(y=f(x) \iff x = f^{-1}(y), F'(y) = f^{-1}(y)) \implies \int\limits_{a}^{b}f(x)\;dx=bf(b)-af(a) - (F(f(b))-F(f(a)))\quad\quad\quad(4)}

By (2),

\displaystyle\int\limits_{a}^{b}f(x)\;dx \overset{y=f(x)}{=} xf(x)-F(f(x))+C\bigg|_{a}^{b}

=bf(b)-F(f(b)) - (af(a)-F(f(a)))

= bf(b)-af(a) - (\underbrace{F(f(b))-F(f(a))}_{\displaystyle \int_{f(a)}^{f(b)}f^{-1}(y)\;dy}).


Prove \boxed{\lim\limits_{a \rightarrow 0^+} a\cdot\log(a)=0}:

\lim\limits_{a \rightarrow 0^+} a\cdot\log(a)=\lim\limits_{a \rightarrow 0^+} \frac{\log(a)}{\frac{1}{a}}

\overset{\lim\limits_{a \rightarrow 0^+}\log(a) = -\infty, \lim\limits_{a \rightarrow 0^+}\frac{1}{a} = \infty}{=} \lim\limits_{a \rightarrow 0^+} \frac{\frac{d}{da}(\log(a))}{\frac{d}{da}(\frac{1}{a})}= \lim\limits_{a \rightarrow 0^+} \frac{\frac{1}{a}}{-\frac{1}{a^2}}= \lim\limits_{a \rightarrow 0^+} -a= 0.
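The limit can also be observed numerically (an illustrative Python sampling, not a proof):

```python
import math

# sample a*log(a) as a -> 0+; the values shrink to 0 as proved above
for a in (1e-2, 1e-4, 1e-8, 1e-16):
    print(a, a * math.log(a))
```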


Exercise-1 Prove (2)

Exercise-2 Prove (3)

Exercise-3 What is \displaystyle \int \arctan(x)\;dx ?

(Hint: \displaystyle \int \tan(x)\;dx = \log(|\sec(x)|) + C, \; 1+\tan^2(x)=\sec^2(x))

Exercise-4 Explain \lim\limits_{a \rightarrow 0^+}\log(a) = -\infty and \lim\limits_{a \rightarrow 0^+}\frac{1}{a} = \infty.

Exercise-5 Show that (2) can be written as

\displaystyle \int f(x)\;dx = xf(x)-\int f^{-1}(y)\;dy + C.

And, prove

\displaystyle \int f^p(x)\;dx = xf^p(x)-p\int y^{p-1} f^{-1}(y)\;dy + C,  \quad p\in \mathbb{R}.

Exercise-6 Derive (1) (Hint: The foundation of a technique for evaluating definite integrals and Integration by Parts Done Right)

A Melody on Pi Day

Evaluate \displaystyle \int\limits_{-\infty}^{\infty} \frac{\sin(x)}{x}\;dx


This integral is known as the Dirichlet Integral, named in honor of the esteemed German mathematician Peter Gustav Lejeune Dirichlet. Since the integrand has no elementary antiderivative, evaluating it by the Newton-Leibniz rule reaches an impasse. However, Feynman’s integral technique offers a solution.

The even nature of the function \frac{\sin(x)}{x} implies that

\displaystyle\int\limits_{-\infty}^{\infty}\frac{\sin(x)}{x}\;dx = 2 \int\limits_{0}^{\infty}\frac{\sin(x)}{x}\;dx.\quad\quad\quad(*)

Let’s consider

I = \displaystyle\int\limits_{0}^{\infty} \frac{\sin(x)}{x}\;dx\quad\quad\quad(**)

and define

J(\beta) = \displaystyle\int\limits_{0}^{\infty} \frac{\sin(x)}{x}e^{-\beta x}\;dx ,\quad\beta\ge 0.\quad\quad\quad(***)

We can differentiate J(\beta) with respect to \beta:

\frac{dJ(\beta)}{d\beta} = \displaystyle\int\limits_{0}^{\infty}\frac{\partial}{\partial\beta} \left(\frac{\sin(x)}{x}e^{-\beta x}\right)\;dx= \displaystyle \int\limits_{0}^{\infty}\frac{\sin(x)}{x}\cdot(-x) e^{-\beta x}\;dx= \boxed{\displaystyle\int\limits_{0}^{\infty}-\sin(x)e^{-\beta x}\;dx=\frac{-1}{1+\beta^2}}.

Hence, we find

\frac{dJ(\beta)}{d\beta} = \frac{-1}{1+\beta^2}.

Integrating with respect to \beta from 0 to \infty :

\displaystyle\int\limits_{0}^{\infty} \frac{dJ(\beta)}{d\beta}\;d\beta = \int\limits_{0}^{\infty}\frac{-1}{1+\beta^2}\;d\beta

= -\arctan(\beta)\bigg|_{0}^{\infty}=\left(\lim\limits_{\beta\rightarrow \infty}-\arctan(\beta)\right)-\left(-\arctan(0)\right)=-\frac{\pi}{2}

gives

J(\infty)-J(0) = -\frac{\pi}{2}.

Since

J(0) = \displaystyle\int\limits_{0}^{\infty}\frac{\sin(x)}{x}e^{-0\cdot x}\;dx = \int\limits_{0}^{\infty}\frac{\sin(x)}{x}\;dx \overset{(**)}{=}I

and

\boxed{J(\infty) = 0},

we arrive at

0 - I = -\frac{\pi}{2} \implies I = \frac{\pi}{2}\overset{(**)}{\implies} \displaystyle \int\limits_{0}^{\infty} \frac{\sin(x)}{x}\;dx=\frac{\pi}{2}.

It follows that by (*):

\displaystyle\int\limits_{-\infty}^{\infty}\frac{\sin(x)}{x}\;dx = \pi.
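A truncated numerical approximation agrees (a Python sketch; `midpoint` is an illustrative composite midpoint-rule integrator, and the cutoff N is an assumption of this check):

```python
import math

def midpoint(f, a, b, n=400_000):
    # composite midpoint rule
    h = (b - a) / n
    return sum(f(a + (k + 0.5) * h) for k in range(n)) * h

# truncate the improper integral at a large cutoff N;
# the neglected tail of sin(x)/x beyond N is O(1/N)
N = 1000.0
approx = 2.0 * midpoint(lambda x: math.sin(x) / x, 0.0, N)
print(approx)  # approaches pi as N grows
```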


Show that \boxed{J(\infty) = 0}:

From the inequality

\forall x > 0, \left|\frac{\sin(x)}{x}\right| < 1

(see (3) in A Proof without Calculus),

we deduce that

\displaystyle \left|\int\limits_{0}^{\infty}\frac{\sin(x)}{x}e^{-\beta x}\;dx\right| \le \int\limits_{0}^{\infty}\left|\frac{\sin(x)}{x}\right|e^{-\beta x}\;dx < \int\limits_{0}^{\infty}e^{-\beta x}\; dx = \frac{-e^{-\beta x}}{\beta} \bigg|_{0}^{\infty} = \left(\lim\limits_{x \rightarrow \infty} \frac{-e^{-\beta x}}{\beta}\right)-\frac{-e^{-\beta\cdot 0}}{\beta}= \frac{1}{\beta}.

That is,

\displaystyle 0 \le \left|\int\limits_{0}^{\infty}\frac{\sin(x)}{x}e^{-\beta x}\;dx\right| < \frac{1}{\beta}.

By the Sandwich Theorem for Functions 2,

\lim\limits_{\beta \rightarrow \infty}\frac{1}{\beta } = 0 \implies \displaystyle\lim\limits_{\beta \rightarrow }\int\limits_{0}^{\infty}\frac{\sin(x)}{x}e^{-\beta x}\;dx =0.\quad\quad\quad(\star)

Consequently,

J(\infty) = \lim\limits_{\beta \rightarrow \infty} J(\beta) \overset{(***)}{=}\displaystyle\lim\limits_{\beta \rightarrow \infty}\int\limits_{0}^{\infty}\frac{\sin(x)}{x}e^{-\beta x}\;dx\overset{(\star)}{=} 0.


Show that \displaystyle \boxed{\int\limits_{0}^{\infty} -\sin(x)e^{-\beta x}\;dx=\frac{-1}{1+\beta^2}}:

\displaystyle \int\limits_{0}^{\infty} -\sin(x)e^{-\beta x}\;dx

= \displaystyle\int\limits_{0}^{\infty} (\cos(x))'e^{-\beta x}\;dx

= \displaystyle \cos(x)e^{-\beta x}\bigg|_{0}^{\infty} - \int\limits_{0}^{\infty}e^{-\beta x}\cdot (-\beta) \cdot \cos(x)\; dx

= \displaystyle 0 - 1 + \beta\int\limits_{0}^{\infty} e^{-\beta x}\cos(x)\;dx

= \displaystyle -1 + \beta\left(\sin(x)e^{-\beta x}\bigg|_{0}^{\infty}-\beta\int\limits_{0}^{\infty}-\sin(x)e^{-\beta x}\; dx\right)

=\displaystyle  -1 -\beta^2\int\limits_{0}^{\infty}-\sin(x)e^{-\beta x}\; dx.

That is,

\displaystyle\underline{\int\limits_{0}^{\infty}-\sin(x)e^{-\beta x} \;dx} = -1 -  \beta^2\underline{\int\limits_{0}^{\infty}-\sin(x)e^{-\beta x} \;dx}.

Therefore,

\displaystyle (1+\beta^2)\int\limits_{0}^{\infty}-\sin(x)e^{-\beta x}\;dx = -1 \implies \int\limits_{0}^{\infty}-\sin(x)e^{-\beta x}\;dx =\frac{-1}{1+\beta^2}.
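A numerical spot check of this closed form at one value of \beta (a Python sketch; `midpoint` is an illustrative composite midpoint-rule integrator, and the cutoff 50 is an assumption of this check):

```python
import math

def midpoint(f, a, b, n=100_000):
    # composite midpoint rule
    h = (b - a) / n
    return sum(f(a + (k + 0.5) * h) for k in range(n)) * h

beta = 1.0
# the factor e^{-beta x} makes a cutoff of 50 effectively exact here
approx = midpoint(lambda x: -math.sin(x) * math.exp(-beta * x), 0.0, 50.0)
print(abs(approx - (-1.0 / (1.0 + beta ** 2))) < 1e-6)  # True
```

For beta = 1 the exact value is -1/2, matched by the quadrature.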


Exercise-1 Evaluate \displaystyle \int\limits_{0}^{\infty} -\sin(x)e^{-\beta x}\;dx by the schematic method (hint: Schematic Integration by Parts)

Harnessing Feynman’s Integral Technique

Show that

\displaystyle\int_{-\infty}^{\infty}e^{-x^2}\;dx = \sqrt{\pi}


This integral is renowned in mathematics as the Gaussian integral. Its evaluation poses a challenge due to the absence of an elementary antiderivative expressed in standard functions. Conventionally, one method involves “squaring the integral” and subsequently interpreting the resulting double integral in polar coordinates. However, an alternative approach, which we present here, employs Feynman’s integral technique.

The even nature of the function e^{-x^2} implies that

\displaystyle\int\limits_{-\infty}^{\infty}e^{-x^2}\;dx = 2 \int\limits_{0}^{\infty} e^{-x^2}\;dx.\quad\quad\quad(*)

Let’s consider

I = \displaystyle\int\limits_{0}^{\infty} e^{-x^2}\;dx\quad\quad\quad(**)

and define

J(\beta) = \displaystyle\int\limits_{0}^{\infty} \frac{e^{-\beta^2(1+x^2)}}{1+x^2}\;dx ,\quad\beta\ge 0.

We can differentiate J(\beta) with respect to \beta:

\frac{dJ(\beta)}{d\beta} = \displaystyle\int\limits_{0}^{\infty}\frac{\partial}{\partial\beta} \left(\frac{e^{-\beta^2(1+x^2)}}{1+x^2}\right)\;dx

= \displaystyle \int\limits_{0}^{\infty}\frac{e^{-\beta^2(1+x^2)}}{1+x^2}\cdot\left(-2\beta(1+x^2)\right)\;dx

= -2\beta \displaystyle\int\limits_{0}^{\infty}e^{-\beta^2}\cdot e^{-\beta^2x^2}\;dx

=-2\beta e^{-\beta^2}\displaystyle\int\limits_{0}^{\infty}e^{-\beta^2x^2}\;dx.

Given the differentiable function \phi(t) = \frac{t}{\beta} on [0, \infty) with derivative \phi'(t) = \frac{1}{\beta}, the substitution x = \phi(t) turns the expression into

-2\beta e^{-\beta^2} \displaystyle\int\limits_{0}^{\infty}e^{-\beta^2(\frac{t}{\beta})^2}\frac{1}{\beta}\;dt = -2e^{-\beta^2}\displaystyle\int\limits_{0}^{\infty}e^{-t^2}\;dt\overset{(**)}{=}-2e^{-\beta^2}\cdot I.

Hence, we find

\frac{dJ(\beta)}{d\beta} = -2e^{-\beta^2}I.

Integrating with respect to \beta from 0 to \infty :

\displaystyle\int\limits_{0}^{\infty} \frac{dJ(\beta)}{d\beta}\;d\beta = \int\limits_{0}^{\infty}-2e^{-\beta^2}I\;d\beta = -2 I \underbrace{\int\limits_{0}^{\infty} e^{-\beta^2}\;d\beta}_{I} = -2I^2

gives

J(\infty)-J(0) = -2I^2.

Since

J(0) = \displaystyle\int\limits_{0}^{\infty}\frac{e^{0\cdot (1+x^2)}}{1+x^2}\;dx = \int\limits_{0}^{\infty}\frac{1}{1+x^2}\;dx=\arctan(x)\bigg|_{0}^{\infty} = \frac{\pi}{2}

and

\boxed{J(\infty) = 0},

we arrive at

0 -\frac{\pi}{2} = -2I^2 \implies I^2 = \frac{\pi}{4} \implies I = \frac{\sqrt{\pi}}{2}.

It follows that by (*):

\displaystyle\int_{-\infty}^{\infty}e^{-x^2}\;dx = \sqrt{\pi}.
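As a numerical sanity check (a Python sketch; `midpoint` is an illustrative composite midpoint-rule integrator, and the truncation at |x| = 10 is an assumption of this check):

```python
import math

def midpoint(f, a, b, n=200_000):
    # composite midpoint rule
    h = (b - a) / n
    return sum(f(a + (k + 0.5) * h) for k in range(n)) * h

# e^{-x^2} is negligible beyond |x| = 10, so [-10, 10] suffices
approx = midpoint(lambda x: math.exp(-x * x), -10.0, 10.0)
print(abs(approx - math.sqrt(math.pi)) < 1e-9)  # True
```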


Show that \boxed{J(\infty) = 0}:

From the inequality

\displaystyle 0 < \int\limits_{0}^{\infty}\frac{e^{-\beta^2x^2}}{1+x^2}\;dx \le \int\limits_{0}^{\infty}\frac{1}{1+x^2}\;dx = \arctan(x)\bigg|_{0}^{\infty} = \frac{\pi}{2},

which holds for every \beta, we deduce that

\displaystyle 0 < J(\beta) = e^{-\beta^2}\cdot\int\limits_{0}^{\infty}\frac{e^{-\beta^2x^2}}{1+x^2}\;dx \le \frac{\pi}{2}\cdot e^{-\beta^2}.

As a result, by the Sandwich Theorem,

\displaystyle \lim\limits_{\beta \rightarrow \infty}\frac{\pi}{2}\cdot e^{-\beta^2} = 0 \implies J(\infty) = \lim\limits_{\beta\rightarrow \infty} J(\beta) = 0.


Exercise-1 Show that \lim\limits_{\beta \rightarrow \infty} e^{-\beta^2} = 0.

The foundation of a technique for evaluating definite integrals

Given f(x) is continuous on [a, b] and \phi(t) satisfies the following conditions:

[1] \forall t \in [\alpha, \beta], \phi(t) \in [a, b]

[2] \phi(\alpha)=a, \; \phi(\beta)=b

[3] \phi(t) has continuous derivative on [\alpha, \beta]

Prove:

\displaystyle\int\limits_{a}^{b}f(x)\;dx =\displaystyle\int\limits_{\alpha}^{\beta}f(\phi(t))\phi'(t)\;dt.


The given premise

f(x) is a continuous function on [a, b]\quad\quad\quad(1)

ensures the existence of the definite integral \displaystyle \int\limits_{a}^{b}f(x)\;dx and of an antiderivative of f(x).

Denoting the antiderivative of f(x) as F(x), we obtain

\displaystyle\int\limits_{a}^{b} f(x) \;dx = F(b)-F(a).\quad\quad\quad(2)

We also deduce from [3] that

\phi(t) is continuous on [\alpha, \beta].\quad\quad\quad(3)

Combining [3], [1] and (1),

f(\phi(t)) is continuous on [\alpha, \beta].\quad\quad\quad(4)

Additionally, as per [3],

\phi'(t) is continuous on [\alpha, \beta].\quad\quad\quad(5)

Together, (4) and (5) give

f(\phi(t))\phi'(t) is continuous on [\alpha, \beta].

Consequently, \displaystyle\int\limits_{\alpha}^{\beta}f(\phi(t))\phi'(t)\;dt exists as well.

Now let’s examine F(u), where u=\phi(t). We have

\frac{dF(u)}{dt} = F'(u) \frac{du}{dt} = F'(u)\phi'(t) = f(u)\phi'(t) = f(\phi(t))\phi'(t)

\implies F(u) = F(\phi(t)) is an antiderivative of f(\phi(t))\phi'(t)

\implies \displaystyle\int\limits_{\alpha}^{\beta}f(\phi(t))\phi'(t)\;dt = F(\phi(t))\bigg|_{\alpha}^{\beta}

= F(\phi(\beta))-F(\phi(\alpha)) \overset{[2]}{=} F(b)-F(a)\overset{(2)}{=}\displaystyle\int\limits_{a}^{b}f(x)\;dx.
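The theorem can be illustrated numerically (a Python sketch with hypothetical choices f(x)=\cos(x) and \phi(t)=t^2, which are not from the original text; `midpoint` is an illustrative composite midpoint-rule integrator):

```python
import math

def midpoint(f, a, b, n=100_000):
    # composite midpoint rule
    h = (b - a) / n
    return sum(f(a + (k + 0.5) * h) for k in range(n)) * h

# illustrative choices: f(x) = cos(x) on [0, 1], phi(t) = t^2 on [0, 1];
# phi(0) = 0, phi(1) = 1, phi'(t) = 2t, so conditions [1]-[3] hold
lhs = midpoint(math.cos, 0.0, 1.0)
rhs = midpoint(lambda t: math.cos(t * t) * 2.0 * t, 0.0, 1.0)
print(abs(lhs - rhs) < 1e-9)  # True
```

Both quadratures approximate \sin(1), as the theorem predicts.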

This theorem serves as the foundation of a technique widely employed in evaluating definite integrals. By selecting a suitable substitution, the definite integral is transformed into a form that is more manageable or matches a known solution. This method proves particularly valuable for evaluating definite integrals involving complex functions or expressions.

For example, to evaluate the integral

\displaystyle\int\limits_{0}^{a}\log(x+\sqrt{x^2+a^2})\;dx \quad (a>0),

we choose the following substitution for variable x:

\phi(t) = at \implies \phi'(t) =a.

Since \phi(0) = 0 and \phi(1) = a, the limits x=0 and x=a correspond to t=0 and t=1, so we can rewrite the integral as:

\displaystyle\int\limits_{0}^{1}\log\left(at + \sqrt{(at)^2+a^2}\right)\cdot a \; dt

and proceed to evaluate the new integral as follows:

\displaystyle\int\limits_{0}^{1}\log\left(at + |a|\sqrt{t^2+1}\right)\cdot a \; dt

\overset{a>0}{=} \displaystyle\int\limits_{0}^{1}\log\left(at + a\sqrt{t^2+1}\right)\cdot a \; dt

= \displaystyle\int\limits_{0}^{1}a\log\left(a\cdot(t+\sqrt{t^2+1})\right)\; dt

= \displaystyle\int\limits_{0}^{1}a\left(\log(a)+\log(t+\sqrt{t^2+1})\right)\; dt

= \displaystyle\int\limits_{0}^{1}a\log(a)\;dt+a\int\limits_{0}^{1}\log(t+\sqrt{t^2+1})\; dt

= \displaystyle a\log(a) t\bigg|_{0}^{1} + a\int\limits_{0}^{1}\mathrm{arcsinh}(t)\;dt

= a\log(a) + a\cdot\left(t\cdot\mathrm{arcsinh}(t) - \sqrt{t^2+1}\right)\bigg|_{0}^{1}

= a\log(a) + a\cdot\left(\mathrm{arcsinh}(1)-\sqrt{2} + 1\right)

= a\log(a) + a\cdot\mathrm{arcsinh}(1) -\sqrt{2}a + a.
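The closed form can be spot-checked numerically at a concrete value of a (a Python sketch; `midpoint` is an illustrative composite midpoint-rule integrator, and a = 2 is an arbitrary test value):

```python
import math

def midpoint(f, a, b, n=100_000):
    # composite midpoint rule
    h = (b - a) / n
    return sum(f(a + (k + 0.5) * h) for k in range(n)) * h

a = 2.0
numeric = midpoint(lambda x: math.log(x + math.sqrt(x * x + a * a)), 0.0, a)
closed = a * math.log(a) + a * math.asinh(1.0) - math.sqrt(2.0) * a + a
print(abs(numeric - closed) < 1e-8)  # True
```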


Let’s verify the result:

Suppose G(x) = \displaystyle\int \log(x+\sqrt{x^2+a^2})\; dx. Then

G(x) = \displaystyle \int x' \cdot\log(x+\sqrt{x^2+a^2})\;dx

= \displaystyle x\log(x+\sqrt{x^2+a^2})-\int x\cdot \frac{1+\frac{1}{2}\frac{2x}{\sqrt{x^2+a^2}}}{x+\sqrt{x^2+a^2}}\;dx

= \displaystyle x\log(x+\sqrt{x^2+a^2})-\int x \cdot \frac{(1+\frac{x}{\sqrt{x^2+a^2}})(\sqrt{x^2+a^2}-x)}{a^2}\;dx

= \displaystyle  x\log(x+\sqrt{x^2+a^2}) -\int x \cdot\frac{\sqrt{x^2+a^2}+x-x-\frac{x^2}{\sqrt{x^2+a^2}}}{a^2}\;dx

= \displaystyle  x\log(x+\sqrt{x^2+a^2}) - \int x\cdot\frac{\sqrt{x^2+a^2}-\frac{x^2}{\sqrt{x^2+a^2}}}{a^2}\;dx

= \displaystyle x\log(x+\sqrt{x^2+a^2}) - \int x\cdot\frac{\frac{x^2+a^2-x^2}{\sqrt{x^2+a^2}}}{a^2}\;dx

= \displaystyle x\log(x+\sqrt{x^2+a^2}) -\int \frac{x}{\sqrt{x^2+a^2}}\; dx

= x\log(x+\sqrt{x^2+a^2}) - \sqrt{x^2+a^2}.

By Newton-Leibniz rule,

\displaystyle\int\limits_{0}^{a} \log(x+\sqrt{x^2+a^2}) \; dx=G(a) - G(0)

= \underbrace{a\log(a+\sqrt{a^2+a^2})-\sqrt{a^2+a^2}}_{G(a)}-\underbrace{(0 - \sqrt{0+a^2})}_{G(0)}

=  a\log(a\cdot(1 + \sqrt{1+1}))-\sqrt{2}a-(0 - a)

= a\log(a) + a\cdot\log(1+\sqrt{1 + 1}) - \sqrt{2}a + a

= a\log(a) + a\cdot\mathrm{arcsinh(1)} -\sqrt{2}a + a.


For grins, we also verify as follows:

I(\beta) = \displaystyle\int\limits_{0}^{a}\log(x+\sqrt{x^2+\beta^2})\;dx, \quad\beta>0

\implies \displaystyle\frac{dI(\beta)}{d\beta} = \int\limits_{0}^{a} \frac{\partial}{\partial \beta}\log(x + \sqrt{x^2+\beta^2})\;dx

= \displaystyle\int\limits_{0}^{a}\frac{1}{x+\sqrt{x^2+\beta^2}}\cdot\frac{1}{2\sqrt{x^2+\beta^2}}\cdot 2\beta\; dx

= \displaystyle\int\limits_{0}^{a}\frac{\sqrt{x^2+\beta^2}-x}{\beta^2}\cdot\frac{\beta}{\sqrt{x^2+\beta^2}}\;dx

= \displaystyle\int\limits_{0}^{a}\frac{\sqrt{x^2+\beta^2}-x}{\beta}\cdot\frac{1}{\sqrt{x^2+\beta^2}}\;dx

= \displaystyle\int\limits_{0}^{a}\frac{1}{\beta}(1-\frac{x}{\sqrt{x^2+\beta^2}})\;dx

= \frac{1}{\beta}(x-\sqrt{x^2+\beta^2})\bigg|_{0}^{a}

= \frac{1}{\beta}(a-\sqrt{a^2+\beta^2}+\beta).

That is,

\frac{dI(\beta)}{d\beta} = \frac{a}{\beta}+1-\frac{\sqrt{a^2+\beta^2}}{\beta}.

Integrate it from 1 to \beta,

\displaystyle\int\limits_{1}^{\beta} \frac{dI(\beta)}{d\beta} \;d\beta = \int\limits_{1}^{\beta}\frac{a}{\beta} \;d\beta + \int\limits_{1}^{\beta}1\;d\beta - \int\limits_{1}^{\beta}\frac{\sqrt{\beta^2+a^2}}{\beta}\;d\beta

\implies \displaystyle I(\beta) - I(1) = a \log(\beta)\bigg|_{1}^{\beta} + \beta\bigg|_{1}^{\beta} -\int\limits_{1}^{\beta}\frac{\sqrt{\beta^2+a^2}}{\beta}\;d\beta

\implies \displaystyle I(\beta) - I(1) = a\log(\beta) + \beta -1 - \int\limits_{1}^{\beta}\frac{\sqrt{\beta^2+a^2}}{\beta}\;d\beta

\overset{(*), (**)}{\implies} \displaystyle I(\beta) - \underbrace{(a \cdot \mathrm{arcsinh}(a) - \sqrt{a^2+1}+1)}_{I(1)=\int\limits_{0}^{a}\log(x+\sqrt{x^2+1})\;dx} =

a\log(\beta) + \beta -1 -\underbrace{\left(\sqrt{\beta^2+a^2} - \sqrt{a^2+1} - a \cdot \mathrm{arcsinh}(\frac{a}{\beta})+a\cdot \mathrm{arcsinh}(a)\right)}_{\int\limits_{1}^{\beta}\frac{\sqrt{\beta^2+a^2}}{\beta}\;d\beta}

\implies I(\beta) = a\log(\beta) + \beta -\sqrt{\beta^2+a^2} + a\cdot \mathrm{arcsinh}(\frac{a}{\beta}).

Letting \beta = a, we obtain

\displaystyle\int\limits_{0}^{a}\log(x+\sqrt{x^2+a^2})\; dx = a\log(a) + a -\sqrt{2}a+a\cdot \mathrm{arcsinh}(1) = a\log(a)+a\cdot \mathrm{arcsinh}(1) -\sqrt{2}a + a.


From “Deriving Two Inverse Functions“, we see that \log(x+\sqrt{x^2+1}) = \mathrm{arcsinh}(x). Therefore,

I(1) = \displaystyle \int\limits_{0}^{a}\log(x + \sqrt{x^2+1}) \; dx

= \displaystyle\int\limits_{0}^{a} \mathrm{arcsinh}(x)\;dx

= \displaystyle\int\limits_{0}^{a} x'\cdot \mathrm{arcsinh}(x)\;dx

= \displaystyle x\cdot\mathrm{arcsinh}(x)\bigg|_{0}^{a} -\int\limits_{0}^{a}\frac{x}{\sqrt{x^2+1}}\;dx

= a\cdot\mathrm{arcsinh}(a) - \sqrt{x^2+1}\bigg|_{0}^{a}

= a\cdot \mathrm{arcsinh}(a) - \sqrt{a^2+1}+1.\quad\quad\quad(*)

Now, letting \phi(t) = a\cdot t, we have

1 \le at \le \beta \implies \frac{1}{a} \le t \le \frac{\beta}{a}, \phi'(t) = a and \displaystyle \int\limits_{1}^{\beta}\frac{\sqrt{\beta^2+a^2}}{\beta}\;d\beta becomes

\displaystyle \int\limits_{\frac{1}{a}}^{\frac{\beta}{a}}a\frac{\sqrt{t^2+1}}{t}\;dt

= \displaystyle a\int\limits_{\frac{1}{a}}^{\frac{\beta}{a}}\frac{\sqrt{t^2+1}}{t}\;dt

= a\left(\sqrt{1+t^2}-\mathrm{arcsinh}(\frac{1}{t})\right)\bigg|_{\frac{1}{a}}^{\frac{\beta}{a}}

(see “Integral: I vs. CAS“)

= \sqrt{\beta^2+a^2} - \sqrt{a^2+1} - a\cdot \mathrm{arcsinh}(\frac{a}{\beta}) + a\cdot\mathrm{arcsinh}(a).\quad\quad\quad(**)
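This antiderivative step can also be spot-checked numerically (a Python sketch; `midpoint` is an illustrative composite midpoint-rule integrator, the values a = 2 and \beta = 3 are arbitrary, and t replaces the dummy variable of the integral):

```python
import math

def midpoint(f, a, b, n=100_000):
    # composite midpoint rule
    h = (b - a) / n
    return sum(f(a + (k + 0.5) * h) for k in range(n)) * h

a, beta = 2.0, 3.0
# the integral of sqrt(t^2 + a^2)/t from 1 to beta
numeric = midpoint(lambda t: math.sqrt(t * t + a * a) / t, 1.0, beta)
closed = (math.sqrt(beta * beta + a * a) - math.sqrt(a * a + 1.0)
          - a * math.asinh(a / beta) + a * math.asinh(a))
print(abs(numeric - closed) < 1e-8)  # True
```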


Exercise-1 \displaystyle\int\limits_{0}^{1} \sqrt{1-x^2} \;dx

Exercise-2 \displaystyle\int\limits_{0}^{1} \frac{\log(1+x)}{1+x^2}\;dx (Hint: Consider \phi(t) = \frac{1-t}{1+t})