A CAS-Aided Analysis of Rocket Flight Performance Optimization

Research on rocket flight performance has shown that a typical single-stage rocket cannot serve as the carrier vehicle for launching satellites into orbit. Instead, multi-stage rockets are used in practice, with two-stage rockets being the most common. Jettisoning spent stages decreases the mass of the remaining rocket, allowing it to accelerate rapidly until it reaches its desired velocity and altitude.

Optimizing flight performance is a non-trivial problem in the field of rocketry. This post examines the flight performance of a two-stage rocket through rigorous mathematical analysis. A Computer Algebra System (CAS) is employed to carry out the symbolic computations in the process; CAS has proven to be an efficient tool for carrying out laborious mathematical calculations for decades. This post reports on the process and the results of using Omega CAS Explorer [2], a Maxima-based CAS, to solve this problem.

A two-stage rocket consists of a payload P propelled by two stages of masses m_1 (first stage) and m_2 (second stage), both with structure factor 1-e. The exhaust speed of the first stage is c_1, and that of the second stage is c_2. The initial total mass m_1 + m_2 is fixed, and the ratio b = \frac{P}{m_1+m_2} is small.

Based on Tsiolkovsky’s equation, we derived the multi-stage rocket flight equation in [1]. For a two-stage rocket, the final velocity can be calculated from the following:

v = -c_1\log(1-\frac{em_1}{m_1+m_2+P}) - c_2\log(1-\frac{em_2}{m_2+P})\quad\quad\quad(1)

Let a = \frac{m_2}{m_1+m_2}, so that (1) becomes

v=-c_1\log(1-\frac{e(1-a)}{1+b}) - c_2\log(1-\frac{ea}{a+b})\quad\quad\quad(2)

where 0<a<1, b>0, 0<e<1, c_1>0, c_2>0.

We seek an appropriate value of a that maximizes v.

Considering v as a function of a, we compute its derivative \frac{d}{da}v (see Fig. 1)

Fig. 1
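For readers without access to the figures, the CAS steps can be reproduced with a few lines of Maxima. Here is a minimal sketch of the computation behind Fig. 1 (the variable names c1, c2 are ours; the original session may differ):

/* final velocity (2) as a function of a */
v : -c1*log(1 - e*(1-a)/(1+b)) - c2*log(1 - e*a/(a+b))$
/* derivative with respect to a, as a single rational expression */
dv : ratsimp(diff(v, a));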

We have \frac{d}{da}v = -\frac{e(abc_2e+abc_1e+a^2c_1e+b^2c_2+bc_2-bc_2e-b^2c_1-2abc_1-a^2c_1)}{(b+a)(ae-b-a)(ae-e+b+1)}.

That is, \frac{d}{da}v = \frac{e(abc_2e+abc_1e+a^2c_1e+b^2c_2+bc_2-bc_2e-b^2c_1-2abc_1-a^2c_1)}{(b+a)(b+a(1-e))(ae+b+1-e)}.

Fig. 2

As shown in Fig. 2, \frac{d}{da}v can be expressed as

\frac{d}{da}v = \frac{e(A_2a^2+B_2a+C_2)}{(b+a)(b+a(1-e))(ae+b+1-e)}\quad\quad\quad(3)

Notice that A_2 = c_1(e-1) = -c_1(1-e) < 0.

Solving \frac{d}{da}v = 0 for a gives two solutions a_1, a_2 (see Fig. 3)

Fig. 3
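A sketch of the corresponding Maxima step, continuing the session above (the order in which Maxima lists the two roots is not guaranteed):

/* set the numerator of dv/da to zero and solve the quadratic in a */
solve(num(dv) = 0, a);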

We rewrite the expression under the square root in a_1 and a_2 as a quadratic function of e: Ae^2+Be+C, and compute B^2-4AC (see Fig. 4)

Fig. 4
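In Maxima, this check might read as follows, with A_2, B_2, C_2 read off from the numerator of \frac{d}{da}v above (our transcription):

/* coefficients of the quadratic numerator in a, per (3) */
A2 : c1*(e-1)$
B2 : b*(c2*e + c1*e - 2*c1)$
C2 : b^2*c2 + b*c2 - b*c2*e - b^2*c1$
/* expression under the square root, viewed as a quadratic in e */
disc : expand(B2^2 - 4*A2*C2)$
A : coeff(disc, e, 2)$
B : coeff(disc, e, 1)$
C : coeff(disc, e, 0)$
factor(B^2 - 4*A*C);  /* -16*b^3*(b+1)*c1*c2*(c1-c2)^2, up to term order */

The factored form makes the sign analysis below immediate.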

If c_1 \ne c_2, then B^2-4AC< 0, so Ae^2+Be+C has no real zeros; since A>0, it is positive. When c_1=c_2, B^2-4AC=0, so the only (double) zero of the function Ax^2+Bx+C is x = \frac{-B}{2A} = \frac{(8b^2+8b)c_1^2}{2(b^2c_1^2+(2b^2+4b)c_1^2+b^2c_1^2)} = 1; since A>0 and 0<e<1, Ae^2+Be+C is still positive.

Since the expression under the square root is positive, both a_1 and a_2 are real-valued; moreover, a_1-a_2 >0 (see Fig. 5), i.e., a_1 > a_2.

Fig. 5
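The Maxima counterpart of Fig. 5, continuing the session above (we write both roots over the positive denominator 2c_1-2c_1e):

/* the two roots of the quadratic numerator, larger one first */
a1 : (B2 + sqrt(disc))/(2*c1 - 2*c1*e)$
a2 : (B2 - sqrt(disc))/(2*c1 - 2*c1*e)$
ratsimp(a1 - a2);  /* reduces to sqrt(disc)/(c1 - c1*e), positive for 0 < e < 1 */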

From (3), where A_2 < 0 and the denominator (b+a)(b+a(1-e))(ae+b+1-e) is positive for all a \ge 0, we deduce the following:

For all a \ge 0, if a>a_1 then \frac{d}{da}v(a) < 0 \quad\quad\quad(\star)

For all a \ge 0, if a_2<a<a_1 then \frac{d}{da}v(a) >0 \quad\quad\quad(\star\star)

For all a \ge 0, if a<a_2 then \frac{d}{da}v(a) < 0 \quad\quad\quad(\star\star\star)

Fig. 6

Moreover, from Fig. 6,

\frac{d}{da}v(0)\cdot \frac{d}{da}v(1) = -\frac{e^2(c_1e+bc_2-bc_1-c_1)(c_2e-bc_2-c_2+bc_1)}{b(b+1)(e-b-1)^2}=-\frac{e^2(c_1e+bc_2-bc_1-c_1)(c_2e-bc_2-c_2+bc_1)}{b(b+1)(b+1-e)^2}.
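The product in Fig. 6 can be reproduced as follows, continuing the session above:

/* evaluate dv/da at the endpoints a = 0 and a = 1, then factor the product */
factor(subst(a = 0, dv) * subst(a = 1, dv));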

The expression in the numerator of \frac{d}{da}v(0)\cdot \frac{d}{da}v(1), namely

(c_1e + bc_2-bc_1-c_1)(c_2e-bc_2-c_2 +bc_1)

= (c_1(e-1-b) + bc_2)(c_2(e-1-b)+bc_1)

=c_1c_2(e-1-b)^2+b^2c_1c_2+b(c_1^2+c_2^2)(e-1-b)

tends to c_1c_2(e-1)^2 > 0 as b \rightarrow 0: the terms b^2c_1c_2 and b(c_1^2+c_2^2)(e-1-b) vanish with b, while c_1c_2(e-1-b)^2 \rightarrow c_1c_2(e-1)^2. The expression is therefore positive for the small values of b under consideration.

Since the denominator b(b+1)(b+1-e)^2 is positive and this product enters with a leading minus sign, it follows that

\frac{d}{da}v(0) \cdot \frac{d}{da}v(1) < 0\quad\quad\quad(4)

The implication, by the intermediate value theorem, is that \frac{d}{da}v has at least one zero between 0 and 1.

However, if both a_1 and a_2, the two zeros of \frac{d}{da}v, were between 0 and 1, then by (\star) and (\star\star\star) both \frac{d}{da}v(0) and \frac{d}{da}v(1) would be negative, so \frac{d}{da}v(0)\cdot \frac{d}{da}v(1) would be positive, contradicting (4). Therefore, \frac{d}{da}v has exactly one zero between 0 and 1.

We will proceed to show that the zero lying between 0 and 1 is a_1.

There are two cases to consider.

Case 1 (c_1 \le c_2): \frac{d}{da} v(0)=\frac{e(bc_2(1-e)+b^2(c_2-c_1))}{b^2(b+1-e)} >0, since b>0, c_2>0, 0<e<1, and c_2-c_1\ge 0. If a_2 were positive, then (\star\star\star) would give \frac{d}{da}v(0) < 0, a contradiction. Therefore, a_2 must not be positive.

Case 2 (c_1 > c_2): The denominator of a_2, 2c_1(e-1), is negative since c_1 > 0 and e < 1. The terms not under the square root in the numerator of a_2, (-bc_2-bc_1)e + 2bc_1, can be expressed as bc_1(1-e) + b(c_1-c_2e), which is positive: c_1 > c_2 > 0 and 0<e<1 imply c_1-c_2e > c_1 - c_1e = c_1(1-e) > 0. Adding the nonnegative square-root term keeps the entire numerator positive, and a positive numerator over a negative denominator gives a_2 < 0.

The fact that only a_1 lies between 0 and 1, together with (\star) and (\star\star), proves that a_1 is where the global maximum of v on (0,1) occurs.

We attempt to simplify a_1 by computing its Taylor series expansion about b=0 (see Fig. 7)

Fig. 7
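The step behind Fig. 7 presumably amounts to a single call, which we reconstruct as:

/* naive first-order expansion of a1 about b = 0; per the text below,
   the sign of the sqrt(b) term comes out wrong */
taylor(a1, b, 0, 1);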

The result -\frac{b((c_2+c_1)e-2c_1)}{2c_1e-2c_1} - \frac{\sqrt{b}\sqrt{c_2}}{\sqrt{c_1}} produced by the CAS can be written as -\sqrt{\frac{bc_2}{c_1}} + O(b). However, it is incorrect: \lim\limits_{b\rightarrow 0^+} \frac{-\sqrt{\frac{bc_2}{c_1}}+O(b)}{\sqrt{b}} = -\sqrt{\frac{c_2}{c_1}} < 0 would mean that a_1 is a negative quantity when b is small, contradicting 0<a_1<1.

To obtain a correct Taylor series expansion for a_1, we first rewrite a_1 as \sqrt{b^2D+bE}+bF, where

D=\frac{c_2^2e^2+2c_1c_2e^2+c_1^2e^2-8c_1c_2e+4c_1c_2}{(2c_1-2c_1e)^2}

E=\frac{4c_1c_2e^2-8c_1c_2e+4c_1c_2}{(2c_1-2c_1e)^2}=\frac{c_2}{c_1}

F=\frac{c_1e+c_2e-2c_1}{2c_1-2c_1e}

Its first-order Taylor series is then computed (see Fig. 8)

Fig. 8
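A sketch of the session behind Fig. 8, with D, E, F entered as just defined:

D : (c2^2*e^2 + 2*c1*c2*e^2 + c1^2*e^2 - 8*c1*c2*e + 4*c1*c2)/(2*c1 - 2*c1*e)^2$
E : c2/c1$
F : (c1*e + c2*e - 2*c1)/(2*c1 - 2*c1*e)$
/* first-order expansion of the rewritten root about b = 0 */
taylor(sqrt(b^2*D + b*E) + b*F, b, 0, 1);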

The first term of the result can be written as O(b). Bringing the value of E into the result, we have:

a_1= \sqrt{bE} + O(b) = \sqrt{\frac{bc_2}{c_1}} + O(b)\quad\quad\quad(5)

To compute v(a_1) from (5), we substitute \sqrt{Db^2+Eb} + Fb for a in (2) and compute its Taylor series expansion about b=0 (see Fig. 9)

Fig. 9
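The corresponding Maxima step, continuing the session above (v is expression (2) entered earlier):

/* substitute the rewritten a_1 into (2) and expand about b = 0 */
taylor(subst(a = sqrt(D*b^2 + E*b) + F*b, v), b, 0, 1);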

Writing its first term as O(b) and substituting the value of E yields:

v = -(c_1+c_2)\log(1-e)+\frac{\sqrt{b}\,e(c_1E^{1/2}+c_2E^{-1/2})}{e-1} + O(b)

= -(c_1+c_2)\log(1-e) + \frac{\sqrt{b}(c_1e\sqrt{\frac{c_2}{c_1}}+c_2e\sqrt{\frac{c_1}{c_2}})}{e-1}+ O(b)

= -(c_1+c_2)\log(1-e)-2e\frac{\sqrt{c_1c_2b}}{1-e} + O(b)

The leading term -(c_1+c_2)\log(1-e) dominates, so v is positive when b is small.
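As a quick numeric sanity check, one may compare a direct evaluation of (2) at a_1 against the two-term approximation, for sample values of our own choosing (continuing the session above):

vals : [c1 = 3.0, c2 = 2.5, e = 0.8, b = 0.001]$
/* direct evaluation of (2) at a = a_1 */
float(subst(vals, subst(a = sqrt(D*b^2 + E*b) + F*b, v)));        /* roughly 8.20 */
/* two-term approximation */
float(subst(vals, -(c1+c2)*log(1-e) - 2*e*sqrt(c1*c2*b)/(1-e)));  /* roughly 8.16 */

The two values differ by O(b), as expected.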

We have shown the time-saving, error-reducing advantages of using a CAS to aid the manipulation of complex mathematical expressions. On the other hand, we also caution that, as is the case with any software system, a CAS may contain software bugs that need to be detected and weeded out by a well-trained mathematical mind.

References

[1] M. Xue, Viva Rocketry! Part 2, http://vroomlab.wordpress.com/2019/01/31/viva-rocketry-part-2

[2] Omega: A Computer Algebra System Explorer, http://www.omega-math.com

Round Two on Finite Difference Approximations of Derivatives

There is another way to obtain the results stated in “Finite Difference Approximations of Derivatives”.

Let y'_i, y''_i denote y'(x_i) and y''(x_i) respectively, and

h=x_{i+1}-x_i=x_i-x_{i-1} >0.

We seek coefficients \alpha_1, \alpha_2 such that

y'_i \approx \alpha_1 y_{i-1} + \alpha_2 y_{i+1}\quad\quad\quad(1)

By Taylor’s expansion around x_i,

y_{i-1} \approx y_i + y'_i(-h)+\frac{y''_i}{2!}(-h)^2\quad\quad\quad(1-1)

y_{i+1} \approx y_i + y'_ih+\frac{y''_i}{2!}h^2\quad\quad\quad(1-2)

Substituting (1-1), (1-2) into (1),

y'_i \approx \alpha_1 y_i - \alpha_1 y'_ih +\alpha_1\frac{y''_i}{2}h^2 + \alpha_2 y_i +\alpha_2y'_i h +\alpha_2\frac{y''_i}{2}h^2.

That is,

y'_i \approx (\alpha_1+\alpha_2) y_i +(\alpha_2-\alpha_1)h y'_i + (\alpha_1+\alpha_2)\frac{y''_i}{2}h^2.

It follows that

\begin{cases} \alpha_1+\alpha_2=0 \\ (\alpha_2-\alpha_1)h=1\\ (\alpha_1+\alpha_2)\frac{h^2}{2}=0 \end{cases}\quad\quad\quad(1-3)

Fig. 1
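The computation in Fig. 1 is a small linear solve; in Maxima it might look like this, with u1, u2 standing for \alpha_1, \alpha_2 (a sketch, not the original session):

/* solve system (1-3) for the two coefficients */
linsolve([u1 + u2 = 0, (u2 - u1)*h = 1, (u1 + u2)*h^2/2 = 0], [u1, u2]);
/* => [u1 = -1/(2*h), u2 = 1/(2*h)] */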

Solving (1-3) for \alpha_1, \alpha_2 (see Fig. 1) yields

\alpha_1=-\frac{1}{2h}, \alpha_2=\frac{1}{2h}.

Therefore,

y'_i \approx -\frac{1}{2h} y_{i-1} + \frac{1}{2h} y_{i+1} = \frac{y_{i+1}-y_{i-1}}{2h}

or,

\boxed{y'_i \approx \frac{y_{i+1}-y_{i-1}}{2h}}

Now we seek \alpha_1, \alpha_2, \alpha_3 such that

y''_i \approx \alpha_1y_{i-1}+\alpha_2 y_i + \alpha_3 y_{i+1}.

From

y_{i-1} \approx  y_i - y'_i h + \frac{y''_i}{2!} h^2

and

y_{i+1} \approx y_i +y'_i h + \frac{y''_i}{2!}h^2,

we have,

y''_i \approx \alpha_1 y_i - \alpha_1 y'_i h + \frac{\alpha_1}{2}y''_i h^2 + \alpha_2 y_i + \alpha_3 y_i + \alpha_3 y'_i h + \frac{\alpha_3}{2}y''_i h^2.

It leads to

\begin{cases} \alpha_1+\alpha_2+\alpha_3=0\\(\alpha_3-\alpha_1)h=0\\(\frac{\alpha_1}{2}+\frac{\alpha_3}{2})h^2=1 \end{cases}

Fig. 2
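Similarly for Fig. 2, with u1, u2, u3 standing for \alpha_1, \alpha_2, \alpha_3:

/* solve the second system for the three coefficients */
linsolve([u1 + u2 + u3 = 0, (u3 - u1)*h = 0, (u1/2 + u3/2)*h^2 = 1], [u1, u2, u3]);
/* => [u1 = 1/h^2, u2 = -2/h^2, u3 = 1/h^2] */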

whose solution (see Fig. 2) is

\alpha_1 = \frac{1}{h^2}, \alpha_2 = -\frac{2}{h^2}, \alpha_3=\frac{1}{h^2}.

Hence,

y''_i \approx \frac{1}{h^2}y_{i-1}-\frac{2}{h^2}y_i +\frac{1}{h^2}y_{i+1}=\frac{y_{i-1}-2y_i+y_{i+1}}{h^2}

i.e.,

\boxed{ y''_i \approx \frac{y_{i+1}-2y_i + y_{i-1}}{h^2}}